├── .gitignore ├── README.md ├── examples ├── README.md ├── classification-data.ipynb ├── real-bandit-data.ipynb └── synthetic.ipynb ├── poetry.lock ├── pyproject.toml ├── real-world └── demo.ipynb └── simulations ├── README.md ├── evaluation-of-ope-1.ipynb ├── evaluation-of-ope-2.ipynb └── hyperparameter-tuning.ipynb /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | pip-wheel-metadata/ 24 | share/python-wheels/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | MANIFEST 29 | 30 | # PyInstaller 31 | # Usually these files are written by a python script from a template 32 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 33 | *.manifest 34 | *.spec 35 | 36 | # Installer logs 37 | pip-log.txt 38 | pip-delete-this-directory.txt 39 | 40 | # Unit test / coverage reports 41 | htmlcov/ 42 | .tox/ 43 | .nox/ 44 | .coverage 45 | .coverage.* 46 | .cache 47 | nosetests.xml 48 | coverage.xml 49 | *.cover 50 | *.py,cover 51 | .hypothesis/ 52 | .pytest_cache/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | target/ 76 | 77 | # Jupyter Notebook 78 | .ipynb_checkpoints 79 | 80 | # IPython 81 | profile_default/ 82 | ipython_config.py 83 | 84 | # pyenv 85 | .python-version 86 | 87 | # pipenv 88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 91 | # install all needed dependencies. 92 | #Pipfile.lock 93 | 94 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow 95 | __pypackages__/ 96 | 97 | # Celery stuff 98 | celerybeat-schedule 99 | celerybeat.pid 100 | 101 | # SageMath parsed files 102 | *.sage.py 103 | 104 | # Environments 105 | .env 106 | .venv 107 | env/ 108 | venv/ 109 | ENV/ 110 | env.bak/ 111 | venv.bak/ 112 | 113 | # Spyder project settings 114 | .spyderproject 115 | .spyproject 116 | 117 | # Rope project settings 118 | .ropeproject 119 | 120 | # mkdocs documentation 121 | /site 122 | 123 | # mypy 124 | .mypy_cache/ 125 | .dmypy.json 126 | dmypy.json 127 | 128 | # Pyre type checker 129 | .pyre/ 130 | 131 | # others 132 | .DS_Store 133 | open_bandit_dataset 134 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Counterfactual Learning and Evaluation for Recommender Systems (RecSys'21 Tutorial) 2 | 3 | Materials for **"Counterfactual Learning and Evaluation for Recommender Systems: Foundations, Implementations, and Recent Advances"**, a tutorial delivered at the 15th ACM Conference on Recommender System ([RecSys'21](https://recsys.acm.org/recsys21/)). 
4 | 5 | - Presenters: [Yuta Saito](https://usait0.com/en/) (Cornell University, USA) and [Thorsten Joachims](https://www.cs.cornell.edu/people/tj/) (Cornell University, USA). 6 | 7 | - Tutorial website: https://sites.google.com/cornell.edu/recsys2021tutorial 8 | - Recording: https://youtu.be/HMo9fQMVB4w 9 | - Tutorial proposal: https://dl.acm.org/doi/10.1145/3460231.3473320 10 | - List of tutorials at RecSys2021: https://recsys.acm.org/recsys21/tutorials/ 11 | - Reference Package (Open Bandit Pipeline): https://github.com/st-tech/zr-obp 12 | - Survey of related papers: https://github.com/hanjuku-kaso/awesome-offline-rl 13 | 14 | 15 | ### Contents 16 | 17 | - [examples](./examples): brief examples describing how to use Open Bandit Pipeline with synthetic data, classification data, and real-world bandit data 18 | - [simulations](./simulations): simulation code comparing a wide variety of existing OPE estimators on synthetic data 19 | - [real-world](./real-world): a brief demo of OPE/OPL on a real bandit dataset (requires the [Open Bandit Dataset](https://research.zozo.com/data.html)) 20 | 21 | The Google Colab versions of the example notebooks are available [here](https://drive.google.com/drive/folders/1P3IPoFhVQ0n19EU5PCF_ZfkxRdpTJnJa?usp=sharing). 22 | 23 | ### Requirements and Setup 24 | 25 | The Python environment is built using [poetry](https://github.com/python-poetry/poetry). You can build the same environment as used in our examples and simulations by cloning the repository and running `poetry install` directly under the cloned folder (if you have not installed poetry yet, please run `pip install poetry` first). 26 | 27 | ``` 28 | # clone this repository 29 | git clone git@github.com:usaito/recsys2021-tutorial.git 30 | cd recsys2021-tutorial 31 | 32 | # build the environment with poetry 33 | poetry install 34 | 35 | # activate jupyter-lab environment 36 | poetry run jupyter lab 37 | ``` 38 | 39 | The versions of Python and the required packages are as follows. 40 | ``` 41 | [tool.poetry.dependencies] 42 | python = "^3.9,<3.10" 43 | scikit-learn = "0.24.2" 44 | numpy = "^1.21.2" 45 | pandas = "^1.3.3" 46 | obp = "0.5.1" 47 | matplotlib = "^3.4.3" 48 | jupyterlab = "^3.1.13" 49 | ``` 50 | 51 | ### Contact 52 | If you have any questions, please feel free to contact: ys552@cornell.edu 53 | -------------------------------------------------------------------------------- /examples/README.md: -------------------------------------------------------------------------------- 1 | # Open Bandit Pipeline Quickstart Notebooks 2 | 3 | This page contains a list of quickstart notebooks written with the Open Bandit Pipeline; a minimal end-to-end sketch of the workflow they share follows the list below. 4 | 5 | - [`synthetic.ipynb`](./synthetic.ipynb): how to implement OPE/OPL experiments using Open Bandit Pipeline with synthetic data. 6 | - [`classification-data.ipynb`](./classification-data.ipynb): how to implement OPE experiments using Open Bandit Pipeline with classification data. 7 | - [`real-bandit-data.ipynb`](./real-bandit-data.ipynb): how to implement OPE experiments using Open Bandit Pipeline with real bandit data.
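For orientation, the block below is a minimal end-to-end sketch of the workflow that all three notebooks share: generate (or load) logged bandit data, learn a new policy off-policy, fit a reward estimator, and run OPE with several estimators. It is condensed from [`synthetic.ipynb`](./synthetic.ipynb) and assumes `obp==0.5.1` as pinned in `pyproject.toml`; the particular hyperparameters (number of actions, sample sizes, `C=100`, the random seeds) are illustrative choices rather than requirements.

```
from sklearn.linear_model import LogisticRegression

from obp.dataset import (
    SyntheticBanditDataset,
    logistic_reward_function,
    linear_behavior_policy,
)
from obp.policy import IPWLearner
from obp.ope import (
    OffPolicyEvaluation,
    RegressionModel,
    InverseProbabilityWeighting as IPS,
    DirectMethod as DM,
    DoublyRobust as DR,
)

# (1) generate synthetic logged bandit data with a known behavior policy
dataset = SyntheticBanditDataset(
    n_actions=10,
    dim_context=5,
    reward_function=logistic_reward_function,
    behavior_policy_function=linear_behavior_policy,
    random_state=12345,
)
training_data = dataset.obtain_batch_bandit_feedback(n_rounds=10000)
test_data = dataset.obtain_batch_bandit_feedback(n_rounds=100000)

# (2) off-policy learning: train an evaluation policy on the logged training data
ipw_learner = IPWLearner(
    n_actions=dataset.n_actions,
    base_classifier=LogisticRegression(C=100, random_state=12345),
)
ipw_learner.fit(
    context=training_data["context"],
    action=training_data["action"],
    reward=training_data["reward"],
    pscore=training_data["pscore"],
)
action_dist = ipw_learner.predict(context=test_data["context"])

# (3) fit a reward estimator \hat{q}(x,a), needed by the model-based estimators (DM, DR)
regression_model = RegressionModel(
    n_actions=dataset.n_actions,
    base_model=LogisticRegression(C=100, random_state=12345),
)
estimated_rewards = regression_model.fit_predict(
    context=test_data["context"],
    action=test_data["action"],
    reward=test_data["reward"],
    random_state=12345,
)

# (4) off-policy evaluation: estimate the value of the learned policy from logged data only
ope = OffPolicyEvaluation(
    bandit_feedback=test_data,
    ope_estimators=[
        IPS(estimator_name="IPS"),
        DM(estimator_name="DM"),
        DR(estimator_name="DR"),
    ],
)
estimated_policy_value = ope.estimate_policy_values(
    action_dist=action_dist,
    estimated_rewards_by_reg_model=estimated_rewards,
)
print(estimated_policy_value)  # {'IPS': ..., 'DM': ..., 'DR': ...}
```

The classification and real-data notebooks follow the same four steps, swapping `MultiClassToBanditReduction` or `OpenBanditDataset` in for the data-generation step.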
8 | -------------------------------------------------------------------------------- /examples/classification-data.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "source": [ 6 | "# OPE Experiment with Classification Data\n", 7 | "---\n", 8 | "This notebook provides an example of conducting OPE of an evaluation policy using classification data as logged bandit data.\n", 9 | "It is quite common to conduct OPE experiments using classification data. Appendix G of [Farajtabar et al. (2018)](https://arxiv.org/abs/1802.03493) describes how to conduct OPE experiments with classification data in detail." 10 | ], 11 | "metadata": {} 12 | }, 13 | { 14 | "cell_type": "code", 15 | "execution_count": 1, 16 | "source": [ 17 | "from sklearn.datasets import load_digits\n", 18 | "from sklearn.ensemble import RandomForestClassifier\n", 19 | "from sklearn.linear_model import LogisticRegression\n", 20 | "\n", 21 | "# import open bandit pipeline (obp)\n", 22 | "import obp\n", 23 | "from obp.dataset import MultiClassToBanditReduction\n", 24 | "from obp.ope import (\n", 25 | " OffPolicyEvaluation, \n", 26 | " RegressionModel,\n", 27 | " InverseProbabilityWeighting as IPS,\n", 28 | " DirectMethod as DM,\n", 29 | " DoublyRobust as DR, \n", 30 | ")" 31 | ], 32 | "outputs": [], 33 | "metadata": {} 34 | }, 35 | { 36 | "cell_type": "code", 37 | "execution_count": 2, 38 | "source": [ 39 | "# obp version\n", 40 | "print(obp.__version__)" 41 | ], 42 | "outputs": [ 43 | { 44 | "output_type": "stream", 45 | "name": "stdout", 46 | "text": [ 47 | "0.5.1\n" 48 | ] 49 | } 50 | ], 51 | "metadata": {} 52 | }, 53 | { 54 | "cell_type": "code", 55 | "execution_count": null, 56 | "source": [], 57 | "outputs": [], 58 | "metadata": {} 59 | }, 60 | { 61 | "cell_type": "markdown", 62 | "source": [ 63 | "## (1) Bandit Reduction\n", 64 | "`obp.dataset.MultiClassToBanditReduction` is an easy-to-use class for transforming classification data into bandit data.\n", 65 | "It takes \n", 66 | "- feature vectors (`X`)\n", 67 | "- class labels (`y`)\n", 68 | "- classifier to construct behavior policy (`base_classifier_b`) \n", 69 | "- parameter of behavior policy (`alpha_b`) \n", 70 | "\n", 71 | "as its inputs and generates bandit data that can be used to evaluate the performance of decision making policies (obtained by `off-policy learning`) and OPE estimators."
72 | ], 73 | "metadata": { 74 | "tags": [] 75 | } 76 | }, 77 | { 78 | "cell_type": "code", 79 | "execution_count": 3, 80 | "source": [ 81 | "# load raw digits data\n", 82 | "# `return_X_y` splits feature vectors and labels, instead of returning a Bunch object\n", 83 | "X, y = load_digits(return_X_y=True)" 84 | ], 85 | "outputs": [], 86 | "metadata": {} 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": 4, 91 | "source": [ 92 | "# convert the raw classification data into a logged bandit dataset\n", 93 | "# we construct a behavior policy using Logistic Regression and parameter alpha_b\n", 94 | "# given a pair of a feature vector and a label (x, c), create a pair of a context vector and reward (x, r)\n", 95 | "# where r = 1 if the output of the behavior policy is equal to c and r = 0 otherwise\n", 96 | "# please refer to https://zr-obp.readthedocs.io/en/latest/_autosummary/obp.dataset.multiclass.html for the details\n", 97 | "dataset = MultiClassToBanditReduction(\n", 98 | " X=X,\n", 99 | " y=y,\n", 100 | " base_classifier_b=LogisticRegression(max_iter=10000, random_state=12345),\n", 101 | " alpha_b=0.8,\n", 102 | " dataset_name=\"digits\",\n", 103 | ")" 104 | ], 105 | "outputs": [], 106 | "metadata": { 107 | "tags": [] 108 | } 109 | }, 110 | { 111 | "cell_type": "code", 112 | "execution_count": 5, 113 | "source": [ 114 | "# split the original data into training and evaluation sets\n", 115 | "dataset.split_train_eval(eval_size=0.7, random_state=12345)" 116 | ], 117 | "outputs": [], 118 | "metadata": {} 119 | }, 120 | { 121 | "cell_type": "code", 122 | "execution_count": 6, 123 | "source": [ 124 | "# obtain logged bandit data generated by behavior policy\n", 125 | "bandit_data = dataset.obtain_batch_bandit_feedback(random_state=12345)\n", 126 | "\n", 127 | "# `bandit_data` is a dictionary storing logged bandit feedback\n", 128 | "bandit_data" 129 | ], 130 | "outputs": [ 131 | { 132 | "output_type": "execute_result", 133 | "data": { 134 | "text/plain": [ 135 | "{'n_actions': 10,\n", 136 | " 'n_rounds': 1258,\n", 137 | " 'context': array([[ 0., 0., 0., ..., 16., 1., 0.],\n", 138 | " [ 0., 0., 7., ..., 16., 3., 0.],\n", 139 | " [ 0., 0., 12., ..., 8., 0., 0.],\n", 140 | " ...,\n", 141 | " [ 0., 1., 13., ..., 8., 11., 1.],\n", 142 | " [ 0., 0., 15., ..., 0., 0., 0.],\n", 143 | " [ 0., 0., 4., ..., 15., 3., 0.]]),\n", 144 | " 'action': array([6, 8, 5, ..., 2, 5, 9]),\n", 145 | " 'reward': array([1., 1., 1., ..., 1., 1., 1.]),\n", 146 | " 'position': None,\n", 147 | " 'pscore': array([0.82, 0.82, 0.82, ..., 0.82, 0.82, 0.82])}" 148 | ] 149 | }, 150 | "metadata": {}, 151 | "execution_count": 6 152 | } 153 | ], 154 | "metadata": {} 155 | }, 156 | { 157 | "cell_type": "code", 158 | "execution_count": null, 159 | "source": [], 160 | "outputs": [], 161 | "metadata": {} 162 | }, 163 | { 164 | "cell_type": "markdown", 165 | "source": [ 166 | "## (2) Off-Policy Learning\n", 167 | "After generating logged bandit data, we now obtain an evaluation policy using the training set." 
168 | ], 169 | "metadata": {} 170 | }, 171 | { 172 | "cell_type": "code", 173 | "execution_count": 7, 174 | "source": [ 175 | "# obtain action choice probabilities by an evaluation policy\n", 176 | "# we construct an evaluation policy using Random Forest and parameter alpha_e\n", 177 | "action_dist = dataset.obtain_action_dist_by_eval_policy(\n", 178 | " base_classifier_e=RandomForestClassifier(random_state=12345),\n", 179 | " alpha_e=0.9,\n", 180 | ")" 181 | ], 182 | "outputs": [], 183 | "metadata": {} 184 | }, 185 | { 186 | "cell_type": "code", 187 | "execution_count": 8, 188 | "source": [ 189 | "# which action to take for each context (a probability distribution over actions)\n", 190 | "action_dist[:, :, 0]" 191 | ], 192 | "outputs": [ 193 | { 194 | "output_type": "execute_result", 195 | "data": { 196 | "text/plain": [ 197 | "array([[0.01, 0.01, 0.01, ..., 0.01, 0.01, 0.01],\n", 198 | " [0.01, 0.01, 0.01, ..., 0.01, 0.91, 0.01],\n", 199 | " [0.01, 0.01, 0.01, ..., 0.01, 0.01, 0.01],\n", 200 | " ...,\n", 201 | " [0.01, 0.01, 0.91, ..., 0.01, 0.01, 0.01],\n", 202 | " [0.01, 0.01, 0.01, ..., 0.01, 0.01, 0.01],\n", 203 | " [0.01, 0.01, 0.01, ..., 0.01, 0.01, 0.91]])" 204 | ] 205 | }, 206 | "metadata": {}, 207 | "execution_count": 8 208 | } 209 | ], 210 | "metadata": {} 211 | }, 212 | { 213 | "cell_type": "code", 214 | "execution_count": null, 215 | "source": [], 216 | "outputs": [], 217 | "metadata": {} 218 | }, 219 | { 220 | "cell_type": "markdown", 221 | "source": [ 222 | "## (3) Off-Policy Evaluation (OPE)\n", 223 | "OPE attempts to estimate the performance of evaluation policies using their action choice probabilities.\n", 224 | "\n", 225 | "Here, we evaluate/compare the OPE performance (estimation accuracy) of \n", 226 | "- **Inverse Propensity Score (IPS)**\n", 227 | "- **DirectMethod (DM)**\n", 228 | "- **Doubly Robust (DR)**" 229 | ], 230 | "metadata": { 231 | "tags": [] 232 | } 233 | }, 234 | { 235 | "cell_type": "markdown", 236 | "source": [ 237 | "### (3-1) obtain a reward estimator\n", 238 | "`obp.ope.RegressionModel` simplifies the process of reward modeling\n", 239 | "\n", 240 | "$r(x,a) = \\mathbb{E} [r \\mid x, a] \\approx \\hat{r}(x,a)$" 241 | ], 242 | "metadata": {} 243 | }, 244 | { 245 | "cell_type": "code", 246 | "execution_count": 9, 247 | "source": [ 248 | "regression_model = RegressionModel(\n", 249 | " n_actions=dataset.n_actions, # number of actions; |A|\n", 250 | " base_model=LogisticRegression(C=100, max_iter=10000, random_state=12345), # any sklearn classifier\n", 251 | ")" 252 | ], 253 | "outputs": [], 254 | "metadata": {} 255 | }, 256 | { 257 | "cell_type": "code", 258 | "execution_count": 10, 259 | "source": [ 260 | "estimated_rewards = regression_model.fit_predict(\n", 261 | " context=bandit_data[\"context\"],\n", 262 | " action=bandit_data[\"action\"],\n", 263 | " reward=bandit_data[\"reward\"],\n", 264 | " position=bandit_data[\"position\"],\n", 265 | " random_state=12345,\n", 266 | ")" 267 | ], 268 | "outputs": [], 269 | "metadata": {} 270 | }, 271 | { 272 | "cell_type": "code", 273 | "execution_count": 11, 274 | "source": [ 275 | "estimated_rewards[:, :, 0] # \\hat{q}(x,a)" 276 | ], 277 | "outputs": [ 278 | { 279 | "output_type": "execute_result", 280 | "data": { 281 | "text/plain": [ 282 | "array([[0.91281795, 0.86737252, 0.91463069, ..., 0.81168002, 0.89845427,\n", 283 | " 0.9358936 ],\n", 284 | " [0.88903207, 0.83345002, 0.89128047, ..., 0.76733391, 0.87130206,\n", 285 | " 0.91783677],\n", 286 | " [0.74513876, 0.64616803, 0.74948112, ..., 0.54618716, 
0.71186922,\n", 287 | " 0.80301895],\n", 288 | " ...,\n", 289 | " [0.81793626, 0.73726738, 0.82133568, ..., 0.64904708, 0.79151076,\n", 290 | " 0.86233814],\n", 291 | " [0.96992863, 0.95271105, 0.97059214, ..., 0.92995996, 0.96460941,\n", 292 | " 0.97824823],\n", 293 | " [0.59247763, 0.47591952, 0.59801785, ..., 0.37440688, 0.55128051,\n", 294 | " 0.66965761]])" 295 | ] 296 | }, 297 | "metadata": {}, 298 | "execution_count": 11 299 | } 300 | ], 301 | "metadata": {} 302 | }, 303 | { 304 | "cell_type": "code", 305 | "execution_count": null, 306 | "source": [], 307 | "outputs": [], 308 | "metadata": {} 309 | }, 310 | { 311 | "cell_type": "markdown", 312 | "source": [ 313 | "### (3-2) OPE\n", 314 | "`obp.ope.OffPolicyEvaluation` simplifies the OPE process\n", 315 | "\n", 316 | "$V(\\pi_e) \\approx \\hat{V} (\\pi_e; \\mathcal{D}_0, \\theta)$ using DM, IPS, and DR" 317 | ], 318 | "metadata": {} 319 | }, 320 | { 321 | "cell_type": "code", 322 | "execution_count": 12, 323 | "source": [ 324 | "ope = OffPolicyEvaluation(\n", 325 | " bandit_feedback=bandit_data, # bandit data\n", 326 | " ope_estimators=[\n", 327 | " IPS(estimator_name=\"IPS\"), \n", 328 | " DM(estimator_name=\"DM\"), \n", 329 | " DR(estimator_name=\"DR\"),\n", 330 | " ] # used estimators\n", 331 | ")" 332 | ], 333 | "outputs": [], 334 | "metadata": { 335 | "tags": [] 336 | } 337 | }, 338 | { 339 | "cell_type": "code", 340 | "execution_count": 13, 341 | "source": [ 342 | "estimated_policy_value = ope.estimate_policy_values(\n", 343 | " action_dist=action_dist, # \\pi_e(a|x)\n", 344 | " estimated_rewards_by_reg_model=estimated_rewards, # \\hat{q}\n", 345 | ")" 346 | ], 347 | "outputs": [], 348 | "metadata": { 349 | "tags": [] 350 | } 351 | }, 352 | { 353 | "cell_type": "code", 354 | "execution_count": 14, 355 | "source": [ 356 | "# OPE results given by the three estimators\n", 357 | "estimated_policy_value" 358 | ], 359 | "outputs": [ 360 | { 361 | "output_type": "execute_result", 362 | "data": { 363 | "text/plain": [ 364 | "{'IPS': 0.891155143665904, 'DM': 0.788981343944812, 'DR': 0.8752874797701774}" 365 | ] 366 | }, 367 | "metadata": {}, 368 | "execution_count": 14 369 | } 370 | ], 371 | "metadata": {} 372 | }, 373 | { 374 | "cell_type": "code", 375 | "execution_count": null, 376 | "source": [], 377 | "outputs": [], 378 | "metadata": {} 379 | }, 380 | { 381 | "cell_type": "markdown", 382 | "source": [ 383 | "## (4) Evaluation of OPE estimators\n", 384 | "Our final step is **the evaluation of OPE**, which evaluates and compares the estimation accuracy of OPE estimators.\n", 385 | "\n", 386 | "With the multi-class classification data, we can calculate the ground-truth policy value of the evaluation policy. \n", 387 | "Therefore, we can compare the policy values estimated by OPE estimators with the ground-turth to evaluate OPE estimators." 
388 | ], 389 | "metadata": { 390 | "tags": [] 391 | } 392 | }, 393 | { 394 | "cell_type": "markdown", 395 | "source": [ 396 | "## (4-1) Approximate the Ground-truth Policy Value\n", 397 | "$V(\\pi) \\approx \\frac{1}{|\\mathcal{D}_{te}|} \\sum_{i=1}^{|\\mathcal{D}_{te}|} \\mathbb{E}_{a \\sim \\pi(a|x_i)} [r(x_i, a)], \\; \\, where \\; \\, r(x,a) := \\mathbb{E}_{r \\sim p(r|x,a)} [r]$" 398 | ], 399 | "metadata": {} 400 | }, 401 | { 402 | "cell_type": "code", 403 | "execution_count": 15, 404 | "source": [ 405 | "# calculate the ground-truth performance of the evaluation policy\n", 406 | "true_policy_value = dataset.calc_ground_truth_policy_value(action_dist=action_dist)\n", 407 | "\n", 408 | "true_policy_value" 409 | ], 410 | "outputs": [ 411 | { 412 | "output_type": "execute_result", 413 | "data": { 414 | "text/plain": [ 415 | "0.8770906200317964" 416 | ] 417 | }, 418 | "metadata": {}, 419 | "execution_count": 15 420 | } 421 | ], 422 | "metadata": {} 423 | }, 424 | { 425 | "cell_type": "code", 426 | "execution_count": null, 427 | "source": [], 428 | "outputs": [], 429 | "metadata": {} 430 | }, 431 | { 432 | "cell_type": "markdown", 433 | "source": [ 434 | "### (4-2) Evaluation of OPE\n", 435 | "Now, let's evaluate the OPE performance (estimation accuracy) of the three estimators \n", 436 | "\n", 437 | "$SE (\\hat{V}; \\mathcal{D}_0) := \\left( V(\\pi_e) - \\hat{V} (\\pi_e; \\mathcal{D}_0, \\theta) \\right)^2$, (squared error of $\\hat{V}$)" 438 | ], 439 | "metadata": {} 440 | }, 441 | { 442 | "cell_type": "code", 443 | "execution_count": 16, 444 | "source": [ 445 | "squared_errors = ope.evaluate_performance_of_estimators(\n", 446 | " ground_truth_policy_value=true_policy_value,\n", 447 | " action_dist=action_dist,\n", 448 | " estimated_rewards_by_reg_model=estimated_rewards,\n", 449 | " metric=\"se\", # squared error\n", 450 | ")" 451 | ], 452 | "outputs": [], 453 | "metadata": {} 454 | }, 455 | { 456 | "cell_type": "code", 457 | "execution_count": 17, 458 | "source": [ 459 | "squared_errors # DR is the most accurate " 460 | ], 461 | "outputs": [ 462 | { 463 | "output_type": "execute_result", 464 | "data": { 465 | "text/plain": [ 466 | "{'IPS': 0.00019781082505437113,\n", 467 | " 'DM': 0.007763244532572437,\n", 468 | " 'DR': 3.251314803071382e-06}" 469 | ] 470 | }, 471 | "metadata": {}, 472 | "execution_count": 17 473 | } 474 | ], 475 | "metadata": {} 476 | }, 477 | { 478 | "cell_type": "code", 479 | "execution_count": null, 480 | "source": [], 481 | "outputs": [], 482 | "metadata": {} 483 | }, 484 | { 485 | "cell_type": "markdown", 486 | "source": [ 487 | "We can iterate the above process several times and calculate the following MSE\n", 488 | "\n", 489 | "$MSE (\\hat{V}) := T^{-1} \\sum_{t=1}^T SE (\\hat{V}; \\mathcal{D}_0^{(t)}) $\n", 490 | "\n", 491 | "where $\\mathcal{D}_0^{(t)}$ is the synthetic data in the $t$-th iteration" 492 | ], 493 | "metadata": {} 494 | }, 495 | { 496 | "cell_type": "code", 497 | "execution_count": null, 498 | "source": [], 499 | "outputs": [], 500 | "metadata": {} 501 | } 502 | ], 503 | "metadata": { 504 | "kernelspec": { 505 | "name": "python3", 506 | "display_name": "Python 3.9.5 64-bit ('zr-obp': pyenv)" 507 | }, 508 | "language_info": { 509 | "codemirror_mode": { 510 | "name": "ipython", 511 | "version": 3 512 | }, 513 | "file_extension": ".py", 514 | "mimetype": "text/x-python", 515 | "name": "python", 516 | "nbconvert_exporter": "python", 517 | "pygments_lexer": "ipython3", 518 | "version": "3.9.5" 519 | }, 520 | "interpreter": { 521 | "hash": 
"64b446a4e17784c2dc3dbe74bf0708929a003fb4822b30671d57ebdef413b716" 522 | } 523 | }, 524 | "nbformat": 4, 525 | "nbformat_minor": 4 526 | } -------------------------------------------------------------------------------- /examples/real-bandit-data.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "source": [ 6 | "# OPE Experiment with Open Bandit Dataset\n", 7 | "---\n", 8 | "This notebook demonstrates an example of conducting OPE of Bernoulli Thompson Sampling (BernoulliTS) as an evaluation policy using some OPE estimators and logged bandit feedback generated by running the Random policy (behavior policy) on the ZOZOTOWN platform." 9 | ], 10 | "metadata": {} 11 | }, 12 | { 13 | "cell_type": "code", 14 | "execution_count": 1, 15 | "source": [ 16 | "from sklearn.linear_model import LogisticRegression\n", 17 | "\n", 18 | "# import open bandit pipeline (obp)\n", 19 | "import obp\n", 20 | "from obp.dataset import OpenBanditDataset\n", 21 | "from obp.policy import BernoulliTS\n", 22 | "from obp.ope import (\n", 23 | " OffPolicyEvaluation, \n", 24 | " RegressionModel,\n", 25 | " InverseProbabilityWeighting as IPS,\n", 26 | " DirectMethod as DM,\n", 27 | " DoublyRobust as DR,\n", 28 | ")" 29 | ], 30 | "outputs": [], 31 | "metadata": {} 32 | }, 33 | { 34 | "cell_type": "code", 35 | "execution_count": 2, 36 | "source": [ 37 | "# obp version\n", 38 | "print(obp.__version__)" 39 | ], 40 | "outputs": [ 41 | { 42 | "output_type": "stream", 43 | "name": "stdout", 44 | "text": [ 45 | "0.5.1\n" 46 | ] 47 | } 48 | ], 49 | "metadata": {} 50 | }, 51 | { 52 | "cell_type": "code", 53 | "execution_count": null, 54 | "source": [], 55 | "outputs": [], 56 | "metadata": {} 57 | }, 58 | { 59 | "cell_type": "markdown", 60 | "source": [ 61 | "## (1) Data Loading and Preprocessing\n", 62 | "\n", 63 | "`obp.dataset.OpenBanditDataset` is an easy-to-use data loader for Open Bandit Dataset. \n", 64 | "\n", 65 | "It takes behavior policy ('bts' or 'random') and campaign ('all', 'men', or 'women') as inputs and provides dataset preprocessing." 
66 | ], 67 | "metadata": {} 68 | }, 69 | { 70 | "cell_type": "code", 71 | "execution_count": 3, 72 | "source": [ 73 | "# When `data_path` is not given, this class downloads the small-sized version of Open Bandit Dataset.\n", 74 | "dataset = OpenBanditDataset(behavior_policy='random', campaign='all')" 75 | ], 76 | "outputs": [ 77 | { 78 | "output_type": "stream", 79 | "name": "stderr", 80 | "text": [ 81 | "INFO:obp.dataset.real:When `data_path` is not given, this class downloads the example small-sized version of the Open Bandit Dataset.\n", 82 | "/Users/usaito/.pyenv/versions/3.9.5/envs/zr-obp/lib/python3.9/site-packages/obp-0.5.1-py3.9.egg/obp/dataset/real.py:203: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only\n", 83 | " item_feature_cat = self.item_context.drop(\"item_feature_0\", 1).apply(\n", 84 | "/Users/usaito/.pyenv/versions/3.9.5/envs/zr-obp/lib/python3.9/site-packages/obp-0.5.1-py3.9.egg/obp/dataset/real.py:206: FutureWarning: In a future version of pandas all arguments of concat except for the argument 'objs' will be keyword-only\n", 85 | " self.action_context = pd.concat([item_feature_cat, item_feature_0], 1).values\n" 86 | ] 87 | } 88 | ], 89 | "metadata": { 90 | "tags": [] 91 | } 92 | }, 93 | { 94 | "cell_type": "code", 95 | "execution_count": 4, 96 | "source": [ 97 | "# obtain logged bandit data generated by behavior policy\n", 98 | "bandit_data = dataset.obtain_batch_bandit_feedback()\n", 99 | "\n", 100 | "# `bandit_data` is a dictionary storing logged bandit feedback\n", 101 | "bandit_data.keys()" 102 | ], 103 | "outputs": [ 104 | { 105 | "output_type": "execute_result", 106 | "data": { 107 | "text/plain": [ 108 | "dict_keys(['n_rounds', 'n_actions', 'action', 'position', 'reward', 'pscore', 'context', 'action_context'])" 109 | ] 110 | }, 111 | "metadata": {}, 112 | "execution_count": 4 113 | } 114 | ], 115 | "metadata": {} 116 | }, 117 | { 118 | "cell_type": "code", 119 | "execution_count": null, 120 | "source": [], 121 | "outputs": [], 122 | "metadata": {} 123 | }, 124 | { 125 | "cell_type": "markdown", 126 | "source": [ 127 | "### let's see some properties of the dataset class" 128 | ], 129 | "metadata": {} 130 | }, 131 | { 132 | "cell_type": "code", 133 | "execution_count": 5, 134 | "source": [ 135 | "# name of the dataset is 'obd' (open bandit dataset)\n", 136 | "dataset.dataset_name" 137 | ], 138 | "outputs": [ 139 | { 140 | "output_type": "execute_result", 141 | "data": { 142 | "text/plain": [ 143 | "'obd'" 144 | ] 145 | }, 146 | "metadata": {}, 147 | "execution_count": 5 148 | } 149 | ], 150 | "metadata": {} 151 | }, 152 | { 153 | "cell_type": "code", 154 | "execution_count": 6, 155 | "source": [ 156 | "# number of actions of the \"All\" campaign is 80\n", 157 | "dataset.n_actions" 158 | ], 159 | "outputs": [ 160 | { 161 | "output_type": "execute_result", 162 | "data": { 163 | "text/plain": [ 164 | "80" 165 | ] 166 | }, 167 | "metadata": {}, 168 | "execution_count": 6 169 | } 170 | ], 171 | "metadata": {} 172 | }, 173 | { 174 | "cell_type": "code", 175 | "execution_count": 7, 176 | "source": [ 177 | "# small example dataset has 10,000 rounds\n", 178 | "dataset.n_rounds" 179 | ], 180 | "outputs": [ 181 | { 182 | "output_type": "execute_result", 183 | "data": { 184 | "text/plain": [ 185 | "10000" 186 | ] 187 | }, 188 | "metadata": {}, 189 | "execution_count": 7 190 | } 191 | ], 192 | "metadata": {} 193 | }, 194 | { 195 | "cell_type": "code", 196 | "execution_count": 8, 197 | "source": 
[ 198 | "# default context (feature) engineering creates context vector with 20 dimensions\n", 199 | "dataset.dim_context" 200 | ], 201 | "outputs": [ 202 | { 203 | "output_type": "execute_result", 204 | "data": { 205 | "text/plain": [ 206 | "20" 207 | ] 208 | }, 209 | "metadata": {}, 210 | "execution_count": 8 211 | } 212 | ], 213 | "metadata": {} 214 | }, 215 | { 216 | "cell_type": "code", 217 | "execution_count": 9, 218 | "source": [ 219 | "# ZOZOTOWN recommendation interface has three positions\n", 220 | "# (please see https://github.com/st-tech/zr-obp/blob/master/images/recommended_fashion_items.png)\n", 221 | "dataset.len_list" 222 | ], 223 | "outputs": [ 224 | { 225 | "output_type": "execute_result", 226 | "data": { 227 | "text/plain": [ 228 | "3" 229 | ] 230 | }, 231 | "metadata": {}, 232 | "execution_count": 9 233 | } 234 | ], 235 | "metadata": {} 236 | }, 237 | { 238 | "cell_type": "code", 239 | "execution_count": null, 240 | "source": [], 241 | "outputs": [], 242 | "metadata": {} 243 | }, 244 | { 245 | "cell_type": "markdown", 246 | "source": [ 247 | "## (2) Production Policy Replication\n", 248 | "\n", 249 | "After preparing the dataset, we now replicate the BernoulliTS policy implemented on the ZOZOTOWN recommendation interface during the data collection period.\n", 250 | "\n", 251 | "Here, we use `obp.policy.BernoulliTS` as an evaluation policy. \n", 252 | "By activating its `is_zozotown_prior` argument, we can replicate (the policy parameters of) BernoulliTS used in ZOZOTOWN production." 253 | ], 254 | "metadata": {} 255 | }, 256 | { 257 | "cell_type": "code", 258 | "execution_count": 10, 259 | "source": [ 260 | "evaluation_policy = BernoulliTS(\n", 261 | " n_actions=dataset.n_actions, # number of actions; |A|\n", 262 | " len_list=dataset.len_list, # number of items in a recommendation list; K\n", 263 | " is_zozotown_prior=True, # replicate the BernoulliTS policy in the ZOZOTOWN production\n", 264 | " campaign=\"all\",\n", 265 | " random_state=12345,\n", 266 | ")" 267 | ], 268 | "outputs": [], 269 | "metadata": { 270 | "tags": [] 271 | } 272 | }, 273 | { 274 | "cell_type": "code", 275 | "execution_count": 11, 276 | "source": [ 277 | "# compute the action choice probabilities of the evaluation policy using Monte Carlo simulation\n", 278 | "action_dist = evaluation_policy.compute_batch_action_dist(\n", 279 | " n_sim=100000, n_rounds=bandit_data[\"n_rounds\"],\n", 280 | ")" 281 | ], 282 | "outputs": [], 283 | "metadata": {} 284 | }, 285 | { 286 | "cell_type": "code", 287 | "execution_count": 12, 288 | "source": [ 289 | "# action_dist is an array of shape (n_rounds, n_actions, len_list) \n", 290 | "# representing the distribution over actions by the evaluation policy\n", 291 | "action_dist[:5]" 292 | ], 293 | "outputs": [ 294 | { 295 | "output_type": "execute_result", 296 | "data": { 297 | "text/plain": [ 298 | "array([[[0.01078, 0.00931, 0.00917],\n", 299 | " [0.00167, 0.00077, 0.00076],\n", 300 | " [0.0058 , 0.00614, 0.00631],\n", 301 | " ...,\n", 302 | " [0.0008 , 0.00087, 0.00071],\n", 303 | " [0.00689, 0.00724, 0.00755],\n", 304 | " [0.0582 , 0.07603, 0.07998]],\n", 305 | "\n", 306 | " [[0.01078, 0.00931, 0.00917],\n", 307 | " [0.00167, 0.00077, 0.00076],\n", 308 | " [0.0058 , 0.00614, 0.00631],\n", 309 | " ...,\n", 310 | " [0.0008 , 0.00087, 0.00071],\n", 311 | " [0.00689, 0.00724, 0.00755],\n", 312 | " [0.0582 , 0.07603, 0.07998]],\n", 313 | "\n", 314 | " [[0.01078, 0.00931, 0.00917],\n", 315 | " [0.00167, 0.00077, 0.00076],\n", 316 | " [0.0058 , 0.00614, 0.00631],\n", 
317 | " ...,\n", 318 | " [0.0008 , 0.00087, 0.00071],\n", 319 | " [0.00689, 0.00724, 0.00755],\n", 320 | " [0.0582 , 0.07603, 0.07998]],\n", 321 | "\n", 322 | " [[0.01078, 0.00931, 0.00917],\n", 323 | " [0.00167, 0.00077, 0.00076],\n", 324 | " [0.0058 , 0.00614, 0.00631],\n", 325 | " ...,\n", 326 | " [0.0008 , 0.00087, 0.00071],\n", 327 | " [0.00689, 0.00724, 0.00755],\n", 328 | " [0.0582 , 0.07603, 0.07998]],\n", 329 | "\n", 330 | " [[0.01078, 0.00931, 0.00917],\n", 331 | " [0.00167, 0.00077, 0.00076],\n", 332 | " [0.0058 , 0.00614, 0.00631],\n", 333 | " ...,\n", 334 | " [0.0008 , 0.00087, 0.00071],\n", 335 | " [0.00689, 0.00724, 0.00755],\n", 336 | " [0.0582 , 0.07603, 0.07998]]])" 337 | ] 338 | }, 339 | "metadata": {}, 340 | "execution_count": 12 341 | } 342 | ], 343 | "metadata": {} 344 | }, 345 | { 346 | "cell_type": "code", 347 | "execution_count": null, 348 | "source": [], 349 | "outputs": [], 350 | "metadata": {} 351 | }, 352 | { 353 | "cell_type": "markdown", 354 | "source": [ 355 | "## (3) Off-Policy Evaluation (OPE)\n", 356 | "\n", 357 | "The next step is **OPE**, which attempts to estimate the performance of new decision making policies using only log data generated by behavior, past policies. \n", 358 | "\n", 359 | "Here, we use\n", 360 | "- **Inverse Propensity Score (IPS)**\n", 361 | "- **DirectMethod (DM)**\n", 362 | "- **Doubly Robust (DR)**\n", 363 | "\n", 364 | "to estimate the performance of Bernoulli TS using only the log data. " 365 | ], 366 | "metadata": { 367 | "tags": [] 368 | } 369 | }, 370 | { 371 | "cell_type": "markdown", 372 | "source": [ 373 | "### (3-1) obtain a reward estimator\n", 374 | "`obp.ope.RegressionModel` simplifies the process of reward modeling\n", 375 | "\n", 376 | "$r(x,a) = \\mathbb{E} [r \\mid x, a] \\approx \\hat{r}(x,a)$" 377 | ], 378 | "metadata": {} 379 | }, 380 | { 381 | "cell_type": "code", 382 | "execution_count": 13, 383 | "source": [ 384 | "# obp.ope.RegressionModel\n", 385 | "regression_model = RegressionModel(\n", 386 | " n_actions=dataset.n_actions, # number of actions; |A|\n", 387 | " len_list=dataset.len_list, # number of items in a recommendation list; K\n", 388 | " base_model=LogisticRegression(C=100, max_iter=10000, random_state=12345), # any sklearn classifier\n", 389 | ")" 390 | ], 391 | "outputs": [], 392 | "metadata": {} 393 | }, 394 | { 395 | "cell_type": "code", 396 | "execution_count": 14, 397 | "source": [ 398 | "estimated_rewards = regression_model.fit_predict(\n", 399 | " context=bandit_data[\"context\"],\n", 400 | " action=bandit_data[\"action\"],\n", 401 | " reward=bandit_data[\"reward\"],\n", 402 | " position=bandit_data[\"position\"],\n", 403 | " random_state=12345,\n", 404 | ")" 405 | ], 406 | "outputs": [], 407 | "metadata": {} 408 | }, 409 | { 410 | "cell_type": "code", 411 | "execution_count": 15, 412 | "source": [ 413 | "estimated_rewards[:, :, 0] # \\hat{q}(x,a)" 414 | ], 415 | "outputs": [ 416 | { 417 | "output_type": "execute_result", 418 | "data": { 419 | "text/plain": [ 420 | "array([[1.58148566e-04, 1.68739271e-02, 1.59238728e-04, ...,\n", 421 | " 1.62785786e-04, 1.58892825e-04, 1.39337751e-04],\n", 422 | " [9.43487804e-05, 1.01350576e-02, 9.49991949e-05, ...,\n", 423 | " 9.71154498e-05, 9.47928215e-05, 8.31259330e-05],\n", 424 | " [9.69432542e-08, 1.05192815e-05, 9.76116177e-08, ...,\n", 425 | " 9.97862793e-08, 9.73995491e-08, 8.54108347e-08],\n", 426 | " ...,\n", 427 | " [1.49986944e-04, 1.60169285e-02, 1.51020855e-04, ...,\n", 428 | " 1.54384887e-04, 1.50692800e-04, 1.32146777e-04],\n", 429 | " 
[3.99016414e-04, 4.15165918e-02, 4.01766279e-04, ...,\n", 430 | " 4.10713441e-04, 4.00893762e-04, 3.51565900e-04],\n", 431 | " [3.27203945e-04, 3.42986101e-02, 3.29459070e-04, ...,\n", 432 | " 3.36796524e-04, 3.28743530e-04, 2.88290813e-04]])" 433 | ] 434 | }, 435 | "metadata": {}, 436 | "execution_count": 15 437 | } 438 | ], 439 | "metadata": {} 440 | }, 441 | { 442 | "cell_type": "code", 443 | "execution_count": null, 444 | "source": [], 445 | "outputs": [], 446 | "metadata": {} 447 | }, 448 | { 449 | "cell_type": "markdown", 450 | "source": [ 451 | "### (3-2) OPE\n", 452 | "`obp.ope.OffPolicyEvaluation` simplifies the OPE process\n", 453 | "\n", 454 | "$V(\\pi_e) \\approx \\hat{V} (\\pi_e; \\mathcal{D}_0, \\theta)$ using DM, IPS, and DR" 455 | ], 456 | "metadata": {} 457 | }, 458 | { 459 | "cell_type": "code", 460 | "execution_count": 16, 461 | "source": [ 462 | "ope = OffPolicyEvaluation(\n", 463 | " bandit_feedback=bandit_data, # bandit data\n", 464 | " ope_estimators=[\n", 465 | " IPS(estimator_name=\"IPS\"), \n", 466 | " DM(estimator_name=\"DM\"), \n", 467 | " DR(estimator_name=\"DR\"),\n", 468 | " ] # used estimators\n", 469 | ")" 470 | ], 471 | "outputs": [], 472 | "metadata": { 473 | "tags": [] 474 | } 475 | }, 476 | { 477 | "cell_type": "code", 478 | "execution_count": 17, 479 | "source": [ 480 | "estimated_policy_value = ope.estimate_policy_values(\n", 481 | " action_dist=action_dist, # \\pi_e(a|x)\n", 482 | " estimated_rewards_by_reg_model=estimated_rewards, # \\hat{q}\n", 483 | ")" 484 | ], 485 | "outputs": [], 486 | "metadata": {} 487 | }, 488 | { 489 | "cell_type": "code", 490 | "execution_count": 18, 491 | "source": [ 492 | "# OPE results given by the three estimators\n", 493 | "estimated_policy_value" 494 | ], 495 | "outputs": [ 496 | { 497 | "output_type": "execute_result", 498 | "data": { 499 | "text/plain": [ 500 | "{'IPS': 0.00455288, 'DM': 0.004729396841185728, 'DR': 0.0047693518747239025}" 501 | ] 502 | }, 503 | "metadata": {}, 504 | "execution_count": 18 505 | } 506 | ], 507 | "metadata": {} 508 | }, 509 | { 510 | "cell_type": "code", 511 | "execution_count": null, 512 | "source": [], 513 | "outputs": [], 514 | "metadata": {} 515 | }, 516 | { 517 | "cell_type": "markdown", 518 | "source": [ 519 | "## (4) Evaluation of OPE\n", 520 | "\n", 521 | "Our final step is the **evaluation of OPE**, which evaluates the OPE performance (estimation accuracy) of the OPE estimators.\n", 522 | "\n", 523 | "Specifically, we asses the accuracy of the estimators by comparing their estimation with the ground-truth policy value estimated via the on-policy estimation from Open Bandit Dataset.\n", 524 | "\n", 525 | "This type evaluation of OPE is possible, because Open Bandit Dataset contains a set of *multiple* different logged bandit datasets collected by running different policies on the same platform at the same time." 
526 | ], 527 | "metadata": { 528 | "tags": [] 529 | } 530 | }, 531 | { 532 | "cell_type": "markdown", 533 | "source": [ 534 | "### (4-1) Approximate the Ground-truth Policy Value\n", 535 | "$V(\\pi) \\approx \\frac{1}{|\\mathcal{D}_{te}|} \\sum_{i=1}^{|\\mathcal{D}_{te}|} r_i, $" 536 | ], 537 | "metadata": {} 538 | }, 539 | { 540 | "cell_type": "code", 541 | "execution_count": 19, 542 | "source": [ 543 | "# we first calculate the ground-truth policy value of the evaluation policy\n", 544 | "# , which is estimated by averaging the factual (observed) rewards contained in the dataset (on-policy estimation)\n", 545 | "policy_value_bts = OpenBanditDataset.calc_on_policy_policy_value_estimate(\n", 546 | " behavior_policy='bts', campaign='all'\n", 547 | ")" 548 | ], 549 | "outputs": [ 550 | { 551 | "output_type": "stream", 552 | "name": "stderr", 553 | "text": [ 554 | "INFO:obp.dataset.real:When `data_path` is not given, this class downloads the example small-sized version of the Open Bandit Dataset.\n", 555 | "/Users/usaito/.pyenv/versions/3.9.5/envs/zr-obp/lib/python3.9/site-packages/obp-0.5.1-py3.9.egg/obp/dataset/real.py:203: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only\n", 556 | " item_feature_cat = self.item_context.drop(\"item_feature_0\", 1).apply(\n", 557 | "/Users/usaito/.pyenv/versions/3.9.5/envs/zr-obp/lib/python3.9/site-packages/obp-0.5.1-py3.9.egg/obp/dataset/real.py:206: FutureWarning: In a future version of pandas all arguments of concat except for the argument 'objs' will be keyword-only\n", 558 | " self.action_context = pd.concat([item_feature_cat, item_feature_0], 1).values\n" 559 | ] 560 | } 561 | ], 562 | "metadata": {} 563 | }, 564 | { 565 | "cell_type": "code", 566 | "execution_count": null, 567 | "source": [], 568 | "outputs": [], 569 | "metadata": {} 570 | }, 571 | { 572 | "cell_type": "markdown", 573 | "source": [ 574 | "### (4-2) Evaluation of OPE\n", 575 | "Now, let's evaluate the OPE performance (estimation accuracy) of the three estimators \n", 576 | "\n", 577 | "$SE (\\hat{V}; \\mathcal{D}_0) := \\left( V(\\pi_e) - \\hat{V} (\\pi_e; \\mathcal{D}_0, \\theta) \\right)^2$, (squared error of $\\hat{V}$)" 578 | ], 579 | "metadata": {} 580 | }, 581 | { 582 | "cell_type": "code", 583 | "execution_count": 20, 584 | "source": [ 585 | "squared_errors = ope.evaluate_performance_of_estimators(\n", 586 | " ground_truth_policy_value=policy_value_bts,\n", 587 | " action_dist=action_dist,\n", 588 | " estimated_rewards_by_reg_model=estimated_rewards,\n", 589 | " metric=\"se\", # squared error\n", 590 | ")" 591 | ], 592 | "outputs": [], 593 | "metadata": {} 594 | }, 595 | { 596 | "cell_type": "code", 597 | "execution_count": 21, 598 | "source": [ 599 | "squared_errors # IPS is the most accurate " 600 | ], 601 | "outputs": [ 602 | { 603 | "output_type": "execute_result", 604 | "data": { 605 | "text/plain": [ 606 | "{'IPS': 1.245242943999999e-07,\n", 607 | " 'DM': 2.8026101545742736e-07,\n", 608 | " 'DR': 3.241615572516226e-07}" 609 | ] 610 | }, 611 | "metadata": {}, 612 | "execution_count": 21 613 | } 614 | ], 615 | "metadata": {} 616 | }, 617 | { 618 | "cell_type": "code", 619 | "execution_count": null, 620 | "source": [], 621 | "outputs": [], 622 | "metadata": {} 623 | }, 624 | { 625 | "cell_type": "markdown", 626 | "source": [ 627 | "We can iterate the above process several times and calculate the following MSE\n", 628 | "\n", 629 | "$MSE (\\hat{V}) := T^{-1} \\sum_{t=1}^T SE (\\hat{V}; 
\\mathcal{D}_0^{(t)}) $\n", 630 | "\n", 631 | "where $\\mathcal{D}_0^{(t)}$ is the synthetic data in the $t$-th iteration" 632 | ], 633 | "metadata": {} 634 | }, 635 | { 636 | "cell_type": "markdown", 637 | "source": [ 638 | "Note that the OPE demonstration here is with the small size example version of our dataset. \n", 639 | "Please use its full size version (https://research.zozo.com/data.html) to produce more reasonable results." 640 | ], 641 | "metadata": {} 642 | }, 643 | { 644 | "cell_type": "code", 645 | "execution_count": null, 646 | "source": [], 647 | "outputs": [], 648 | "metadata": {} 649 | } 650 | ], 651 | "metadata": { 652 | "kernelspec": { 653 | "name": "python3", 654 | "display_name": "Python 3.9.5 64-bit ('zr-obp': pyenv)" 655 | }, 656 | "language_info": { 657 | "codemirror_mode": { 658 | "name": "ipython", 659 | "version": 3 660 | }, 661 | "file_extension": ".py", 662 | "mimetype": "text/x-python", 663 | "name": "python", 664 | "nbconvert_exporter": "python", 665 | "pygments_lexer": "ipython3", 666 | "version": "3.9.5" 667 | }, 668 | "interpreter": { 669 | "hash": "64b446a4e17784c2dc3dbe74bf0708929a003fb4822b30671d57ebdef413b716" 670 | } 671 | }, 672 | "nbformat": 4, 673 | "nbformat_minor": 4 674 | } -------------------------------------------------------------------------------- /examples/synthetic.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "source": [ 6 | "# OPE/OPL Experiments with Synthetic Bandit Data\n", 7 | "---\n", 8 | "This notebook provides an example of conducting OPE of several different evaluation policies with synthetic bandit feedback data." 9 | ], 10 | "metadata": {} 11 | }, 12 | { 13 | "cell_type": "code", 14 | "execution_count": 1, 15 | "source": [ 16 | "from sklearn.linear_model import LogisticRegression\n", 17 | "\n", 18 | "# import open bandit pipeline (obp)\n", 19 | "import obp\n", 20 | "from obp.dataset import (\n", 21 | " SyntheticBanditDataset,\n", 22 | " logistic_reward_function,\n", 23 | " linear_behavior_policy,\n", 24 | ")\n", 25 | "from obp.policy import IPWLearner\n", 26 | "from obp.ope import (\n", 27 | " OffPolicyEvaluation, \n", 28 | " RegressionModel,\n", 29 | " InverseProbabilityWeighting as IPS,\n", 30 | " DirectMethod as DM,\n", 31 | " DoublyRobust as DR,\n", 32 | ")" 33 | ], 34 | "outputs": [], 35 | "metadata": {} 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": 2, 40 | "source": [ 41 | "# obp version\n", 42 | "print(obp.__version__)" 43 | ], 44 | "outputs": [ 45 | { 46 | "output_type": "stream", 47 | "name": "stdout", 48 | "text": [ 49 | "0.5.1\n" 50 | ] 51 | } 52 | ], 53 | "metadata": {} 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": null, 58 | "source": [], 59 | "outputs": [], 60 | "metadata": {} 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "source": [ 65 | "## (1) Generate Synthetic Data\n", 66 | "\n", 67 | "`SyntheticBanditDataset` is an easy-to-use synthetic data generator class in the dataset module." 
68 | ], 69 | "metadata": {} 70 | }, 71 | { 72 | "cell_type": "code", 73 | "execution_count": 3, 74 | "source": [ 75 | "dataset = SyntheticBanditDataset(\n", 76 | " n_actions=10, # number of actions; |A|\n", 77 | " dim_context=5, # number of dimensions of context vector\n", 78 | " reward_function=logistic_reward_function, # mean reward function; q(x,a)\n", 79 | " behavior_policy_function=linear_behavior_policy, # behavior policy; \\pi_b\n", 80 | " random_state=12345,\n", 81 | ")" 82 | ], 83 | "outputs": [], 84 | "metadata": { 85 | "tags": [] 86 | } 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": 4, 91 | "source": [ 92 | "training_bandit_data = dataset.obtain_batch_bandit_feedback(n_rounds=10000)\n", 93 | "test_bandit_data = dataset.obtain_batch_bandit_feedback(n_rounds=1000000)" 94 | ], 95 | "outputs": [], 96 | "metadata": {} 97 | }, 98 | { 99 | "cell_type": "code", 100 | "execution_count": 5, 101 | "source": [ 102 | "training_bandit_data" 103 | ], 104 | "outputs": [ 105 | { 106 | "output_type": "execute_result", 107 | "data": { 108 | "text/plain": [ 109 | "{'n_rounds': 10000,\n", 110 | " 'n_actions': 10,\n", 111 | " 'context': array([[-0.20470766, 0.47894334, -0.51943872, -0.5557303 , 1.96578057],\n", 112 | " [ 1.39340583, 0.09290788, 0.28174615, 0.76902257, 1.24643474],\n", 113 | " [ 1.00718936, -1.29622111, 0.27499163, 0.22891288, 1.35291684],\n", 114 | " ...,\n", 115 | " [-1.27028221, 0.80914602, -0.45084222, 0.47179511, 1.89401115],\n", 116 | " [-0.68890924, 0.08857502, -0.56359347, -0.41135069, 0.65157486],\n", 117 | " [ 0.51204121, 0.65384817, -1.98849253, -2.14429131, -0.34186901]]),\n", 118 | " 'action_context': array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n", 119 | " [0, 1, 0, 0, 0, 0, 0, 0, 0, 0],\n", 120 | " [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],\n", 121 | " [0, 0, 0, 1, 0, 0, 0, 0, 0, 0],\n", 122 | " [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],\n", 123 | " [0, 0, 0, 0, 0, 1, 0, 0, 0, 0],\n", 124 | " [0, 0, 0, 0, 0, 0, 1, 0, 0, 0],\n", 125 | " [0, 0, 0, 0, 0, 0, 0, 1, 0, 0],\n", 126 | " [0, 0, 0, 0, 0, 0, 0, 0, 1, 0],\n", 127 | " [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]]),\n", 128 | " 'action': array([9, 2, 1, ..., 0, 2, 6]),\n", 129 | " 'position': None,\n", 130 | " 'reward': array([1, 1, 1, ..., 0, 1, 1]),\n", 131 | " 'expected_reward': array([[0.80210203, 0.73828559, 0.83199558, ..., 0.81190503, 0.70617705,\n", 132 | " 0.68985306],\n", 133 | " [0.94119582, 0.93473317, 0.91345213, ..., 0.94140688, 0.93152449,\n", 134 | " 0.90132868],\n", 135 | " [0.87248862, 0.67974991, 0.66965669, ..., 0.79229752, 0.82712978,\n", 136 | " 0.74923536],\n", 137 | " ...,\n", 138 | " [0.66717573, 0.81583571, 0.77012708, ..., 0.87757008, 0.57652468,\n", 139 | " 0.80629132],\n", 140 | " [0.52526986, 0.39952563, 0.61892038, ..., 0.53610389, 0.49392728,\n", 141 | " 0.58408936],\n", 142 | " [0.55375831, 0.11662199, 0.807396 , ..., 0.22532856, 0.42629292,\n", 143 | " 0.24120499]]),\n", 144 | " 'pscore': array([0.07250876, 0.10335615, 0.14110696, ..., 0.09756788, 0.10335615,\n", 145 | " 0.14065505])}" 146 | ] 147 | }, 148 | "metadata": {}, 149 | "execution_count": 5 150 | } 151 | ], 152 | "metadata": {} 153 | }, 154 | { 155 | "cell_type": "code", 156 | "execution_count": null, 157 | "source": [], 158 | "outputs": [], 159 | "metadata": {} 160 | }, 161 | { 162 | "cell_type": "markdown", 163 | "source": [ 164 | "## (2) Train Bandit Policies (OPL)\n", 165 | "`obp.policy.IPWLearner` can be a first choice (IPS=IPW)\n", 166 | "\n", 167 | "$ \\hat{\\pi} \\in \\underset{\\pi \\in \\Pi}{\\operatorname{argmax}} \\hat{V}_{I P 
S}\\left(\\pi ; \\mathcal{D}_{t r}\\right) $" 168 | ], 169 | "metadata": {} 170 | }, 171 | { 172 | "cell_type": "code", 173 | "execution_count": 6, 174 | "source": [ 175 | "ipw_learner = IPWLearner(\n", 176 | " n_actions=dataset.n_actions, # number of actions; |A|\n", 177 | " base_classifier=LogisticRegression(C=100, random_state=12345) # any sklearn classifier\n", 178 | ")" 179 | ], 180 | "outputs": [], 181 | "metadata": { 182 | "tags": [] 183 | } 184 | }, 185 | { 186 | "cell_type": "code", 187 | "execution_count": 7, 188 | "source": [ 189 | "# fit\n", 190 | "ipw_learner.fit(\n", 191 | " context=training_bandit_data[\"context\"], # context; x\n", 192 | " action=training_bandit_data[\"action\"], # action; a\n", 193 | " reward=training_bandit_data[\"reward\"], # reward; r\n", 194 | " pscore=training_bandit_data[\"pscore\"], # propensity score; pi_b(a|x)\n", 195 | ")" 196 | ], 197 | "outputs": [], 198 | "metadata": {} 199 | }, 200 | { 201 | "cell_type": "code", 202 | "execution_count": 8, 203 | "source": [ 204 | "# predict (action dist = action distribution)\n", 205 | "action_dist_ipw = ipw_learner.predict(\n", 206 | " context=test_bandit_data[\"context\"], # context in the test data\n", 207 | ")" 208 | ], 209 | "outputs": [], 210 | "metadata": {} 211 | }, 212 | { 213 | "cell_type": "code", 214 | "execution_count": 9, 215 | "source": [ 216 | "action_dist_ipw[:, :, 0] # which action to take for each context " 217 | ], 218 | "outputs": [ 219 | { 220 | "output_type": "execute_result", 221 | "data": { 222 | "text/plain": [ 223 | "array([[0., 0., 0., ..., 1., 0., 0.],\n", 224 | " [0., 0., 0., ..., 0., 1., 0.],\n", 225 | " [1., 0., 0., ..., 0., 0., 0.],\n", 226 | " ...,\n", 227 | " [0., 1., 0., ..., 0., 0., 0.],\n", 228 | " [0., 1., 0., ..., 0., 0., 0.],\n", 229 | " [0., 0., 0., ..., 0., 0., 1.]])" 230 | ] 231 | }, 232 | "metadata": {}, 233 | "execution_count": 9 234 | } 235 | ], 236 | "metadata": {} 237 | }, 238 | { 239 | "cell_type": "code", 240 | "execution_count": null, 241 | "source": [], 242 | "outputs": [], 243 | "metadata": {} 244 | }, 245 | { 246 | "cell_type": "markdown", 247 | "source": [ 248 | "## (3) Approximate the Ground-truth Policy Value\n", 249 | "$V(\\pi) \\approx \\frac{1}{|\\mathcal{D}_{te}|} \\sum_{i=1}^{|\\mathcal{D}_{te}|} \\mathbb{E}_{a \\sim \\pi(a|x_i)} [r(x_i, a)], \\; \\, where \\; \\, r(x,a) := \\mathbb{E}_{r \\sim p(r|x,a)} [r]$" 250 | ], 251 | "metadata": { 252 | "tags": [] 253 | } 254 | }, 255 | { 256 | "cell_type": "code", 257 | "execution_count": 10, 258 | "source": [ 259 | "policy_value_of_ipw = dataset.calc_ground_truth_policy_value(\n", 260 | " expected_reward=test_bandit_data[\"expected_reward\"], # expected rewards; q(x,a)\n", 261 | " action_dist=action_dist_ipw, # action distribution of IPWLearner\n", 262 | ")" 263 | ], 264 | "outputs": [], 265 | "metadata": {} 266 | }, 267 | { 268 | "cell_type": "code", 269 | "execution_count": 11, 270 | "source": [ 271 | "# ground-truth policy value of `IPWLearner`\n", 272 | "policy_value_of_ipw" 273 | ], 274 | "outputs": [ 275 | { 276 | "output_type": "execute_result", 277 | "data": { 278 | "text/plain": [ 279 | "0.7544002774485176" 280 | ] 281 | }, 282 | "metadata": {}, 283 | "execution_count": 11 284 | } 285 | ], 286 | "metadata": {} 287 | }, 288 | { 289 | "cell_type": "code", 290 | "execution_count": null, 291 | "source": [], 292 | "outputs": [], 293 | "metadata": {} 294 | }, 295 | { 296 | "cell_type": "markdown", 297 | "source": [ 298 | "## (4) Off-Policy Evaluation (OPE)" 299 | ], 300 | "metadata": { 301 | "tags": 
[] 302 | } 303 | }, 304 | { 305 | "cell_type": "markdown", 306 | "source": [ 307 | "### (4-1) obtain a reward estimator\n", 308 | "`obp.ope.RegressionModel` simplifies the process of reward modeling\n", 309 | "\n", 310 | "$r(x,a) = \\mathbb{E} [r \\mid x, a] \\approx \\hat{r}(x,a)$" 311 | ], 312 | "metadata": { 313 | "tags": [] 314 | } 315 | }, 316 | { 317 | "cell_type": "code", 318 | "execution_count": 12, 319 | "source": [ 320 | "regression_model = RegressionModel(\n", 321 | " n_actions=dataset.n_actions, # number of actions; |A|\n", 322 | " base_model=LogisticRegression(C=100, random_state=12345) # any sklearn classifier\n", 323 | ")" 324 | ], 325 | "outputs": [], 326 | "metadata": {} 327 | }, 328 | { 329 | "cell_type": "code", 330 | "execution_count": 13, 331 | "source": [ 332 | "estimated_rewards = regression_model.fit_predict(\n", 333 | " context=test_bandit_data[\"context\"], # context; x\n", 334 | " action=test_bandit_data[\"action\"], # action; a\n", 335 | " reward=test_bandit_data[\"reward\"], # reward; r\n", 336 | " random_state=12345,\n", 337 | ")" 338 | ], 339 | "outputs": [], 340 | "metadata": {} 341 | }, 342 | { 343 | "cell_type": "code", 344 | "execution_count": 14, 345 | "source": [ 346 | "estimated_rewards[:, :, 0] # \\hat{q}(x,a)" 347 | ], 348 | "outputs": [ 349 | { 350 | "output_type": "execute_result", 351 | "data": { 352 | "text/plain": [ 353 | "array([[0.83287191, 0.77609002, 0.86301082, ..., 0.80705541, 0.87089962,\n", 354 | " 0.88661944],\n", 355 | " [0.24064512, 0.18060694, 0.28603087, ..., 0.21010776, 0.30020355,\n", 356 | " 0.33212285],\n", 357 | " [0.92681158, 0.89803854, 0.94120582, ..., 0.91400784, 0.94487921,\n", 358 | " 0.95208655],\n", 359 | " ...,\n", 360 | " [0.95590514, 0.93780227, 0.96479479, ..., 0.94790501, 0.96704598,\n", 361 | " 0.97144249],\n", 362 | " [0.9041431 , 0.8677301 , 0.92262342, ..., 0.88785354, 0.92736823,\n", 363 | " 0.93671186],\n", 364 | " [0.19985698, 0.14801146, 0.23998121, ..., 0.1733142 , 0.25267968,\n", 365 | " 0.28157917]])" 366 | ] 367 | }, 368 | "metadata": {}, 369 | "execution_count": 14 370 | } 371 | ], 372 | "metadata": {} 373 | }, 374 | { 375 | "cell_type": "code", 376 | "execution_count": null, 377 | "source": [], 378 | "outputs": [], 379 | "metadata": {} 380 | }, 381 | { 382 | "cell_type": "markdown", 383 | "source": [ 384 | "### (4-2) OPE\n", 385 | "`obp.ope.OffPolicyEvaluation` simplifies the OPE process\n", 386 | "\n", 387 | "$V(\\pi_e) \\approx \\hat{V} (\\pi_e; \\mathcal{D}_0, \\theta)$ using DM, IPS, and DR" 388 | ], 389 | "metadata": {} 390 | }, 391 | { 392 | "cell_type": "code", 393 | "execution_count": 15, 394 | "source": [ 395 | "ope = OffPolicyEvaluation(\n", 396 | " bandit_feedback=test_bandit_data, # test data\n", 397 | " ope_estimators=[\n", 398 | " IPS(estimator_name=\"IPS\"), \n", 399 | " DM(estimator_name=\"DM\"), \n", 400 | " DR(estimator_name=\"DR\"),\n", 401 | " ] # used estimators\n", 402 | ")" 403 | ], 404 | "outputs": [], 405 | "metadata": { 406 | "tags": [] 407 | } 408 | }, 409 | { 410 | "cell_type": "code", 411 | "execution_count": 16, 412 | "source": [ 413 | "estimated_policy_value = ope.estimate_policy_values(\n", 414 | " action_dist=action_dist_ipw, # \\pi_e(a|x)\n", 415 | " estimated_rewards_by_reg_model=estimated_rewards, # \\hat{q}(x,a)\n", 416 | ")" 417 | ], 418 | "outputs": [], 419 | "metadata": { 420 | "tags": [] 421 | } 422 | }, 423 | { 424 | "cell_type": "code", 425 | "execution_count": 17, 426 | "source": [ 427 | "# OPE results given by the three estimators\n", 428 | 
"estimated_policy_value" 429 | ], 430 | "outputs": [ 431 | { 432 | "output_type": "execute_result", 433 | "data": { 434 | "text/plain": [ 435 | "{'IPS': 0.7514146726298303, 'DM': 0.6496363798172724, 'DR': 0.7515054724486296}" 436 | ] 437 | }, 438 | "metadata": {}, 439 | "execution_count": 17 440 | } 441 | ], 442 | "metadata": {} 443 | }, 444 | { 445 | "cell_type": "code", 446 | "execution_count": null, 447 | "source": [], 448 | "outputs": [], 449 | "metadata": {} 450 | }, 451 | { 452 | "cell_type": "markdown", 453 | "source": [ 454 | "## (5) Evaluation of OPE\n", 455 | "Now, let's evaluate the OPE performance (estimation accuracy) of the three estimators\n", 456 | "\n", 457 | "$SE (\\hat{V}; \\mathcal{D}_0) := \\left( V(\\pi_e) - \\hat{V} (\\pi_e; \\mathcal{D}_0, \\theta) \\right)^2$, (squared error of $\\hat{V}$)" 458 | ], 459 | "metadata": {} 460 | }, 461 | { 462 | "cell_type": "code", 463 | "execution_count": 18, 464 | "source": [ 465 | "squared_errors = ope.evaluate_performance_of_estimators(\n", 466 | " ground_truth_policy_value=policy_value_of_ipw, # V(\\pi_e)\n", 467 | " action_dist=action_dist_ipw, # \\pi_e(a|x)\n", 468 | " estimated_rewards_by_reg_model=estimated_rewards, # \\hat{q}(x,a)\n", 469 | " metric=\"se\", # squared error\n", 470 | ")" 471 | ], 472 | "outputs": [], 473 | "metadata": {} 474 | }, 475 | { 476 | "cell_type": "code", 477 | "execution_count": 19, 478 | "source": [ 479 | "squared_errors # DR is the most accurate" 480 | ], 481 | "outputs": [ 482 | { 483 | "output_type": "execute_result", 484 | "data": { 485 | "text/plain": [ 486 | "{'IPS': 8.913836133368584e-06,\n", 487 | " 'DM': 0.010975474246890016,\n", 488 | " 'DR': 8.379895987376119e-06}" 489 | ] 490 | }, 491 | "metadata": {}, 492 | "execution_count": 19 493 | } 494 | ], 495 | "metadata": {} 496 | }, 497 | { 498 | "cell_type": "code", 499 | "execution_count": null, 500 | "source": [], 501 | "outputs": [], 502 | "metadata": {} 503 | }, 504 | { 505 | "cell_type": "markdown", 506 | "source": [ 507 | "We can iterate the above process several times and calculate the following MSE\n", 508 | "\n", 509 | "$MSE (\\hat{V}) := T^{-1} \\sum_{t=1}^T SE (\\hat{V}; \\mathcal{D}_0^{(t)}) $\n", 510 | "\n", 511 | "where $\\mathcal{D}_0^{(t)}$ is the synthetic data in the $t$-th iteration" 512 | ], 513 | "metadata": {} 514 | }, 515 | { 516 | "cell_type": "code", 517 | "execution_count": null, 518 | "source": [], 519 | "outputs": [], 520 | "metadata": {} 521 | } 522 | ], 523 | "metadata": { 524 | "kernelspec": { 525 | "name": "python3", 526 | "display_name": "Python 3.9.5 64-bit ('zr-obp': pyenv)" 527 | }, 528 | "language_info": { 529 | "codemirror_mode": { 530 | "name": "ipython", 531 | "version": 3 532 | }, 533 | "file_extension": ".py", 534 | "mimetype": "text/x-python", 535 | "name": "python", 536 | "nbconvert_exporter": "python", 537 | "pygments_lexer": "ipython3", 538 | "version": "3.9.5" 539 | }, 540 | "interpreter": { 541 | "hash": "64b446a4e17784c2dc3dbe74bf0708929a003fb4822b30671d57ebdef413b716" 542 | } 543 | }, 544 | "nbformat": 4, 545 | "nbformat_minor": 4 546 | } -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [tool.poetry] 2 | name = "recsys2021-tutorial" 3 | version = "0.1.0" 4 | description = "examples and simulations with Open Bandit Pipeline" 5 | authors = ["usaito "] 6 | 7 | [tool.poetry.dependencies] 8 | python = "^3.9,<3.10" 9 | scikit-learn = "0.24.2" 10 | numpy = "^1.21.2" 
11 | pandas = "^1.3.3" 12 | obp = "0.5.1" 13 | matplotlib = "^3.4.3" 14 | jupyterlab = "^3.1.13" 15 | 16 | [tool.poetry.dev-dependencies] 17 | 18 | [build-system] 19 | requires = ["poetry-core>=1.0.0"] 20 | build-backend = "poetry.core.masonry.api" 21 | -------------------------------------------------------------------------------- /real-world/demo.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "source": [ 6 | "## A Brief Demo using Open Bandit Dataset and Pipeline\n", 7 | "\n", 8 | "1. Data Loading and Preprocessing (Random Bucket of Open Bandit Dataset)\n", 9 | "2. Off-Policy Learning (IPWLearner and NNPolicyLearner)\n", 10 | "3. Off-Policy Evaluation (IPWLearner vs NNPolicyLearner)\n", 11 | "\n", 12 | "### \"What is the best new policy for the ZOZOTOWN recommendation interface?\"" 13 | ], 14 | "metadata": {} 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": 1, 19 | "source": [ 20 | "from sklearn.ensemble import RandomForestClassifier\n", 21 | "from sklearn.linear_model import LogisticRegression\n", 22 | "\n", 23 | "import obp\n", 24 | "from obp.dataset import OpenBanditDataset\n", 25 | "from obp.policy import IPWLearner, NNPolicyLearner\n", 26 | "from obp.ope import (\n", 27 | " RegressionModel,\n", 28 | " OffPolicyEvaluation,\n", 29 | " SelfNormalizedInverseProbabilityWeighting as SNIPS,\n", 30 | " DoublyRobust as DR,\n", 31 | ")" 32 | ], 33 | "outputs": [], 34 | "metadata": {} 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": 2, 39 | "source": [ 40 | "print(obp.__version__)" 41 | ], 42 | "outputs": [ 43 | { 44 | "output_type": "stream", 45 | "name": "stdout", 46 | "text": [ 47 | "0.5.1\n" 48 | ] 49 | } 50 | ], 51 | "metadata": {} 52 | }, 53 | { 54 | "cell_type": "code", 55 | "execution_count": null, 56 | "source": [], 57 | "outputs": [], 58 | "metadata": {} 59 | }, 60 | { 61 | "cell_type": "markdown", 62 | "source": [ 63 | "## (1) Data Loading and Preprocessing\n", 64 | "\n", 65 | "Here we use a random bucket of the Open Bandit Pipeline. We can download this by using `obp.dataset.OpenBanditDataset`." 
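The demo notebook above imports `IPWLearner`, `NNPolicyLearner`, and several OPE estimators from `obp`. Since the pinned versions matter for reproducing the warnings and results shown later, a small sanity check of the environment can be useful (the exact version strings below are assumptions based on the pinned dependencies, not guaranteed to match every install):

```python
# Optional environment check before running the demo notebook,
# assuming the poetry environment pinned above (obp==0.5.1, sklearn==0.24.2).
import obp
import sklearn

print("obp:", obp.__version__)          # expected: 0.5.1
print("scikit-learn:", sklearn.__version__)  # expected: 0.24.2
```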
66 | ], 67 | "metadata": {} 68 | }, 69 | { 70 | "cell_type": "code", 71 | "execution_count": 3, 72 | "source": [ 73 | "# define OpenBanditDataset class to handle the real bandit data\n", 74 | "dataset = OpenBanditDataset(\n", 75 | " behavior_policy=\"random\", campaign=\"all\", data_path=\"./open_bandit_dataset\",\n", 76 | ")" 77 | ], 78 | "outputs": [ 79 | { 80 | "output_type": "stream", 81 | "name": "stderr", 82 | "text": [ 83 | "/Users/usaito/.pyenv/versions/3.9.5/envs/zr-obp/lib/python3.9/site-packages/obp-0.5.1-py3.9.egg/obp/dataset/real.py:203: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only\n", 84 | " item_feature_cat = self.item_context.drop(\"item_feature_0\", 1).apply(\n", 85 | "/Users/usaito/.pyenv/versions/3.9.5/envs/zr-obp/lib/python3.9/site-packages/obp-0.5.1-py3.9.egg/obp/dataset/real.py:206: FutureWarning: In a future version of pandas all arguments of concat except for the argument 'objs' will be keyword-only\n", 86 | " self.action_context = pd.concat([item_feature_cat, item_feature_0], 1).values\n" 87 | ] 88 | } 89 | ], 90 | "metadata": {} 91 | }, 92 | { 93 | "cell_type": "code", 94 | "execution_count": 4, 95 | "source": [ 96 | "# logged bandit data collected by the uniform random policy\n", 97 | "training_bandit_data, test_bandit_data = dataset.obtain_batch_bandit_feedback(\n", 98 | " test_size=0.3, is_timeseries_split=True,\n", 99 | ")" 100 | ], 101 | "outputs": [], 102 | "metadata": {} 103 | }, 104 | { 105 | "cell_type": "code", 106 | "execution_count": 5, 107 | "source": [ 108 | "# ignore the position effect for a demo purpose\n", 109 | "training_bandit_data[\"position\"] = None \n", 110 | "test_bandit_data[\"position\"] = None" 111 | ], 112 | "outputs": [], 113 | "metadata": {} 114 | }, 115 | { 116 | "cell_type": "code", 117 | "execution_count": 6, 118 | "source": [ 119 | "# number of actions\n", 120 | "dataset.n_actions" 121 | ], 122 | "outputs": [ 123 | { 124 | "output_type": "execute_result", 125 | "data": { 126 | "text/plain": [ 127 | "80" 128 | ] 129 | }, 130 | "metadata": {}, 131 | "execution_count": 6 132 | } 133 | ], 134 | "metadata": {} 135 | }, 136 | { 137 | "cell_type": "code", 138 | "execution_count": 7, 139 | "source": [ 140 | "# sample size\n", 141 | "dataset.n_rounds" 142 | ], 143 | "outputs": [ 144 | { 145 | "output_type": "execute_result", 146 | "data": { 147 | "text/plain": [ 148 | "1374327" 149 | ] 150 | }, 151 | "metadata": {}, 152 | "execution_count": 7 153 | } 154 | ], 155 | "metadata": {} 156 | }, 157 | { 158 | "cell_type": "code", 159 | "execution_count": null, 160 | "source": [], 161 | "outputs": [], 162 | "metadata": {} 163 | }, 164 | { 165 | "cell_type": "markdown", 166 | "source": [ 167 | "## (2) Off-Policy Learning (OPL)\n", 168 | "\n", 169 | "Train two new policies: `obp.policy.IPWLearner` and `obp.policy.NNPolicyLearner`." 
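The data-loading cells above rely on the full Open Bandit Dataset being downloaded to `./open_bandit_dataset`. If the full dataset is not available, `OpenBanditDataset` can also fall back to the small sample data bundled with `obp`; a hedged sketch (the `data_path=None` fallback behavior is based on obp 0.5.x and should be verified against the installed version):

```python
# Sketch: loading either the full dataset (if downloaded) or obp's bundled sample data.
from pathlib import Path
from obp.dataset import OpenBanditDataset

data_path = Path("./open_bandit_dataset")
dataset = OpenBanditDataset(
    behavior_policy="random",
    campaign="all",
    # if the full dataset is absent, omitting data_path uses the small sample shipped with obp
    data_path=data_path if data_path.exists() else None,
)

training_bandit_data, test_bandit_data = dataset.obtain_batch_bandit_feedback(
    test_size=0.3, is_timeseries_split=True,  # chronological split: train on early logs, test on later logs
)
print(dataset.n_actions, dataset.n_rounds)
```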
170 | ], 171 | "metadata": {} 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "source": [ 176 | "### IPWLearner" 177 | ], 178 | "metadata": {} 179 | }, 180 | { 181 | "cell_type": "code", 182 | "execution_count": 8, 183 | "source": [ 184 | "ipw_learner = IPWLearner(\n", 185 | " n_actions=dataset.n_actions,\n", 186 | " base_classifier=RandomForestClassifier(\n", 187 | " n_estimators=100, max_depth=5, min_samples_leaf=10, random_state=12345\n", 188 | " ),\n", 189 | ")" 190 | ], 191 | "outputs": [], 192 | "metadata": {} 193 | }, 194 | { 195 | "cell_type": "code", 196 | "execution_count": 9, 197 | "source": [ 198 | "# fit\n", 199 | "ipw_learner.fit(\n", 200 | " context=training_bandit_data[\"context\"], # context; x\n", 201 | " action=training_bandit_data[\"action\"], # action; a\n", 202 | " reward=training_bandit_data[\"reward\"], # reward; r\n", 203 | " pscore=training_bandit_data[\"pscore\"], # propensity score; pi_b(a|x)\n", 204 | ")" 205 | ], 206 | "outputs": [], 207 | "metadata": {} 208 | }, 209 | { 210 | "cell_type": "code", 211 | "execution_count": 10, 212 | "source": [ 213 | "# predict (make new decisions)\n", 214 | "action_dist_ipw = ipw_learner.predict(\n", 215 | " context=test_bandit_data[\"context\"]\n", 216 | ")" 217 | ], 218 | "outputs": [], 219 | "metadata": {} 220 | }, 221 | { 222 | "cell_type": "markdown", 223 | "source": [ 224 | "### NNPolicyLearner" 225 | ], 226 | "metadata": {} 227 | }, 228 | { 229 | "cell_type": "code", 230 | "execution_count": 11, 231 | "source": [ 232 | "nn_learner = NNPolicyLearner(\n", 233 | " n_actions=dataset.n_actions,\n", 234 | " dim_context=dataset.dim_context,\n", 235 | " solver=\"adam\",\n", 236 | " off_policy_objective=\"ipw\", # = ips\n", 237 | " batch_size=32, \n", 238 | " random_state=12345,\n", 239 | ")" 240 | ], 241 | "outputs": [], 242 | "metadata": {} 243 | }, 244 | { 245 | "cell_type": "code", 246 | "execution_count": 12, 247 | "source": [ 248 | "# fit\n", 249 | "nn_learner.fit(\n", 250 | " context=training_bandit_data[\"context\"], # context; x\n", 251 | " action=training_bandit_data[\"action\"], # action; a\n", 252 | " reward=training_bandit_data[\"reward\"], # reward; r\n", 253 | " pscore=training_bandit_data[\"pscore\"], # propensity score; pi_b(a|x)\n", 254 | ")" 255 | ], 256 | "outputs": [ 257 | { 258 | "output_type": "stream", 259 | "name": "stderr", 260 | "text": [ 261 | "policy learning: 100%|████████████████████████████| 200/200 [00:12<00:00, 16.51it/s]\n" 262 | ] 263 | } 264 | ], 265 | "metadata": {} 266 | }, 267 | { 268 | "cell_type": "code", 269 | "execution_count": 13, 270 | "source": [ 271 | "# predict (make new decisions)\n", 272 | "action_dist_nn = nn_learner.predict(\n", 273 | " context=test_bandit_data[\"context\"]\n", 274 | ")" 275 | ], 276 | "outputs": [], 277 | "metadata": {} 278 | }, 279 | { 280 | "cell_type": "code", 281 | "execution_count": null, 282 | "source": [], 283 | "outputs": [], 284 | "metadata": {} 285 | }, 286 | { 287 | "cell_type": "markdown", 288 | "source": [ 289 | "## (3) Off-Policy Evaluation\n", 290 | "\n", 291 | "Estimating the performance of IPWLearner and NNPolicyLearner via OPE." 
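The OPL cells above train `IPWLearner` (a classification-based policy learner weighted by inverse propensity scores) and `NNPolicyLearner` (a neural policy optimized with a gradient-based IPW objective). Both expose the same `fit`/`predict` interface, so comparing additional base classifiers is mostly a matter of swapping one argument; a minimal sketch under that assumption:

```python
# Sketch: swapping the base classifier of IPWLearner, assuming training_bandit_data
# and test_bandit_data are already loaded as in the cells above.
from sklearn.linear_model import LogisticRegression
from obp.policy import IPWLearner

ipw_learner_lr = IPWLearner(
    n_actions=dataset.n_actions,
    base_classifier=LogisticRegression(C=100, max_iter=500, random_state=12345),
)
ipw_learner_lr.fit(
    context=training_bandit_data["context"],
    action=training_bandit_data["action"],
    reward=training_bandit_data["reward"],
    pscore=training_bandit_data["pscore"],  # pi_b(a|x); uniform random here
)
action_dist_lr = ipw_learner_lr.predict(context=test_bandit_data["context"])
```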
292 | ], 293 | "metadata": {} 294 | }, 295 | { 296 | "cell_type": "markdown", 297 | "source": [ 298 | "### (3-1) obtain a reward estimator\n", 299 | "`obp.ope.RegressionModel` simplifies the process of reward modeling\n", 300 | "\n", 301 | "$r(x,a) = \\mathbb{E} [r \\mid x, a] \\approx \\hat{r}(x,a)$" 302 | ], 303 | "metadata": {} 304 | }, 305 | { 306 | "cell_type": "code", 307 | "execution_count": 14, 308 | "source": [ 309 | "regression_model = RegressionModel(\n", 310 | " n_actions=dataset.n_actions, \n", 311 | " base_model=LogisticRegression(C=100, max_iter=500, random_state=12345),\n", 312 | ")" 313 | ], 314 | "outputs": [], 315 | "metadata": {} 316 | }, 317 | { 318 | "cell_type": "code", 319 | "execution_count": 15, 320 | "source": [ 321 | "estimated_rewards = regression_model.fit_predict(\n", 322 | " context=test_bandit_data[\"context\"], \n", 323 | " action=test_bandit_data[\"action\"], \n", 324 | " reward=test_bandit_data[\"reward\"], \n", 325 | " random_state=12345,\n", 326 | ")" 327 | ], 328 | "outputs": [], 329 | "metadata": {} 330 | }, 331 | { 332 | "cell_type": "code", 333 | "execution_count": null, 334 | "source": [], 335 | "outputs": [], 336 | "metadata": {} 337 | }, 338 | { 339 | "cell_type": "markdown", 340 | "source": [ 341 | "### (4-2) OPE\n", 342 | "`obp.ope.OffPolicyEvaluation` simplifies the OPE process\n", 343 | "\n", 344 | "$V(\\pi_e) \\approx \\hat{V} (\\pi_e; \\mathcal{D}_0, \\theta)$ using DM, IPS, and DR" 345 | ], 346 | "metadata": {} 347 | }, 348 | { 349 | "cell_type": "code", 350 | "execution_count": 16, 351 | "source": [ 352 | "ope = OffPolicyEvaluation(\n", 353 | " bandit_feedback=test_bandit_data,\n", 354 | " ope_estimators=[\n", 355 | " SNIPS(estimator_name=\"SNIPS\"),\n", 356 | " DR(estimator_name=\"DR\"),\n", 357 | " ]\n", 358 | ")" 359 | ], 360 | "outputs": [], 361 | "metadata": {} 362 | }, 363 | { 364 | "cell_type": "markdown", 365 | "source": [ 366 | "### (4-3) Visualize the OPE results\n", 367 | "\n", 368 | "Output the relative performances of the trained policies compared to the logging policy (uniform random)" 369 | ], 370 | "metadata": {} 371 | }, 372 | { 373 | "cell_type": "code", 374 | "execution_count": 17, 375 | "source": [ 376 | "ope.visualize_off_policy_estimates_of_multiple_policies(\n", 377 | " policy_name_list=[\"IPWLearner\", \"NNPolicyLearner\"],\n", 378 | " action_dist_list=[action_dist_ipw, action_dist_nn],\n", 379 | " estimated_rewards_by_reg_model=estimated_rewards,\n", 380 | " n_bootstrap_samples=100,\n", 381 | " is_relative=True,\n", 382 | " random_state=12345,\n", 383 | ")" 384 | ], 385 | "outputs": [ 386 | { 387 | "output_type": "display_data", 388 | "data": { 389 | "image/png": 
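Section (3) estimates the value of both learned policies on the held-out logs. Since the logging policy is uniform random, its on-policy value (the empirical mean reward of the test logs) serves as the baseline that the `is_relative=True` visualization divides by; a short sketch of computing that baseline explicitly (a simple mean, shown only to make the "relative to random" comparison concrete):

```python
# Sketch: the on-policy value of the uniform-random logging policy on the test logs,
# i.e. the denominator used when is_relative=True in the visualization below.
import numpy as np

random_policy_value = float(np.mean(test_bandit_data["reward"]))
print(f"baseline (uniform random) value: {random_policy_value:.5f}")
```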
"<base64-encoded PNG of the OPE bar chart: estimated relative policy values of IPWLearner and NNPolicyLearner under SNIPS and DR, with bootstrap confidence intervals>",
" 392 | ] 393 | }, 394 | "metadata": {} 395 | } 396 | ], 397 | "metadata": {} 398 | }, 399 | { 400 | "cell_type": "markdown", 401 | "source": [ 402 | "Both policy learner outperforms the random baseline. In particular, NNPolicyLearner seems to be the best, improving the random baseline by about 70\\%. It also outperforms IPWLearner in both SNIPS and DR." 403 | ], 404 | "metadata": {} 405 | }, 406 | { 407 | "cell_type": "markdown", 408 | "source": [], 409 | "metadata": {} 410 | } 411 | ], 412 | "metadata": { 413 | "interpreter": { 414 | "hash": "64b446a4e17784c2dc3dbe74bf0708929a003fb4822b30671d57ebdef413b716" 415 | }, 416 | "kernelspec": { 417 | "name": "python3", 418 | "display_name": "Python 3.9.5 64-bit ('zr-obp': pyenv)" 419 | }, 420 | "language_info": { 421 | "codemirror_mode": { 422 | "name": "ipython", 423 | "version": 3 424 | }, 425 | "file_extension": ".py", 426 | "mimetype": "text/x-python", 427 | "name": "python", 428 | "nbconvert_exporter": "python", 429 | "pygments_lexer": "ipython3", 430 | "version": "3.9.5" 431 | } 432 | }, 433 | "nbformat": 4, 434 | "nbformat_minor": 4 435 | } -------------------------------------------------------------------------------- /simulations/README.md: -------------------------------------------------------------------------------- 1 | # OPE Simulations 2 | 3 | This page contains a list of OPE simulations conducted with the Open Bandit Pipeline. 4 | 5 | - [`evaluation-of-ope-1.ipynb`](./evaluation-of-ope-1.ipynb): evaluation of OPE experiment using Open Bandit Pipeline with synthetic data (easy setting). 6 | - [`evaluation-of-ope-2.ipynb`](./evaluation-of-ope-2.ipynb): evaluation of OPE experiment using Open Bandit Pipeline with synthetic data (challenging setting). 7 | - [`hyperparameter-tuning.ipynb`](./evaluation-of-ope.ipynb): evaluation of hyperparameter tuning for OPE using Open Bandit Pipeline with synthetic data. 8 | --------------------------------------------------------------------------------