├── README.md ├── RECENT_chatobot.ipynb ├── chatbot_keras.py ├── conversation_spliter.py ├── decoder_inputs.txt ├── encoder_inputs.txt ├── padded_decoder_sequences.txt ├── padded_encoder_sequences.txt ├── prepare_data.py ├── seq2seq_model.py └── training_chatbot.py /README.md: -------------------------------------------------------------------------------- 1 | # Automatic Encoder-Decoder Seq2Seq: English Chatbot 2 | ## An English Chatbot Using an Encoder-Decoder LSTM Seq2Seq Model 3 | ![](https://cdn-images-1.medium.com/max/2560/1*1I2tTjCkMHlQ-r73eRn4ZQ.png) 4 | 5 | ## Introduction 6 | 7 | Seq2seq (sequence-to-sequence) is a model whose input and output are both sequences: it converts one time series into another. The idea is simple: prepare two RNNs, one for the input language (the encoder) and one for the output language (the decoder), and connect them through intermediate states. The encoder processes the sequence you want to convert, its final internal state is passed to the decoder, and the decoder generates the output sequence from that state. The encoder and decoder are each RNNs that process their respective time series.
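The encoder-decoder flow described above can be sketched with a tiny untrained NumPy RNN. This is illustrative only: the sizes, the random weights, and the helper names `rnn_step`, `encode`, and `decode` are assumptions made for this sketch, not code from this repository, and a real model would use trained LSTM cells.

```python
import numpy as np

# Toy encoder-decoder with plain RNN cells, to show the data flow only.
rng = np.random.default_rng(0)
HIDDEN, VOCAB = 8, 5

def rnn_step(x_onehot, h, Wx, Wh):
    """One RNN step: combine the current input with the previous hidden state."""
    return np.tanh(Wx @ x_onehot + Wh @ h)

def encode(tokens, Wx, Wh):
    """Run the encoder over the input sequence; return its final state."""
    h = np.zeros(HIDDEN)
    for t in tokens:
        x = np.eye(VOCAB)[t]            # one-hot input token
        h = rnn_step(x, h, Wx, Wh)
    return h

def decode(h, Wx, Wh, Wout, steps):
    """Start the decoder from the encoder's final state and emit tokens."""
    out, x = [], np.eye(VOCAB)[0]       # token 0 plays the role of a start tag
    for _ in range(steps):
        h = rnn_step(x, h, Wx, Wh)
        tok = int(np.argmax(Wout @ h))  # greedy choice of the next token
        out.append(tok)
        x = np.eye(VOCAB)[tok]          # feed the prediction back in
    return out

# Random, untrained weights for encoder, decoder, and output projection.
Wx_e, Wh_e = rng.normal(size=(HIDDEN, VOCAB)), rng.normal(size=(HIDDEN, HIDDEN))
Wx_d, Wh_d = rng.normal(size=(HIDDEN, VOCAB)), rng.normal(size=(HIDDEN, HIDDEN))
Wout = rng.normal(size=(VOCAB, HIDDEN))

state = encode([1, 2, 3], Wx_e, Wh_e)           # the "thought vector"
reply = decode(state, Wx_d, Wh_d, Wout, steps=4)
```

The key point is the hand-off: the decoder never sees the input tokens, only the encoder's final state. The notebook below does the same thing with LSTM layers, learned embeddings, and trained weights.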
9 | 10 | ## Technical Preferences 11 | 12 | | Title | Detail | 13 | |:-----------:|:------------------------------------------------| 14 | | Environment | macOS Mojave 10.14.3 | 15 | | Language | Python | 16 | | Library | Keras, scikit-learn, NumPy, matplotlib, Pandas, Seaborn | 17 | | Dataset | [Tab-delimited Bilingual Sentence Pairs](http://www.manythings.org/anki/) | 18 | | Algorithm | Encoder-Decoder LSTM | 19 | 20 | ## References 21 | 22 | - [Machine Translation using Sequence-to-Sequence Learning](https://nextjournal.com/gkoehler/machine-translation-seq2seq-cpu) 23 | - [Chatbots with Seq2Seq Learn to build a chatbot using TensorFlow](http://complx.me/2016-06-28-easy-seq2seq/) 24 | - [Generative Model Chatbots](https://medium.com/botsupply/generative-model-chatbots-e422ab08461e) 25 | - [How I Used Deep Learning To Train A Chatbot To Talk Like Me (Sorta)](https://adeshpande3.github.io/How-I-Used-Deep-Learning-to-Train-a-Chatbot-to-Talk-Like-Me) 26 | - [The Basics of LSTM You Can No Longer Ask About](https://www.hellocybernetics.tech/entry/2017/05/06/182757) 27 | -------------------------------------------------------------------------------- /RECENT_chatobot.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "colab_type": "text", 7 | "id": "A_pjoByYRMTv" 8 | }, 9 | "source": [ 10 | "# Seq2Seq: Encoder-Decoder Chatbot " 11 | ] 12 | }, 13 | { 14 | "cell_type": "markdown", 15 | "metadata": { 16 | "colab_type": "text", 17 | "id": "4n0pEEarRMTx" 18 | }, 19 | "source": [ 20 | "![](https://cdn-images-1.medium.com/max/2560/1*1I2tTjCkMHlQ-r73eRn4ZQ.png)" 21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": 0, 26 | "metadata": { 27 | "colab": {}, 28 | "colab_type": "code", 29 | "id": "XgAIoL02RMTy" 30 | }, 31 | "outputs": [], 32 | "source": [ 33 | "import numpy as np\n", 34 | "import pandas as pd\n", 35 | "import string\n", 36 | "import pickle\n", 37 | "import operator\n",
38 | "import matplotlib.pyplot as plt\n", 39 | "%matplotlib inline" 40 | ] 41 | }, 42 | { 43 | "cell_type": "markdown", 44 | "metadata": { 45 | "colab_type": "text", 46 | "id": "dHaiKoheRMT5" 47 | }, 48 | "source": [ 49 | "## Step 1. Import Data" 50 | ] 51 | }, 52 | { 53 | "cell_type": "code", 54 | "execution_count": 0, 55 | "metadata": { 56 | "colab": {}, 57 | "colab_type": "code", 58 | "id": "jh3QLAlCRMT6" 59 | }, 60 | "outputs": [], 61 | "source": [ 62 | "# load the conversation data from the .txt file\n", 63 | "import codecs\n", 64 | "\n", 65 | "with codecs.open(\"movie_lines.txt\", \"rb\", encoding=\"utf-8\", errors=\"ignore\") as f:\n", 66 | " lines = f.read().split(\"\\n\")\n", 67 | " conversations = []\n", 68 | " for line in lines:\n", 69 | " data = line.split(\" +++$+++ \")\n", 70 | " conversations.append(data)" 71 | ] 72 | }, 73 | { 74 | "cell_type": "code", 75 | "execution_count": 7, 76 | "metadata": { 77 | "colab": { 78 | "base_uri": "https://localhost:8080/", 79 | "height": 119 80 | }, 81 | "colab_type": "code", 82 | "id": "6M3eZnuPRMT9", 83 | "outputId": "c7420e45-8e12-4feb-bf58-7c208d14e842" 84 | }, 85 | "outputs": [ 86 | { 87 | "data": { 88 | "text/plain": [ 89 | "[['L1045', 'u0', 'm0', 'BIANCA', 'They do not!'],\n", 90 | " ['L1044', 'u2', 'm0', 'CAMERON', 'They do to!'],\n", 91 | " ['L985', 'u0', 'm0', 'BIANCA', 'I hope so.'],\n", 92 | " ['L984', 'u2', 'm0', 'CAMERON', 'She okay?'],\n", 93 | " ['L925', 'u0', 'm0', 'BIANCA', \"Let's go.\"],\n", 94 | " ['L924', 'u2', 'm0', 'CAMERON', 'Wow']]" 95 | ] 96 | }, 97 | "execution_count": 7, 98 | "metadata": { 99 | "tags": [] 100 | }, 101 | "output_type": "execute_result" 102 | } 103 | ], 104 | "source": [ 105 | "conversations[:6]" 106 | ] 107 | }, 108 | { 109 | "cell_type": "code", 110 | "execution_count": 0, 111 | "metadata": { 112 | "colab": {}, 113 | "colab_type": "code", 114 | "id": "4hRs6j-vRMUE" 115 | }, 116 | "outputs": [], 117 | "source": [ 118 | "# keep only the line id and the utterance\n", 119 | "chats = {}\n", 120 | "for tokens in conversations:\n",
121 | " if len(tokens) > 4:\n", 122 | " idx = tokens[0][1:]\n", 123 | " chat = tokens[4]\n", 124 | " chats[int(idx)] = chat" 125 | ] 126 | }, 127 | { 128 | "cell_type": "code", 129 | "execution_count": null, 130 | "metadata": { 131 | "colab": { 132 | "base_uri": "https://localhost:8080/", 133 | "height": 20454 134 | }, 135 | "colab_type": "code", 136 | "id": "Q5DrAY8PRMUN", 137 | "outputId": "35500f14-06a6-4eda-f163-397f0320a4c1" 138 | }, 139 | "outputs": [], 140 | "source": [ 141 | "# pair each id with its utterance, sorted by id\n", 142 | "sorted_chats = sorted(chats.items(), key = lambda x: x[0])\n", 143 | "sorted_chats" 144 | ] 145 | }, 146 | { 147 | "cell_type": "code", 148 | "execution_count": 0, 149 | "metadata": { 150 | "colab": {}, 151 | "colab_type": "code", 152 | "id": "azB8ddZcRMUS" 153 | }, 154 | "outputs": [], 155 | "source": [ 156 | "# build a dict of conversations: { conversation set id: [list of utterances] }\n", 157 | "conves_dict = {}\n", 158 | "counter = 1\n", 159 | "conves_ids = []\n", 160 | "for i in range(1, len(sorted_chats)+1):\n", 161 | " if i < len(sorted_chats):\n", 162 | " if (sorted_chats[i][0] - sorted_chats[i-1][0]) == 1:\n", 163 | " # check that the previous utterance has not been added yet\n", 164 | " if sorted_chats[i-1][1] not in conves_ids:\n", 165 | " conves_ids.append(sorted_chats[i-1][1])\n", 166 | " conves_ids.append(sorted_chats[i][1])\n", 167 | " elif (sorted_chats[i][0] - sorted_chats[i-1][0]) > 1: \n", 168 | " conves_dict[counter] = conves_ids\n", 169 | " conves_ids = []\n", 170 | " counter += 1\n", 171 | " else:\n", 172 | " pass"
"colab_type": "code", 198 | "id": "vLbT8fzGRMUa" 199 | }, 200 | "outputs": [], 201 | "source": [ 202 | "context_and_target = []\n", 203 | "for conves in conves_dict.values():\n", 204 | " # drop the last utterance when a conversation has an odd number of turns\n", 205 | " if len(conves) % 2 != 0:\n", 206 | " conves = conves[:-1]\n", 207 | " for i in range(0, len(conves), 2):\n", 208 | " context_and_target.append((conves[i], conves[i+1]))" 209 | ] 210 | }, 211 | { 212 | "cell_type": "code", 213 | "execution_count": 14, 214 | "metadata": { 215 | "colab": { 216 | "base_uri": "https://localhost:8080/", 217 | "height": 153 218 | }, 219 | "colab_type": "code", 220 | "id": "wBJDX_kLRMUd", 221 | "outputId": "faf0d417-4f01-4296-9381-1e7ed10ca469" 222 | }, 223 | "outputs": [ 224 | { 225 | "data": { 226 | "text/plain": [ 227 | "[('Did you change your hair?', 'No.'),\n", 228 | " ('I missed you.',\n", 229 | " 'It says here you exposed yourself to a group of freshmen girls.'),\n", 230 | " ('It was a bratwurst. I was eating lunch.',\n", 231 | " 'With the teeth of your zipper?'),\n", 232 | " ('You the new guy?', 'So they tell me...'),\n", 233 | " (\"C'mon. 
I'm supposed to give you the tour.\",\n", 234 | " 'So -- which Dakota you from?')]" 235 | ] 236 | }, 237 | "execution_count": 14, 238 | "metadata": { 239 | "tags": [] 240 | }, 241 | "output_type": "execute_result" 242 | } 243 | ], 244 | "source": [ 245 | "# the (context, target) pairs are ready\n", 246 | "context_and_target[:5]" 247 | ] 248 | }, 249 | { 250 | "cell_type": "code", 251 | "execution_count": 0, 252 | "metadata": { 253 | "colab": {}, 254 | "colab_type": "code", 255 | "id": "nqBGBxadRMUi" 256 | }, 257 | "outputs": [], 258 | "source": [ 259 | "context, target = zip(*context_and_target)" 260 | ] 261 | }, 262 | { 263 | "cell_type": "code", 264 | "execution_count": 0, 265 | "metadata": { 266 | "colab": {}, 267 | "colab_type": "code", 268 | "id": "20Wf7yq3RMUl" 269 | }, 270 | "outputs": [], 271 | "source": [ 272 | "context = list(context)\n", 273 | "target = list(target)" 274 | ] 275 | }, 276 | { 277 | "cell_type": "code", 278 | "execution_count": 17, 279 | "metadata": { 280 | "colab": { 281 | "base_uri": "https://localhost:8080/", 282 | "height": 102 283 | }, 284 | "colab_type": "code", 285 | "id": "ANNT_oibRMUp", 286 | "outputId": "20198454-ce47-44bd-df38-9a68a782347b" 287 | }, 288 | "outputs": [ 289 | { 290 | "data": { 291 | "text/plain": [ 292 | "['Did you change your hair?',\n", 293 | " 'I missed you.',\n", 294 | " 'It was a bratwurst. I was eating lunch.',\n", 295 | " 'You the new guy?',\n", 296 | " \"C'mon. 
I'm supposed to give you the tour.\"]" 297 | ] 298 | }, 299 | "execution_count": 17, 300 | "metadata": { 301 | "tags": [] 302 | }, 303 | "output_type": "execute_result" 304 | } 305 | ], 306 | "source": [ 307 | "context[:5]" 308 | ] 309 | }, 310 | { 311 | "cell_type": "code", 312 | "execution_count": 18, 313 | "metadata": { 314 | "colab": { 315 | "base_uri": "https://localhost:8080/", 316 | "height": 357 317 | }, 318 | "colab_type": "code", 319 | "id": "kDb0GTENRMUw", 320 | "outputId": "0f72dffa-69f8-46a2-c2b8-257b02c7bfc6" 321 | }, 322 | "outputs": [ 323 | { 324 | "data": { 325 | "text/plain": [ 326 | "['No.',\n", 327 | " 'It says here you exposed yourself to a group of freshmen girls.',\n", 328 | " 'With the teeth of your zipper?',\n", 329 | " 'So they tell me...',\n", 330 | " 'So -- which Dakota you from?',\n", 331 | " 'I was kidding. People actually live there?',\n", 332 | " 'How many people were in your old school?',\n", 333 | " 'Get out!',\n", 334 | " 'Couple thousand. Most of them evil',\n", 335 | " 'Yeah, but these guys have never seen a horse. They just jack off to Clint Eastwood.',\n", 336 | " 'You burn, you pine, you perish?',\n", 337 | " \"Bianca Stratford. Sophomore. Don't even think about it\",\n", 338 | " \"I could start with your haircut, but it doesn't matter. She's not allowed to date until her older sister does. And that's an impossibility.\",\n", 339 | " 'Expressing my opinion is not a terrorist action.',\n", 340 | " 'I still maintain that he kicked himself in the balls. 
I was merely a spectator.',\n", 341 | " 'Tempestuous?',\n", 342 | " 'Patrick Verona Random skid.',\n", 343 | " \"I'm sure he's completely incapable of doing anything that interesting.\",\n", 344 | " 'Block E?',\n", 345 | " 'Just a little.']" 346 | ] 347 | }, 348 | "execution_count": 18, 349 | "metadata": { 350 | "tags": [] 351 | }, 352 | "output_type": "execute_result" 353 | } 354 | ], 355 | "source": [ 356 | "target[:20]" 357 | ] 358 | }, 359 | { 360 | "cell_type": "markdown", 361 | "metadata": { 362 | "colab_type": "text", 363 | "id": "bl2kl5WTRMU3" 364 | }, 365 | "source": [ 366 | "## Step 2. Preprocessing for text data" 367 | ] 368 | }, 369 | { 370 | "cell_type": "code", 371 | "execution_count": 0, 372 | "metadata": { 373 | "colab": {}, 374 | "colab_type": "code", 375 | "id": "pqqjGJsGRMU5" 376 | }, 377 | "outputs": [], 378 | "source": [ 379 | "# from my_seq2seq_text_cleanear import text_modifier, nonalpha_remover\n", 380 | "import re\n", 381 | "MAX_LEN = 12" 382 | ] 383 | }, 384 | { 385 | "cell_type": "code", 386 | "execution_count": 173, 387 | "metadata": { 388 | "colab": {}, 389 | "colab_type": "code", 390 | "id": "kXCK1rHRRMU-" 391 | }, 392 | "outputs": [], 393 | "source": [ 394 | "def clean_text(text):\n", 395 | " '''Clean text by removing unnecessary characters and altering the format of words.'''\n", 396 | "\n", 397 | " text = text.lower()\n", 398 | " \n", 399 | " text = re.sub(r\"i'm\", \"i am\", text)\n", 400 | " text = re.sub(r\"he's\", \"he is\", text)\n", 401 | " text = re.sub(r\"she's\", \"she is\", text)\n", 402 | " text = re.sub(r\"it's\", \"it is\", text)\n", 403 | " text = re.sub(r\"that's\", \"that is\", text)\n", 404 | " text = re.sub(r\"what's\", \"what is\", text)\n", 405 | " text = re.sub(r\"where's\", \"where is\", text)\n", 406 | " text = re.sub(r\"how's\", \"how is\", text)\n", 407 | " text = re.sub(r\"\\'ll\", \" will\", text)\n", 408 | " text = re.sub(r\"\\'ve\", \" have\", text)\n", 409 | " text = re.sub(r\"\\'re\", \" are\", 
text)\n", 410 | " text = re.sub(r\"\\'d\", \" would\", text)\n", 411 | " text = re.sub(r\"\\'re\", \" are\", text)\n", 412 | " text = re.sub(r\"won't\", \"will not\", text)\n", 413 | " text = re.sub(r\"can't\", \"cannot\", text)\n", 414 | " text = re.sub(r\"n't\", \" not\", text)\n", 415 | " text = re.sub(r\"n'\", \"ng\", text)\n", 416 | " text = re.sub(r\"'bout\", \"about\", text)\n", 417 | " text = re.sub(r\"'til\", \"until\", text)\n", 418 | " text = re.sub(r\"[-()\\\"#/@;:<>{}`+=~|.!?,]\", \"\", text)\n", 419 | " \n", 420 | " return text" 421 | ] 422 | }, 423 | { 424 | "cell_type": "markdown", 425 | "metadata": { 426 | "colab_type": "text", 427 | "id": "iv5ggyQhRMVF" 428 | }, 429 | "source": [ 430 | "### 2-1. Clean Text" 431 | ] 432 | }, 433 | { 434 | "cell_type": "code", 435 | "execution_count": 0, 436 | "metadata": { 437 | "colab": {}, 438 | "colab_type": "code", 439 | "id": "5WG930lmRMVH" 440 | }, 441 | "outputs": [], 442 | "source": [ 443 | "tidy_target = []\n", 444 | "for conve in target:\n", 445 | " text = clean_text(conve)\n", 446 | " tidy_target.append(text)" 447 | ] 448 | }, 449 | { 450 | "cell_type": "code", 451 | "execution_count": 22, 452 | "metadata": { 453 | "colab": { 454 | "base_uri": "https://localhost:8080/", 455 | "height": 357 456 | }, 457 | "colab_type": "code", 458 | "id": "lil1q_dlRMVK", 459 | "outputId": "c9b3559d-ea48-4105-88ed-e99d911b02e6" 460 | }, 461 | "outputs": [ 462 | { 463 | "data": { 464 | "text/plain": [ 465 | "['no',\n", 466 | " 'it says here you exposed yourself to a group of freshmen girls',\n", 467 | " 'with the teeth of your zipper',\n", 468 | " 'so they tell me',\n", 469 | " 'so which dakota you from',\n", 470 | " 'i was kidding people actually live there',\n", 471 | " 'how many people were in your old school',\n", 472 | " 'get out',\n", 473 | " 'couple thousand most of them evil',\n", 474 | " 'yeah but these guys have never seen a horse they just jack off to clint eastwood',\n", 475 | " 'you burn you pine you 
perish',\n", 476 | " 'bianca stratford sophomore do not even think about it',\n", 477 | " 'i could start with your haircut but it does not matter she is not allowed to date until her older sister does and that is an impossibility',\n", 478 | " 'expressing my opinion is not a terrorist action',\n", 479 | " 'i still maintain that he kicked himself in the balls i was merely a spectator',\n", 480 | " 'tempestuous',\n", 481 | " 'patrick verona random skid',\n", 482 | " 'i am sure he is completely incapable of doing anything that interesting',\n", 483 | " 'block e',\n", 484 | " 'just a little']" 485 | ] 486 | }, 487 | "execution_count": 22, 488 | "metadata": { 489 | "tags": [] 490 | }, 491 | "output_type": "execute_result" 492 | } 493 | ], 494 | "source": [ 495 | "tidy_target[:20]" 496 | ] 497 | }, 498 | { 499 | "cell_type": "code", 500 | "execution_count": 0, 501 | "metadata": { 502 | "colab": {}, 503 | "colab_type": "code", 504 | "id": "1u7QXY-TRMVN" 505 | }, 506 | "outputs": [], 507 | "source": [ 508 | "tidy_context = []\n", 509 | "for conve in context:\n", 510 | " text = clean_text(conve)\n", 511 | " tidy_context.append(text)" 512 | ] 513 | }, 514 | { 515 | "cell_type": "code", 516 | "execution_count": 24, 517 | "metadata": { 518 | "colab": { 519 | "base_uri": "https://localhost:8080/", 520 | "height": 377 521 | }, 522 | "colab_type": "code", 523 | "id": "hUfGBwOqRMVP", 524 | "outputId": "60e91895-6e3b-434d-e9b4-ee0b624deb80" 525 | }, 526 | "outputs": [ 527 | { 528 | "data": { 529 | "text/plain": [ 530 | "['did you change your hair',\n", 531 | " 'i missed you',\n", 532 | " 'it was a bratwurst i was eating lunch',\n", 533 | " 'you the new guy',\n", 534 | " \"c'mon i am supposed to give you the tour\",\n", 535 | " 'north actually how would you ',\n", 536 | " 'yeah a couple we are outnumbered by the cows though',\n", 537 | " 'thirtytwo',\n", 538 | " 'how many people go here',\n", 539 | " 'that i am used to',\n", 540 | " 'that girl i ',\n", 541 | " 'who is she',\n", 542 
| " 'why not',\n", 543 | " 'katarina stratford my my you have been terrorizing ms blaise again',\n", 544 | " \"well yes compared to your other choices of expression this year today's events are quite mild by the way bobby rictor's gonad retrieval operation went quite well in case you are interested\",\n", 545 | " 'the point is kat people perceive you as somewhat ',\n", 546 | " \"who's that\",\n", 547 | " 'that is pat verona the one who was gone for a year i heard he was doing porn movies',\n", 548 | " 'he always look so',\n", 549 | " 'mandella eat starving yourself is a very slow way to die']" 550 | ] 551 | }, 552 | "execution_count": 24, 553 | "metadata": { 554 | "tags": [] 555 | }, 556 | "output_type": "execute_result" 557 | } 558 | ], 559 | "source": [ 560 | "tidy_context[:20]" 561 | ] 562 | }, 563 | { 564 | "cell_type": "code", 565 | "execution_count": 0, 566 | "metadata": { 567 | "colab": {}, 568 | "colab_type": "code", 569 | "id": "TMRuuv8ZRMVX" 570 | }, 571 | "outputs": [], 572 | "source": [ 573 | "# tags for the decoder inputs\n", 574 | "bos = \" \"\n", 575 | "eos = \" \"\n", 576 | "final_target = [bos + conve + eos for conve in tidy_target] \n", 577 | "encoder_inputs = tidy_context\n", 578 | "decoder_inputs = final_target" 579 | ] 580 | }, 581 | { 582 | "cell_type": "code", 583 | "execution_count": 4, 584 | "metadata": { 585 | "colab": {}, 586 | "colab_type": "code", 587 | "id": "Rwj5qO6-RMVS" 588 | }, 589 | "outputs": [], 590 | "source": [ 591 | "import codecs\n", 592 | "with codecs.open(\"encoder_inputs.txt\", \"rb\", encoding=\"utf-8\", errors=\"ignore\") as f:\n", 593 | " lines = f.read().split(\"\\n\")\n", 594 | " encoder_text = []\n", 595 | " for line in lines:\n", 596 | " data = line.split(\"\\n\")[0]\n", 597 | " encoder_text.append(data)" 598 | ] 599 | }, 600 | { 601 | "cell_type": "code", 602 | "execution_count": 51, 603 | "metadata": {}, 604 | "outputs": [ 605 | { 606 | "data": { 607 | "text/plain": [ 608 | "143865" 609 | ] 610 | }, 611 | 
"execution_count": 51, 612 | "metadata": {}, 613 | "output_type": "execute_result" 614 | } 615 | ], 616 | "source": [ 617 | "len(encoder_text)" 618 | ] 619 | }, 620 | { 621 | "cell_type": "code", 622 | "execution_count": null, 623 | "metadata": { 624 | "colab": {}, 625 | "colab_type": "code", 626 | "id": "pgjJwZnYRMVU" 627 | }, 628 | "outputs": [], 629 | "source": [ 630 | "encoder_text" 631 | ] 632 | }, 633 | { 634 | "cell_type": "code", 635 | "execution_count": 6, 636 | "metadata": {}, 637 | "outputs": [], 638 | "source": [ 639 | "with codecs.open(\"decoder_inputs.txt\", \"rb\", encoding=\"utf-8\", errors=\"ignore\") as f:\n", 640 | " lines = f.read().split(\"\\n\")\n", 641 | " decoder_text = []\n", 642 | " for line in lines:\n", 643 | " data = line.split(\"\\n\")[0]\n", 644 | " decoder_text.append(data)" 645 | ] 646 | }, 647 | { 648 | "cell_type": "code", 649 | "execution_count": null, 650 | "metadata": {}, 651 | "outputs": [], 652 | "source": [ 653 | "decoder_text" 654 | ] 655 | }, 656 | { 657 | "cell_type": "markdown", 658 | "metadata": { 659 | "colab_type": "text", 660 | "id": "UMXtB9i6RMVa" 661 | }, 662 | "source": [ 663 | "### 2-2. 
MAKE VOCABULARY" 663 | ] 664 | }, 665 | { 666 | "cell_type": "code", 667 | "execution_count": 0, 668 | "metadata": { 669 | "colab": {}, 670 | "colab_type": "code", 671 | "id": "BQWanckCRMVb" 672 | }, 673 | "outputs": [], 674 | "source": [ 675 | "# first check the size of the raw vocabulary\n", 676 | "dictionary = []\n", 677 | "for text in full_text:\n", 678 | " words = text.split()\n", 679 | " for i in range(0, len(words)):\n", 680 | " if words[i] not in dictionary:\n", 681 | " dictionary.append(words[i])" 682 | ] 683 | }, 684 | { 685 | "cell_type": "code", 686 | "execution_count": 8, 687 | "metadata": { 688 | "colab": {}, 689 | "colab_type": "code", 690 | "id": "DR7nq44URMVf", 691 | "scrolled": true 692 | }, 693 | "outputs": [ 694 | { 695 | "name": "stderr", 696 | "output_type": "stream", 697 | "text": [ 698 | "Using TensorFlow backend.\n" 699 | ] 700 | } 701 | ], 702 | "source": [ 703 | "from keras.preprocessing.text import Tokenizer\n", 704 | "VOCAB_SIZE = 14999\n", 705 | "tokenizer = Tokenizer(num_words=VOCAB_SIZE)" 706 | ] 707 | }, 708 | { 709 | "cell_type": "code", 710 | "execution_count": 9, 711 | "metadata": {}, 712 | "outputs": [], 713 | "source": [ 714 | "full_text = encoder_text + decoder_text" 715 | ] 716 | }, 717 | { 718 | "cell_type": "code", 719 | "execution_count": 10, 720 | "metadata": { 721 | "colab": { 722 | "base_uri": "https://localhost:8080/", 723 | "height": 34 724 | }, 725 | "colab_type": "code", 726 | "id": "Q0RgA9ssRMVj", 727 | "outputId": "4ab16108-1ce4-4c32-d2f4-a9f4d7c66b80" 728 | }, 729 | "outputs": [ 730 | { 731 | "data": { 732 | "text/plain": [ 733 | "65283" 734 | ] 735 | }, 736 | "execution_count": 10, 737 | "metadata": {}, 738 | "output_type": "execute_result" 739 | } 740 | ], 741 | "source": [ 742 | "# build the vocabulary\n", 743 | "tokenizer.fit_on_texts(full_text)\n", 744 | "word_index = tokenizer.word_index\n", 745 | "len(word_index)" 746 | ] 747 | }, 748 | { 749 | "cell_type": "code", 750 | "execution_count": 66, 751 | "metadata": { 752 | "colab": {}, 753 | 
"colab_type": "code", 755 | "id": "wIT8hFwjRMVn" 756 | }, 757 | "outputs": [], 758 | "source": [ 759 | "# prepare the reversed (index-to-word) dictionary\n", 760 | "index2word = {}\n", 761 | "for k, v in word_index.items():\n", 762 | " if v < 15000:\n", 763 | " index2word[v] = k\n", 764 | " if v > 15000:\n", 765 | " continue" 766 | ] 767 | }, 768 | { 769 | "cell_type": "code", 770 | "execution_count": null, 771 | "metadata": {}, 772 | "outputs": [], 773 | "source": [ 774 | "index2word" 775 | ] 776 | }, 777 | { 778 | "cell_type": "code", 779 | "execution_count": 68, 780 | "metadata": {}, 781 | "outputs": [], 782 | "source": [ 783 | "word2index = {}\n", 784 | "for k, v in index2word.items():\n", 785 | " word2index[v] = k" 786 | ] 787 | }, 788 | { 789 | "cell_type": "code", 790 | "execution_count": null, 791 | "metadata": {}, 792 | "outputs": [], 793 | "source": [ 794 | "word2index" 795 | ] 796 | }, 797 | { 798 | "cell_type": "code", 799 | "execution_count": 71, 800 | "metadata": { 801 | "colab": {}, 802 | "colab_type": "code", 803 | "id": "Vi5Zp56PZwWy" 804 | }, 805 | "outputs": [ 806 | { 807 | "data": { 808 | "text/plain": [ 809 | "True" 810 | ] 811 | }, 812 | "execution_count": 71, 813 | "metadata": {}, 814 | "output_type": "execute_result" 815 | } 816 | ], 817 | "source": [ 818 | "len(word2index) == len(index2word)" 819 | ] 820 | }, 821 | { 822 | "cell_type": "code", 823 | "execution_count": 70, 824 | "metadata": {}, 825 | "outputs": [ 826 | { 827 | "data": { 828 | "text/plain": [ 829 | "14999" 830 | ] 831 | }, 832 | "execution_count": 70, 833 | "metadata": {}, 834 | "output_type": "execute_result" 835 | } 836 | ], 837 | "source": [ 838 | "len(index2word)" 839 | ] 840 | }, 841 | { 842 | "cell_type": "markdown", 843 | "metadata": { 844 | "colab_type": "text", 845 | "id": "0ErckVpJRMVp" 846 | }, 847 | "source": [ 848 | "### 2-3. 
ONE-HOT VECTORIZER" 849 | ] 850 | }, 851 | { 852 | "cell_type": "code", 853 | "execution_count": 13, 854 | "metadata": { 855 | "colab": {}, 856 | "colab_type": "code", 857 | "id": "Dc9GMlYdRMVq" 858 | }, 859 | "outputs": [], 860 | "source": [ 861 | "# build the word-index sequences (convert to np.array if needed)\n", 862 | "encoder_sequences = tokenizer.texts_to_sequences(encoder_text)\n", 863 | "# encoder_sequences = np.array(encoder_sequences)" 864 | ] 865 | }, 866 | { 867 | "cell_type": "code", 868 | "execution_count": 14, 869 | "metadata": { 870 | "colab": {}, 871 | "colab_type": "code", 872 | "id": "EB6wvoIFRMVs" 873 | }, 874 | "outputs": [], 875 | "source": [ 876 | "# decoder data\n", 877 | "decoder_sequences = tokenizer.texts_to_sequences(decoder_text)\n", 878 | "# decoder_sequences = np.array(decoder_sequences)" 879 | ] 880 | }, 881 | { 882 | "cell_type": "code", 883 | "execution_count": null, 884 | "metadata": {}, 885 | "outputs": [], 886 | "source": [ 887 | "encoder_sequences" 888 | ] 889 | }, 890 | { 891 | "cell_type": "code", 892 | "execution_count": 16, 893 | "metadata": {}, 894 | "outputs": [], 895 | "source": [ 896 | "for seqs in encoder_sequences:\n", 897 | " for seq in seqs:\n", 898 | " if seq > 14999:\n", 899 | " print(seq)\n", 900 | " break" 901 | ] 902 | }, 903 | { 904 | "cell_type": "code", 905 | "execution_count": 139, 906 | "metadata": {}, 907 | "outputs": [ 908 | { 909 | "data": { 910 | "text/plain": [ 911 | "15000" 912 | ] 913 | }, 914 | "execution_count": 139, 915 | "metadata": {}, 916 | "output_type": "execute_result" 917 | } 918 | ], 919 | "source": [ 920 | "VOCAB_SIZE = len(index2word) + 1\n", 921 | "VOCAB_SIZE" 922 | ] 923 | }, 924 | { 925 | "cell_type": "code", 926 | "execution_count": 53, 927 | "metadata": {}, 928 | "outputs": [ 929 | { 930 | "data": { 931 | "text/plain": [ 932 | "(143865, 20, 15000)" 933 | ] 934 | }, 935 | "execution_count": 53, 936 | "metadata": {}, 937 | "output_type": "execute_result" 938 | } 939 | ], 940 | "source": [ 941 | "decoder_output_data.shape" 
942 | ] 943 | }, 944 | { 945 | "cell_type": "code", 946 | "execution_count": null, 947 | "metadata": {}, 948 | "outputs": [], 949 | "source": [ 950 | "decoder_sequences" 951 | ] 952 | }, 953 | { 954 | "cell_type": "code", 955 | "execution_count": 98, 956 | "metadata": { 957 | "colab": {}, 958 | "colab_type": "code", 959 | "id": "KkUuN7EdRMVz" 960 | }, 961 | "outputs": [], 962 | "source": [ 963 | "import numpy as np\n", 964 | "MAX_LEN = 20\n", 965 | "num_samples = len(encoder_sequences)\n", 966 | "decoder_output_data = np.zeros((num_samples, MAX_LEN, VOCAB_SIZE), dtype=\"float32\")" 967 | ] 968 | }, 969 | { 970 | "cell_type": "code", 971 | "execution_count": 130, 972 | "metadata": { 973 | "colab": {}, 974 | "colab_type": "code", 975 | "id": "LTGqtHmlRMV4" 976 | }, 977 | "outputs": [], 978 | "source": [ 979 | "# 3D one-hot tensor of the decoder outputs, shifted one step ahead of the inputs\n", 980 | "for i, seqs in enumerate(decoder_input_data):\n", 981 | " for j, seq in enumerate(seqs):\n", 982 | " if j > 0:\n", 983 | " decoder_output_data[i][j-1][seq] = 1." 984 | ] 985 | }, 986 | { 987 | "cell_type": "code", 988 | "execution_count": 134, 989 | "metadata": {}, 990 | "outputs": [ 991 | { 992 | "data": { 993 | "text/plain": [ 994 | "(143865, 20, 15000)" 995 | ] 996 | }, 997 | "execution_count": 134, 998 | "metadata": {}, 999 | "output_type": "execute_result" 1000 | } 1001 | ], 1002 | "source": [ 1003 | "decoder_output_data.shape" 1004 | ] 1005 | }, 1006 | { 1007 | "cell_type": "markdown", 1008 | "metadata": { 1009 | "colab_type": "text", 1010 | "id": "VXYJEes1RMV9" 1011 | }, 1012 | "source": [ 1013 | "### 2-4. 
PADDING" 1014 | ] 1015 | }, 1016 | { 1017 | "cell_type": "code", 1018 | "execution_count": 128, 1019 | "metadata": { 1020 | "colab": { 1021 | "base_uri": "https://localhost:8080/", 1022 | "height": 215 1023 | }, 1024 | "colab_type": "code", 1025 | "id": "d_qBdh0eRMV-", 1026 | "outputId": "fcfb794d-4945-4d18-f1ec-5c4e42712bbb" 1027 | }, 1028 | "outputs": [], 1029 | "source": [ 1030 | "from keras.preprocessing.sequence import pad_sequences\n", 1031 | "encoder_input_data = pad_sequences(encoder_sequences, maxlen=MAX_LEN, dtype='int32', padding='post', truncating='post')\n", 1032 | "decoder_input_data = pad_sequences(decoder_sequences, maxlen=MAX_LEN, dtype='int32', padding='post', truncating='post')" 1033 | ] 1034 | }, 1035 | { 1036 | "cell_type": "code", 1037 | "execution_count": 129, 1038 | "metadata": {}, 1039 | "outputs": [ 1040 | { 1041 | "data": { 1042 | "text/plain": [ 1043 | "array([ 1, 32, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n", 1044 | " 0, 0, 0], dtype=int32)" 1045 | ] 1046 | }, 1047 | "execution_count": 129, 1048 | "metadata": {}, 1049 | "output_type": "execute_result" 1050 | } 1051 | ], 1052 | "source": [ 1053 | "decoder_input_data[0]" 1054 | ] 1055 | }, 1056 | { 1057 | "cell_type": "markdown", 1058 | "metadata": { 1059 | "colab_type": "text", 1060 | "id": "1IvPATnlRMWB" 1061 | }, 1062 | "source": [ 1063 | "### 2-5. 
Word2Vec: pretrained GloVe vectors" 1064 | ] 1065 | }, 1066 | { 1067 | "cell_type": "code", 1068 | "execution_count": 57, 1069 | "metadata": { 1070 | "colab": {}, 1071 | "colab_type": "code", 1072 | "id": "lbEykxwmRMWC" 1073 | }, 1074 | "outputs": [ 1075 | { 1076 | "name": "stdout", 1077 | "output_type": "stream", 1078 | "text": [ 1079 | "GloVe loaded!\n" 1080 | ] 1081 | } 1082 | ], 1083 | "source": [ 1084 | "embeddings_index = {}\n", 1085 | "with open('glove.6B.50d.txt', encoding='utf-8') as f:\n", 1086 | " for line in f:\n", 1087 | " values = line.split()\n", 1088 | " word = values[0]\n", 1089 | " coefs = np.asarray(values[1:], dtype='float32')\n", 1090 | " embeddings_index[word] = coefs\n", 1091 | " f.close()\n", 1092 | "\n", 1093 | "print(\"GloVe loaded!\")" 1094 | ] 1095 | }, 1096 | { 1097 | "cell_type": "code", 1098 | "execution_count": 59, 1099 | "metadata": { 1100 | "colab": {}, 1101 | "colab_type": "code", 1102 | "id": "HUXce9sDRMWI" 1103 | }, 1104 | "outputs": [], 1105 | "source": [ 1106 | "embedding_dimention = 50\n", 1107 | "def embedding_matrix_creater(embedding_dimention, word_index):\n", 1108 | " embedding_matrix = np.zeros((len(word_index) + 1, embedding_dimention))\n", 1109 | " for word, i in word_index.items():\n", 1110 | " embedding_vector = embeddings_index.get(word)\n", 1111 | " if embedding_vector is not None:\n", 1112 | " # words not found in embedding index will be all-zeros.\n", 1113 | " embedding_matrix[i] = embedding_vector\n", 1114 | " return embedding_matrix" 1115 | ] 1116 | }, 1117 | { 1118 | "cell_type": "code", 1119 | "execution_count": 137, 1120 | "metadata": {}, 1121 | "outputs": [], 1122 | "source": [ 1123 | "embedding_matrix = embedding_matrix_creater(50, word_index=word2index)" 1124 | ] 1125 | }, 1126 | { 1127 | "cell_type": "code", 1128 | "execution_count": 140, 1129 | "metadata": {}, 1130 | "outputs": [], 1131 | "source": [ 1132 | "embed_layer = Embedding(input_dim=VOCAB_SIZE, output_dim=50, trainable=True,)\n", 1133 | 
"embed_layer.build((None,))\n", 1134 | "embed_layer.set_weights([embedding_matrix])" 1135 | ] 1136 | }, 1137 | { 1138 | "cell_type": "markdown", 1139 | "metadata": { 1140 | "colab_type": "text", 1141 | "id": "HBLzb6z0RMWM" 1142 | }, 1143 | "source": [ 1144 | "## Step 3. Build Seq2Seq Model" 1145 | ] 1146 | }, 1147 | { 1148 | "cell_type": "code", 1149 | "execution_count": 60, 1150 | "metadata": { 1151 | "colab": {}, 1152 | "colab_type": "code", 1153 | "id": "YW3imyr0RMWX" 1154 | }, 1155 | "outputs": [], 1156 | "source": [ 1157 | "from keras.layers import Embedding\n", 1158 | "from keras.layers import Input, Dense, LSTM, TimeDistributed\n", 1159 | "from keras.models import Model" 1160 | ] 1161 | }, 1162 | { 1163 | "cell_type": "code", 1164 | "execution_count": 149, 1165 | "metadata": {}, 1166 | "outputs": [], 1167 | "source": [ 1168 | "def seq2seq_model_builder(HIDDEN_DIM=300):\n", 1169 | " \n", 1170 | " encoder_inputs = Input(shape=(MAX_LEN, ), dtype='int32',)\n", 1171 | " encoder_embedding = embed_layer(encoder_inputs)\n", 1172 | " encoder_LSTM = LSTM(HIDDEN_DIM, return_state=True)\n", 1173 | " encoder_outputs, state_h, state_c = encoder_LSTM(encoder_embedding)\n", 1174 | " \n", 1175 | " decoder_inputs = Input(shape=(MAX_LEN, ), dtype='int32',)\n", 1176 | " decoder_embedding = embed_layer(decoder_inputs)\n", 1177 | " decoder_LSTM = LSTM(HIDDEN_DIM, return_state=True, return_sequences=True)\n", 1178 | " decoder_outputs, _, _ = decoder_LSTM(decoder_embedding, initial_state=[state_h, state_c])\n", 1179 | " \n", 1180 | " # dense_layer = Dense(VOCAB_SIZE, activation='softmax')\n", 1181 | " outputs = TimeDistributed(Dense(VOCAB_SIZE, activation='softmax'))(decoder_outputs)\n", 1182 | " model = Model([encoder_inputs, decoder_inputs], outputs)\n", 1183 | " \n", 1184 | " return model" 1185 | ] 1186 | }, 1187 | { 1188 | "cell_type": "code", 1189 | "execution_count": 150, 1190 | "metadata": {}, 1191 | "outputs": [], 1192 | "source": [ 1193 | "model = 
seq2seq_model_builder(HIDDEN_DIM=300)" 1194 | ] 1195 | }, 1196 | { 1197 | "cell_type": "code", 1198 | "execution_count": 151, 1199 | "metadata": {}, 1200 | "outputs": [ 1201 | { 1202 | "name": "stdout", 1203 | "output_type": "stream", 1204 | "text": [ 1205 | "__________________________________________________________________________________________________\n", 1206 | "Layer (type) Output Shape Param # Connected to \n", 1207 | "==================================================================================================\n", 1208 | "input_10 (InputLayer) (None, 20) 0 \n", 1209 | "__________________________________________________________________________________________________\n", 1210 | "input_9 (InputLayer) (None, 20) 0 \n", 1211 | "__________________________________________________________________________________________________\n", 1212 | "embedding_3 (Embedding) (None, 20, 50) 750000 input_9[0][0] \n", 1213 | " input_10[0][0] \n", 1214 | "__________________________________________________________________________________________________\n", 1215 | "lstm_11 (LSTM) [(None, 300), (None, 421200 embedding_3[8][0] \n", 1216 | "__________________________________________________________________________________________________\n", 1217 | "lstm_12 (LSTM) [(None, 20, 300), (N 421200 embedding_3[9][0] \n", 1218 | " lstm_11[0][1] \n", 1219 | " lstm_11[0][2] \n", 1220 | "__________________________________________________________________________________________________\n", 1221 | "time_distributed_4 (TimeDistrib (None, 20, 15000) 4515000 lstm_12[0][0] \n", 1222 | "==================================================================================================\n", 1223 | "Total params: 6,107,400\n", 1224 | "Trainable params: 6,107,400\n", 1225 | "Non-trainable params: 0\n", 1226 | "__________________________________________________________________________________________________\n" 1227 | ] 1228 | } 1229 | ], 1230 | "source": [ 1231 | "model.summary()" 1232 | ] 1233 | }, 
1234 | { 1235 | "cell_type": "code", 1236 | "execution_count": 155, 1237 | "metadata": {}, 1238 | "outputs": [ 1239 | { 1240 | "data": { 1241 | "text/plain": [ 1242 | "'/Users/akr712/Desktop/CHATBOT'" 1243 | ] 1244 | }, 1245 | "execution_count": 155, 1246 | "metadata": {}, 1247 | "output_type": "execute_result" 1248 | } 1249 | ], 1250 | "source": [ 1251 | "pwd" 1252 | ] 1253 | }, 1254 | { 1255 | "cell_type": "code", 1256 | "execution_count": null, 1257 | "metadata": {}, 1258 | "outputs": [], 1259 | "source": [ 1260 | "from keras.utils import plot_model\n", 1261 | "plot_model(model, to_file='/Users/akr712/Desktop/CHATBOT/seq2seq.png')" 1262 | ] 1263 | }, 1264 | { 1265 | "cell_type": "code", 1266 | "execution_count": 154, 1267 | "metadata": { 1268 | "colab": {}, 1269 | "colab_type": "code", 1270 | "id": "rIFru1mFRMWd" 1271 | }, 1272 | "outputs": [], 1273 | "source": [ 1274 | "model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])" 1275 | ] 1276 | }, 1277 | { 1278 | "cell_type": "markdown", 1279 | "metadata": { 1280 | "colab_type": "text", 1281 | "id": "g8jouTFzRMWh" 1282 | }, 1283 | "source": [ 1284 | "## Step 4.
Training Model" 1285 | ] 1286 | }, 1287 | { 1288 | "cell_type": "code", 1289 | "execution_count": 164, 1290 | "metadata": {}, 1291 | "outputs": [], 1292 | "source": [ 1293 | "BATCH_SIZE = 32\n", 1294 | "EPOCHS = 5" 1295 | ] 1296 | }, 1297 | { 1298 | "cell_type": "code", 1299 | "execution_count": 163, 1300 | "metadata": { 1301 | "colab": {}, 1302 | "colab_type": "code", 1303 | "id": "qGVVQvhFRMWq" 1304 | }, 1305 | "outputs": [ 1306 | { 1307 | "data": { 1308 | "text/plain": [ 1309 | "(143865, 20)" 1310 | ] 1311 | }, 1312 | "execution_count": 163, 1313 | "metadata": {}, 1314 | "output_type": "execute_result" 1315 | } 1316 | ], 1317 | "source": [ 1318 | "encoder_input_data.shape" 1319 | ] 1320 | }, 1321 | { 1322 | "cell_type": "code", 1323 | "execution_count": 165, 1324 | "metadata": { 1325 | "colab": {}, 1326 | "colab_type": "code", 1327 | "id": "p4i_CsA4RMWk" 1328 | }, 1329 | "outputs": [ 1330 | { 1331 | "name": "stdout", 1332 | "output_type": "stream", 1333 | "text": [ 1334 | "Epoch 1/5\n", 1335 | "143865/143865 [==============================] - 5913s 41ms/step - loss: 0.9308 - acc: 0.8280\n", 1336 | "Epoch 2/5\n", 1337 | "143865/143865 [==============================] - 5848s 41ms/step - loss: 0.0447 - acc: 0.9449\n", 1338 | "Epoch 3/5\n", 1339 | "143865/143865 [==============================] - 5494s 38ms/step - loss: 0.0052 - acc: 0.9493\n", 1340 | "Epoch 4/5\n", 1341 | "143865/143865 [==============================] - 5753s 40ms/step - loss: 0.0016 - acc: 0.9498\n", 1342 | "Epoch 5/5\n", 1343 | "143865/143865 [==============================] - 4970s 35ms/step - loss: 8.2432e-04 - acc: 0.9499\n" 1344 | ] 1345 | } 1346 | ], 1347 | "source": [ 1348 | "history = model.fit([encoder_input_data, decoder_input_data], \n", 1349 | " decoder_output_data, \n", 1350 | " epochs=EPOCHS, \n", 1351 | " batch_size=BATCH_SIZE)" 1352 | ] 1353 | }, 1354 | { 1355 | "cell_type": "markdown", 1356 | "metadata": {}, 1357 | "source": [ 1358 | "#### Visualize Learning History" 1359 | ] 
1360 | }, 1361 | { 1362 | "cell_type": "code", 1363 | "execution_count": 183, 1364 | "metadata": { 1365 | "colab": {}, 1366 | "colab_type": "code", 1367 | "id": "Mj9pi9UGRMWn" 1368 | }, 1369 | "outputs": [ 1370 | { 1371 | "data": { 1372 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAmsAAAGDCAYAAAB0s1eWAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3XmYXXWd7/v3t1KpzHMChMwoCGGGEIYkjS3ajajQigMiyJDoOe2xh3P0dmt3X7Xp26fvPY/t6XNaT3drEmRQBqFtUVEERU3CGIaATBJTmRMyz6nU9Lt/7JWwU1SSTahdaw/v1/PUk7XXsPf3V7vY+8MavitSSkiSJKkyNeRdgCRJkg7NsCZJklTBDGuSJEkVzLAmSZJUwQxrkiRJFcywJkmSVMEMa5KqXkR8OyL+nxLXXR4R7y53TZLUUwxrkiRJFcywJkkVIiIa865BUuUxrEnqFdnhx/8rIp6LiN0RMS8ijo2In0TEzoh4KCJGFK1/eUS8EBHbIuKXEXFK0bKzI+LpbLu7gP5dXuv9EfFstu0jEXFGiTW+LyKeiYgdEbEqIr7SZfnM7Pm2Zcuvz+YPiIh/jIgVEbE9IhZm894ZEau7+T28O5v+SkTcExG3R8QO4PqImB4Rj2avsS4ivh4RTUXbnxoRD0bEloh4LSL+KiKOi4g9ETGqaL1zImJjRPQtZeySKpdhTVJvuhJ4D3AS8AHgJ8BfAWMofB79KUBEnATcAfx5tux+4IcR0ZQFl/8AbgNGAt/Lnpds27OB+cB/AkYB/wbcFxH9SqhvN/BJYDjwPuCPI+KPsuedlNX7z1lNZwHPZtt9FTgXuCir6S+AzhJ/J1cA92Sv+R2gA/ivwGjgQuAS4DNZDUOAh4CfAscDbwd+nlJaD/wS+GjR814L3JlSaiuxDkkVyrAmqTf9c0rptZTSGmAB8HhK6ZmUUgvwfeDsbL2PAT9OKT2YhY2vAgMohKELgL7AP6WU2lJK9wBPFr3Gp4F/Syk9nlLqSCndAuzLtjuslNIvU0rPp5Q6U0rPUQiMF2eLrwYeSindkb3u5pTSsxHRANwI/FlKaU32mo+klPaV+Dt5NKX0H9lr7k0pPZVSeiyl1J5SWk4hbO6v4f3A+pTSP6aUWlJKO1NKj2fLbgGuAYiIPsDHKQRaSVXOsCapN71WNL23m8eDs+njgRX7F6SUOoFVwLhs2ZqUUiradkXR9CTgc9lhxG0RsQ2YkG13WBFxfkQ8nB0+3A78Zwp7uMie43fdbDaawmHY7paVYlWXGk6KiB9FxPrs0Oh/L6EGgB8AUyNiCoW9l9tTSk8cZU2SKohhTVIlWkshdAEQEUEhqKwB1gHjsnn7TSyaXgX8fUppeNHPwJTSHSW87neB+4AJKaVhwL8C+19nFfC2brbZBLQcYtluYGDROPpQOIRaLHV5/C/Ay8CJKaWhFA4TF9dwQneFZ3sn76awd+1a3Ksm1QzDmqRKdDfwvoi4JDtB/nMUDmU+AjwKtAN/GhF9I+JDwPSibb8F/OdsL1lExKDswoEhJbzuEGBLSqklIqZTOPS533eAd0fERyOiMSJGRcRZ2V6/+cDXIuL4iOgTERdm58j9FuifvX5f4G+AI507NwTYAeyKiJOBPy5a9iNgbET8eUT0i4ghEXF+0fJbgeuByzGsSTXDsCap4qSUXqGwh+ifKey5+gDwgZRSa0qpFfgQhVCyhcL5bf9etO1i4FPA14G
twNJs3VJ8BrgpInYCX6IQGvc/70rgMgrBcQuFiwvOzBZ/HniewrlzW4D/D2hIKW3PnnMuhb2Cu4GDrg7txucphMSdFILnXUU17KRwiPMDwHrgVeD3i5YvonBhw9MppeJDw5KqWBx82ockqZpFxC+A76aU5uZdi6SeYViTpBoREecBD1I4525n3vVI6hkeBpWkGhARt1DowfbnBjWptrhnTZIkqYK5Z02SJKmCGdYkSZIqWGPeBfSU0aNHp8mTJ+ddhiRJ0hE99dRTm1JKXZtkd6tmwtrkyZNZvHhx3mVIkiQdUUSU3AvRw6CSJEkVzLAmSZJUwQxrkiRJFcywJkmSVMEMa5IkSRXMsCZJklTBDGuSJEkVzLAmSZJUwQxrkiRJFcywJkmSVMEMa5IkSRWsZu4NKklST0gp0ZmgM6XsMSRenz543ezfLstT9jz7pw/atpttXl+n+20Sqcu2h3/tg56rqOaStzlou0Ot2804DvH8h3v9I42dbtc9/DbFv6/D1dt17PunB/btw/knjKJSGNYkqRellOjofD0MdBY9PtSylD0ueVlnoiMdvCwlsnX2/xx5WWf2nEdallLh9Y607KDxZc/ZkW1zqGWpy++o+2XFz33w76izs/vfXWdnN7+DbFnXgKH68/ZjBvPQf7s47zIOMKxJqkspJXa0tLNldyubd+1j067WA9Obd7cWfnbtY9ueNjo604Ev8uIv+QOhpGvYOMyyWhQBDRH0iXh9uqEw3achaIj9P4daxuvrNBQeF9YJ+hy0DBobG96wLCLo00DR9odbRlZntt5hljVEYWyFMcZB4wUIosvjNy4v2uyg53njuq8v7zqPQ27zxnW71kZ3z9913UOM5/D1Hm7sh1p2iOeKw9fbdZtDvTaHqe3A8xzi91VccwD9GvtQSQxrkmpCSok9rR1s2d3Kpl372JyFr027i6aL5m/evY+2ju7T05D+jYwa1MSowf04fnh/GhsaaGjg4NBxIBTEoZftDyUlLDsovBQFh8Mti3g92BxuWWSvVxyK9j9+w7KGbExdlxWNtziQ7V9PUvkY1iRVrJa2DjbvbmXLruLQVfh30/7p3a1s3lUIXy1tnd0+z8CmPowa3MTIQf0YO6w/p40byshB/Rg9uImRWSgrhLPC40r7v2pJ9c2wJqnXtHV0ZocaC+Gq8G926HH/dNGesF372rt9nqbGBkYPamLk4CZGDerH248ZfGBP2P7QNWpQvwP/DmgyfEmqXoY1SUetozOxdU83hxh37WNTtkesOJRt39vW7fM0NsRBe7gmThzIyEFNjM4e71+2f0/Y4H6NHnqTVDcMa5IO6OxM7GhpO+hk+zeGrtfD19Y9rd1eORcBIwe+fljxlOOHFvaEZXu7Rg8umh7Uj6EDDF+SdCiGNamGpZTYta/94MONu9+4J2xTNn/r7lbaD3HJ4rABfbPDik28bcxgpk9pev3QYxbK9u8JGz6wiT4Nhi9J6gmGNanK7G3tYNOufQeuaDyo5USX8742726ltb37k+4H92s8EL7GjxjIWROGHzgJ/8CJ99n0iEFN9O3jDU8kKQ+GNSln+9o7ik667xK6uvT82rK7lT2tHd0+T/++DQfC1ZjB/Tj5uKEHwljxyfb794L17+tJ95JUDQxrUg9r7+hky57WQ/b26ronbOehrnjs05CdWF841HjC6EGFk+2z87wOOvQ4uImBTf7nLEm1yE93qQS79rWzbtveg3p6ddt+Yncr2/Z0f8Vjn4ZgxMDCyfWjBjdx+vjhjBrU5WT7oukhXvEoScKwJh3Rqi17eP8/L3xD24kIGDFw/7ldTbzjuCFFhxvf2PNr2IC+NHjSvSTpTTKsSUcwb2Ezu/e1848fOZOxw/ozanA/Rg5qYsTAvjR60r0kqcwMa9JhbN/bxt2LV3H5mcdz5bnj8y5HklSH3C0gHcadT6xkT2sHN86ckncpkqQ6ZViTDqGto5NvP7KcC08YxWnjhuVdjiSpThnWpEO4//l
1rNvewpxZ7lWTJOXHsCZ1I6XEtxYs44Qxg/j9dxyTdzmSpDpmWJO68UTzFn6zZgezZ06x3YYkKVeGNakbcxc2M2JgXz50tleASpLyVdawFhGXRsQrEbE0Ir7QzfJJEfHziHguIn4ZEeO7LB8aEasj4uvlrFMq1rxpNw+99BrXXDCJAU3eP1OSlK+yhbWI6AN8A3gvMBX4eERM7bLaV4FbU0pnADcB/9Bl+d8Bvy5XjVJ35i9spm9DA9deOCnvUiRJKuuetenA0pTSspRSK3AncEWXdaYCv8imHy5eHhHnAscCPytjjdJBtu1p5XtPreKKs47nmCH98y5HkqSyhrVxwKqix6uzecWWAB/Kpj8IDImIURHRAPwj8Pky1ie9wXceX0lLWyezbdchSaoQeV9g8Hng4oh4BrgYWAN0AJ8B7k8prT7cxhHx6YhYHBGLN27cWP5qVdNa2zu55ZHlzDpxNCcfNzTvciRJAsp7b9A1wISix+OzeQeklNaS7VmLiMHAlSmlbRFxITArIj4DDAaaImJXSukLXbb/JvBNgGnTpqWyjUR14UfPrWXDzn38jw+fkXcpkiQdUM6w9iRwYkRMoRDSrgKuLl4hIkYDW1JKncAXgfkAKaVPFK1zPTCta1CTelJKibkLmjnxmMFcfNKYvMuRJOmAsh0GTSm1A58FHgBeAu5OKb0QETdFxOXZau8EXomI31K4mODvy1WPdDiP/m4zL67bwZxZU4iwCa4kqXJESrVx9HDatGlp8eLFeZehKnXjt59kyaptLPrCu+jf195qkqTyioinUkrTSlk37wsMpNwt3bCLX7y8gWsvnGRQkyRVHMOa6t78Rc00NTZwzQU2wZUkVR7Dmuralt2t3PvUaj509jhGD+6XdzmSJL2BYU117fbHVrCvvZPZM22CK0mqTIY11a2Wtg5ufXQ573zHGE48dkje5UiS1C3DmurWfUvWsmlXK3NmnpB3KZIkHZJhTXUppcS8Bc2cfNwQZrx9VN7lSJJ0SIY11aWFSzfxyms7mT3TJriSpMpmWFNd+taCZsYM6cflZx2fdymSJB2WYU1155X1O/n1bzdy3YWT6NdoE1xJUmUzrKnuzF/YTP++DVx9vk1wJUmVz7CmurJx5z6+/+warjxnPCMHNeVdjiRJR2RYU125/bEVtLZ3cqNNcCVJVcKwprrR0tbB7Y+t4JKTj+FtYwbnXY4kSSUxrKlufP+ZNWze3cqcWTbBlSRVD8Oa6kJnZ2LewmZOPX4oF5wwMu9yJEkqmWFNdeFXr25k6YZdzJllE1xJUnUxrKkuzFvQzLFD+/G+022CK0mqLoY11byX1u1g4dJNXHfRZJoa/ZOXJFUXv7lU8+YuaGZA3z58YrpNcCVJ1cewppq2YUcL9y1Zw0enjWfYwL55lyNJ0ptmWFNNu/XRFbR3Jm6YYRNcSVJ1MqypZu1t7eD2x1fwnlOOZfLoQXmXI0nSUTGsqWbd+/Rqtu1pswmuJKmqGdZUkzo7E/MXNnPm+GGcN3lE3uVIknTUDGuqSb94eQPLNu1m9qwTbIIrSapqhjXVpLkLl3H8sP6897Tj8i5FkqS3xLCmmvObNdt5bNkWrp8xmb59/BOXJFU3v8lUc+YtbGZQUx8+dt7EvEuRJOktM6yppqzf3sIPl6zlY+dNZNgAm+BKkqqfYU015duPLKczJW6YMTnvUiRJ6hGGNdWM3fva+e7jK7j0tOOYMHJg3uVIktQjDGuqGfc8tZodLe3MnmkTXElS7TCsqSZ0dCbmL2rm7InDOXeSTXAlSbXDsKaa8NBLr7Fi8x7muFdNklRjDGuqCXMXLGP8iAH84anH5l2KJEk9yrCmqvfsqm08uXwrN8yYQqNNcCVJNcZvNlW9eQubGdKvkY9OG593KZIk9TjDmqramm17uf/5dVw1fQJD+tsEV5JUewxrqmq3PLIcgOtnTMm3EEmSyqSsYS0iLo2IVyJiaUR8oZvlkyLi5xHxXET8MiLGZ/P
PiohHI+KFbNnHylmnqtPOljbueHwll50+lnHDB+RdjiRJZVG2sBYRfYBvAO8FpgIfj4ipXVb7KnBrSukM4CbgH7L5e4BPppROBS4F/ikihperVlWnuxevZue+dmbPdK+aJKl2lXPP2nRgaUppWUqpFbgTuKLLOlOBX2TTD+9fnlL6bUrp1Wx6LbABGFPGWlVl2js6uXlRM+dNHsFZE8zxkqTaVc6wNg5YVfR4dTav2BLgQ9n0B4EhETGqeIWImA40Ab/r+gIR8emIWBwRizdu3Nhjhavy/ezF11i9da+3lpIk1by8LzD4PHBxRDwDXAysATr2L4yIscBtwA0ppc6uG6eUvplSmpZSmjZmjDve6sncBcuYNGog75lqE1xJUm1rLONzrwEmFD0en807IDvE+SGAiBgMXJlS2pY9Hgr8GPjrlNJjZaxTVeapFVt5euU2/vbyU+nTEHmXI0lSWZVzz9qTwIkRMSUimoCrgPuKV4iI0RGxv4YvAvOz+U3A9ylcfHBPGWtUFZq3cBlD+zfy4XNtgitJqn1lC2sppXbgs8ADwEvA3SmlFyLipoi4PFvtncArEfFb4Fjg77P5HwV+D7g+Ip7Nfs4qV62qHqu27OGnv1nP1edPYlC/cu4YliSpMpT12y6ldD9wf5d5Xyqavgd4w56zlNLtwO3lrE3V6eZFy2mI4LqLJuVdiiRJvSLvCwykku1oaeOuJ1fy/jPGMnaYTXAlSfXBsKaqcdcTq9jd2sGcWbbrkCTVD8OaqkJb1gT3ghNGctq4YXmXI0lSrzGsqSr85DfrWbu9hTk2wZUk1RnDmipeSom5C5YxZfQg3nXyMXmXI0lSrzKsqeItXrGV51Zv58aZU2iwCa4kqc4Y1lTx5i5YxvCBffnwOTbBlSTVH8OaKtryTbv52Yuvcc35kxjQ1CfvciRJ6nWGNVW0mxc109gQfPJCm+BKkuqTYU0Va/ueNu5evJrLzxzHMUP7512OJEm5MKypYn33iZXsbetg9swpeZciSVJuDGuqSK3tnXz7kWZmvH0UU48fmnc5kiTlxrCminT/8+t4bcc+by0lSap7hjVVnJQS31qwjLcfM5iLTxyTdzmSJOXKsKaK89iyLbywdgezbYIrSZJhTZVn3sJljBzUxAfPHpd3KZIk5c6wpoqybOMuHnppA9dcMIn+fW2CK0mSYU0VZf6iZpoaG7j2ApvgSpIEhjVVkK27W7nnqdV88KxxjBnSL+9yJEmqCIY1VYzvPL6ClrZOZs+yCa4kSfsZ1lQR9rV3cMujK/i9k8Zw0rFD8i5HkqSKYVhTRfjhknVs3LmPOd5aSpKkgxjWlLuUEnMXLOMdxw5h1omj8y5HkqSKYlhT7hYt3czL63cye9YUImyCK0lSMcOacjd34TJGD+7HFWcdn3cpkiRVHMOacvXqazv55Ssb+eSFk+jXaBNcSZK6MqwpV/MXNdOvsYFPnD8x71IkSapIhjXlZvOufdz79BquPHc8owbbBFeSpO4Y1pSb2x9bSWt7JzfOsF2HJEmHYlhTLlraOrjtseW86+RjePsxg/MuR5KkimVYUy5+8OwaNu1qtQmuJElHYFhTrys0wW3mlLFDufBto/IuR5KkimZYU6/79aubeHXDLj5lE1xJko7IsKZeN3fBMo4Z0o/3n2ETXEmSjsSwpl718vodLHh1E9ddNJmmRv/8JEk6Er8t1avmLWhmQN8+NsGVJKlEhjX1mg07W/jBs2v58LnjGT6wKe9yJEmqCoY19ZrbH11BW2cnN9quQ5KkkhnW1CsKTXBX8O5TjmXK6EF5lyNJUtUoa1iLiEsj4pWIWBoRX+hm+aSI+HlEPBcRv4yI8UXLrouIV7Of68pZp8rv3qdXs3VPm01wJUl6k8oW1iKiD/AN4L3AVODjETG1y2pfBW5NKZ0B3AT8Q7btSODLwPnAdODLETGiXLWqvDo7E/MWNnP6uGFMnzIy73IkSaoq5dyzNh1YmlJallJqBe4Erui
yzlTgF9n0w0XL/xB4MKW0JaW0FXgQuLSMtaqMfvnbDSzbuJs5NsGVJOlNK2dYGwesKnq8OptXbAnwoWz6g8CQiBhV4raqEnMXNDN2WH8uO31s3qVIklR18r7A4PPAxRHxDHAxsAboKHXjiPh0RCyOiMUbN24sV416C15Yu51HfreZ6y+aTN8+ef+5SZJUfcr57bkGmFD0eHw274CU0tqU0odSSmcDf53N21bKttm630wpTUspTRszZkxP168eMG9hMwOb+nDVdJvgSpJ0NMoZ1p4EToyIKRHRBFwF3Fe8QkSMjoj9NXwRmJ9NPwD8QUSMyC4s+INsnqrIazta+OGStXx02gSGDeibdzmSJFWlsoW1lFI78FkKIesl4O6U0gsRcVNEXJ6t9k7glYj4LXAs8PfZtluAv6MQ+J4EbsrmqYrc8shy2jsTN86wXYckSUersZxPnlK6H7i/y7wvFU3fA9xziG3n8/qeNlWZPa3tfOfxlfzh1OOYOGpg3uVIklS1PONbZXHvU6vZvreNT/2ee9UkSXorDGvqcfub4J41YTjnTLSXsSRJb0VJYS0i/j0i3ld0MYB0SA+99BrLN++xCa4kST2g1PD1f4CrgVcj4v+NiHeUsSZVubkLmxk3fACXnnpc3qVIklT1SgprKaWHUkqfAM4BlgMPRcQjEXFDRNiTQQc8t3obTzRv4YYZk2m0Ca4kSW9Zyd+m2W2grgfmAM8A/4tCeHuwLJWpKs1b2Mzgfo187LwJR15ZkiQdUUmtOyLi+8A7gNuAD6SU1mWL7oqIxeUqTtVl7ba9/Pi5dVx/0WSG9HeHqyRJPaHUPmv/O6X0cHcLUkrTerAeVbFbHllOZ0pcP2Ny3qVIklQzSj0MOjUihu9/kN0G6jNlqklVaNe+dr77xEree/pYxo+wCa4kST2l1LD2qewG6wCklLYCnypPSapG31u8ip0t7cyZaRNcSZJ6UqlhrU8UNcyKiD5AU3lKUrXp6EzMX9TMtEkjONsmuJIk9ahSw9pPKVxMcElEXALckc2TePDF9azaspc5s9yrJklSTyv1AoO/BP4T8MfZ4weBuWWpSFVn7oJmJowcwHum2gRXkqSeVlJYSyl1Av+S/UgHPLNyK4tXbOXLH5hKnwZvLSVJUk8rtc/aicA/AFOB/vvnp5ROKFNdqhJzFzYzpH8jH5lmE1xJksqh1HPWbqawV60d+H3gVuD2chWl6rBqyx5+8vw6rp4+kcH9Sj2iLkmS3oxSw9qAlNLPgUgprUgpfQV4X/nKUjW45ZHlNETYBFeSpDIqdXfIvohoAF6NiM8Ca4DB5StLlW5nSxt3PrmK950xlrHDBuRdjiRJNavUPWt/BgwE/hQ4F7gGuK5cRany3fXkKnbta2e2TXAlSSqrI+5Zyxrgfiyl9HlgF3BD2atSRWvv6OTmRcuZPmUkZ4wffuQNJEnSUTvinrWUUgcwsxdqUZX46QvrWbNtr7eWkiSpF5R6ztozEXEf8D1g9/6ZKaV/L0tVqlgpJb61oJnJowby7lOOzbscSZJqXqlhrT+wGXhX0bwEGNbqzNMrt7Jk1Tb+7opTabAJriRJZVfqHQw8T00AfOvXzQwb0Jcrzx2fdymSJNWFUu9gcDOFPWkHSSnd2OMVqWKt2LybB15czx9f/DYGNtkEV5Kk3lDqN+6Piqb7Ax8E1vZ8OapkNy9aTmNDcN1Fk/MuRZKkulHqYdB7ix9HxB3AwrJUpIq0fW8bdy9exQfOPJ5jh/Y/8gaSJKlHlNoUt6sTgWN6shBVtjufWMme1g6b4EqS1MtKPWdtJwefs7Ye+MuyVKSK09bRybcfWc5FbxvFqccPy7scSZLqSqmHQYeUuxBVrvufX8e67S38/QdPy7sUSZLqTkmHQSPigxExrOjx8Ij4o/KVpUpRaIK7jBPGDOKdJ3nkW5Kk3lbqOWtfTilt3/8gpbQN+HJ5SlIleaJ5C79Zs4M5M0+wCa4kSTkoNax1t56NturA3IX
NjBjYlw+dMy7vUiRJqkulhrXFEfG1iHhb9vM14KlyFqb8NW/azUMvvca1F0yif98+eZcjSVJdKjWs/QnQCtwF3Am0AP+lXEWpMsxf2EzfhgauuXBS3qVIklS3Sr0adDfwhTLXogqybU8r33tqFVecdTzHDLEJriRJeSn1atAHI2J40eMREfFA+cpS3r7z+Epa2jqZM+uEvEuRJKmulXoYdHR2BSgAKaWteAeDmtXa3sktjyxn1omjecdxttiTJClPpYa1zoiYuP9BREzm4DsaqIb86Lm1bNi5z71qkiRVgFLbb/w1sDAifgUEMAv4dNmqUm4KTXCbOfGYwfzeiaPzLkeSpLpX0p61lNJPgWnAK8AdwOeAvUfaLiIujYhXImJpRLzhAoWImBgRD0fEMxHxXERcls3vGxG3RMTzEfFSRHzxTY1KR+3R323mpXU7mDNrChE2wZUkKW+l3sh9DvBnwHjgWeAC4FHgXYfZpg/wDeA9wGrgyYi4L6X0YtFqfwPcnVL6l4iYCtwPTAY+AvRLKZ0eEQOBFyPijpTS8jc5Pr1Jcxc2M3pwE1ecZRNcSZIqQannrP0ZcB6wIqX0+8DZwLbDb8J0YGlKaVlKqZVCf7YruqyTgKHZ9DBgbdH8QRHRCAyg0ONtR4m16igt3bCLX7y8gWsvmGwTXEmSKkSpYa0lpdQCEBH9UkovA+84wjbjgFVFj1dn84p9BbgmIlZT2Kv2J9n8e4DdwDpgJfDVlNKWEmvVUZq/qJmmxgauuWDikVeWJEm9otSwtjrrs/YfwIMR8QNgRQ+8/seBb6eUxgOXAbdFRAOFvXIdwPHAFOBzEfGGSxMj4tMRsTgiFm/cuLEHyqlfW3a3cu9Tq7nynHGMGtwv73IkSVKm1DsYfDCb/EpEPEzhkOVPj7DZGmBC0ePx2bxis4FLs9d4NCL6A6OBq4GfppTagA0RsYjCBQ7LutT1TeCbANOmTbOVyFtw+2Mr2NfeyY0zpuRdiiRJKlLqnrUDUkq/Sindl52HdjhPAidGxJSIaAKuAu7rss5K4BKAiDgF6A9szOa/K5s/iMIFDS+/2VpVmpa2Dm59dDnvfMcYTjzWJriSJFWSNx3WSpVSagc+CzwAvEThqs8XIuKmiLg8W+1zwKciYgmFliDXp5QShatIB0fECxRC380ppefKVWu9u2/JWjbtauVTNsGVJKnilNoU96iklO6ncOFA8bwvFU2/CMzoZrtdFNp3qMxSSsxb0MzJxw3horeNyrscSZLURdn2rKk6LFy6iVde28mcWSfYBFeSpApkWKtz31rQzJgh/fjAmWPzLkWSJHXDsFbHXlm/k1//diPXXTiJfo02wZUkqRIZ1urY/IXN9O/bwCdr60TrAAATs0lEQVTOn5R3KZIk6RAMa3Vq4859fP/ZNXz43PGMGNSUdzmSJOkQDGt16vbHVtBqE1xJkiqeYa0OtbR1cNtjK3j3KcdwwpjBeZcjSZIOw7BWh77/zBq27G5l9kyb4EqSVOkMa3WmszMxb2Ezp40bygUnjMy7HEmSdASGtTrzq1c3snTDLubMtAmuJEnVwLBWZ+YtaOa4of257HSb4EqSVA0Ma3XkpXU7WLh0E9ddNJmmRt96SZKqgd/YdWTugmYGNvXh6ukT8y5FkiSVyLBWJzbsaOG+JWv46LQJDBvYN+9yJElSiQxrdeLWR1fQ3pm4YcbkvEuRJElvgmGtDuxt7eD2x1fwB1OPZdKoQXmXI0mS3gTDWh249+nVbNvTxpxZNsGVJKnaGNZqXGdnYv7CZs4cP4xpk0bkXY4kSXqTDGs17hcvb2DZpt3MmWUTXEmSqpFhrcbNXbiMccMH8N7Tjsu7FEmSdBQMazXsN2u289iyLVx/0WQa+/hWS5JUjfwGr2HzFjYzqKkPH5s+Ie9SJEnSUTKs1ah12/fywyVr+dh5Exna3ya4kiRVK8NajbrlkRV0JpvgSpJU7QxrNWj3vna++/gK3nvaWCaMHJh3OZI
k6S0wrNWge55azY6WdmbPmpJ3KZIk6S0yrNWYjs7E/EXNnDNxOOdMtAmuJEnVzrBWYx566TVWbN7jraUkSaoRhrUaM3fBMsaPGMAfTD0271IkSVIPMKzVkGdXbePJ5Vu5ccYUm+BKklQj/EavIfMWNjOkXyMfPc8muJIk1QrDWo1Ys20v9z+/jo+fP5HB/RrzLkeSJPUQw1qNuOWR5QBcd9HkXOuQJEk9y7BWA3a2tHHH4yu57PSxjBs+IO9yJElSDzKs1YC7F69m5752PmUTXEmSao5hrcq1d3Ry86Jmpk8eyRnjh+ddjiRJ6mGGtSr3sxdfY/XWvd5aSpKkGmVYq3JzFyxj0qiBvPsUm+BKklSLDGtV7KkVW3l65TZunDGFPg2RdzmSJKkMDGtVbN7CZQwb0JePTBufdymSJKlMyhrWIuLSiHglIpZGxBe6WT4xIh6OiGci4rmIuKxo2RkR8WhEvBARz0dE/3LWWm1WbdnDT3+znqvPn8jAJpvgSpJUq8r2LR8RfYBvAO8BVgNPRsR9KaUXi1b7G+DulNK/RMRU4H5gckQ0ArcD16aUlkTEKKCtXLVWo5sXLachgusunJx3KZIkqYzKuWdtOrA0pbQspdQK3Alc0WWdBAzNpocBa7PpPwCeSyktAUgpbU4pdZSx1qqyo6WNu55cyQfOPJ7jhrnDUZKkWlbOsDYOWFX0eHU2r9hXgGsiYjWFvWp/ks0/CUgR8UBEPB0Rf9HdC0TEpyNicUQs3rhxY89WX8HuemIVu1s7mD3Tdh2SJNW6vC8w+Djw7ZTSeOAy4LaIaKBweHYm8Ins3w9GxCVdN04pfTOlNC2lNG3MmDG9WXdu2rImuBeeMIrTxg3LuxxJklRm5Qxra4AJRY/HZ/OKzQbuBkgpPQr0B0ZT2Av365TSppTSHgp73c4pY61V4ye/Wc/a7S3MsQmuJEl1oZxh7UngxIiYEhFNwFXAfV3WWQlcAhARp1AIaxuBB4DTI2JgdrHBxcCL1LmUEnMXLOOE0YP4/Xcck3c5kiSpF5QtrKWU2oHPUgheL1G46vOFiLgpIi7PVvsc8KmIWALcAVyfCrYCX6MQ+J4Fnk4p/bhctVaLxSu28tzq7dw4cwoNNsGVJKkulLVBV0rpfgqHMIvnfalo+kVgxiG2vZ1C+w5l5i5YxvCBfbnyHJvgSpJUL/K+wEAlWr5pNz978TWuOX8SA5r65F2OJEnqJYa1KnHzomb6NjTwyYsm5V2KJEnqRYa1KrB9Txt3L17N5WcdzzFDbIIrSVI9MaxVge8+sZK9bTbBlSSpHhnWKlxreyfffqSZmW8fzSljhx55A0mSVFMMaxXux8+v5bUd+5htE1xJkuqSYa2CFZrgNnPiMYN550n1cTstSZJ0MMNaBXts2RZeWLuD2TOnEGETXEmS6pFhrYLNW7iMUYOa+KOzx+VdiiRJyolhrUIt27iLh17awDUXTKJ/X5vgSpJUrwxrFWr+omaaGhu49kKb4EqSVM8MaxVo6+5W7nlqNR86exyjB/fLuxxJkpQjw1oF+s7jK2hp6+RGm+BKklT3DGsVZl97B7c8uoKLTxrDSccOybscSZKUM8NahfnhknVs3LmPOTbBlSRJGNYqSqEJ7jJOPm4IM98+Ou9yJElSBTCsVZBFSzfz8vqdNsGVJEkHGNYqyNyFyxg9uB+Xn3V83qVIkqQKYVirEK++tpNfvrKR6y6cRL9Gm+BKkqQCw1qFmL+omX6NDXziApvgSpKk1xnWKsDmXfu49+k1XHnueEYOasq7HEmSVEEMaxXgtsdW0NreyWyb4EqSpC4MazlraevgtkdXcMnJx/C2MYPzLkeSJFUYw1rOfvDsGjbvbmW2TXAlSVI3DGs5KjTBbWbq2KFceMKovMuRJEkVyLCWo1+/uolXN+xiziyb4EqSpO4Z1nI0d8Eyjh3aj/efYRNcSZLUPcNaTl5ev4MFr27iuosm09To2yBJkrpnSsjJvAXNDOjbh6u
nT8y7FEmSVMEMaznYsLOFHzy7lo9MG8/wgTbBlSRJh2ZYy8Htj66grbOTG2bYrkOSJB2eYa2XtbR1cNtjK3j3KccyZfSgvMuRJEkVzrDWy+59ejVb97TxqVkn5F2KJEmqAoa1XtTZmZi3sJkzxg/jvMkj8i5HkiRVAcNaL/rlbzewbONuZs+0Ca4kSSqNYa0XzV3QzNhh/bns9LF5lyJJkqqEYa2XvLB2O4/8bjPXXzSZvn38tUuSpNKYGnrJvAXNDGrqw1U2wZUkSW+CYa0XrN/ewn1L1vLR8yYwbEDfvMuRJElVpKxhLSIujYhXImJpRHyhm+UTI+LhiHgmIp6LiMu6Wb4rIj5fzjrL7dZHl9OZEjdcZBNcSZL05pQtrEVEH+AbwHuBqcDHI2Jql9X+Brg7pXQ2cBXwf7os/xrwk3LV2Bv2tLbzncdX8oenHsfEUQPzLkeSJFWZcu5Zmw4sTSktSym1AncCV3RZJwFDs+lhwNr9CyLij4Bm4IUy1lh29z61mu1725gzy71qkiTpzStnWBsHrCp6vDqbV+wrwDURsRq4H/gTgIgYDPwl8LdlrK/s9jfBPXvicM6dNDLvciRJUhXK+wKDjwPfTimNBy4DbouIBgoh7n+mlHYdbuOI+HRELI6IxRs3bix/tW/SQy+9xvLNe5gz01tLSZKko9NYxudeA0woejw+m1dsNnApQErp0YjoD4wGzgc+HBH/AxgOdEZES0rp68Ubp5S+CXwTYNq0aakso3gL5i5sZtzwAfzhqcfmXYokSapS5dyz9iRwYkRMiYgmChcQ3NdlnZXAJQARcQrQH9iYUpqVUpqcUpoM/BPw37sGtUr33OptPNG8hRtmTKbRJriSJOkolS1FpJTagc8CDwAvUbjq84WIuCkiLs9W+xzwqYhYAtwBXJ9Sqrg9ZEdj3sJmhvRr5GPnTTjyypIkSYdQzsOgpJTup3DhQPG8LxVNvwjMOMJzfKUsxZXR2m17+fFz67hhxmSG9LcJriRJOnoenyuDWx5ZTgKuu2hy3qVIkqQqZ1jrYbv2tfPdJ1by3tOOY/wIm+BKkqS3xrDWw763eBU7W9qZM8t2HZIk6a0zrPWgjs7E/EXNTJs0grMmDM+7HEmSVAMMaz3owRfXs2rLXm8tJUmSeoxhrQd9a0EzE0cO5D1Tj8u7FEmSVCMMaz3k6ZVbeWrFVm6cMZk+DZF3OZIkqUYY1nrIvIXNDOnfyEem2QRXkiT1HMNaD1i1ZQ8/eX4dV58/kUH9ytpnWJIk1RnDWg+45ZHlNERwvU1wJUlSDzOsvUU7W9q488lVvP+MsYwdNiDvciRJUo0xrL1Fdz25il372pk90ya4kiSp5xnW3oL2jk5uXrSc86eM5PTxw/IuR5Ik1SDD2lvw0xfWs2bbXm8tJUmSysawdpRSSnxrQTNTRg/ikpOPybscSZJUowxrR+nplVtZsmobN86cQoNNcCVJUpkY1o7St37dzPCBfbnynHF5lyJJkmqYYe0orNi8mwdeXM8nzp/IwCab4EqSpPIxrB2Fmxctp7Eh+OSFk/MuRZIk1TjD2pu0fW8bdy9exQfOPJ5jh/bPuxxJklTjDGtv0p1PrGRPawdzbIIrSZJ6gWHtTWjr6OTbjyxnxttHMfX4oXmXI0mS6oBh7U24//l1rNve4l41SZLUawxrJSo0wV3G28YM4uKTxuRdjiRJqhP2nSjR3rYOJo8axKwTR9sEV5Ik9RrDWokGNjXy9avPybsMSZJUZzwMKkmSVMEMa5IkSRXMsCZJklTBDGuSJEkVzLAmSZJUwQxrkiRJFcywJkmSVMEMa5IkSRXMsCZJklTBDGuSJEkVzLAmSZJUwQxrkiRJFcywJkmSVMEipZR3DT0iIjYCK3rhpUYDm3rhdSpRPY8d6nv8jr1+1fP463nsUN/j742xT0opjSllxZoJa70lIhanlKblXUce6nnsUN/jd+z1OXao7/HX89ihvsdfaWP3MKgkSVIFM6x
JkiRVMMPam/fNvAvIUT2PHep7/I69ftXz+Ot57FDf46+osXvOmiRJUgVzz5okSVIFM6wdQkRcGhGvRMTSiPhCN8v7RcRd2fLHI2Jy71dZHiWM/fqI2BgRz2Y/c/KosxwiYn5EbIiI3xxieUTE/85+N89FxDm9XWO5lDD2d0bE9qL3/Uu9XWO5RMSEiHg4Il6MiBci4s+6WaeW3/tSxl+T739E9I+IJyJiSTb2v+1mnZr8vC9x7DX7eb9fRPSJiGci4kfdLKuM9z6l5E+XH6AP8DvgBKAJWAJM7bLOZ4B/zaavAu7Ku+5eHPv1wNfzrrVM4/894BzgN4dYfhnwEyCAC4DH8665F8f+TuBHeddZprGPBc7JpocAv+3m776W3/tSxl+T73/2fg7OpvsCjwMXdFmnVj/vSxl7zX7eF43xvwHf7e7vu1Lee/esdW86sDSltCyl1ArcCVzRZZ0rgFuy6XuASyIierHGcill7DUrpfRrYMthVrkCuDUVPAYMj4ixvVNdeZUw9pqVUlqXUno6m94JvASM67JaLb/3pYy/JmXv567sYd/sp+vJ3DX5eV/i2GtaRIwH3gfMPcQqFfHeG9a6Nw5YVfR4NW/84DqwTkqpHdgOjOqV6sqrlLEDXJkdCronIib0TmkVodTfT626MDtk8pOIODXvYsohO8xxNoW9DMXq4r0/zPihRt//7DDYs8AG4MGU0iHf+xr7vC9l7FDbn/f/BPwF0HmI5RXx3hvWdDR+CExOKZ0BPMjr/9eh2vY0hdujnAn8M/AfOdfT4yJiMHAv8OcppR1519PbjjD+mn3/U0odKaWzgPHA9Ig4Le+aeksJY6/Zz/uIeD+wIaX0VN61HIlhrXtrgOL/exifzet2nYhoBIYBm3uluvI64thTSptTSvuyh3OBc3uptkpQyt9GTUop7dh/yCSldD/QNyJG51xWj4mIvhSCyndSSv/ezSo1/d4fafy1/v4DpJS2AQ8Dl3ZZVKuf9wccauw1/nk/A7g8IpZTOOXnXRFxe5d1KuK9N6x170ngxIiYEhFNFE4qvK/LOvcB12XTHwZ+kbIzEKvcEcfe5Tydyymc31Iv7gM+mV0ZeAGwPaW0Lu+iekNEHLf/XI2ImE7h86MmvrCycc0DXkopfe0Qq9Xse1/K+Gv1/Y+IMRExPJseALwHeLnLajX5eV/K2Gv58z6l9MWU0viU0mQK33W/SCld02W1injvG3v7BatBSqk9Ij4LPEDh6sj5KaUXIuImYHFK6T4KH2y3RcRSCidlX5VfxT2nxLH/aURcDrRTGPv1uRXcwyLiDgpXvY2OiNXAlymcdEtK6V+B+ylcFbgU2APckE+lPa+EsX8Y+OOIaAf2AlfVwhdWZgZwLfB8dv4OwF8BE6H233tKG3+tvv9jgVsiog+FAHp3SulH9fB5T2ljr9nP+0OpxPfeOxhIkiRVMA+DSpIkVTDDmiRJUgUzrEmSJFUww5okSVIFM6xJkiRVMMOaJL1FEfHOiPhR3nVIqk2GNUmSpApmWJNUNyLimoh4IiKejYh/y25ivSsi/mdEvBARP4+IMdm6Z0XEY9kNrL8fESOy+W+PiIeyG5o/HRFvy55+cHaj65cj4jv7u/1L0ltlWJNUFyLiFOBjwIzsxtUdwCeAQRS6lZ8K/IrCnRsAbgX+MruB9fNF878DfCO7oflFwP5bTp0N/DkwFTiBwl0BJOkt83ZTkurFJRRuQv1kttNrALAB6ATuyta5Hfj3iBgGDE8p/SqbfwvwvYgYAoxLKX0fIKXUApA93xMppdXZ42eBycDC8g9LUq0zrEmqFwHcklL64kEzI/7vLusd7T349hVNd+Dnq6Qe4mFQSfXi58CHI+IYgIgYGRGTKHwOfjhb52pgYUppO7A1ImZl868FfpVS2gmsjog/yp6jX0QM7NVRSKo7/p+fpLqQUnoxIv4G+FlENABtwH8BdgPTs2UbKJzXBnAd8K9ZGFsG3JDNvxb4t4i4KXuOj/T
iMCTVoUjpaPf4S1L1i4hdKaXBedchSYfiYVBJkqQK5p41SZKkCuaeNUmSpApmWJMkSapghjVJkqQKZliTJEmqYIY1SZKkCmZYkyRJqmD/P4WoHwmI10A5AAAAAElFTkSuQmCC\n", 1373 | "text/plain": [ 1374 | "
" 1375 | ] 1376 | }, 1377 | "metadata": { 1378 | "needs_background": "light" 1379 | }, 1380 | "output_type": "display_data" 1381 | } 1382 | ], 1383 | "source": [ 1384 | "# 正確性の可視化\n", 1385 | "import matplotlib.pyplot as plt\n", 1386 | "%matplotlib inline\n", 1387 | "\n", 1388 | "plt.figure(figsize=(10, 6))\n", 1389 | "plt.plot(history.history['acc'])\n", 1390 | "#plt.plot(history.history['val_acc'])\n", 1391 | "plt.title('model accuracy')\n", 1392 | "plt.ylabel('accuracy')\n", 1393 | "plt.xlabel('epoch')\n", 1394 | "# plt.legend(['train', 'test'], loc='upper left')\n", 1395 | "plt.show()" 1396 | ] 1397 | }, 1398 | { 1399 | "cell_type": "code", 1400 | "execution_count": 184, 1401 | "metadata": {}, 1402 | "outputs": [ 1403 | { 1404 | "data": { 1405 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAmQAAAGDCAYAAACFuAwbAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3Xd0XOd95vHnN6gEAbCCFSCKiiWqSxDFBtsbN1m2paxVbUlWIeht3iQn3mzs3cSbeEs2mz3J7sbajcViybJsSZbshLalKLZlWyQlUiyqVKVBgAQbwAqQIOr89o8ZkiAIkACJi3fK93MODqa8M/O8HJ7hw3vvvNfcXQAAAAgnFjoAAABAtqOQAQAABEYhAwAACIxCBgAAEBiFDAAAIDAKGQAAQGAUMgBZwcweMbP/MsyxjWb28fN9HgAYLgoZAABAYBQyAACAwChkAFJGclfhH5nZG2Z21MxWmNl0M3vOzNrN7BdmNqnf+JvNbIuZHTKzX5vZpf3uu8bMNicf96SkwgGv9Vkzey352JfM7MpzzLzUzLaa2QEzW2Vms5K3m5n9jZm1mFmbmb1pZpcn77vJzN5OZttpZv/unP7AAGQMChmAVHOrpE9IuljS5yQ9J+k/SCpT4jPr9yTJzC6W9ANJf5C871lJPzGzfDPLl/T3kh6TNFnSD5PPq+Rjr5G0UtK/kDRF0rclrTKzgpEENbPfkfQXku6QNFNSk6Qnknd/UtKHk/OYkByzP3nfCkn/wt1LJF0u6YWRvC6AzEMhA5Bq/tbd97r7TkmrJa1391fdvVPSjyVdkxx3p6SfufvP3b1H0v+UNE7SQknzJeVJ+l/u3uPuT0va0O81vizp2+6+3t373P1RSV3Jx43E3ZJWuvtmd++S9HVJC8ysSlKPpBJJl0gyd3/H3XcnH9cjaa6Zlbr7QXffPMLXBZBhKGQAUs3efpePDXK9OHl5lhJbpCRJ7h6XtEPS7OR9O93d+z22qd/lSklfTe6uPGRmhyRVJB83EgMzHFFiK9hsd39B0rckPSSpxcweNrPS5NBbJd0kqcnMfmNmC0b4ugAyDIUMQLrapUSxkpQ4ZkuJUrVT0m5Js5O3HTen3+Udkv6ru0/s91Pk7j84zwzjldgFulOS3P3/uPt1kuYqsevyj5K3b3D3WyRNU2LX6lMjfF0AGYZCBiBdPSXpM2b2MTPLk/RVJXY7viT
pZUm9kn7PzPLM7POS5vV77DJJ/9LMbkgefD/ezD5jZiUjzPADSQ+Y2dXJ48/+mxK7WBvN7Prk8+dJOiqpU1I8eYzb3WY2IbmrtU1S/Dz+HABkAAoZgLTk7u9JukfS30rap8QXAD7n7t3u3i3p85Lul3RAiePNftTvsRslLVVil+JBSVuTY0ea4ReS/lTSM0pslbtA0l3Ju0uVKH4HldituV/SXyXvu1dSo5m1SfqXShyLBiCL2amHWAAAAGCssYUMAAAgMAoZAABAYBQyAACAwChkAAAAgVHIAAAAAssNHWCkpk6d6lVVVaFjAAAAnNWmTZv2uXvZ2calXSGrqqrSxo0bQ8cAAAA4KzNrOvsodlkCAAAERyEDAAAIjEIGAAAQGIUMAAAgMAoZAABAYBQyAACAwChkAAAAgVHIAAAAAqOQAQAABEYhAwAACIxCBgAAEBiFbIDOnj795PVdcvfQUQAAQJagkA3w3Fu79W9/8Kpe/GBf6CgAACBLUMgG+MwVszStpEDLVzeEjgIAALIEhWyA/NyY7ltYpdUf7NM7u9tCxwEAAFmAQjaIu2+Yo3F5OVq+elvoKAAAIAtQyAYxsShfd9SWa9XrO7W3rTN0HAAAkOEoZEN4cHG1euOuR19qDB0FAABkOArZECqnjNen5s7Q4+u3q6O7N3QcAACQwShkZ7D0w9U6fKxHT29qDh0FAABkMArZGVxXOVnXzJmoFWu2qS/OQrEAACAaFLKzqF9co6b9Hfr523tDRwEAABmKQnYWn7psusonjWOhWAAAEBkK2Vnk5sT04KJqbWw6qFe3HwwdBwAAZCAK2TDccX2FSgpzWSgWAABEgkI2DMUFufriDXP03Fu7teNAR+g4AAAgw1DIhun+hVWKmWnlWraSAQCA0UUhG6aZE8bpc1fN0lMbdujwsZ7QcQAAQAahkI1AfV21jnb36QevbA8dBQAAZBAK2QhcNmuCFl4wRY+sbVR3bzx0HAAAkCEoZCO0tK5Ge9o69eybu0NHAQAAGYJCNkIfubhMF04r1rLVDXLndEoAAOD8UchGKBYz1S+u1pZdbXq5YX/oOAAAIANQyM7B714zW1PG57NQLAAAGBUUsnNQmJejexdU6oV3W7S1pT10HAAAkOYoZOfo3vmVKsiNacUatpIBAIDzQyE7R1OKC/T5a8v1zOad2nekK3QcAACQxihk52HJ4mp198b12MtNoaMAAIA0RiE7DxdOK9bHLpmmx9Y1qbOnL3QcAACQpihk56m+rkYHjnbrR5t3ho4CAADSFIXsPM2vmazLZ5dq+ZoGxeMsFAsAAEaOQnaezExL62rU0HpUv3qvJXQcAACQhihko+CmK2Zq5oRCFooFAADnhEI2CvJyYnpgUZVebtivt3YeDh0HAACkGQrZKLlr3hyNz8/R8tUNoaMAAIA0QyEbJaWFebrz+jn66Ru7tfvwsdBxAABAGqGQjaIHFlUp7q5H1jaGjgIAANIIhWwUVUwu0qevmKnvv7JdR7p6Q8cBAABpgkI2ypbW1ai9s1dPbtgROgoAAEgTFLJRdnXFRF1fNUkr12xTb188dBwAAJAGKGQRqK+r0c5Dx/SPW/aEjgIAANJApIXMzG40s/fMbKuZfW2Q++eY2a/M7FUze8PMbooyz1j5+KXTVTWlSMtWb5M7p1MCAABnFlkhM7McSQ9J+rSkuZK+YGZzBwz7E0lPufs1ku6S9H+jyjOWcmKmJYur9fqOQ9rYdDB0HAAAkOKi3EI2T9JWd29w925JT0i6ZcAYl1SavDxB0q4I84yp266r0MSiPC17kYViAQDAmUVZyGZL6v9Vw+bkbf39maR7zKxZ0rOS/u1gT2RmXzazjWa2sbW1NYqso25cfo7uuaFSP39nrxr3HQ0dBwAApLDQB/V/QdIj7l4u6SZJj5nZaZnc/WF3r3X32rKysjEPea6+tLBSebGYVq7lpOMAAGBoURaynZIq+l0vT97W3xJJT0mSu78sqVDS1Ag
zjalpJYW65epZ+uHGZh3q6A4dBwAApKgoC9kGSReZWbWZ5Stx0P6qAWO2S/qYJJnZpUoUsvTYJzlMS+qqdaynT4+v3x46CgAASFGRFTJ375X0FUnPS3pHiW9TbjGzb5rZzclhX5W01Mxel/QDSfd7hq0TccmMUtVdNFWPvNSort6+0HEAAEAKyo3yyd39WSUO1u9/2zf6XX5b0qIoM6SCpXU1+tLKV7TqtV26vbbi7A8AAABZJfRB/Vmh7qKpumRGiVasYaFYAABwOgrZGDBLLBT77p52rf5gX+g4AAAgxVDIxsjNV89SWUmBlq1moVgAAHAqCtkYKcjN0f0Lq7T6g316d09b6DgAACCFUMjG0N03zNG4vBwtX81CsQAA4CQK2RiaWJSv22vL9Q+v7VRLW2foOAAAIEVQyMbYg4uq1Rt3ffflptBRAABAiqCQjbGqqeP1ybnT9b31Tero7g0dBwAApAAKWQBL62p0qKNHz2xqDh0FAACkAApZANdVTtJVFRO1Ys029cVZKBYAgGxHIQvAzLS0rlqN+zv0i3f2ho4DAAACo5AFcuNlMzR74jgtZ6FYAACyHoUskNycmB5cXK0NjQf12o5DoeMAAICAKGQB3Xl9hUoKczmdEgAAWY5CFlBxQa6+OG+Onntzt3Yc6AgdBwAABEIhC+z+RVWKmek7axtDRwEAAIFQyAKbOWGcPnvlTD25YbsOH+sJHQcAAARAIUsB9XU1Otrdpyc3bA8dBQAABEAhSwGXz56gBTVT9J21jerpi4eOAwAAxhiFLEUs/XC1dh/u1LNv7g4dBQAAjDEKWYr46MXTdEHZeC1b3SB3TqcEAEA2oZCliFjMtGRxjd7a2aZ1DQdCxwEAAGOIQpZCPn/tbE0Zn8/plAAAyDIUshRSmJeje+ZX6pfvtmhry5HQcQAAwBihkKWYexdUKj83phVrtoWOAgAAxgiFLMVMLS7QrdfO1o82N2v/ka7QcQAAwBigkKWgJYtr1NUb12PrmkJHAQAAY4BCloIunFas37lkmh57uUmdPX2h4wAAgIhRyFJUfV219h/t1o9f3Rk6CgAAiBiFLEUtqJmiy2aVasWabYrHWSgWAIBMRiFLUWampXU12tpyRL95vzV0HAAAECEKWQr7zJUzNXNCoZaxUCwAABmNQpbC8nJiun9hlV767X5t2XU4dBwAABARClmKu2veHI3Pz9Hy1SwUCwBApqKQpbgJ4/J0x/UV+snru7T78LHQcQAAQAQoZGngwUXVirvrkZcaQ0cBAAARoJClgYrJRfr05TP1/fXbdaSrN3QcAAAwyihkaaK+rlrtnb16asOO0FEAAMAoo5CliWvmTFJt5SStXLtNvX3x0HEAAMAoopClkfq6GjUfPKbnt+wNHQUAAIwiClka+cTc6aqcUqRlqxvkzumUAADIFBSyNJITMy1ZXK3XdhzS5u0HQ8cBAACjhEKWZm67rlwTxuVp2YssFAsAQKagkKWZovxc3TN/jp5/e4+a9h8NHQcAAIwCClkaum9BlXJjppVr2EoGAEAmoJCloWmlhbr5qtl6amOzDnV0h44DAADOE4UsTdXXVetYT58eX789dBQAAHCeKGRp6tKZpaq7aKoefalR3b0sFAsAQDqjkKWx+roatbR3adXru0JHAQAA54FClsY+fNFUfWh6iZazUCwAAGmNQpbGzExL6qr17p52rdm6L3QcAABwjihkae6Wq2eprKRAy1azBAYAAOmKQpbmCnJzdN+CSr34fqve29MeOg4AADgHFLIMcPcNlSrMi2nFmobQUQAAwDmItJCZ2Y1m9p6ZbTWzrw0x5g4ze9vMtpjZ96PMk6kmjc/X7ddV6O9f3aWW9s7QcQAAwAhFVsjMLEfSQ5I+LWmupC+Y2dwBYy6S9HVJi9z9Mkl/EFWeTLdkcbV64nE99nJT6CgAAGCEotxCNk/SVndvcPduSU9IumXAmKWSHnL3g5Lk7i0R5sloVVPH6xOXTtf31jX
pWHdf6DgAAGAEoixksyXt6He9OXlbfxdLutjM1prZOjO7McI8Ga++rkYHO3r09Obm0FEAAMAIhD6oP1fSRZI+KukLkpaZ2cSBg8zsy2a20cw2tra2jnHE9HF91SRdVT5BK9dsUzzOQrEAAKSLKAvZTkkV/a6XJ2/rr1nSKnfvcfdtkt5XoqCdwt0fdvdad68tKyuLLHC6MzPV19Vo276j+sU7e0PHAQAAwxRlIdsg6SIzqzazfEl3SVo1YMzfK7F1TGY2VYldmKzdcB4+ffkMzZ44TstZKBYAgLQRWSFz915JX5H0vKR3JD3l7lvM7JtmdnNy2POS9pvZ25J+JemP3H1/VJmyQW5OTA8sqtIrjQf0+o5DoeMAAIBhsHQ7KXVtba1v3LgxdIyU1t7Zo4V/8YI+8qEyfeuL14aOAwBA1jKzTe5ee7ZxoQ/qRwRKCvP0hRvm6Lm39qj5YEfoOAAA4CwoZBnq/oVVMkmPrG0MHQUAAJwFhSxDzZo4Tp+5cqae2LBDbZ09oeMAAIAzoJBlsKV1NTrS1asnX9lx9sEAACAYClkGu3z2BM2vmazvrN2mnr546DgAAGAIFLIMt7SuRrsOd+rZN3eHjgIAAIZAIctw/+xD01RTNl7LV29Tui1xAgBAtqCQZbhYzLRkcbXe3HlY67cdCB0HAAAMgkKWBW69tlyTx+dr+WrOSgUAQCqikGWBwrwc3TO/Ur94p0W/bT0SOg4AABiAQpYlvrSgUvm5Ma1Yw0nHAQBINRSyLDG1uECfv2a2ntnUrP1HukLHAQAA/VDIskh9XbW6euP63rrtoaMAAIB+KGRZ5MJpJfpnHyrTY+sa1dnTFzoOAABIopBlmaV1Ndp3pFv/8NrO0FEAAEAShSzLLLhgiubOLGWhWAAAUgiFLMuYmZZ+uFoftBzRr99vDR0HAACIQpaVPnPFLM0oLWShWAAAUgSFLAvl58Z038Iqrd26X1t2HQ4dBwCArEchy1JfnDdHRfk5WrGahWIBAAiNQpalJhTl6Y7aCq16fZf2HO4MHQcAgKxGIctiSxZXK+6uR15qDB0FAICsRiHLYhWTi3Tj5TP0/fVNOtrVGzoOAABZi0KW5erratTW2aunNu4IHQUAgKxFIcty186ZpOsqJ2nl2m3qi7NQLAAAIVDIoKV11dpx4Jj+acue0FEAAMhKFDLoE3NnqHJKkZaxUCwAAEFQyKCcmOnBRdXavP2QNjUdDB0HAICsQyGDJOn22nJNGJfH6ZQAAAiAQgZJUlF+ru6+YY6e37JH2/d3hI4DAEBWoZDhhPsWViknZlq5ltMpAQAwloZVyMzs982s1BJWmNlmM/tk1OEwtqaXFupzV83SUxt36HBHT+g4AABkjeFuIXvQ3dskfVLSJEn3SvrvkaVCMPWLa9TR3afHX2kKHQUAgKwx3EJmyd83SXrM3bf0uw0ZZO6sUi2+cKoefalR3b3x0HEAAMgKwy1km8zsn5QoZM+bWYkk/rXOUPV11drb1qWfvL4rdBQAALLCcAvZEklfk3S9u3dIypP0QGSpENRHLi7TxdOLtWx1g9w5nRIAAFEbbiFbIOk9dz9kZvdI+hNJh6OLhZDMTPWLa/Tunna99Nv9oeMAAJDxhlvI/p+kDjO7StJXJf1W0ncjS4XgbrlmlqYWF3A6JQAAxsBwC1mvJ/Zd3SLpW+7+kKSS6GIhtILcHN23oFK/fq9VH+xtDx0HAICMNtxC1m5mX1diuYufmVlMiePIkMHumV+pwryYlq9moVgAAKI03EJ2p6QuJdYj2yOpXNJfRZYKKWHS+Hzddl25fvzqTrW2d4WOAwBAxhpWIUuWsMclTTCzz0rqdHeOIcsCSxbXqCce12MvN4aOAgBAxhruqZPukPSKpNsl3SFpvZndFmUwpIbqqeP18Uun67F1TTrW3Rc6DgAAGWm4uyz/oxJrkN3n7l+SNE/Sn0YXC6mkfnG1Dnb06JnNzaGjAACQkYZbyGLu3tLv+v4RPBZ
pbl71ZF1ZPkEr12xTPM5CsQAAjLbhlqp/NLPnzex+M7tf0s8kPRtdLKQSM1N9XY0a9h3VL99tOfsDAADAiAz3oP4/kvSwpCuTPw+7+x9HGQyp5abLZ2j2xHEsFAsAQARyhzvQ3Z+R9EyEWZDCcnNiemBRlf7Lz97RG82HdGX5xNCRAADIGGfcQmZm7WbWNshPu5m1jVVIpIY7r69QSUEuC8UCADDKzljI3L3E3UsH+Slx99KxConUUFKYp7vmVehnb+7WzkPHQscBACBj8E1JjMj9i6olSY+sZSsZAACjhUKGEZk9cZw+c8VMPfHKDrV39oSOAwBARqCQYcSW1tWovatXT27YEToKAAAZgUKGEbuifIJuqJ6s76xtVG9fPHQcAADSHoUM52RpXY12HjqmZ9/aEzoKAABpL9JCZmY3mtl7ZrbVzL52hnG3mpmbWW2UeTB6fueSaaqZOl7LVzfIndMpAQBwPiIrZGaWI+khSZ+WNFfSF8xs7iDjSiT9vqT1UWXB6IvFTA8urtYbzYf1yrYDoeMAAJDWotxCNk/SVndvcPduSU9IumWQcf9Z0l9K6owwCyJw67XlmlSUp2UsFAsAwHmJspDNltT/a3jNydtOMLNrJVW4+88izIGIjMvP0b3zK/XLd/eqofVI6DgAAKStYAf1m1lM0l9L+uowxn7ZzDaa2cbW1tbow2HY7l1QpbycmFasYSsZAADnKspCtlNSRb/r5cnbjiuRdLmkX5tZo6T5klYNdmC/uz/s7rXuXltWVhZhZIxUWUmB/vnVs/XM5mYdONodOg4AAGkpykK2QdJFZlZtZvmS7pK06vid7n7Y3ae6e5W7V0laJ+lmd98YYSZEoL6uWp09cT2+ril0FAAA0lJkhczdeyV9RdLzkt6R9JS7bzGzb5rZzVG9LsbeRdNL9NEPlenRl5vU2dMXOg4AAGkn0mPI3P1Zd7/Y3S9w9/+avO0b7r5qkLEfZetY+lpaV6N9R7q06rVdoaMAAJB2WKkfo2LhBVN06cxSLV/DQrEAAIwUhQyjwsy0tK5a7+89ot+8zzdhAQAYCQoZRs1nr5yl6aUFWs5CsQAAjAiFDKMmPzem+xZWac3WfXp7V1voOAAApA0KGUbV3fMqVZSfo+VrGkJHAQAgbVDIMKomFOXpjtoK/eT1XdrbxulJAQAYDgoZRt2Di6rVF3c98lJj6CgAAKQFChlG3ZwpRfrUZTP0+LomHe3qDR0HAICURyFDJOrratTW2aunNzWHjgIAQMqjkCES11VO0rVzJmrFmm3qi7NQLAAAZ0IhQ2SW1tVo+4EO/fztPaGjAACQ0ihkiMwnL5uhOZOLtIyFYgEAOCMKGSKTEzM9uKhKm5oOavP2g6HjAACQsihkiNTttRUqLczV8tUsFAsAwFAoZIjU+IJc3T2/Uv/41h7tONAROg4AACmJQobI3begSjEzrVjDsWQAAAyGQobIzZhQqJuvmqWnNu7Q4Y6e0HEAAEg5FDKMifq6GnV09+n7r2wPHQUAgJRDIcOYmDurVIsunKJHXtqm7t546DgAAKQUChnGTH1djfa2delnb+4KHQUAgJRCIcOY+ejFZbpoWrGWvbhN7pxOCQCA4yhkGDNmpvq6ar29u00v/3Z/6DgAAKQMChnG1C1Xz9bU4nwtY6FYAABOoJBhTBXm5ehLC6r0q/datbWlPXQcAABSAoUMY+6e+ZUqzItpOScdBwBAEoUMAUwen69bry3Xj17dqdb2rtBxAAAIjkKGIJYsrlZ3b1yPrWsKHQUAgOAoZAiipqxYH790mr63rkmdPX2h4wAAEBSFDMHU19XowNFuPbO5OXQUAACCopAhmBuqJ+uK2RO0YvU2xeMsFAsAyF4UMgRzfKHYhn1H9cK7LaHjAAAQDIUMQd10xUzNmlCo5WtYKBYAkL0oZAgqLyemBxZVa13DAb2183DoOAAABEEhQ3B3zqtQcUEup1MCAGQtChmCKy3M013XV+inb+zWrkP
HQscBAGDMUciQEh5YXC1JeuSlxrBBAAAIgEKGlDB74jjddMVM/WD9drV39oSOAwDAmKKQIWUsratWe1evntywI3QUAADGFIUMKePK8omaVz1Z31nbqN6+eOg4AACMGQoZUkr94mrtPHRMz721J3QUAADGDIUMKeXjl05X9dTxWr66Qe6cTgkAkB0oZEgpsZjpwcXVer35sDY0HgwdBwCAMUEhQ8q57dpyTSrKY6FYAEDWoJAh5YzLz9E98yv1i3f2atu+o6HjAAAQOQoZUtK9CyqVF4tp5ZptoaMAABA5ChlS0rSSQv3uNbP0w007dPBod+g4AABEikKGlFVfV6POnrgeX98UOgoAAJGikCFlXTy9RB+5uEyPvtykrt6+0HEAAIgMhQwpbWldjVrbu/QPr+0KHQUAgMhQyJDSFl04RZfMKNGK1dtYKBYAkLEoZEhpZqaldTV6b2+7XvxgX+g4AABEgkKGlPe5q2ZpWkmBlrNQLAAgQ1HIkPLyc2O6b2GVVn+wT+/sbgsdBwCAUUchQ1q4+4Y5GpeXo+WrWSgWAJB5KGRICxOL8nVHbblWvb5Te9s6Q8cBAGBURVrIzOxGM3vPzLaa2dcGuf8PzextM3vDzH5pZpVR5kF6e3BxtXrjru++3Bg6CgAAoyqyQmZmOZIekvRpSXMlfcHM5g4Y9qqkWne/UtLTkv5HVHmQ/iqnjNen5s7Q99ZtV0d3b+g4AACMmii3kM2TtNXdG9y9W9ITkm7pP8Ddf+XuHcmr6ySVR5gHGWDph6t1+FiPnt7UHDoKAACjJspCNlvSjn7Xm5O3DWWJpOcGu8PMvmxmG81sY2tr6yhGRLq5rnKyrpkzUSvWbFNfnIViAQCZISUO6jezeyTVSvqrwe5394fdvdbda8vKysY2HFLO0roaNe3v0M/f3hs6CgAAoyLKQrZTUkW/6+XJ205hZh+X9B8l3ezuXRHmQYb41GUzVDF5HAvFAgAyRpSFbIOki8ys2szyJd0laVX/AWZ2jaRvK1HGWiLMggySEzM9uKhaG5sO6tXtB0PHAQDgvEVWyNy9V9JXJD0v6R1JT7n7FjP7ppndnBz2V5KKJf3QzF4zs1VDPB1wijtqK1RSmMtCsQCAjJAb5ZO7+7OSnh1w2zf6Xf54lK+PzDW+IFdfvGGOlr3YoB0HOlQxuSh0JAAAzllKHNQPnIv7F1YpZqaVa9lKBgBIbxQypK2ZE8bpc1fN0lMbdujwsZ7QcQAAOGcUMqS1+rpqHe3u0xOvbA8dBQCAc0YhQ1q7bNYELbxgih55qVE9ffHQcQAAOCcUMqS9pXU12n24Uz97Y3foKAAAnBMKGdLeRy4u04XTirVsdYPcOZ0SACD9UMiQ9mIxU/3iam3Z1aaXG/aHjgMAwIhRyJARfvea2ZpanM9CsQCAtEQhQ0YozMvRvfOr9MK7Ldra0h46DgAAI0IhQ8a4Z/4cFeTGtGINW8kAAOmFQoaMMaW4QLdeV65nNu/UviNdoeMAADBsFDJklAcXVau7N67HXm4KHQUAgGGjkCGjXDitWB+7ZJoeW9ekzp6+0HEAABgWChkyTn1djQ4c7daPNu8MHQUAgGGhkCHjzK+ZrMtnl2rFmgbF4ywUCwBIfRQyZBwz09K6Gv229ah+/X5L6DgAAJwVhQwZ6aYrZmrmhEIte5ElMAAAqY9ChoyUlxPTA4uq9HLDfr2183DoOAAAnBGFDBnrrnlzVFyQq+WrG0JHAQDgjChkyFilhXm68/oK/fSN3dp9+FjoOAAADIlChoz2wKIquaRH1jaGjgIAwJAoZMho5ZOK9OnLZ+j7r2zXka7e0HEAABgUhQwZr76uRu2dvXpyw47QUQAAGBSFDBnv6oqJur5qklau2abevnjoOAAAnIbq5IBwAAAOy0lEQVRChqxQX1ejnYeO6R+37AkdBQCA01DIkBU+ful0VU0p0rLV2+TO6ZQAAKmFQoaskBMzLVlcrdd3HNKmpoOh4wAAcAoKGbL
GbddVaGJRnpaxUCwAIMVQyJA1xuXn6J4bKvVPb+9V476joeMAAHAChQxZ5UsLK5UXi2nlWk46DgBIHRQyZJVpJYW65epZ+uHGZh3q6A4dBwAASRQyZKH6uhod6+nT4+u3h44CAIAkChmy0IdmlOjDF5fpkZca1dXbFzoOAAAUMmSnpXXVam3v0qrXdoWOAgAAhQzZafGFU3XJjBKtWMNCsQCA8ChkyEpmiYVi393TrtUf7AsdBwCQ5ShkyFo3Xz1LZSUFLBQLAAiOQoasVZCbo/sXVmn1B/v07p620HEAAFmMQoasdvcNczQuL0crVrNQLAAgHAoZstrEonzdXluuf3htl1raO0PHAQBkKQoZst6Di6rVE4/ruy81hY4CAMhSFDJkvaqp4/XJudP1vfVN6ujuDR0HAJCFKGSApKV1NTrU0aNnNjWHjgIAyEK5oQMAqeC6ykm6umKi/uYXH2j9tgOaVlKo6aUFmlZaoOklhZpWWqBppYUqKciVmYWOCwDIMBQyQImFYv/85sv0F8+9oy272vRCW4s6uk8/z2VhXuxkWTte1EoKNa2kQNNLC08UuNJxFDcAwPBZup02pra21jdu3Bg6BrLAka5e7W3rVEtbl1raT/7ee/x6e5da2rp0pOv0487yc2MnS1ryd1m/68eL28SiPIobAGQwM9vk7rVnG8cWMmAIxQW5Ki4r1gVlxWccd7SrN1nOOrU3+bu1vStR5tq79P7edq3Zuk/tnYMUt5yYypIFbdophS35O7k1blJRvmIxihsAZCoKGXCexhfkqrogV9VTx59x3LHuvhNb1k5ueUsUuJb2LjW0HtW6hgM6fKzntMfmxkzTSgpUVlqo6ScK3Om7TqeMp7gBQDqikAFjZFx+jiqnjFfllDMXt86ePrW299s9emLLW+K2pv0d2tB4QAc7Ti9uOTFTWfHJwjZwy9vx31OKC5RDcQOAlEEhA1JMYV6OKiYXqWJy0RnHdfX2JXeNdql1wJa3ve1daj7YoVe3H9T+o92nPTZm0tTiAd8i7ff7+Ja3qcX5ys1hdRwAiBqFDEhTBbk5Kp9UpPJJZy5u3b1x7Tty8pi2lraTX0jY296p3Yc79XrzIe0/2q2B3/Exk6aMP76V7WRZO7nrNLHFraykQHkUNwA4ZxQyIMPl58Y0a+I4zZo47ozjevri2n+k+0RxO63AtXfqrV1t2n+kS/FBitvkovx+X0Y4uQzI8S1v00sLVVZcoPxcihsADEQhAyBJysuJacaEQs2YUHjGcb19ce0/2n3aMiD9d52+u6dN+450q29gc5M0qSjv9GVABhS4spICFeblRDVVAEg5FDIAI5KbE9P00kJNLy2UNGHIcX1x1/6jXQPWcTt1y9vWliNqbe9S7yDFbcK4vNO+Rdr/+vFj3yhuADIBhQxAJHJiljyLwZmLWzzuOtDRfeKYtta2fqUtueVtW8NRtbR3qqfv9OJWWph7Ylfp6eu4ndzyVpTPxx2A1BXpJ5SZ3Sjpf0vKkbTc3f/7gPsLJH1X0nWS9ku6090bo8wEILXEYqapxQWaWlyguSodclw87jp0rOeU5UBOLMib3Aq3ofGAWtq61N0XP+3xJQW5KhtkGZDju05LC/OUEzPFLHEqreOXY2aK9b9sics5MZP1uxwzk5mUYycvcxYGAMMVWSEzsxxJD0n6hKRmSRvMbJW7v91v2BJJB939QjO7S9JfSrozqkwA0lcsZpo8Pl+Tx+frkhlDj3N3HT7Wc/IUV8ktb/13nb66/ZD2tnWqq/f04jaqmfuXuFjick6yrMVixy8PUuoGKXixIQriiedLlsj+jx/s9U/cHju9XObYcAvpqc+Rk3zuxHMkbu//fEPlH/b8B7z+8XnG+uU8+XoD/gz6zTvHTDbwfUi+vnRqgT5+yQa5D4hClFvI5kna6u4NkmRmT0i6RVL/QnaLpD9LXn5a0rfMzDzdTrAJIGWYmSYW5WtiUb4+NKNkyHHurrbO3hNb2to7exR
3Ke6uvrjLB1zuc1fcPTEmPsRl9+T1xHhPPj7uOuVy/MRznfocffHEuLi7+pLjzvoc8cRr9cbj6u5TMm8yQ/J5++f3frfHT8sz9PwTmfy0pVGy1YmiduK6Dbh+/P5TBw51/9meb/DnGN5jT3bJocYPL8spUWzA7+HOY4R/bgNferA/l+Fm0SCv9aUFlfrn15SfNr8QoixksyXt6He9WdINQ41x914zOyxpiqR9/QeZ2ZclfVmS5syZE1VeAFnEzDRhXJ4mjMvTRdOHLm44lfvphXCwy8fL2ymldmB5Hfh4P71QevK5BhbewQvlyYLqyYI7sNQOLJ/xZDY/ZY7J38lbT14/dYAPc/zA+3Xa/Wd+3GDP3f/9OJcsGnj/sB83jHmclmXw+zXUa44gy2nzGGGWVFo/MS2OcnX3hyU9LEm1tbX8/wwAAjm+WzJHp281AXDuoqyGOyVV9Ltenrxt0DFmlqvEV7H2R5gJAAAg5URZyDZIusjMqs0sX9JdklYNGLNK0n3Jy7dJeoHjxwAAQLaJbJdl8piwr0h6XollL1a6+xYz+6akje6+StIKSY+Z2VZJB5QobQAAAFkl0mPI3P1ZSc8OuO0b/S53Sro9ygwAAACpLnW+XgAAAJClKGQAAACBUcgAAAACo5ABAAAERiEDAAAIjEIGAAAQGIUMAAAgMAoZAABAYBQyAACAwCzdTh1pZq2SmiJ+mamS9kX8Gqksm+efzXOXsnv+zD17ZfP8s3nu0tjMv9Ldy842KO0K2Vgws43uXhs6RyjZPP9snruU3fNn7tk5dym755/Nc5dSa/7ssgQAAAiMQgYAABAYhWxwD4cOEFg2zz+b5y5l9/yZe/bK5vln89ylFJo/x5ABAAAExhYyAACAwLK6kJnZjWb2npltNbOvDXJ/gZk9mbx/vZlVjX3K6Axj/vebWauZvZb8qQ+Rc7SZ2UozazGzt4a438zs/yT/XN4ws2vHOmOUhjH/j5rZ4X7v+zfGOmNUzKzCzH5lZm+b2RYz+/1BxmTk+z/MuWfye19oZq+Y2evJ+f/5IGMy8jN/mHPPyM/748wsx8xeNbOfDnJfarzv7p6VP5JyJP1WUo2kfEmvS5o7YMy/lvR3yct3SXoydO4xnv/9kr4VOmsEc/+wpGslvTXE/TdJek6SSZovaX3ozGM8/49K+mnonBHNfaaka5OXSyS9P8jf+4x8/4c590x+701ScfJynqT1kuYPGJORn/nDnHtGft73m98fSvr+YH+/U+V9z+YtZPMkbXX3BnfvlvSEpFsGjLlF0qPJy09L+piZ2RhmjNJw5p+R3P1FSQfOMOQWSd/1hHWSJprZzLFJF71hzD9juftud9+cvNwu6R1JswcMy8j3f5hzz1jJ9/NI8mpe8mfgQdQZ+Zk/zLlnLDMrl/QZScuHGJIS73s2F7LZknb0u96s0z+cToxx915JhyVNGZN00RvO/CXp1uRum6fNrGJsogU33D+bTLYguXvjOTO7LHSYKCR3S1yjxNaC/jL+/T/D3KUMfu+Tu61ek9Qi6efuPuR7n2mf+cOYu5S5n/f/S9K/lxQf4v6UeN+zuZDh7H4iqcrdr5T0c538HwQy22YlTvVxlaS/lfT3gfOMOjMrlvSMpD9w97bQecbSWeae0e+9u/e5+9WSyiXNM7PLQ2caK8OYe0Z+3pvZZyW1uPum0FnOJpsL2U5J/f8HUJ68bdAxZpYraYKk/WOSLnpnnb+773f3ruTV5ZKuG6NsoQ3n70bGcve247s33P1ZSXlmNjVwrFFjZnlKFJLH3f1HgwzJ2Pf/bHPP9Pf+OHc/JOlXkm4ccFcmf+ZLGnruGfx5v0jSzWbWqMShOb9jZt8bMCYl3vdsLmQbJF1kZtVmlq/EgXyrBoxZJem+5OXbJL3gyaP+MsBZ5z/guJmblTjmJBuskvSl5Lft5ks67O67Q4caK2Y24/jxE2Y2T4nPiYz4Ryk5rxWS3nH3vx5iWEa+/8OZe4a/92VmNjF
5eZykT0h6d8CwjPzMH87cM/Xz3t2/7u7l7l6lxL9zL7j7PQOGpcT7njvWL5gq3L3XzL4i6XklvnG40t23mNk3JW1091VKfHg9ZmZblTgI+q5wiUfXMOf/e2Z2s6ReJeZ/f7DAo8jMfqDEt8mmmlmzpP+kxEGucve/k/SsEt+02yqpQ9IDYZJGYxjzv03SvzKzXknHJN2VCf8oJS2SdK+kN5PH00jSf5A0R8r49384c8/k936mpEfNLEeJovmUu/80Sz7zhzP3jPy8H0oqvu+s1A8AABBYNu+yBAAASAkUMgAAgMAoZAAAAIFRyAAAAAKjkAEAAARGIQOAYTCzj5rZT0PnAJCZKGQAAACBUcgAZBQzu8fMXjGz18zs28mTKh8xs78xsy1m9kszK0uOvdrM1iVPqPxjM5uUvP1CM/tF8iTbm83sguTTFydPvPyumT1+fFV7ADhfFDIAGcPMLpV0p6RFyRMp90m6W9J4JVblvkzSb5Q4O4EkfVfSHydPqPxmv9sfl/RQ8iTbCyUdP3XSNZL+QNJcSTVKrH4PAOcta0+dBCAjfUyJkyJvSG68GiepRVJc0pPJMd+T9CMzmyBporv/Jnn7o5J+aGYlkma7+48lyd07JSn5fK+4e3Py+muSqiStiX5aADIdhQxAJjFJj7r710+50exPB4w713PGdfW73Cc+QwGMEnZZAsgkv5R0m5lNkyQzm2xmlUp81t2WHPNFSWvc/bCkg2ZWl7z9Xkm/cfd2Sc1m9rvJ5ygws6IxnQWArMP/7gBkDHd/28z+RNI/mVlMUo+kfyPpqKR5yftalDjOTJLuk/R3ycLVIOmB5O33Svq2mX0z+Ry3j+E0AGQhcz/XLfcAkB7M7Ii7F4fOAQBDYZclAABAYGwhAwAACIwtZAAAAIFRyAAAAAKjkAEAAARGIQMAAAiMQgYAABAYhQwAACCw/w9AHfnljf7CawAAAABJRU5ErkJggg==\n", 1406 | "text/plain": [ 1407 | "
" 1408 | ] 1409 | }, 1410 | "metadata": { 1411 | "needs_background": "light" 1412 | }, 1413 | "output_type": "display_data" 1414 | } 1415 | ], 1416 | "source": [ 1417 | "# 損失関数の可視化\n", 1418 | "plt.figure(figsize=(10, 6))\n", 1419 | "plt.plot(history.history['loss'])\n", 1420 | "# plt.plot(history.history['val_loss'])\n", 1421 | "plt.title('model loss')\n", 1422 | "plt.ylabel('loss')\n", 1423 | "plt.xlabel('epoch')\n", 1424 | "# plt.legend(['train', 'test'], loc='upper left')\n", 1425 | "plt.show()" 1426 | ] 1427 | }, 1428 | { 1429 | "cell_type": "code", 1430 | "execution_count": null, 1431 | "metadata": {}, 1432 | "outputs": [], 1433 | "source": [ 1434 | "# モデルの読み込み\n", 1435 | "with open('seq2seq.json',\"w\").write(model.to_json())\n", 1436 | "\n", 1437 | "# 重みの読み込み\n", 1438 | "model.load_weights('seq2seq.h5')\n", 1439 | "print(\"Saved Model!\")" 1440 | ] 1441 | }, 1442 | { 1443 | "cell_type": "code", 1444 | "execution_count": 187, 1445 | "metadata": { 1446 | "colab": {}, 1447 | "colab_type": "code", 1448 | "id": "4nigsv81RMXA" 1449 | }, 1450 | "outputs": [ 1451 | { 1452 | "name": "stdout", 1453 | "output_type": "stream", 1454 | "text": [ 1455 | "Saved Model!\n" 1456 | ] 1457 | } 1458 | ], 1459 | "source": [ 1460 | "# 重みを保存する\n", 1461 | "model_json = model.to_json()\n", 1462 | "with open(\"model.json\", \"w\") as json_file:\n", 1463 | " json_file.write(model_json)\n", 1464 | "\n", 1465 | "model.save_weights(\"chatbot_model.h5\")\n", 1466 | "print(\"Saved Model!\")" 1467 | ] 1468 | }, 1469 | { 1470 | "cell_type": "code", 1471 | "execution_count": 191, 1472 | "metadata": {}, 1473 | "outputs": [], 1474 | "source": [ 1475 | "json_string = model.to_json()\n", 1476 | "open('seq2seq.json', 'w').write(json_string)\n", 1477 | "model.save_weights('seq2seq_weights.h5')" 1478 | ] 1479 | }, 1480 | { 1481 | "cell_type": "code", 1482 | "execution_count": 192, 1483 | "metadata": {}, 1484 | "outputs": [ 1485 | { 1486 | "name": "stdout", 1487 | "output_type": "stream", 1488 | 
"text": [ 1489 | "1_0306_chatobot3.ipynb glove.6B.50d.txt\r\n", 1490 | "1_0306_chatobot4.ipynb model.json\r\n", 1491 | "apple_orange_model.json movie_lines.txt\r\n", 1492 | "apple_orange_weights.h5 padded_decoder_sequences.txt\r\n", 1493 | "chatbot_model.h5 padded_encoder_sequences.txt\r\n", 1494 | "decoder_inputs.txt seq2seq.json\r\n", 1495 | "encoder_inputs.txt seq2seq_weights.h5\r\n" 1496 | ] 1497 | } 1498 | ], 1499 | "source": [ 1500 | "%ls" 1501 | ] 1502 | }, 1503 | { 1504 | "cell_type": "code", 1505 | "execution_count": 190, 1506 | "metadata": {}, 1507 | "outputs": [ 1508 | { 1509 | "data": { 1510 | "text/plain": [ 1511 | "'/Users/akr712/Desktop/CHATBOT'" 1512 | ] 1513 | }, 1514 | "execution_count": 190, 1515 | "metadata": {}, 1516 | "output_type": "execute_result" 1517 | } 1518 | ], 1519 | "source": [ 1520 | "pwd" 1521 | ] 1522 | } 1523 | ], 1524 | "metadata": { 1525 | "colab": { 1526 | "name": "1. 0306_chatobot3.ipynb", 1527 | "provenance": [], 1528 | "version": "0.3.2" 1529 | }, 1530 | "kernelspec": { 1531 | "display_name": "Python 3", 1532 | "language": "python", 1533 | "name": "python3" 1534 | }, 1535 | "language_info": { 1536 | "codemirror_mode": { 1537 | "name": "ipython", 1538 | "version": 3 1539 | }, 1540 | "file_extension": ".py", 1541 | "mimetype": "text/x-python", 1542 | "name": "python", 1543 | "nbconvert_exporter": "python", 1544 | "pygments_lexer": "ipython3", 1545 | "version": "3.5.1" 1546 | } 1547 | }, 1548 | "nbformat": 4, 1549 | "nbformat_minor": 1 1550 | } 1551 | -------------------------------------------------------------------------------- /chatbot_keras.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | from keras.layers import Input, Embedding, LSTM, Dense, RepeatVector, Bidirectional, Dropout, merge 4 | from keras.optimizers import Adam, SGD 5 | from keras.models import Model 6 | from keras.models import Sequential 7 | from keras.layers import Activation, Dense 8 | from keras.callbacks 
import EarlyStopping 9 | from keras.preprocessing import sequence 10 | 11 | import keras.backend as K 12 | import numpy as np 13 | np.random.seed(1234) # for reproducibility 14 | import pickle as cPickle 15 | import theano.tensor as T 16 | import os 17 | import pandas as pd 18 | import sys 19 | import matplotlib.pyplot as plt 20 | 21 | 22 | """ 23 | Build Your Chatbot!!! 24 | Sentence-level context vectors 25 | Word-level semantic vectors 26 | """ 27 | 28 | # params 29 | WORD2VEC_DIMS = 100 30 | DOC2VEC_DIMS = 300 31 | 32 | DICTIONARY_SIZE = 10000 33 | MAX_INPUT_LENGTH = 30 34 | MAX_OUTPUT_LENGTH = 30 35 | 36 | NUM_HIDDEN_UNITS = 256 37 | BATCH_SIZE = 64 38 | NUM_EPOCHS = 100 39 | 40 | NUM_SUBSETS = 1 41 | 42 | PATIENCE = 0 43 | DROPOUT = .25 44 | N_TEST = 100 45 | 46 | CALL_BACKS = EarlyStopping(monitor='val_loss', patience=PATIENCE) 47 | 48 | # files 49 | vocabulary_file = 'vocabulary_movie' 50 | questions_file = 'Padded_context' 51 | answers_file = 'Padded_answers' 52 | weights_file = 'my_model_weights20.h5' 53 | GLOVE_DIR = './glove.6B/' 54 | 55 | # padding and buckets 56 | 57 | BOS = "" 58 | EOS = "" 59 | PAD = "" 60 | 61 | BUCKETS = [(5,10),(10,15),(15,25),(20,30)] 62 | 63 | def print_result(input): 64 | 65 | ans_partial = np.zeros((1, MAX_INPUT_LENGTH)) 66 | ans_partial[0, -1] = 2 # the index of the symbol BOS (begin of sentence) 67 | for k in range(MAX_INPUT_LENGTH - 1): 68 | ye = model.predict([input, ans_partial]) 69 | mp = np.argmax(ye) 70 | ans_partial[0, 0:-1] = ans_partial[0, 1:] 71 | ans_partial[0, -1] = mp 72 | text = '' 73 | for k in ans_partial[0]: 74 | k = k.astype(int) 75 | if k < (DICTIONARY_SIZE - 2): 76 | w = vocabulary[k] 77 | text = text + w[0] + ' ' 78 | return text 79 | 80 | 81 | # ====================================================================== 82 | # Reading a pre-trained word embedding and adapting it to our vocabulary: 83 | # ====================================================================== 84 | 85 | # Build the word-to-vector lookup table 86 | word2vec_index = {} 87 | f = 
open(os.path.join(GLOVE_DIR, "glove.6B.100d.txt")) 88 | for line in f: 89 | words = line.split() 90 | word = words[0] 91 | index = np.asarray(words[1:], dtype="float32") 92 | word2vec_index[word] = index 93 | f.close() 94 | 95 | print("The number of word vectors is:", len(word2vec_index)) 96 | 97 | word_embedding_matrix = np.zeros((DICTIONARY_SIZE, WORD2VEC_DIMS)) 98 | # Load vocabulary 99 | vocabulary = cPickle.load(open(vocabulary_file, 'rb')) 100 | 101 | i = 0 102 | for word in vocabulary: 103 | word2vec = word2vec_index.get(word[0]) 104 | if word2vec is not None: 105 | word_embedding_matrix[i] = word2vec 106 | i += 1 107 | 108 | 109 | # ====================================================================== 110 | # Keras model of the chatbot: 111 | # ====================================================================== 112 | 113 | ADAM = Adam(lr=0.00005) 114 | 115 | """ 116 | Input Layer #Document*2 117 | """ 118 | input_context = Input(shape=(MAX_INPUT_LENGTH,), dtype="int32", name="input_context") 119 | input_answer = Input(shape=(MAX_INPUT_LENGTH,), dtype="int32", name="input_answer") 120 | 121 | """ 122 | Embedding Layer: turns positive integer indices into dense vectors of a fixed size. 123 | - input_dim: positive integer. Vocabulary size, i.e. maximum input index + 1. 124 | - output_dim: integer >= 0. Dimensionality of the dense embeddings. 125 | - input_length: length of the input sequences (a constant). Required when a Flatten and then a Dense layer follow this layer (without it, the shape of the Dense outputs cannot be computed). 
126 | """ 127 | # weightが存在したら引用する 128 | if os.path.isfile(weights_file): 129 | Shared_Embedding = Embedding(input_dim=DICTIONARY_SIZE, output_dim=WORD2VEC_DIMS, input_length=MAX_INPUT_LENGTH,) 130 | else: 131 | Shared_Embedding = Embedding(input_dim=DICTIONARY_SIZE, output_dim=WORD2VEC_DIMS, input_length=MAX_INPUT_LENGTH, 132 | weights=[word_embedding_matrix]) 133 | 134 | """ 135 | Shared Embedding Layer #Doc2Vec(Document*2) 136 | """ 137 | shared_embedding_context = Shared_Embedding(input_context) 138 | shared_embedding_answer = Shared_Embedding(input_answer) 139 | 140 | """ 141 | LSTM Layer # 142 | """ 143 | Encoder_LSTM = LSTM(units=DOC2VEC_DIMS, init= "lecun_uniform") 144 | Decoder_LSTM = LSTM(units=DOC2VEC_DIMS, init= "lecun_uniform") 145 | embedding_context = Encoder_LSTM(shared_embedding_context) 146 | embedding_answer = Decoder_LSTM(shared_embedding_answer) 147 | 148 | """ 149 | Merge Layer # 150 | """ 151 | merge_layer = merge([embedding_context, embedding_answer], mode='concat', concat_axis=1) 152 | 153 | """ 154 | Dense Layer # 155 | """ 156 | dence_layer = Dense(DICTIONARY_SIZE/2, activation="relu")(merge_layer) 157 | 158 | """ 159 | Output Layer # 160 | """ 161 | outputs = Dense(DICTIONARY_SIZE, activation="softmax")(dence_layer) 162 | 163 | """ 164 | Modeling 165 | """ 166 | model = Model(input=[input_context, input_answer], output=[outputs]) 167 | model.compile(loss="categorical_crossentropy", optimizer=ADAM) 168 | 169 | if os.path.isfile(weights_file): 170 | model.load_weights(weights_file) 171 | 172 | 173 | # ====================================================================== 174 | # Loading the data: 175 | # ====================================================================== 176 | 177 | Q = cPickle.load(open(questions_file, 'rb')) 178 | A = cPickle.load(open(answers_file, 'rb')) 179 | N_SAMPLES, N_WORDS = A.shape 180 | 181 | Q_test = Q[0:N_TEST,:] 182 | A_test = A[0:N_TEST,:] 183 | Q = Q[N_TEST + 1:,:] 184 | A = A[N_TEST + 1:,:] 185 | 186 | 
print("Number of Samples = %d"%(N_SAMPLES - N_TEST)) 187 | Step = np.around((N_SAMPLES - N_TEST) / NUM_SUBSETS) 188 | SAMPLE_ROUNDS = Step * NUM_SUBSETS 189 | 190 | 191 | # ====================================================================== 192 | # Bot training: 193 | # ====================================================================== 194 | 195 | x = range(0, NUM_EPOCHS) 196 | VALID_LOSS = np.zeros(NUM_EPOCHS) 197 | TRAIN_LOSS = np.zeros(NUM_EPOCHS) 198 | 199 | for n_epoch in range(NUM_EPOCHS): 200 | # Loop over training batches due to memory constraints 201 | for n_batch in range(0, SAMPLE_ROUNDS, Step): 202 | 203 | Q2 = Q[n_batch:n_batch+Step] 204 | s = Q2.shape 205 | counter = 0 206 | for id, sentence in enumerate(A[n_batch:n_batch+Step]): 207 | l = np.where(sentence==3) # the position od the symbol EOS 208 | limit = l[0][0] 209 | counter += limit + 1 210 | 211 | question = np.zeros((counter, MAX_INPUT_LENGTH)) 212 | answer = np.zeros((counter, MAX_INPUT_LENGTH)) 213 | target = np.zeros((counter, DICTIONARY_SIZE)) 214 | 215 | # Loop over the training examples: 216 | counter = 0 217 | for i, sentence in enumerate(A[n_batch:n_batch+Step]): 218 | ans_partial = np.zeros((1, MAX_INPUT_LENGTH)) 219 | 220 | # Loop over the positions of the current target output (the current output sequence) 221 | l = np.where(sent==3) # the position of the symbol EOS 222 | limit = l[0][0] 223 | 224 | for k in range(1, limit+1): 225 | # Mapping the target output (the next output word) for one-hot codding: 226 | target = np.zeros((1, DICTIONARY_SIZE)) 227 | target[0, sentence[k]] = 1 228 | 229 | # preparing the partial answer to input: 230 | ans_partial[0,-k:] = sentence[0:k] 231 | 232 | # training the model for one epoch using teacher forcing: 233 | 234 | question[counter, :] = Q2[i:i+1] 235 | answer[counter, :] = ans_partial 236 | target[counter, :] = target 237 | counter += 1 238 | 239 | print('Training epoch: %d, Training examples: %d - %d'%(n_epoch, n_batch, n_batch + Step)) 
240 | model.fit([question, answer], target, batch_size=BATCH_SIZE, epochs=1) 241 | 242 | test_input = Q_test[41:42] 243 | print(print_result(test_input)) 244 | train_input = Q[41:42] 245 | print(print_result(train_input)) 246 | 247 | model.save_weights(weights_file, overwrite=True) 248 | -------------------------------------------------------------------------------- /conversation_spliter.py: -------------------------------------------------------------------------------- 1 | 2 | import numpy as np 3 | 4 | text = open('dialog_simple', 'r') 5 | q = open('context', 'w') 6 | a = open('answers', 'w') 7 | pre_pre_previous_raw='' 8 | pre_previous_raw='' 9 | previous_raw='' 10 | person = ' ' 11 | previous_person=' ' 12 | 13 | l1 = ['won’t','won\'t','wouldn’t','wouldn\'t','’m', '’re', '’ve', '’ll', '’s','’d', 'n’t', '\'m', '\'re', '\'ve', '\'ll', '\'s', '\'d', 'can\'t', 'n\'t', 'B: ', 'A: ', ',', ';', '.', '?', '!', ':', '. ?', ', .', '. ,', 'EOS', 'BOS', 'eos', 'bos'] 14 | l2 = ['will not','will not','would not','would not',' am', ' are', ' have', ' will', ' is', ' had', ' not', ' am', ' are', ' have', ' will', ' is', ' had', 'can not', ' not', '', '', ' ,', ' ;', ' .', ' ?', ' !', ' :', '? ', '.', ',', '', '', '', ''] 15 | l3 = ['-', '_', ' *', ' /', '* ', '/ ', '\"', ' \\"', '\\ ', '--', '...', '. . 
.'] 16 | 17 | for i, raw_word in enumerate(text): 18 | pos = raw_word.find('+++$+++') 19 | 20 | if pos > -1: 21 | person = raw_word[pos+7:pos+10] 22 | raw_word = raw_word[pos+8:] 23 | while pos > -1: 24 | pos = raw_word.find('+++$+++') 25 | raw_word = raw_word[pos+2:] 26 | 27 | raw_word = raw_word.replace('$+++','') 28 | previous_person = person 29 | 30 | for j, term in enumerate(l1): 31 | raw_word = raw_word.replace(term,l2[j]) 32 | 33 | for term in l3: 34 | raw_word = raw_word.replace(term,' ') 35 | 36 | raw_word = raw_word.lower() 37 | 38 | if i>0: 39 | q.write(pre_previous_raw[:-1] + ' ' + previous_raw[:-1]+ '\n') # python will convert \n to os.linesep 40 | a.write(raw_word[:-1]+ '\n') 41 | 42 | pre_pre_previous_raw = pre_previous_raw 43 | pre_previous_raw = previous_raw 44 | previous_raw = raw_word 45 | 46 | q.close() 47 | a.close() 48 | -------------------------------------------------------------------------------- /prepare_data.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | import numpy as np 4 | np.random.seed(1234) # for reproducibility 5 | import pandas as pd 6 | import os 7 | import csv 8 | import nltk 9 | import itertools 10 | import operator 11 | import pickle 12 | import numpy as np 13 | from keras.preprocessing import sequence 14 | from scipy import sparse, io 15 | from numpy.random import permutation 16 | import re 17 | 18 | questions_file = 'context' 19 | answers_file = 'answers' 20 | vocabulary_file = 'vocabulary_movie' 21 | padded_questions_file = 'Padded_context' 22 | padded_answers_file = 'Padded_answers' 23 | unknown_token = 'something' 24 | 25 | vocabulary_size = 7000 26 | max_features = vocabulary_size 27 | maxlen_input = 50 28 | maxlen_output = 50 # cut texts after this number of words 29 | 30 | print ("Reading the context data...") 31 | q = open(questions_file, 'r') 32 | questions = q.read() 33 | print ("Reading the answer data...") 34 | a = open(answers_file, 'r') 35 | answers = a.read() 
36 | corpus = answers + questions 37 | print ("Tokenizing the answers...") 38 | paragraphs_a = [p for p in answers.split('\n')] 39 | paragraphs_b = [p for p in corpus.split('\n')] 40 | paragraphs_a = ['BOS '+p+' EOS' for p in paragraphs_a] 41 | paragraphs_b = ['BOS '+p+' EOS' for p in paragraphs_b] 42 | paragraphs_b = ' '.join(paragraphs_b) 43 | tokenized_text = paragraphs_b.split() 44 | paragraphs_q = [p for p in questions.split('\n') ] 45 | tokenized_answers = [p.split() for p in paragraphs_a] 46 | tokenized_questions = [p.split() for p in paragraphs_q] 47 | 48 | ### Counting the word frequencies: 49 | ##word_freq = nltk.FreqDist(itertools.chain(tokenized_text)) 50 | ##print ("Found %d unique words tokens." % len(word_freq.items())) 51 | ## 52 | ### Getting the most common words and build index_to_word and word_to_index vectors: 53 | ##vocab = word_freq.most_common(vocabulary_size-1) 54 | ## 55 | ### Saving vocabulary: 56 | ##with open(vocabulary_file, 'wb') as v: 57 | ## pickle.dump(vocab, v) 58 | 59 | vocab = pickle.load(open(vocabulary_file, 'rb')) 60 | 61 | 62 | index_to_word = [x[0] for x in vocab] 63 | index_to_word.append(unknown_token) 64 | word_to_index = dict([(w,i) for i,w in enumerate(index_to_word)]) 65 | 66 | print ("Using vocabulary of size %d." % vocabulary_size) 67 | print ("The least frequent word in our vocabulary is '%s' and appeared %d times." 
% (vocab[-1][0], vocab[-1][1])) 68 | 69 | # Replacing all words not in our vocabulary with the unknown token: 70 | for i, sent in enumerate(tokenized_answers): 71 | tokenized_answers[i] = [w if w in word_to_index else unknown_token for w in sent] 72 | 73 | for i, sent in enumerate(tokenized_questions): 74 | tokenized_questions[i] = [w if w in word_to_index else unknown_token for w in sent] 75 | 76 | # Creating the training data: 77 | X = np.asarray([[word_to_index[w] for w in sent] for sent in tokenized_questions]) 78 | Y = np.asarray([[word_to_index[w] for w in sent] for sent in tokenized_answers]) 79 | 80 | Q = sequence.pad_sequences(X, maxlen=maxlen_input) 81 | A = sequence.pad_sequences(Y, maxlen=maxlen_output, padding='post') 82 | 83 | with open(padded_questions_file, 'wb') as q: 84 | pickle.dump(Q, q) 85 | 86 | with open(padded_answers_file, 'wb') as a: 87 | pickle.dump(A, a) 88 | -------------------------------------------------------------------------------- /seq2seq_model.py: -------------------------------------------------------------------------------- 1 | 2 | """ 3 | Glove pre-trained word embedding 4 | """ 5 | 6 | from keras.layers import Input, Embedding, LSTM, Dense, RepeatVector, Bidirectional, Dropout, merge 7 | from keras.optimizers import Adam, SGD 8 | from keras.models import Model 9 | from keras.models import Sequential 10 | from keras.layers import Activation, Dense 11 | from keras.callbacks import EarlyStopping 12 | from keras.preprocessing import sequence 13 | 14 | import keras.backend as K 15 | import numpy as np 16 | np.random.seed(1234) # for reproducibility 17 | import pickle as cPickle 18 | import theano.tensor as T 19 | import os 20 | import pandas as pd 21 | import sys 22 | import matplotlib.pyplot as plt 23 | 24 | 25 | """ 26 | Build Your Chatbot!!! 
27 | Sentence-level context vectors 28 | Word-level semantic vectors 29 | """ 30 | 31 | # params 32 | WORD2VEC_DIMS = 100 33 | DOC2VEC_DIMS = 300 34 | 35 | DICTIONARY_SIZE = 10000 36 | MAX_INPUT_LENGTH = 30 37 | MAX_OUTPUT_LENGTH = 30 38 | 39 | NUM_HIDDEN_UNITS = 256 40 | BATCH_SIZE = 64 41 | NUM_EPOCHS = 100 42 | 43 | NUM_SUBSETS = 1 44 | 45 | PATIENCE = 0 46 | DROPOUT = .25 47 | N_TEST = 100 48 | 49 | CALL_BACKS = EarlyStopping(monitor='val_loss', patience=PATIENCE) 50 | 51 | # files 52 | vocabulary_file = 'vocabulary_movie' 53 | questions_file = 'Padded_context' 54 | answers_file = 'Padded_answers' 55 | weights_file = 'my_model_weights20.h5' 56 | GLOVE_DIR = './glove.6B/' 57 | 58 | # padding and buckets 59 | 60 | BOS = "" 61 | EOS = "" 62 | PAD = "" 63 | 64 | BUCKETS = [(5,10),(10,15),(15,25),(20,30)] 65 | 66 | def print_result(input): 67 | 68 | ans_partial = np.zeros((1, MAX_INPUT_LENGTH)) 69 | ans_partial[0, -1] = 2 # the index of the symbol BOS (begin of sentence) 70 | for k in range(MAX_INPUT_LENGTH - 1): 71 | ye = model.predict([input, ans_partial]) 72 | mp = np.argmax(ye) 73 | ans_partial[0, 0:-1] = ans_partial[0, 1:] 74 | ans_partial[0, -1] = mp 75 | text = '' 76 | for k in ans_partial[0]: 77 | k = k.astype(int) 78 | if k < (DICTIONARY_SIZE - 2): 79 | w = vocabulary[k] 80 | text = text + w[0] + ' ' 81 | return text 82 | 83 | 84 | # ====================================================================== 85 | # Reading a pre-trained word embedding and adapting it to our vocabulary: 86 | # ====================================================================== 87 | 88 | # Build the word-to-vector lookup table 89 | word2vec_index = {} 90 | f = open(os.path.join(GLOVE_DIR, "glove.6B.100d.txt")) 91 | for line in f: 92 | words = line.split() 93 | word = words[0] 94 | index = np.asarray(words[1:], dtype="float32") 95 | word2vec_index[word] = index 96 | f.close() 97 | 98 | print("The number of word vectors is:", len(word2vec_index)) 99 | 100 | word_embedding_matrix = np.zeros((DICTIONARY_SIZE, WORD2VEC_DIMS)) 101 | # Load vocabulary 102 
| vocabulary = cPickle.load(open(vocabulary_file, 'rb')) 103 | 104 | i = 0 105 | for word in vocabulary: 106 | word2vec = word2vec_index.get(word[0]) 107 | if word2vec is not None: 108 | word_embedding_matrix[i] = word2vec 109 | i += 1 110 | 111 | 112 | # ====================================================================== 113 | # Keras model of the chatbot: 114 | # ====================================================================== 115 | 116 | ADAM = Adam(lr=0.00005) 117 | 118 | """ 119 | Input Layer #Document*2 120 | """ 121 | input_context = Input(shape=(MAX_INPUT_LENGTH,), dtype="int32", name="input_context") 122 | input_answer = Input(shape=(MAX_INPUT_LENGTH,), dtype="int32", name="input_answer") 123 | 124 | """ 125 | Embedding Layer: turns positive integer indices into dense vectors of a fixed size. 126 | - input_dim: positive integer. Vocabulary size, i.e. maximum input index + 1. 127 | - output_dim: integer >= 0. Dimensionality of the dense embeddings. 128 | - input_length: length of the input sequences (a constant). Required when a Flatten and then a Dense layer follow this layer (without it, the shape of the Dense outputs cannot be computed). 129 | """ 130 | # If a trained weights file already exists, skip the GloVe init (the checkpoint is loaded below) 131 | if os.path.isfile(weights_file): 132 | Shared_Embedding = Embedding(input_dim=DICTIONARY_SIZE, output_dim=WORD2VEC_DIMS, input_length=MAX_INPUT_LENGTH,) 133 | else: 134 | Shared_Embedding = Embedding(input_dim=DICTIONARY_SIZE, output_dim=WORD2VEC_DIMS, input_length=MAX_INPUT_LENGTH, 135 | weights=[word_embedding_matrix]) 136 | 137 | """ 138 | Shared Embedding Layer #Doc2Vec(Document*2) 139 | """ 140 | shared_embedding_context = Shared_Embedding(input_context) 141 | shared_embedding_answer = Shared_Embedding(input_answer) 142 | 143 | """ 144 | LSTM Layer # 145 | """ 146 | Encoder_LSTM = LSTM(units=DOC2VEC_DIMS, init="lecun_uniform") 147 | Decoder_LSTM = LSTM(units=DOC2VEC_DIMS, init="lecun_uniform") 148 | embedding_context = Encoder_LSTM(shared_embedding_context) 149 | embedding_answer = Decoder_LSTM(shared_embedding_answer) 150 | 151 | """ 152 | Merge Layer # 153 | """ 154 | merge_layer = merge([embedding_context, embedding_answer], mode='concat', 
concat_axis=1) 155 | 156 | """ 157 | Dense Layer # 158 | """ 159 | dense_layer = Dense(DICTIONARY_SIZE // 2, activation="relu")(merge_layer) 160 | 161 | """ 162 | Output Layer # 163 | """ 164 | outputs = Dense(DICTIONARY_SIZE, activation="softmax")(dense_layer) 165 | 166 | """ 167 | Modeling 168 | """ 169 | model = Model(input=[input_context, input_answer], output=[outputs]) 170 | model.compile(loss="categorical_crossentropy", optimizer=ADAM) 171 | 172 | if os.path.isfile(weights_file): 173 | model.load_weights(weights_file) 174 | 175 | 176 | # ====================================================================== 177 | # Loading the data: 178 | # ====================================================================== 179 | 180 | Q = cPickle.load(open(questions_file, 'rb')) 181 | A = cPickle.load(open(answers_file, 'rb')) 182 | N_SAMPLES, N_WORDS = A.shape 183 | 184 | Q_test = Q[0:N_TEST,:] 185 | A_test = A[0:N_TEST,:] 186 | Q = Q[N_TEST + 1:,:] 187 | A = A[N_TEST + 1:,:] 188 | 189 | print("Number of Samples = %d"%(N_SAMPLES - N_TEST)) 190 | Step = int(np.around((N_SAMPLES - N_TEST) / NUM_SUBSETS)) 191 | SAMPLE_ROUNDS = Step * NUM_SUBSETS 192 | 193 | 194 | # ====================================================================== 195 | # Bot training: 196 | # ====================================================================== 197 | 198 | x = range(0, NUM_EPOCHS) 199 | VALID_LOSS = np.zeros(NUM_EPOCHS) 200 | TRAIN_LOSS = np.zeros(NUM_EPOCHS) 201 | 202 | for n_epoch in range(NUM_EPOCHS): 203 | # Loop over training batches due to memory constraints 204 | for n_batch in range(0, SAMPLE_ROUNDS, Step): 205 | 206 | Q2 = Q[n_batch:n_batch+Step] 207 | s = Q2.shape 208 | counter = 0 209 | for idx, sentence in enumerate(A[n_batch:n_batch+Step]): 210 | l = np.where(sentence==3) # the position of the symbol EOS 211 | limit = l[0][0] 212 | counter += limit + 1 213 | 214 | question = np.zeros((counter, MAX_INPUT_LENGTH)) 215 | answer = np.zeros((counter, MAX_INPUT_LENGTH)) 216 | target = 
np.zeros((counter, DICTIONARY_SIZE)) 217 | 218 | # Loop over the training examples: 219 | counter = 0 220 | for i, sentence in enumerate(A[n_batch:n_batch+Step]): 221 | ans_partial = np.zeros((1, MAX_INPUT_LENGTH)) 222 | 223 | # Loop over the positions of the current target output (the current output sequence) 224 | l = np.where(sentence==3) # the position of the symbol EOS 225 | limit = l[0][0] 226 | 227 | for k in range(1, limit+1): 228 | # Mapping the target output (the next output word) to one-hot coding: 229 | target_onehot = np.zeros((1, DICTIONARY_SIZE)) 230 | target_onehot[0, sentence[k]] = 1 231 | 232 | # preparing the partial answer to input: 233 | ans_partial[0,-k:] = sentence[0:k] 234 | 235 | # training the model for one epoch using teacher forcing: 236 | 237 | question[counter, :] = Q2[i:i+1] 238 | answer[counter, :] = ans_partial 239 | target[counter, :] = target_onehot 240 | counter += 1 241 | 242 | print('Training epoch: %d, Training examples: %d - %d'%(n_epoch, n_batch, n_batch + Step)) 243 | model.fit([question, answer], target, batch_size=BATCH_SIZE, epochs=1) 244 | 245 | test_input = Q_test[41:42] 246 | print(print_result(test_input)) 247 | train_input = Q[41:42] 248 | print(print_result(train_input)) 249 | 250 | model.save_weights(weights_file, overwrite=True) 251 | -------------------------------------------------------------------------------- /training_chatbot.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/samurainote/Automatic-Encoder-Decoder_Seq2Seq_Chatbot/d4ef7a0b8a3760507ebbac34354cbc1469708d2e/training_chatbot.py --------------------------------------------------------------------------------
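The `print_result` helper in `chatbot_keras.py` and `seq2seq_model.py` performs greedy decoding: the partial answer is kept right-aligned in a fixed-width window, shifted left by one slot each step, and the argmax token is appended until EOS. A minimal numpy-only sketch of that loop; the `fake_predict` stub, the toy vocabulary size, and `MAX_LEN` are illustrative assumptions standing in for the trained model:

```python
import numpy as np

MAX_LEN = 10          # stands in for maxlen_input / MAX_INPUT_LENGTH
VOCAB_SIZE = 6        # toy vocabulary, not the real 10000-word dictionary
BOS, EOS = 2, 3       # token indices used by the training scripts

def fake_predict(encoder_input, ans_partial):
    """Stub for model.predict: emits token 4 after BOS, then EOS."""
    probs = np.zeros(VOCAB_SIZE)
    if ans_partial[0, -1] == BOS:
        probs[4], probs[EOS] = 0.6, 0.4
    else:
        probs[EOS] = 1.0
    return probs

def greedy_decode(encoder_input, predict=fake_predict):
    # Start with BOS in the right-most slot, as print_result does.
    ans_partial = np.zeros((1, MAX_LEN), dtype=int)
    ans_partial[0, -1] = BOS
    decoded = []
    for _ in range(MAX_LEN - 1):
        token = int(np.argmax(predict(encoder_input, ans_partial)))
        # Shift the window left by one and append the new token.
        ans_partial[0, :-1] = ans_partial[0, 1:]
        ans_partial[0, -1] = token
        if token == EOS:
            break
        decoded.append(token)
    return decoded

print(greedy_decode(np.zeros((1, MAX_LEN))))  # [4]
```

The sliding window means the decoder input always has shape `(1, MAX_LEN)`, matching what the shared embedding layer expects, at the cost of forgetting tokens older than the window.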
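The nested training loop above expands each padded answer into one teacher-forcing example per decoded position: the prefix of the answer is right-aligned into the decoder input, and the word that follows it becomes a one-hot target. A standalone sketch of that expansion for a single answer row; `MAX_LEN`, `DICT_SIZE`, and the toy answer are illustrative, only the EOS index 3 comes from the scripts:

```python
import numpy as np

MAX_LEN = 8    # toy sequence length
DICT_SIZE = 10  # toy dictionary size
EOS = 3         # index of the EOS symbol, as in the training scripts

def expand_teacher_forcing(answer):
    """Turn one padded answer row into per-position training pairs:
    (right-aligned partial answer, one-hot of the next word)."""
    limit = int(np.where(answer == EOS)[0][0])  # position of EOS
    partials = np.zeros((limit, MAX_LEN), dtype=int)
    targets = np.zeros((limit, DICT_SIZE))
    for k in range(1, limit + 1):
        partials[k - 1, -k:] = answer[:k]   # prefix of length k, right-aligned
        targets[k - 1, answer[k]] = 1       # one-hot encoding of the next word
    return partials, targets

# A toy answer: BOS, word 5, word 7, EOS, then padding.
ans = np.array([2, 5, 7, 3, 0, 0, 0, 0])
p, t = expand_teacher_forcing(ans)
print(p.shape, t.shape)  # (3, 8) (3, 10)
```

Note the last pair teaches the model to emit EOS itself, which is what lets the greedy decoder know when to stop.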