├── README.md ├── LICENSE └── DocHate.ipynb /README.md: -------------------------------------------------------------------------------- 1 | ## Multilabel Classification with DocHate tips 2 | When journalists ask their audience for help, success creates a whole new problem: what do you do with thousands of tips? 3 | 4 | Or what do you do with thousands of textual descriptions of … anything … potholes, disciplinary actions at prisons, aircraft safety incidents? There are too many to really read. 5 | 6 | And any time you feel "there are too many to really read," that's when you should consider getting help from machine learning. 7 | 8 | Here's how we did that. There's a Jupyter notebook in this repo; we also have [a non-technical blogpost](https://qz.ai/a-crash-course-for-journalists-in-classifying-text-with-machine-learning/) you can read. 9 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright 2020 Quartz 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 4 | 5 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 6 | 7 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 8 | 9 | 10 | -------------------------------------------------------------------------------- /DocHate.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Multilabel Classification with DocHate tips\n", 8 | "\n", 9 | "\n", 10 | "When journalists ask their audience for help, success creates a whole new problem: what do you do with thousands of tips? \n", 11 | "\n", 12 | "Or what do you do with thousands of textual descriptions of … anything … potholes, disciplinary actions at prisons, aircraft safety incidents? There are too many to really read.\n", 13 | "\n", 14 | "And any time you feel \"there are too many to really read,\" that's when you should consider getting help from machine learning.\n", 15 | "\n", 16 | "The folks at ProPublica’s Documenting Hate project had this problem, with around 6,000 tips about hate crimes and bias incidents contributed by readers. To report a hate incident, someone only has to provide a written description of what happened. If they choose, they can also fill out checkboxes for why the victim was targeted -- e.g. because of their race, religion or immigrant status.\n", 17 | "\n", 18 | "Only some people fill out those “targeted because” checkboxes, but that data is important for analysis and for getting tips to the right reporter. Could we train a computer to guess at what kind of target was involved based on the written description alone?\n", 19 | "\n", 20 | "**This notebook is technical and gets into the nitty-gritty of how to do text classification in this context. 
If you'd like a less-technical overview, read [our blogpost](https://qz.ai/a-crash-course-for-journalists-in-classifying-text-with-machine-learning/).** Check out the interactive example of the Naive Bayes model here: [demo](https://s3.amazonaws.com/qz-aistudio-public/dochate.html).\n", 21 | "\n", 22 | "We used Python and the scikit-learn library. (And tested some other algorithms using Keras.) But all of this is doable in R or other programming/stats languages. \n", 23 | "\n", 24 | "Here's what the final results look like, for predicting whether a tip is related to race and/or ethnicity using a variety of algorithms:\n", 25 | "\n", 26 | "````\n", 27 | " AU PR Curve\n", 28 | " Keras CNN 92\n", 29 | " Naive Bayes 90\n", 30 | " Spacy 88\n", 31 | " Google AutoML 87\n", 32 | " Keras NN 84\n", 33 | " Keras LSTM -\n", 34 | "```` \n", 35 | "\n", 36 | "It goes without saying, but **be aware that there are slurs, swear words, and other offensive language in the code and output here!**" 37 | ] 38 | }, 39 | { 40 | "cell_type": "markdown", 41 | "metadata": {}, 42 | "source": [ 43 | "## Step 1: Figuring out what question we wanted to answer.\n", 44 | "\n", 45 | "ProPublica receives tips about hate crimes via a [web form](http://documentinghate.com). The `targeted_because` checkboxes are optional. To fiddle with text classification approaches that work well for this kind of data (short-ish, political topics, etc.), we're going to try to \"fill in the blanks\" when the value of the `targeted_because` field is empty. \n" 46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "execution_count": 1, 51 | "metadata": {}, 52 | "outputs": [], 53 | "source": [ 54 | "# let's get started!\n", 55 | "from os import environ\n", 56 | "import numpy as np\n", 57 | "import tensorflow as tf\n", 58 | "import random\n", 59 | "\n", 60 | "# several of the algorithms we test here make use of randomness. 
and we split the data into train/test groups randomly.\n", 61 | "# in order to make sure that every time we run this notebook, we get the same results (rather than a\n", 62 | "# \"good\" split making one algorithm choice seem better), we set an arbitrary number (1234) as the seed for all the \n", 63 | "# random number generators.\n", 64 | "RANDOM_SEED = 1234\n", 65 | "np.random.seed(RANDOM_SEED)\n", 66 | "random.seed(RANDOM_SEED)\n", 67 | "tf.set_random_seed(RANDOM_SEED)\n", 68 | "environ['PYTHONHASHSEED'] = '0'\n", 69 | "\n", 70 | "from tensorflow import keras\n", 71 | "import pandas as pd\n", 72 | "import spacy\n", 73 | "import csv\n", 74 | "from sklearn.model_selection import train_test_split\n", 75 | "from sklearn.metrics import confusion_matrix, average_precision_score, precision_recall_curve, classification_report\n", 76 | "nlp = spacy.load('en_core_web_lg')\n", 77 | "from sklearn.preprocessing import MultiLabelBinarizer\n", 78 | "from imblearn.over_sampling import SMOTE" 79 | ] 80 | }, 81 | { 82 | "cell_type": "code", 83 | "execution_count": 4, 84 | "metadata": {}, 85 | "outputs": [ 86 | { 87 | "name": "stdout", 88 | "output_type": "stream", 89 | "text": [ 90 | "Tips count: 5943\n", 91 | "Columns: ['admin_url', 'links', 'source', 'city', 'state', 'incident_date', 'where_occurred', 'type', 'targeted_because', 'gender', 'religion', 'race_ethnicity', 'reported_to_police', 'police_dept', 'description', 'knowledge', 'status']\n" 92 | ] 93 | } 94 | ], 95 | "source": [ 96 | "tips_raw = pd.read_csv(\"data/dochate/CleanReport-2019-02-13.csv\") # the actual file is confidential. 
see \"step 2\"\n", 97 | "print(\"Tips count: {}\".format(tips_raw.shape[0]))\n", 98 | "print(\"Columns: {}\".format(tips_raw.columns.tolist()))" 99 | ] 100 | }, 101 | { 102 | "cell_type": "markdown", 103 | "metadata": {}, 104 | "source": [ 105 | "## Step 2: Getting our data\n", 106 | "\n", 107 | "We have our \"train_test\" data and our \"real\" data all mixed in one spreadsheet (along with out-of-scope data like trolls, and inapplicable data like tips in Spanish or tips with a blank `description`). Our actual goal is to predict the `targeted_because` value where it's absent, using just the description field. \n", 108 | "\n", 109 | "I can't show you the data itself, but the descriptions are just text. The `targeted_because` column is comma-separated, so it might say `race,ethnicity` or `religion,sexual-orientation,race`.\n", 110 | "\n", 111 | "We remove the trolls, the not-applicable tips, those without a description and those that are in Spanish.\n", 112 | "\n", 113 | "We split the remaining data into two groups. First, `real_data` is the remaining tips where no targeting reasons were selected. Those are the ones where we want the computer to find the right answer. Second, `train_test_data` is the tips that do have a targeting reason provided by the tipster."
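The labeled/unlabeled split described above can be sketched with a toy, pure-Python example (the tip dicts below are hypothetical stand-ins for rows of the confidential spreadsheet; field names are taken from the columns listed earlier):

```python
# Toy sketch of splitting tips into labeled train/test data vs. "real" unlabeled data.
# These example tips are made up for illustration.
tips = [
    {"description": "A slur was shouted at my neighbor.", "targeted_because": "race,ethnicity"},
    {"description": "Someone defaced a synagogue wall.", "targeted_because": "religion"},
    {"description": "I was harassed on the bus.", "targeted_because": None},  # tipster skipped the checkboxes
]

# tips with a targeting reason become the training/testing data...
train_test_data = [t for t in tips if t["targeted_because"]]
# ...and the rest are the "real" data we want the model to label for us.
real_data = [t for t in tips if not t["targeted_because"]]

print(len(train_test_data), len(real_data))
```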
114 | ] 115 | }, 116 | { 117 | "cell_type": "code", 118 | "execution_count": 6, 119 | "metadata": {}, 120 | "outputs": [ 121 | { 122 | "name": "stdout", 123 | "output_type": "stream", 124 | "text": [ 125 | "Tips that need classification: (568, 19)\n", 126 | "tips that have a classification already: (3710, 19)\n" 127 | ] 128 | } 129 | ], 130 | "source": [ 131 | "column_of_interest = \"targeted_because\"\n", 132 | "\n", 133 | "\n", 134 | "# remove all the tips that were marked by hand as trolls or not-applicable or with a blank description.\n", 135 | "tips = tips_raw[(tips_raw[\"status\"] != 'troll') & (tips_raw[\"status\"] != 'not-applicable') & tips_raw[\"description\"].notnull()]\n", 136 | "\n", 137 | "\n", 138 | "# remove duplicates (of which there are some!)\n", 139 | "tips = tips.drop_duplicates(subset=['description', column_of_interest], keep=False)\n", 140 | "\n", 141 | "\n", 142 | "# this is a hacky way of detecting if a tip is in English or in Spanish. \n", 143 | "# stopwords are standard lists of grammatical function words (\"a\", \"the\", \"of\"). 
\n", 144 | "# If a tip has at least 3 Spanish stopwords for every 2 English ones, we exclude it.\n", 145 | "# It's not perfect but it works okay.\n", 146 | "def is_english(sentence):\n", 147 | " from nltk import word_tokenize\n", 148 | " from nltk.corpus import stopwords\n", 149 | " tokens_set = set(word_tokenize(sentence))\n", 150 | " return len(set(stopwords.words('english')) & tokens_set) * 1.5 > len(set(stopwords.words('spanish')) & tokens_set)\n", 151 | "tips['english'] = tips['description'].apply(is_english)\n", 152 | "tips = tips[tips['english']]\n", 153 | "tips = tips.reset_index()\n", 154 | "\n", 155 | "# split the data into the data for training/testing our models -- and the \"real life\" data we hope to use our model to help with.\n", 156 | "train_test_data = tips[ tips[column_of_interest].notnull() ].copy() # if targeted_because isn't blank\n", 157 | "real_data = tips[~tips.isin(train_test_data)].dropna(how='all').copy() # if it is.\n", 158 | "\n", 159 | "\n", 160 | "print(\"Tips that need classification: {}\".format(real_data.shape))\n", 161 | "print(\"tips that have a classification already: {}\".format(train_test_data.shape))\n" 162 | ] 163 | }, 164 | { 165 | "cell_type": "markdown", 166 | "metadata": {}, 167 | "source": [ 168 | "## Step 3. Cleaning the data to remove the things that might confuse a computer.\n", 169 | "\n", 170 | "Data cleaning is one of the most important parts of real-world natural language processing, but it's underdiscussed for at least two reasons: it's completely unsexy and it's often different for every project. Data cleaning means removing stuff that might distract a computer and combining similar but not quite identical features so that they appear identical to the computer. 
A good way to think about data cleaning is to ask yourself which things in your data _you_ would use to categorize it -- and which you would ignore.\n", 171 | "\n", 172 | "An easy example is that we lowercase everything (in other words, making the not-quite-identical words \"Then\" and \"then\" identical by transforming the first to \"then\"), since we're working with text typed by internet users. And we will remove punctuation and \"non-word characters\" because they're not likely to tell us much about what attribute a hate crime was targeted by. (Ask yourself, will commas tell us anything about hate crimes? Of course not.) These steps are so typical that they're built into scikit-learn's vectorizers (so we don't have to do them ourselves). We may also want to remove common English words like \"a\" and \"the\" -- typically called \"stopwords\" -- this is sometimes automatic, but not always, so it's worth checking.\n", 173 | "\n", 174 | "Other examples depend on your precise dataset. There's no recipe. You have to ask yourself what words will be a distraction to the model. Here's a harder example: Consider a database of press releases from members of the US Congress that you're trying to categorize by topic (taxation, military, education, etc.). The model should pick out the phrases used frequently by members of Congress who talk about each topic a lot. Sometimes that's good (\"deduction\", for instance)... but sometimes that's bad, like the name of former Rep. Paul Ryan's press secretary. Those words aren't actually useful for determining if something is about taxation... especially if Ryan's replacement hires his staffer.\n", 175 | "\n", 176 | "Data cleaning is the process of cogitating about the data and figuring out a way to remove the unhelpful stuff, but not the helpful stuff. 
This is task-specific; if we had a dataset of press releases about Wisconsin that contained Ryan's press releases but also ones from the Milwaukee Brewers that we were trying to classify into the politics or sports category, the presence of Ryan-related words like the name of his press secretary _would_ be useful. \n", 177 | "\n", 178 | "Additionally, since the Documenting Hate tip data was submitted by users, it’s possible that they made mistakes -- like marking a clearly religion-related hate crime as related to, say, gender. Or failing to select ‘immigrant’ as a category for an incident that involved the intersection of race and immigrant status. Once I had an initial model trained, I dug -- quite unscientifically -- through the tips the classifier got wrong to see if there were some where the user-contributed answer (that we treat as ground-truth) might be wrong. We might want to change some of the answers ourselves or even merge categories. You don’t want the computer to “learn” someone else’s mistakes.\n", 179 | "\n", 180 | "We can see what words are being fed into the Naive Bayes model with `vectorizer.vocabulary_.keys()`. Let's do that and take a look. They mostly look good, right? 
" 181 | ] 182 | }, 183 | { 184 | "cell_type": "code", 185 | "execution_count": 7, 186 | "metadata": { 187 | "scrolled": true 188 | }, 189 | "outputs": [ 190 | { 191 | "name": "stdout", 192 | "output_type": "stream", 193 | "text": [ 194 | "cranky, 28th, decatur, designated, ezpass, reverting, tender, demonstrators, tellin, fundamental, closely, giampa, inferiors, replies, demarcate, regenerate, aprove, marxist, shoe, hallway, arabs, mart, shortest, purported, curbs, abqjew, palm, homosexuality, engange, waste, trigger, xbox, carbondale, cbp, breathe, superstition, martial, horribly, beware, netanyahu, breeds, 42nd, smythe, chuckle, attended, predators, corporation, potted, strangely, duffle, catty, harrasing, sacred, bt, leaf, revs, gecko, 1st, akbar, clarksville, muscle, advocating, meningioma, bucks, slack, americorps, fyre, hanging, offhand, efforts, agitators, paranormal, investgations, disected, humptulips, ranged, justin, bisexual, congress, marijuana, hoodie, philando, denver, marco, upholstery, for, unavailable, kenosha, 99485266, imperial, holt, parkersburg, kleeve, middletown, instagram, arbitrarily, swaztica, outpopulate, seminole, killing, considerado, spokesman, nuefeild, chugiak, escape, founded, clinical, mustard, affluent, tab, caucassian, stolen, virtually, dangerous, funneled, santa, chokes, tenants, buffalo, benches, lynching, readership, transferred, sa, siguiendo\n" 195 | ] 196 | } 197 | ], 198 | "source": [ 199 | "from sklearn.feature_extraction.text import HashingVectorizer, CountVectorizer, TfidfVectorizer\n", 200 | "simple_vectorizer = TfidfVectorizer(lowercase=True) # 1,1 works well?\n", 201 | "simple_vectorizer.fit(tips[\"description\"])\n", 202 | "words = list(simple_vectorizer.vocabulary_.keys())\n", 203 | "random.shuffle(words)\n", 204 | "print(', '.join(words[:125]))" 205 | ] 206 | }, 207 | { 208 | "cell_type": "markdown", 209 | "metadata": {}, 210 | "source": [ 211 | "But what if there are meaningful words that are absent from here, 
because they're removed by the cleaning process?\n", 212 | "\n", 213 | "🤔🤔🤔\n" 214 | ] 215 | }, 216 | { 217 | "cell_type": "code", 218 | "execution_count": 8, 219 | "metadata": { 220 | "scrolled": true 221 | }, 222 | "outputs": [ 223 | { 224 | "name": "stdout", 225 | "output_type": "stream", 226 | "text": [ 227 | "is email 'visible' to the computer? True\n", 228 | "is e-mail 'visible' to the computer? False\n", 229 | "is t-shirt 'visible' to the computer? False\n", 230 | "is f**king 'visible' to the computer? False\n" 231 | ] 232 | } 233 | ], 234 | "source": [ 235 | "words_to_check = [\"email\", \"e-mail\", \"t-shirt\", \"f**king\"]\n", 236 | "for word in words_to_check:\n", 237 | " print(\"is {} 'visible' to the computer? {}\".format(word, word in simple_vectorizer.vocabulary_.keys()))" 238 | ] 239 | }, 240 | { 241 | "cell_type": "markdown", 242 | "metadata": {}, 243 | "source": [ 244 | "I think you see where we're going here...\n", 245 | "\n", 246 | "### Data Cleaning That Preserves Censored Slurs\n", 247 | "\n", 248 | "What are some words that are really informative about hate crimes, but are frequently not spelled out? Slurs. Remember when we removed \"non-word characters\"? We might want to backtrack and keep some of them, e.g. when a slur is replaced with comics-style grawlixes (\"F@#$!\"), stars, dashes or transformed into, e.g., \"the f-word\" or \"k**e\".\n", 249 | "\n", 250 | "Default text-cleaning rules will split words at hyphens, transforming the quite-informative \"f-word\" first into \"f word\", then remove one-letter words, so we're just left with \"word\"... which tells us basically nothing. Defaults are usually a good choice, but here's an example where they're not.\n", 251 | "\n", 252 | "So let's find some examples in the dataset so we can try to make sure they're included."
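The way the defaults destroy "f-word" can be shown with nothing but the standard library: scikit-learn's vectorizers tokenize with the default pattern `(?u)\b\w\w+\b`, which only keeps runs of two or more word characters. A hyphen splits the word and the lone "f" is too short to survive:

```python
import re

# scikit-learn's default token_pattern: two or more word characters.
# Hyphens act as boundaries, and single letters never match.
DEFAULT_TOKEN_PATTERN = r"(?u)\b\w\w+\b"

print(re.findall(DEFAULT_TOKEN_PATTERN, "he said the f-word"))
# the informative "f" is gone; only "he", "said", "the", "word" survive
```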
253 | ] 254 | }, 255 | { 256 | "cell_type": "code", 257 | "execution_count": 18, 258 | "metadata": { 259 | "scrolled": true 260 | }, 261 | "outputs": [ 262 | { 263 | "data": { 264 | "text/plain": [ 265 | "['B_t', 'C*NT', 'C-word', 'E=MC', 'F*****g']" 266 | ] 267 | }, 268 | "execution_count": 18, 269 | "metadata": {}, 270 | "output_type": "execute_result" 271 | } 272 | ], 273 | "source": [ 274 | "import re\n", 275 | "# find words that have one alphabetic character, then one or more non-alpha chars, then more alpha chars.\n", 276 | "# (so this matches 'e-mail', 't-shirt', 'f***ing' but not 'anti-semitic')\n", 277 | "bad_words = sorted(set([item for sublist in [res for res in [re.findall(r\"(?i)(?<= )[a-z\\u00C0-\\u017F“][^a-z0-9“\\u00C0-\\u017F'’\\.\\s]+[a-z\\u00C0-\\u017F“]+\", tip) for tip in list(tips['description'].values)] if res] for item in sublist if not re.match(r'^(?i)[A-Za-z][&-][A-Za-z]$', item)]))\n", 278 | "list(bad_words)[5:10]" 279 | ] 280 | }, 281 | { 282 | "cell_type": "markdown", 283 | "metadata": {}, 284 | "source": [ 285 | "Yeah okay. How're we gonna deal with that...\n", 286 | "\n", 287 | "We can verify (with the `inspect` method) that the word \"word\" makes an input tip more likely to be related to race and less likely to be related to other topics (though it's far more of a drag on, say, the `religion` class than on `sexual-orientation`).\n", 288 | "\n", 289 | "I noodled around with this for a while... The solution didn't occur to me immediately and I tried a variety of things and changed my goals when it became clear I hadn't fully solved the problem. At the start, I just wanted words like `f-word`, `f****r`, etc. to be preserved in the data given to the classifier... by the end, I decided that I wanted as many different variants of a censored word as possible to be \"collapsed\" into the same token -- and into a token that was mostly understandable by a human (not gibberish). 
I also wanted to make sure that the censored words didn't get turned into an instance of an unrelated \"normal\" word.\n", 290 | "\n", 291 | "What I came up with only acts on words that are a single alphabetic character followed by one or more non-alphabetic characters followed by one or more alphabetic characters. If the non-alphabetic character string is just one hyphen, it gets turned into `dash`, so if we see `t-shirt` we turn it into `tdashshirt`. Otherwise, we replace it with the first letter, `XXX` and the last letter of the word -- so that `f*cking` and `f***ing` end up collapsed to the same thing, `fXXXg`. \n", 292 | "\n", 293 | "Inevitably, I did a fair amount of futzing around here. For a while, my regex didn't realize characters with diacritics were letters, so it started censoring the Spanish word \"pública\". I also was initially matching words like \"A&M\", which had to be excluded.\n", 294 | "\n", 295 | "The effect of this turned out not to be that great though (about half a percentage point improvement in AUC). " 296 | ] 297 | }, 298 | { 299 | "cell_type": "code", 300 | "execution_count": 19, 301 | "metadata": { 302 | "scrolled": true 303 | }, 304 | "outputs": [], 305 | "source": [ 306 | "import re\n", 307 | "def collapse_censored_word(word):\n", 308 | " if re.match(r\"(?i)[a-z\\u00C0-\\u017F“]-[a-z\\u00C0-\\u017F“]+\", word): # if there's just one hyphen, e.g. 
t-shirt, f-ing...\n", 309 | " word = word.replace(\"-\", \"dash\")\n", 310 | " else:\n", 311 | " word = word[0] + \"XXX\" + word[-1]\n", 312 | "# word = re.sub(r\"(?i)([^a-z“\\u00C0-\\u017F0-9'’\\.\\s\\-]+|-{2,})\", \"XXX\", word)\n", 313 | " return word\n", 314 | "\n", 315 | "censorable_word_regex = \"[a-z\\u00C0-\\u017F“][^a-z0-9“\\u00C0-\\u017F'’\\.\\s]+[a-z\\u00C0-\\u017F“]+\"\n", 316 | "def clean(text):\n", 317 | " potential_censored_words = re.findall(r\"(?i)(?<=[ \\(\\\"\\'])\" + censorable_word_regex, text) + re.findall(\"(?i)^\" + censorable_word_regex, text)\n", 318 | " for word in potential_censored_words:\n", 319 | " text = text.replace(word, collapse_censored_word(word))\n", 320 | " return text.replace(\"“\", '').replace(\"’s\", \" 's\").replace(\"'s\", \" 's\")" 321 | ] 322 | }, 323 | { 324 | "cell_type": "markdown", 325 | "metadata": {}, 326 | "source": [ 327 | "See how it works? Rather than just retaining \"word\", we retain something meaningful." 328 | ] 329 | }, 330 | { 331 | "cell_type": "code", 332 | "execution_count": 20, 333 | "metadata": {}, 334 | "outputs": [ 335 | { 336 | "data": { 337 | "text/plain": [ 338 | "'Qdashword is bad'" 339 | ] 340 | }, 341 | "execution_count": 20, 342 | "metadata": {}, 343 | "output_type": "execute_result" 344 | } 345 | ], 346 | "source": [ 347 | "clean(\"Q-word is bad\")" 348 | ] 349 | }, 350 | { 351 | "cell_type": "code", 352 | "execution_count": 21, 353 | "metadata": {}, 354 | "outputs": [ 355 | { 356 | "name": "stdout", 357 | "output_type": "stream", 358 | "text": [ 359 | "Total censored words: 120\n", 360 | "Total censored words after cleaning: 73\n" 361 | ] 362 | } 363 | ], 364 | "source": [ 365 | "# checking how many bad words are collapsed with this method\n", 366 | "print(\"Total censored words: {}\".format(len(bad_words)))\n", 367 | "unified_bad_words = {}\n", 368 | "for word, clean_word in [(word, collapse_censored_word(word)) for word in bad_words]:\n", 369 | " if clean_word not in 
unified_bad_words:\n", 370 | " unified_bad_words[clean_word] = []\n", 371 | " unified_bad_words[clean_word].append(word)\n", 372 | "print(\"Total censored words after cleaning: {}\".format(len(set( unified_bad_words.keys()))))" 373 | ] 374 | }, 375 | { 376 | "cell_type": "markdown", 377 | "metadata": {}, 378 | "source": [ 379 | "Once we've come up with a way to clean the text that we like, we do it.\n", 380 | "\n", 381 | "`lemmatize` relies on a library called Spacy. It reduces words to their dictionary form -- on the theory that we learn more by treating \"punch\" and \"punching\" and \"punched\" as the same word, especially when we have a small dataset. It adds a few percentage points of AUPR for several of the classes (but not race_ethnicity) with NB. For CNN it improves or does nothing (and costs one percentage point in a few places; immigrant does worse with the default dropout1=0.02 but dropout1=0.01 fixes the problem).\n" 382 | ] 383 | }, 384 | { 385 | "cell_type": "code", 386 | "execution_count": 22, 387 | "metadata": {}, 388 | "outputs": [], 389 | "source": [ 390 | "def lemmatize(doc):\n", 391 | " return ' '.join([token.lemma_ for token in nlp(doc)])\n", 392 | "\n", 393 | "train_test_data[\"description\"] = train_test_data[\"description\"].apply(clean)\n", 394 | "train_test_data[\"description\"] = train_test_data[\"description\"].apply(lemmatize)" 395 | ] 396 | }, 397 | { 398 | "cell_type": "markdown", 399 | "metadata": {}, 400 | "source": [ 401 | "### Preparing to predict targeted_because\n", 402 | "\n", 403 | "Right now, the `targeted_because` column is exactly as it was in our source data (except we removed the blank rows, the trolls, etc.) -- that is, a string with commas. The \"typical\" format for machine-learning projects like this one is to have one column for each possible targeting reason (race, etc.) and then a `1` in that column for each description if it has that class and a `0` if it doesn't. 
\n", 404 | "\n", 405 | "You can do that however you like, but we're using the MultiLabelBinarizer class from scikit-learn.\n", 406 | "\n", 407 | "I'm also adding a `race_ethnicity` column that's `1` (i.e. true) if the hate incident is classified as either `race` or `ethnicity`-related by the tipster. That's because I suspect some tipsters will mix the two up, which would confuse the computer. \n", 408 | "\n", 409 | "I wonder if merging the `race` and `ethnicity` categories might be a good idea -- only because people may use the terms interchangeably on the form, so the computer can't learn the nuanced distinction between them.\n" 410 | ] 411 | }, 412 | { 413 | "cell_type": "code", 414 | "execution_count": 27, 415 | "metadata": {}, 416 | "outputs": [], 417 | "source": [ 418 | "# split the comma-separated targeted_because column into an actual list.\n", 419 | "train_test_data[column_of_interest] = train_test_data[column_of_interest].apply(lambda x: x.split(\",\") if not isinstance(x, list) else x)\n", 420 | "\n", 421 | "# since we're doing a multi-label classification problem -- aka a single incident can involve targeting someone for \n", 422 | "# one or more of the possible labels (e.g. 
race AND religion AND immigrant status) -- we need to do some data preprocessing.\n", 423 | "# 'disability', 'ethnicity', 'gender', 'immigrant', 'race', 'religion', 'sexual-orientation'\n", 424 | "# we're actually going to be doing 7 classifiers, one to see if a description matches each label or not.\n", 425 | "lb = MultiLabelBinarizer()\n", 426 | "labels_df = pd.DataFrame(lb.fit_transform(train_test_data[column_of_interest]), columns=list(lb.classes_), index=train_test_data.index)\n", 427 | "train_test_data_one_hot = pd.concat([train_test_data[[\"description\", column_of_interest]], labels_df], axis=1)\n", 428 | "# print(train_test_data_one_hot[[idx for idx in train_test_data_one_hot.columns if idx != 'description']])\n", 429 | "train_test_data_one_hot[\"race_ethnicity\"] = train_test_data_one_hot.apply(lambda x: 1.0 if x[\"race\"] or x[\"ethnicity\"] else 0.0, axis=1)\n", 430 | "train_test_data_one_hot[[\"description\", \"race_ethnicity\"]].to_csv(\"data/dochate/dochate_for_automl.csv\", header=False, index=False)\n", 431 | "unique_classes = list(set([item for sublist in train_test_data[column_of_interest].values for item in sublist])) + [\"race_ethnicity\"]" 432 | ] 433 | }, 434 | { 435 | "cell_type": "markdown", 436 | "metadata": {}, 437 | "source": [ 438 | "So here's what our data looks like now.\n", 439 | "\n", 440 | "````\n", 441 | " description race gender ...\n", 442 | "0 I was the victim of a hate crime. 0 1\n", 443 | "1 I also was a hate crime victim. 1 0\n", 444 | "````" 445 | ] 446 | }, 447 | { 448 | "cell_type": "markdown", 449 | "metadata": {}, 450 | "source": [ 451 | "Before we get started with actual machine learning, this is how many hate incidents of each class we have. It's generally harder to predict classes that have fewer examples. 
(The computer, which is quite dumb, never learns what words from the 127 disability-related reports indicate it's a disability-related report, as opposed to words that just happen to appear in those reports, like a city name.) That's why we don't do a great job with guessing which tips have to do with disability or gender." 452 | ] 453 | }, 454 | { 455 | "cell_type": "code", 456 | "execution_count": 28, 457 | "metadata": {}, 458 | "outputs": [ 459 | { 460 | "name": "stdout", 461 | "output_type": "stream", 462 | "text": [ 463 | " disability: 127 / 3710 | 3%\n", 464 | " ethnicity: 1236 / 3710 | 33%\n", 465 | " gender: 374 / 3710 | 10%\n", 466 | " immigrant: 745 / 3710 | 20%\n", 467 | " race: 1838 / 3710 | 50%\n", 468 | " religion: 948 / 3710 | 26%\n", 469 | "sexual-orientation: 655 / 3710 | 18%\n", 470 | " race_ethnicity: 2428 / 3710\n" 471 | ] 472 | } 473 | ], 474 | "source": [ 475 | "# are any of these columns so rare as to be useless to try to predict?\n", 476 | "from itertools import groupby\n", 477 | "all_values = [item for sublist in train_test_data[\"targeted_because\"].values for item in sublist]\n", 478 | "total = len(train_test_data)\n", 479 | "for cnt, label in [(len(list(g)), k) for k, g in groupby((sorted(all_values)))]:\n", 480 | " print(\"{}: {} / {} | {}%\".format(label.rjust(18), str(cnt).rjust(len(str(total))), total, round(cnt / float(total) * 100) ))\n", 481 | "\n", 482 | "print(\"{}: {} / {}\".format(\"race_ethnicity\".rjust(18), str(len(train_test_data[train_test_data_one_hot[\"race_ethnicity\"] == 1.0])).rjust(len(str(total))), total)) " 483 | ] 484 | }, 485 | { 486 | "cell_type": "markdown", 487 | "metadata": {}, 488 | "source": [ 489 | "## Step 4. Choosing an algorithm\n", 490 | "\n", 491 | "Naive Bayes is a simple machine-learning technique (i.e. there's no calculus) but it works well. 
It's what we're trying first.\n", 492 | "\n", 493 | "Later in this notebook, I'll be trying several algorithms for two reasons: (a) as a learning exercise and (b) because machine learning is often such that one algorithm will mysteriously work better than others for a given task, just for idiosyncratic reasons, so it can be worthwhile to try several. Be aware that a lot of them require the data to be in different formats -- that’s step 5 -- so it requires a little extra work.\n", 494 | "\n", 495 | "I’d lean towards picking simpler algorithms over more complex ones… especially if you have relatively little data.\n", 496 | "\n", 497 | "### Here are the algorithms I tried.\n", 498 | "\n", 499 | " - Naive Bayes\n", 500 | " - a ‘vanilla’ neural net\n", 501 | " - a convolutional neural network\n", 502 | " - an LSTM neural network\n", 503 | " - Google’s NLP AutoML\n", 504 | " - Spacy’s text classification\n", 505 | "\n", 506 | "Spoiler alert: the convolutional neural net works better than naive Bayes, but only a touch. And it's more complicated.\n", 507 | "\n", 508 | "For each algorithm, we will do the next two steps:\n", 509 | "\n", 510 | "5. Formatting the data in the way that your chosen algorithm requires it. \n", 511 | "6. Feeding most of your data to your algorithm and perhaps waiting a few minutes." 512 | ] 513 | }, 514 | { 515 | "cell_type": "markdown", 516 | "metadata": {}, 517 | "source": [ 518 | "## Naive Bayes\n", 519 | "\n", 520 | "This is a pretty basic classification algorithm, but it worked well in my experimentation.\n", 521 | "\n", 522 | "We're actually doing seven classifiers, one for each of those options, predicting if a given description matches `race` or not, another predicting if it matches `sexual-orientation` or not, etc."
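To build intuition before reaching for scikit-learn's implementation, here's a from-scratch toy of the multinomial Naive Bayes idea behind one of those binary classifiers. This is a sketch, not the notebook's actual model (which uses TF-IDF features and `MultinomialNB`); the miniature "tips" and the 0/1 labels are made up for illustration:

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """docs: list of token lists; labels: parallel list of 0/1 class ids."""
    word_counts = {0: Counter(), 1: Counter()}
    class_counts = Counter()
    for tokens, label in zip(docs, labels):
        class_counts[label] += 1
        word_counts[label].update(tokens)
    vocab = set(word_counts[0]) | set(word_counts[1])
    return word_counts, class_counts, vocab

def predict_nb(model, tokens):
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    scores = {}
    for label in (0, 1):
        # log prior: how common is this class overall?
        score = math.log(class_counts[label] / total_docs)
        # add-one (Laplace) smoothing so unseen words don't zero out the score
        denom = sum(word_counts[label].values()) + len(vocab)
        for token in tokens:
            score += math.log((word_counts[label][token] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

# hypothetical miniature training set: 1 = race-related, 0 = not
docs = [["slur", "yelled"], ["nice", "day"], ["racist", "slur"], ["good", "day"]]
labels = [1, 0, 1, 0]
model = train_nb(docs, labels)
print(predict_nb(model, ["slur"]))  # → 1
```

One such binary model per label, run over the same descriptions, is all the "seven classifiers" setup amounts to.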
523 | ] 524 | }, 525 | { 526 | "cell_type": "code", 527 | "execution_count": 30, 528 | "metadata": {}, 529 | "outputs": [], 530 | "source": [ 531 | "from sklearn.naive_bayes import MultinomialNB, ComplementNB\n", 532 | "from sklearn.feature_extraction.text import HashingVectorizer, CountVectorizer, TfidfVectorizer\n", 533 | "from sklearn.metrics import classification_report\n", 534 | "from sklearn.model_selection import KFold\n", 535 | "from os.path import join\n", 536 | "from os import makedirs\n", 537 | "import pickle" 538 | ] 539 | }, 540 | { 541 | "cell_type": "markdown", 542 | "metadata": {}, 543 | "source": [ 544 | "### Step 5: Formatting the data in the way that our chosen algorithm requires it. \n", 545 | "\n", 546 | "At this point, our tips are in English. But computers can’t read! So we’re going to have to convert the data into a particular “format” for Naive Bayes.\n", 547 | "\n", 548 | "That algorithm requires words to be represented by numbers -- a process called vectorizing.\n", 549 | "\n", 550 | "I used the scikit-learn package’s TfidfVectorizer to do this, after experimenting with the HashingVectorizer and CountVectorizer. (The performance was about the same.) TfidfVectorizer transforms each tip into a list of numbers reflecting the TF-IDF score for each token (aka word) in that tip (calculated against the entire corpus of all tips). Implicitly, each position in the list refers to an individual word -- and most of the entries in the list are 0, corresponding to words that exist in our dataset but not in this particular tip. So a \"vectorized\" tip might look like this:\n", 551 | "\n", 552 | "```\n", 553 | "[0.1, 0, 0, 0, 0, 0.2, 0.11, 0, 0, 0]\n", 554 | "```\n", 555 | "\n", 556 | "The vectorizers also have the option to generate \"n-grams\" -- that is, pairing up 2- or 3-word chunks and treating them as tokens too. 
For instance, the word \"my\" and the word \"country\" might not be informative about the type of hate incident alone, but when they occur together, \"my country\" is probably a strong sign of an immigration-related incident. This tactic is often successful, but it gave worse results here.\n", 557 | "\n", 558 | "We also split our data into two groups: training data and test data. Won’t the model do better with more training data? Yes, but we keep some portion to the side, so we can evaluate how the model did on data it wasn’t trained on (but that we know the right answers for)." 559 | ] 560 | }, 561 | { 562 | "cell_type": "code", 563 | "execution_count": 31, 564 | "metadata": { 565 | "scrolled": true 566 | }, 567 | "outputs": [ 568 | { 569 | "name": "stdout", 570 | "output_type": "stream", 571 | "text": [ 572 | "what (part of) a vectorized tip looks like: \n", 573 | "[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]\n" 574 | ] 575 | } 576 | ], 577 | "source": [ 578 | "def equalize_classes(predictor, response):\n", 579 | " return SMOTE(random_state=RANDOM_SEED).fit_sample(predictor, response)\n", 580 | "\n", 581 | "\n", 582 | "train_df, test_df = train_test_split(train_test_data_one_hot, \n", 583 | " test_size=0.2, \n", 584 | " shuffle=True,\n", 585 | " random_state=RANDOM_SEED)\n", 586 | "\n", 587 | "train_features_nb = train_df[\"description\"]\n", 588 | "test_features_nb = test_df[\"description\"]\n", 589 | "\n", 590 | "vectorizer = TfidfVectorizer(ngram_range=(1,1), # plain unigrams (1,1) worked best here.\n", 591 | " # max_features=5000, # works best with max_features set to None.\n", 592 | " lowercase=True) # automatically lowercase each word.\n", 593 | "vectorizer.fit(train_features_nb)\n", 594 | "\n", 595 | "train_features_nb_vec = vectorizer.transform(train_features_nb)\n", 596 | 
"test_features_nb_vec = vectorizer.transform(test_features_nb)\n", 597 | "\n", 598 | "print(\"what (part of) a vectorized tip looks like: \")\n", 599 | "print(train_features_nb_vec[0].toarray()[0].tolist()[200:250])\n", 600 | "\n" 601 | ] 602 | }, 603 | { 604 | "cell_type": "markdown", 605 | "metadata": {}, 606 | "source": [ 607 | "### Step 6: Feeding most of your data to your algorithm and perhaps waiting a few minutes.\n", 608 | "\n", 609 | "Finally! Let's train our model. We'll actually train seven models, one for each of our classes.\n", 610 | "\n", 611 | "This is the part where the computer is learning. And it’s pretty simple, from your perspective. It's this line: `naivebayes_classifier.fit(train_features_nb_vec, train_labels_nb)`. \n", 612 | "\n", 613 | "For each of our classes, we have differing numbers of tips. For instance, we have 745 tips about incidents that involve immigrant status, out of 3710 total tagged tips, so 20%. This is a problem. Imagine if we had a very dumb model that predicted that nothing was immigrant-status related; it'd get 80% accuracy! So we have to \"equalize\" the imbalanced classes. 
We're doing that with SMOTE oversampling (but there are other options).\n", 614 | "\n" 615 | ] 616 | }, 617 | { 618 | "cell_type": "code", 619 | "execution_count": 64, 620 | "metadata": {}, 621 | "outputs": [ 622 | { 623 | "name": "stdout", 624 | "output_type": "stream", 625 | "text": [ 626 | "Training models for ...\n", 627 | " - disability\n", 628 | " - ethnicity\n", 629 | " - gender\n", 630 | " - immigrant\n", 631 | " - race\n", 632 | " - religion\n", 633 | " - sexual-orientation\n", 634 | " - race_ethnicity\n" 635 | ] 636 | } 637 | ], 638 | "source": [ 639 | "classifiers = {}\n", 640 | "print(\"Training models for ...\")\n", 641 | "for class_of_interest in [col for col in train_test_data_one_hot.columns if col != \"description\" and col != 'targeted_because']:\n", 642 | " print(\" - \" + class_of_interest)\n", 643 | " naivebayes_classifier = MultinomialNB()\n", 644 | " train_labels_nb = train_df[class_of_interest]\n", 645 | " test_labels_nb = test_df[class_of_interest]\n", 646 | "\n", 647 | " train_features_equalized_nb_vec, train_labels_equalized_nb = equalize_classes(train_features_nb_vec, train_labels_nb)\n", 648 | "\n", 649 | " naivebayes_classifier.fit(train_features_equalized_nb_vec, train_labels_equalized_nb) # <-- TRAINING\n", 650 | "\n", 651 | " classifiers[class_of_interest] = naivebayes_classifier\n" 652 | ] 653 | }, 654 | { 655 | "cell_type": "markdown", 656 | "metadata": {}, 657 | "source": [ 658 | "### Step 7. Looking at the results and deciding if it’s good enough or not -- and if it isn’t, repeating steps 2-6 as necessary.\n", 659 | "\n", 660 | "So we're going to see how we did at the end of the next cell. 
But how do we know how well our model did?\n", 661 | "\n", 662 | "`area under precision-recall curve` is a \"metric\" that's good for measuring classifiers with imbalanced classes -- a dataset is imbalanced when it isn't just 50% of one class and 50% of another, but instead has, say, 26% religion-related hate incidents and thus 74% non-religion-related. It plots precision (avoiding false positives) against recall (avoiding false negatives). The area under that curve is a proportion; the higher the better. If the area under the precision-recall curve is significantly higher than the proportion of the positive class in our test data, then our model has learned to make a distinction between the two classes, however imperfectly.\n", 663 | "\n", 664 | "In that example, you’d be comparing the area under the precision-recall curve to the proportion of your testing data that has the religion class -- if your model has more than 26% area under the precision-recall curve, it’s working. If it’s got a lot more than 26%, it’s working pretty well.\n", 665 | "\n", 666 | "We also show the confusion matrix, which plots the model's guesses against the right answers. Bigger numbers in the top-left and bottom-right are better; the top-right is false negatives and the bottom-left is false positives." 
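The comparison described above can be computed directly with scikit-learn's `average_precision_score`; here is a toy sketch with invented labels and scores, not the notebook's real predictions:

```python
from sklearn.metrics import average_precision_score

y_true   = [1, 0, 1, 0, 0, 0, 1, 0]                   # 3 of 8 are positive
y_scores = [0.9, 0.2, 0.8, 0.3, 0.1, 0.4, 0.7, 0.2]   # the model's predicted probabilities

baseline = sum(y_true) / len(y_true)   # a no-skill classifier's AUPR: 0.375
aupr = average_precision_score(y_true, y_scores)

# these toy scores rank every positive above every negative,
# so the AUPR comes out far above the 0.375 baseline
assert aupr > baseline * 1.1
```

With real, noisier predictions the gap is smaller, which is why we compare the score to the baseline rather than to 1.0.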
667 | ] 668 | }, 669 | { 670 | "cell_type": "code", 671 | "execution_count": 65, 672 | "metadata": {}, 673 | "outputs": [ 674 | { 675 | "name": "stdout", 676 | "output_type": "stream", 677 | "text": [ 678 | "\n", 679 | "disability\n", 680 | "area under PR curve: 0.68\n", 681 | "If the AUPR score (0.6807504776362596) is more than a little bigger than the baseline (0.6563342318059299), which it *is*, then our model is working!\n", 682 | "\n", 683 | "\n", 684 | "\n", 685 | "ethnicity\n", 686 | "area under PR curve: 0.8\n", 687 | "If the AUPR score (0.8002249439051338) is more than a little bigger than the baseline (0.6563342318059299), which it *is*, then our model is working!\n", 688 | "\n", 689 | "\n", 690 | "\n", 691 | "gender\n", 692 | "area under PR curve: 0.63\n", 693 | "If the AUPR score (0.6319593009558203) is more than a little bigger than the baseline (0.6563342318059299), which it *is*, then our model is working!\n", 694 | "\n", 695 | "\n", 696 | "\n", 697 | "immigrant\n", 698 | "area under PR curve: 0.77\n", 699 | "If the AUPR score (0.7732047968744618) is more than a little bigger than the baseline (0.6563342318059299), which it *is*, then our model is working!\n", 700 | "\n", 701 | "\n", 702 | "\n", 703 | "race\n", 704 | "area under PR curve: 0.88\n", 705 | "If the AUPR score (0.8807390613874166) is more than a little bigger than the baseline (0.6563342318059299), which it *is*, then our model is working!\n", 706 | "\n", 707 | "\n", 708 | "\n", 709 | "religion\n", 710 | "area under PR curve: 0.55\n", 711 | "If the AUPR score (0.545680249173754) is more than a little bigger than the baseline (0.6563342318059299), which it *is*, then our model is working!\n", 712 | "\n", 713 | "\n", 714 | "\n", 715 | "sexual-orientation\n", 716 | "area under PR curve: 0.56\n", 717 | "If the AUPR score (0.5569133421769314) is more than a little bigger than the baseline (0.6563342318059299), which it *is*, then our model is working!\n", 718 | "\n", 719 | "\n", 720 | "\n", 
721 | "race_ethnicity\n", 722 | "area under PR curve: 0.9\n", 723 | "If the AUPR score (0.9040488375018177) is more than a little bigger than the baseline (0.6563342318059299), which it *is*, then our model is working!\n", 724 | "\n", 725 | "\n" 726 | ] 727 | } 728 | ], 729 | "source": [ 730 | "for class_of_interest in [col for col in train_test_data_one_hot.columns if col != \"description\" and col != 'targeted_because']:\n", 731 | " naivebayes_classifier = classifiers[class_of_interest]\n", " test_labels_nb = test_df[class_of_interest] # use this class's labels, not a leftover variable\n", 732 | " predicted_probabilities_nb = naivebayes_classifier.predict_proba(test_features_nb_vec)[:,1]\n", 733 | " predicted_labels_nb = [(1.0 if proba > 0.5 else 0.0) for proba in predicted_probabilities_nb]\n", 734 | "# print(confusion_matrix(test_labels_nb, predicted_labels_nb, labels=[1., 0.]))\n", 735 | " print()\n", 736 | " \n", 737 | " pr_baseline = float(len([a for a in test_labels_nb if a]))/len(test_labels_nb)\n", 738 | " pr_score = average_precision_score(test_labels_nb, predicted_probabilities_nb)\n", 739 | " print(class_of_interest)\n", 740 | " print(\"area under PR curve: \", round(pr_score, 2))\n", 741 | " print(\"If the AUPR score ({}) is more than a little bigger than the baseline ({}), which it *{}*, then our model is working!\".format(pr_score, pr_baseline, \"is\" if pr_score > pr_baseline * 1.1 else \"isn't\" ))\n", 742 | " print()\n", 743 | " print() " 744 | ] 745 | }, 746 | { 747 | "cell_type": "markdown", 748 | "metadata": {}, 749 | "source": [ 750 | "You can see a chart of the precision-recall curve below. 
If we had a different goal, we might rather have false positive than false negatives (or vice versa); the values of precision for each possible recall goal are what is plotted here.\n" 751 | ] 752 | }, 753 | { 754 | "cell_type": "code", 755 | "execution_count": 39, 756 | "metadata": {}, 757 | "outputs": [ 758 | { 759 | "data": { 760 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYoAAAEWCAYAAAB42tAoAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3XuYXXV97/H3Zy7J5B4kBDDkwh3DTSRyOVrEogicCj69KKi1tFR6o7WttcfztEeR1lrr0R571FYqHLwj+LQ+qWJRUcFWoAlXITGIMZAbl0AukMwkmcz3/PFdy70ZZtbsTGbP7Jn5vJ5nP7P32muv9dtrkvWZ3++3fr+liMDMzGwwbWNdADMza20OCjMzq+SgMDOzSg4KMzOr5KAwM7NKDgozM6vkoLD9JulySf8x1uUYaZIelnTuEOsskvS8pPZRKlbTSVon6XXF86slfWGsy2StxUExSUiaKuk6SY9Jek7S/ZIuHOtyNaI4kXUXJ+gnJd0gaeZI7yciToyI7w+xzuMRMTMi9o30/ouT9N7ie26T9ENJZ4/0fsz2l4Ni8ugA1gOvAeYAfwncJGnJGJZpf7wxImYCrwCWkeV/AaXx/m/6K8X3nAd8D7h5jMsz4iR1jHUZbP+M9/9U1qCI2BkRV0fEuojoi4ivAz8DTh/sM5IWSvoXSU9LekbSJwZZ7+OS1kvaIekeSb9Q994ZklYW7z0p6WPF8i5JXyi2u03SCkmHNvA9NgLfBE4qtvN9SR+U9J/ALuAoSXOK2tNmSRsl/XV9U5Gkd0paXdSsVkl6RbG8vglmsHIvkRTlyU7SSyUtl/SspEclvbNuP1dLuknS54p9PSxp2VDfsfievcAXgQWSDqnb5i8VtcGyxnFK3XsD/r4kHS3pu8WyLZK+KGluI+XoT9Ilxf53SPqppAv6H7u67/6FfsfsCkmPA9+V9E1JV/Xb9gOSfrl4foKkbxfHdY2kNw+nvDYyHBSTVHFSPg54eJD324GvA48BS4AFwI2DbG4F8HLgJcCXgJsldRXvfRz4eETMBo4GbiqW/wZZs1kIHAz8LtDdQLkXAhcB99Ut/nXgSmBWUd4bgF7gGOA04Hzgt4vP/xpwNfAOYDZwMfDMALsarNz93QhsAF4K/CrwN5J+se79i4t15gLLgQHDdoDvOaUo4zPA1mLZacD1wO+Qx+zTwHJls2LV70vAh4oyvow85lc3Uo5+ZToD+BzwnuL7nAOs249NvKbY/xuALwOX1W17KbAY+IakGcC3yX9L84FLgU8V69hYiAg/JtkD6AS+A3y6Yp2zgaeBjgHeuxz4j4rPbgVOLZ7fAXwAmNdvnd8Cfgic0kB51wHPA9vIE+GngGnFe98Hrqlb91Bgd/l+sewy4HvF81uBd1Xs53VDlHsJEGRT3kJgHzCr7v0PATcUz68GvlP33lKgu+J7Xg3sKb7nPjIkzq17/x+Bv+r3mTXkCXjQ39cA+3kTcN8g3/tq4AuDfO7TwN8Pdez6b6fumB1V9/4sYCewuHj9QeD64vlbgB8MsO/3j/X/ncn6cI1ikina8D9PnpCuqlv+zaIT9XlJbyNPgo9FNoEMtc0/K5pytkvaRtYU5hVvX0HWXH5cNC/9UrH88+RJ+0ZJm
yT9naTOit28KSLmRsTiiPj9iKivfayve76YDMLNRfPMNvIkM794fyHw06G+U0W5670UeDYinqtb9hj513zpibrnu4AuSR2S3lZ3vL9Zt85NETGXDLyHeGHT4GLg3eX3Kr7bwqIcg/6+JB0q6caiGW4H8AVqv5/90eixG8zPf0/FMfsGWVuADPMvFs8XA2f2+55vAw47gH3bAXCn0iQiScB15EnooojYW74XERf2W/dsYJGkjqqwUPZH/DlwHvBwRPRJ2ko2dxARPwEuKwLql4GvSjo4InaSf7F/QNmhfgv51/F1w/hq9VMgrydrFPMGKfd6simpeoODlLvfapuAl0iaVRcWi4CNDWz/i9ROjAO9v0XSlcBKSV+KiM1F2T8YER/sv/4Qv6+/IY/RyRHxrKQ30WATWD9Vx24nML3u9UAn9f5TVX8ZeL+kO4AusvO+3M/tEfH6YZTRmsA1isnlH8k24jf2+4t8IP8FbAb+VtIMZefzqwZYbxbZH/A00CHpfWTbPwCS3i7pkIjoI5tUAPokvVbSyUXb+g5gL9B3QN8OKE6o3wI+Kmm2pLaiM/c1xSqfAf5M0ulKx0ha3H87g5W7377Wk81nHyqOzylkTWRExiFExBqy1vXnxaJ/Bn5X0plF2WdI+u+SZlH9+5pFNt1tl7SA7GMYjuuA35R0XnFcF0g6oXjvfuBSSZ3KDvtfbWB7t5C1h2vIq73K4/t14DhJv15sr1PSKyW9bJjltgPkoJgkipPh75Cdzk/0a2Z6kchxAm8kO4QfJzts3zLAqrcC/w48Qja79PDCpqALgIclPU92EF9ahNRhwFfJkFgN3E42R42EdwBTgFVkf8lXgcOL73Uz2R7+JeA54GtkJ3x/g5W7v8vINvhNwL+S7ejfGaHvAfAR4EpJ8yNiJfBOsjawFXiU7C8a6vf1AfKy4u1kc8+/DKcgEfFfwG8Cf19s63byRA/wv8jaxtZif19qYHu7i7K8rn79onZ2PtkstYlsvvswMHU45bYDpwjfuMjMzAbnGoWZmVVyUJiZWSUHhZmZVXJQmJlZpXE3jmLevHmxZMmSsS6Gmdm4cs8992yJiEOGXvPFxl1QLFmyhJUrV451MczMxhVJjw33s256MjOzSg4KMzOr5KAwM7NKDgozM6vkoDAzs0oOCjMzq9S0oJB0vaSnJD00yPuS9A/K+ww/qOK+xWZm1lqaWaO4gZyqeTAXAscWjyvJeyU0pK9vdB6eWNfMrIkD7iLijuLOZYO5BPhc5Dznd0maK+nw4sYzg3r+efjBD0awoEM49VSYO3f09mdm1mrGcmT2Al54g5sNxbIXBUVxS8grAebNW8L69dDW5N6VffvgiScyJBwUZjaZjYspPCLiWuBagOOPXxbHHgsdTS55Tw/s2OHmJzOzsQyKjcDCutdH0MBN6VtFXx/s2QO7d0NvL3R3w969WRMp39u7Nx8RcNppMGXKWJfazGz/jWVQLAeuknQjcCawfaj+ibGwfXs2Qe3dm/0jPT352L07l+3ZUwuKPXvyZ0TWeHp7cxu7dsGhh8JQk97u21fb3p49uZ3ubpDyZ3t7LnvpS2HatAP7XhEZaL29Lww3qHXkz5njcDOzJgaFpC8D5wLzJG0A3g90AkTEPwG3ABeRN4jfRd60vWWUJ89162DLlgyH7u5cLmUfydSp0NmZj1mzoKur9rqjI9d5/nl48MH8/LZt+XPPntz2rl35vKyZ7N2bJ+7e3lptpKyxlJ/p7YVXvALOPjvLGVGryeze/eJwaWvL/bS3Z1kiavsqtxdRC7UyqPbtgyOPzH319eXnm93cZ2atqZlXPV02xPsB/EGz9n+gZszIv6Z37crn8+blX/FdXXnSbFRPT550H3gAfvrTPBHv2pUn6zKMIvKE3tGRIVOG0JQpMHNm/uzoyJP/Qw/B44/nemWIlEFR/uzpyeXlz7a23E9HR77f1pb7nDq19rOvL/cj5ff8yU+yNrVzZ263sxPOPbf22bLsr
nGYTXz+G7HCsmUHvo2DDsqT6rZteWKdOhUOPrhW+5gypfEruMrmoieegOeeq9VuyppN+Xz69DzZt7fn8rI2sD9XivX0wKZN8Nhj2alf1ji6umo1n74+OOaYrHn09b24RlQfXpD7P+KILKeZjR8OiiZrb8+/xEeCBGedlSfiZjcDHXlkPgC2boUVK2Dt2gwrKQNo82bYsCEfZVNW+bPszC+bssrAeOMbYcGC5pbdzEaWg2IcGu2+goMOgvPPf/HyvXuztrF6ddY0Imo1pM7OrNnMmZO1nJ074ZFHYOPGWk2o7Jjv/33KvpP6q8hmznQfidlY8X89G7bTTmt83fLEv2oVrFlTq3EcdVQ2R/X1ZZjUX/m1b1+tKWvatKxNzZ498Pb37as1ddUHTEdHBp2ZDZ+DwkbFQQfBiSdmB/ysWVmbuOee7OBfuzZrGOXlv2WfSvlz/foMih074KSTsrZSjlsprx4rQ6Js/io79QHOOQcOP3xsv7/ZeOagsFFz2GEvfP2Lv5gn86lTa1d1DWTxYrj77gyVrVszDMoruSCDo68vm7+k3FZHR4ZS2dT11rc297uZTWQOChszHR3Z9zCUzk549avzcl3IfpDyaq6hrqDq7YVnnoFnn81aja+4Mtt/DgobN+bM2f/PvOQl8PTTsHx5jow//vja8qlT83nZbFWOTJ85s/mTTpqNJw4Km9AWLsxO8vXrc+zJ5s0ZCAcdlJfplv0bu3fXrrZ62ctyfIiZJQeFTXgnnABHH52jzWfPzst0n3kmBxTWD0bs6MgBhtOmZb9IZ+dYl9ysNTgobFLo7ISlS/P5YYfVOsTr7duX82E99hjcfnuGSznosBxEuGdPbb4uyKA5kCuqyst/yw74euU+e3uzb2XGjOHvx+xAOChsUhqoD6Kc8uS552DlymymKue76u2tzc9VBsWuXdm3sWhR3gmxDIxyosb6UCkDobu7tq36CR/b2jLAynEk5TiT8nMR8MpXwiGHjO5xMgMHhdkLnHpq/ly5Mq+UuvvuPImXTVMdHdkJPmtWPu69N4PlmWfg5JNrkyiWfR49Pfm6XF7O11VOBNnZmfOA7d2b/Sble+U0KZ2dGRqbN8P8+Q4KGxsOCrMBLFuWJ/tyRt3BXHhhzoO1ZQvceWee/MvBg+Wsv9OnZ99I/bT0/Ws0PT21cOj/3s6dGUQPPpgDDru6Rv77mlVxUJgNorx8diivfGWe6PdnJuD+qk7+M2ZkbWLLlqzpvPrVw9uH2XD5anGzEdDV1dyxF0uXZpPUj36Uo83r9fVl7WfHjmwu27Qpm6q6u5tXHptcXKMwGwfa2/PmWdu3w9e/nk1jUgZE/f3Zy7sl9vTkGJJf+IVaf0l5DxSz/eWgMBsnTjmlNonivfdmDQMyMMppTdracmT5E09k3wbUrtZqa8vgaGTaFLN6DgqzceToo/MeHuUVU4M1d3V35y1z167NvpYtW7K2sWNHjkjv6MhpTKZNy+2UV2HNnFkbIzJ7dq1Zq7wfe3nFl00uDgqzcWbatKHXOeqofNS7667aHQmnT89lXV05h1Z3dzZXHXJINnOVs/pOm/bCJi0JLrigdgWXTQ4OCrNJ4qyz8mdfX/Z19PRkrWPu3KwlbNxYGwH+6KMZINOn16aBf+qpDIzbbssAmTkzBwO2t+c2Dj44t19eDmwTh4PCbJJpa6vd9a9++pH6e5kPNCni4YdnrWTduqyBlJ3rHR3ZRzJjRu2+ImeemYHR3Z3hUt7BcN68fC1lM9hxx7kpazxwUJhZQ2bMgPPOe/HynTuzOaujo3aJbk9PBkVPTzZZ7d2b65b3Vi/vQvjQQ3kFV3d3vjd7dgZHeSvc6dNz/fJKLilrM/Pm5fsRvpf6aPAhNrMDMmNG7T4fkKPI16/P5QcfnLWNKVMyQKQ8+Xd25piQJ5+EH/6wNmCxoyObsdraMoA6OjI4yqAox4YcfXT+7O3NgJkyJcealM1h+/Y1PmDShuagMLMRdfDBtf6Keoce+
sLXy5blxIrlHFobN2az1tSpGTI7dmRglDWLuXMzFNasyTBoa8tQmjYtP79mTY4dKe8r0tGR7y1alJ/17LvD56AwszFTXn0FeZJfuLD2etGigT+zePGLl913X3bM79yZQfPUU7UA+vGP81LgY4/N7c+c6Vvi7i8HhZmNe6edNvDyLVvggQdyAOKGDXkl1wknZG3GGuegMLMJa9687IDfuzc7zjdsyCu1du2qNWstWlS7QZUNzEFhZhNeZ2fWOrZty0t8778/r5jasyf7Ns4/P6+4isiOcV+y+0IOCjObNObOzZHlpVWrsj/jttuy76K81PYNb/CcWPUcFGY2aZ1wQgbCpk3ZFLVjR16K++1v5z1AurryaqnDDsvaxty5k7Mj3EFhZpNWW1v2UZRXWPX05J0KV6/Oe3qUEyHOmZPvz5qVt7zt7KyNQu/tzQCZyHNfNTUoJF0AfBxoBz4TEX/b7/1FwGeBucU6742IW5pZJjOzwXR1wWtfmyf/7u4Mgvvuy1BYvz5rH08+WZue5KCDamNBDj88ax89Pfn5RYteeLnveNa0oJDUDnwSeD2wAVghaXlErKpb7S+BmyLiHyUtBW4BljSrTGZmjShHhEPewwOyJvHYY/D00xkgzz2XwbJ2bb6/cWPWPsqgePjhHLHe3g5HHJGBcsghOaajvX1svtdwNbNGcQbwaESsBZB0I3AJUB8UAZTzTM4BNjWxPGZmB2Tx4hcP+CunE+npyean9vZstnroIbjnnny9enXtiqr58+GMM/LS3fGimUGxAFhf93oDcGa/da4GviXpD4EZwOsG2pCkK4ErAQ49dJDhmmZmY6irq/b88MNrM/M+9VQGxJYt8JOf5LQjmzfDa16Tl+TOnz825d0fTbwdfEMuA26IiCOAi4DPS3pRmSLi2ohYFhHL5sw5ZNQLaWY2XPPnZ2f3McfAhRdmTWLbNvjWt+BrX8tLdNety8F/raqZNYqNQH1XzhHFsnpXABcARMSdkrqAecBTTSyXmdmYOekkeP75nJvqiSfgu9/NPpEjj8w+jqlT4fTTsxbSKpoZFCuAYyUdSQbEpcBb+63zOHAecIOklwFdwNNNLJOZ2ZibOTOnRT/66JzI8J57crR4OZHhjBlwyiljXcqapgVFRPRKugq4lbz09fqIeFjSNcDKiFgOvBv4Z0l/QnZsXx4R0awymZm1krIG8frX5+tdu+DeezM0jjnmhbPrjqWmjqMoxkTc0m/Z++qerwJe1cwymJmNF+VNnZ58Em6+OS+vPfHEsZ97yiOzzcxayBln5NTomzfnWI1HHskJDU88MUeSjwUHhZlZC5Hg5S/PO/X96Ec5IrynJ6dIX7QoA2O0OSjMzFpQWxuceiosWZL3Fd+2LUNj27a8Kqp+3EazOSjMzFrYnDk5/uLpp2Hlyhxv0ddXm1pkNIz1gDszM2vAIYfkDZZ2787O7tHkoDAzGyfa2/MKqGeeyVu6jhYHhZnZODJjRgbFz342evt0UJiZjSPz5+cgvYceyvt/79zZ/H26M9vMbByZMyfDYt26DImnnsqZaMu78DWDaxRmZuPM0qXwqlfllVCrV8Py5bBnT/P256AwMxuHZs/Oy2ZnzcpaxY9/3Lx9OSjMzMax007L+aE2NfH+oA4KM7NxrK0tb7P62GP5aMo+mrNZMzMbDR0deR/vZ5/NmyCtWpU3RhrRfYzs5szMbLQtXJg1iwcfhDvvzLA4/fS8a95IcFCYmU0ACxZAb2/eYnXz5lw2UkHhpiczswli8eKcLHDevBy9vWrVyGzXQWFmNsEsXJhjLH74Q/jBD3K22QPhoDAzm2AOOwxe+1p44om8//b69Qe2PQeFmdkE1NUF55wD3d3Zb3EgHBRmZhPU1KnZ7LR6NeRNVofHQWFmNkF1dMBJJ5XzQDkozMxsAFKO3D4QDgozswls1qzyqqeZ04a7DQeFmdkENmMGnHkmAG56MjOz5nBQmJlZJQeFmZlVclCYmVklB4WZmVVyU
JiZWaWG70chaQGwuP4zEXFHMwplZmato6GgkPRh4C3AKmBfsTiAyqCQdAHwcaAd+ExE/O0A67wZuLrY3gMR8dZGC29mZs3XaI3iTcDxEbG70Q1Lagc+Cbwe2ACskLQ8IlbVrXMs8D+BV0XEVknzGy+6mZmNhkb7KNYCnfu57TOARyNibUTsAW4ELum3zjuBT0bEVoCIeGo/92FmZk3WaI1iF3C/pNuAn9cqIuKPKj6zAKi/XcYG4Mx+6xwHIOk/yeapqyPi3xssk5mZjYJGg2J58WjG/o8FzgWOAO6QdHJEbKtfSdKVwJUAhx66qAnFMDOzwTQUFBHxWUlTKGoAwJqI2DvExzYCC+teH1Esq7cBuLvY1s8kPUIGx4p++78WuBbg+OOXHeCEuWZmtj8a6qOQdC7wE7Jz+lPAI5LOGeJjK4BjJR1ZhMylvLhW8jWyNoGkeWQQrW208GZm1nyNNj19FDg/ItYASDoO+DJw+mAfiIheSVcBt5L9D9dHxMOSrgFWRsTy4r3zJZWX3b4nIp4Z/tcxM7OR1mhQdJYhARARj0ga8iqoiLgFuKXfsvfVPQ/gT4uHmZm1oEaDYqWkzwBfKF6/DVjZnCKZmVkraTQofg/4A6C8HPYHZF+FmZlNcI1e9bQb+FjxMDOzSaQyKCTdFBFvlvQjci6mF4iIU5pWMjMzawlD1SjeVfz8pWYXxMzMWlPlOIqI2Fw83QKsj4jHgKnAqcCmJpfNzMxaQKOTAt4BdBX3pPgW8OvADc0qlJmZtY5Gg0IRsQv4ZeBTEfFrwInNK5aZmbWKhoNC0tnk+IlvFMvam1MkMzNrJY0GxR+TNxj612IajqOA7zWvWGZm1ioaHUdxO3B73eu11AbfmZnZBDbUOIr/ExF/LOnfGHgcxcVNK5mZmbWEoWoUny9+/u9mF8TMzFpTZVBExD3F05VAd0T0AUhqJ8dTmJnZBNdoZ/ZtwPS619OA74x8cczMrNU0GhRdEfF8+aJ4Pr1ifTMzmyAaDYqdkl5RvpB0OtDdnCKZmVkrafR+FH8M3CxpEyDgMOAtTSuVmZm1jEbHUayQdAJwfLFoTUTsbV6xzMysVTTU9CRpOvA/gHdFxEPAEkmeetzMbBJotI/i/wF7gLOL1xuBv25KiczMrKU0GhRHR8TfAXsBiplk1bRSmZlZy2g0KPZImkYxjYeko4HdTSuVmZm1jEaveno/8O/AQklfBF4FXN6sQpmZWesYMigkCfgxedOis8gmp3dFxJYml83MzFrAkEERESHplog4mdpNi8zMbJJotI/iXkmvbGpJzMysJTXaR3Em8HZJ64CdZPNTRMQpzSqYmZm1hkaD4g1NLYWZmbWsoe5w1wX8LnAM8CPguojoHY2CmZlZaxiqj+KzwDIyJC4EPtr0EpmZWUsZqulpaXG1E5KuA/6r+UUyM7NWMlSN4uczxLrJycxschoqKE6VtKN4PAecUj6XtGOojUu6QNIaSY9Kem/Fer8iKSQt298vYGZmzVXZ9BQR7cPdsKR24JPA64ENwApJyyNiVb/1ZgHvAu4e7r7MzKx5Gh1wNxxnAI9GxNqI2APcCFwywHp/BXwY6GliWczMbJiaGRQLgPV1rzcUy36uuA/3woionBpE0pWSVkpauX370yNfUjMzG1Qzg6KSpDbgY8C7h1o3Iq6NiGURsWzOnEOaXzgzM/u5ZgbFRmBh3esjimWlWcBJwPeLqUHOApa7Q9vMrLU0MyhWAMdKOlLSFOBSYHn5ZkRsj4h5EbEkIpYAdwEXR8TKJpbJzMz2U9OCohh3cRVwK7AauCkiHpZ0jaSLm7VfMzMbWY1OCjgsEXELcEu/Ze8bZN1zm1kWMzMbnjHrzDYzs/HBQWFmZpUcFGZmVslBYWZmlRwUZmZWyUFhZmaVHBRmZlbJQWFmZpUcFGZmVslBYWZmlRwUZmZWyUFhZmaVHBRmZlbJQWFmZpUcFGZmVslBYWZmlRwUZmZWyUFhZmaVHBRmZlbJQ
WFmZpUcFGZmVslBYWZmlRwUZmZWyUFhZmaVHBRmZlbJQWFmZpUcFGZmVslBYWZmlRwUZmZWyUFhZmaVHBRmZlbJQWFmZpWaGhSSLpC0RtKjkt47wPt/KmmVpAcl3SZpcTPLY2Zm+69pQSGpHfgkcCGwFLhM0tJ+q90HLIuIU4CvAn/XrPKYmdnwNLNGcQbwaESsjYg9wI3AJfUrRMT3ImJX8fIu4IgmlsfMzIahmUGxAFhf93pDsWwwVwDfHOgNSVdKWilp5fbtT49gEc3MbCgt0Zkt6e3AMuAjA70fEddGxLKIWDZnziGjWzgzs0muo4nb3ggsrHt9RLHsBSS9DvgL4DURsbuJ5TEzs2FoZo1iBXCspCMlTQEuBZbXryDpNODTwMUR8VQTy2JmZsPUtKCIiF7gKuBWYDVwU0Q8LOkaSRcXq30EmAncLOl+ScsH2ZyZmY2RZjY9ERG3ALf0W/a+uueva+b+zczswLVEZ7aZmbUuB4WZmVVyUJiZWSUHhZmZVXJQmJlZJQeFmZlVclCYmVklB4WZmVVyUJiZWSUHhZmZVXJQmJlZJQeFmZlVclCYmVklB4WZmVVyUJiZWSUHhZmZVXJQmJlZJQeFmZlVclCYmVklB4WZmVVyUJiZWSUHhZmZVXJQmJlZJQeFmZlVclCYmVklB4WZmVVyUJiZWSUHhZmZVXJQmJlZJQeFmZlVclCYmVklB4WZmVVyUJiZWaWmBoWkCyStkfSopPcO8P5USV8p3r9b0pJmlsfMzPZf04JCUjvwSeBCYClwmaSl/Va7AtgaEccAfw98uFnlMTOz4elo4rbPAB6NiLUAkm4ELgFW1a1zCXB18fyrwCckKSJisI1GQE8PdDSz5GZmE8iePQAa9uebebpdAKyve70BOHOwdSKiV9J24GBgS/1Kkq4Erixe7Tn33Nk/hUGzZBLZexB0bh3rUrQGH4saH4saH4skwfOLhvvpcfF3eURcC1wLIGllxI5lY1yklpDHosfHAh+Lej4WNT4WNZJWDvezzezM3ggsrHt9RLFswHUkdQBzgGeaWCYzM9tPzQyKFcCxko6UNAW4FFjeb53lwG8Uz38V+G5V/4SZmY2+pjU9FX0OVwG3Au3A9RHxsKRrgJURsRy4Dvi8pEeBZ8kwGcq1zSrzOORjUeNjUeNjUeNjUTPsYyH/AW9mZlU8MtvMzCo5KMzMrFLLBoWn/6hp4Fj8qaRVkh6UdJukxWNRztEw1LGoW+9XJIWkCXtpZCPHQtKbi38bD0v60miXcbQ08H9kkaTvSbqv+H9y0ViUs9kkXS/pKUkPDfK+JP1DcZwelPSKhjYcES33IDu/fwocBUwBHgCW9lvn94F/Kp5fCnxlrMs9hsfitcD04vnvTeZjUaw3C7gDuAtYNtblHsN/F8cC9wEHFa/nj3W5x/BYXAv8XvF8KbBurMvdpGNxDvAK4KFB3r8I+CY5TPss4O6J0SNUAAAD0ElEQVRGttuqNYqfT/8REXuAcvqPepcAny2efxU4T9Lwx6i3riGPRUR8LyJ2FS/vIsesTESN/LsA+Cty3rCe0SzcKGvkWLwT+GREbAWIiKdGuYyjpZFjEcDs4vkcYNMolm/URMQd5BWkg7kE+Fyku4C5kg4farutGhQDTf+xYLB1IqIXKKf/mGgaORb1riD/YpiIhjwWRVV6YUR8YzQLNgYa+XdxHHCcpP+UdJekC0atdKOrkWNxNfB2SRuAW4A/HJ2itZz9PZ8A42QKD2uMpLcDy4DXjHVZxoKkNuBjwOVjXJRW0UE2P51L1jLvkHRyRGwb01KNjcuAGyLio5LOJsdvnRQRfWNdsPGgVWsUnv6jppFjgaTXAX8BXBwRu0epbKNtqGMxCzgJ+L6kdWQb7PIJ2qHdyL+LDcDyiNgbET8DHiGDY6Jp5FhcAdwEEBF3Al3AvFEpXWtp6HzSX6sGhaf/qBnyWEg6Dfg0GRITtR0ahjgWEbE9IuZFxJKIWEL211wcEcOeD
K2FNfJ/5GtkbQJJ88imqLWjWchR0sixeBw4D0DSy8igeHpUS9kalgPvKK5+OgvYHhGbh/pQSzY9RfOm/xh3GjwWHwFmAjcX/fmPR8TFY1boJmnwWEwKDR6LW4HzJa0C9gHviYgJV+tu8Fi8G/hnSX9CdmxfPhH/sJT0ZfKPg3lFf8z7gU6AiPgnsn/mIuBRYBfwmw1tdwIeKzMzG0Gt2vRkZmYtwkFhZmaVHBRmZlbJQWFmZpUcFGZmVslBYdaPpH2S7pf0kKR/kzR3hLd/uaRPFM+vlvRnI7l9s5HmoDB7se6IeHlEnESO0fmDsS6Q2VhyUJhVu5O6SdMkvUfSimIu/w/ULX9HsewBSZ8vlr2xuFfKfZK+I+nQMSi/2QFryZHZZq1AUjs57cN1xevzybmSziDn818u6RxyjrG/BP5bRGyR9JJiE/8BnBURIem3gT8nRwibjSsOCrMXmybpfrImsRr4drH8/OJxX/F6JhkcpwI3R8QWgIgo7wdwBPCVYr7/KcDPRqf4ZiPLTU9mL9YdES8HFpM1h7KPQsCHiv6Ll0fEMRFxXcV2/i/wiYg4GfgdciI6s3HHQWE2iOKugX8EvLuYyv5W4LckzQSQtEDSfOC7wK9JOrhYXjY9zaE2hfNvYDZOuenJrEJE3CfpQeCyiPh8MUX1ncUsvc8Dby9mKv0gcLukfWTT1OXkXdVulrSVDJMjx+I7mB0ozx5rZmaV3PRkZmaVHBRmZlbJQWFmZpUcFGZmVslBYWZmlRwUZmZWyUFhZmaV/j9AzbimxYT7AQAAAABJRU5ErkJggg==\n", 761 | "text/plain": [ 762 | "
" 763 | ] 764 | }, 765 | "metadata": { 766 | "needs_background": "light" 767 | }, 768 | "output_type": "display_data" 769 | } 770 | ], 771 | "source": [ 772 | "pr_chart(test_labels_nb, predicted_probabilities_nb)" 773 | ] 774 | }, 775 | { 776 | "cell_type": "markdown", 777 | "metadata": {}, 778 | "source": [ 779 | "## Keras Neural Nets\n", 780 | "via https://www.tensorflow.org/tutorials/keras/basic_text_classification\n", 781 | "\n", 782 | "Neural nets are very trendy and for good reason: they're very powerful and can \"learn\" patterns that are too complicated for Naive Bayes.\n", 783 | "\n", 784 | "Neural nets are a category, not an individual model algorithm. We're going to try two different ones:\n", 785 | "\n", 786 | " - a basic neural net\n", 787 | " - a convolutional neural net\n", 788 | " \n", 789 | "Both networks learn \"embeddings\" for each word in our tips. These are akin to word2vec-style vectors, but where those vectors are trained on a large general-purpose dataset, ours are trained just for this purpose (and trained on a lot less data). I tried using word2vec vectors, but it didn't work as well. The vectors play the same role as the TfidfVectorizer output we used for Naive Bayes: a numeric representation of each tip.\n", 790 | "\n", 791 | "The convolutional neural net does better than the basic one, likely because it takes into account each word's context.\n", 792 | "\n", 793 | "(I also tried an LSTM, but I couldn't get it to work! I suspect that's because I don't have enough data.)" 794 | ] 795 | }, 796 | { 797 | "cell_type": "markdown", 798 | "metadata": {}, 799 | "source": [ 800 | "Here are some shared settings for both kinds of models." 
801 | ] 802 | }, 803 | { 804 | "cell_type": "code", 805 | "execution_count": 45, 806 | "metadata": {}, 807 | "outputs": [], 808 | "source": [ 809 | "from __future__ import absolute_import, division, print_function\n", 810 | "WORDS_TO_KEEP = 10000 # keep only the 10,000 most common words\n", 811 | "tokenizer = keras.preprocessing.text.Tokenizer(num_words=WORDS_TO_KEEP)\n", 812 | "VALIDATION_SET_SIZE = 1000\n", 813 | "SHOULD_EQUALIZE = True\n", 814 | "VOCAB_SIZE = WORDS_TO_KEEP + 3\n", 815 | "MAX_SEQUENCE_LENGTH = 256" 816 | ] 817 | }, 818 | { 819 | "cell_type": "markdown", 820 | "metadata": {}, 821 | "source": [ 822 | "### Step 5: Formatting the data in the way that our chosen algorithm requires it. \n", 823 | "\n", 824 | "Unlike Naive Bayes, the neural nets take the words as a list of numbers, where each number corresponds directly to a token (aka word). Each list has to be the same length, so we have a special padding character that gets added at the end of shorter tips. So a tip encoded for Keras might look like this: `[46, 3449, 9, 172, 15, 6, 1054, 0, 0, 0 ... 
0, 0]`" 825 | ] 826 | }, 827 | { 828 | "cell_type": "code", 829 | "execution_count": 46, 830 | "metadata": {}, 831 | "outputs": [ 832 | { 833 | "name": "stdout", 834 | "output_type": "stream", 835 | "text": [ 836 | "We have 16135 total words.\n" 837 | ] 838 | } 839 | ], 840 | "source": [ 841 | "# via https://www.tensorflow.org/tutorials/keras/basic_text_classification\n", 842 | "train_df, test_df = train_test_split(train_test_data_one_hot, test_size=0.2, shuffle=True, random_state=RANDOM_SEED)\n", 843 | "\n", 844 | "tokenizer.fit_on_texts(train_test_data_one_hot[\"description\"])\n", 845 | "print(\"We have {} total words.\".format(max(tokenizer.word_index.values())))\n", 846 | "\n", 847 | "word_index = {k:(v+3) for k,v in tokenizer.word_index.items()} \n", 848 | "word_index[\"<PAD>\"] = 0\n", 849 | "word_index[\"<START>\"] = 1\n", 850 | "word_index[\"<UNK>\"] = 2 # unknown\n", 851 | "word_index[\"<UNUSED>\"] = 3\n", 852 | "\n", 853 | "reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])\n", 854 | "\n", 855 | "def decode_text(text):\n", 856 | " return ' '.join([reverse_word_index.get(i, '?') for i in text])\n", 857 | "def encode_texts(texts):\n", 858 | " return [[word_index['<START>']] + [idx + 3 for idx in list(idxs)] for idxs in tokenizer.texts_to_sequences(texts)[:]]\n", 859 | "def encode_text(text):\n", 860 | " return encode_texts([text])[0]" 861 | ] 862 | }, 863 | { 864 | "cell_type": "markdown", 865 | "metadata": {}, 866 | "source": [ 867 | "These are some helper methods we'll use for all the kinds of neural nets. The `train_keras_model` method combines steps 5-7: preparing the input data, training the model, and printing out evaluation stats." 
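The encode-and-pad step described above can be sketched in plain Python with a tiny, invented `word_index` (real ids come from the tokenizer; the special tokens follow the TensorFlow tutorial linked above):

```python
# Toy version of texts_to_sequences + pad_sequences: words -> ids, then pad.
word_index = {"<PAD>": 0, "<START>": 1, "<UNK>": 2, "a": 4, "slur": 5, "threat": 6}

def encode(text, maxlen=8):
    ids = [word_index["<START>"]] + [word_index.get(w, word_index["<UNK>"]) for w in text.split()]
    ids = ids[:maxlen]                                        # truncate long tips
    return ids + [word_index["<PAD>"]] * (maxlen - len(ids))  # 'post' padding

print(encode("a slur and a threat"))  # [1, 4, 5, 2, 4, 6, 0, 0]
```

Every tip comes out the same length, which is what the Keras layers require; out-of-vocabulary words (here, "and") become the unknown token.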
868 | ] 869 | }, 870 | { 871 | "cell_type": "code", 872 | "execution_count": 50, 873 | "metadata": { 874 | "scrolled": true 875 | }, 876 | "outputs": [], 877 | "source": [ 878 | "def equalize_classes_keras(predictor, response):\n", 879 | " return SMOTE(random_state=RANDOM_SEED).fit_sample(predictor, response)\n", 880 | "\n", 881 | "def train_keras_model(model_fn, train_df, test_df, epochs=40, should_equalize=SHOULD_EQUALIZE, classes_of_interest=None):\n", 882 | " # several of the algorithms we test here make use of randomness. and we split the data into train/test groups randomly.\n", 883 | " # in order to make sure that every time we run this notebook, we get the same results (rather than a\n", 884 | " # \"good\" split making one algorithm choice seem better), we set an arbitrary number (1234) as the seed for all the \n", 885 | " # random number generators.\n", 886 | " keras.backend.clear_session()\n", 887 | " np.random.seed(RANDOM_SEED)\n", 888 | " random.seed(RANDOM_SEED)\n", 889 | " tf.set_random_seed(RANDOM_SEED)\n", 890 | " \n", 891 | " histories = {}\n", 892 | "\n", 893 | " # preparing the input data.\n", 894 | " train_data = np.array(encode_texts(train_df[\"description\"]))\n", 895 | " test_data = np.array(encode_texts(test_df[\"description\"]))\n", 896 | " train_data = keras.preprocessing.sequence.pad_sequences(train_data,\n", 897 | " value=word_index[\"<PAD>\"],\n", 898 | " padding='post',\n", 899 | " maxlen=256)\n", 900 | " test_data = keras.preprocessing.sequence.pad_sequences(test_data,\n", 901 | " value=word_index[\"<PAD>\"],\n", 902 | " padding='post',\n", 903 | " maxlen=256)\n", 904 | " for class_of_interest in classes_of_interest:\n", 905 | " train_labels = train_df[class_of_interest]\n", 906 | " test_labels = test_df[class_of_interest]\n", 907 | "\n", 908 | " if should_equalize:\n", 909 | " equalized_train_data, equalized_train_labels = equalize_classes_keras(train_data, train_labels)\n", 910 | " else:\n", 911 | " equalized_train_data = 
train_data.copy()\n", 912 | " equalized_train_labels = train_labels.copy()\n", 913 | "\n", 914 | " x_val = equalized_train_data[:VALIDATION_SET_SIZE]\n", 915 | " partial_x_train = equalized_train_data[VALIDATION_SET_SIZE:]\n", 916 | "\n", 917 | " y_val = equalized_train_labels[:VALIDATION_SET_SIZE]\n", 918 | " partial_y_train = equalized_train_labels[VALIDATION_SET_SIZE:]\n", 919 | " model = model_fn()\n", 920 | " print(class_of_interest)\n", 921 | " history = model.fit(partial_x_train,\n", 922 | " partial_y_train,\n", 923 | " epochs=epochs,\n", 924 | " batch_size=256,\n", 925 | " validation_data=(x_val, y_val),\n", 926 | " verbose=0\n", 927 | " )\n", 928 | " results = model.evaluate(test_data, test_labels)\n", 929 | " histories[class_of_interest] = history\n", 930 | " predicted_probabilities = model.predict(test_data)\n", 931 | " predicted_labels = [1.0 if proba > 0.5 else 0.0 for proba in predicted_probabilities]\n", 932 | " # print(confusion_matrix(test_labels, predicted_labels, labels=[1., 0.]))\n", 933 | " pr_score = average_precision_score(test_labels, predicted_probabilities)\n", 934 | " pr_baseline = float(len([a for a in test_labels if a]))/len(test_labels)\n", 935 | " print(\"Area under PR curve: \", round(pr_score, 2))\n", 936 | " print(\"If the AUPR score ({}) is more than a little bigger than the baseline ({}), which it *{}*, then our model is working!\".format(round(pr_score, 2), round(pr_baseline, 2), \"is\" if pr_score > pr_baseline * 1.1 else \"isn't\" ))\n", 937 | "\n", 938 | " print()\n", 939 | " print()\n", 940 | " return [histories, test_labels, predicted_probabilities]" 941 | ] 942 | }, 943 | { 944 | "cell_type": "code", 945 | "execution_count": 51, 946 | "metadata": {}, 947 | "outputs": [], 948 | "source": [ 949 | "def embedding_layer(word2vec=False):\n", 950 | " if word2vec: \n", 951 | " EMBEDDING_DIM = 200\n", 952 | " from gensim.models import Word2Vec\n", 953 | " w2v = Word2Vec.load(\"my_word2vec_model.bin\")\n", 954 | " embedding_matrix 
= np.zeros((len(word_index) + 1, EMBEDDING_DIM))\n", 955 | " for word, i in word_index.items():\n", 956 | " embedding_vector = w2v[word.lower()] if word.lower() in w2v else None\n", 957 | " if embedding_vector is not None:\n", 958 | " # words not found in the embedding stay all-zeros (from np.zeros above)\n", 959 | " embedding_matrix[i] = embedding_vector\n", 960 | "\n", 961 | " return keras.layers.Embedding(len(word_index) + 1,\n", 962 | " EMBEDDING_DIM,\n", 963 | " weights=[embedding_matrix],\n", 964 | " input_length=MAX_SEQUENCE_LENGTH,\n", 965 | " trainable=False) # keep the pretrained vectors frozen\n", 966 | " else: \n", 967 | " return keras.layers.Embedding(VOCAB_SIZE, 16)\n" 968 | ] 969 | }, 970 | { 971 | "cell_type": "markdown", 972 | "metadata": {}, 973 | "source": [ 974 | "### Basic Keras NN\n", 975 | "\n", 976 | "This is a very basic neural net. " 977 | ] 978 | }, 979 | { 980 | "cell_type": "code", 981 | "execution_count": 54, 982 | "metadata": {}, 983 | "outputs": [], 984 | "source": [ 985 | "def basic_model():\n", 986 | " model = keras.Sequential()\n", 987 | " model.add(embedding_layer())\n", 988 | " model.add(keras.layers.GlobalAveragePooling1D())\n", 989 | " model.add(keras.layers.Dense(16, activation=tf.nn.relu))\n", 990 | " model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))\n", 991 | " model.compile(optimizer='adam',\n", 992 | " loss='binary_crossentropy',\n", 993 | " metrics=['accuracy'])\n", 994 | " # model.summary()\n", 995 | " return model" 996 | ] 997 | }, 998 | { 999 | "cell_type": "markdown", 1000 | "metadata": {}, 1001 | "source": [ 1002 | "As you can see, this model doesn't do as well as Naive Bayes. It also takes a lot longer to train, hence the progress bars."
1003 | ] 1004 | }, 1005 | { 1006 | "cell_type": "code", 1007 | "execution_count": 55, 1008 | "metadata": {}, 1009 | "outputs": [ 1010 | { 1011 | "name": "stdout", 1012 | "output_type": "stream", 1013 | "text": [ 1014 | "742/742 [==============================] - 0s 32us/sample - loss: 0.7459 - acc: 0.2615\n", 1015 | "religion\n", 1016 | "Area under PR curve: 0.34\n", 1017 | "PR Baseline : 0.24123989218328842\n", 1018 | "If the AUPR score (0.34) is more than a little bigger than the baseline (0.24), which it *is*, then our model is working!\n", 1019 | "\n", 1020 | "\n", 1021 | "742/742 [==============================] - 0s 43us/sample - loss: 0.7171 - acc: 0.3518\n", 1022 | "ethnicity\n", 1023 | "Area under PR curve: 0.43\n", 1024 | "PR Baseline : 0.35175202156334234\n", 1025 | "If the AUPR score (0.43) is more than a little bigger than the baseline (0.35), which it *is*, then our model is working!\n", 1026 | "\n", 1027 | "\n", 1028 | "742/742 [==============================] - 0s 35us/sample - loss: 0.8048 - acc: 0.1685\n", 1029 | "sexual-orientation\n", 1030 | "Area under PR curve: 0.17\n", 1031 | "PR Baseline : 0.16711590296495957\n", 1032 | "If the AUPR score (0.17) is more than a little bigger than the baseline (0.17), which it *is*, then our model is working!\n", 1033 | "\n", 1034 | "\n", 1035 | "742/742 [==============================] - 0s 84us/sample - loss: 0.8311 - acc: 0.1105\n", 1036 | "gender\n", 1037 | "Area under PR curve: 0.1\n", 1038 | "PR Baseline : 0.1105121293800539\n", 1039 | "If the AUPR score (0.1) is more than a little bigger than the baseline (0.11), which it *is*, then our model is working!\n", 1040 | "\n", 1041 | "\n", 1042 | "742/742 [==============================] - 0s 53us/sample - loss: 0.8521 - acc: 0.0283\n", 1043 | "disability\n", 1044 | "Area under PR curve: 0.03\n", 1045 | "PR Baseline : 0.02830188679245283\n", 1046 | "If the AUPR score (0.03) is more than a little bigger than the baseline (0.03), which it *is*, then our model 
is working!\n", 1047 | "\n", 1048 | "\n", 1049 | "742/742 [==============================] - 0s 51us/sample - loss: 0.7844 - acc: 0.2143\n", 1050 | "immigrant\n", 1051 | "Area under PR curve: 0.29\n", 1052 | "PR Baseline : 0.21428571428571427\n", 1053 | "If the AUPR score (0.29) is more than a little bigger than the baseline (0.21), which it *is*, then our model is working!\n", 1054 | "\n", 1055 | "\n", 1056 | "742/742 [==============================] - 0s 42us/sample - loss: 0.6918 - acc: 0.5418\n", 1057 | "race\n", 1058 | "Area under PR curve: 0.56\n", 1059 | "PR Baseline : 0.48517520215633425\n", 1060 | "If the AUPR score (0.56) is more than a little bigger than the baseline (0.49), which it *is*, then our model is working!\n", 1061 | "\n", 1062 | "\n", 1063 | "742/742 [==============================] - 0s 51us/sample - loss: 0.7289 - acc: 0.4111\n", 1064 | "race_ethnicity\n", 1065 | "Area under PR curve: 0.7\n", 1066 | "PR Baseline : 0.6563342318059299\n", 1067 | "If the AUPR score (0.7) is more than a little bigger than the baseline (0.66), which it *is*, then our model is working!\n", 1068 | "\n", 1069 | "\n" 1070 | ] 1071 | } 1072 | ], 1073 | "source": [ 1074 | "histories, test_labels, predicted_probas = train_keras_model(basic_model, train_df, test_df, epochs=5, classes_of_interest=list(set(all_values)) + ['race_ethnicity'])" 1075 | ] 1076 | }, 1077 | { 1078 | "cell_type": "markdown", 1079 | "metadata": {}, 1080 | "source": [ 1081 | "## Keras CNN:\n", 1082 | "\n" 1083 | ] 1084 | }, 1085 | { 1086 | "cell_type": "markdown", 1087 | "metadata": {}, 1088 | "source": [ 1089 | "### Step 6. Looking at the results and deciding if it’s good enough or not -- and if it isn’t, repeating steps 2-6 as necessary.\n", 1090 | "\n", 1091 | "Convolutional neural nets were revolutionary in how they improved neural nets' performance with text by examining several words at once. 
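The "examining several words at once" idea can be sketched in plain Python. This toy example (illustrative shapes and random values, not the notebook's actual model) slides one filter over every window of 5 consecutive word vectors, which is roughly what a `Conv1D` layer with `kernel_size=5` does:

```python
import random

# Toy numbers: a 10-"word" document with 4-dim embeddings, and one
# convolutional filter that looks at 5 consecutive words at a time.
random.seed(0)
SEQ_LEN, EMB_DIM, KERNEL_SIZE = 10, 4, 5
embedded = [[random.gauss(0, 1) for _ in range(EMB_DIM)] for _ in range(SEQ_LEN)]
kernel = [[random.gauss(0, 1) for _ in range(EMB_DIM)] for _ in range(KERNEL_SIZE)]

def window_activation(window, kernel):
    # Dot product of one 5-word window with the filter: one number per window.
    return sum(w * k for row_w, row_k in zip(window, kernel)
                     for w, k in zip(row_w, row_k))

# Slide the filter across the document, one window of 5 words at a time.
feature_map = [window_activation(embedded[i:i + KERNEL_SIZE], kernel)
               for i in range(SEQ_LEN - KERNEL_SIZE + 1)]

print(len(feature_map))  # 6 windows for a 10-word document
```

A real `Conv1D` layer learns many such filters at once and applies a nonlinearity, but the sliding-window dot product is the core of it.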
Other people undoubtedly would do a better job explaining how that works!\n", 1092 | "\n", 1093 | "While the performance of CNNs can be great, they have a LOT of settings that you can fiddle with to change how well they do. Learning rates, embedding dimensions, the number of epochs, etc. And they even depend on starting off with a certain amount of randomness, so running the same model twice on the same data (if you're not careful) can get you different results! \n", 1094 | "\n", 1095 | "Other tutorials will do a better job explaining how to tune those settings too, though below I discuss using a \"grid search\" to try to help.\n", 1096 | "\n", 1097 | "### Repeatability\n", 1098 | "\n", 1099 | "However, I went down the rabbit hole of making my runs repeatable: this is really hard because there are several sources of randomness, and everything in a Jupyter notebook happens within the same Python session. In the process, I learned something important about _keras_ sessions -- a model is built within a session, so when you train a model twice within the same session, you're really just (it seems?) training the same model at double the epochs. 
So to make stuff repeatable, we need to re-set the random seeds AND start a new session AND re-declare the model.\n", 1100 | "```\n", 1101 | " keras.backend.clear_session()\n", 1102 | " np.random.seed(RANDOM_SEED)\n", 1103 | " random.seed(RANDOM_SEED)\n", 1104 | " tf.set_random_seed(RANDOM_SEED)\n", 1105 | "```" 1106 | ] 1107 | }, 1108 | { 1109 | "cell_type": "code", 1110 | "execution_count": 76, 1111 | "metadata": {}, 1112 | "outputs": [], 1113 | "source": [ 1114 | "def cnn_model(learning_rate=0.001, dropout_embedding=0.0, dropout1=0.2, dropout2=0.2, dropout3=0.2, embedding_dim=32, num_filters=32, kernel_size=5 ):\n", 1115 | " keras.backend.clear_session()\n", 1116 | "\n", 1117 | " adam = keras.optimizers.Adam(lr=learning_rate) # default lr = 0.001\n", 1118 | "\n", 1119 | " model = keras.Sequential()\n", 1120 | " model.add(keras.layers.Embedding(VOCAB_SIZE, embedding_dim, input_length=MAX_SEQUENCE_LENGTH))\n", 1121 | " # dropout here doesn't help.\n", 1122 | " # model.add(keras.layers.Dropout(dropout_embedding))\n", 1123 | " model.add(keras.layers.Conv1D(num_filters, kernel_size, activation='relu'))\n", 1124 | " model.add(keras.layers.Dropout(dropout1)) # 0.4 works better than 0.5 here (but not dramatically); 0.3 does better than 0.4 maybe?, 0.2 actually does better than both (and better than NB, at 86/92)\n", 1125 | " model.add(keras.layers.GlobalMaxPooling1D())\n", 1126 | " model.add(keras.layers.Dropout(dropout2))\n", 1127 | " model.add(keras.layers.Dense(10, activation='relu'))\n", 1128 | " model.add(keras.layers.Dropout(dropout3))\n", 1129 | " model.add(keras.layers.Dense(1, activation='sigmoid'))\n", 1130 | " model.compile(optimizer=adam,\n", 1131 | " loss='binary_crossentropy',\n", 1132 | " metrics=['accuracy'])\n", 1133 | " # model.summary()\n", 1134 | " return model" 1135 | ] 1136 | }, 1137 | { 1138 | "cell_type": "code", 1139 | "execution_count": 77, 1140 | "metadata": {}, 1141 | "outputs": [ 1142 | { 1143 | "name": "stdout", 1144 | "output_type": 
"stream", 1145 | "text": [ 1146 | "742/742 [==============================] - 0s 165us/sample - loss: 0.3474 - acc: 0.8747\n", 1147 | "religion\n", 1148 | "Area under PR curve: 0.77\n", 1149 | "PR Baseline : 0.24123989218328842\n", 1150 | "If the AUPR score (0.77) is more than a little bigger than the baseline (0.24), which it *is*, then our model is working!\n", 1151 | "\n", 1152 | "\n", 1153 | "742/742 [==============================] - 0s 170us/sample - loss: 0.5497 - acc: 0.7480\n", 1154 | "ethnicity\n", 1155 | "Area under PR curve: 0.66\n", 1156 | "PR Baseline : 0.35175202156334234\n", 1157 | "If the AUPR score (0.66) is more than a little bigger than the baseline (0.35), which it *is*, then our model is working!\n", 1158 | "\n", 1159 | "\n", 1160 | "742/742 [==============================] - 0s 164us/sample - loss: 0.2411 - acc: 0.9272\n", 1161 | "sexual-orientation\n", 1162 | "Area under PR curve: 0.79\n", 1163 | "PR Baseline : 0.16711590296495957\n", 1164 | "If the AUPR score (0.79) is more than a little bigger than the baseline (0.17), which it *is*, then our model is working!\n", 1165 | "\n", 1166 | "\n", 1167 | "742/742 [==============================] - 0s 203us/sample - loss: 0.3429 - acc: 0.8895\n", 1168 | "gender\n", 1169 | "Area under PR curve: 0.2\n", 1170 | "PR Baseline : 0.1105121293800539\n", 1171 | "If the AUPR score (0.2) is more than a little bigger than the baseline (0.11), which it *is*, then our model is working!\n", 1172 | "\n", 1173 | "\n", 1174 | "742/742 [==============================] - 0s 115us/sample - loss: 0.1347 - acc: 0.9717\n", 1175 | "disability\n", 1176 | "Area under PR curve: 0.11\n", 1177 | "PR Baseline : 0.02830188679245283\n", 1178 | "If the AUPR score (0.11) is more than a little bigger than the baseline (0.03), which it *is*, then our model is working!\n", 1179 | "\n", 1180 | "\n", 1181 | "742/742 [==============================] - 0s 94us/sample - loss: 0.4530 - acc: 0.7857\n", 1182 | "immigrant\n", 1183 | "Area under 
PR curve: 0.48\n", 1184 | "PR Baseline : 0.21428571428571427\n", 1185 | "If the AUPR score (0.48) is more than a little bigger than the baseline (0.21), which it *is*, then our model is working!\n", 1186 | "\n", 1187 | "\n", 1188 | "742/742 [==============================] - 0s 158us/sample - loss: 0.5393 - acc: 0.7385\n", 1189 | "race\n", 1190 | "Area under PR curve: 0.8\n", 1191 | "PR Baseline : 0.48517520215633425\n", 1192 | "If the AUPR score (0.8) is more than a little bigger than the baseline (0.49), which it *is*, then our model is working!\n", 1193 | "\n", 1194 | "\n", 1195 | "742/742 [==============================] - 0s 116us/sample - loss: 0.4752 - acc: 0.7682\n", 1196 | "race_ethnicity\n", 1197 | "Area under PR curve: 0.9\n", 1198 | "PR Baseline : 0.6563342318059299\n", 1199 | "If the AUPR score (0.9) is more than a little bigger than the baseline (0.66), which it *is*, then our model is working!\n", 1200 | "\n", 1201 | "\n" 1202 | ] 1203 | } 1204 | ], 1205 | "source": [ 1206 | "histories_cnn, test_labels_cnn, predicted_probas_cnn = train_keras_model(cnn_model, train_df, test_df, epochs=20, should_equalize=False, classes_of_interest=(list(set(all_values))) + [\"race_ethnicity\"])" 1207 | ] 1208 | }, 1209 | { 1210 | "cell_type": "markdown", 1211 | "metadata": {}, 1212 | "source": [ 1213 | "\n", 1214 | "\n", 1215 | "### Dropout (and epochs)\n", 1216 | "\n", 1217 | "The goal of a model is to learn how to _generalize_. We're going to use a chart of training loss versus validation loss to help us understand if the model is generalizing, i.e. if it's doing the right thing. If it's not, then we'll know which settings to fiddle with.\n", 1218 | "\n", 1219 | "Suppose many of the religion-related tips in our training data mention grocery stores. That's not, logically, true of ALL religion-related hate incidents; it's just a coincidence that that's how our training data got chosen. 
We want the model to correctly learn the real signs of what makes a hate incident related to religion, but not get confused by the coincidental co-occurrence of grocery stores with religion-related tips. If the model has generalized, it's learned what _really_ distinguishes religion-related tips. But if it focuses wrongly on grocery stores, then it is \"overfit\" -- that is, focused on patterns that are true in the training data, but not the patterns that are true in real life.\n", 1220 | "\n", 1221 | "The way neural nets handle this problem is by having a \"validation set\" -- chosen like the training data, but not used for training. Unless we're really unlucky, the religion-related tips in the validation set won't mention grocery stores... but they will mention the true signs of religion-related hate incidents (mosques, synagogues, etc.). \n", 1222 | "\n", 1223 | "In essence, if performance on the validation set is close to performance on the training set, then we're doing well. If it's much worse, then the model is either undertrained or overfit.\n", 1224 | "\n", 1225 | "The solution to undertraining is to train more. We do that by increasing the number of epochs. We know the model is undertrained if the \"validation loss\" is steadily decreasing on the chart below.\n", 1226 | "\n", 1227 | "The solution to overfitting is to make the model more \"forgetful\" -- we want the model to learn multiple ways of determining what's a religion-related hate crime. Not just the names of houses of worship (and grocery stores, the spurious indicator) but other words that provide some signal. To do that, we force the model to ignore some things with a \"dropout layer\". 
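What a dropout layer does at training time can be sketched in a few lines of plain Python (illustrative only -- Keras handles this internally, and the 0.2 rate here is just a toy value): each activation has some chance of being zeroed, so the model can't lean too hard on any single feature.

```python
import random

# Minimal sketch of training-time dropout with rate 0.2: each activation is
# zeroed with 20% probability; survivors are rescaled ("inverted" dropout)
# so the expected total activation stays the same.
random.seed(42)
RATE = 0.2
activations = [1.0] * 1000

dropped = [0.0 if random.random() < RATE else a / (1 - RATE)
           for a in activations]

zeroed_fraction = dropped.count(0.0) / len(dropped)
print(round(zeroed_fraction, 2))  # close to 0.2
```

At prediction time, dropout is turned off and every activation passes through.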
Increasing the amount of dropout in the model's dropout layers lets it forget more and is one possible solution to overfitting.\n", 1228 | "\n", 1229 | "You can read more about this at these links:\n", 1230 | "\n", 1231 | " - https://stats.stackexchange.com/questions/187335/validation-error-less-than-training-error/187404#187404\n", 1232 | " - https://forums.fast.ai/t/determining-when-you-are-overfitting-underfitting-or-just-right/7732/6" 1233 | ] 1234 | }, 1235 | { 1236 | "cell_type": "code", 1237 | "execution_count": 78, 1238 | "metadata": {}, 1239 | "outputs": [ 1240 | { 1241 | "data": { 1242 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYUAAAEWCAYAAACJ0YulAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3Xl8lPW1x/HPYRORyK4oIEGgbIKIKeKCLFKLG4uigqGKG+qrVFuvthTqUiq3Lly0WG6vWMUNRatVUUHUSkVtVZYiLoAgYEUQARVBtBo494/fk2EIk2RCMkuS7/v1mldmnnnmmTOTyZw8v+X8zN0REREBqJHpAEREJHsoKYiISIySgoiIxCgpiIhIjJKCiIjEKCmIiEiMkoJUKDOraWbbzeywitw3k8ysnZlV+NhtMxtgZmvjbq8ws97J7LsPz/VnMxu3r48v4bg3mdl9FX1cyZxamQ5AMsvMtsfdrAf8B9gZ3b7M3WeU5XjuvhOoX9H7Vgfu3qEijmNmlwAj3b1v3LEvqYhjS9WnpFDNuXvsSzn6T/QSd3+puP3NrJa7F6QjNhFJPzUfSYmi5oFHzewRM9sGjDSzY83sDTP70sw2mNkUM6sd7V/LzNzMcqPbD0X3zzGzbWb2TzNrU9Z9o/tPMbMPzGyrmd1pZq+b2ahi4k4mxsvMbJWZfWFmU+IeW9PMbjezLWa2GhhYwvsz3sxmFtk21cwmR9cvMbNl0ev5MPovvrhjrTOzvtH1emb2YBTbe8DRRfb9jZmtjo77npkNirZ3Bf4I9I6a5jbHvbc3xj3+8ui1bzGzp8zskGTem9KY2dAoni/N7GUz6xB33zgzW29mX5nZ8rjX2svMFkfbN5rZbck+n6SAu+uiC+4OsBYYUGTbTcB3wBmEfyL2B34IHEM40zwc+AAYE+1fC3AgN7r9ELAZyANqA48CD+3DvgcB24DB0X1XA98Do4p5LcnE+DTQAMgFPi987cAY4D2gJdAEmB/+VBI+z+HAduCAuGN/BuRFt8+I9jGgP/AN0C26bwCwNu5Y64C+0fVJwN+BRkBr4P0i+54DHBL9Ts6LYjg4uu8S4O9F4nwIuDG6fnIUY3egLvC/wMvJvDcJXv9NwH3R9U5RHP2j39E4YEV0vQvwEdA82rcNcHh0fQEwIrqeAxyT6b+F6nzRmYIk4zV3f8bdd7n7N+6+wN3fdPcCd18NTAP6lPD4x919obt/D8wgfBmVdd/TgSXu/nR03+2EBJJQkjH+3t23uvtawhdw4XOdA9zu7uvcfQtwcwnPsxp4l5CsAH4EfOHuC6P7n3H31R68DPwNSNiZXMQ5wE3u/oW7f0T47z/+eR9z9w3R7+RhQkLPS+K4A
PnAn919ibt/C4wF+phZy7h9intvSjIcmOXuL0e/o5sJieUYoICQgLpETZBrovcOQnJvb2ZN3H2bu7+Z5OuQFFBSkGR8HH/DzDqa2XNm9qmZfQVMAJqW8PhP467voOTO5eL2PTQ+Dnd3wn/WCSUZY1LPRfgPtyQPAyOi6+dFtwvjON3M3jSzz83sS8J/6SW9V4UOKSkGMxtlZm9HzTRfAh2TPC6E1xc7nrt/BXwBtIjbpyy/s+KOu4vwO2rh7iuA/yL8Hj6LmiObR7teCHQGVpjZW2Z2apKvQ1JASUGSUXQ45l2E/47bufuBwPWE5pFU2kBozgHAzIw9v8SKKk+MG4BWcbdLGzL7GDDAzFoQzhgejmLcH3gc+D2haach8EKScXxaXAxmdjjwJ+AKoEl03OVxxy1t+Ox6QpNU4fFyCM1UnyQRV1mOW4PwO/sEwN0fcvfjCU1HNQnvC+6+wt2HE5oI/wd4wszqljMW2UdKCrIvcoCtwNdm1gm4LA3P+SzQw8zOMLNawFVAsxTF+BjwczNrYWZNgF+VtLO7fwq8BtwHrHD3ldFd+wF1gE3ATjM7HTipDDGMM7OGFuZxjIm7rz7hi38TIT9eSjhTKLQRaFnYsZ7AI8DFZtbNzPYjfDm/6u7FnnmVIeZBZtY3eu5rCf1Ab5pZJzPrFz3fN9FlF+EF/MTMmkZnFluj17arnLHIPlJSkH3xX8AFhD/4uwgdwinl7huBc4HJwBagLfAvwryKio7xT4S2/3cInaCPJ/GYhwkdx7GmI3f/EvgF8CShs3YYIbkl4wbCGctaYA7wQNxxlwJ3Am9F+3QA4tvhXwRWAhvNLL4ZqPDxzxOacZ6MHn8YoZ+hXNz9PcJ7/idCwhoIDIr6F/YDbiX0A31KODMZHz30VGCZhdFtk4Bz3f278sYj+8ZC06xI5WJmNQnNFcPc/dVMxyNSVehMQSoNMxsYNafsB1xHGLXyVobDEqlSlBSkMjkBWE1omvgxMNTdi2s+EpF9oOYjERGJ0ZmCiIjEVLqCeE2bNvXc3NxMhyEiUqksWrRos7uXNIwbqIRJITc3l4ULF2Y6DBGRSsXMSpuZD6j5SERE4qQ0KURDCFdEJXjHJrj/djNbEl0+iGq4iIhIhqSs+SiaXDSVUDVyHbDAzGa5+/uF+7j7L+L2/xlwVKriERGR0qWyT6EnsKqwPG60EMlgQl34REYQpvaLSBb5/vvvWbduHd9++22mQ5Ek1K1bl5YtW1K7dnGlr0qWyqTQgj1L/64j1FXfi5m1JlROfLmY+0cDowEOOyyr13gXqXLWrVtHTk4Oubm5hOK0kq3cnS1btrBu3TratGlT+gMSyJaO5uGExVV2JrrT3ae5e5675zVrVuqIqr3MmAG5uVCjRvg5o0xL0YtUb99++y1NmjRRQqgEzIwmTZqU66wulWcKn7BnPfhYXfUEhgM/TUUQM2bA6NGwY0e4/dFH4TZAfrnrQopUD0oIlUd5f1epPFNYQFhir42Z1SFaqq/oTmbWkVBG95+pCGL8+N0JodCOHWG7iIjsKWVJwd0LCAuDzAWWAY+5+3tmNsHMBsXtOhyY6SkqwvTvf5dtu4hkly1bttC9e3e6d+9O8+bNadGiRez2d98lt+zChRdeyIoVK0rcZ+rUqcyooLblE044gSVLllTIsdItpTOa3X02MLvItuuL3L4xlTEcdlhoMkq0XUQq3owZ4Uz83/8Of2cTJ5avqbZJkyaxL9gbb7yR+vXrc8011+yxj7vj7tSokfj/3OnTp5f6PD/9aUpasCudbOloTpmJE6FevT231asXtotIxSrsw/voI3Df3YeXisEdq1atonPnzuTn59OlSxc2bNjA6NGjycvLo0uXLkyYMCG2b+F/7gUFBTRs2JCxY8dy5JFHcuyxx/LZZ58B8Jvf/IY77rgjtv/YsWPp2bMnHTp04B//+AcAX3/9NWeddRadO3dm2LBh5OXllXpG8NBDD9G1a1eOOOIIxo0bB0BBQ
QE/+clPYtunTJkCwO23307nzp3p1q0bI0eOrPD3LBmVrvZRWRX+hzJmDHwZzZc+8ED417+gZUs4/nioVeXfBZH0KKkPLxUDO5YvX84DDzxAXl4eADfffDONGzemoKCAfv36MWzYMDp37rzHY7Zu3UqfPn24+eabufrqq7n33nsZO3avggu4O2+99RazZs1iwoQJPP/889x55500b96cJ554grfffpsePXqUGN+6dev4zW9+w8KFC2nQoAEDBgzg2WefpVmzZmzevJl33nkHgC+jL6dbb72Vjz76iDp16sS2pVuVP1OA8GH84gtYswbuvBO6dQs/+/aFZs3gvPPgkUfCPiKy79Ldh9e2bdtYQgB45JFH6NGjBz169GDZsmW8//7ec2X3339/TjnlFACOPvpo1q5dm/DYZ5555l77vPbaawwfPhyAI488ki5dupQY35tvvkn//v1p2rQptWvX5rzzzmP+/Pm0a9eOFStWcOWVVzJ37lwaNGgAQJcuXRg5ciQzZszY58ln5VUtkkKh3NxwxjB3LmzeDE88AUOGwEsvhcTQrFlIFP/zP/DBB7sfp3kOIskprq8uVX14BxxwQOz6ypUr+cMf/sDLL7/M0qVLGThwYMLx+nXq1Ildr1mzJgUFBQmPvd9++5W6z75q0qQJS5cupXfv3kydOpXLLrsMgLlz53L55ZezYMECevbsyc6dCadupVS1SgrxcnLgzDNh+nTYsAH+8Q/41a/g88/hmmugQwf4wQ/g1FPh4ovT00YqUtllsg/vq6++IicnhwMPPJANGzYwd+7cCn+O448/nsceewyAd955J+GZSLxjjjmGefPmsWXLFgoKCpg5cyZ9+vRh06ZNuDtnn302EyZMYPHixezcuZN169bRv39/br31VjZv3syOom1xaaDWdKBmTTj22HCZOBHWroXnnoNnnoE5c/beP5VtpCKVWeHfREWOPkpWjx496Ny5Mx07dqR169Ycf/zxFf4cP/vZzzj//PPp3Llz7FLY9JNIy5Yt+d3vfkffvn1xd8444wxOO+00Fi9ezMUXX4y7Y2bccsstFBQUcN5557Ft2zZ27drFNddcQ05OToW/htJUujWa8/LyPJ2L7JQ0OfC22+CMM8JZhUhVtWzZMjp16pTpMLJCQUEBBQUF1K1bl5UrV3LyySezcuVKamXZaJVEvzMzW+TuecU8JCa7XkkWat068TyH2rXh2mvDpX37kBxOPx1OOCHcJyJVz/bt2znppJMoKCjA3bnrrruyLiGUV9V6NSkwceKetZMgtJFOmxYSwLPPhmamP/4RJk+Ghg1h4MCQJE45BRo1ylzsIlKxGjZsyKJFizIdRkpV247mZOXnhwTQunVoSmrdOtzOzw/Xf/pTeP552LIF/vpXGDoUXn453F84mmnSJChlhr2ISFZQUkhCfn7ofN61K/xM1GlWv35ICPfeG0Yz/fOfMHZsmPtw7bXQsWNIKjk58LOfQTSJUkQkqygppECNGtCrF9x0E/zyl1C37u77tm8PTU0HHwxdu8JVV8HTT2vinIhkByWFFBs/HhKtd9GwIRxyCNx9d5hA16QJ5OWFJDJnTkgeIiLppqSQYsVN79+6FV54IZwhzJ8PN9wABxwAf/hDmDDXqFGoy3TddaGP4ptv0hu3SLbo16/fXhPR7rjjDq644ooSH1e/fn0A1q9fz7BhwxLu07dvX0ob4n7HHXfsMYns1FNPrZC6RDfeeCOTJk0q93EqmpJCipU27X+//aB375AUXnklJIkXXwz9ELt2we9/DyedFJLEiSfCpZfCLbeEEh1vv60zCqn6RowYwcyZM/fYNnPmTEaMGJHU4w899FAef/zxfX7+oklh9uzZNGzYcJ+Pl+2UFFKsrNP+69WDAQPgv/87dFZ//nkY9jpmDOzcCbNmhQ7sYcOge/fQcd28eRgeO2pU6MeYORMWLFA/hVQNw4YN47nnnostqLN27VrWr19P7969Y/MGevToQdeuXXn66
af3evzatWs54ogjAPjmm28YPnw4nTp1YujQoXwTdwp+xRVXxMpu33DDDQBMmTKF9evX069fP/r16wdAbm4umzdvBmDy5MkcccQRHHHEEbGy22vXrqVTp05ceumldOnShZNPPnmP50lkyZIl9OrVi27dujF06FC+iP54p0yZEiulXViI75VXXoktMnTUUUexbdu2fX5vE9E8hRQr77T/Aw+E004Ll0JffQUffgirVoVL4fWXXoL779/z8Y0bQ7t20LYttGkThsk2abL3pUGD0EEuUpKf/xwqekGx7t0h+j5NqHHjxvTs2ZM5c+YwePBgZs6cyTnnnIOZUbduXZ588kkOPPBANm/eTK9evRg0aFCx6xT/6U9/ol69eixbtoylS5fuUfp64sSJNG7cmJ07d3LSSSexdOlSrrzySiZPnsy8efNo2rTpHsdatGgR06dP580338TdOeaYY+jTpw+NGjVi5cqVPPLII9x9992cc845PPHEEyWuj3D++edz55130qdPH66//np++9vfcscdd3DzzTezZs0a9ttvv1iT1aRJk5g6dSrHH38827dvp278SJYKoKSQBvn55av9UtxKVkcdtfe+O3aEEuGFCaPw8sYb8OijoUkqkRo1QgJJlDCKXpo23X09ruCkSMoUNiEVJoV77rkHCGsejBs3jvnz51OjRg0++eQTNm7cSPPmzRMeZ/78+Vx55ZUAdOvWjW7dusXue+yxx5g2bRoFBQVs2LCB999/f4/7i3rttdcYOnRorFLrmWeeyauvvsqgQYNo06YN3bt3B0ouzw1hfYcvv/ySPn36AHDBBRdw9tlnx2LMz89nyJAhDBkyBAhF+a6++mry8/M588wzadmyZTJvYdKUFLJc4UpWhU2ahVVaIXGiqVcPunQJl6J27QoLDW3ZUvrl3/8OCxFt2VJyJ3dOTvEJo+jtZs3g0ENDAUKpnEr6jz6VBg8ezC9+8QsWL17Mjh07OProowGYMWMGmzZtYtGiRdSuXZvc3NyE5bJLs2bNGiZNmsSCBQto1KgRo0aN2qfjFCosuw2h9HZpzUfFee6555g/fz7PPPMMEydO5J133mHs2LGcdtppzJ49m+OPP565c+fSsWPHfY61KCWFLFeRK1kVng00bhzqNSXrm2/2TBibNxd/+8MPw+2tWxMfq1atcLbTps3el9zcMH+jpCKEUj3Vr1+ffv36cdFFF+3Rwbx161YOOuggateuzbx58/goUaGyOCeeeCIPP/ww/fv3591332Xp0qVAKLt9wAEH0KBBAzZu3MicOXPo27cvADk5OWzbtm2v5qPevXszatQoxo4di7vz5JNP8uCDD5b5tTVo0IBGjRrx6quv0rt3bx588EH69OnDrl27+Pjjj+nXrx8nnHACM2fOZPv27WzZsoWuXbvStWtXFixYwPLly5UUqpN0r2SVyP77h6VLy3KWWlAQOsnjk8Znn4UZ4WvWhMusWXvP7N5//5AcEiWMtm1D34dUTyNGjGDo0KF7jETKz8/njDPOoGvXruTl5ZX65XjFFVdw4YUX0qlTJzp16hQ74zjyyCM56qij6NixI61atdqj7Pbo0aMZOHAghx56KPPmzYtt79GjB6NGjaJnz54AXHLJJRx11FElNhUV5/777+fyyy9nx44dHH744UyfPp2dO3cycuRItm7dirtz5ZVX0rBhQ6677jrmzZtHjRo16NKlS2wVuYqi0tlZLjc3cZXW1q3DF2xl9/XX4fUVJor4y9q1u9fVLtShQ5i/cdxx4dKhgzrIU02lsysflc6uwoqr0lqWlayK66jOBgccAJ07h0siX365O0msWBGG6T79dKgxBWH+xrHH7k4UP/xhOKaI7BslhSxX3iGtZe2ozjYNG4ZRVvEjrdxh5cqwhOrrr4efs2eH+2rWDEMcjztud6Jo1SozsYtURmo+quKqevNToS++CMNuC5PEm2/uToQtW+5OEqefDocfntlYK5tly5bRsWPHYsf+S3Zxd
5YvX77PzUdKClVcjRrhP+uizIqfs1AVFBTA0qW7k8Q//rG7c75r11CEcMiQcAai77qSrVmzhpycHJo0aaLEkOXcnS1btrBt2zbatGmzx31KCgJUzJlCNvdJlMXq1WHE01NPwauvhqTYqtXuBNG7t5ZSTeT7779n3bp15Rq3L+lTt25dWrZsSe0iH+ZkkwLuXqkuRx99tEvyHnrIvV4993C+EC716oXt6Xh84TFat3Y3Cz/L8thU2bTJffp098GD3evWDa+rUSP3kSPdH3/cfdu2TEcoUrGAhZ7Ed6zOFKqB8vynX94zjaId3bB7jetsOdv4+utQmfapp8J6259/HqrX/uhH4QzijDPgoIMyHaVI+aj5SCpEefskKltHd0EBvPZaSBBPPRViNwud1EOGwHnnhcWRRCqbZJOCpv1IiUpbD6I02TAjuyxq1YK+fUONnzVrQkXQG24I61Zcc03ogxg0KMyV+P77TEcrUvGUFKREZV0PoqjyJhUITVC5ueGsJTc33E4HMzjyyJAU/vWvMHnu2mvDWhVDhoQE8ctfwvLl6YlHJB2UFKRE+fmh/b916/Al2bp12foDyptUCvskPvooNGMVTr5LV2KI94MfhJXwPv44jGLq1QsmT4ZOncIiR/feq5XwpPJTn4KkXCY7ulPt00/hwQfhnnvCmUT9+nDuuXDRRaH8hob1S7ZQR7NUCZVl8p17mCB3771hMaOvvw5nEBddBOefr9FLknnqaJYqoSL6JNKhcITSPffAhg3w5z+Huk3XXgstWsCZZ8Jzz2VXIhNJRElBslp5+yQyIScHLr44nDm8/35Y1/j110Pdpbw8eOGFxGc/ItkgpUnBzAaa2QozW2VmY4vZ5xwze9/M3jOzh1MZj1Q+5e3ozrROneC222DdOrj//jAx7sc/hgEDwigmkWyTsqRgZjWBqcApQGdghJl1LrJPe+DXwPHu3gX4earikcorPz90Ku/aFX6WNSFkakhrvNq1Q9/CihVhDsTSpdCzJ5x9NnzwQfrjESlOKs8UegKr3H21u38HzAQGF9nnUmCqu38B4O5FFmcUKZ9sGtIKoXzGVVeFtayvuw7mzAkLDF12Gaxfn5mYROKlMim0AD6Ou70u2hbvB8APzOx1M3vDzAYmOpCZjTazhWa2cNOmTSkKV6qi8eP3rLsE4fb48ZmJp9CBB8KECSE5XH55GLXUrh2MG7f3EqQi6ZTpjuZaQHugLzACuNvMGhbdyd2nuXueu+c1a9YszSFKZZbtZTYOPhj++EdYtgwGDw6T49q2hUmTQJWqJRNSmRQ+AeIXQmwZbYu3Dpjl7t+7+xrgA0KSEKkQlWVIa7t28MgjsGhRGKF07bVhBvX06bBzZ6ajk+oklUlhAdDezNqYWR1gODCryD5PEc4SMLOmhOak1SmMSaqZyjaktUcPmDsX/vY3aN48TH7r1i0U4NMwVkmHlCUFdy8AxgBzgWXAY+7+nplNMLNB0W5zgS1m9j4wD7jW3bekKiapfirrkNb+/cM603/5SyjnPWRIqK/0xhuZjkyqOpW5EClFppcj/f770Ix0ww2h1tKIEXDzzdnXBCbZTWUuRCpANgxprV07POfKlSE5PfkkdOgQhrSqKqtUNCUFkRJk05DW+vXhppvC+g1DhoTrP/gB3HefaipJxVFSEClBNg5pbd06jFR6/fWw0M+FF4bZ0a++mrmYpOpQUhApQTYPaT3uOPjnP+Ghh2DjRjjxxFA2Y82aTEcmlZmSgkgJsn1Ia40aodN7xQr47W9h9mzo2BHGjoWvvsp0dFIZKSmIlKCyDGmtVw+uvz4U1xs+HG65Bdq3h7vv1uQ3KRsNSRWpghYsgF/8IvQ7dOsGt98e5j5I9aUhqSLV2A9/GDqeH30Utm6Fk04KtZXefz/TkUm2U1IQSbFMredgBuecE4aw/v73MG8edO0KF1wAq1VMRoqhpCCSQtkw+a1u3dDxvHo1XH01PPZYmPx2xRXwSdESlVLtKSmIp
FA2TX5r2jQsDfrhhyEx3XNPqM56zTWweXP645HspKQgkkLZOPnt0ENh6tQwjPXcc0MndJs2YfTS1q2Zi0uyg5KCSApl8+S3Nm1CiYx334VTToHf/S5su/lm+PrrTEcnmaKkIJJC2T75DaBTp9DPsHhxmCX961+H1d+mTIH//CfT0Um6KSmIpFBlmfwGcNRR8OyzYW5Dp05w1VWh4N4994Q1HaR6UFIQSbH8fFi7NlQyXbs2OxNCvOOOg5dfhhdfDKu/XXIJdO4civCpGmvVp6QgInsxgwEDwkpvTz8dhrWed16oxjp/fqajk1RSUhCRYpnBoEGwZAk8+GCoxtqnDwwdGhb9kapHSUFESlWjBowcGQruTZwIL70UmpR+/nP4/PNMRycVSUlBRJK2//4wblw4S7joIrjzzjBS6fbb4bvvMh2dVAQlBREps+bN4a674O234ZhjQvmMzp3hiSdCOQ+pvJQURCqBTBXVK80RR8Dzz8OcOaEzetiwsALcggWZjkz2lZKCSJbLhqJ6pRk4MHRG33VX6Hfo2TMMvc1kOQ/ZN0oKIlkum4rqlaRWrZCsVq0K/Q5//WuY/DZunJYGrUyUFESyXDYW1StJTk4YobRiBZx9dljLoX17+L//08zoykBJQSTLZXNRvZIcdliY2/DWW7vXb+jaNUyGU2d09lJSEMlylaGoXkl++EN45RV48smQDIYMCZ3Rb7yR6cgkESUFkSxXmYrqFccsJIN33w3NSCtXwrHHhtFKH3yQ6egknnklO4/Ly8vzhQsXZjoMESmH7dth8mS49dZQnnv06LDIz8EHZzqyqsvMFrl7Xmn76UxBRNKufv2QBD78EC69NAxlbdcuLPSjBX4yS0lBRDLm4IPhf/8X3nsPTj45JIp27ULzmEYqZYaSgohkXIcOoUTG66+HWkqXXaaRSpmipCAiWeO44+DVVzVSKZOUFEQkqxQ3Uumss0LC0OpvqaWkICJZqVat0Iy0ahX89rfwwgvhrKFtW7juOg1lTRUlBRHJaoUjlTZsgAceCCUz/vu/Qz9Er14wdSps3pzpKKsOJQURqRTq14ef/CScMXz8Mdx2G3zzDYwZA4ccAoMHw+OPw7ffZjrSyk1JQaQayNb1GPbVoYfCNdeERX6WLIGrrgprOJx9dkgQl10Gr72mkUv7IqVJwcwGmtkKM1tlZmMT3D/KzDaZ2ZLockkq4xGpjirDegzlceSRMGlSOHuYOxdOPx0eegh69w79D9dfHzqrJTkpK3NhZjWBD4AfAeuABcAId38/bp9RQJ67j0n2uCpzIVI2ubkhERTVujWsXZvuaNJj+/YwrPWBB+BvfwvJsFevMHt6xIiw1nR1kw1lLnoCq9x9tbt/B8wEBqfw+UQkgcq2HkNFKOx/ePHFcAZx661hoZ+LL4aWLeHaa2H16kxHmZ1SmRRaAB/H3V4XbSvqLDNbamaPm1mrRAcys9FmttDMFm7atCkVsYpUWZV1PYaK0qJFSALvvgvz5kH//nD77aGcxmmnwezZmvsQL9Mdzc8Aue7eDXgRuD/RTu4+zd3z3D2vWbNmaQ1QpLKr7OsxVBQz6NsX/vKX0Jx23XWweHFIDO3bh36Jzz/PdJSZl1RSMLO2ZrZfdL2vmV1pZg1LedgnQPx//i2jbTHuvsXd/xPd/DNwdHJhi0iyqsJ6DBWtRYswIe6jj2DmzN1nEy1awEUXwaJFmY4wc5I9U3gC2Glm7YBphC/7h0tdbA5EAAAOy0lEQVR5zAKgvZm1MbM6wHBgVvwOZnZI3M1BwLIk4xGRMsjPD53Ku3aFn9U5IcSrUwfOPRfmzw/DWy+4AB59FPLyQsf0gw9Wv3kPySaFXe5eAAwF7nT3a4FDSnpAtP8YYC7hy/4xd3/PzCaY2aBotyvN7D0zexu4Ehi1Ly9CRKS8unULtZbWr4c//AG++ALOPx9atYJf/zrxCK6qKKkhqWb2JnAHMB44w93XmNm77n5EqgMsSkNSRSQddu0Kw
1mnToVnngnbBgwIZxNDhuzdT5PtKnpI6oXAscDEKCG0AR4sT4AiItmsRg340Y/gqadgzRoYNw6WLw9Nb82bwyWXhKqtVW3WdJknr5lZI6CVuy9NTUgl05mCiGTKrl3wyitw//2hztLXX0ObNqGZ6fzz4fDDMx1h8Sr0TMHM/m5mB5pZY2AxcLeZTS5vkCIilUmNGtCvH9x3H3z6aZgxffjhMGFCKKlx4olwzz2wdWumI913yTYfNXD3r4AzgQfc/RhgQOrCEpFsUtUK6lWEwlnTL70URnRNnAgbN4ZmpebN4bzzQi2mnTszHWnZJJsUakXDR88Bnk1hPCKSZap6Qb2KcNhhu/sc3ngDLrwQnn8eBg4Mo5d++cswo7oyzJxOdvTR2cB1wOvufoWZHQ7c5u5npTrAotSnIJJe1bGgXkX4z3/g2WdD/8Ps2eGMoVatUNr70ENLvjRqFCYaVqRk+xRSViU1VZQURNKrRo3EI2zMKsd/vtngs892j2Jav37Py5df7r3/fvslThYDB4b5FPsi2aRQK8mDtQTuBI6PNr0KXOXu6/YtPBGpLA47LPGZQnUpqFcRDjooNLklsmNHWGp0/frdP+MvS5eGpqht26Bx431PCslKKikA0wllLc6Obo+Mtv0oFUGJSPaYODF8oe3YsXtbdSyolyr16oWRS23blrzftm1Qs2bq40m2o7mZu09394Loch+gcqUi1YAK6mWHnJz0zKJO9kxhi5mNBB6Jbo8AtqQmJBHJNvn5SgLVRbJnChcRhqN+CmwAhqHidSIiVU5SScHdP3L3Qe7ezN0PcvchQNqHo4qISGqVZ+W1qyssChERyQrlSQoVPLVCREQyrTxJoXLNehMRkVKVOPrIzLaR+MvfgP1TEpGIiGRMiUnB3XPSFYiIiGReeZqPRESSotLblUeyk9dERPZJYentwjIZhaW3QRPispHOFEQkpcaP37NuEoTb48dnJh4pmZKCiKTUv/9dtu2SWUoKIpJSxZXYVunt7KSkICIpNXHi3tU9VXo7eykpiEhKqfR25aLRRyKSciq9XXnoTEFERGKUFEREJEZJQUREYpQUREQkRklBRERilBRERCRGSUFERGKUFEQk66n0dvpo8pqIZDWV3k4vnSmISFZT6e30UlIQkaym0tvpldKkYGYDzWyFma0ys7El7HeWmbmZ5aUyHhGpfFR6O71SlhTMrCYwFTgF6AyMMLPOCfbLAa4C3kxVLCJSean0dnql8kyhJ7DK3Ve7+3fATGBwgv1+B9wCfJvCWESkklLp7fRKZVJoAXwcd3tdtC3GzHoArdz9uZIOZGajzWyhmS3ctGlTxUcqIlktPx/WroVdu8JPJYTUyVhHs5nVACYD/1Xavu4+zd3z3D2vWbNmqQ9ORKSaSmVS+ARoFXe7ZbStUA5wBPB3M1sL9AJmqbNZRCRzUpkUFgDtzayNmdUBhgOzCu90963u3tTdc909F3gDGOTuC1MYk4iIlCBlScHdC4AxwFxgGfCYu79nZhPMbFCqnldERPZdSstcuPtsYHaRbdcXs2/fVMYiIiKl04xmERGJUVIQEZEYJQUREYlRUhCRKk/rMSRP6ymISJWm9RjKRmcKIlKlaT2GslFSEJEqTesxlI2SgohUaVqPoWyUFESkStN6DGWjpCAiVZrWYygbjT4SkSovP19JIFk6UxARkRglBRERiVFSEBGRGCUFERGJUVIQEZEYJQUREYlRUhARkRglBRERiVFSEBGRGCUFERGJUVIQESlFdVq5TbWPRERKUN1WbtOZgohICarbym1KCiIiJahuK7cpKYiIlKC6rdympCAiUoLqtnKbkoKISAmq28ptGn0kIlKK6rRym84UREQkRklBRERilBRERCRGSUFERGKUFEREJEZJQUREYpQUREQkRklBRERiUpoUzGygma0ws1VmNjbB/Zeb2TtmtsTMXjOzzqmMR0RESpaypGBmNYGpwClAZ2BEgi/9h929q7t3B24FJ
qcqHhERKV0qzxR6AqvcfbW7fwfMBAbH7+DuX8XdPADwFMYjIpIRlWnltlTWPmoBfBx3ex1wTNGdzOynwNVAHaB/ogOZ2WhgNMBhVbVerYhUSZVt5baMdzS7+1R3bwv8CvhNMftMc/c8d89r1qxZegMUESmHyrZyWyqTwidAq7jbLaNtxZkJDElhPCIiaVfZVm5LZVJYALQ3szZmVgcYDsyK38HM2sfdPA1YmcJ4RETSrrKt3JaypODuBcAYYC6wDHjM3d8zswlmNijabYyZvWdmSwj9ChekKh4RkUyobCu3pXSRHXefDcwusu36uOtXpfL5RUQyrbAzefz40GR02GEhIWRjJzNo5TURkZSrTCu3ZXz0kYiIZA8lBRERiVFSEBGRGCUFERGJUVIQEZEYJQUREYlRUhARkRglBRGRLJfO0tuavCYiksXSXXpbZwoiIlks3aW3lRRERLJYuktvKymIiGSxdJfeVlIQEcli6S69raQgIpLF8vNh2jRo3RrMws9p01JXdVWjj0REslw6S2/rTEFERGKUFEREJEZJQUREYpQUREQkRklBRERizN0zHUOZmNkm4KNMx1GMpsDmTAdRAsVXPtkeH2R/jIqvfMoTX2t3b1baTpUuKWQzM1vo7nmZjqM4iq98sj0+yP4YFV/5pCM+NR+JiEiMkoKIiMQoKVSsaZkOoBSKr3yyPT7I/hgVX/mkPD71KYiISIzOFEREJEZJQUREYpQUysjMWpnZPDN738zeM7OrEuzT18y2mtmS6HJ9mmNca2bvRM+9MMH9ZmZTzGyVmS01sx5pjK1D3PuyxMy+MrOfF9kn7e+fmd1rZp+Z2btx2xqb2YtmtjL62aiYx14Q7bPSzC5IU2y3mdny6Pf3pJk1LOaxJX4WUhzjjWb2Sdzv8dRiHjvQzFZEn8exaYzv0bjY1prZkmIem9L3sLjvlIx9/txdlzJcgEOAHtH1HOADoHORffoCz2YwxrVA0xLuPxWYAxjQC3gzQ3HWBD4lTKrJ6PsHnAj0AN6N23YrMDa6Pha4JcHjGgOro5+NouuN0hDbyUCt6PotiWJL5rOQ4hhvBK5J4jPwIXA4UAd4u+jfU6riK3L//wDXZ+I9LO47JVOfP50plJG7b3D3xdH1bcAyoEVmoyqzwcADHrwBNDSzQzIQx0nAh+6e8Rnq7j4f+LzI5sHA/dH1+4EhCR76Y+BFd//c3b8AXgQGpjo2d3/B3Quim28ALSvyOcuqmPcvGT2BVe6+2t2/A2YS3vcKVVJ8ZmbAOcAjFf28ySjhOyUjnz8lhXIws1zgKODNBHcfa2Zvm9kcM+uS1sDAgRfMbJGZjU5wfwvg47jb68hMYhtO8X+ImXz/Ch3s7hui658CByfYJxvey4sIZ36JlPZZSLUxURPXvcU0f2TD+9cb2OjuK4u5P23vYZHvlIx8/pQU9pGZ1QeeAH7u7l8VuXsxoUnkSOBO4Kk0h3eCu/cATgF+amYnpvn5S2VmdYBBwF8S3J3p928vHs7Vs278tpmNBwqAGcXsksnPwp+AtkB3YAOhiSYbjaDks4S0vIclfaek8/OnpLAPzKw24Zc3w93/WvR+d//K3bdH12cDtc2sabric/dPop+fAU8STtHjfQK0irvdMtqWTqcAi919Y9E7Mv3+xdlY2KwW/fwswT4Zey/NbBRwOpAffWnsJYnPQsq4+0Z33+nuu4C7i3nujH4WzawWcCbwaHH7pOM9LOY7JSOfPyWFMoraH+8Blrn75GL2aR7th5n1JLzPW9IU3wFmllN4ndAh+W6R3WYB50ejkHoBW+NOU9Ol2P/OMvn+FTELKBzNcQHwdIJ95gInm1mjqHnk5GhbSpnZQOCXwCB331HMPsl8FlIZY3w/1dBinnsB0N7M2kRnj8MJ73u6DACWu/u6RHem4z0s4TslM5+/VPWoV9ULcALhNG4psCS6nApcDlwe7TMGeI8wkuIN4Lg0xnd49LxvRzGMj7bHx2fAVMKoj3eAv
DS/hwcQvuQbxG3L6PtHSFAbgO8J7bIXA02AvwErgZeAxtG+ecCf4x57EbAqulyYpthWEdqSCz+D/xfteygwu6TPQhrfvwejz9dSwhfcIUVjjG6fShhx82GqYkwUX7T9vsLPXdy+aX0PS/hOycjnT2UuREQkRs1HIiISo6QgIiIxSgoiIhKjpCAiIjFKCiIiEqOkIBIxs522ZwXXCqvYaWa58RU6RbJVrUwHIJJFvnH37pkOQiSTdKYgUoqonv6tUU39t8ysXbQ918xejgq+/c3MDou2H2xhjYO3o8tx0aFqmtndUc38F8xs/2j/K6Na+kvNbGaGXqYIoKQgEm//Is1H58bdt9XduwJ/BO6Itt0J3O/u3QgF6aZE26cAr3go6NeDMBMWoD0w1d27AF8CZ0XbxwJHRce5PFUvTiQZmtEsEjGz7e5eP8H2tUB/d18dFS771N2bmNlmQumG76PtG9y9qZltAlq6+3/ijpFLqHvfPrr9K6C2u99kZs8D2wnVYJ/yqBigSCboTEEkOV7M9bL4T9z1nezu0zuNUIuqB7AgqtwpkhFKCiLJOTfu5z+j6/8gVPUEyAdeja7/DbgCwMxqmlmD4g5qZjWAVu4+D/gV0ADY62xFJF30H4nIbvvbnou3P+/uhcNSG5nZUsJ/+yOibT8DppvZtcAm4MJo+1XANDO7mHBGcAWhQmciNYGHosRhwBR3/7LCXpFIGalPQaQUUZ9CnrtvznQsIqmm5iMREYnRmYKIiMToTEFERGKUFEREJEZJQUREYpQUREQkRklBRERi/h8LiTr5imrhRgAAAABJRU5ErkJggg==\n", 1243 | "text/plain": [ 1244 | "
" 1245 | ] 1246 | }, 1247 | "metadata": { 1248 | "needs_background": "light" 1249 | }, 1250 | "output_type": "display_data" 1251 | }, 1252 | { 1253 | "data": { 1254 | "image/png": "(base64-encoded matplotlib PNG omitted: training/validation accuracy chart)", 1255 | "text/plain": [ 1256 | "
" 1257 | ] 1258 | }, 1259 | "metadata": { 1260 | "needs_background": "light" 1261 | }, 1262 | "output_type": "display_data" 1263 | } 1264 | ], 1265 | "source": [ 1266 | "training_and_validation_loss(histories_cnn['race_ethnicity'])\n", 1267 | "training_and_validation_accuracy(histories_cnn['race_ethnicity'])" 1268 | ] 1269 | }, 1270 | { 1271 | "cell_type": "markdown", 1272 | "metadata": {}, 1273 | "source": [ 1274 | "As you can see in the charts above, validation loss has begun to flatten out, so we don't need more epochs. And since the training loss is very low while validation loss is higher, the model may be overfitting." 1275 | ] 1276 | }, 1277 | { 1278 | "cell_type": "code", 1279 | "execution_count": 80, 1280 | "metadata": {}, 1281 | "outputs": [ 1282 | { 1283 | "data": { 1284 | "image/png": "(base64-encoded matplotlib PNG omitted: precision-recall chart for the CNN)", 1285 | "text/plain": [ 1286 | "
" 1287 | ] 1288 | }, 1289 | "metadata": { 1290 | "needs_background": "light" 1291 | }, 1292 | "output_type": "display_data" 1293 | } 1294 | ], 1295 | "source": [ 1296 | "pr_chart(test_labels_cnn, predicted_probas_cnn)" 1297 | ] 1298 | }, 1299 | { 1300 | "cell_type": "markdown", 1301 | "metadata": {}, 1302 | "source": [ 1303 | "## Keras LSTM:\n", 1304 | "\n", 1305 | "LSTMs (short for Long Short-Term Memory) are another kind of neural network that works well on text. But for some reason, I couldn't get it to do much at all for this dataset. I got an area under PR curve of 0.66, with a baseline of 0.657...\n", 1306 | "\n", 1307 | "In other words, the model was just guessing. \n", 1308 | "\n", 1309 | "I don't know why. If you do, let me know?\n" 1310 | ] 1311 | }, 1312 | { 1313 | "cell_type": "code", 1314 | "execution_count": 82, 1315 | "metadata": {}, 1316 | "outputs": [], 1317 | "source": [ 1318 | "def lstm_model(learning_rate=0.001):\n", 1319 | " keras.backend.clear_session()\n", 1320 | "\n", 1321 | " adam = keras.optimizers.Adam(lr=learning_rate) # default lr = 0.001\n", 1322 | "\n", 1323 | " model = keras.Sequential()\n", 1324 | " model.add(\n", 1325 | " keras.layers.Embedding(VOCAB_SIZE, # vocab size\n", 1326 | " 32, # output size; embedding dimensions\n", 1327 | " input_length=MAX_SEQUENCE_LENGTH\n", 1328 | " ))\n", 1329 | " model.add(keras.layers.Dropout(0.2))\n", 1330 | " model.add(keras.layers.LSTM(100))\n", 1331 | " model.add(keras.layers.Dropout(0.2))\n", 1332 | "\n", 1333 | " model.add(keras.layers.Dense(1, activation='sigmoid')) # with just one dense layer, race_ethnicity ROC 0.69\n", 1334 | " model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])\n", 1335 | " model.summary()\n", 1336 | " return model" 1337 | ] 1338 | }, 1339 | { 1340 | "cell_type": "code", 1341 | "execution_count": 83, 1342 | "metadata": {}, 1343 | "outputs": [ 1344 | { 1345 | "name": "stdout", 1346 | "output_type": "stream", 1347 | "text": [ 1348 | 
"_________________________________________________________________\n", 1349 | "Layer (type) Output Shape Param # \n", 1350 | "=================================================================\n", 1351 | "embedding (Embedding) (None, 256, 32) 320096 \n", 1352 | "_________________________________________________________________\n", 1353 | "dropout (Dropout) (None, 256, 32) 0 \n", 1354 | "_________________________________________________________________\n", 1355 | "lstm (LSTM) (None, 100) 53200 \n", 1356 | "_________________________________________________________________\n", 1357 | "dropout_1 (Dropout) (None, 100) 0 \n", 1358 | "_________________________________________________________________\n", 1359 | "dense (Dense) (None, 1) 101 \n", 1360 | "=================================================================\n", 1361 | "Total params: 373,397\n", 1362 | "Trainable params: 373,397\n", 1363 | "Non-trainable params: 0\n", 1364 | "_________________________________________________________________\n", 1365 | "742/742 [==============================] - 2s 2ms/sample - loss: 0.7263 - acc: 0.4084\n", 1366 | "race_ethnicity\n", 1367 | "Area under PR curve: 0.69\n", 1368 | "PR Baseline : 0.6563342318059299\n", 1369 | "If the AUPR score (0.69) is more than a little bigger than the baseline (0.66), which it *is*, then our model is working!\n", 1370 | "\n", 1371 | "\n" 1372 | ] 1373 | } 1374 | ], 1375 | "source": [ 1376 | "histories, test_labels_lstm, predicted_probas_lstm = train_keras_model(lstm_model, train_df, test_df, epochs=5, should_equalize=True, classes_of_interest=[\"race_ethnicity\"])" 1377 | ] 1378 | }, 1379 | { 1380 | "cell_type": "code", 1381 | "execution_count": 84, 1382 | "metadata": {}, 1383 | "outputs": [ 1384 | { 1385 | "data": { 1386 | "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAYsAAAEWCAYAAACXGLsWAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3Xl4lOX1//H3ARFE2VFRQMC6sIkQI2ARAVfcoCgqCCouRaiKS2vl617U1q2KUH5WbEutoKhYrDvVQsUVCYggIEsVNYIsUUAElcD5/XE/CUNMMpNlMpPk87quuZh55llOnpA5c+/m7oiIiBSnRqoDEBGR9KdkISIicSlZiIhIXEoWIiISl5KFiIjEpWQhIiJxKVlIhTCzmma2xcwOKs99U8nMDjGzcu97bmYnmtmqmNfLzKxnIvuW4lp/MbMbS3t8Mee908z+Xt7nldTZI9UBSHoysy0xL+sCPwA7oteXu/uUkpzP3XcA+5T3vtWBux9eHucxs8uAoe7eO+bcl5XHuaXqU7KQQrl7/od19M31Mnd/vaj9zWwPd8+tiNhEpOKpGkpKJapmeMrMnjSzb4GhZnaMmb1nZhvNbI2ZjTOzWtH+e5iZm1nr6PXk6P1XzOxbM3vXzNqUdN/o/VPNbLmZbTKz8Wb2tpkNKyLuRGK83MxWmtk3ZjYu5tiaZvagmeWY2SdA32Luz01mNrXAtglm9kD0/DIzWxr9PP+LvvUXda5sM+sdPa9rZo9HsS0Gjiqw781m9kl03sVm1i/afgTwJ6BnVMW3Iebe3h5z/IjoZ88xs+fM7IBE7k08ZjYgimejmc00s8Nj3rvRzFab2WYz+zjmZ+1uZvOj7WvN7L5ErydJ4O566FHsA1gFnFhg253Aj8CZhC8dewFHA90IJdaDgeXAldH+ewAOtI5eTwY2AJlALeApYHIp9t0P+BboH713HbAdGFbEz5JIjP8CGgCtga/zfnbgSmAx0AJoAswOf0KFXudgYAuwd8y51wGZ0eszo30MOB7YBnSK3jsRWBVzrmygd/T8fuC/QCOgFbCkwL7nAgdEv5Pzoxj2j967DPhvgTgnA7dHz0+OYuwM1AH+HzAzkXtTyM9/J/D36Hm7KI7jo9/RjcCy6HkH4DOgWbRvG+Dg6PlcYHD0vB7QLdV/C9X5oZKFlMVb7v6Cu+90923uPtfd57h7rrt/AkwEehVz/DR3z3L37cAUwodUSfc9A1jg7v+K3nuQkFgKlWCMf3D3Te6+ivDBnHetc4EH3T3b3XOAu4u5zifAR4QkBnAS8I27Z0Xvv+Dun3gwE/gPUGgjdgHnAne6+zfu/hmhtBB73afdfU30O3mCkOgzEzgvwBDgL+6+wN2/B0YDvcysRcw+Rd2b4gwCnnf3mdHv6G5CwukG5BISU4eoKvPT6N5BSPqHmlkTd//W3eck+HNIEihZSFl8EfvCzNqa2Utm9pWZbQbGAE2LOf6rmOdbKb5Ru6h9D4yNw92d8E28UAnGmNC1CN+Ii/MEMDh6fn70Oi+OM8xsjpl9bWYbCd/qi7tXeQ4oLgYzG2ZmH0bVPRuBtgmeF8LPl38+d98MfAM0j9mnJL+zos67k/A7au7uy4BfE34P66JqzWbRrhcD7YFlZva+mZ2W4M8hSaBkIWVRsNvoI4Rv04e4e33gVkI1SzKtIVQLAWBmxu4fbgWVJcY1QMuY1/G69j4NnGhmzQkljCeiGPcCpgF/IFQRNQT+nWAcXxUVg5kdDDwMjASaROf9OOa88br5riZUbeWdrx6huuvLBOIqyXlrEH5nXwK4+2R370GogqpJuC+4+zJ3H0Soavwj8KyZ1SljLFJKShZSnuoBm4DvzKwdcHkFXPNFIMPMzjSzPYCrgX2TFOPTwDVm1tzMmgA3FLezu38FvAX8HVjm7iuit2oDewLrgR1mdgZwQgliuNHMGloYh3JlzHv7EBLCekLe/CWhZJFnLdAir0G/EE8Cl5p
[... base64-encoded PNG data truncated; training/validation loss plot ...]\n", 1387 |         "text/plain": [ 1388 |          "
" 1389 | ] 1390 | }, 1391 | "metadata": { 1392 | "needs_background": "light" 1393 | }, 1394 | "output_type": "display_data" 1395 | }, 1396 | { 1397 | "data": { 1398 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYsAAAEWCAYAAACXGLsWAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3XucVXW9//HXG0SQq8hFFJAhIwUvII6Ix7tmYSkclVTEk2hKmrc0T5Ha0dTs7jGLX0qmxwxF02NHOmmpoeaxlKEAFVIQMRHU4SKKIDDw+f2x1sBmmJm1B2bP3sO8n4/Hfsy6fNdan71mZn/2+n6/67sUEZiZmdWnVbEDMDOz0udkYWZmmZwszMwsk5OFmZllcrIwM7NMThZmZpbJycLyJqm1pFWS9mrMssUk6ZOSGr3/uKRPS1qYM/+qpCPzKbsNx7pT0tXbur1ZPnYqdgBWOJJW5cy2B9YCG9L5L0fE5IbsLyI2AB0bu2xLEBH7NMZ+JJ0PnB0Rx+Ts+/zG2LdZfZwsdmARsenDOv3men5EPFlXeUk7RURVU8RmlsV/j6XF1VAtmKSbJD0g6X5JHwJnSzpM0l8lvS9piaTbJLVJy+8kKSSVpfO/Ttc/JulDSX+R1L+hZdP1J0p6TdJKST+V9H+SxtURdz4xflnSfEkrJN2Ws21rSf8paZmkBcCIes7PNZKm1Fg2UdIt6fT5kuam7+f19Ft/XftaJOmYdLq9pHvT2F4BDq5R9lpJC9L9viJpZLr8AOBnwJFpFd/SnHN7fc72F6bvfZmk30raI59z05DzXB2PpCclLZf0jqSv5xznW+k5+UBShaQ9a6vyk/Rc9e85PZ/PpsdZDlwraYCkaekxlqbnrUvO9v3S91iZrv+JpHZpzANzyu0habWkbnW9X8sQEX61gBewEPh0jWU3AeuAk0m+OOwCHAIcSnLV+QngNeCStPxOQABl6fyvgaVAOdAGeAD49TaU7Ql8CIxK110JrAfG1fFe8onxf4AuQBmwvPq9A5cArwB9gG7As8m/Qa3H+QSwCuiQs+/3gPJ0/uS0jIDjgDXAgem6TwMLc/a1CDgmnf4R8DTQFegHzKlR9nRgj/R3clYaw+7puvOBp2vE+Wvg+nT6M2mMQ4B2wP8D/pTPuWngee4CvAtcDrQFOgPD0nXfBGYBA9L3MATYDfhkzXMNPFf9e07fWxVwEdCa5O/xU8DxwM7p38n/AT/KeT8vp+ezQ1r+8HTdJOA7Ocf5GvBIsf8Pm/Or6AH41US/6LqTxZ8ytrsK+E06XVsCuD2n7Ejg5W0oex7w55x1ApZQR7LIM8bhOev/G7gqnX6WpDquet3nan6A1dj3X4Gz0ukTgVfrKfs74OJ0ur5k8c/c3wXwldyytez3ZeDz6XRWsrgHuDlnXWeSdqo+Weemgef534DpdZR7vTreGsvzSRYLMmIYXX1c4EjgHaB1LeUOB94AlM7PBE5t7P+rlvRyNZS9lTsjaV9J/5tWK3wA3AB0r2f7d3KmV1N/o3ZdZffMjSOS/+5Fde0kzxjzOhbwZj3xAtwHjEmnz0rnq+M4SdILaRXJ+yTf6us7V9X2qC8GSeMkzUqrUt4H9s1zv5C8v037i4gPgBVA75wyef3OMs5zX5KkUJv61mWp+ffYS9KDkt5OY/ivGjEsjKQzxRYi4v9IrlKOkLQ/sBfwv9sYk+E2C0u+aea6g+Sb7CcjojPwHyTf9AtpCck3XwAkiS0/3GranhiXkHzIVMvq2vsg8GlJvUmqye5LY9wFeAj4LkkV0a7AH/OM4526YpD0CeD
nJFUx3dL9/iNnv1ndfBeTVG1V768TSXXX23nEVVN95/ktYO86tqtr3UdpTO1zlvWqUabm+/s+SS++A9IYxtWIoZ+k1nXE8SvgbJKroAcjYm0d5SwPThZWUydgJfBR2kD45SY45u+AoZJOlrQTST14jwLF+CDwVUm908bOb9RXOCLeIakq+S+SKqh56aq2JPXolcAGSSeR1K3nG8PVknZVch/KJTnrOpJ8YFaS5M0LSK4sqr0L9MltaK7hfuBLkg6U1JYkmf05Iuq8UqtHfef5UWAvSZdIaiups6Rh6bo7gZsk7a3EEEm7kSTJd0g6UrSWNJ6cxFZPDB8BKyX1JakKq/YXYBlws5JOA7tIOjxn/b0k1VZnkSQO2w5OFlbT14BzSBqc7yBpiC6oiHgXOAO4heSff2/g7yTfKBs7xp8DTwEvAdNJrg6y3EfSBrGpCioi3geuAB4haSQeTZL08nEdyRXOQuAxcj7IImI28FPgxbTMPsALOds+AcwD3pWUW51Uvf3jJNVFj6Tb7wWMzTOumuo8zxGxEjgBOI0kgb0GHJ2u/iHwW5Lz/AFJY3O7tHrxAuBqks4On6zx3mpzHTCMJGk9CjycE0MVcBIwkOQq458kv4fq9QtJfs9rI+L5Br53q6G68cesZKTVCouB0RHx52LHY82XpF+RNJpfX+xYmjvflGclQdIIkp5Ha0i6Xq4n+XZttk3S9p9RwAHFjmVH4GooKxVHAAtI6uo/C5ziBknbVpK+S3Kvx80R8c9ix7MjcDWUmZll8pWFmZll2mHaLLp37x5lZWXFDsPMrFmZMWPG0oior6s6sAMli7KyMioqKoodhplZsyIpaxQDwNVQZmaWBycLMzPL5GRhZmaZnCzMzCyTk4WZmWVysjCzkjF5MpSVQatWyc/Jk4sdkVXbYbrOmlnzNnkyjB8Pq1cn82++mcwDjN3WcXOt0fjKwsxKwjXXbE4U1VavTpZb7ZrySsxXFmZWEv5Zx3B/dS1v6Zr6SsxXFmZWEvaq4wG3dS1v6Zr6SszJwsxKwne+A+3bb7msfftkuW2tqa/EnCzMrCSMHQuTJkG/fiAlPydNcuN2XZr6SszJwsxKxtixsHAhbNyY/HSiqFtTX4k5WZiZNUNNfSXm3lBmZs3U2LFNd/XlKwszM8vkZGFmZpmcLMzMLJOThZmZZXKyMDOzTE4WZmaWycnCzMwyOVmYmVkmJwszM8vkZGFmZpmcLMzMLJOThZmZZXKyMCugpnxGslkhedRZswJp6mckmxWSryzMCqSpn5FsVkhOFmYF0tTPSDYrJCcLswJp6mckmxWSk4VZgTT1M5LNCqmgyULSCEmvSpovaUIt68dJqpQ0M32dn7NuQ87yRwsZp1khNPUzks0KqWDJQlJrYCJwIjAIGCNpUC1FH4iIIenrzpzla3KWjyxUnNYw7graMGPHwsKFsHFj8tOJwpqrQnadHQbMj4gFAJKmAKOAOQU8phWQu4KatVyFrIbqDbyVM78oXVbTaZJmS3pIUt+c5e0kVUj6q6R/re0AksanZSoqKyu3KUh/U86fu4KatVzFbuCeCpRFxIHAE8A9Oev6RUQ5cBZwq6S9a24cEZMiojwiynv06NHgg1d/U37zTYjY/E3ZCaN27gpq1nIVMlm8DeReKfRJl20SEcsiYm06eydwcM66t9OfC4CngYMaO0B/U24YdwU1a7kKmSymAwMk9Ze0M3AmsEWvJkl75MyOBOamy7tKaptOdwcOpwBtHf6m3DDuCmrWchUsWUREFXAJ8AeSJPBgRLwi6QZJ1b2bLpP0iqRZwGXAuHT5QKAiXT4N+F5ENHqy8DflhnFXULOWSxFR7BgaRXl5eVRUVDRom5q9eyD5puwPQDNrKSTNSNuH61XsBu6i8jdlM7P8tPghyseOdXIwM8vSoq8szMwsP04WZmaWycnCzMwyOVmYmVkmJwszM8vkZGFmZpmcLMzMLJOThZmZZXKyMDOzTE4WZmaWycnCzMw
yOVmYmVkmJwszM8vkZGFmZpmcLMzMLJOThZmZZXKyMDOzTE4WZmaWycnCzMwyOVmYmVkmJwszM8vkZGFmZpmcLMzMLJOThZmZZXKyMDOzTE4WZmaWycnCzMwyOVmYmVkmJwszM8vkZGFmZpmcLMzMLJOThZmZZXKyMDOzTAVNFpJGSHpV0nxJE2pZP05SpaSZ6ev8nHXnSJqXvs4pZJxmZla/nQq1Y0mtgYnACcAiYLqkRyNiTo2iD0TEJTW23Q24DigHApiRbruiUPGamVndCnllMQyYHxELImIdMAUYlee2nwWeiIjlaYJ4AhhRoDjNzCxDIZNFb+CtnPlF6bKaTpM0W9JDkvo2cFszM2sCxW7gngqURcSBJFcP9zRkY0njJVVIqqisrCxIgGZmVthk8TbQN2e+T7psk4hYFhFr09k7gYPz3TbdflJElEdEeY8ePRotcDMz21Ihk8V0YICk/pJ2Bs4EHs0tIGmPnNmRwNx0+g/AZyR1ldQV+Ey6zMzMiqBgvaEiokrSJSQf8q2BuyLiFUk3ABUR8ShwmaSRQBWwHBiXbrtc0o0kCQfghohYXqhYzcysfoqIYsfQKMrLy6OioqLYYZiZNSuSZkREeVa5Yjdwm5lZM5CZLCRdmrYbmJlZC5XPlcXuJHdfP5gO36FCB2VmZqUlM1lExLXAAOCXJA3Q8yTdLGnvAsdmZmYlIq82i0hawd9JX1VAV+AhST8oYGxmZlYiMrvOSroc+CKwlOTGuX+PiPWSWgHzgK8XNkQzMyu2fO6z2A04NSLezF0YERslnVSYsMzMrJTkUw31GMkNcwBI6izpUICImFvnVmZmtsPIJ1n8HFiVM78qXWZmZi1EPslCkXObd0RspIDDhJiZWenJJ1kskHSZpDbp63JgQaEDMzOz0pFPsrgQ+BeSIcIXAYcC4wsZlJmZlZbM6qSIeI9keHEzM2uh8rnPoh3wJWA/oF318og4r4BxmZlZCcmnGupeoBfwWeAZkqfWfVjIoMzMrLTkkyw+GRHfAj6KiHuAz5O0W5iZWQuRT7JYn/58X9L+QBegZ+FCMjOzUpPP/RKT0udZXEvyDO2OwLcKGpWZmZWUepNFOljgBxGxAngW+ESTRGVmZiWl3mqo9G5tjyprZtbC5dNm8aSkqyT1lbRb9avgkZmZWcnIp83ijPTnxTnLAldJmZm1GPncwd2/KQIxM7PSlc8d3F+sbXlE/KrxwzEzs1KUTzXUITnT7YDjgb8BThZmZi1EPtVQl+bOS9oVmFKwiMzMrOTk0xuqpo8At2OYmbUg+bRZTCXp/QRJchkEPFjIoMzMrLTk02bxo5zpKuDNiFhUoHjMzKwE5ZMs/gksiYiPASTtIqksIhYWNDIzMysZ+bRZ/AbYmDO/IV1mZmYtRD7JYqeIWFc9k07vXLiQzMys1OSTLColjayekTQKWFq4kMzMrNTk02ZxITBZ0s/S+UVArXd1m5nZjimfm/JeB4ZL6pjOryp4VGZmVlIyq6Ek3Sxp14hYFRGrJHWVdFNTBGdmZqUhnzaLEyPi/eqZ9Kl5n8tn55JGSHpV0nxJE+opd5qkkFSezpdJWiNpZvq6PZ/jmZlZYeTTZtFaUtuIWAvJfRZA26yNJLUGJgInkLRzTJf0aETMqVGuE3A58EKNXbweEUPyiM/MzAosnyuLycBTkr4k6XzgCeCePLYbBsyPiAVpd9spwKhayt0IfB/4OM+YzcysiWUmi4j4PnATMBDYB/gD0C+PffcG3sqZX5Qu20TSUKBvRPxvLdv3l/R3Sc9IOrK2A0gaL6lCUkVlZWUeIZmZ2bbId9TZd0kGE/wCcBwwd3sPLKkVcAvwtVpWLwH2ioiDgCuB+yR1rlkoIiZFRHlElPfo0WN7QzIzszrU2WYh6VPAmPS1FHgAUEQcm+e+3wb65sz3SZdV6wTsDzwtCaAX8KikkRFRAawFiIgZkl4HPgVU5HlsMzNrRPU1cP8D+DNwUkTMB5B0RQP2PR0
YIKk/SZI4EziremVErAS6V89Lehq4KiIqJPUAlkfEBkmfAAYACxpwbDMza0T1VUOdSlIdNE3SLyQdDyjfHUdEFXAJSRvHXODBiHhF0g25w4fU4ShgtqSZwEPAhRGxPN9jm5lZ41JE1F9A6kDSi2kMSXvFr4BHIuKPhQ8vf+Xl5VFR4VoqM7OGkDQjIsqzyuXTG+qjiLgvIk4maXf4O/CNRojRzMyaiQY9gzsiVqQ9kI4vVEBmZlZ6GpQszMysZXKyMDOzTE4WZmaWycnCzMwyOVmYmVkmJwszM8vkZGFmZpmcLMzMLJOThZmZZXKyMDOzTE4WZmaWycnCzMwyOVmYmVkmJwszM8vkZGFmZpmcLMzMLJOThZmZZXKyMDOzTE4WZmaWycnCzMwyOVmYmVkmJwszM8u0U7EDMDOzhlu7FpYsgbffTuYPP7ywx3OyMDMrIRGwdGmSBBYvTn7WnH777aRMtUMOgRdfLGxcThZmZk1k9er6E8Dixclr3bott5OgZ0/Yc0/o2xeGD0+me/dOXmVlhY/dycLMbDtt2ADvvVd3Aqiefv/9rbft0GHzh/7hh2+e7t17c0LYYw9o06bp31cuJwszs3p88EH9CWDx4qTtYMOGLbdr1Qp69Uo+7AcMgGOO2fJqoDoZdO6cXDmUOicLM2uR1q+Hd97JvhpYtWrrbbt02fyBP3Bg7VcDu+8OrVs3/fsqFCcLM9uhRMCKFdlXA+++m5TN1aZNUuXTuzcccACMGFH71UCHDsV5b8XkZGFmzcbatXU3EFdPL14Ma9ZsvW23bps/7A86aOsrgd69oXv3pPrItuZkYWYlZebMpBtobdVDy5ZtXb5t280f9occsnUC2HPP5NWuXdO/lx2Jk4WZlYTnn4cbb4THH0/mq7uL9u4Ne+0Fhx1W+9VA167No4G4uXOyMLOiiYCnn06SxLRpSTXQd78LY8YkCaHY3UVts4LWzkkaIelVSfMlTain3GmSQlJ5zrJvptu9KumzhYzTzJpWRHIFceSRcNxxMHcu/PjHsHAhTJgA/fo5UZSagl1ZSGoNTAROABYB0yU9GhFzapTrBFwOvJCzbBBwJrAfsCfwpKRPRUSNnsxm1pxEwNSpcNNNMH16cjfyz34GX/qS2xRKXSGvLIYB8yNiQUSsA6YAo2opdyPwfeDjnGWjgCkRsTYi3gDmp/szs2Zo40Z46KGkF9KoUcm4RpMmwfz5cPHFThTNQSGTRW/grZz5RemyTSQNBfpGxP82dNt0+/GSKiRVVFZWNk7UZtZoqqpg8mTYf3/4wheSLq333AOvvQYXXAA771zsCC1fRetRLKkVcAvwtW3dR0RMiojyiCjv0aNH4wVnZttl/Xq4++7k7uazz07uXbj/fpgzB774RdjJXWuanUL+yt4G+ubM90mXVesE7A88raTfWy/gUUkj89jWzErQ2rVJkvje9+DNN5Nqp//+76TqyTe7NW+F/PVNBwZI6i9pZ5IG60erV0bEyojoHhFlEVEG/BUYGREVabkzJbWV1B8YABR4tHYz21arV8Ntt8Hee8NFFyUD6P3udzBjBpxyihPFjqBgVxYRUSXpEuAPQGvgroh4RdINQEVEPFrPtq9IehCYA1QBF7snlFnpWbUKfv5z+NGPkiG6jzoK/uu/4PjjfaPcjkZRcyStZqq8vDwqKiqKHYZZi7ByZdLl9T//MxmC44QT4Nprk2RhzYukGRFRnlXOzUxmlrfly+HWW5Mqp5Ur4fOfT5LE8OHFjswKzcnCzDK99x7ccgtMnJhUPZ1ySpIkhg4tdmTWVJwszKxOixfDD38Id9wBH38MZ5wB11yT3DdhLYuThZlt5c034Qc/gF/+Mrmx7uyz4ZvfhH32qb38+vXrWbRoER9//HHtBazo2rVrR58+fWizjYNuOVmY2Savv56M+nrPPUlvpnHjkoH9PvGJ+rdbtGgRnTp1oqysDLkbVMmJCJYtW8aiRYvo37//Nu3DvZ/NjH/8I7mzep9
94Ne/hgsvTMZtmjQpO1EAfPzxx3Tr1s2JokRJolu3btt15ecrC7MW7KWXkhFgf/Mb2GUXuPxyuOqq5DnUDeVEUdq29/fjZGHWAs2YkSSJ3/4WOnVKqpquuAI8xJrVxdVQZi3IX/4Cn/sclJcnT6i77rrkgUM339y0iWLyZCgrS4YBKStL5rfHsmXLGDJkCEOGDKFXr1707t170/y6devy2se5557Lq6++Wm+ZiRMnMnl7g22mfGVhtoOLgGeeSa4knnoqeXTpzTfDV74CXbo0fTyTJ8P48cl4UpD0vBo/PpkeO3bb9tmtWzdmzpwJwPXXX0/Hjh256qqrtigTEUQEreoYqOruu+/OPM7FF1+8bQHuAHxlYbaDioA//jEZguPYY+Hll5MxnBYuTLrBFiNRQHKfRnWiqLZ6dbK8sc2fP59BgwYxduxY9ttvP5YsWcL48eMpLy9nv/3244YbbthU9ogjjmDmzJlUVVWx6667MmHCBAYPHsxhhx3Ge++9B8C1117Lrbfeuqn8hAkTGDZsGPvssw/PP/88AB999BGnnXYagwYNYvTo0ZSXl29KZLmuu+46DjnkEPbff38uvPBCqodeeu211zjuuOMYPHgwQ4cOZeHChQDcfPPNHHDAAQwePJhrCnGyMjhZmO1gIpIRX4cPh89+Ft54Ixme44034Gtfgw4dihvfP//ZsOXb6x//+AdXXHEFc+bMoXfv3nzve9+joqKCWbNm8cQTTzBnzpyttlm5ciVHH300s2bN4rDDDuOuu+6qdd8RwYsvvsgPf/jDTYnnpz/9Kb169WLOnDl861vf4u9//3ut215++eVMnz6dl156iZUrV/L4448DMGbMGK644gpmzZrF888/T8+ePZk6dSqPPfYYL774IrNmzeJrX9vmxwBtMycLsx3Exo3w8MPJEBwnn5wM0XHHHcm9E5demvR2KgV77dWw5dtr7733prx88zh5999/P0OHDmXo0KHMnTu31mSxyy67cOKJJwJw8MEHb/p2X9Opp566VZnnnnuOM888E4DBgwez33771brtU089xbBhwxg8eDDPPPMMr7zyCitWrGDp0qWcfPLJQHIjXfv27XnyySc577zz2CX9Je62224NPxHbycnCrJnbsAHuuw8OOABGj4aPPkqGCX/ttaQtoG3bYke4pe98B9q333JZ+/bJ8kLokHMpNW/ePH7yk5/wpz/9idmzZzNixIha7z3YOed5r61bt6aqqqrWfbdNT259ZWqzevVqLrnkEh555BFmz57NeeedV/J3vztZmDVT69cnSWHgwM0Nw/fdB3PnwjnnwDaO6lBwY8cmN/v165fcJd6vXzK/rY3bDfHBBx/QqVMnOnfuzJIlS/jDH/7Q6Mc4/PDDefDBBwF46aWXar1yWbNmDa1ataJ79+58+OGHPPzwwwB07dqVHj16MHXqVCC52XH16tWccMIJ3HXXXaxZswaA5cuXN3rcWdwbyqyZWbs2GY7ju99NGquHDIGHHmpeT6QbO7ZpkkNNQ4cOZdCgQey7777069ePww8/vNGPcemll/LFL36RQYMGbXp1qdGboFu3bpxzzjkMGjSIPfbYg0MPPXTTusmTJ/PlL3+Za665hp133pmHH36Yk046iVmzZlFeXk6bNm04+eSTufHGGxs99vr44UdmzcSaNXDnnckAf4sWwbBh8K1vJc+UKPbN03PnzmXgwIHFDaJEVFVVUVVVRbt27Zg3bx6f+cxnmDdvHjvtVPzv5rX9nvzwI7MdxKpVcPvtSbfXd9+FI45IRoM94YTiJwnb2qpVqzj++OOpqqoiIrjjjjtKIlFsr+b/Dsx2UB98kDy69JZbkkeXHn88PPAAHH10sSOz+uy6667MmDGj2GE0OicLsxKzfHlyX8RPfgLvv58Mz3HttXDYYcWOzFoyJwuzElFZufnRpR9+CP/6r0mSOPjgYkdm5mRhVnRLliTtEbffnjRin346XH01HHhgsSMz28zJwqxI3no
Lvv/9pIdTVRWcdVaSJPbdt9iRmW2tmfTKNttxLFgAF1wAe++dDMfxb/8Gr74Kv/qVE8W2OvbYY7e6we7WW2/loosuqne7jh07ArB48WJGjx5da5ljjjmGrG75t956K6tzRkf83Oc+x/vvv59P6M2Gk4VZE3n11eTO6k99Cu69N0kYr78Ov/hFkjhs240ZM4YpU6ZssWzKlCmMGTMmr+333HNPHnrooW0+fs1k8fvf/55dd911m/dXilwNZdbINm5MGqhXrkxelZVJQnjgAWjXDi67LHl06Z57FjvSwvjqV6GWEbm3y5AhkI4MXqvRo0dz7bXXsm7dOnbeeWcWLlzI4sWLOfLII1m1ahWjRo1ixYoVrF+/nptuuolRo0Ztsf3ChQs56aSTePnll1mzZg3nnnsus2bNYt999900xAbARRddxPTp01mzZg2jR4/m29/+NrfddhuLFy/m2GOPpXv37kybNo2ysjIqKiro3r07t9xyy6ZRa88//3y++tWvsnDhQk488USOOOIInn/+eXr37s3//M//bBoosNrUqVO56aabWLduHd26dWPy5MnsvvvurFq1iksvvZSKigokcd1113Haaafx+OOPc/XVV7Nhwwa6d+/OU0891Wi/AycLsxxVVcn9DdUf9DVf9a2rXv/hh8kw4bk6doSvfx2uvBJ69izOe9uR7bbbbgwbNozHHnuMUaNGMWXKFE4//XQk0a5dOx555BE6d+7M0qVLGT58OCNHjqzzmdQ///nPad++PXPnzmX27NkMHTp007rvfOc77LbbbmzYsIHjjz+e2bNnc9lll3HLLbcwbdo0unfvvsW+ZsyYwd13380LL7xARHDooYdy9NFH07VrV+bNm8f999/PL37xC04//XQefvhhzj777C22P+KII/jrX/+KJO68805+8IMf8OMf/5gbb7yRLl268NJLLwGwYsUKKisrueCCC3j22Wfp379/o48f5WRhO4y1a/P/YK9rXc2H8tSmbdvkwUFdukDnzsnP3XffvKzmui5dkseYFmFU6aKo7wqgkKqroqqTxS9/+UsgeebE1VdfzbPPPkurVq14++23effdd+nVq1et+3n22We57LLLADjwwAM5MKdb2oMPPsikSZOoqqpiyZIlzJkzZ4v1NT333HOccsopm0a+PfXUU/nzn//MyJEj6d+/P0OGDAHqHgZ90aJFnHHGGSxZsoR169bRv39/AJ588sktqt26du3K1KlTOeqoozaVaexhzJ0srOgiki6j2/JtPnfd2rXZx2rffusP9b322vKDvb4P/S60Tw18AAAJ10lEQVRdSm/Ib0uMGjWKK664gr/97W+sXr2ag9MbVCZPnkxlZSUzZsygTZs2lJWVbdNw4G+88QY/+tGPmD59Ol27dmXcuHHbNax425w/pNatW29R3VXt0ksv5corr2TkyJE8/fTTXH/99dt8vO3lZGHbZePGZOyi7fk2/8EHSfVPluoP7eqfPXvCgAH1f7Dnvjp1Kt1hu237dezYkWOPPZbzzjtvi4btlStX0rNnT9q0acO0adN48803693PUUcdxX333cdxxx3Hyy+/zOzZs4FkePMOHTrQpUsX3n33XR577DGOOeYYADp16sSHH364VTXUkUceybhx45gwYQIRwSOPPMK9996b93tauXIlvXv3BuCee+7ZtPyEE05g4sSJmx7xumLFCoYPH85XvvIV3njjjU3VUI15ddHik8Xy5XDkkcWOonmJSB6wU/1BnzVwcatWW39w9+0L++2X/7f5Tp2az/DbVjxjxozhlFNO2aKKZuzYsZx88skccMABlJeXs29G/+SLLrqIc889l4EDBzJw4MBNVyiDBw/moIMOYt9996Vv375bDG8+fvx4RowYwZ577sm0adM2LR86dCjjxo1j2LBhQNLAfdBBB9X55L2arr/+er7whS/QtWtXjjvuON544w0geRb4xRdfzP7770/r1q257rrrOPXUU5k0aRKnnnoqGzd
upGfPnjzxxBN5HScfLX6I8pUr4fzzCxDQDq5Dh+xv8tXrOnTw6Kg7Og9R3jx4iPLt0KUL/OY3xY7CzKy0+cLezMwyOVmYWaPYUaq0d1Tb+/spaLKQNELSq5LmS5pQy/oLJb0kaaak5yQNSpeXSVqTLp8p6fZCxmlm26ddu3YsW7bMCaNERQTLli2jXbt227yPgrVZSGoNTAROABYB0yU9GhFzcordFxG3p+VHArcAI9J1r0fEkELFZ2aNp0+fPixatIjKyspih2J1aNeuHX369Nnm7QvZwD0MmB8RCwAkTQFGAZuSRUR8kFO+A+CvJWbNUJs2bTbdOWw7pkJWQ/UG3sqZX5Qu24KkiyW9DvwAuCxnVX9Jf5f0jKRa74SQNF5ShaQKf6MxMyucojdwR8TEiNgb+AZwbbp4CbBXRBwEXAncJ6lzLdtOiojyiCjv0aNH0wVtZtbCFDJZvA30zZnvky6ryxTgXwEiYm1ELEunZwCvA58qUJxmZpahkG0W04EBkvqTJIkzgbNyC0gaEBHz0tnPA/PS5T2A5RGxQdIngAHAgvoONmPGjKWS6h/0pX7dgaXbsX2hOK6GcVwN47gaZkeMq18+hQqWLCKiStIlwB+A1sBdEfGKpBuAioh4FLhE0qeB9cAK4Jx086OAGyStBzYCF0ZEvYOzR8R21UNJqsjnlvem5rgaxnE1jONqmJYcV0GH+4iI3wO/r7HsP3KmL69ju4eBhwsZm5mZ5a/oDdxmZlb6nCw2m1TsAOrguBrGcTWM42qYFhvXDjNEuZmZFY6vLMzMLJOThZmZZWpRyULSXZLek/RyHesl6bZ0lNzZkoaWSFzHSFqZMwrvf9RWrgBx9ZU0TdIcSa9I2qr3WjHOWZ5xNfk5k9RO0ouSZqVxfbuWMm0lPZCerxcklZVIXOMkVeacryZ7fqSk1unQPr+rZV2Tn688YirmuVqYM1L3Vo8GLej/Y0S0mBfJ/RtDgZfrWP854DFAwHDghRKJ6xjgd0U4X3sAQ9PpTsBrwKBin7M842ryc5aeg47pdBvgBWB4jTJfAW5Pp88EHiiRuMYBP2vqv7H02FcC99X2+yrG+cojpmKeq4VA93rWF+z/sUVdWUTEs0B9N/eNAn4Vib8Cu0raowTiKoqIWBIRf0unPwTmsvVgkE1+zvKMq8ml52BVOtsmfdXsQTIKuCedfgg4XirsE8rzjKsoJPUhGb3hzjqKNPn5yiOmUlaw/8cWlSzykNdIuUVyWFqN8Jik/Zr64Onl/0Ek30pzFfWc1RMXFOGcpdUXM4H3gCcios7zFRFVwEqgWwnEBXBaWnXxkKS+tawvhFuBr5OM1FCbYpyvrJigOOcKkiT/R0kzJI2vZX3B/h+dLJqHvwH9ImIw8FPgt015cEkdSe6o/2ps+QySosqIqyjnLCI2RPLQrj7AMEn7N8Vxs+QR11SgLCIOBJ5g87f5gpF0EvBeJIOFloQ8Y2ryc5XjiIgYCpwIXCzpqKY6sJPFlho6Um6TiIgPqqsRIhlCpY2k7k1xbEltSD6QJ0fEf9dSpCjnLCuuYp6z9JjvA9PY/OTHapvOl6SdgC7AsmLHFRHLImJtOnsncHAThHM4MFLSQpJRp4+T9OsaZZr6fGXGVKRzVX3st9Of7wGPkDxkLlfB/h+dLLb0KPDFtEfBcGBlRCwpdlCSelXX00oaRvJ7K/gHTHrMXwJzI+KWOoo1+TnLJ65inDNJPSTtmk7vQvJI4X/UKPYomwfMHA38KdKWyWLGVaNeeyRJO1BBRcQ3I6JPRJSRNF7/KSLOrlGsSc9XPjEV41ylx+0gqVP1NPAZoGYPyoL9PxZ0IMFSI+l+kl4y3SUtAq4jaewjkmeB/56kN8F8YDVwbonENRq4SFIVsAY4s9AfMKnDgX8DXkrruwGuBvbKia0Y5yyfuIpxzvYA7lHy/PlWwIMR8TttOdLyL4F7Jc0n6dRwZoFjyjeuyySNBKrSuMY1QVy1KoHzlRVTsc7V7sA
j6XegnYD7IuJxSRdC4f8fPdyHmZllcjWUmZllcrIwM7NMThZmZpbJycLMzDI5WZiZWSYnC7MMkjbkjDA6U9KERtx3meoYbdislLSo+yzMttGadKgMsxbLVxZm2yh9tsAP0ucLvCjpk+nyMkl/Sgeae0rSXuny3SU9kg5uOEvSv6S7ai3pF0qeNfHH9C5rJF2m5JkdsyVNKdLbNAOcLMzysUuNaqgzctatjIgDgJ+RjFYKycCF96QDzU0GbkuX3wY8kw5uOBR4JV0+AJgYEfsB7wOnpcsnAAel+7mwUG/OLB++g9ssg6RVEdGxluULgeMiYkE6sOE7EdFN0lJgj4hYny5fEhHdJVUCfXIGoaseYv2JiBiQzn8DaBMRN0l6HFhFMmLub3OeSWHW5HxlYbZ9oo7phlibM72BzW2JnwcmklyFTE9HXTUrCicLs+1zRs7Pv6TTz7N5wLuxwJ/T6aeAi2DTw4i61LVTSa2AvhExDfgGydDcW13dmDUVf1Mxy7ZLzui2AI9HRHX32a6SZpNcHYxJl10K3C3p34FKNo/8eTkwSdKXSK4gLgLqGj66NfDrNKEIuC19FoVZUbjNwmwbpW0W5RGxtNixmBWaq6HMzCyTryzMzCyTryzMzCyTk4WZmWVysjAzs0xOFmZmlsnJwszMMv1/gb9XNhhRuVkAAAAASUVORK5CYII=\n", 1399 | "text/plain": [ 1400 | "
" 1401 | ] 1402 | }, 1403 | "metadata": { 1404 | "needs_background": "light" 1405 | }, 1406 | "output_type": "display_data" 1407 | } 1408 | ], 1409 | "source": [ 1410 | "training_and_validation_loss(histories['race_ethnicity'])\n", 1411 | "training_and_validation_accuracy(histories['race_ethnicity'])" 1412 | ] 1413 | }, 1414 | { 1415 | "cell_type": "markdown", 1416 | "metadata": {}, 1417 | "source": [ 1418 | "## Spacy Text Categorization\n", 1419 | "https://spacy.io/usage/examples#textcat\n", 1420 | "It's a CNN, but with smart defaults for a lot of the options." 1421 | ] 1422 | }, 1423 | { 1424 | "cell_type": "code", 1425 | "execution_count": 85, 1426 | "metadata": {}, 1427 | "outputs": [ 1428 | { 1429 | "name": "stdout", 1430 | "output_type": "stream", 1431 | "text": [ 1432 | "Using 2968 examples (2374 training, 594 evaluation)\n", 1433 | "Training the model...\n", 1434 | "LOSS \t P \t R \t F \n", 1435 | "1 . 128.089\t0.726\t0.908\t0.807\n", 1436 | "2 . 64.358\t0.795\t0.847\t0.820\n", 1437 | "3 . 39.252\t0.789\t0.858\t0.822\n", 1438 | "4 . 27.853\t0.802\t0.853\t0.827\n", 1439 | "5 . 19.732\t0.801\t0.868\t0.833\n", 1440 | "6 . 15.693\t0.801\t0.858\t0.828\n", 1441 | "7 . 12.891\t0.795\t0.868\t0.830\n", 1442 | "8 . 10.547\t0.800\t0.866\t0.832\n", 1443 | "9 . 8.722\t0.804\t0.866\t0.834\n", 1444 | "10. 8.085\t0.813\t0.847\t0.830\n", 1445 | "11. 7.108\t0.815\t0.868\t0.841\n", 1446 | "12. 7.256\t0.821\t0.845\t0.833\n", 1447 | "13. 6.614\t0.820\t0.850\t0.835\n", 1448 | "14. 6.221\t0.815\t0.837\t0.826\n", 1449 | "15. 5.845\t0.812\t0.839\t0.825\n", 1450 | "16. 7.384\t0.807\t0.845\t0.825\n", 1451 | "17. 6.187\t0.806\t0.853\t0.829\n", 1452 | "18. 7.135\t0.797\t0.858\t0.826\n", 1453 | "19. 5.674\t0.802\t0.863\t0.831\n", 1454 | "20. 
5.625\t0.811\t0.847\t0.829\n", 1455 | "Saved model to data/spacy\n" 1456 | ] 1457 | } 1458 | ], 1459 | "source": [ 1460 | "from spacy.util import minibatch, compounding\n", 1461 | "from pathlib import Path\n", 1462 | "\n", 1463 | "nlp_textcat = spacy.load('en_core_web_lg')\n", 1464 | "\n", 1465 | "SHOULD_EQUALIZE_SPACY = True\n", 1466 | "VALIDATION_SET_PERCENTAGE = 0.2\n", 1467 | "output_dir = 'data/spacy/'\n", 1468 | "n_iter = 20 # 5 might be plenty?\n", 1469 | "COLUMN_OF_INTEREST_SPACY = \"race_ethnicity\"\n", 1470 | "\n", 1471 | "train_df, test_df = train_test_split(train_test_data_one_hot, test_size=0.2, shuffle=True)\n", 1472 | "\n", 1473 | "def load_data(limit=0, split=0.8, column=COLUMN_OF_INTEREST_SPACY):\n", 1474 | " \"\"\"Load data from the IMDB dataset.\"\"\"\n", 1475 | " # Partition off part of the train data for evaluation\n", 1476 | " train_data = train_df[[\"description\", column]]\n", 1477 | " train_data = train_data.sample(frac=1).reset_index(drop=True)\n", 1478 | " train_data = train_data[-limit:]\n", 1479 | " texts, labels = zip(*train_data.values)\n", 1480 | " cats = [{'POSITIVE-{}'.format(column): bool(y)} for y in labels]\n", 1481 | " split = int(len(train_data) * split)\n", 1482 | " return (texts[:split], cats[:split]), (texts[split:], cats[split:])\n", 1483 | "\n", 1484 | "def evaluate(tokenizer, textcat, texts, cats):\n", 1485 | " docs = (tokenizer(text) for text in texts)\n", 1486 | " tp = 0.0 # True positives\n", 1487 | " fp = 1e-8 # False positives\n", 1488 | " fn = 1e-8 # False negatives\n", 1489 | " tn = 0.0 # True negatives\n", 1490 | " for i, doc in enumerate(textcat.pipe(docs)):\n", 1491 | " gold = cats[i]\n", 1492 | " for label, score in doc.cats.items():\n", 1493 | " if label not in gold:\n", 1494 | " continue\n", 1495 | " if score >= 0.5 and gold[label] >= 0.5:\n", 1496 | " tp += 1.\n", 1497 | " elif score >= 0.5 and gold[label] < 0.5:\n", 1498 | " fp += 1.\n", 1499 | " elif score < 0.5 and gold[label] < 0.5:\n", 1500 | " tn += 
1\n", 1501 | " elif score < 0.5 and gold[label] >= 0.5:\n", 1502 | " fn += 1\n", 1503 | " precision = tp / (tp + fp)\n", 1504 | " recall = tp / (tp + fn)\n", 1505 | " f_score = 2 * (precision * recall) / (precision + recall)\n", 1506 | " return {'textcat_p': precision, 'textcat_r': recall, 'textcat_f': f_score}\n", 1507 | "\n", 1508 | "\n", 1509 | "\n", 1510 | "if output_dir is not None:\n", 1511 | " output_dir = Path(output_dir)\n", 1512 | " if not output_dir.exists():\n", 1513 | " output_dir.mkdir()\n", 1514 | "\n", 1515 | "# add the text classifier to the pipeline if it doesn't exist\n", 1516 | "# nlp.create_pipe works for built-ins that are registered with spaCy\n", 1517 | "if 'textcat' not in nlp_textcat.pipe_names:\n", 1518 | " textcat = nlp_textcat.create_pipe('textcat')\n", 1519 | " nlp_textcat.add_pipe(textcat, last=True)\n", 1520 | "# otherwise, get it, so we can add labels to it\n", 1521 | "else:\n", 1522 | " textcat = nlp_textcat.get_pipe('textcat')\n", 1523 | "\n", 1524 | "# add label to text classifier\n", 1525 | "label_name = 'POSITIVE-{}'.format(COLUMN_OF_INTEREST_SPACY)\n", 1526 | "textcat.add_label(label_name)\n", 1527 | "\n", 1528 | "(train_texts, train_cats), (dev_texts, dev_cats) = load_data(split=1.0-VALIDATION_SET_PERCENTAGE)\n", 1529 | "print(\"Using {} examples ({} training, {} evaluation)\"\n", 1530 | " .format(len(train_texts) + len(dev_texts), len(train_texts), len(dev_texts)))\n", 1531 | "train_data = list(zip(train_texts,\n", 1532 | " [{'cats': cats} for cats in train_cats]))\n", 1533 | "\n", 1534 | "# get names of other pipes to disable them during training\n", 1535 | "other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'textcat']\n", 1536 | "with nlp_textcat.disable_pipes(*other_pipes): # only train textcat\n", 1537 | " optimizer = nlp_textcat.begin_training()\n", 1538 | " print(\"Training the model...\")\n", 1539 | " print('{:^5}\\t{:^5}\\t{:^5}\\t{:^5}'.format('LOSS', 'P', 'R', 'F'))\n", 1540 | " for i in range(n_iter):\n", 
1541 | " losses = {}\n", 1542 | " # batch up the examples using spaCy's minibatch\n", 1543 | " batches = minibatch(train_data, size=compounding(4., 32., 1.001))\n", 1544 | " for batch in batches:\n", 1545 | " texts, annotations = zip(*batch)\n", 1546 | " nlp_textcat.update(texts, annotations, sgd=optimizer, drop=0.2,\n", 1547 | " losses=losses)\n", 1548 | " with textcat.model.use_params(optimizer.averages):\n", 1549 | " # evaluate on the dev data split off in load_data()\n", 1550 | " scores = evaluate(nlp.tokenizer, textcat, dev_texts, dev_cats)\n", 1551 | " print('{4: <2}. {0:.3f}\\t{1:.3f}\\t{2:.3f}\\t{3:.3f}' # print a simple table\n", 1552 | " .format(losses['textcat'], scores['textcat_p'],\n", 1553 | " scores['textcat_r'], scores['textcat_f'], i+1))\n", 1554 | "\n", 1555 | "\n", 1556 | "if output_dir is not None:\n", 1557 | " with nlp_textcat.use_params(optimizer.averages):\n", 1558 | " nlp_textcat.to_disk(output_dir)\n", 1559 | " print(\"Saved model to\", output_dir)" 1560 | ] 1561 | }, 1562 | { 1563 | "cell_type": "code", 1564 | "execution_count": 116, 1565 | "metadata": {}, 1566 | "outputs": [ 1567 | { 1568 | "data": { 1569 | "application/vnd.jupyter.widget-view+json": { 1570 | "model_id": "1e117bcf60e24c12bc8310369969c663", 1571 | "version_major": 2, 1572 | "version_minor": 0 1573 | }, 1574 | "text/plain": [ 1575 | "HBox(children=(IntProgress(value=0, max=747), HTML(value='')))" 1576 | ] 1577 | }, 1578 | "metadata": {}, 1579 | "output_type": "display_data" 1580 | }, 1581 | { 1582 | "name": "stdout", 1583 | "output_type": "stream", 1584 | "text": [ 1585 | "\n", 1586 | "spaCy Area under ROC curve: 0.8214486024989873\n", 1587 | "spaCy Area under PR curve: 0.8822858437821977\n" 1588 | ] 1589 | } 1590 | ], 1591 | "source": [ 1592 | "from tqdm import tqdm_notebook as tqdm\n", 1593 | "predicted_probabilities = []\n", 1594 | "for row in tqdm(test_df[[\"description\", COLUMN_OF_INTEREST_SPACY]].values):\n", 1595 | " doc = nlp_textcat(row[0])\n", 1596 | " 
predicted_probabilities.append(doc.cats[\"POSITIVE-{}\".format(COLUMN_OF_INTEREST_SPACY)])\n", 1597 | " \n", 1598 | "print(\"spaCy Area under PR curve: \", average_precision_score(test_df[COLUMN_OF_INTEREST_SPACY], predicted_probabilities))\n" 1599 | ] 1600 | }, 1601 | { 1602 | "cell_type": "markdown", 1603 | "metadata": {}, 1604 | "source": [ 1605 | "## Bonus #1: What confuses the classifier?\n", 1606 | "\n", 1607 | "Let's loop through the test set and see for which documents the Naive Bayes classifier gives the wrong answer. Then we'll use our human judgment to see if the computer is really giving the 'wrong' answer or if the data is just coded wrong.\n", 1608 | "\n", 1609 | "In some cases the person who submitted the tip didn't classify it in the way we'd want to. In that case, the computer's not really wrong.\n", 1610 | "\n", 1611 | "In some cases, the computer is not giving the answer we want it to. Sometimes, that's \"forgivable\" and we can't expect it to (e.g. misspellings or descriptions that use unfamiliar wordings); other times, the model is _really_ doing the wrong thing. It's that last case that we're trying to eliminate overall.\n", 1612 | "\n", 1613 | "But the purpose of this exercise is to get a sense of what the model is missing and what tips are mis-classified by humans. \n", 1614 | "\n", 1615 | "What I've noticed:\n", 1616 | "\n", 1617 | "- the model doesn't seem to know the phrase \"N word\"; even though that should be a clear sign of a race-related tip. (Which I fix above).\n", 1618 | "- the model classifies many Judaism-related tips as non-race/ethnicity-related (when teh submitter classified it that way). In my opinion, that may be okay if the model categorizes those as religion-related. (But like, it's a tough category in the first place. 
If humans can't agree on the right answer, we can't expect the computer to settle it for us!)\n", 1619 | "- a forgivable error: \"get out of the country\" gets coded as race-related even when the person reporting it is from England. (i.e. the computer is picking up on a real signal that just happens not to apply here.)" 1620 | ] 1621 | }, 1622 | { 1623 | "cell_type": "code", 1624 | "execution_count": null, 1625 | "metadata": { 1626 | "collapsed": true 1627 | }, 1628 | "outputs": [], 1629 | "source": [ 1630 | "test_labels_nb = test_df_nb[class_of_interest]\n", 1631 | "test_features_nb = test_df_nb[\"description\"]\n", 1632 | "test_features_nb_vec = vectorizer.transform(test_features_nb)\n", 1633 | "\n", 1634 | "predicted_probabilities = nbclassifier.predict_proba(test_features_nb_vec)[:,1]\n", 1635 | "for (row, predicted_proba) in zip(test_df_nb[[\"description\", class_of_interest]].values, predicted_probabilities):\n", 1636 | "    if row[1] == (1.0 if predicted_proba > 0.5 else 0.0):\n", 1637 | "        continue\n", 1638 | "    print(\"Classifier sez {}; gold-standard coded as {}\".format(predicted_proba, row[1]))\n", 1639 | "    print(\"Text: {}\".format(row[0]))\n", 1640 | "    print(\"\\n---------------------------------------------\\n\")" 1641 | ] 1642 | }, 1643 | { 1644 | "cell_type": "markdown", 1645 | "metadata": {}, 1646 | "source": [ 1647 | "`I can't actually show you the output.`" 1648 | ] 1649 | }, 1650 | { 1651 | "cell_type": "markdown", 1652 | "metadata": {}, 1653 | "source": [ 1654 | "## Bonus #2. Interactive Example\n", 1655 | "\n", 1656 | "We can figure out what words in a description contribute most to the score (again using Naive Bayes).\n", 1657 | "\n", 1658 | "You can also see this code in an [interactive form here](https://s3.amazonaws.com/qz-aistudio-public/dochate.html).\n", 1659 | "\n", 1660 | "For a tip, we remove each n-gram and run the modified tip through the classifier. 
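Here's a minimal, self-contained sketch of that remove-and-re-score idea. The `toy_classify` scorer below is invented for illustration (it stands in for the real vectorizer-plus-Naive-Bayes pipeline), and for simplicity it only ablates single words:

```python
# Leave-one-out sketch: drop each word, re-score the shortened text,
# and rank words by how much their removal moves the score.
# NOTE: toy_classify is a made-up scorer, not the notebook's model.

def toy_classify(text):
    words = text.split()
    score = 0.1  # base rate
    if "slur" in words:
        score += 0.6  # pretend this word is strong evidence
    if "store" in words:
        score += 0.1  # pretend this word is weak evidence
    return score

def ablate_words(text):
    """Return (removed_word, text_without_that_word) pairs."""
    words = text.split()
    return [(words[i], " ".join(words[:i] + words[i + 1:]))
            for i in range(len(words))]

def biggest_difference_makers(text, classify, top_n=2):
    baseline = classify(text)
    diffs = [(word, baseline - classify(shorter))
             for word, shorter in ablate_words(text)]
    return sorted(diffs, key=lambda pair: -abs(pair[1]))[:top_n]

print(biggest_difference_makers("someone yelled a slur at the store", toy_classify))
```

Removing "slur" moves the toy score the most, so it tops the list; the notebook's `but_why` functions below do the same thing, but also for bigrams and trigrams.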
The n-grams that, when removed, cause the largest change in the model's guess are the ones that have the biggest effect. This is kind of a hack: some models might take into account more context than just trigrams or do so in different ways. But this gives us a sense of what the model is basing its decision on. \n", 1661 | "\n", 1662 | "You'll find some times when the model makes the right call... and other times when it has mistakenly learned that a phrase that ought to be irrelevant has a big effect. (Can you think of why that might happen?)\n" 1663 | ] 1664 | }, 1665 | { 1666 | "cell_type": "code", 1667 | "execution_count": 67, 1668 | "metadata": {}, 1669 | "outputs": [], 1670 | "source": [ 1671 | "import re\n", 1672 | "\n", 1673 | "def classify_all(text, cutoff=0.5):\n", 1674 | "    classifications = []\n", 1675 | "    vectorized_text = vectorizer.transform([text])\n", 1676 | "    for class_of_interest, classifier in classifiers.items():\n", 1677 | "        proba = classifier.predict_proba(vectorized_text)[:,1][0]\n", 1678 | "        if proba > cutoff:\n", 1679 | "            classifications.append(class_of_interest)\n", 1680 | "    return classifications\n", 1681 | "    \n", 1682 | "\n", 1683 | "def permute_text(text):\n", 1684 | "    words = ' '.join(re.sub(r'[^A-Za-z0-9]', ' ', text).split()).split()\n", 1685 | "    bigrams = list(zip(words[:-1], words[1:]))\n", 1686 | "    trigrams = list(zip(bigrams[:-1], words[2:]))\n", 1687 | "    return [(word, ' '.join(words[:i] + words[i+1:]) ) for i, word in enumerate(words)] + [(' '.join(bigram), ' '.join(words[:i] + words[i+2:]) ) for i, bigram in enumerate(bigrams)] + [( ' '.join(trigram[0] + (trigram[1],)), ' '.join(words[:i] + words[i+3:]) ) for i, trigram in enumerate(trigrams)]\n", 1688 | "\n", 1689 | "def but_why_with_func(text, classify):\n", 1690 | "    baseline = classify(text)\n", 1691 | "    permuted_texts = permute_text(text)\n", 1692 | "    diffs = [(deleted_word, baseline - classify(permuted_text)) for (deleted_word, permuted_text) in permuted_texts]\n", 
1693 | " biggest_diffs = sorted(diffs, key=lambda word_diff: -abs(word_diff[1]))[:4]\n", 1694 | " return baseline, biggest_diffs \n", 1695 | " \n", 1696 | "def but_why(text, class_of_interest=\"race_ethnicity\"):\n", 1697 | " baseline, biggest_diffs = but_why_with_func(text, \n", 1698 | " lambda x: classifiers[class_of_interest].predict_proba(vectorizer.transform([x]))[:,1][0])\n", 1699 | " return baseline, biggest_diffs\n", 1700 | "\n", 1701 | "def inspect(text, class_of_interest=None):\n", 1702 | " text = clean(text)\n", 1703 | " print(\"Text:\")\n", 1704 | " print()\n", 1705 | " print(\" \" + text)\n", 1706 | " print()\n", 1707 | "\n", 1708 | " print(\"Predicted targeted-because: \")\n", 1709 | " for target in classify_all(text):\n", 1710 | " print(' * ' + target)\n", 1711 | " if class_of_interest: \n", 1712 | " print()\n", 1713 | " print(\"Why that {} prediction?\".format(class_of_interest))\n", 1714 | " baseline, biggest_diffs = but_why(text, class_of_interest)\n", 1715 | " print(\"predicted probability: {0:.2f}%\".format(baseline * 100))\n", 1716 | " print(\"top difference-makers:\")\n", 1717 | " for (deleted_word, diff) in biggest_diffs:\n", 1718 | " print(\" - {0}, {1:.2f}%\".format(deleted_word, diff * 100))\n", 1719 | "\n" 1720 | ] 1721 | }, 1722 | { 1723 | "cell_type": "code", 1724 | "execution_count": 69, 1725 | "metadata": {}, 1726 | "outputs": [ 1727 | { 1728 | "name": "stdout", 1729 | "output_type": "stream", 1730 | "text": [ 1731 | "Text:\n", 1732 | "\n", 1733 | " someone called me the ndashword at the grocery store\n", 1734 | "\n", 1735 | "Predicted targeted-because: \n", 1736 | " * race\n", 1737 | " * race_ethnicity\n", 1738 | "\n", 1739 | "Why that race_ethnicity prediction?\n", 1740 | "predicted probability: 69.56%\n", 1741 | "top difference-makers:\n", 1742 | " - ndashword at, 18.51%\n", 1743 | " - the ndashword at, 18.47%\n", 1744 | " - ndashword at the, 18.47%\n", 1745 | " - ndashword, 18.08%\n" 1746 | ] 1747 | } 1748 | ], 1749 | "source": [ 
1750 | "text = \"someone called me the n-word at the grocery store\"\n", 1751 | "inspect(text, 'race_ethnicity')" 1752 | ] 1753 | }, 1754 | { 1755 | "cell_type": "markdown", 1756 | "metadata": {}, 1757 | "source": [ 1758 | "## Bonus 3: Grid Search for targeted_because\n", 1759 | "\n", 1760 | "My CNN managed to slightly outperform Naive Bayes for predicting `targeted_because`. Let's see if we can fiddle with the knobs (technically \"hyperparameters\" -- settings like the size of the embedding layer, the learning rate or the dropout amount -- and get better results.\n", 1761 | "\n", 1762 | "There's no science to this. We're just fiddling with knobs. Grid Search is a technique for fiddling with those knobs systematically...\n", 1763 | "\n", 1764 | "This took a lot of fiddling (Tensorflow doesn't parallelize well, due to memory reservations that aren't easily freed. That's what a lot of random-seed setting code above does.)\n", 1765 | "\n", 1766 | "I did this on a p3.2xlarge instance, which costs \\\\$3/hr and testing 11250 combinations 3x each, took 20hr 52min -- costing \\\\$60ish. (If you do everything right the first time, which I didn't.)\n", 1767 | "\n", 1768 | "Results look like this: Best: 0.792443 using {'batch_size': 256, 'dropout1': 0.1, 'dropout2': 0.0, 'dropout3': 0.0, 'dropout_embedding': 0.0, 'embedding_dim': 32, 'epochs': 20, 'kernel_size': 5, 'learning_rate': 0.001, 'num_filters': 256, 'verbose': 0} plus results for every combination.\n", 1769 | "\n", 1770 | "I used this tutorial: https://machinelearningmastery.com/grid-search-hyperparameters-deep-learning-models-python-keras/\n", 1771 | "\n", 1772 | "The best result was 0.898847 using `{'batch_size': 256, 'dropout1': 0.0, 'dropout2': 0.1, 'dropout3': 0.2, 'dropout_embedding': 0.0, 'embedding_dim': 32, 'epochs': 30, 'kernel_size': 5, 'learning_rate': 0.0005, 'num_filters': 256, 'verbose': 0}`." 
1773 | ] 1774 | }, 1775 | { 1776 | "cell_type": "code", 1777 | "execution_count": null, 1778 | "metadata": {}, 1779 | "outputs": [], 1780 | "source": [ 1781 | "from tensorflow.keras.wrappers.scikit_learn import KerasClassifier\n", 1782 | "from sklearn.model_selection import GridSearchCV\n", 1783 | "\n", 1784 | "keras.backend.clear_session()\n", 1785 | "np.random.seed(RANDOM_SEED)\n", 1786 | "random.seed(RANDOM_SEED)\n", 1787 | "tf.set_random_seed(RANDOM_SEED)\n", 1788 | "\n", 1789 | "\n", 1790 | "should_equalize = False\n", 1791 | "\n", 1792 | "histories = {}\n", 1793 | "\n", 1794 | "train_data = np.array(encode_texts(train_df[\"description\"]))\n", 1795 | "test_data = np.array(encode_texts(test_df[\"description\"]))\n", 1796 | "train_data = keras.preprocessing.sequence.pad_sequences(train_data,\n", 1797 | " value=word_index[\"\"],\n", 1798 | " padding='post',\n", 1799 | " maxlen=256)\n", 1800 | "test_data = keras.preprocessing.sequence.pad_sequences(test_data,\n", 1801 | " value=word_index[\"\"],\n", 1802 | " padding='post',\n", 1803 | " maxlen=256)\n", 1804 | "for class_of_interest in [\"race_ethnicity\"]:\n", 1805 | " train_labels = train_df[class_of_interest]\n", 1806 | " test_labels = test_df[class_of_interest]\n", 1807 | "\n", 1808 | " if should_equalize:\n", 1809 | " equalized_train_data, equalized_train_labels = equalize_classes_keras(train_data, train_labels)\n", 1810 | " else:\n", 1811 | " equalized_train_data = train_data.copy()\n", 1812 | " equalized_train_labels = train_labels.copy()\n", 1813 | "\n", 1814 | " x_val = equalized_train_data[:VALIDATION_SET_SIZE]\n", 1815 | " partial_x_train = equalized_train_data[VALIDATION_SET_SIZE:]\n", 1816 | "\n", 1817 | " y_val = equalized_train_labels[:VALIDATION_SET_SIZE]\n", 1818 | " partial_y_train = equalized_train_labels[VALIDATION_SET_SIZE:]\n", 1819 | " \n", 1820 | " # different bits\n", 1821 | " parameters_cnn = {\n", 1822 | " \"learning_rate\": (0.001,),\n", 1823 | " \"dropout_embedding\": (0.0, 0.1, 0.2, 
0.3, 0.4),\n", 1824 | " \"dropout1\": (0.0, 0.1, 0.2, 0.3, 0.4),\n", 1825 | " \"dropout2\": (0.0, 0.1, 0.2, 0.3, 0.4),\n", 1826 | " \"dropout3\": (0.0, 0.1, 0.2, 0.3, 0.4),\n", 1827 | " \"embedding_dim\": (16, 32), # default 16\n", 1828 | " \"num_filters\": (32,64,256), # default 128\n", 1829 | " \"kernel_size\": (3,5,7), # default 5\n", 1830 | "\n", 1831 | " # these aren't actually options we're messing around with parameters\n", 1832 | " \"epochs\": (20,),\n", 1833 | " \"batch_size\": (256,),\n", 1834 | " \"validation_data\": [(x_val, y_val)],\n", 1835 | " \"verbose\": (0,),\n", 1836 | " }\n", 1837 | "\n", 1838 | " \n", 1839 | " model = KerasClassifier(build_fn=cnn_model) \n", 1840 | " gridsearcher = GridSearchCV(model, parameters_cnn, scoring='average_precision', verbose=0, n_jobs=1)\n", 1841 | " grid_result = gridsearcher.fit(partial_x_train, partial_y_train)\n", 1842 | " \n", 1843 | " # summarize results\n", 1844 | " print(\"Best: %f using %s\" % (grid_result.best_score_, {k:v for k,v in grid_result.best_params_.items() if k != 'validation_data'}))\n", 1845 | " means = grid_result.cv_results_['mean_test_score']\n", 1846 | " stds = grid_result.cv_results_['std_test_score']\n", 1847 | " params = grid_result.cv_results_['params']\n", 1848 | " for mean, stdev, param in zip(means, stds, params):\n", 1849 | " print(\"%f (%f) with: %r\" % (mean, stdev, param))\n" 1850 | ] 1851 | }, 1852 | { 1853 | "cell_type": "markdown", 1854 | "metadata": {}, 1855 | "source": [ 1856 | "### some helpful methods for charting...\n", 1857 | "\n", 1858 | "That are used above, but defined here just so that they're out of the way!" 
1859 | ] 1860 | }, 1861 | { 1862 | "cell_type": "code", 1863 | "execution_count": 38, 1864 | "metadata": {}, 1865 | "outputs": [], 1866 | "source": [ 1867 | "from sklearn.utils.fixes import signature\n", 1868 | "import matplotlib.pyplot as plt\n", 1869 | "\n", 1870 | "def pr_chart(labels, predicted_probas):\n", 1871 | " precision, recall, _ = precision_recall_curve(labels, predicted_probas)\n", 1872 | "\n", 1873 | " # In matplotlib < 1.5, plt.fill_between does not have a 'step' argument\n", 1874 | " step_kwargs = ({'step': 'post'}\n", 1875 | " if 'step' in signature(plt.fill_between).parameters\n", 1876 | " else {})\n", 1877 | " plt.step(recall, precision, color='b', alpha=0.2,\n", 1878 | " where='post')\n", 1879 | " plt.fill_between(recall, precision, alpha=0.2, color='b', **step_kwargs)\n", 1880 | " plt.xlabel('Recall')\n", 1881 | " plt.ylabel('Precision')\n", 1882 | " plt.ylim([0.0, 1.05])\n", 1883 | " plt.xlim([0.0, 1.0])\n", 1884 | " plt.title('2-class Precision-Recall curve')" 1885 | ] 1886 | }, 1887 | { 1888 | "cell_type": "code", 1889 | "execution_count": 74, 1890 | "metadata": {}, 1891 | "outputs": [], 1892 | "source": [ 1893 | "def training_and_validation_accuracy(history):\n", 1894 | " acc = history.history['acc']\n", 1895 | " val_acc = history.history['val_acc']\n", 1896 | " loss = history.history['loss']\n", 1897 | " val_loss = history.history['val_loss']\n", 1898 | "\n", 1899 | " epochs = range(1, len(acc) + 1)\n", 1900 | " \n", 1901 | " plt.plot(epochs, acc, 'bo', label='Training acc')\n", 1902 | " plt.plot(epochs, val_acc, 'b', label='Validation acc')\n", 1903 | " plt.title('Training and validation accuracy')\n", 1904 | " plt.xlabel('Epochs')\n", 1905 | " plt.ylabel('Accuracy')\n", 1906 | " plt.legend()\n", 1907 | "\n", 1908 | " plt.show()\n" 1909 | ] 1910 | }, 1911 | { 1912 | "cell_type": "code", 1913 | "execution_count": 75, 1914 | "metadata": {}, 1915 | "outputs": [], 1916 | "source": [ 1917 | "import matplotlib.pyplot as plt\n", 1918 | "\n", 
1919 | "def training_and_validation_loss(history): \n", 1920 | " acc = history.history['acc']\n", 1921 | " val_acc = history.history['val_acc']\n", 1922 | " loss = history.history['loss']\n", 1923 | " val_loss = history.history['val_loss']\n", 1924 | "\n", 1925 | " epochs = range(1, len(acc) + 1)\n", 1926 | "\n", 1927 | " # \"bo\" is for \"blue dot\"\n", 1928 | " plt.plot(epochs, loss, 'bo', label='Training loss')\n", 1929 | " # b is for \"solid blue line\"\n", 1930 | " plt.plot(epochs, val_loss, 'b', label='Validation loss')\n", 1931 | " plt.title('Training and validation loss')\n", 1932 | " plt.xlabel('Epochs')\n", 1933 | " plt.ylabel('Loss')\n", 1934 | " plt.legend()\n", 1935 | "\n", 1936 | " plt.show()" 1937 | ] 1938 | }, 1939 | { 1940 | "cell_type": "code", 1941 | "execution_count": null, 1942 | "metadata": {}, 1943 | "outputs": [], 1944 | "source": [] 1945 | } 1946 | ], 1947 | "metadata": { 1948 | "kernelspec": { 1949 | "display_name": "Python 2", 1950 | "language": "python", 1951 | "name": "python2" 1952 | }, 1953 | "language_info": { 1954 | "codemirror_mode": { 1955 | "name": "ipython", 1956 | "version": 3 1957 | }, 1958 | "file_extension": ".py", 1959 | "mimetype": "text/x-python", 1960 | "name": "python", 1961 | "nbconvert_exporter": "python", 1962 | "pygments_lexer": "ipython3", 1963 | "version": "3.7.2" 1964 | } 1965 | }, 1966 | "nbformat": 4, 1967 | "nbformat_minor": 2 1968 | } 1969 | --------------------------------------------------------------------------------