├── .gitignore ├── LICENSE ├── README.md ├── keras-quora-question-pairs.py ├── quora-q-pairs-model.png ├── quora-question-pairs-data-prep.ipynb └── quora-question-pairs-training.ipynb /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | *.npy 3 | *.json 4 | *.h5 5 | .ipynb_* 6 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Bradley P. Allen. 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining 6 | a copy of this software and associated documentation files (the 7 | "Software"), to deal in the Software without restriction, including 8 | without limitation the rights to use, copy, modify, merge, publish, 9 | distribute, sublicense, and/or sell copies of the Software, and to 10 | permit persons to whom the Software is furnished to do so, subject to 11 | the following conditions: 12 | 13 | The above copyright notice and this permission notice shall be 14 | included in all copies or substantial portions of the Software. 15 | 16 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 17 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 18 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 19 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE 20 | LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 21 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION 22 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
-------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # keras-quora-question-pairs 2 | 3 | A Keras model that addresses the Quora Question Pairs 4 | [[1]](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs) 5 | dyadic prediction task. 6 | 7 | ## Model implementation 8 | 9 | The Keras model architecture is shown below: 10 | 11 | ![Keras model architecture for Quora Question Pairs dyadic prediction](quora-q-pairs-model.png) 12 | 13 | The model architecture is based on the Stanford Natural Language 14 | Inference [[2]](http://nlp.stanford.edu/pubs/snli_paper.pdf) benchmark 15 | model developed by Stephen Merity 16 | [[3]](https://github.com/Smerity/keras_snli), specifically the version 17 | using a simple summation of GloVe word embeddings 18 | [[4]](http://nlp.stanford.edu/pubs/glove.pdf) to represent each 19 | question in the pair. One difference from the Merity SNLI 20 | benchmark is that our final layer is Dense with sigmoid activation, as 21 | opposed to softmax. Another key difference is that we use the 22 | max operator, as opposed to sum, to combine word embeddings into a 23 | question representation. We use binary cross-entropy as the loss 24 | function and Adam for optimization. 25 | 26 | ## Evaluation 27 | 28 | We partition the Quora question pairs into a 90/10 train/test 29 | split. We run training for 25 epochs with a further 90/10 30 | train/validation split, saving the weights from the model checkpoint 31 | with the maximum validation accuracy. Training takes approximately 120 32 | seconds/epoch, using TensorFlow as the backend for Keras on an Amazon Web 33 | Services EC2 p2.xlarge GPU compute instance. We finally evaluate the 34 | best checkpointed model to obtain a test set accuracy of 35 | 0.8291. 
The table below places this in the context of other work 36 | on the dataset reported to date: 37 | 38 | | Model | Source of Word Embeddings | Accuracy | 39 | | --- | --- | --- | 40 | | "BiMPM model" [[5]](https://arxiv.org/pdf/1702.03814) | GloVe Common Crawl (840B tokens, 300D) | **0.88** | 41 | | "LSTM with concatenation" [[6]](https://engineering.quora.com/Semantic-Question-Matching-with-Deep-Learning) | "Quora's text corpus" | 0.87 | 42 | | "LSTM with distance and angle" [[6]](https://engineering.quora.com/Semantic-Question-Matching-with-Deep-Learning) | "Quora's text corpus" | 0.87 | 43 | | "Decomposable attention" [[6]](https://engineering.quora.com/Semantic-Question-Matching-with-Deep-Learning) | "Quora's text corpus" | 0.86 | 44 | | "L.D.C." [[5]](https://arxiv.org/pdf/1702.03814) | GloVe Common Crawl (840B tokens, 300D) | 0.86 | 45 | | Max bag-of-embeddings (*this work*) | GloVe Common Crawl (840B tokens, 300D) | 0.83 | 46 | | "Multi-Perspective-LSTM" [[5]](https://arxiv.org/pdf/1702.03814) | GloVe Common Crawl (840B tokens, 300D) | 0.83 | 47 | | "Siamese-LSTM" [[5]](https://arxiv.org/pdf/1702.03814) | GloVe Common Crawl (840B tokens, 300D) | 0.83 | 48 | | "Neural bag-of-words" (max) [[7]](https://explosion.ai/blog/quora-deep-text-pair-classification) | GloVe Common Crawl pruned to 1M vocab. (spaCy default) | 0.83 | 49 | | "Neural bag-of-words" (max & mean) [[7]](https://explosion.ai/blog/quora-deep-text-pair-classification) | GloVe Common Crawl pruned to 1M vocab. (spaCy default) | 0.83 | 50 | | "Max-out Window Encoding" with depth 2 [[7]](https://explosion.ai/blog/quora-deep-text-pair-classification) | GloVe Common Crawl pruned to 1M vocab. (spaCy default) | 0.83 | 51 | | "Neural bag-of-words" (mean) [[7]](https://explosion.ai/blog/quora-deep-text-pair-classification) | GloVe Common Crawl pruned to 1M vocab. 
(spaCy default) | 0.81 | 52 | | "Multi-Perspective-CNN" [[5]](https://arxiv.org/pdf/1702.03814) | GloVe Common Crawl (840B tokens, 300D) | 0.81 | 53 | | "Siamese-CNN" [[5]](https://arxiv.org/pdf/1702.03814) | GloVe Common Crawl (840B tokens, 300D) | 0.80 | 54 | | "Spacy + TF-IDF + Siamese" [[8]](http://www.erogol.com/duplicate-question-detection-deep-learning/) | GloVe (6B tokens, 300D) | 0.79 | 55 | 56 | 57 | ## Discussion 58 | 59 | An initial pass at hyperparameter tuning, evaluating possible 60 | settings one hyperparameter at a time, led to the following observations: 61 | 62 | * Computing the question representation by applying the max operator to the word embeddings slightly outperformed using mean and sum, which is consistent with what is reported in [[7]](https://explosion.ai/blog/quora-deep-text-pair-classification). 63 | * Computing the question representation using max also slightly outperformed the use of bidirectional LSTM and GRU recurrent layers, again as discussed in [[7]](https://explosion.ai/blog/quora-deep-text-pair-classification). 64 | * Batch normalization improved accuracy, as observed by [[8]](http://www.erogol.com/duplicate-question-detection-deep-learning/). 65 | * Any amount of dropout decreased accuracy, as also observed by [[8]](http://www.erogol.com/duplicate-question-detection-deep-learning/). 66 | * Four hidden layers in the fully-connected component gave the best accuracy, with between zero and six hidden layers evaluated. 67 | * Using 200 dimensions for the layers in the fully-connected component showed the best accuracy among the tested dimensions 50, 100, 200, and 300. 68 | 69 | ## Future work 70 | 71 | A more principled (and computationally intensive) campaign of 72 | randomized search over the space of hyperparameter configurations is 73 | planned. 
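As a rough sketch of what such a campaign could look like (not part of the current codebase; the hyperparameter names and value ranges below are illustrative assumptions, not settings used here), scikit-learn's `ParameterSampler` can draw reproducible random configurations, each of which would then be used to build, train, and score one model variant:

```python
from sklearn.model_selection import ParameterSampler

# Illustrative search space only; the names and ranges are assumptions.
param_space = {
    'nb_hidden_layers': [2, 3, 4, 5, 6],
    'hidden_dim': [50, 100, 200, 300],
    'pooling': ['max', 'mean', 'sum'],
    'batch_norm': [True, False],
}

# Draw 20 random configurations reproducibly; each config is a dict of
# hyperparameter settings to train and evaluate one candidate model.
configs = list(ParameterSampler(param_space, n_iter=20, random_state=13371447))
for config in configs:
    print(config)
```

The fixed `random_state` makes the sampled configurations repeatable across runs, mirroring the `RNG_SEED` used elsewhere in this repository.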
74 | 75 | ## Requirements 76 | 77 | * Python 3.5.2 78 | * jupyter 4.2.1 79 | 80 | ## Package dependencies 81 | 82 | * numpy 1.11.3 83 | * pandas 0.19.2 84 | * matplotlib 1.5.3 85 | * Keras 1.2.1 86 | * scikit-learn 0.18.1 87 | * h5py 2.6.0 88 | * hdf5 1.8.17 89 | 90 | ## Usage 91 | 92 | This repository contains two different ways to create and run the model. 93 | 94 | ### From the command line 95 | 96 | $ python3 keras-quora-question-pairs.py 97 | 98 | On first execution, this will download the required Quora and GloVe datasets and generate files that cache the training data and related word count and embedding data for subsequent runs. 99 | 100 | ### As Jupyter notebooks 101 | 102 | Run the notebook server using the standard Jupyter command: 103 | 104 | $ jupyter notebook 105 | 106 | First run 107 | 108 | quora-question-pairs-data-prep.ipynb 109 | 110 | As with the script above, this will generate the files needed to train the Keras model. Then run 111 | 112 | quora-question-pairs-training.ipynb 113 | 114 | to train and evaluate the model. 115 | 116 | ## License 117 | 118 | MIT. See the LICENSE file for the copyright notice. 119 | 120 | ## References 121 | 122 | [[1]](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs) Shankar Iyer, Nikhil Dandekar, and Kornél Csernai. “First Quora Dataset Release: Question Pairs,” 24 January 2017. Retrieved at https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs on 31 January 2017. 123 | 124 | [[2]](http://nlp.stanford.edu/pubs/snli_paper.pdf) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. "A large annotated corpus for learning natural language inference," in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), September 2015. 125 | 126 | [[3]](https://github.com/Smerity/keras_snli) Stephen Merity. "Keras SNLI baseline example,” 4 September 2016. Retrieved at https://github.com/Smerity/keras_snli on 31 January 2017. 
127 | 128 | [[4]](http://nlp.stanford.edu/pubs/glove.pdf) Jeffrey Pennington, Richard Socher, and Christopher D. Manning. "GloVe: Global Vectors for Word Representation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), October 2014. 129 | 130 | [[5]](https://arxiv.org/pdf/1702.03814) Zhiguo Wang, Wael Hamza, and Radu Florian. "Bilateral Multi-Perspective Matching for Natural Language Sentences," 13 February 2017. Retrieved at https://arxiv.org/pdf/1702.03814.pdf on 14 February 2017. 131 | 132 | [[6]](https://engineering.quora.com/Semantic-Question-Matching-with-Deep-Learning) Lili Jiang, Shuo Chang, and Nikhil Dandekar. "Semantic Question Matching with Deep Learning," 13 February 2017. Retrieved at https://engineering.quora.com/Semantic-Question-Matching-with-Deep-Learning on 13 February 2017. 133 | 134 | [[7]](https://explosion.ai/blog/quora-deep-text-pair-classification) Matthew Honnibal. "Deep text-pair classification with Quora's 2017 question dataset," 13 February 2017. Retrieved at https://explosion.ai/blog/quora-deep-text-pair-classification on 13 February 2017. 135 | 136 | [[8]](http://www.erogol.com/duplicate-question-detection-deep-learning/) Eren Golge. "Duplicate Question Detection with Deep Learning on Quora Dataset," 12 February 2017. Retrieved at http://www.erogol.com/duplicate-question-detection-deep-learning/ on 13 February 2017. 
137 | -------------------------------------------------------------------------------- /keras-quora-question-pairs.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | import numpy as np 3 | import csv, datetime, time, json 4 | from zipfile import ZipFile 5 | from os.path import expanduser, exists 6 | from keras.preprocessing.text import Tokenizer 7 | from keras.preprocessing.sequence import pad_sequences 8 | from keras.models import Sequential 9 | from keras.layers import Embedding, Dense, Dropout, Reshape, Merge, BatchNormalization, TimeDistributed, Lambda 10 | from keras.regularizers import l2 11 | from keras.callbacks import Callback, ModelCheckpoint 12 | from keras.utils.data_utils import get_file 13 | from keras import backend as K 14 | from sklearn.model_selection import train_test_split 15 | 16 | KERAS_DATASETS_DIR = expanduser('~/.keras/datasets/') 17 | QUESTION_PAIRS_FILE_URL = 'http://qim.ec.quoracdn.net/quora_duplicate_questions.tsv' 18 | QUESTION_PAIRS_FILE = 'quora_duplicate_questions.tsv' 19 | GLOVE_ZIP_FILE_URL = 'http://nlp.stanford.edu/data/glove.840B.300d.zip' 20 | GLOVE_ZIP_FILE = 'glove.840B.300d.zip' 21 | GLOVE_FILE = 'glove.840B.300d.txt' 22 | Q1_TRAINING_DATA_FILE = 'q1_train.npy' 23 | Q2_TRAINING_DATA_FILE = 'q2_train.npy' 24 | LABEL_TRAINING_DATA_FILE = 'label_train.npy' 25 | WORD_EMBEDDING_MATRIX_FILE = 'word_embedding_matrix.npy' 26 | NB_WORDS_DATA_FILE = 'nb_words.json' 27 | MAX_NB_WORDS = 200000 28 | MAX_SEQUENCE_LENGTH = 25 29 | EMBEDDING_DIM = 300 30 | MODEL_WEIGHTS_FILE = 'question_pairs_weights.h5' 31 | VALIDATION_SPLIT = 0.1 32 | TEST_SPLIT = 0.1 33 | RNG_SEED = 13371447 34 | NB_EPOCHS = 25 35 | 36 | if exists(Q1_TRAINING_DATA_FILE) and exists(Q2_TRAINING_DATA_FILE) and exists(LABEL_TRAINING_DATA_FILE) and exists(NB_WORDS_DATA_FILE) and exists(WORD_EMBEDDING_MATRIX_FILE): 37 | q1_data = np.load(open(Q1_TRAINING_DATA_FILE, 'rb')) 38 | q2_data = 
np.load(open(Q2_TRAINING_DATA_FILE, 'rb')) 39 | labels = np.load(open(LABEL_TRAINING_DATA_FILE, 'rb')) 40 | word_embedding_matrix = np.load(open(WORD_EMBEDDING_MATRIX_FILE, 'rb')) 41 | with open(NB_WORDS_DATA_FILE, 'r') as f: 42 | nb_words = json.load(f)['nb_words'] 43 | else: 44 | if not exists(KERAS_DATASETS_DIR + QUESTION_PAIRS_FILE): 45 | get_file(QUESTION_PAIRS_FILE, QUESTION_PAIRS_FILE_URL) 46 | 47 | print("Processing", QUESTION_PAIRS_FILE) 48 | 49 | question1 = [] 50 | question2 = [] 51 | is_duplicate = [] 52 | with open(KERAS_DATASETS_DIR + QUESTION_PAIRS_FILE, encoding='utf-8') as csvfile: 53 | reader = csv.DictReader(csvfile, delimiter='\t') 54 | for row in reader: 55 | question1.append(row['text1']) 56 | question2.append(row['text2']) 57 | is_duplicate.append(row['duplicate']) 58 | 59 | print('Question pairs: %d' % len(question1)) 60 | 61 | questions = question1 + question2 62 | tokenizer = Tokenizer(nb_words=MAX_NB_WORDS) 63 | tokenizer.fit_on_texts(questions) 64 | question1_word_sequences = tokenizer.texts_to_sequences(question1) 65 | question2_word_sequences = tokenizer.texts_to_sequences(question2) 66 | word_index = tokenizer.word_index 67 | 68 | print("Words in index: %d" % len(word_index)) 69 | 70 | if not exists(KERAS_DATASETS_DIR + GLOVE_ZIP_FILE): 71 | zipfile = ZipFile(get_file(GLOVE_ZIP_FILE, GLOVE_ZIP_FILE_URL)) 72 | zipfile.extract(GLOVE_FILE, path=KERAS_DATASETS_DIR) 73 | 74 | print("Processing", GLOVE_FILE) 75 | 76 | embeddings_index = {} 77 | with open(KERAS_DATASETS_DIR + GLOVE_FILE, encoding='utf-8') as f: 78 | for line in f: 79 | values = line.split(' ') 80 | word = values[0] 81 | embedding = np.asarray(values[1:], dtype='float32') 82 | embeddings_index[word] = embedding 83 | 84 | print('Word embeddings: %d' % len(embeddings_index)) 85 | 86 | nb_words = min(MAX_NB_WORDS, len(word_index)) 87 | word_embedding_matrix = np.zeros((nb_words + 1, EMBEDDING_DIM)) 88 | for word, i in word_index.items(): 89 | if i > MAX_NB_WORDS: 90 | continue 
91 | embedding_vector = embeddings_index.get(word) 92 | if embedding_vector is not None: 93 | word_embedding_matrix[i] = embedding_vector 94 | 95 | print('Null word embeddings: %d' % np.sum(np.sum(word_embedding_matrix, axis=1) == 0)) 96 | 97 | q1_data = pad_sequences(question1_word_sequences, maxlen=MAX_SEQUENCE_LENGTH) 98 | q2_data = pad_sequences(question2_word_sequences, maxlen=MAX_SEQUENCE_LENGTH) 99 | labels = np.array(is_duplicate, dtype=int) 100 | print('Shape of question1 data tensor:', q1_data.shape) 101 | print('Shape of question2 data tensor:', q2_data.shape) 102 | print('Shape of label tensor:', labels.shape) 103 | 104 | np.save(open(Q1_TRAINING_DATA_FILE, 'wb'), q1_data) 105 | np.save(open(Q2_TRAINING_DATA_FILE, 'wb'), q2_data) 106 | np.save(open(LABEL_TRAINING_DATA_FILE, 'wb'), labels) 107 | np.save(open(WORD_EMBEDDING_MATRIX_FILE, 'wb'), word_embedding_matrix) 108 | with open(NB_WORDS_DATA_FILE, 'w') as f: 109 | json.dump({'nb_words': nb_words}, f) 110 | 111 | X = np.stack((q1_data, q2_data), axis=1) 112 | y = labels 113 | X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=TEST_SPLIT, random_state=RNG_SEED) 114 | Q1_train = X_train[:,0] 115 | Q2_train = X_train[:,1] 116 | Q1_test = X_test[:,0] 117 | Q2_test = X_test[:,1] 118 | 119 | Q1 = Sequential() 120 | Q1.add(Embedding(nb_words + 1, EMBEDDING_DIM, weights=[word_embedding_matrix], input_length=MAX_SEQUENCE_LENGTH, trainable=False)) 121 | Q1.add(TimeDistributed(Dense(EMBEDDING_DIM, activation='relu'))) 122 | Q1.add(Lambda(lambda x: K.max(x, axis=1), output_shape=(EMBEDDING_DIM, ))) 123 | Q2 = Sequential() 124 | Q2.add(Embedding(nb_words + 1, EMBEDDING_DIM, weights=[word_embedding_matrix], input_length=MAX_SEQUENCE_LENGTH, trainable=False)) 125 | Q2.add(TimeDistributed(Dense(EMBEDDING_DIM, activation='relu'))) 126 | Q2.add(Lambda(lambda x: K.max(x, axis=1), output_shape=(EMBEDDING_DIM, ))) 127 | model = Sequential() 128 | model.add(Merge([Q1, Q2], mode='concat')) 129 | 
model.add(BatchNormalization()) 130 | model.add(Dense(200, activation='relu')) 131 | model.add(BatchNormalization()) 132 | model.add(Dense(200, activation='relu')) 133 | model.add(BatchNormalization()) 134 | model.add(Dense(200, activation='relu')) 135 | model.add(BatchNormalization()) 136 | model.add(Dense(200, activation='relu')) 137 | model.add(BatchNormalization()) 138 | model.add(Dense(1, activation='sigmoid')) 139 | 140 | model.compile(loss='binary_crossentropy', 141 | optimizer='adam', 142 | metrics=['accuracy', 'precision', 'recall', 'fbeta_score']) 143 | 144 | callbacks = [ModelCheckpoint(MODEL_WEIGHTS_FILE, monitor='val_acc', save_best_only=True)] 145 | 146 | print("Starting training at", datetime.datetime.now()) 147 | 148 | t0 = time.time() 149 | history = model.fit([Q1_train, Q2_train], 150 | y_train, 151 | nb_epoch=NB_EPOCHS, 152 | validation_split=VALIDATION_SPLIT, 153 | verbose=1, 154 | callbacks=callbacks) 155 | t1 = time.time() 156 | 157 | print("Training ended at", datetime.datetime.now()) 158 | 159 | print("Minutes elapsed: %f" % ((t1 - t0) / 60.)) 160 | 161 | model.load_weights(MODEL_WEIGHTS_FILE) 162 | 163 | loss, accuracy, precision, recall, fbeta_score = model.evaluate([Q1_test, Q2_test], y_test) 164 | print('') 165 | print('loss = {0:.4f}'.format(loss)) 166 | print('accuracy = {0:.4f}'.format(accuracy)) 167 | print('precision = {0:.4f}'.format(precision)) 168 | print('recall = {0:.4f}'.format(recall)) 169 | print('F = {0:.4f}'.format(fbeta_score)) 170 | -------------------------------------------------------------------------------- /quora-q-pairs-model.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gallupliu/keras-quora-question-pairs/8099a6c7e78d75794c47f30893bfcbc940f4fa20/quora-q-pairs-model.png -------------------------------------------------------------------------------- /quora-question-pairs-data-prep.ipynb: 
-------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Quora question pairs: data preparation" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "## Import packages" 15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": 1, 20 | "metadata": { 21 | "collapsed": false 22 | }, 23 | "outputs": [ 24 | { 25 | "name": "stderr", 26 | "output_type": "stream", 27 | "text": [ 28 | "Using TensorFlow backend.\n" 29 | ] 30 | } 31 | ], 32 | "source": [ 33 | "from __future__ import print_function\n", 34 | "\n", 35 | "import numpy as np\n", 36 | "import csv, json\n", 37 | "from zipfile import ZipFile\n", 38 | "from os.path import expanduser, exists\n", 39 | "\n", 40 | "from keras.preprocessing.text import Tokenizer\n", 41 | "from keras.preprocessing.sequence import pad_sequences\n", 42 | "from keras.utils.data_utils import get_file" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "## Initialize global variables" 50 | ] 51 | }, 52 | { 53 | "cell_type": "code", 54 | "execution_count": 2, 55 | "metadata": { 56 | "collapsed": true 57 | }, 58 | "outputs": [], 59 | "source": [ 60 | "KERAS_DATASETS_DIR = expanduser('~/.keras/datasets/')\n", 61 | "QUESTION_PAIRS_FILE_URL = 'http://qim.ec.quoracdn.net/quora_duplicate_questions.tsv'\n", 62 | "QUESTION_PAIRS_FILE = 'quora_duplicate_questions.tsv'\n", 63 | "GLOVE_ZIP_FILE_URL = 'http://nlp.stanford.edu/data/glove.840B.300d.zip'\n", 64 | "GLOVE_ZIP_FILE = 'glove.840B.300d.zip'\n", 65 | "GLOVE_FILE = 'glove.840B.300d.txt'\n", 66 | "Q1_TRAINING_DATA_FILE = 'q1_train.npy'\n", 67 | "Q2_TRAINING_DATA_FILE = 'q2_train.npy'\n", 68 | "LABEL_TRAINING_DATA_FILE = 'label_train.npy'\n", 69 | "WORD_EMBEDDING_MATRIX_FILE = 'word_embedding_matrix.npy'\n", 70 | "NB_WORDS_DATA_FILE = 'nb_words.json'\n", 71 | "MAX_NB_WORDS = 
200000\n", 72 | "MAX_SEQUENCE_LENGTH = 25\n", 73 | "EMBEDDING_DIM = 300" 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "metadata": {}, 79 | "source": [ 80 | "## Download and extract questions pairs data" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "execution_count": 3, 86 | "metadata": { 87 | "collapsed": false 88 | }, 89 | "outputs": [ 90 | { 91 | "name": "stdout", 92 | "output_type": "stream", 93 | "text": [ 94 | "Downloading data from http://qim.ec.quoracdn.net/quora_duplicate_questions.tsv\n", 95 | "Processing quora_duplicate_questions.tsv\n", 96 | "Question pairs: 404351\n" 97 | ] 98 | } 99 | ], 100 | "source": [ 101 | "if not exists(KERAS_DATASETS_DIR + QUESTION_PAIRS_FILE):\n", 102 | " get_file(QUESTION_PAIRS_FILE, QUESTION_PAIRS_FILE_URL)\n", 103 | "\n", 104 | "print(\"Processing\", QUESTION_PAIRS_FILE)\n", 105 | "\n", 106 | "question1 = []\n", 107 | "question2 = []\n", 108 | "is_duplicate = []\n", 109 | "with open(KERAS_DATASETS_DIR + QUESTION_PAIRS_FILE, encoding='utf-8') as csvfile:\n", 110 | " reader = csv.DictReader(csvfile, delimiter='\\t')\n", 111 | " for row in reader:\n", 112 | " question1.append(row['text1'])\n", 113 | " question2.append(row['text2'])\n", 114 | " is_duplicate.append(row['duplicate'])\n", 115 | "\n", 116 | "print('Question pairs: %d' % len(question1))" 117 | ] 118 | }, 119 | { 120 | "cell_type": "markdown", 121 | "metadata": {}, 122 | "source": [ 123 | "## Build tokenized word index" 124 | ] 125 | }, 126 | { 127 | "cell_type": "code", 128 | "execution_count": 4, 129 | "metadata": { 130 | "collapsed": false 131 | }, 132 | "outputs": [ 133 | { 134 | "name": "stdout", 135 | "output_type": "stream", 136 | "text": [ 137 | "Words in index: 95603\n" 138 | ] 139 | } 140 | ], 141 | "source": [ 142 | "questions = question1 + question2\n", 143 | "tokenizer = Tokenizer(nb_words=MAX_NB_WORDS)\n", 144 | "tokenizer.fit_on_texts(questions)\n", 145 | "question1_word_sequences = tokenizer.texts_to_sequences(question1)\n", 146 | 
"question2_word_sequences = tokenizer.texts_to_sequences(question2)\n", 147 | "word_index = tokenizer.word_index\n", 148 | "\n", 149 | "print(\"Words in index: %d\" % len(word_index))" 150 | ] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "metadata": {}, 155 | "source": [ 156 | "## Download and process GloVe embeddings" 157 | ] 158 | }, 159 | { 160 | "cell_type": "code", 161 | "execution_count": 5, 162 | "metadata": { 163 | "collapsed": false 164 | }, 165 | "outputs": [ 166 | { 167 | "name": "stdout", 168 | "output_type": "stream", 169 | "text": [ 170 | "Downloading data from http://nlp.stanford.edu/data/glove.840B.300d.zip\n", 171 | "Processing glove.840B.300d.txt\n", 172 | "Word embeddings: 2196016\n" 173 | ] 174 | } 175 | ], 176 | "source": [ 177 | "if not exists(KERAS_DATASETS_DIR + GLOVE_ZIP_FILE):\n", 178 | " zipfile = ZipFile(get_file(GLOVE_ZIP_FILE, GLOVE_ZIP_FILE_URL))\n", 179 | " zipfile.extract(GLOVE_FILE, path=KERAS_DATASETS_DIR)\n", 180 | " \n", 181 | "print(\"Processing\", GLOVE_FILE)\n", 182 | "\n", 183 | "embeddings_index = {}\n", 184 | "with open(KERAS_DATASETS_DIR + GLOVE_FILE, encoding='utf-8') as f:\n", 185 | " for line in f:\n", 186 | " values = line.split(' ')\n", 187 | " word = values[0]\n", 188 | " embedding = np.asarray(values[1:], dtype='float32')\n", 189 | " embeddings_index[word] = embedding\n", 190 | "\n", 191 | "print('Word embeddings: %d' % len(embeddings_index))" 192 | ] 193 | }, 194 | { 195 | "cell_type": "markdown", 196 | "metadata": {}, 197 | "source": [ 198 | "## Prepare word embedding matrix" 199 | ] 200 | }, 201 | { 202 | "cell_type": "code", 203 | "execution_count": 6, 204 | "metadata": { 205 | "collapsed": false 206 | }, 207 | "outputs": [ 208 | { 209 | "name": "stdout", 210 | "output_type": "stream", 211 | "text": [ 212 | "Null word embeddings: 29273\n" 213 | ] 214 | } 215 | ], 216 | "source": [ 217 | "nb_words = min(MAX_NB_WORDS, len(word_index))\n", 218 | "word_embedding_matrix = np.zeros((nb_words + 1, 
EMBEDDING_DIM))\n", 219 | "for word, i in word_index.items():\n", 220 | " if i > MAX_NB_WORDS:\n", 221 | " continue\n", 222 | " embedding_vector = embeddings_index.get(word)\n", 223 | " if embedding_vector is not None:\n", 224 | " word_embedding_matrix[i] = embedding_vector\n", 225 | "\n", 226 | "print('Null word embeddings: %d' % np.sum(np.sum(word_embedding_matrix, axis=1) == 0))" 227 | ] 228 | }, 229 | { 230 | "cell_type": "markdown", 231 | "metadata": {}, 232 | "source": [ 233 | "## Prepare training data tensors" 234 | ] 235 | }, 236 | { 237 | "cell_type": "code", 238 | "execution_count": 7, 239 | "metadata": { 240 | "collapsed": false 241 | }, 242 | "outputs": [ 243 | { 244 | "name": "stdout", 245 | "output_type": "stream", 246 | "text": [ 247 | "Shape of question1 data tensor: (404351, 25)\n", 248 | "Shape of question2 data tensor: (404351, 25)\n", 249 | "Shape of label tensor: (404351,)\n" 250 | ] 251 | } 252 | ], 253 | "source": [ 254 | "q1_data = pad_sequences(question1_word_sequences, maxlen=MAX_SEQUENCE_LENGTH)\n", 255 | "q2_data = pad_sequences(question2_word_sequences, maxlen=MAX_SEQUENCE_LENGTH)\n", 256 | "labels = np.array(is_duplicate, dtype=int)\n", 257 | "print('Shape of question1 data tensor:', q1_data.shape)\n", 258 | "print('Shape of question2 data tensor:', q2_data.shape)\n", 259 | "print('Shape of label tensor:', labels.shape)" 260 | ] 261 | }, 262 | { 263 | "cell_type": "markdown", 264 | "metadata": {}, 265 | "source": [ 266 | "## Persist training and configuration data to files" 267 | ] 268 | }, 269 | { 270 | "cell_type": "code", 271 | "execution_count": 8, 272 | "metadata": { 273 | "collapsed": true 274 | }, 275 | "outputs": [], 276 | "source": [ 277 | "np.save(open(Q1_TRAINING_DATA_FILE, 'wb'), q1_data)\n", 278 | "np.save(open(Q2_TRAINING_DATA_FILE, 'wb'), q2_data)\n", 279 | "np.save(open(LABEL_TRAINING_DATA_FILE, 'wb'), labels)\n", 280 | "np.save(open(WORD_EMBEDDING_MATRIX_FILE, 'wb'), word_embedding_matrix)\n", 281 | "with 
open(NB_WORDS_DATA_FILE, 'w') as f:\n", 282 | " json.dump({'nb_words': nb_words}, f)" 283 | ] 284 | } 285 | ], 286 | "metadata": { 287 | "hide_input": false, 288 | "kernelspec": { 289 | "display_name": "Python 3", 290 | "language": "python", 291 | "name": "python3" 292 | }, 293 | "language_info": { 294 | "codemirror_mode": { 295 | "name": "ipython", 296 | "version": 3 297 | }, 298 | "file_extension": ".py", 299 | "mimetype": "text/x-python", 300 | "name": "python", 301 | "nbconvert_exporter": "python", 302 | "pygments_lexer": "ipython3", 303 | "version": "3.4.3" 304 | } 305 | }, 306 | "nbformat": 4, 307 | "nbformat_minor": 2 308 | } 309 | -------------------------------------------------------------------------------- /quora-question-pairs-training.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Quora question pairs: training" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "## Import packages" 15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": 1, 20 | "metadata": { 21 | "collapsed": false 22 | }, 23 | "outputs": [ 24 | { 25 | "name": "stderr", 26 | "output_type": "stream", 27 | "text": [ 28 | "Using TensorFlow backend.\n" 29 | ] 30 | } 31 | ], 32 | "source": [ 33 | "%matplotlib inline\n", 34 | "from __future__ import print_function\n", 35 | "import numpy as np\n", 36 | "import pandas as pd\n", 37 | "import datetime, time, json\n", 38 | "from keras.models import Sequential\n", 39 | "from keras.layers import Embedding, Dense, Dropout, Reshape, Merge, BatchNormalization, TimeDistributed, Lambda\n", 40 | "from keras.regularizers import l2\n", 41 | "from keras.callbacks import Callback, ModelCheckpoint\n", 42 | "from keras import backend as K\n", 43 | "from sklearn.model_selection import train_test_split" 44 | ] 45 | }, 46 | { 47 | "cell_type": "markdown", 48 | 
"metadata": {}, 49 | "source": [ 50 | "## Initialize global variables" 51 | ] 52 | }, 53 | { 54 | "cell_type": "code", 55 | "execution_count": 2, 56 | "metadata": { 57 | "collapsed": true 58 | }, 59 | "outputs": [], 60 | "source": [ 61 | "Q1_TRAINING_DATA_FILE = 'q1_train.npy'\n", 62 | "Q2_TRAINING_DATA_FILE = 'q2_train.npy'\n", 63 | "LABEL_TRAINING_DATA_FILE = 'label_train.npy'\n", 64 | "WORD_EMBEDDING_MATRIX_FILE = 'word_embedding_matrix.npy'\n", 65 | "NB_WORDS_DATA_FILE = 'nb_words.json'\n", 66 | "MODEL_WEIGHTS_FILE = 'question_pairs_weights.h5'\n", 67 | "MAX_SEQUENCE_LENGTH = 25\n", 68 | "EMBEDDING_DIM = 300\n", 69 | "VALIDATION_SPLIT = 0.1\n", 70 | "TEST_SPLIT = 0.1\n", 71 | "RNG_SEED = 13371447\n", 72 | "NB_EPOCHS = 25" 73 | ] 74 | }, 75 | { 76 | "cell_type": "markdown", 77 | "metadata": {}, 78 | "source": [ 79 | "## Load the dataset, embedding matrix and word count" 80 | ] 81 | }, 82 | { 83 | "cell_type": "code", 84 | "execution_count": 3, 85 | "metadata": { 86 | "collapsed": true 87 | }, 88 | "outputs": [], 89 | "source": [ 90 | "q1_data = np.load(open(Q1_TRAINING_DATA_FILE, 'rb'))\n", 91 | "q2_data = np.load(open(Q2_TRAINING_DATA_FILE, 'rb'))\n", 92 | "labels = np.load(open(LABEL_TRAINING_DATA_FILE, 'rb'))\n", 93 | "word_embedding_matrix = np.load(open(WORD_EMBEDDING_MATRIX_FILE, 'rb'))\n", 94 | "with open(NB_WORDS_DATA_FILE, 'r') as f:\n", 95 | " nb_words = json.load(f)['nb_words']" 96 | ] 97 | }, 98 | { 99 | "cell_type": "markdown", 100 | "metadata": {}, 101 | "source": [ 102 | "## Partition the dataset into train and test sets" 103 | ] 104 | }, 105 | { 106 | "cell_type": "code", 107 | "execution_count": 4, 108 | "metadata": { 109 | "collapsed": false 110 | }, 111 | "outputs": [], 112 | "source": [ 113 | "X = np.stack((q1_data, q2_data), axis=1)\n", 114 | "y = labels\n", 115 | "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=TEST_SPLIT, random_state=RNG_SEED)\n", 116 | "Q1_train = X_train[:,0]\n", 117 | "Q2_train = X_train[:,1]\n", 
118 | "Q1_test = X_test[:,0]\n", 119 | "Q2_test = X_test[:,1]" 120 | ] 121 | }, 122 | { 123 | "cell_type": "markdown", 124 | "metadata": {}, 125 | "source": [ 126 | "## Define the model" 127 | ] 128 | }, 129 | { 130 | "cell_type": "code", 131 | "execution_count": 5, 132 | "metadata": { 133 | "collapsed": true 134 | }, 135 | "outputs": [], 136 | "source": [ 137 | "Q1 = Sequential()\n", 138 | "Q1.add(Embedding(nb_words + 1, \n", 139 | " EMBEDDING_DIM, \n", 140 | " weights=[word_embedding_matrix], \n", 141 | " input_length=MAX_SEQUENCE_LENGTH, \n", 142 | " trainable=False))\n", 143 | "Q1.add(TimeDistributed(Dense(EMBEDDING_DIM, activation='relu')))\n", 144 | "Q1.add(Lambda(lambda x: K.max(x, axis=1), output_shape=(EMBEDDING_DIM, )))\n", 145 | "Q2 = Sequential()\n", 146 | "Q2.add(Embedding(nb_words + 1, \n", 147 | " EMBEDDING_DIM, \n", 148 | " weights=[word_embedding_matrix], \n", 149 | " input_length=MAX_SEQUENCE_LENGTH, \n", 150 | " trainable=False))\n", 151 | "Q2.add(TimeDistributed(Dense(EMBEDDING_DIM, activation='relu')))\n", 152 | "Q2.add(Lambda(lambda x: K.max(x, axis=1), output_shape=(EMBEDDING_DIM, )))\n", 153 | "model = Sequential()\n", 154 | "model.add(Merge([Q1, Q2], mode='concat'))\n", 155 | "model.add(BatchNormalization())\n", 156 | "model.add(Dense(200, activation='relu'))\n", 157 | "model.add(BatchNormalization())\n", 158 | "model.add(Dense(200, activation='relu'))\n", 159 | "model.add(BatchNormalization())\n", 160 | "model.add(Dense(200, activation='relu'))\n", 161 | "model.add(BatchNormalization())\n", 162 | "model.add(Dense(200, activation='relu'))\n", 163 | "model.add(BatchNormalization())\n", 164 | "model.add(Dense(1, activation='sigmoid'))\n", 165 | "model.compile(loss='binary_crossentropy', \n", 166 | " optimizer='adam', \n", 167 | " metrics=['accuracy', 'precision', 'recall', 'fbeta_score'])" 168 | ] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "metadata": {}, 173 | "source": [ 174 | "## Train the model, checkpointing weights with best 
validation accuracy" 175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "execution_count": 6, 180 | "metadata": { 181 | "collapsed": false 182 | }, 183 | "outputs": [ 184 | { 185 | "name": "stdout", 186 | "output_type": "stream", 187 | "text": [ 188 | "Starting training at 2017-02-14 03:53:09.396345\n", 189 | "Train on 327523 samples, validate on 36392 samples\n", 190 | "Epoch 1/25\n", 191 | "142s - loss: 0.5089 - acc: 0.7451 - precision: 0.6798 - recall: 0.6037 - fbeta_score: 0.6271 - val_loss: 0.4529 - val_acc: 0.7789 - val_precision: 0.7166 - val_recall: 0.6673 - val_fbeta_score: 0.6812\n", 192 | "Epoch 2/25\n", 193 | "139s - loss: 0.4459 - acc: 0.7834 - precision: 0.7187 - recall: 0.6967 - fbeta_score: 0.6960 - val_loss: 0.4247 - val_acc: 0.7936 - val_precision: 0.7142 - val_recall: 0.7350 - val_fbeta_score: 0.7159\n", 194 | "Epoch 3/25\n", 195 | "138s - loss: 0.4104 - acc: 0.8049 - precision: 0.7430 - recall: 0.7365 - fbeta_score: 0.7287 - val_loss: 0.4109 - val_acc: 0.8064 - val_precision: 0.7574 - val_recall: 0.7011 - val_fbeta_score: 0.7190\n", 196 | "Epoch 4/25\n", 197 | "137s - loss: 0.3823 - acc: 0.8220 - precision: 0.7625 - recall: 0.7661 - fbeta_score: 0.7540 - val_loss: 0.4070 - val_acc: 0.8064 - val_precision: 0.7208 - val_recall: 0.7813 - val_fbeta_score: 0.7415\n", 198 | "Epoch 5/25\n", 199 | "139s - loss: 0.3604 - acc: 0.8333 - precision: 0.7742 - recall: 0.7866 - fbeta_score: 0.7704 - val_loss: 0.3949 - val_acc: 0.8135 - val_precision: 0.7327 - val_recall: 0.7828 - val_fbeta_score: 0.7488\n", 200 | "Epoch 6/25\n", 201 | "137s - loss: 0.3400 - acc: 0.8452 - precision: 0.7887 - recall: 0.8038 - fbeta_score: 0.7869 - val_loss: 0.3877 - val_acc: 0.8177 - val_precision: 0.7376 - val_recall: 0.7912 - val_fbeta_score: 0.7552\n", 202 | "Epoch 7/25\n", 203 | "137s - loss: 0.3234 - acc: 0.8547 - precision: 0.7995 - recall: 0.8182 - fbeta_score: 0.7998 - val_loss: 0.3946 - val_acc: 0.8182 - val_precision: 0.7412 - val_recall: 0.7850 - 
val_fbeta_score: 0.7545\n", 204 | "Epoch 8/25\n", 205 | "137s - loss: 0.3070 - acc: 0.8632 - precision: 0.8108 - recall: 0.8294 - fbeta_score: 0.8117 - val_loss: 0.3936 - val_acc: 0.8212 - val_precision: 0.7555 - val_recall: 0.7653 - val_fbeta_score: 0.7519\n", 206 | "Epoch 9/25\n", 207 | "137s - loss: 0.2935 - acc: 0.8697 - precision: 0.8187 - recall: 0.8384 - fbeta_score: 0.8203 - val_loss: 0.4004 - val_acc: 0.8244 - val_precision: 0.7638 - val_recall: 0.7637 - val_fbeta_score: 0.7551\n", 208 | "Epoch 10/25\n", 209 | "136s - loss: 0.2821 - acc: 0.8759 - precision: 0.8268 - recall: 0.8465 - fbeta_score: 0.8286 - val_loss: 0.4002 - val_acc: 0.8203 - val_precision: 0.7339 - val_recall: 0.8096 - val_fbeta_score: 0.7621\n", 210 | "Epoch 11/25\n", 211 | "137s - loss: 0.2712 - acc: 0.8813 - precision: 0.8338 - recall: 0.8541 - fbeta_score: 0.8363 - val_loss: 0.3931 - val_acc: 0.8244 - val_precision: 0.7494 - val_recall: 0.7934 - val_fbeta_score: 0.7626\n", 212 | "Epoch 12/25\n", 213 | "138s - loss: 0.2623 - acc: 0.8852 - precision: 0.8397 - recall: 0.8589 - fbeta_score: 0.8415 - val_loss: 0.4068 - val_acc: 0.8247 - val_precision: 0.7649 - val_recall: 0.7626 - val_fbeta_score: 0.7555\n", 214 | "Epoch 13/25\n", 215 | "138s - loss: 0.2538 - acc: 0.8899 - precision: 0.8468 - recall: 0.8635 - fbeta_score: 0.8478 - val_loss: 0.4070 - val_acc: 0.8257 - val_precision: 0.7467 - val_recall: 0.8038 - val_fbeta_score: 0.7659\n", 216 | "Epoch 14/25\n", 217 | "136s - loss: 0.2439 - acc: 0.8947 - precision: 0.8524 - recall: 0.8704 - fbeta_score: 0.8543 - val_loss: 0.4079 - val_acc: 0.8253 - val_precision: 0.7616 - val_recall: 0.7693 - val_fbeta_score: 0.7576\n", 218 | "Epoch 15/25\n", 219 | "137s - loss: 0.2358 - acc: 0.8982 - precision: 0.8562 - recall: 0.8749 - fbeta_score: 0.8587 - val_loss: 0.4152 - val_acc: 0.8275 - val_precision: 0.7637 - val_recall: 0.7749 - val_fbeta_score: 0.7610\n", 220 | "Epoch 16/25\n", 221 | "136s - loss: 0.2278 - acc: 0.9023 - precision: 0.8626 - recall: 
0.8808 - fbeta_score: 0.8650 - val_loss: 0.4276 - val_acc: 0.8231 - val_precision: 0.7541 - val_recall: 0.7747 - val_fbeta_score: 0.7562\n", 222 | "Epoch 17/25\n", 223 | "137s - loss: 0.2216 - acc: 0.9052 - precision: 0.8664 - recall: 0.8839 - fbeta_score: 0.8686 - val_loss: 0.4310 - val_acc: 0.8253 - val_precision: 0.7496 - val_recall: 0.7953 - val_fbeta_score: 0.7640\n", 224 | "Epoch 18/25\n", 225 | "137s - loss: 0.2133 - acc: 0.9096 - precision: 0.8724 - recall: 0.8899 - fbeta_score: 0.8748 - val_loss: 0.4571 - val_acc: 0.8234 - val_precision: 0.7414 - val_recall: 0.8058 - val_fbeta_score: 0.7648\n", 226 | "Epoch 19/25\n", 227 | "136s - loss: 0.2087 - acc: 0.9115 - precision: 0.8757 - recall: 0.8916 - fbeta_score: 0.8773 - val_loss: 0.4565 - val_acc: 0.8249 - val_precision: 0.7500 - val_recall: 0.7895 - val_fbeta_score: 0.7615\n", 228 | "Epoch 20/25\n", 229 | "136s - loss: 0.2019 - acc: 0.9146 - precision: 0.8791 - recall: 0.8961 - fbeta_score: 0.8814 - val_loss: 0.4474 - val_acc: 0.8266 - val_precision: 0.7620 - val_recall: 0.7763 - val_fbeta_score: 0.7611\n", 230 | "Epoch 21/25\n", 231 | "136s - loss: 0.1955 - acc: 0.9173 - precision: 0.8820 - recall: 0.9008 - fbeta_score: 0.8855 - val_loss: 0.4586 - val_acc: 0.8228 - val_precision: 0.7405 - val_recall: 0.8041 - val_fbeta_score: 0.7632\n", 232 | "Epoch 22/25\n", 233 | "137s - loss: 0.1918 - acc: 0.9191 - precision: 0.8851 - recall: 0.9015 - fbeta_score: 0.8875 - val_loss: 0.4636 - val_acc: 0.8245 - val_precision: 0.7606 - val_recall: 0.7720 - val_fbeta_score: 0.7576\n", 234 | "Epoch 23/25\n", 235 | "137s - loss: 0.1868 - acc: 0.9215 - precision: 0.8893 - recall: 0.9038 - fbeta_score: 0.8910 - val_loss: 0.4579 - val_acc: 0.8255 - val_precision: 0.7469 - val_recall: 0.8029 - val_fbeta_score: 0.7660\n", 236 | "Epoch 24/25\n", 237 | "136s - loss: 0.1815 - acc: 0.9238 - precision: 0.8922 - recall: 0.9067 - fbeta_score: 0.8939 - val_loss: 0.4506 - val_acc: 0.8264 - val_precision: 0.7635 - val_recall: 0.7750 - 
val_fbeta_score: 0.7611\n", 238 | "Epoch 25/25\n", 239 | "137s - loss: 0.1757 - acc: 0.9270 - precision: 0.8959 - recall: 0.9113 - fbeta_score: 0.8983 - val_loss: 0.4839 - val_acc: 0.8251 - val_precision: 0.7490 - val_recall: 0.7978 - val_fbeta_score: 0.7645\n", 240 | "Training ended at 2017-02-14 04:50:33.618734\n", 241 | "Minutes elapsed: 57.403701\n" 242 | ] 243 | } 244 | ], 245 | "source": [ 246 | "print(\"Starting training at\", datetime.datetime.now())\n", 247 | "t0 = time.time()\n", 248 | "callbacks = [ModelCheckpoint(MODEL_WEIGHTS_FILE, monitor='val_acc', save_best_only=True)]\n", 249 | "history = model.fit([Q1_train, Q2_train],\n", 250 | " y_train,\n", 251 | " nb_epoch=NB_EPOCHS,\n", 252 | " validation_split=VALIDATION_SPLIT,\n", 253 | " verbose=2,\n", 254 | " callbacks=callbacks)\n", 255 | "t1 = time.time()\n", 256 | "print(\"Training ended at\", datetime.datetime.now())\n", 257 | "print(\"Minutes elapsed: %f\" % ((t1 - t0) / 60.))" 258 | ] 259 | }, 260 | { 261 | "cell_type": "markdown", 262 | "metadata": {}, 263 | "source": [ 264 | "## Plot training and validation accuracy" 265 | ] 266 | }, 267 | { 268 | "cell_type": "code", 269 | "execution_count": 7, 270 | "metadata": { 271 | "collapsed": false 272 | }, 273 | "outputs": [ 274 | { 275 | "data": { 276 | "image/png": 
"<base64-encoded PNG of the training/validation accuracy plot omitted>\n", 277 | "text/plain": [ 278 | "" 279 | ] 280 | }, 281 | "metadata": {}, 282 | "output_type": "display_data" 283 | } 284 | ], 285 | "source": [ 286 | "acc = pd.DataFrame({'epoch': [ i + 1 for i in history.epoch ],\n", 287 | " 'training': history.history['acc'],\n", 288 | " 'validation': history.history['val_acc']})\n", 289 | "ax = acc.plot(x='epoch', figsize=(5, 8), grid=True)\n", 290 | "ax.set_ylabel(\"accuracy\")\n", 291 | "ax.set_ylim([0.0,1.0]);" 292 | ] 293 | }, 294 | { 295 | "cell_type": "markdown", 296 | "metadata": {}, 297 | "source": [ 298 | "## Print best validation accuracy and epoch" 299 | ] 300 | }, 301 | { 302 | "cell_type": "code", 303 | "execution_count": 8, 304 | "metadata": { 305 | "collapsed": false 306 | }, 307 | "outputs": [ 308 | { 309 |
"name": "stdout", 310 | "output_type": "stream", 311 | "text": [ 312 | "Maximum accuracy at epoch 15 = 0.8275\n" 313 | ] 314 | } 315 | ], 316 | "source": [ 317 | "max_val_acc, idx = max((val, idx) for (idx, val) in enumerate(history.history['val_acc']))\n", 318 | "print('Maximum accuracy at epoch', '{:d}'.format(idx+1), '=', '{:.4f}'.format(max_val_acc))" 319 | ] 320 | }, 321 | { 322 | "cell_type": "markdown", 323 | "metadata": {}, 324 | "source": [ 325 | "## Evaluate the model with best validation accuracy on the test partition" 326 | ] 327 | }, 328 | { 329 | "cell_type": "code", 330 | "execution_count": 9, 331 | "metadata": { 332 | "collapsed": false 333 | }, 334 | "outputs": [ 335 | { 336 | "name": "stdout", 337 | "output_type": "stream", 338 | "text": [ 339 | "40096/40436 [============================>.] - ETA: 0s\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n", 340 | "loss = 0.4094\n", 341 | "accuracy = 0.8291\n", 342 | "precision = 0.7616\n", 343 | "recall = 0.7760\n", 344 | "F = 0.7609\n" 345 | ] 346 | } 347 | ], 348 | "source": [ 349 | "model.load_weights(MODEL_WEIGHTS_FILE)\n", 350 | "loss, accuracy, precision, recall, fbeta_score = model.evaluate([Q1_test, Q2_test], y_test)\n", 351 | "print('')\n", 352 | "print('loss = {0:.4f}'.format(loss))\n", 353 | "print('accuracy = {0:.4f}'.format(accuracy))\n", 354 | "print('precision = {0:.4f}'.format(precision))\n", 355 | "print('recall = {0:.4f}'.format(recall))\n", 356 | "print('F = {0:.4f}'.format(fbeta_score))" 357 | ] 358 | } 359 | ], 360 | "metadata": { 361 | "hide_input": false, 362 | "kernelspec": { 363 | "display_name": "Python 3", 364 | "language": "python", 365 | "name": "python3" 366 | }, 367 | "language_info": { 368 | "codemirror_mode": { 369 | "name": "ipython", 370 | "version": 3 371 | }, 372 | "file_extension": ".py", 373 | "mimetype": "text/x-python", 374 | "name": "python", 375 | "nbconvert_exporter": "python", 376 | "pygments_lexer": "ipython3", 377 | "version": "3.4.3" 378 | } 379 | }, 380 | "nbformat": 4, 381 | "nbformat_minor": 2 382 | } 383 | --------------------------------------------------------------------------------