├── 1. Training Siamese Network on images.ipynb
├── 2. Faces conv features.ipynb
├── 3. Training Siamese Network on Conv Features.ipynb
├── 4. Testing your image - who's your doppelgänger.ipynb
├── 5. Unexpected result of the training.ipynb
├── 6. Train face attributes model.ipynb
├── README.md
├── assets
│   ├── celeba.png
│   ├── face_reco.JPG
│   ├── loss.JPG
│   ├── obama.png
│   └── openface.jpg
├── identity_CelebA.txt
├── list_eval_partition.csv
├── losses.py
├── metrics.py
├── test_imgs
│   ├── jenia.jpg
│   └── mark_z.JPG
└── utils.py

/3. Training Siamese Network on Conv Features.ipynb:
--------------------------------------------------------------------------------
1 | {
2 |  "cells": [
3 |   {
4 |    "cell_type": "markdown",
5 |    "metadata": {},
6 |    "source": [
7 |     "# Build and Train SiameseNet with Triplet Loss\n",
8 |     "\n",
9 |     "Now that we have the conv encodings of all the faces, we can train a small siamese model (even on a laptop) to distinguish whether two images show the same person."
10 |    ]
11 |   },
12 |   {
13 |    "cell_type": "code",
14 |    "execution_count": 1,
15 |    "metadata": {},
16 |    "outputs": [],
17 |    "source": [
18 |     "import pandas as pd\n",
19 |     "import numpy as np\n",
20 |     "\n",
21 |     "celeb_data = pd.read_csv('identity_CelebA.txt', sep=\" \", header=None)\n",
22 |     "celeb_data.columns = [\"image\", \"label\"]\n",
23 |     "partition = pd.read_csv('list_eval_partition.csv')\n",
24 |     "df_train = celeb_data[partition.partition==0]\n",
25 |     "df_valid = celeb_data[partition.partition==1]\n",
26 |     "df_test = celeb_data[partition.partition==2]"
27 |    ]
28 |   },
29 |   {
30 |    "cell_type": "code",
31 |    "execution_count": 2,
32 |    "metadata": {
33 |     "scrolled": true
34 |    },
35 |    "outputs": [],
36 |    "source": [
37 |     "convfeats = np.load('conv_feats.npy')\n",
38 |     "labels = celeb_data['label'].values\n",
39 |     "\n",
40 |     "X_train = convfeats[partition.partition==0]\n",
41 |     "X_valid = convfeats[partition.partition==1]\n",
42 |     "y_train = labels[partition.partition==0]\n",
43 |     "y_valid = labels[partition.partition==1]"
44 |    ]
45 |   },
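  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before building the model, a quick sanity check (a minimal sketch, assuming the rows of `conv_feats.npy` are aligned with `identity_CelebA.txt`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check (sketch; assumes conv_feats.npy rows are aligned with identity_CelebA.txt)\n",
    "assert len(convfeats) == len(labels), 'conv features and labels must be aligned'\n",
    "print('conv feature size:', convfeats.shape[1])\n",
    "print('train / valid sizes:', len(X_train), len(X_valid))\n",
    "print('unique identities in train:', len(np.unique(y_train)))"
   ]
  },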
" 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": 3, 60 | "metadata": {}, 61 | "outputs": [ 62 | { 63 | "name": "stderr", 64 | "output_type": "stream", 65 | "text": [ 66 | "Using TensorFlow backend.\n" 67 | ] 68 | }, 69 | { 70 | "name": "stdout", 71 | "output_type": "stream", 72 | "text": [ 73 | "WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n", 74 | "Instructions for updating:\n", 75 | "Colocations handled automatically by placer.\n" 76 | ] 77 | } 78 | ], 79 | "source": [ 80 | "from keras.layers import Input, Dense, LeakyReLU, Concatenate, Lambda, BatchNormalization\n", 81 | "from keras import backend as K\n", 82 | "from keras.models import Model, load_model\n", 83 | "\n", 84 | "def embedder(conv_feat_size):\n", 85 | " input = Input((conv_feat_size,), name = 'input')\n", 86 | " normalize = Lambda(lambda x: K.l2_normalize(x, axis=-1), name='normalize')\n", 87 | " x = Dense(512)(input)\n", 88 | " x = LeakyReLU(alpha=0.1)(x)\n", 89 | " x = Dense(128)(x)\n", 90 | " x = normalize(x)\n", 91 | " model = Model(input, x)\n", 92 | " return model\n", 93 | " \n", 94 | "def get_siamese_model(conv_feat_size=2048):\n", 95 | " \n", 96 | " input_a = Input( (conv_feat_size,), name='anchor')\n", 97 | " input_p = Input( (conv_feat_size,), name='positive')\n", 98 | " input_n = Input( (conv_feat_size,), name='negative')\n", 99 | " \n", 100 | " emb_model = embedder(conv_feat_size)\n", 101 | " output_a = emb_model(input_a)\n", 102 | " output_p = emb_model(input_p)\n", 103 | " output_n = emb_model(input_n)\n", 104 | " \n", 105 | " merged_vector = Concatenate(axis=-1)([output_a, output_p, output_n])\n", 106 | " model = Model(inputs=[input_a, input_p, input_n],\n", 107 | " outputs=merged_vector)\n", 108 | "\n", 109 | " return model\n", 110 | "\n", 111 | "model = get_siamese_model(conv_feat_size = convfeats.shape[1])" 112 | ] 113 | }, 114 | { 115 | "cell_type": "markdown", 116 | "metadata": {}, 117 | "source": [ 118 | "### Create Siamese Model Loss - Triplet Loss\n", 119 | "\n", 120 | "Same person should have 'similar' signatures between his images, whilst new person should have differnet signature.\n", 121 | "One way to compare the \"similarity\" between this signatures (vectors) is to use euclidean distance metric or cosine distance.\n", 122 | "\n", 123 | "I chose to use cosine distance: https://en.wikipedia.org/wiki/Cosine_similarity\n", 124 | "But you can easily change it and check if you're getting better results. If you do, let me know :)\n", 125 | "\n", 126 | "Let's define 3 variables:\n", 127 | "1. Anchor - The image against which comparisons will be made\n", 128 | "2. Positive - Different image of the person in the anchor image\n", 129 | "3. 
114 |   {
115 |    "cell_type": "markdown",
116 |    "metadata": {},
117 |    "source": [
118 |     "### Create Siamese Model Loss - Triplet Loss\n",
119 |     "\n",
120 |     "Images of the same person should have 'similar' signatures, while images of a different person should have a different signature.\n",
121 |     "One way to measure the \"similarity\" between these signatures (vectors) is the Euclidean distance metric or the cosine distance.\n",
122 |     "\n",
123 |     "I chose to use cosine distance: https://en.wikipedia.org/wiki/Cosine_similarity\n",
124 |     "But you can easily change it and check if you're getting better results. If you do, let me know :)\n",
125 |     "\n",
126 |     "Let's define 3 variables:\n",
127 |     "1. Anchor - the image against which comparisons will be made\n",
128 |     "2. Positive - a different image of the person in the anchor image\n",
129 |     "3. Negative - an image of a different person\n",
130 |     "\n",
131 |     "Our loss is essentially: \n",
132 |     "\n",
133 |     "###### Loss = max(0, Cos_dist(Anchor, Positive) - Cos_dist(Anchor, Negative) + alpha)\n",
134 |     "\n",
135 |     "For more information visit: https://towardsdatascience.com/siamese-network-triplet-loss-b4ca82c1aec8\n"
136 |    ]
137 |   },
138 |   {
139 |    "cell_type": "code",
140 |    "execution_count": 4,
141 |    "metadata": {},
142 |    "outputs": [],
143 |    "source": [
144 |     "def triplet_loss(y_true, y_pred, cosine = True, alpha = 0.2, embedding_size = 128):\n",
145 |     "    \n",
146 |     "    ind = int(embedding_size * 2)\n",
147 |     "    a_pred = y_pred[:, :embedding_size]\n",
148 |     "    p_pred = y_pred[:, embedding_size:ind]\n",
149 |     "    n_pred = y_pred[:, ind:]\n",
150 |     "    if cosine:  # embeddings are L2-normalized, so 1 - dot product = cosine distance\n",
151 |     "        positive_distance = 1 - K.sum((a_pred * p_pred), axis=-1)\n",
152 |     "        negative_distance = 1 - K.sum((a_pred * n_pred), axis=-1)\n",
153 |     "    else:\n",
154 |     "        positive_distance = K.sqrt(K.sum(K.square(a_pred - p_pred), axis=-1))\n",
155 |     "        negative_distance = K.sqrt(K.sum(K.square(a_pred - n_pred), axis=-1))\n",
156 |     "    loss = K.maximum(0.0, positive_distance - negative_distance + alpha)\n",
157 |     "    return loss"
158 |    ]
159 |   },
160 |   {
161 |    "cell_type": "markdown",
162 |    "metadata": {},
163 |    "source": [
164 |     "### Create image generator for siamese network\n",
165 |     "The model's input will be mini-batches of [Anchors, Positives, Negatives] conv features of the images. \n",
166 |     "\n",
167 |     "Don't forget - we train on conv features and not on the original images"
168 |    ]
169 |   },
170 |   {
171 |    "cell_type": "code",
172 |    "execution_count": 5,
173 |    "metadata": {},
174 |    "outputs": [],
175 |    "source": [
176 |     "from keras.utils import Sequence\n",
177 |     "\n",
178 |     "class EmbLoader(Sequence):\n",
179 |     "    def __init__(self, convfeats, labels, batchSize = 16):\n",
180 |     "        self.X = convfeats\n",
181 |     "        self.batchSize = batchSize\n",
182 |     "        self.y = labels\n",
183 |     "        self.POS = np.zeros((batchSize, convfeats.shape[1]))\n",
184 |     "        self.NEG = np.zeros((batchSize, convfeats.shape[1]))\n",
185 |     "    # gets the number of batches this generator returns\n",
186 |     "    def __len__(self):\n",
187 |     "        l, rem = divmod(len(self.y), self.batchSize)\n",
188 |     "        return (l + (1 if rem > 0 else 0))\n",
189 |     "    # shuffles data on epoch end\n",
190 |     "    def on_epoch_end(self):\n",
191 |     "        a = np.arange(len(self.y))\n",
192 |     "        np.random.shuffle(a)\n",
193 |     "        self.X = self.X[a]\n",
194 |     "        self.y = self.y[a]\n",
195 |     "    # gets the batch with index = i\n",
196 |     "    def __getitem__(self, i):\n",
197 |     "        start = i*self.batchSize\n",
198 |     "        stop = (i+1)*self.batchSize\n",
199 |     "        ancor_labels = self.y[start:stop]\n",
200 |     "        ancors = self.X[start:stop]\n",
201 |     "        for k, label in enumerate(ancor_labels):\n",
202 |     "            pos_idx = np.where(self.y==label)[0]\n",
203 |     "            neg_idx = np.where(self.y!=label)[0]\n",
204 |     "            self.NEG[k] = self.X[np.random.choice(neg_idx)]\n",
205 |     "            pos_idx_hat = pos_idx[(pos_idx < start) | (pos_idx >= stop)]  # prefer positives from outside the current batch\n",
206 |     "            if len(pos_idx_hat):\n",
207 |     "                self.POS[k] = self.X[np.random.choice(pos_idx_hat)]\n",
208 |     "            else:\n",
209 |     "                # all positive examples are within the batch, or there is just 1 example in the dataset\n",
210 |     "                self.POS[k] = self.X[np.random.choice(pos_idx)]\n",
211 |     "        return [ancors, self.POS[:k+1], self.NEG[:k+1]], np.empty(k+1)"
212 |    ]
213 |   },
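  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal smoke test of the generator on synthetic data (a sketch; the toy sizes are assumptions, not CelebA values). Each branch of a batch should come out as `(batch_size, conv_feat_size)`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Smoke test (sketch): 10 toy identities with 10 images each, 8-dim features\n",
    "toy_X = np.random.rand(100, 8).astype('float32')\n",
    "toy_y = np.repeat(np.arange(10), 10)\n",
    "toy_loader = EmbLoader(toy_X, toy_y, batchSize=16)\n",
    "(a, p, n), _ = toy_loader[0]\n",
    "print(len(toy_loader), 'batches;', a.shape, p.shape, n.shape)"
   ]
  },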
225 | "scrolled": true 226 | }, 227 | "outputs": [ 228 | { 229 | "name": "stdout", 230 | "output_type": "stream", 231 | "text": [ 232 | "WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n", 233 | "Instructions for updating:\n", 234 | "Use tf.cast instead.\n", 235 | "Epoch 1/150\n", 236 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0403 - val_loss: 0.0399\n", 237 | "\n", 238 | "Epoch 00001: val_loss improved from inf to 0.03989, saving model to siamese.h5\n", 239 | "Epoch 2/150\n", 240 | "2544/2544 [==============================] - 27s 11ms/step - loss: 0.0391 - val_loss: 0.0395\n", 241 | "\n", 242 | "Epoch 00002: val_loss improved from 0.03989 to 0.03951, saving model to siamese.h5\n", 243 | "Epoch 3/150\n", 244 | "2539/2544 [============================>.] - ETA: 0s - loss: 0.0379Epoch 3/150\n", 245 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0379 - val_loss: 0.0395\n", 246 | "\n", 247 | "Epoch 00003: val_loss improved from 0.03951 to 0.03950, saving model to siamese.h5\n", 248 | "Epoch 4/150\n", 249 | "2544/2544 [==============================] - 27s 11ms/step - loss: 0.0378 - val_loss: 0.0389\n", 250 | "\b\n", 251 | "Epoch 00004: val_loss improved from 0.03950 to 0.03894, saving model to siamese.h5\n", 252 | "Epoch 5/150\n", 253 | "2538/2544 [============================>.] - ETA: 0s - loss: 0.0378\n", 254 | "2544/2544 [==============================] - 27s 11ms/step - loss: 0.0378 - val_loss: 0.0401\n", 255 | "\n", 256 | "Epoch 00005: val_loss did not improve from 0.03894\n", 257 | "Epoch 6/150\n", 258 | "2543/2544 [============================>.] 
- ETA: 0s - loss: 0.0380\n", 259 | "Epoch 00005: val_loss did not improve from 0.03894\n", 260 | "Epoch 6/150\n", 261 | "2544/2544 [==============================] - 29s 12ms/step - loss: 0.0380 - val_loss: 0.0378\n", 262 | "\n", 263 | "Epoch 00006: val_loss improved from 0.03894 to 0.03779, saving model to siamese.h5\n", 264 | "Epoch 7/150\n", 265 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0374 - val_loss: 0.0384\n", 266 | "\b\n", 267 | "Epoch 00007: val_loss did not improve from 0.03779\n", 268 | "Epoch 8/150\n", 269 | "2544/2544 [==============================] - 25s 10ms/step - loss: 0.0373 - val_loss: 0.0395\n", 270 | "\n", 271 | "Epoch 00008: val_loss did not improve from 0.03779\n", 272 | "Epoch 9/150\n", 273 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0371 - val_loss: 0.0386\n", 274 | "\n", 275 | "Epoch 00009: val_loss did not improve from 0.03779\n", 276 | "Epoch 10/150\n", 277 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0366 - val_loss: 0.0381\n", 278 | "\n", 279 | "Epoch 00010: val_loss did not improve from 0.03779\n", 280 | "Epoch 11/150\n", 281 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0364 - val_loss: 0.0382\n", 282 | "\n", 283 | "Epoch 00011: val_loss did not improve from 0.03779\n", 284 | "Epoch 12/150\n", 285 | "2544/2544 [==============================] - 26s 10ms/step - loss: 0.0367 - val_loss: 0.0377\n", 286 | "\n", 287 | "Epoch 00012: val_loss improved from 0.03779 to 0.03771, saving model to siamese.h5\n", 288 | "Epoch 13/150\n", 289 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0364 - val_loss: 0.0382\n", 290 | "\n", 291 | "Epoch 00013: val_loss did not improve from 0.03771\n", 292 | "Epoch 14/150\n", 293 | "2544/2544 [==============================] - 25s 10ms/step - loss: 0.0366 - val_loss: 0.0390\n", 294 | "\n", 295 | "Epoch 00014: val_loss did not improve from 0.03771\n", 296 | "Epoch 15/150\n", 297 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0361 - val_loss: 0.0388\n", 298 | "\n", 299 | "Epoch 00015: val_loss did not improve from 0.03771\n", 300 | "Epoch 16/150\n", 301 | "2544/2544 [==============================] - 29s 11ms/step - loss: 0.0367 - val_loss: 0.0372\n", 302 | "\n", 303 | "Epoch 00016: val_loss improved from 0.03771 to 0.03723, saving model to siamese.h5\n", 304 | "Epoch 17/150\n", 305 | "2544/2544 [==============================] - 27s 11ms/step - loss: 0.0360 - val_loss: 0.0373\n", 306 | "\n", 307 | "Epoch 00017: val_loss did not improve from 0.03723\n", 308 | "Epoch 18/150\n", 309 | "2543/2544 [============================>.] 
- ETA: 0s - loss: 0.0365\n", 310 | "Epoch 00017: val_loss did not improve from 0.03723\n", 311 | "Epoch 18/150\n", 312 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0365 - val_loss: 0.0368\n", 313 | "\n", 314 | "Epoch 00018: val_loss improved from 0.03723 to 0.03682, saving model to siamese.h5\n", 315 | "Epoch 19/150\n", 316 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0364 - val_loss: 0.0383\n", 317 | "\n", 318 | "Epoch 00019: val_loss did not improve from 0.03682\n", 319 | "Epoch 20/150\n", 320 | "2544/2544 [==============================] - 26s 10ms/step - loss: 0.0357 - val_loss: 0.0391\n", 321 | "\n", 322 | "Epoch 00020: val_loss did not improve from 0.03682\n", 323 | "Epoch 21/150\n", 324 | "2544/2544 [==============================] - 30s 12ms/step - loss: 0.0361 - val_loss: 0.0373\n", 325 | "\n", 326 | "Epoch 00021: val_loss did not improve from 0.03682\n", 327 | "Epoch 22/150\n", 328 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0359 - val_loss: 0.0367\n", 329 | "\n", 330 | "Epoch 00022: val_loss improved from 0.03682 to 0.03666, saving model to siamese.h5\n", 331 | "Epoch 23/150\n", 332 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0358 - val_loss: 0.0372\n", 333 | "\n", 334 | "Epoch 00023: val_loss did not improve from 0.03666\n", 335 | "Epoch 24/150\n", 336 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0360 - val_loss: 0.0390\n", 337 | "\n", 338 | "Epoch 00024: val_loss did not improve from 0.03666\n", 339 | "Epoch 25/150\n", 340 | "2541/2544 [============================>.] - ETA: 0s - loss: 0.0355\n", 341 | "2544/2544 [==============================] - 26s 10ms/step - loss: 0.0355 - val_loss: 0.0367\n", 342 | "\n", 343 | "Epoch 00025: val_loss did not improve from 0.03666\n", 344 | "Epoch 26/150\n", 345 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0354 - val_loss: 0.0366\n", 346 | " 5/2544 [..............................] - ETA: 33:11 - loss: 0.0403\n", 347 | "Epoch 00026: val_loss improved from 0.03666 to 0.03662, saving model to siamese.h5\n", 348 | "Epoch 27/150\n", 349 | "2541/2544 [============================>.] 
- ETA: 0s - loss: 0.0356Epoch 27/150\n", 350 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0356 - val_loss: 0.0369\n", 351 | "\n", 352 | "Epoch 00027: val_loss did not improve from 0.03662\n", 353 | "Epoch 28/150\n", 354 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0356 - val_loss: 0.0361\n", 355 | "\n", 356 | "Epoch 00028: val_loss improved from 0.03662 to 0.03609, saving model to siamese.h5\n", 357 | "Epoch 29/150\n", 358 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0357 - val_loss: 0.0377\n", 359 | "\n", 360 | "Epoch 00029: val_loss did not improve from 0.03609\n", 361 | "Epoch 30/150\n", 362 | "2544/2544 [==============================] - 25s 10ms/step - loss: 0.0350 - val_loss: 0.0371\n", 363 | "\n", 364 | "Epoch 00030: val_loss did not improve from 0.03609\n", 365 | "Epoch 31/150\n", 366 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0353 - val_loss: 0.0357\n", 367 | "\n", 368 | "Epoch 00031: val_loss improved from 0.03609 to 0.03572, saving model to siamese.h5\n", 369 | "Epoch 32/150\n", 370 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0350 - val_loss: 0.0380\n", 371 | "\n", 372 | "Epoch 00032: val_loss did not improve from 0.03572\n", 373 | "Epoch 33/150\n", 374 | "2544/2544 [==============================] - 29s 11ms/step - loss: 0.0351 - val_loss: 0.0368\n", 375 | "\n", 376 | "Epoch 00033: val_loss did not improve from 0.035\n", 377 | "Epoch 34/150\n", 378 | "2538/2544 [============================>.] - ETA: 0s - loss: 0.0344\n", 379 | "Epoch 34/150\n", 380 | "2544/2544 [==============================] - 30s 12ms/step - loss: 0.0344 - val_loss: 0.0360\n", 381 | "\n", 382 | "Epoch 00034: val_loss did not improve from 0.03572\n", 383 | "Epoch 35/150\n", 384 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0349 - val_loss: 0.0366\n", 385 | "\n", 386 | "Epoch 00035: val_loss did not improve from 0.03572\n", 387 | "Epoch 36/150\n", 388 | "2544/2544 [==============================] - 25s 10ms/step - loss: 0.0349 - val_loss: 0.0365\n", 389 | "\n", 390 | "Epoch 00036: val_loss did not improve from 0.03572\n", 391 | "Epoch 37/150\n", 392 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0348 - val_loss: 0.0380\n", 393 | "\n", 394 | "Epoch 00037: val_loss did not improve from 0.03572\n", 395 | "Epoch 38/150\n", 396 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0349 - val_loss: 0.0365\n", 397 | "\n", 398 | "Epoch 00038: val_loss did not improve from 0.03572\n", 399 | "Epoch 39/150\n", 400 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0349 - val_loss: 0.0362\n", 401 | "\n", 402 | "Epoch 00039: val_loss did not improve from 0.03572\n", 403 | "Epoch 40/150\n", 404 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0346 - val_loss: 0.0364\n", 405 | "\n", 406 | "Epoch 00040: val_loss did not improve from 0.03572\n", 407 | "Epoch 41/150\n", 408 | "2544/2544 [==============================] - 26s 10ms/step - loss: 0.0349 - val_loss: 0.0355\n", 409 | "\n", 410 | "Epoch 00041: val_loss improved from 0.03572 to 0.03546, saving model to siamese.h5\n", 411 | "Epoch 42/150\n", 412 | "2542/2544 [============================>.] 
- ETA: 0s - loss: 0.0350\n", 413 | "Epoch 00041: val_loss improved from 0.03572 to 0.03546, saving model to siamese.h5\n", 414 | "Epoch 42/150\n", 415 | "2544/2544 [==============================] - 27s 11ms/step - loss: 0.0350 - val_loss: 0.0369\n", 416 | "\n", 417 | "Epoch 00042: val_loss did not improve from 0.03546\n", 418 | "Epoch 43/150\n" 419 | ] 420 | }, 421 | { 422 | "name": "stdout", 423 | "output_type": "stream", 424 | "text": [ 425 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0346 - val_loss: 0.0361\n", 426 | "\n", 427 | "Epoch 00043: val_loss did not improve from 0.03546\n", 428 | "Epoch 44/150\n", 429 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0345 - val_loss: 0.0359\n", 430 | "\b\n", 431 | "Epoch 00044: val_loss did not improve from 0.03546\n", 432 | "Epoch 45/150\n", 433 | "2544/2544 [==============================] - 25s 10ms/step - loss: 0.0338 - val_loss: 0.0352\n", 434 | "\n", 435 | "Epoch 00045: val_loss improved from 0.03546 to 0.03516, saving model to siamese.h5\n", 436 | "Epoch 46/150\n", 437 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0347 - val_loss: 0.0355\n", 438 | "\n", 439 | "Epoch 00046: val_loss did not improve from 0.03516\n", 440 | "Epoch 47/150\n", 441 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0347 - val_loss: 0.0366\n", 442 | "\n", 443 | "Epoch 00047: val_loss did not improve from 0.03516\n", 444 | "Epoch 48/150\n", 445 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0344 - val_loss: 0.0372\n", 446 | "\n", 447 | "Epoch 00048: val_loss did not improve from 0.03516\n", 448 | "Epoch 49/150\n", 449 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0341 - val_loss: 0.0380\n", 450 | "\n", 451 | "Epoch 00049: val_loss did not improve from 0.03516\n", 452 | "Epoch 50/150\n", 453 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0344 - val_loss: 0.0382\n", 454 | "\n", 455 | "Epoch 00050: val_loss did not improve from 0.03516\n", 456 | "Epoch 51/150\n", 457 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0344 - val_loss: 0.0366\n", 458 | "\n", 459 | "Epoch 00051: val_loss did not improve from 0.03516\n", 460 | "Epoch 52/150\n", 461 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0343 - val_loss: 0.0352\n", 462 | "\b\n", 463 | "Epoch 00052: val_loss did not improve from 0.03516\n", 464 | "Epoch 53/150\n", 465 | "2544/2544 [==============================] - 26s 10ms/step - loss: 0.0340 - val_loss: 0.0369\n", 466 | "\n", 467 | "Epoch 00053: val_loss did not improve from 0.03516\n", 468 | "Epoch 54/150\n", 469 | "2544/2544 [==============================] - 27s 11ms/step - loss: 0.0348 - val_loss: 0.0356\n", 470 | "\n", 471 | "Epoch 00054: val_loss did not improve from 0.03516\n", 472 | "Epoch 55/150\n", 473 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0340 - val_loss: 0.0364\n", 474 | "\n", 475 | "Epoch 00055: val_loss did not improve from 0.03516\n", 476 | "Epoch 56/150\n", 477 | "2544/2544 [==============================] - 26s 10ms/step - loss: 0.0342 - val_loss: 0.0352\n", 478 | "\n", 479 | "Epoch 00056: val_loss did not improve from 0.03516\n", 480 | "Epoch 57/150\n", 481 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0341 - val_loss: 0.0358\n", 482 | "\n", 483 | "Epoch 00057: val_loss did not improve from 0.03516\n", 484 | "Epoch 58/150\n", 485 | "2544/2544 
[==============================] - 28s 11ms/step - loss: 0.0341 - val_loss: 0.0352\n", 486 | "\n", 487 | "Epoch 00058: val_loss did not improve from 0.03516\n", 488 | "Epoch 59/150\n", 489 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0340 - val_loss: 0.0371\n", 490 | "\n", 491 | "Epoch 00059: val_loss did not improve from 0.03516\n", 492 | "Epoch 60/150\n", 493 | "2544/2544 [==============================] - 26s 10ms/step - loss: 0.0342 - val_loss: 0.0356\n", 494 | "\n", 495 | "Epoch 00060: val_loss did not improve from 0.03516\n", 496 | "Epoch 61/150\n", 497 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0338 - val_loss: 0.0360\n", 498 | "\n", 499 | "Epoch 00061: val_loss did not improve from 0.03516\n", 500 | "Epoch 62/150\n", 501 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0338 - val_loss: 0.0355\n", 502 | "\n", 503 | "Epoch 00062: val_loss did not improve from 0.03516\n", 504 | "Epoch 63/150\n", 505 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0338 - val_loss: 0.0355\n", 506 | "\n", 507 | "Epoch 00063: val_loss did not improve from 0.03516\n", 508 | "Epoch 64/150\n", 509 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0339 - val_loss: 0.0346\n", 510 | "\n", 511 | "Epoch 00064: val_loss improved from 0.03516 to 0.03456, saving model to siamese.h5\n", 512 | "Epoch 65/150\n", 513 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0343 - val_loss: 0.0358\n", 514 | "\n", 515 | "Epoch 00065: val_loss did not improve from 0.03456\n", 516 | "Epoch 66/150\n", 517 | "2544/2544 [==============================] - 25s 10ms/step - loss: 0.0336 - val_loss: 0.0371\n", 518 | "\n", 519 | "Epoch 00066: val_loss did not improve from 0.03456\n", 520 | "Epoch 67/150\n", 521 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0338 - val_loss: 0.0358\n", 522 | "\n", 523 | "Epoch 00067: val_loss did not improve from 0.03456\n", 524 | "Epoch 68/150\n", 525 | "2544/2544 [==============================] - 26s 10ms/step - loss: 0.0336 - val_loss: 0.0355\n", 526 | "\n", 527 | "Epoch 00071: val_loss did not improve from 0.03391\n", 528 | "Epoch 72/150\n", 529 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0338 - val_loss: 0.0354\n", 530 | " 13/2544 [..............................] 
- ETA: 14:15 - loss: 0.0309\n", 531 | "Epoch 00072: val_loss did not improve from 0.03391\n", 532 | "Epoch 73/150\n", 533 | "2544/2544 [==============================] - 26s 10ms/step - loss: 0.0338 - val_loss: 0.0351\n", 534 | "\n", 535 | "Epoch 00073: val_loss did not improve from 0.03391\n", 536 | "Epoch 74/150\n", 537 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0335 - val_loss: 0.0352\n", 538 | "\n", 539 | "Epoch 00078: val_loss did not improve from 0.03391\n", 540 | "Epoch 79/150\n", 541 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0340 - val_loss: 0.0362\n", 542 | "\n", 543 | "Epoch 00079: val_loss did not improve from 0.03391\n", 544 | "Epoch 80/150\n", 545 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0336 - val_loss: 0.0350\n", 546 | "\n", 547 | "Epoch 00080: val_loss did not improve from 0.03391\n", 548 | "Epoch 81/150\n", 549 | "2544/2544 [==============================] - 25s 10ms/step - loss: 0.0334 - val_loss: 0.0351\n", 550 | "\n", 551 | "Epoch 00081: val_loss did not improve from 0.03391\n", 552 | "Epoch 82/150\n", 553 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0339 - val_loss: 0.0345\n", 554 | "\n", 555 | "Epoch 00082: val_loss did not improve from 0.03391\n", 556 | "Epoch 83/150\n", 557 | "2544/2544 [==============================] - 25s 10ms/step - loss: 0.0330 - val_loss: 0.0367\n", 558 | "\n", 559 | "Epoch 00083: val_loss did not improve from 0.03391\n", 560 | "Epoch 84/150\n", 561 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0333 - val_loss: 0.0343\n", 562 | "\n", 563 | "Epoch 00084: val_loss did not improve from 0.03391\n", 564 | "Epoch 85/150\n", 565 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0335 - val_loss: 0.0360\n", 566 | "\n", 567 | "Epoch 00085: val_loss did not improve from 0.03391\n", 568 | "Epoch 86/150\n", 569 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0333 - val_loss: 0.0360\n", 570 | "\n", 571 | "Epoch 00086: val_loss did not improve from 0.03391\n", 572 | "Epoch 87/150\n", 573 | "2544/2544 [==============================] - 25s 10ms/step - loss: 0.0336 - val_loss: 0.0355\n", 574 | "\n", 575 | "Epoch 00087: val_loss did not improve from 0.03391\n", 576 | "Epoch 88/150\n", 577 | "2544/2544 [==============================] - 29s 11ms/step - loss: 0.0334 - val_loss: 0.0352\n", 578 | "\n", 579 | "Epoch 00088: val_loss did not improve from 0.03391\n", 580 | "Epoch 89/150\n", 581 | "2544/2544 [==============================] - 29s 12ms/step - loss: 0.0336 - val_loss: 0.0354\n", 582 | "\n", 583 | "Epoch 00089: val_loss did not improve from 0.03391\n", 584 | "Epoch 90/150\n", 585 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0332 - val_loss: 0.0347\n", 586 | "\n", 587 | "Epoch 00093: val_loss did not improve from 0.03391\n", 588 | "Epoch 94/150\n", 589 | "2544/2544 [==============================] - 30s 12ms/step - loss: 0.0331 - val_loss: 0.0353\n", 590 | "\n", 591 | "Epoch 00094: val_loss did not improve from 0.03391\n", 592 | "Epoch 95/150\n", 593 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0334 - val_loss: 0.0349\n", 594 | "\n", 595 | "Epoch 00095: val_loss did not improve from 0.03391\n", 596 | "Epoch 96/150\n", 597 | "2544/2544 [==============================] - 25s 10ms/step - loss: 0.0330 - val_loss: 0.0344\n", 598 | "\n", 599 | "Epoch 00096: val_loss did not improve from 0.03391\n", 600 | "Epoch 97/150\n", 
601 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0338 - val_loss: 0.0343\n", 602 | "\n", 603 | "Epoch 00097: val_loss did not improve from 0.03391\n", 604 | "Epoch 98/150\n", 605 | "2544/2544 [==============================] - 31s 12ms/step - loss: 0.0334 - val_loss: 0.0349\n", 606 | "\n", 607 | "Epoch 00098: val_loss did not improve from 0.03391\n", 608 | "Epoch 99/150\n", 609 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0331 - val_loss: 0.0353\n", 610 | "\n", 611 | "Epoch 00099: val_loss did not improve from 0.03391\n", 612 | "Epoch 100/150\n", 613 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0331 - val_loss: 0.0338\n", 614 | "\n", 615 | "Epoch 00100: val_loss improved from 0.03391 to 0.03377, saving model to siamese.h5\n", 616 | "Epoch 101/150\n", 617 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0335 - val_loss: 0.0353\n", 618 | "\n", 619 | "Epoch 00101: val_loss did not improve from 0.03377\n", 620 | "Epoch 102/150\n", 621 | "2544/2544 [==============================] - 25s 10ms/step - loss: 0.0330 - val_loss: 0.0350\n", 622 | "\n", 623 | "Epoch 00102: val_loss did not improve from 0.03377\n", 624 | "Epoch 103/150\n", 625 | "2544/2544 [==============================] - 28s 11ms/step - loss: 0.0328 - val_loss: 0.0338\n", 626 | "\n", 627 | "Epoch 00103: val_loss did not improve from 0.03377\n", 628 | "Epoch 104/150\n" 629 | ] 630 | }, 631 | { 632 | "name": "stdout", 633 | "output_type": "stream", 634 | "text": [ 635 | " 230/2544 [=>............................] - ETA: 1:21 - loss: 0.0331" 636 | ] 637 | }, 638 | { 639 | "name": "stderr", 640 | "output_type": "stream", 641 | "text": [ 642 | "Process ForkPoolWorker-2478:\n", 643 | "Process ForkPoolWorker-2485:\n", 644 | "Process ForkPoolWorker-2492:\n", 645 | "Process ForkPoolWorker-2479:\n", 646 | "Process ForkPoolWorker-2484:\n", 647 | "Process ForkPoolWorker-2490:\n", 648 | "Process ForkPoolWorker-2474:\n", 649 | "Process ForkPoolWorker-2482:\n", 650 | "Process ForkPoolWorker-2473:\n", 651 | "Process ForkPoolWorker-2495:\n", 652 | "Process ForkPoolWorker-2494:\n", 653 | "Process ForkPoolWorker-2489:\n", 654 | "Process ForkPoolWorker-2481:\n", 655 | "Process ForkPoolWorker-2477:\n", 656 | "Process ForkPoolWorker-2487:\n", 657 | "Process ForkPoolWorker-2476:\n", 658 | "Process ForkPoolWorker-2480:\n", 659 | "Process ForkPoolWorker-2483:\n", 660 | "Process ForkPoolWorker-2486:\n", 661 | "Process ForkPoolWorker-2488:\n", 662 | "Process ForkPoolWorker-2493:\n", 663 | "Process ForkPoolWorker-2496:\n", 664 | "Process ForkPoolWorker-2491:\n", 665 | "Process ForkPoolWorker-2475:\n", 666 | "Traceback (most recent call last):\n", 667 | "Traceback (most recent call last):\n", 668 | "Traceback (most recent call last):\n", 669 | "Traceback (most recent call last):\n", 670 | "Traceback (most recent call last):\n", 671 | "Traceback (most recent call last):\n", 672 | "Traceback (most recent call last):\n", 673 | "Traceback (most recent call last):\n", 674 | "Traceback (most recent call last):\n", 675 | "Traceback (most recent call last):\n", 676 | "Traceback (most recent call last):\n", 677 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 678 | " self.run()\n", 679 | "Traceback (most recent call last):\n", 680 | "Traceback (most recent call last):\n", 681 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 682 | " self.run()\n", 683 | " File 
\"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 684 | " self.run()\n", 685 | "Traceback (most recent call last):\n", 686 | "Traceback (most recent call last):\n", 687 | "Traceback (most recent call last):\n", 688 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 689 | " self.run()\n", 690 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 691 | " self.run()\n", 692 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 693 | " self.run()\n", 694 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 695 | " self.run()\n", 696 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 697 | " self.run()\n", 698 | "Traceback (most recent call last):\n", 699 | "Traceback (most recent call last):\n", 700 | "Traceback (most recent call last):\n", 701 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 702 | " self.run()\n", 703 | "Traceback (most recent call last):\n", 704 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 705 | " self._target(*self._args, **self._kwargs)\n", 706 | "Traceback (most recent call last):\n", 707 | "Traceback (most recent call last):\n", 708 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 709 | " self.run()\n", 710 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 711 | " self._target(*self._args, **self._kwargs)\n", 712 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 713 | " self.run()\n", 714 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 715 | " self._target(*self._args, **self._kwargs)\n", 716 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 717 | " self.run()\n", 718 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 719 | " self._target(*self._args, **self._kwargs)\n", 720 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 721 | " self.run()\n", 722 | "Traceback (most recent call last):\n", 723 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 724 | " self._target(*self._args, **self._kwargs)\n", 725 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 726 | " self.run()\n", 727 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 728 | " self._target(*self._args, **self._kwargs)\n", 729 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 730 | " self._target(*self._args, **self._kwargs)\n", 731 | "Traceback (most recent call last):\n", 732 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 733 | " self.run()\n", 734 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 735 | " self.run()\n", 736 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 737 | " self.run()\n", 738 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 739 | " self._target(*self._args, **self._kwargs)\n", 740 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 741 | " self._target(*self._args, **self._kwargs)\n", 742 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 
743 | " self.run()\n", 744 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 745 | " self.run()\n", 746 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 119, in worker\n", 747 | " result = (True, func(*args, **kwds))\n", 748 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 749 | " self._target(*self._args, **self._kwargs)\n", 750 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 751 | " task = get()\n", 752 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 753 | " self.run()\n", 754 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 755 | " self.run()\n", 756 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 757 | " self.run()\n", 758 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 759 | " self._target(*self._args, **self._kwargs)\n", 760 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 761 | " self._target(*self._args, **self._kwargs)\n", 762 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 119, in worker\n", 763 | " result = (True, func(*args, **kwds))\n", 764 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 119, in worker\n", 765 | " result = (True, func(*args, **kwds))\n", 766 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 767 | " self._target(*self._args, **self._kwargs)\n", 768 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 769 | " self.run()\n", 770 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 771 | " task = get()\n", 772 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 773 | " task = get()\n", 774 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 775 | " task = get()\n", 776 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 777 | " self._target(*self._args, **self._kwargs)\n", 778 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 779 | " self._target(*self._args, **self._kwargs)\n", 780 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 781 | " task = get()\n", 782 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 249, in _bootstrap\n", 783 | " self.run()\n", 784 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 785 | " self._target(*self._args, **self._kwargs)\n", 786 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 787 | " self._target(*self._args, **self._kwargs)\n", 788 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 789 | " self._target(*self._args, **self._kwargs)\n", 790 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 791 | " task = get()\n", 792 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 793 | " with self._rlock:\n", 794 | " File \"/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py\", line 401, in get_index\n", 795 | " return _SHARED_SEQUENCES[uid][i]\n", 796 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 797 | " task = get()\n", 798 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 799 | " self._target(*self._args, **self._kwargs)\n", 800 | " File 
\"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 801 | " self._target(*self._args, **self._kwargs)\n", 802 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 803 | " self._target(*self._args, **self._kwargs)\n", 804 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 805 | " self._target(*self._args, **self._kwargs)\n", 806 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 807 | " task = get()\n", 808 | " File \"/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py\", line 401, in get_index\n", 809 | " return _SHARED_SEQUENCES[uid][i]\n", 810 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 811 | " task = get()\n", 812 | " File \"/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py\", line 401, in get_index\n", 813 | " return _SHARED_SEQUENCES[uid][i]\n", 814 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 815 | " with self._rlock:\n", 816 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 817 | " self._target(*self._args, **self._kwargs)\n" 818 | ] 819 | }, 820 | { 821 | "name": "stderr", 822 | "output_type": "stream", 823 | "text": [ 824 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 825 | " task = get()\n", 826 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 827 | " task = get()\n", 828 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 829 | " with self._rlock:\n", 830 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 831 | " task = get()\n", 832 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 833 | " with self._rlock:\n", 834 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 835 | " task = get()\n", 836 | " File \"/usr/lib/python3.5/multiprocessing/process.py\", line 93, in run\n", 837 | " self._target(*self._args, **self._kwargs)\n", 838 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 839 | " task = get()\n", 840 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 841 | " with self._rlock:\n", 842 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 843 | " task = get()\n", 844 | " File \"\", line 27, in __getitem__\n", 845 | " pos_idx = np.where(self.y==label)[0]\n", 846 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 847 | " task = get()\n", 848 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 849 | " with self._rlock:\n", 850 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 851 | " with self._rlock:\n", 852 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 853 | " task = get()\n", 854 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 855 | " return self._semlock.__enter__()\n", 856 | " File \"\", line 28, in __getitem__\n", 857 | " neg_idx = np.where(self.y!=label)[0]\n", 858 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 119, in worker\n", 859 | " result = (True, func(*args, **kwds))\n", 860 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 861 | " task = get()\n", 862 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 863 | " with self._rlock:\n", 
864 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 865 | " task = get()\n", 866 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 867 | " return self._semlock.__enter__()\n", 868 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 869 | " with self._rlock:\n", 870 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 871 | " with self._rlock:\n", 872 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 873 | " with self._rlock:\n", 874 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 875 | " with self._rlock:\n", 876 | " File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 108, in worker\n", 877 | " task = get()\n", 878 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 343, in get\n", 879 | " res = self._reader.recv_bytes()\n", 880 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 881 | " return self._semlock.__enter__()\n", 882 | " File \"\", line 29, in __getitem__\n", 883 | " self.NEG[k] = self.X[np.random.choice(neg_idx)]\n", 884 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 885 | " with self._rlock:\n", 886 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 887 | " with self._rlock:\n", 888 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 889 | " return self._semlock.__enter__()\n", 890 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 891 | " return self._semlock.__enter__()\n", 892 | "KeyboardInterrupt\n", 893 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 894 | " with self._rlock:\n", 895 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 896 | " with self._rlock:\n", 897 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 898 | " with self._rlock:\n", 899 | "KeyboardInterrupt\n", 900 | "KeyboardInterrupt\n", 901 | " File \"/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py\", line 401, in get_index\n", 902 | " return _SHARED_SEQUENCES[uid][i]\n", 903 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 904 | " return self._semlock.__enter__()\n", 905 | "KeyboardInterrupt\n", 906 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 342, in get\n", 907 | " with self._rlock:\n", 908 | " File \"/usr/lib/python3.5/multiprocessing/queues.py\", line 343, in get\n", 909 | " res = self._reader.recv_bytes()\n", 910 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 911 | " return self._semlock.__enter__()\n", 912 | "KeyboardInterrupt\n", 913 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 914 | " return self._semlock.__enter__()\n", 915 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 916 | " return self._semlock.__enter__()\n", 917 | " File \"/usr/lib/python3.5/multiprocessing/connection.py\", line 216, in recv_bytes\n", 918 | " buf = self._recv_bytes(maxlength)\n", 919 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 920 | " return self._semlock.__enter__()\n", 921 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 922 | " return self._semlock.__enter__()\n", 
923 | "KeyboardInterrupt\n", 924 | "KeyboardInterrupt\n", 925 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 926 | " return self._semlock.__enter__()\n", 927 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 928 | " return self._semlock.__enter__()\n", 929 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 930 | " return self._semlock.__enter__()\n", 931 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 932 | " return self._semlock.__enter__()\n", 933 | "KeyboardInterrupt\n", 934 | " File \"\", line 28, in __getitem__\n", 935 | " neg_idx = np.where(self.y!=label)[0]\n", 936 | " File \"/usr/lib/python3.5/multiprocessing/connection.py\", line 216, in recv_bytes\n", 937 | " buf = self._recv_bytes(maxlength)\n", 938 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 939 | " return self._semlock.__enter__()\n", 940 | "KeyboardInterrupt\n", 941 | "KeyboardInterrupt\n", 942 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 943 | " return self._semlock.__enter__()\n", 944 | "KeyboardInterrupt\n", 945 | "KeyboardInterrupt\n", 946 | "KeyboardInterrupt\n", 947 | "KeyboardInterrupt\n", 948 | " File \"/usr/lib/python3.5/multiprocessing/connection.py\", line 407, in _recv_bytes\n", 949 | " buf = self._recv(4)\n", 950 | " File \"/usr/lib/python3.5/multiprocessing/connection.py\", line 407, in _recv_bytes\n", 951 | " buf = self._recv(4)\n", 952 | "KeyboardInterrupt\n", 953 | "KeyboardInterrupt\n", 954 | "KeyboardInterrupt\n", 955 | "KeyboardInterrupt\n", 956 | "KeyboardInterrupt\n" 957 | ] 958 | }, 959 | { 960 | "name": "stdout", 961 | "output_type": "stream", 962 | "text": [ 963 | "\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b" 964 | ] 965 | }, 966 | { 967 | "name": "stderr", 968 | "output_type": "stream", 969 | "text": [ 970 | "KeyboardInterrupt\n", 971 | " File \"/usr/lib/python3.5/multiprocessing/connection.py\", line 379, in _recv\n", 972 | " chunk = read(handle, remaining)\n", 973 | " File \"/usr/lib/python3.5/multiprocessing/connection.py\", line 379, in _recv\n", 974 | " chunk = read(handle, remaining)\n", 975 | "KeyboardInterrupt\n", 976 | "KeyboardInterrupt\n", 977 | " File \"/usr/lib/python3.5/multiprocessing/synchronize.py\", line 96, in __enter__\n", 978 | " return self._semlock.__enter__()\n", 979 | "KeyboardInterrupt\n", 980 | "KeyboardInterrupt\n" 981 | ] 982 | }, 983 | { 984 | "ename": "KeyboardInterrupt", 985 | "evalue": "", 986 | "output_type": "error", 987 | "traceback": [ 988 | "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", 989 | "\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)", 990 | "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 12\u001b[0m model.fit_generator(train_gen, steps_per_epoch=len(train_gen), epochs=150, \n\u001b[1;32m 13\u001b[0m \u001b[0mvalidation_data\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mvalid_gen\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mvalidation_steps\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mvalid_gen\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 14\u001b[0;31m workers=12, use_multiprocessing=True, 
callbacks=[checkpoint])\n\u001b[0m\u001b[1;32m 15\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 16\u001b[0m \u001b[0;31m# train for 50 more epochs with validation data\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 991 | "\u001b[0;32m/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py\u001b[0m in \u001b[0;36mwrapper\u001b[0;34m(*args, **kwargs)\u001b[0m\n\u001b[1;32m 89\u001b[0m warnings.warn('Update your `' + object_name + '` call to the ' +\n\u001b[1;32m 90\u001b[0m 'Keras 2 API: ' + signature, stacklevel=2)\n\u001b[0;32m---> 91\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mfunc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 92\u001b[0m \u001b[0mwrapper\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_original_function\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mfunc\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 93\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mwrapper\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 992 | "\u001b[0;32m/usr/local/lib/python3.5/dist-packages/keras/engine/training.py\u001b[0m in \u001b[0;36mfit_generator\u001b[0;34m(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)\u001b[0m\n\u001b[1;32m 1416\u001b[0m \u001b[0muse_multiprocessing\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0muse_multiprocessing\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1417\u001b[0m \u001b[0mshuffle\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mshuffle\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1418\u001b[0;31m initial_epoch=initial_epoch)\n\u001b[0m\u001b[1;32m 1419\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1420\u001b[0m \u001b[0;34m@\u001b[0m\u001b[0minterfaces\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlegacy_generator_methods_support\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 993 | "\u001b[0;32m/usr/local/lib/python3.5/dist-packages/keras/engine/training_generator.py\u001b[0m in \u001b[0;36mfit_generator\u001b[0;34m(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)\u001b[0m\n\u001b[1;32m 215\u001b[0m outs = model.train_on_batch(x, y,\n\u001b[1;32m 216\u001b[0m \u001b[0msample_weight\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0msample_weight\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 217\u001b[0;31m class_weight=class_weight)\n\u001b[0m\u001b[1;32m 218\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 219\u001b[0m \u001b[0mouts\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mto_list\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mouts\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 994 | "\u001b[0;32m/usr/local/lib/python3.5/dist-packages/keras/engine/training.py\u001b[0m in \u001b[0;36mtrain_on_batch\u001b[0;34m(self, x, y, sample_weight, class_weight)\u001b[0m\n\u001b[1;32m 1215\u001b[0m \u001b[0mins\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mx\u001b[0m \u001b[0;34m+\u001b[0m \u001b[0my\u001b[0m \u001b[0;34m+\u001b[0m \u001b[0msample_weights\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1216\u001b[0m 
\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_make_train_function\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1217\u001b[0;31m \u001b[0moutputs\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtrain_function\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mins\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1218\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0munpack_singleton\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0moutputs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1219\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", 995 | "\u001b[0;32m/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, inputs)\u001b[0m\n\u001b[1;32m 2713\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_legacy_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minputs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2714\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 2715\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minputs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2716\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2717\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mpy_any\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mis_tensor\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mx\u001b[0m \u001b[0;32min\u001b[0m \u001b[0minputs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 996 | "\u001b[0;32m/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py\u001b[0m in \u001b[0;36m_call\u001b[0;34m(self, inputs)\u001b[0m\n\u001b[1;32m 2673\u001b[0m \u001b[0mfetched\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_callable_fn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0marray_vals\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mrun_metadata\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrun_metadata\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2674\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 2675\u001b[0;31m \u001b[0mfetched\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_callable_fn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0marray_vals\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2676\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mfetched\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0moutputs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2677\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", 997 | "\u001b[0;32m/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m 1437\u001b[0m ret = tf_session.TF_SessionRunCallable(\n\u001b[1;32m 1438\u001b[0m 
\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_session\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_session\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_handle\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstatus\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1439\u001b[0;31m run_metadata_ptr)\n\u001b[0m\u001b[1;32m 1440\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mrun_metadata\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1441\u001b[0m \u001b[0mproto_data\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtf_session\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mTF_GetBuffer\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mrun_metadata_ptr\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
998 |       "\u001b[0;31mKeyboardInterrupt\u001b[0m: "
999 |      ]
1000 |     }
1001 |    ],
1002 |    "source": [
1003 |     "from keras.callbacks import ModelCheckpoint\n",
1004 |     "from keras.optimizers import Adam\n",
1005 |     "# Compile the model\n",
1006 |     "model.compile(Adam(lr = 0.00005), loss = triplet_loss)\n",
1007 |     "\n",
1008 |     "# create generators\n",
1009 |     "train_gen = EmbLoader(X_train, y_train, batchSize = 64)\n",
1010 |     "valid_gen = EmbLoader(X_valid, y_valid, batchSize = 64)\n",
1011 |     "all_gen = EmbLoader(convfeats, labels, batchSize = 64)\n",
1012 |     "\n",
1013 |     "checkpoint = ModelCheckpoint('siamese.h5', monitor='val_loss', verbose=1, save_best_only=True, save_weights_only=True)\n",
1014 |     "model.fit_generator(train_gen, steps_per_epoch=len(train_gen), epochs=150, \n",
1015 |     "                    validation_data=valid_gen, validation_steps=len(valid_gen),\n",
1016 |     "                    workers=12, use_multiprocessing=True, callbacks=[checkpoint])\n",
1017 |     "\n",
1018 |     "# reload the best checkpoint, then train for 50 more epochs on the entire dataset (no validation split)\n",
1019 |     "model.load_weights('siamese.h5')\n",
1020 |     "model.fit_generator(all_gen, steps_per_epoch=len(all_gen), epochs=200, initial_epoch = 150,\n",
1021 |     "                    workers=8, use_multiprocessing=True)\n",
1022 |     "model.save('siamese.h5')"
1023 |    ]
1024 |   },
1025 |   {
1026 |    "cell_type": "code",
1027 |    "execution_count": 72,
1028 |    "metadata": {},
1029 |    "outputs": [],
1030 |    "source": [
1031 |     "full_model = load_model('siamese_xception.h5', compile=False)\n",
1032 |     "# Replace the fully-connected (embedding) sub-model of the saved network with the newly trained one\n",
1033 |     "\n",
1034 |     "full_model.layers[3].name = 'conv_model'\n",
1035 |     "model.layers[3].name = 'embedding_model'\n",
1036 |     "\n",
1037 |     "conv_model = full_model.layers[3]\n",
1038 |     "fc_model = model.layers[3]\n",
1039 |     "\n",
1040 |     "inp_a, inp_p, inp_n = full_model.input\n",
1041 |     "conv_a = conv_model(inp_a)\n",
1042 |     "conv_p = conv_model(inp_p)\n",
1043 |     "conv_n = conv_model(inp_n)\n",
1044 |     "out_a = fc_model(conv_a)\n",
1045 |     "out_p = fc_model(conv_p)\n",
1046 |     "out_n = fc_model(conv_n)\n",
1047 |     "\n",
1048 |     "merged_vector = Concatenate(axis=-1, name = 'Concat_3_images')([out_a, out_p, out_n])\n",
1049 |     "final_model = Model(inputs=[inp_a, inp_p, inp_n], outputs=merged_vector)\n",
1050 |     "\n",
1051 |     "final_model.save('siamese_xception.h5', include_optimizer=False)"
1052 |    ]
1053 |   },
1054 |   {
1055 |    "cell_type": "code",
1056 |    "execution_count": null,
1057 |    "metadata": {},
1058 |    "outputs": [],
1059 |    "source": [
1060 |     "full_model = load_model('siamese_xception.h5', compile=False)  # sanity check: the merged model loads back correctly\n"
1061 |    ]
1062 |   },
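  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For inference we only need a single branch, so the conv extractor and the embedding head can be chained into a one-input model (a sketch, assuming the sub-models were renamed and saved as above; notebook 4 does the actual testing):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: single-input embedding pipeline, image -> conv features -> 128-dim signature\n",
    "conv_part = full_model.get_layer('conv_model')      # renamed above before saving\n",
    "emb_part = full_model.get_layer('embedding_model')  # renamed above before saving\n",
    "inp = Input(conv_part.input_shape[1:])\n",
    "inference_model = Model(inp, emb_part(conv_part(inp)))\n",
    "# inference_model.predict(preprocessed_images) now yields one signature per image"
   ]
  }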
"nbformat": 4, 1072 | "nbformat_minor": 2 1073 | } 1074 | -------------------------------------------------------------------------------- /6. Train face attributes model.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [ 8 | { 9 | "name": "stderr", 10 | "output_type": "stream", 11 | "text": [ 12 | "Using TensorFlow backend.\n" 13 | ] 14 | } 15 | ], 16 | "source": [ 17 | "import os\n", 18 | "import pandas as pd\n", 19 | "import matplotlib.pyplot as plt\n", 20 | "%matplotlib inline\n", 21 | "import numpy as np\n", 22 | "\n", 23 | "from keras.applications.xception import Xception, preprocess_input\n", 24 | "from keras.layers import GlobalAveragePooling2D\n", 25 | "from keras.layers import Input, Dense, LeakyReLU\n", 26 | "from keras import backend as K\n", 27 | "from keras.models import Model\n", 28 | "from keras.callbacks import ModelCheckpoint\n", 29 | "from keras.preprocessing.image import ImageDataGenerator\n", 30 | "import tensorflow as tf\n", 31 | "\n", 32 | "PATH = '/workspace/dataset/'\n", 33 | "FACE_DEFAULT_SHAPE = (218, 178)\n", 34 | "BS = 32\n", 35 | "\n", 36 | "celeb_data = pd.read_csv('identity_CelebA.txt', sep=\" \", header=None)\n", 37 | "celeb_data.columns = [\"image\", \"label\"]\n", 38 | "attributes = pd.read_csv(PATH + 'list_attr_celeba.csv')\n", 39 | "attributes = attributes.replace(-1, 0)\n", 40 | "\n", 41 | "# 0 - train, 1 - validation, 2 - test\n", 42 | "train_val_test = pd.read_csv('list_eval_partition.csv', usecols=['partition']).values[:, 0]" 43 | ] 44 | }, 45 | { 46 | "cell_type": "code", 47 | "execution_count": 2, 48 | "metadata": { 49 | "scrolled": true 50 | }, 51 | "outputs": [ 52 | { 53 | "name": "stdout", 54 | "output_type": "stream", 55 | "text": [ 56 | "Found 162770 images.\n", 57 | "Found 19867 images.\n", 58 | "WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n", 59 | "Instructions for updating:\n", 60 | "Colocations handled automatically by placer.\n" 61 | ] 62 | } 63 | ], 64 | "source": [ 65 | "# checkpoint = ModelCheckpoint('attributes.h5', monitor='val_loss', verbose=1, save_best_only=True, save_weights_only=True)\n", 66 | "\n", 67 | "attributes = attributes[['image_id','Attractive','Bald','Male','Bags_Under_Eyes','Narrow_Eyes',\n", 68 | " 'Oval_Face','Pointy_Nose','Receding_Hairline','Young']]\n", 69 | "\n", 70 | "features = attributes.drop(['image_id'], axis=1).columns\n", 71 | "\n", 72 | "df_train = attributes.iloc[train_val_test == 0]\n", 73 | "df_valid = attributes.iloc[train_val_test == 1]\n", 74 | "df_test = attributes.iloc[train_val_test == 2]\n", 75 | "\n", 76 | "# necessary for flow_from_dataframe method\n", 77 | "df_valid = df_valid.reset_index()\n", 78 | "df_test = df_test.reset_index()\n", 79 | "\n", 80 | "datagen = ImageDataGenerator(horizontal_flip=True, preprocessing_function=preprocess_input)\n", 81 | "\n", 82 | "train_gen = datagen.flow_from_dataframe(df_train, directory=PATH+'img_align_celeba', x_col='image_id', \n", 83 | " y_col=features, target_size=FACE_DEFAULT_SHAPE, color_mode='rgb',\n", 84 | " classes=None, class_mode='other', batch_size=BS, shuffle=True)\n", 85 | "valid_gen = datagen.flow_from_dataframe(df_valid, directory=PATH+'img_align_celeba', x_col='image_id', \n", 86 | " y_col=features, 
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Found 162770 images.\n",
      "Found 19867 images.\n",
      "WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Colocations handled automatically by placer.\n"
     ]
    }
   ],
   "source": [
    "# checkpoint = ModelCheckpoint('attributes.h5', monitor='val_loss', verbose=1, save_best_only=True, save_weights_only=True)\n",
    "\n",
    "# Keep only the attributes we want to predict\n",
    "attributes = attributes[['image_id','Attractive','Bald','Male','Bags_Under_Eyes','Narrow_Eyes',\n",
    "                         'Oval_Face','Pointy_Nose','Receding_Hairline','Young']]\n",
    "\n",
    "features = attributes.drop(['image_id'], axis=1).columns\n",
    "\n",
    "df_train = attributes.iloc[train_val_test == 0]\n",
    "df_valid = attributes.iloc[train_val_test == 1]\n",
    "df_test = attributes.iloc[train_val_test == 2]\n",
    "\n",
    "# necessary for the flow_from_dataframe method\n",
    "df_valid = df_valid.reset_index()\n",
    "df_test = df_test.reset_index()\n",
    "\n",
    "datagen = ImageDataGenerator(horizontal_flip=True, preprocessing_function=preprocess_input)\n",
    "\n",
    "train_gen = datagen.flow_from_dataframe(df_train, directory=PATH+'img_align_celeba', x_col='image_id', \n",
    "                                        y_col=features, target_size=FACE_DEFAULT_SHAPE, color_mode='rgb',\n",
    "                                        classes=None, class_mode='other', batch_size=BS, shuffle=True)\n",
    "valid_gen = datagen.flow_from_dataframe(df_valid, directory=PATH+'img_align_celeba', x_col='image_id', \n",
    "                                        y_col=features, target_size=FACE_DEFAULT_SHAPE, color_mode='rgb',\n",
    "                                        classes=None, class_mode='other', batch_size=BS, shuffle=True)\n",
    "\n",
    "\n",
    "xception = Xception(include_top=False, weights=None, input_shape = FACE_DEFAULT_SHAPE + (3,))\n",
    "output = GlobalAveragePooling2D()(xception.output)\n",
    "base_model = Model(xception.input, output, name = 'base_xception')\n",
    "\n",
    "def get_attr_model(conv_feat_size, num_feat):\n",
    "    '''\n",
    "    Takes the output of the conv feature extractor and predicts the face attributes\n",
    "    '''\n",
    "    input = Input((conv_feat_size,), name = 'input')\n",
    "    x = Dense(512)(input)\n",
    "    x = LeakyReLU(alpha=0.1)(x)\n",
    "    x = Dense(128)(x)\n",
    "    x = LeakyReLU(alpha=0.1)(x)\n",
    "    x = Dense(num_feat, activation='sigmoid')(x)\n",
    "    model = Model(input, x, name = 'attr_classification')\n",
    "    return model\n",
    "\n",
    "inp_shape = K.int_shape(base_model.input)[1:]\n",
    "conv_feat_size = K.int_shape(base_model.output)[-1]\n",
    "\n",
    "input = Input( inp_shape )\n",
    "emb_attr = get_attr_model(conv_feat_size, len(features))\n",
    "att_model = Model(input, emb_attr(base_model(input)))\n",
    "\n",
    "att_model.compile(Adam(lr=0.0002), loss = 'binary_crossentropy', metrics=['binary_accuracy'])\n",
    "att_model.fit_generator(train_gen, steps_per_epoch=len(train_gen), epochs=50, initial_epoch = 0,\n",
    "                        validation_data=valid_gen, validation_steps=len(valid_gen), \n",
    "                        use_multiprocessing=True, workers=12)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "att_model.save('attributes.h5')"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Face Recognition and more with KERAS

#### Keras implementation of the paper: [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/abs/1503.03832)

![alt text](https://github.com/Golbstein/keras-face-recognition/blob/master/assets/face_reco.JPG)


* ## Dataset:
  - **[CelebA Dataset](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)**
  - Link for downloading face images: [img_align_celeba](https://drive.google.com/file/d/0B7EVK8r0v71pZjFTYXZWM3FlRnM/view?usp=sharing)

* ## Dependencies
  - **Keras 2 (tensorflow backend)**
  - **open-cv**
  - **tqdm**
  - **pandas**

* ## Model
  - Feature extractor model: [Xception](https://arxiv.org/pdf/1610.02357.pdf)
  - Embedding model: FaceNet

![alt text](https://github.com/Golbstein/keras-face-recognition/blob/master/assets/openface.jpg)

* ## Loss
  **[Triplet Loss](https://towardsdatascience.com/lossless-triplet-loss-7e932f990b24)** with cosine similarity

![alt text](https://github.com/Golbstein/keras-face-recognition/blob/master/assets/obama.png)
![alt text](https://github.com/Golbstein/keras-face-recognition/blob/master/assets/loss.JPG)

Where *f_i* is the embedding vector of image *i*
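
For reference, here is the cosine-similarity variant of the triplet loss as implemented in `losses.py` in this repo (the model L2-normalizes the embeddings, so a plain dot product equals cosine similarity):

```python
from keras import backend as K

def triplet_loss(y_true, y_pred, alpha=0.2, embedding_size=128):
    # y_pred packs the [anchor | positive | negative] embeddings side by side
    a_pred = y_pred[:, :embedding_size]
    p_pred = y_pred[:, embedding_size:2 * embedding_size]
    n_pred = y_pred[:, 2 * embedding_size:]
    positive_distance = 1 - K.sum(a_pred * p_pred, axis=-1)  # cosine distance
    negative_distance = 1 - K.sum(a_pred * n_pred, axis=-1)
    return K.maximum(0.0, positive_distance - negative_distance + alpha)
```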

- [x] Recognize celebrities with the trained FaceNet model
- [x] Find out your Doppelgänger
- [ ] Beauty test
--------------------------------------------------------------------------------
/assets/celeba.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Golbstein/keras-face-recognition/e6de4d2a677d5285664183910af04cda8682a986/assets/celeba.png
--------------------------------------------------------------------------------
/assets/face_reco.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Golbstein/keras-face-recognition/e6de4d2a677d5285664183910af04cda8682a986/assets/face_reco.JPG
--------------------------------------------------------------------------------
/assets/loss.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Golbstein/keras-face-recognition/e6de4d2a677d5285664183910af04cda8682a986/assets/loss.JPG
--------------------------------------------------------------------------------
/assets/obama.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Golbstein/keras-face-recognition/e6de4d2a677d5285664183910af04cda8682a986/assets/obama.png
--------------------------------------------------------------------------------
/assets/openface.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Golbstein/keras-face-recognition/e6de4d2a677d5285664183910af04cda8682a986/assets/openface.jpg
--------------------------------------------------------------------------------
/losses.py:
--------------------------------------------------------------------------------
from keras import backend as K
import tensorflow as tf

def triplet_loss(y_true, y_pred, cosine = True, alpha = 0.2, embedding_size = 128):
    # y_pred concatenates the anchor, positive and negative embeddings
    ind = int(embedding_size * 2)
    a_pred = y_pred[:, :embedding_size]
    p_pred = y_pred[:, embedding_size:ind]
    n_pred = y_pred[:, ind:]
    if cosine:
        # embeddings are L2-normalized by the model, so the dot product is the cosine similarity
        positive_distance = 1 - K.sum((a_pred * p_pred), axis=-1)
        negative_distance = 1 - K.sum((a_pred * n_pred), axis=-1)
    else:
        positive_distance = K.sqrt(K.sum(K.square(a_pred - p_pred), axis=-1))
        negative_distance = K.sqrt(K.sum(K.square(a_pred - n_pred), axis=-1))
    loss = K.maximum(0.0, positive_distance - negative_distance + alpha)
    return loss

def attribute_crossentropy(y_true, y_pred):
    # expects a module-level `labels` tensor (per-identity attribute vectors)
    # to be defined by the importing code before this loss is used
    y_true = tf.gather(labels, tf.to_int32(y_true[0]))
    return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)
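
# Illustrative usage of triplet_loss (not part of the original file): with toy
# 2-D unit vectors where the anchor matches the positive exactly and is
# orthogonal to the negative, the hinge is fully satisfied and the loss is 0:
#
#   import numpy as np
#   a = p = np.array([[1.0, 0.0]]); n = np.array([[0.0, 1.0]])
#   y_pred = K.constant(np.concatenate([a, p, n], axis=-1))
#   print(K.eval(triplet_loss(None, y_pred, embedding_size=2)))  # -> [0.]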
--------------------------------------------------------------------------------
/metrics.py:
--------------------------------------------------------------------------------
import tensorflow as tf
from tensorflow.keras import backend

def TripletLossAccuracy(top_n=1):

    # y_true: (1D array) - identity labels, e.g. [1, 1, 3, 4, 5, 5]
    # y_pred: (2D array) - the embedding of each image
    # Returns the fraction of images in the batch whose top-n nearest neighbours
    # (by embedding distance) include an image of the same identity.
    # Note: identities with no positive example in the batch (id=3 and id=4 above)
    # are excluded from the computation rather than counted as errors.

    def triplet_loss_accuracy(y_true, y_pred):
        lshape = tf.shape(y_true)
        batch_size = lshape[0]
        y_true = tf.reshape(y_true, [batch_size, 1])
        adjacency = tf.math.equal(y_true, tf.transpose(y_true))
        adjacency = tf.cast(adjacency, dtype=tf.float32)

        # pairwise Euclidean distances between all embeddings in the batch
        pdist_matrix = backend.sqrt(backend.sum((y_pred - y_pred[:, None]) ** 2, axis=-1))
        total_positive = tf.math.count_nonzero(backend.sum(adjacency, axis=-1) - 1, dtype=tf.float32)
        # mask out self-distances (zeros) before ranking the neighbours
        top_n_matches = tf.argsort(tf.where(pdist_matrix > 0, pdist_matrix, tf.ones_like(pdist_matrix) * float('inf')))[:, :top_n]
        y = tf.range(batch_size)
        tiled_y = tf.tile(y[:, None], [1, top_n])
        indices = tf.reshape(tf.transpose(tf.stack([tiled_y, top_n_matches])), (-1, 2))
        tensor_best_n = tf.zeros(shape=(batch_size, batch_size), dtype=tf.float32)
        # use tf.shape instead of len() so this also works on symbolic tensors in graph mode
        tensor_best_n = tf.tensor_scatter_nd_update(tensor_best_n, indices, tf.ones(tf.shape(indices)[0], dtype=tf.float32))

        correct_predictions = tf.math.count_nonzero(backend.sum(adjacency * tensor_best_n, axis=-1), dtype=tf.float32)
        return correct_predictions / (total_positive + backend.epsilon())

    return triplet_loss_accuracy
--------------------------------------------------------------------------------
/test_imgs/jenia.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Golbstein/keras-face-recognition/e6de4d2a677d5285664183910af04cda8682a986/test_imgs/jenia.jpg
--------------------------------------------------------------------------------
/test_imgs/mark_z.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Golbstein/keras-face-recognition/e6de4d2a677d5285664183910af04cda8682a986/test_imgs/mark_z.JPG
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
import pandas as pd

def get_faces_df(PATH='./'):
    ''' Returns the train/validation/test DataFrames plus the raw partition array '''

    celeb_data = pd.read_csv(PATH + 'identity_CelebA.txt', sep=" ", header=None)
    celeb_data.columns = ["image", "label"]

    # 0 - train, 1 - validation, 2 - test
    train_val_test = pd.read_csv(PATH + 'list_eval_partition.csv', usecols=['partition']).values[:, 0]

    df_train = celeb_data.iloc[train_val_test == 0]
    df_valid = celeb_data.iloc[train_val_test == 1]
    df_test = celeb_data.iloc[train_val_test == 2]

    print('Train images:', len(df_train))
    print('Validation images:', len(df_valid))
    print('Test images:', len(df_test))

    return df_train, df_valid, df_test, train_val_test
--------------------------------------------------------------------------------
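
A quick usage sketch for `utils.get_faces_df` (illustrative; assumes `identity_CelebA.txt` and `list_eval_partition.csv` sit next to the script, as in this repo's root):

```python
from utils import get_faces_df

# prints the split sizes and returns the three DataFrames plus the raw partition array
df_train, df_valid, df_test, partition = get_faces_df('./')
```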