├── CODE_OF_CONDUCT.md ├── Facial_Expression_Detection.ipynb ├── LICENSE ├── README.md ├── face_crop.py ├── haarcascade_frontalface_alt.xml ├── label.py ├── label_image.py ├── retrain.py └── translation ├── README.Hindi.md └── README.Japanese.md /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as 6 | contributors and maintainers pledge to making participation in our project and 7 | our community a harassment-free experience for everyone, regardless of age, body 8 | size, disability, ethnicity, gender identity and expression, level of experience, 9 | education, socio-economic status, nationality, personal appearance, race, 10 | religion, or sexual identity and orientation. 11 | 12 | ## Our Standards 13 | 14 | Examples of behavior that contributes to creating a positive environment 15 | include: 16 | 17 | * Using welcoming and inclusive language 18 | * Being respectful of differing viewpoints and experiences 19 | * Gracefully accepting constructive criticism 20 | * Focusing on what is best for the community 21 | * Showing empathy towards other community members 22 | 23 | Examples of unacceptable behavior by participants include: 24 | 25 | * The use of sexualized language or imagery and unwelcome sexual attention or 26 | advances 27 | * Trolling, insulting/derogatory comments, and personal or political attacks 28 | * Public or private harassment 29 | * Publishing others' private information, such as a physical or electronic 30 | address, without explicit permission 31 | * Other conduct which could reasonably be considered inappropriate in a 32 | professional setting 33 | 34 | ## Our Responsibilities 35 | 36 | Project maintainers are responsible for clarifying the standards of acceptable 37 | behavior and are expected to take appropriate and fair corrective action in 38 | response to any instances of unacceptable behavior. 39 | 40 | Project maintainers have the right and responsibility to remove, edit, or 41 | reject comments, commits, code, wiki edits, issues, and other contributions 42 | that are not aligned to this Code of Conduct, or to ban temporarily or 43 | permanently any contributor for other behaviors that they deem inappropriate, 44 | threatening, offensive, or harmful. 45 | 46 | ## Scope 47 | 48 | This Code of Conduct applies both within project spaces and in public spaces 49 | when an individual is representing the project or its community. Examples of 50 | representing a project or community include using an official project e-mail 51 | address, posting via an official social media account, or acting as an appointed 52 | representative at an online or offline event. Representation of a project may be 53 | further defined and clarified by project maintainers. 54 | 55 | ## Enforcement 56 | 57 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 58 | reported by contacting the project team at {{ email }}. All 59 | complaints will be reviewed and investigated and will result in a response that 60 | is deemed necessary and appropriate to the circumstances. The project team is 61 | obligated to maintain confidentiality with regard to the reporter of an incident. 62 | Further details of specific enforcement policies may be posted separately. 
63 | 64 | Project maintainers who do not follow or enforce the Code of Conduct in good 65 | faith may face temporary or permanent repercussions as determined by other 66 | members of the project's leadership. 67 | 68 | ## Attribution 69 | 70 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, 71 | available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html 72 | 73 | [homepage]: https://www.contributor-covenant.org 74 | 75 | -------------------------------------------------------------------------------- /Facial_Expression_Detection.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "provenance": [] 7 | }, 8 | "kernelspec": { 9 | "name": "python3", 10 | "display_name": "Python 3" 11 | }, 12 | "language_info": { 13 | "name": "python" 14 | } 15 | }, 16 | "cells": [ 17 | { 18 | "cell_type": "code", 19 | "execution_count": null, 20 | "metadata": { 21 | "id": "IgAPGzyVkn6-" 22 | }, 23 | "outputs": [], 24 | "source": [ 25 | "## This program first checks whether a face is present in the given image; if one is found, it crops\n", 26 | "## the face region and saves it to the output directory.\n", 27 | "\n", 28 | "## Importing Modules\n", 29 | "import cv2\n", 30 | "import os\n", 31 | "\n", 32 | "\n", 33 | "## Edit these paths to get the desired results.\n", 34 | "\n", 35 | "## Directory of the input images\n", 36 | "directory = \"E:/data/vids/a\"\n", 37 | "\n", 38 | "## Directory where the cropped images will be saved:\n", 39 | "f_directory = \"E:/data/vids/Yawning/\"\n", 40 | "\n", 41 | "\n", 42 | "def facecrop(image):\n", 43 | "    ## Crops the face of a person from any image!\n", 44 | "\n", 45 | "    ## OpenCV XML FILE for Frontal Facial Detection using HAAR CASCADES.\n", 46 | "    facedata = \"haarcascade_frontalface_alt.xml\"\n", 47 | "    cascade = cv2.CascadeClassifier(facedata)\n", 48 | "\n", 49 | "    ## Reading the given Image with OpenCV\n", 50 | "    img = cv2.imread(image)\n", 51 | "\n", 52 | "    try:\n", 53 | "        ## Some downloaded images are of an unsupported type and raise an exception when processed,\n", 54 | "        ## so those are skipped with this try/except block.\n", 55 | "\n", 56 | "        minisize = (img.shape[1],img.shape[0])\n", 57 | "        miniframe = cv2.resize(img, minisize)\n", 58 | "\n", 59 | "        faces = cascade.detectMultiScale(miniframe)\n", 60 | "\n", 61 | "        for f in faces:\n", 62 | "            x, y, w, h = [ v for v in f ]\n", 63 | "            cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)\n", 64 | "\n", 65 | "            sub_face = img[y:y+h, x:x+w]\n", 66 | "\n", 67 | "            f_name = image.split('/')\n", 68 | "            f_name = f_name[-1]\n", 69 | "\n", 70 | "            ## Write the cropped face to the output directory.\n", 71 | "            cv2.imwrite(f_directory + f_name, sub_face)\n", 72 | "            print (\"Writing: \" + image)\n", 73 | "\n", 74 | "    except Exception:\n", 75 | "        pass\n", 76 | "\n", 77 | "if __name__ == '__main__':\n", 78 | "    images = os.listdir(directory)\n", 79 | "    i = 0\n", 80 | "\n", 81 | "    for img in images:\n", 82 | "        file = directory + \"/\" + img\n", 83 | "        print (i)\n", 84 | "        facecrop(file)\n", 85 | "        i += 1" 86 | ] 87 | }, 88 | { 89 | "cell_type": "code", 90 | "source": [ 91 | "import label_image\n", 92 | "\n", 93 | "size = 4\n", 94 | "\n", 95 | "\n", 96 | "#Load the xml file\n", 97 | "classifier = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')\n", 98 | "\n", 99 | "webcam = cv2.VideoCapture(0) #Using default WebCam connected to PC.\n", 100 | "\n", 101 | 
"while True:\n", 102 | "    (rval, im) = webcam.read()\n", 103 | "    im=cv2.flip(im,1,0) #Flip to act as a mirror\n", 104 | "\n", 105 | "    # Resize the image to speed up detection\n", 106 | "    mini = cv2.resize(im, (int(im.shape[1]/size), int(im.shape[0]/size)))\n", 107 | "\n", 108 | "    # detect MultiScale / faces\n", 109 | "    faces = classifier.detectMultiScale(mini)\n", 110 | "\n", 111 | "    # Draw rectangles around each face\n", 112 | "    for f in faces:\n", 113 | "        (x, y, w, h) = [v * size for v in f] #Scale the shape size back up\n", 114 | "        cv2.rectangle(im, (x,y), (x+w,y+h), (0,255,0), 4)\n", 115 | "\n", 116 | "        #Save just the detected face region to a file\n", 117 | "        sub_face = im[y:y+h, x:x+w]\n", 118 | "\n", 119 | "        FaceFileName = \"test.jpg\" #Saving the current image from the webcam for testing.\n", 120 | "        cv2.imwrite(FaceFileName, sub_face)\n", 121 | "\n", 122 | "        text = label_image.main(FaceFileName) # Getting the classification result from the label_image module.\n", 123 | "        text = text.title() # Title-case the label for display.\n", 124 | "        font = cv2.FONT_HERSHEY_TRIPLEX\n", 125 | "        cv2.putText(im, text,(x+w,y), font, 1, (0,0,255), 2)\n", 126 | "\n", 127 | "    # Show the image\n", 128 | "    cv2.imshow('Capture', im)\n", 129 | "    key = cv2.waitKey(10)\n", 130 | "    # if the Esc key is pressed then break out of the loop\n", 131 | "    if key == 27: #The Esc key\n", 132 | "        break" 133 | ], 134 | "metadata": { 135 | "id": "vEB2kQLYkwWm" 136 | }, 137 | "execution_count": null, 138 | "outputs": [] 139 | }, 140 | { 141 | "cell_type": "code", 142 | "source": [ 143 | "## Credit: Some parts of the program have been taken from the OpenCV documentation\n", 144 | "#importing required libraries\n", 145 | "from __future__ import absolute_import\n", 146 | "from __future__ import division\n", 147 | "from __future__ import print_function\n", 148 | "\n", 149 | "import argparse\n", 150 | "import sys\n", 151 | "import time\n", 152 | "\n", 153 | "import numpy as np\n", 154 | "import tensorflow as tf\n", 155 | "\n", 156 | "#function to load TensorFlow graph from a model file\n", 157 | "def load_graph(model_file):\n", 158 | "  graph = tf.Graph() #creating a tensorflow computation graph\n", 159 | "  graph_def = tf.GraphDef()\n", 160 | "\n", 161 | "  with open(model_file, \"rb\") as f:\n", 162 | "    graph_def.ParseFromString(f.read()) #parsing binary graph definition\n", 163 | "  with graph.as_default(): #setting this graph as default computation graph\n", 164 | "    tf.import_graph_def(graph_def) #importing graph definitions into current graph\n", 165 | "\n", 166 | "  return graph\n", 167 | "\n", 168 | "#function to read and pre-process the image\n", 169 | "def read_tensor_from_image_file(file_name, input_height=299, input_width=299,\n", 170 | "\t\t\t\tinput_mean=0, input_std=255):\n", 171 | "  input_name = \"file_reader\"\n", 172 | "  output_name = \"normalized\"\n", 173 | "  file_reader = tf.read_file(file_name, input_name)\n", 174 | "  if file_name.endswith(\".png\"): # if a PNG image, setting the number of color channels to 3\n", 175 | "    image_reader = tf.image.decode_png(file_reader, channels = 3,\n", 176 | "                                       name='png_reader')\n", 177 | "  elif file_name.endswith(\".gif\"): # if a GIF image, removing the singleton dimension\n", 178 | "    image_reader = tf.squeeze(tf.image.decode_gif(file_reader,\n", 179 | "                                                  name='gif_reader'))\n", 180 | "  elif file_name.endswith(\".bmp\"): # if bmp, then decoding a BMP image\n", 181 | "    image_reader = tf.image.decode_bmp(file_reader, name='bmp_reader')\n", 182 | "  else: #default: decoding the image as a 
JPEG with 3 color channels\n", 183 | " image_reader = tf.image.decode_jpeg(file_reader, channels = 3,\n", 184 | " name='jpeg_reader')\n", 185 | " float_caster = tf.cast(image_reader, tf.float32) #converting the image into float32 dtype\n", 186 | " dims_expander = tf.expand_dims(float_caster, 0); #adding batch dimension\n", 187 | " resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width]) #resizing the image\n", 188 | " normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std]) #normalizing the image\n", 189 | " sess = tf.Session()\n", 190 | " result = sess.run(normalized)\n", 191 | "\n", 192 | " return result\n", 193 | "\n", 194 | "#function for loading labels from a file\n", 195 | "def load_labels(label_file):\n", 196 | " label = []\n", 197 | " proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()\n", 198 | " for l in proto_as_ascii_lines:\n", 199 | " label.append(l.rstrip()) #appending labels after stripping newline characters\n", 200 | " return label\n", 201 | "\n", 202 | "#main function for image classification\n", 203 | "def main(img):\n", 204 | " file_name = img\n", 205 | " model_file = \"retrained_graph.pb\"\n", 206 | " label_file = \"retrained_labels.txt\"\n", 207 | " input_height = 224\n", 208 | " input_width = 224\n", 209 | " input_mean = 128\n", 210 | " input_std = 128\n", 211 | " input_layer = \"input\"\n", 212 | " output_layer = \"final_result\"\n", 213 | "\n", 214 | " #parsing command-line arguments\n", 215 | " parser = argparse.ArgumentParser()\n", 216 | " parser.add_argument(\"--image\", help=\"image to be processed\")\n", 217 | " parser.add_argument(\"--graph\", help=\"graph/model to be executed\")\n", 218 | " parser.add_argument(\"--labels\", help=\"name of file containing labels\")\n", 219 | " parser.add_argument(\"--input_height\", type=int, help=\"input height\")\n", 220 | " parser.add_argument(\"--input_width\", type=int, help=\"input width\")\n", 221 | " parser.add_argument(\"--input_mean\", type=int, help=\"input mean\")\n", 222 | " parser.add_argument(\"--input_std\", type=int, help=\"input std\")\n", 223 | " parser.add_argument(\"--input_layer\", help=\"name of input layer\")\n", 224 | " parser.add_argument(\"--output_layer\", help=\"name of output layer\")\n", 225 | " args = parser.parse_args()\n", 226 | "\n", 227 | " #over-riding default values with command line arguments(if provided)\n", 228 | " if args.graph:\n", 229 | " model_file = args.graph\n", 230 | " if args.image:\n", 231 | " file_name = args.image\n", 232 | " if args.labels:\n", 233 | " label_file = args.labels\n", 234 | " if args.input_height:\n", 235 | " input_height = args.input_height\n", 236 | " if args.input_width:\n", 237 | " input_width = args.input_width\n", 238 | " if args.input_mean:\n", 239 | " input_mean = args.input_mean\n", 240 | " if args.input_std:\n", 241 | " input_std = args.input_std\n", 242 | " if args.input_layer:\n", 243 | " input_layer = args.input_layer\n", 244 | " if args.output_layer:\n", 245 | " output_layer = args.output_layer\n", 246 | "\n", 247 | " graph = load_graph(model_file)\n", 248 | " t = read_tensor_from_image_file(file_name, #reading and pre-processing the image input\n", 249 | " input_height=input_height,\n", 250 | " input_width=input_width,\n", 251 | " input_mean=input_mean,\n", 252 | " input_std=input_std)\n", 253 | "\n", 254 | " input_name = \"import/\" + input_layer\n", 255 | " output_name = \"import/\" + output_layer\n", 256 | " input_operation = graph.get_operation_by_name(input_name); # obtaining references to the 
input and output operations within the graph\n", 257 | "  output_operation = graph.get_operation_by_name(output_name)\n", 258 | "\n", 259 | "  #running the image through the model\n", 260 | "  with tf.Session(graph=graph) as sess:\n", 261 | "    start = time.time() #starting the timer\n", 262 | "    results = sess.run(output_operation.outputs[0],\n", 263 | "                      {input_operation.outputs[0]: t})\n", 264 | "    end = time.time() #recording the end time for measuring performance\n", 265 | "  results = np.squeeze(results) #removing dimensions of size 1, making it a 1D Array\n", 266 | "\n", 267 | "  #identifying the top k results\n", 268 | "  top_k = results.argsort()[-5:][::-1]\n", 269 | "  labels = load_labels(label_file)\n", 270 | "\n", 271 | "  for i in top_k:\n", 272 | "    return labels[i] #returning the label with the highest confidence" 273 | ], 274 | "metadata": { 275 | "id": "VqlQRJ0dkwT0" 276 | }, 277 | "execution_count": null, 278 | "outputs": [] 279 | }, 280 | { 281 | "cell_type": "code", 282 | "source": [ 283 | "# Copyright 2015 The TensorFlow Authors. All Rights Reserved.\n", 284 | "#\n", 285 | "# Acknowledgement: This is a work by the TensorFlow authors at Google.\n", 286 | "\n", 287 | "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", 288 | "# you may not use this file except in compliance with the License.\n", 289 | "# You may obtain a copy of the License at\n", 290 | "#\n", 291 | "# http://www.apache.org/licenses/LICENSE-2.0\n", 292 | "#\n", 293 | "# Unless required by applicable law or agreed to in writing, software\n", 294 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 295 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 296 | "# See the License for the specific language governing permissions and\n", 297 | "# limitations under the License.\n", 298 | "# ==============================================================================\n", 299 | "r\"\"\"Simple transfer learning with Inception v3 or Mobilenet models.\n", 300 | "\n", 301 | "With support for TensorBoard.\n", 302 | "\n", 303 | "This example shows how to take an Inception v3 or Mobilenet model trained on\n", 304 | "ImageNet images, and train a new top layer that can recognize other classes of\n", 305 | "images.\n", 306 | "\n", 307 | "The top layer receives as input a 2048-dimensional vector (1001-dimensional for\n", 308 | "Mobilenet) for each image. We train a softmax layer on top of this\n", 309 | "representation. Assuming the softmax layer contains N labels, this corresponds\n", 310 | "to learning N + 2048*N (or 1001*N) model parameters corresponding to the\n", 311 | "learned biases and weights.\n", 312 | "\n", 313 | "Here's an example, which assumes you have a folder containing class-named\n", 314 | "subfolders, each full of images for each label. The example folder flower_photos\n", 315 | "should have a structure like this:\n", 316 | "\n", 317 | "~/flower_photos/daisy/photo1.jpg\n", 318 | "~/flower_photos/daisy/photo2.jpg\n", 319 | "...\n", 320 | "~/flower_photos/rose/anotherphoto77.jpg\n", 321 | "...\n", 322 | "~/flower_photos/sunflower/somepicture.jpg\n", 323 | "\n", 324 | "The subfolder names are important, since they define what label is applied to\n", 325 | "each image, but the filenames themselves don't matter. 
Once your images are\n", 326 | "prepared, you can run the training with a command like this:\n", 327 | "\n", 328 | "\n", 329 | "```bash\n", 330 | "bazel build tensorflow/examples/image_retraining:retrain && \\\n", 331 | "bazel-bin/tensorflow/examples/image_retraining/retrain \\\n", 332 | " --image_dir ~/flower_photos\n", 333 | "```\n", 334 | "\n", 335 | "Or, if you have a pip installation of tensorflow, `retrain.py` can be run\n", 336 | "without bazel:\n", 337 | "\n", 338 | "```bash\n", 339 | "python tensorflow/examples/image_retraining/retrain.py \\\n", 340 | " --image_dir ~/flower_photos\n", 341 | "```\n", 342 | "\n", 343 | "You can replace the image_dir argument with any folder containing subfolders of\n", 344 | "images. The label for each image is taken from the name of the subfolder it's\n", 345 | "in.\n", 346 | "\n", 347 | "This produces a new model file that can be loaded and run by any TensorFlow\n", 348 | "program, for example the label_image sample code.\n", 349 | "\n", 350 | "By default this script will use the high accuracy, but comparatively large and\n", 351 | "slow Inception v3 model architecture. It's recommended that you start with this\n", 352 | "to validate that you have gathered good training data, but if you want to deploy\n", 353 | "on resource-limited platforms, you can try the `--architecture` flag with a\n", 354 | "Mobilenet model. For example:\n", 355 | "\n", 356 | "```bash\n", 357 | "python tensorflow/examples/image_retraining/retrain.py \\\n", 358 | " --image_dir ~/flower_photos --architecture mobilenet_1.0_224\n", 359 | "```\n", 360 | "\n", 361 | "There are 32 different Mobilenet models to choose from, with a variety of file\n", 362 | "size and latency options. The first number can be '1.0', '0.75', '0.50', or\n", 363 | "'0.25' to control the size, and the second controls the input image size, either\n", 364 | "'224', '192', '160', or '128', with smaller sizes running faster. See\n", 365 | "https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html\n", 366 | "for more information on Mobilenet.\n", 367 | "\n", 368 | "To use with TensorBoard:\n", 369 | "\n", 370 | "By default, this script will log summaries to /tmp/retrain_logs directory\n", 371 | "\n", 372 | "Visualize the summaries with this command:\n", 373 | "\n", 374 | "tensorboard --logdir /tmp/retrain_logs\n", 375 | "\n", 376 | "\"\"\"\n", 377 | "from __future__ import absolute_import\n", 378 | "from __future__ import division\n", 379 | "from __future__ import print_function\n", 380 | "\n", 381 | "import argparse\n", 382 | "import collections\n", 383 | "from datetime import datetime\n", 384 | "import hashlib\n", 385 | "import os.path\n", 386 | "import random\n", 387 | "import re\n", 388 | "import sys\n", 389 | "import tarfile\n", 390 | "\n", 391 | "import numpy as np\n", 392 | "from six.moves import urllib\n", 393 | "import tensorflow as tf\n", 394 | "\n", 395 | "from tensorflow.python.framework import graph_util\n", 396 | "from tensorflow.python.framework import tensor_shape\n", 397 | "from tensorflow.python.platform import gfile\n", 398 | "from tensorflow.python.util import compat\n", 399 | "\n", 400 | "FLAGS = None\n", 401 | "\n", 402 | "# These are all parameters that are tied to the particular model architecture\n", 403 | "# we're using for Inception v3. These include things like tensor names and their\n", 404 | "# sizes. 
If you want to adapt this script to work with another model, you will\n", 405 | "# need to update these to reflect the values in the network you're using.\n", 406 | "MAX_NUM_IMAGES_PER_CLASS = 2 ** 27 - 1 # ~134M\n", 407 | "\n", 408 | "\n", 409 | "def create_image_lists(image_dir, testing_percentage, validation_percentage):\n", 410 | " \"\"\"Builds a list of training images from the file system.\n", 411 | "\n", 412 | " Analyzes the sub folders in the image directory, splits them into stable\n", 413 | " training, testing, and validation sets, and returns a data structure\n", 414 | " describing the lists of images for each label and their paths.\n", 415 | "\n", 416 | " Args:\n", 417 | " image_dir: String path to a folder containing subfolders of images.\n", 418 | " testing_percentage: Integer percentage of the images to reserve for tests.\n", 419 | " validation_percentage: Integer percentage of images reserved for validation.\n", 420 | "\n", 421 | " Returns:\n", 422 | " A dictionary containing an entry for each label subfolder, with images split\n", 423 | " into training, testing, and validation sets within each label.\n", 424 | " \"\"\"\n", 425 | " if not gfile.Exists(image_dir):\n", 426 | " tf.logging.error(\"Image directory '\" + image_dir + \"' not found.\")\n", 427 | " return None\n", 428 | " result = collections.OrderedDict()\n", 429 | " sub_dirs = [\n", 430 | " os.path.join(image_dir,item)\n", 431 | " for item in gfile.ListDirectory(image_dir)]\n", 432 | " sub_dirs = sorted(item for item in sub_dirs\n", 433 | " if gfile.IsDirectory(item))\n", 434 | " for sub_dir in sub_dirs:\n", 435 | " extensions = ['jpg', 'jpeg', 'JPG', 'JPEG']\n", 436 | " file_list = []\n", 437 | " dir_name = os.path.basename(sub_dir)\n", 438 | " if dir_name == image_dir:\n", 439 | " continue\n", 440 | " tf.logging.info(\"Looking for images in '\" + dir_name + \"'\")\n", 441 | " for extension in extensions:\n", 442 | " file_glob = os.path.join(image_dir, dir_name, '*.' + extension)\n", 443 | " file_list.extend(gfile.Glob(file_glob))\n", 444 | " if not file_list:\n", 445 | " tf.logging.warning('No files found')\n", 446 | " continue\n", 447 | " if len(file_list) < 20:\n", 448 | " tf.logging.warning(\n", 449 | " 'WARNING: Folder has less than 20 images, which may cause issues.')\n", 450 | " elif len(file_list) > MAX_NUM_IMAGES_PER_CLASS:\n", 451 | " tf.logging.warning(\n", 452 | " 'WARNING: Folder {} has more than {} images. Some images will '\n", 453 | " 'never be selected.'.format(dir_name, MAX_NUM_IMAGES_PER_CLASS))\n", 454 | " label_name = re.sub(r'[^a-z0-9]+', ' ', dir_name.lower())\n", 455 | " training_images = []\n", 456 | " testing_images = []\n", 457 | " validation_images = []\n", 458 | " for file_name in file_list:\n", 459 | " base_name = os.path.basename(file_name)\n", 460 | " # We want to ignore anything after '_nohash_' in the file name when\n", 461 | " # deciding which set to put an image in, the data set creator has a way of\n", 462 | " # grouping photos that are close variations of each other. 
For example\n", 463 | " # this is used in the plant disease data set to group multiple pictures of\n", 464 | " # the same leaf.\n", 465 | " hash_name = re.sub(r'_nohash_.*$', '', file_name)\n", 466 | " # This looks a bit magical, but we need to decide whether this file should\n", 467 | " # go into the training, testing, or validation sets, and we want to keep\n", 468 | " # existing files in the same set even if more files are subsequently\n", 469 | " # added.\n", 470 | " # To do that, we need a stable way of deciding based on just the file name\n", 471 | " # itself, so we do a hash of that and then use that to generate a\n", 472 | " # probability value that we use to assign it.\n", 473 | " hash_name_hashed = hashlib.sha1(compat.as_bytes(hash_name)).hexdigest()\n", 474 | " percentage_hash = ((int(hash_name_hashed, 16) %\n", 475 | " (MAX_NUM_IMAGES_PER_CLASS + 1)) *\n", 476 | " (100.0 / MAX_NUM_IMAGES_PER_CLASS))\n", 477 | " if percentage_hash < validation_percentage:\n", 478 | " validation_images.append(base_name)\n", 479 | " elif percentage_hash < (testing_percentage + validation_percentage):\n", 480 | " testing_images.append(base_name)\n", 481 | " else:\n", 482 | " training_images.append(base_name)\n", 483 | " result[label_name] = {\n", 484 | " 'dir': dir_name,\n", 485 | " 'training': training_images,\n", 486 | " 'testing': testing_images,\n", 487 | " 'validation': validation_images,\n", 488 | " }\n", 489 | " return result\n", 490 | "\n", 491 | "\n", 492 | "def get_image_path(image_lists, label_name, index, image_dir, category):\n", 493 | " \"\"\"\"Returns a path to an image for a label at the given index.\n", 494 | "\n", 495 | " Args:\n", 496 | " image_lists: Dictionary of training images for each label.\n", 497 | " label_name: Label string we want to get an image for.\n", 498 | " index: Int offset of the image we want. This will be moduloed by the\n", 499 | " available number of images for the label, so it can be arbitrarily large.\n", 500 | " image_dir: Root folder string of the subfolders containing the training\n", 501 | " images.\n", 502 | " category: Name string of set to pull images from - training, testing, or\n", 503 | " validation.\n", 504 | "\n", 505 | " Returns:\n", 506 | " File system path string to an image that meets the requested parameters.\n", 507 | "\n", 508 | " \"\"\"\n", 509 | " if label_name not in image_lists:\n", 510 | " tf.logging.fatal('Label does not exist %s.', label_name)\n", 511 | " label_lists = image_lists[label_name]\n", 512 | " if category not in label_lists:\n", 513 | " tf.logging.fatal('Category does not exist %s.', category)\n", 514 | " category_list = label_lists[category]\n", 515 | " if not category_list:\n", 516 | " tf.logging.fatal('Label %s has no images in the category %s.',\n", 517 | " label_name, category)\n", 518 | " mod_index = index % len(category_list)\n", 519 | " base_name = category_list[mod_index]\n", 520 | " sub_dir = label_lists['dir']\n", 521 | " full_path = os.path.join(image_dir, sub_dir, base_name)\n", 522 | " return full_path\n", 523 | "\n", 524 | "\n", 525 | "def get_bottleneck_path(image_lists, label_name, index, bottleneck_dir,\n", 526 | " category, architecture):\n", 527 | " \"\"\"\"Returns a path to a bottleneck file for a label at the given index.\n", 528 | "\n", 529 | " Args:\n", 530 | " image_lists: Dictionary of training images for each label.\n", 531 | " label_name: Label string we want to get an image for.\n", 532 | " index: Integer offset of the image we want. 
This will be moduloed by the\n", 533 | " available number of images for the label, so it can be arbitrarily large.\n", 534 | " bottleneck_dir: Folder string holding cached files of bottleneck values.\n", 535 | " category: Name string of set to pull images from - training, testing, or\n", 536 | " validation.\n", 537 | " architecture: The name of the model architecture.\n", 538 | "\n", 539 | " Returns:\n", 540 | " File system path string to an image that meets the requested parameters.\n", 541 | " \"\"\"\n", 542 | " return get_image_path(image_lists, label_name, index, bottleneck_dir,\n", 543 | " category) + '_' + architecture + '.txt'\n", 544 | "\n", 545 | "\n", 546 | "def create_model_graph(model_info):\n", 547 | " \"\"\"\"Creates a graph from saved GraphDef file and returns a Graph object.\n", 548 | "\n", 549 | " Args:\n", 550 | " model_info: Dictionary containing information about the model architecture.\n", 551 | "\n", 552 | " Returns:\n", 553 | " Graph holding the trained Inception network, and various tensors we'll be\n", 554 | " manipulating.\n", 555 | " \"\"\"\n", 556 | " with tf.Graph().as_default() as graph:\n", 557 | " model_path = os.path.join(FLAGS.model_dir, model_info['model_file_name'])\n", 558 | " with gfile.FastGFile(model_path, 'rb') as f:\n", 559 | " graph_def = tf.GraphDef()\n", 560 | " graph_def.ParseFromString(f.read())\n", 561 | " bottleneck_tensor, resized_input_tensor = (tf.import_graph_def(\n", 562 | " graph_def,\n", 563 | " name='',\n", 564 | " return_elements=[\n", 565 | " model_info['bottleneck_tensor_name'],\n", 566 | " model_info['resized_input_tensor_name'],\n", 567 | " ]))\n", 568 | " return graph, bottleneck_tensor, resized_input_tensor\n", 569 | "\n", 570 | "\n", 571 | "def run_bottleneck_on_image(sess, image_data, image_data_tensor,\n", 572 | " decoded_image_tensor, resized_input_tensor,\n", 573 | " bottleneck_tensor):\n", 574 | " \"\"\"Runs inference on an image to extract the 'bottleneck' summary layer.\n", 575 | "\n", 576 | " Args:\n", 577 | " sess: Current active TensorFlow Session.\n", 578 | " image_data: String of raw JPEG data.\n", 579 | " image_data_tensor: Input data layer in the graph.\n", 580 | " decoded_image_tensor: Output of initial image resizing and preprocessing.\n", 581 | " resized_input_tensor: The input node of the recognition graph.\n", 582 | " bottleneck_tensor: Layer before the final softmax.\n", 583 | "\n", 584 | " Returns:\n", 585 | " Numpy array of bottleneck values.\n", 586 | " \"\"\"\n", 587 | " # First decode the JPEG image, resize it, and rescale the pixel values.\n", 588 | " resized_input_values = sess.run(decoded_image_tensor,\n", 589 | " {image_data_tensor: image_data})\n", 590 | " # Then run it through the recognition network.\n", 591 | " bottleneck_values = sess.run(bottleneck_tensor,\n", 592 | " {resized_input_tensor: resized_input_values})\n", 593 | " bottleneck_values = np.squeeze(bottleneck_values)\n", 594 | " return bottleneck_values\n", 595 | "\n", 596 | "\n", 597 | "def maybe_download_and_extract(data_url):\n", 598 | " \"\"\"Download and extract model tar file.\n", 599 | "\n", 600 | " If the pretrained model we're using doesn't already exist, this function\n", 601 | " downloads it from the TensorFlow.org website and unpacks it into a directory.\n", 602 | "\n", 603 | " Args:\n", 604 | " data_url: Web location of the tar file containing the pretrained model.\n", 605 | " \"\"\"\n", 606 | " dest_directory = FLAGS.model_dir\n", 607 | " if not os.path.exists(dest_directory):\n", 608 | " os.makedirs(dest_directory)\n", 
609 | " filename = data_url.split('/')[-1]\n", 610 | " filepath = os.path.join(dest_directory, filename)\n", 611 | " if not os.path.exists(filepath):\n", 612 | "\n", 613 | " def _progress(count, block_size, total_size):\n", 614 | " sys.stdout.write('\\r>> Downloading %s %.1f%%' %\n", 615 | " (filename,\n", 616 | " float(count * block_size) / float(total_size) * 100.0))\n", 617 | " sys.stdout.flush()\n", 618 | "\n", 619 | " filepath, _ = urllib.request.urlretrieve(data_url, filepath, _progress)\n", 620 | " print()\n", 621 | " statinfo = os.stat(filepath)\n", 622 | " tf.logging.info('Successfully downloaded', filename, statinfo.st_size,\n", 623 | " 'bytes.')\n", 624 | " tarfile.open(filepath, 'r:gz').extractall(dest_directory)\n", 625 | "\n", 626 | "\n", 627 | "def ensure_dir_exists(dir_name):\n", 628 | " \"\"\"Makes sure the folder exists on disk.\n", 629 | "\n", 630 | " Args:\n", 631 | " dir_name: Path string to the folder we want to create.\n", 632 | " \"\"\"\n", 633 | " if not os.path.exists(dir_name):\n", 634 | " os.makedirs(dir_name)\n", 635 | "\n", 636 | "\n", 637 | "bottleneck_path_2_bottleneck_values = {}\n", 638 | "\n", 639 | "\n", 640 | "def create_bottleneck_file(bottleneck_path, image_lists, label_name, index,\n", 641 | " image_dir, category, sess, jpeg_data_tensor,\n", 642 | " decoded_image_tensor, resized_input_tensor,\n", 643 | " bottleneck_tensor):\n", 644 | " \"\"\"Create a single bottleneck file.\"\"\"\n", 645 | " tf.logging.info('Creating bottleneck at ' + bottleneck_path)\n", 646 | " image_path = get_image_path(image_lists, label_name, index,\n", 647 | " image_dir, category)\n", 648 | " if not gfile.Exists(image_path):\n", 649 | " tf.logging.fatal('File does not exist %s', image_path)\n", 650 | " image_data = gfile.FastGFile(image_path, 'rb').read()\n", 651 | " try:\n", 652 | " bottleneck_values = run_bottleneck_on_image(\n", 653 | " sess, image_data, jpeg_data_tensor, decoded_image_tensor,\n", 654 | " resized_input_tensor, bottleneck_tensor)\n", 655 | " except Exception as e:\n", 656 | " raise RuntimeError('Error during processing file %s (%s)' % (image_path,\n", 657 | " str(e)))\n", 658 | " bottleneck_string = ','.join(str(x) for x in bottleneck_values)\n", 659 | " with open(bottleneck_path, 'w') as bottleneck_file:\n", 660 | " bottleneck_file.write(bottleneck_string)\n", 661 | "\n", 662 | "\n", 663 | "def get_or_create_bottleneck(sess, image_lists, label_name, index, image_dir,\n", 664 | " category, bottleneck_dir, jpeg_data_tensor,\n", 665 | " decoded_image_tensor, resized_input_tensor,\n", 666 | " bottleneck_tensor, architecture):\n", 667 | " \"\"\"Retrieves or calculates bottleneck values for an image.\n", 668 | "\n", 669 | " If a cached version of the bottleneck data exists on-disk, return that,\n", 670 | " otherwise calculate the data and save it to disk for future use.\n", 671 | "\n", 672 | " Args:\n", 673 | " sess: The current active TensorFlow Session.\n", 674 | " image_lists: Dictionary of training images for each label.\n", 675 | " label_name: Label string we want to get an image for.\n", 676 | " index: Integer offset of the image we want. 
This will be modulo-ed by the\n", 677 | " available number of images for the label, so it can be arbitrarily large.\n", 678 | " image_dir: Root folder string of the subfolders containing the training\n", 679 | " images.\n", 680 | " category: Name string of which set to pull images from - training, testing,\n", 681 | " or validation.\n", 682 | " bottleneck_dir: Folder string holding cached files of bottleneck values.\n", 683 | " jpeg_data_tensor: The tensor to feed loaded jpeg data into.\n", 684 | " decoded_image_tensor: The output of decoding and resizing the image.\n", 685 | " resized_input_tensor: The input node of the recognition graph.\n", 686 | " bottleneck_tensor: The output tensor for the bottleneck values.\n", 687 | " architecture: The name of the model architecture.\n", 688 | "\n", 689 | " Returns:\n", 690 | " Numpy array of values produced by the bottleneck layer for the image.\n", 691 | " \"\"\"\n", 692 | " label_lists = image_lists[label_name]\n", 693 | " sub_dir = label_lists['dir']\n", 694 | " sub_dir_path = os.path.join(bottleneck_dir, sub_dir)\n", 695 | " ensure_dir_exists(sub_dir_path)\n", 696 | " bottleneck_path = get_bottleneck_path(image_lists, label_name, index,\n", 697 | " bottleneck_dir, category, architecture)\n", 698 | " if not os.path.exists(bottleneck_path):\n", 699 | " create_bottleneck_file(bottleneck_path, image_lists, label_name, index,\n", 700 | " image_dir, category, sess, jpeg_data_tensor,\n", 701 | " decoded_image_tensor, resized_input_tensor,\n", 702 | " bottleneck_tensor)\n", 703 | " with open(bottleneck_path, 'r') as bottleneck_file:\n", 704 | " bottleneck_string = bottleneck_file.read()\n", 705 | " did_hit_error = False\n", 706 | " try:\n", 707 | " bottleneck_values = [float(x) for x in bottleneck_string.split(',')]\n", 708 | " except ValueError:\n", 709 | " tf.logging.warning('Invalid float found, recreating bottleneck')\n", 710 | " did_hit_error = True\n", 711 | " if did_hit_error:\n", 712 | " create_bottleneck_file(bottleneck_path, image_lists, label_name, index,\n", 713 | " image_dir, category, sess, jpeg_data_tensor,\n", 714 | " decoded_image_tensor, resized_input_tensor,\n", 715 | " bottleneck_tensor)\n", 716 | " with open(bottleneck_path, 'r') as bottleneck_file:\n", 717 | " bottleneck_string = bottleneck_file.read()\n", 718 | " # Allow exceptions to propagate here, since they shouldn't happen after a\n", 719 | " # fresh creation\n", 720 | " bottleneck_values = [float(x) for x in bottleneck_string.split(',')]\n", 721 | " return bottleneck_values\n", 722 | "\n", 723 | "\n", 724 | "def cache_bottlenecks(sess, image_lists, image_dir, bottleneck_dir,\n", 725 | " jpeg_data_tensor, decoded_image_tensor,\n", 726 | " resized_input_tensor, bottleneck_tensor, architecture):\n", 727 | " \"\"\"Ensures all the training, testing, and validation bottlenecks are cached.\n", 728 | "\n", 729 | " Because we're likely to read the same image multiple times (if there are no\n", 730 | " distortions applied during training) it can speed things up a lot if we\n", 731 | " calculate the bottleneck layer values once for each image during\n", 732 | " preprocessing, and then just read those cached values repeatedly during\n", 733 | " training. 
Here we go through all the images we've found, calculate those\n", 734 | " values, and save them off.\n", 735 | "\n", 736 | " Args:\n", 737 | " sess: The current active TensorFlow Session.\n", 738 | " image_lists: Dictionary of training images for each label.\n", 739 | " image_dir: Root folder string of the subfolders containing the training\n", 740 | " images.\n", 741 | " bottleneck_dir: Folder string holding cached files of bottleneck values.\n", 742 | " jpeg_data_tensor: Input tensor for jpeg data from file.\n", 743 | " decoded_image_tensor: The output of decoding and resizing the image.\n", 744 | " resized_input_tensor: The input node of the recognition graph.\n", 745 | " bottleneck_tensor: The penultimate output layer of the graph.\n", 746 | " architecture: The name of the model architecture.\n", 747 | "\n", 748 | " Returns:\n", 749 | " Nothing.\n", 750 | " \"\"\"\n", 751 | " how_many_bottlenecks = 0\n", 752 | " ensure_dir_exists(bottleneck_dir)\n", 753 | " for label_name, label_lists in image_lists.items():\n", 754 | " for category in ['training', 'testing', 'validation']:\n", 755 | " category_list = label_lists[category]\n", 756 | " for index, unused_base_name in enumerate(category_list):\n", 757 | " get_or_create_bottleneck(\n", 758 | " sess, image_lists, label_name, index, image_dir, category,\n", 759 | " bottleneck_dir, jpeg_data_tensor, decoded_image_tensor,\n", 760 | " resized_input_tensor, bottleneck_tensor, architecture)\n", 761 | "\n", 762 | " how_many_bottlenecks += 1\n", 763 | " if how_many_bottlenecks % 100 == 0:\n", 764 | " tf.logging.info(\n", 765 | " str(how_many_bottlenecks) + ' bottleneck files created.')\n", 766 | "\n", 767 | "\n", 768 | "def get_random_cached_bottlenecks(sess, image_lists, how_many, category,\n", 769 | " bottleneck_dir, image_dir, jpeg_data_tensor,\n", 770 | " decoded_image_tensor, resized_input_tensor,\n", 771 | " bottleneck_tensor, architecture):\n", 772 | " \"\"\"Retrieves bottleneck values for cached images.\n", 773 | "\n", 774 | " If no distortions are being applied, this function can retrieve the cached\n", 775 | " bottleneck values directly from disk for images. 
It picks a random set of\n", 776 | " images from the specified category.\n", 777 | "\n", 778 | " Args:\n", 779 | " sess: Current TensorFlow Session.\n", 780 | " image_lists: Dictionary of training images for each label.\n", 781 | " how_many: If positive, a random sample of this size will be chosen.\n", 782 | " If negative, all bottlenecks will be retrieved.\n", 783 | " category: Name string of which set to pull from - training, testing, or\n", 784 | " validation.\n", 785 | " bottleneck_dir: Folder string holding cached files of bottleneck values.\n", 786 | " image_dir: Root folder string of the subfolders containing the training\n", 787 | " images.\n", 788 | " jpeg_data_tensor: The layer to feed jpeg image data into.\n", 789 | " decoded_image_tensor: The output of decoding and resizing the image.\n", 790 | " resized_input_tensor: The input node of the recognition graph.\n", 791 | " bottleneck_tensor: The bottleneck output layer of the CNN graph.\n", 792 | " architecture: The name of the model architecture.\n", 793 | "\n", 794 | " Returns:\n", 795 | " List of bottleneck arrays, their corresponding ground truths, and the\n", 796 | " relevant filenames.\n", 797 | " \"\"\"\n", 798 | " class_count = len(image_lists.keys())\n", 799 | " bottlenecks = []\n", 800 | " ground_truths = []\n", 801 | " filenames = []\n", 802 | " if how_many >= 0:\n", 803 | " # Retrieve a random sample of bottlenecks.\n", 804 | " for unused_i in range(how_many):\n", 805 | " label_index = random.randrange(class_count)\n", 806 | " label_name = list(image_lists.keys())[label_index]\n", 807 | " image_index = random.randrange(MAX_NUM_IMAGES_PER_CLASS + 1)\n", 808 | " image_name = get_image_path(image_lists, label_name, image_index,\n", 809 | " image_dir, category)\n", 810 | " bottleneck = get_or_create_bottleneck(\n", 811 | " sess, image_lists, label_name, image_index, image_dir, category,\n", 812 | " bottleneck_dir, jpeg_data_tensor, decoded_image_tensor,\n", 813 | " resized_input_tensor, bottleneck_tensor, architecture)\n", 814 | " ground_truth = np.zeros(class_count, dtype=np.float32)\n", 815 | " ground_truth[label_index] = 1.0\n", 816 | " bottlenecks.append(bottleneck)\n", 817 | " ground_truths.append(ground_truth)\n", 818 | " filenames.append(image_name)\n", 819 | " else:\n", 820 | " # Retrieve all bottlenecks.\n", 821 | " for label_index, label_name in enumerate(image_lists.keys()):\n", 822 | " for image_index, image_name in enumerate(\n", 823 | " image_lists[label_name][category]):\n", 824 | " image_name = get_image_path(image_lists, label_name, image_index,\n", 825 | " image_dir, category)\n", 826 | " bottleneck = get_or_create_bottleneck(\n", 827 | " sess, image_lists, label_name, image_index, image_dir, category,\n", 828 | " bottleneck_dir, jpeg_data_tensor, decoded_image_tensor,\n", 829 | " resized_input_tensor, bottleneck_tensor, architecture)\n", 830 | " ground_truth = np.zeros(class_count, dtype=np.float32)\n", 831 | " ground_truth[label_index] = 1.0\n", 832 | " bottlenecks.append(bottleneck)\n", 833 | " ground_truths.append(ground_truth)\n", 834 | " filenames.append(image_name)\n", 835 | " return bottlenecks, ground_truths, filenames\n", 836 | "\n", 837 | "\n", 838 | "def get_random_distorted_bottlenecks(\n", 839 | " sess, image_lists, how_many, category, image_dir, input_jpeg_tensor,\n", 840 | " distorted_image, resized_input_tensor, bottleneck_tensor):\n", 841 | " \"\"\"Retrieves bottleneck values for training images, after distortions.\n", 842 | "\n", 843 | " If we're training with distortions like crops, 
scales, or flips, we have to\n", 844 | " recalculate the full model for every image, and so we can't use cached\n", 845 | " bottleneck values. Instead we find random images for the requested category,\n", 846 | " run them through the distortion graph, and then the full graph to get the\n", 847 | " bottleneck results for each.\n", 848 | "\n", 849 | " Args:\n", 850 | " sess: Current TensorFlow Session.\n", 851 | " image_lists: Dictionary of training images for each label.\n", 852 | " how_many: The integer number of bottleneck values to return.\n", 853 | " category: Name string of which set of images to fetch - training, testing,\n", 854 | " or validation.\n", 855 | " image_dir: Root folder string of the subfolders containing the training\n", 856 | " images.\n", 857 | " input_jpeg_tensor: The input layer we feed the image data to.\n", 858 | " distorted_image: The output node of the distortion graph.\n", 859 | " resized_input_tensor: The input node of the recognition graph.\n", 860 | " bottleneck_tensor: The bottleneck output layer of the CNN graph.\n", 861 | "\n", 862 | " Returns:\n", 863 | " List of bottleneck arrays and their corresponding ground truths.\n", 864 | " \"\"\"\n", 865 | " class_count = len(image_lists.keys())\n", 866 | " bottlenecks = []\n", 867 | " ground_truths = []\n", 868 | " for unused_i in range(how_many):\n", 869 | " label_index = random.randrange(class_count)\n", 870 | " label_name = list(image_lists.keys())[label_index]\n", 871 | " image_index = random.randrange(MAX_NUM_IMAGES_PER_CLASS + 1)\n", 872 | " image_path = get_image_path(image_lists, label_name, image_index, image_dir,\n", 873 | " category)\n", 874 | " if not gfile.Exists(image_path):\n", 875 | " tf.logging.fatal('File does not exist %s', image_path)\n", 876 | " jpeg_data = gfile.FastGFile(image_path, 'rb').read()\n", 877 | " # Note that we materialize the distorted_image_data as a numpy array before\n", 878 | " # sending running inference on the image. 
This involves 2 memory copies and\n", 879 | " # might be optimized in other implementations.\n", 880 | " distorted_image_data = sess.run(distorted_image,\n", 881 | " {input_jpeg_tensor: jpeg_data})\n", 882 | " bottleneck_values = sess.run(bottleneck_tensor,\n", 883 | " {resized_input_tensor: distorted_image_data})\n", 884 | " bottleneck_values = np.squeeze(bottleneck_values)\n", 885 | " ground_truth = np.zeros(class_count, dtype=np.float32)\n", 886 | " ground_truth[label_index] = 1.0\n", 887 | " bottlenecks.append(bottleneck_values)\n", 888 | " ground_truths.append(ground_truth)\n", 889 | " return bottlenecks, ground_truths\n", 890 | "\n", 891 | "\n", 892 | "def should_distort_images(flip_left_right, random_crop, random_scale,\n", 893 | " random_brightness):\n", 894 | " \"\"\"Whether any distortions are enabled, from the input flags.\n", 895 | "\n", 896 | " Args:\n", 897 | " flip_left_right: Boolean whether to randomly mirror images horizontally.\n", 898 | " random_crop: Integer percentage setting the total margin used around the\n", 899 | " crop box.\n", 900 | " random_scale: Integer percentage of how much to vary the scale by.\n", 901 | " random_brightness: Integer range to randomly multiply the pixel values by.\n", 902 | "\n", 903 | " Returns:\n", 904 | " Boolean value indicating whether any distortions should be applied.\n", 905 | " \"\"\"\n", 906 | " return (flip_left_right or (random_crop != 0) or (random_scale != 0) or\n", 907 | " (random_brightness != 0))\n", 908 | "\n", 909 | "\n", 910 | "def add_input_distortions(flip_left_right, random_crop, random_scale,\n", 911 | " random_brightness, input_width, input_height,\n", 912 | " input_depth, input_mean, input_std):\n", 913 | " \"\"\"Creates the operations to apply the specified distortions.\n", 914 | "\n", 915 | " During training it can help to improve the results if we run the images\n", 916 | " through simple distortions like crops, scales, and flips. These reflect the\n", 917 | " kind of variations we expect in the real world, and so can help train the\n", 918 | " model to cope with natural data more effectively. Here we take the supplied\n", 919 | " parameters and construct a network of operations to apply them to an image.\n", 920 | "\n", 921 | " Cropping\n", 922 | " ~~~~~~~~\n", 923 | "\n", 924 | " Cropping is done by placing a bounding box at a random position in the full\n", 925 | " image. The cropping parameter controls the size of that box relative to the\n", 926 | " input image. If it's zero, then the box is the same size as the input and no\n", 927 | " cropping is performed. If the value is 50%, then the crop box will be half the\n", 928 | " width and height of the input. In a diagram it looks like this:\n", 929 | "\n", 930 | " < width >\n", 931 | " +---------------------+\n", 932 | " | |\n", 933 | " | width - crop% |\n", 934 | " | < > |\n", 935 | " | +------+ |\n", 936 | " | | | |\n", 937 | " | | | |\n", 938 | " | | | |\n", 939 | " | +------+ |\n", 940 | " | |\n", 941 | " | |\n", 942 | " +---------------------+\n", 943 | "\n", 944 | " Scaling\n", 945 | " ~~~~~~~\n", 946 | "\n", 947 | " Scaling is a lot like cropping, except that the bounding box is always\n", 948 | " centered and its size varies randomly within the given range. For example if\n", 949 | " the scale percentage is zero, then the bounding box is the same size as the\n", 950 | " input and no scaling is applied. 
If it's 50%, then the bounding box will be in\n", 951 | " a random range between half the width and height and full size.\n", 952 | "\n", 953 | " Args:\n", 954 | " flip_left_right: Boolean whether to randomly mirror images horizontally.\n", 955 | " random_crop: Integer percentage setting the total margin used around the\n", 956 | " crop box.\n", 957 | " random_scale: Integer percentage of how much to vary the scale by.\n", 958 | " random_brightness: Integer range to randomly multiply the pixel values by.\n", 959 | " graph.\n", 960 | " input_width: Horizontal size of expected input image to model.\n", 961 | " input_height: Vertical size of expected input image to model.\n", 962 | " input_depth: How many channels the expected input image should have.\n", 963 | " input_mean: Pixel value that should be zero in the image for the graph.\n", 964 | " input_std: How much to divide the pixel values by before recognition.\n", 965 | "\n", 966 | " Returns:\n", 967 | " The jpeg input layer and the distorted result tensor.\n", 968 | " \"\"\"\n", 969 | "\n", 970 | " jpeg_data = tf.placeholder(tf.string, name='DistortJPGInput')\n", 971 | " decoded_image = tf.image.decode_jpeg(jpeg_data, channels=input_depth)\n", 972 | " decoded_image_as_float = tf.cast(decoded_image, dtype=tf.float32)\n", 973 | " decoded_image_4d = tf.expand_dims(decoded_image_as_float, 0)\n", 974 | " margin_scale = 1.0 + (random_crop / 100.0)\n", 975 | " resize_scale = 1.0 + (random_scale / 100.0)\n", 976 | " margin_scale_value = tf.constant(margin_scale)\n", 977 | " resize_scale_value = tf.random_uniform(tensor_shape.scalar(),\n", 978 | " minval=1.0,\n", 979 | " maxval=resize_scale)\n", 980 | " scale_value = tf.multiply(margin_scale_value, resize_scale_value)\n", 981 | " precrop_width = tf.multiply(scale_value, input_width)\n", 982 | " precrop_height = tf.multiply(scale_value, input_height)\n", 983 | " precrop_shape = tf.stack([precrop_height, precrop_width])\n", 984 | " precrop_shape_as_int = tf.cast(precrop_shape, dtype=tf.int32)\n", 985 | " precropped_image = tf.image.resize_bilinear(decoded_image_4d,\n", 986 | " precrop_shape_as_int)\n", 987 | " precropped_image_3d = tf.squeeze(precropped_image, squeeze_dims=[0])\n", 988 | " cropped_image = tf.random_crop(precropped_image_3d,\n", 989 | " [input_height, input_width, input_depth])\n", 990 | " if flip_left_right:\n", 991 | " flipped_image = tf.image.random_flip_left_right(cropped_image)\n", 992 | " else:\n", 993 | " flipped_image = cropped_image\n", 994 | " brightness_min = 1.0 - (random_brightness / 100.0)\n", 995 | " brightness_max = 1.0 + (random_brightness / 100.0)\n", 996 | " brightness_value = tf.random_uniform(tensor_shape.scalar(),\n", 997 | " minval=brightness_min,\n", 998 | " maxval=brightness_max)\n", 999 | " brightened_image = tf.multiply(flipped_image, brightness_value)\n", 1000 | " offset_image = tf.subtract(brightened_image, input_mean)\n", 1001 | " mul_image = tf.multiply(offset_image, 1.0 / input_std)\n", 1002 | " distort_result = tf.expand_dims(mul_image, 0, name='DistortResult')\n", 1003 | " return jpeg_data, distort_result\n", 1004 | "\n", 1005 | "\n", 1006 | "def variable_summaries(var):\n", 1007 | " \"\"\"Attach a lot of summaries to a Tensor (for TensorBoard visualization).\"\"\"\n", 1008 | " with tf.name_scope('summaries'):\n", 1009 | " mean = tf.reduce_mean(var)\n", 1010 | " tf.summary.scalar('mean', mean)\n", 1011 | " with tf.name_scope('stddev'):\n", 1012 | " stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))\n", 1013 | " tf.summary.scalar('stddev', 
stddev)\n", 1014 | " tf.summary.scalar('max', tf.reduce_max(var))\n", 1015 | " tf.summary.scalar('min', tf.reduce_min(var))\n", 1016 | " tf.summary.histogram('histogram', var)\n", 1017 | "\n", 1018 | "\n", 1019 | "def add_final_training_ops(class_count, final_tensor_name, bottleneck_tensor,\n", 1020 | " bottleneck_tensor_size):\n", 1021 | " \"\"\"Adds a new softmax and fully-connected layer for training.\n", 1022 | "\n", 1023 | " We need to retrain the top layer to identify our new classes, so this function\n", 1024 | " adds the right operations to the graph, along with some variables to hold the\n", 1025 | " weights, and then sets up all the gradients for the backward pass.\n", 1026 | "\n", 1027 | " The set up for the softmax and fully-connected layers is based on:\n", 1028 | " https://www.tensorflow.org/versions/master/tutorials/mnist/beginners/index.html\n", 1029 | "\n", 1030 | " Args:\n", 1031 | " class_count: Integer of how many categories of things we're trying to\n", 1032 | " recognize.\n", 1033 | " final_tensor_name: Name string for the new final node that produces results.\n", 1034 | " bottleneck_tensor: The output of the main CNN graph.\n", 1035 | " bottleneck_tensor_size: How many entries in the bottleneck vector.\n", 1036 | "\n", 1037 | " Returns:\n", 1038 | " The tensors for the training and cross entropy results, and tensors for the\n", 1039 | " bottleneck input and ground truth input.\n", 1040 | " \"\"\"\n", 1041 | " with tf.name_scope('input'):\n", 1042 | " bottleneck_input = tf.placeholder_with_default(\n", 1043 | " bottleneck_tensor,\n", 1044 | " shape=[None, bottleneck_tensor_size],\n", 1045 | " name='BottleneckInputPlaceholder')\n", 1046 | "\n", 1047 | " ground_truth_input = tf.placeholder(tf.float32,\n", 1048 | " [None, class_count],\n", 1049 | " name='GroundTruthInput')\n", 1050 | "\n", 1051 | " # Organizing the following ops as `final_training_ops` so they're easier\n", 1052 | " # to see in TensorBoard\n", 1053 | " layer_name = 'final_training_ops'\n", 1054 | " with tf.name_scope(layer_name):\n", 1055 | " with tf.name_scope('weights'):\n", 1056 | " initial_value = tf.truncated_normal(\n", 1057 | " [bottleneck_tensor_size, class_count], stddev=0.001)\n", 1058 | "\n", 1059 | " layer_weights = tf.Variable(initial_value, name='final_weights')\n", 1060 | "\n", 1061 | " variable_summaries(layer_weights)\n", 1062 | " with tf.name_scope('biases'):\n", 1063 | " layer_biases = tf.Variable(tf.zeros([class_count]), name='final_biases')\n", 1064 | " variable_summaries(layer_biases)\n", 1065 | " with tf.name_scope('Wx_plus_b'):\n", 1066 | " logits = tf.matmul(bottleneck_input, layer_weights) + layer_biases\n", 1067 | " tf.summary.histogram('pre_activations', logits)\n", 1068 | "\n", 1069 | " final_tensor = tf.nn.softmax(logits, name=final_tensor_name)\n", 1070 | " tf.summary.histogram('activations', final_tensor)\n", 1071 | "\n", 1072 | " with tf.name_scope('cross_entropy'):\n", 1073 | " cross_entropy = tf.nn.softmax_cross_entropy_with_logits(\n", 1074 | " labels=ground_truth_input, logits=logits)\n", 1075 | " with tf.name_scope('total'):\n", 1076 | " cross_entropy_mean = tf.reduce_mean(cross_entropy)\n", 1077 | " tf.summary.scalar('cross_entropy', cross_entropy_mean)\n", 1078 | "\n", 1079 | " with tf.name_scope('train'):\n", 1080 | " optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate)\n", 1081 | " train_step = optimizer.minimize(cross_entropy_mean)\n", 1082 | "\n", 1083 | " return (train_step, cross_entropy_mean, bottleneck_input, ground_truth_input,\n", 1084 | " 
final_tensor)\n", 1085 | "\n", 1086 | "\n", 1087 | "def add_evaluation_step(result_tensor, ground_truth_tensor):\n", 1088 | " \"\"\"Inserts the operations we need to evaluate the accuracy of our results.\n", 1089 | "\n", 1090 | " Args:\n", 1091 | " result_tensor: The new final node that produces results.\n", 1092 | " ground_truth_tensor: The node we feed ground truth data\n", 1093 | " into.\n", 1094 | "\n", 1095 | " Returns:\n", 1096 | " Tuple of (evaluation step, prediction).\n", 1097 | " \"\"\"\n", 1098 | " with tf.name_scope('accuracy'):\n", 1099 | " with tf.name_scope('correct_prediction'):\n", 1100 | " prediction = tf.argmax(result_tensor, 1)\n", 1101 | " correct_prediction = tf.equal(\n", 1102 | " prediction, tf.argmax(ground_truth_tensor, 1))\n", 1103 | " with tf.name_scope('accuracy'):\n", 1104 | " evaluation_step = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n", 1105 | " tf.summary.scalar('accuracy', evaluation_step)\n", 1106 | " return evaluation_step, prediction\n", 1107 | "\n", 1108 | "\n", 1109 | "def save_graph_to_file(sess, graph, graph_file_name):\n", 1110 | " output_graph_def = graph_util.convert_variables_to_constants(\n", 1111 | " sess, graph.as_graph_def(), [FLAGS.final_tensor_name])\n", 1112 | " with gfile.FastGFile(graph_file_name, 'wb') as f:\n", 1113 | " f.write(output_graph_def.SerializeToString())\n", 1114 | " return\n", 1115 | "\n", 1116 | "\n", 1117 | "def prepare_file_system():\n", 1118 | " # Setup the directory we'll write summaries to for TensorBoard\n", 1119 | " if tf.gfile.Exists(FLAGS.summaries_dir):\n", 1120 | " tf.gfile.DeleteRecursively(FLAGS.summaries_dir)\n", 1121 | " tf.gfile.MakeDirs(FLAGS.summaries_dir)\n", 1122 | " if FLAGS.intermediate_store_frequency > 0:\n", 1123 | " ensure_dir_exists(FLAGS.intermediate_output_graphs_dir)\n", 1124 | " return\n", 1125 | "\n", 1126 | "\n", 1127 | "def create_model_info(architecture):\n", 1128 | " \"\"\"Given the name of a model architecture, returns information about it.\n", 1129 | "\n", 1130 | " There are different base image recognition pretrained models that can be\n", 1131 | " retrained using transfer learning, and this function translates from the name\n", 1132 | " of a model to the attributes that are needed to download and train with it.\n", 1133 | "\n", 1134 | " Args:\n", 1135 | " architecture: Name of a model architecture.\n", 1136 | "\n", 1137 | " Returns:\n", 1138 | " Dictionary of information about the model, or None if the name isn't\n", 1139 | " recognized\n", 1140 | "\n", 1141 | " Raises:\n", 1142 | " ValueError: If architecture name is unknown.\n", 1143 | " \"\"\"\n", 1144 | " architecture = architecture.lower()\n", 1145 | " if architecture == 'inception_v3':\n", 1146 | " # pylint: disable=line-too-long\n", 1147 | " data_url = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'\n", 1148 | " # pylint: enable=line-too-long\n", 1149 | " bottleneck_tensor_name = 'pool_3/_reshape:0'\n", 1150 | " bottleneck_tensor_size = 2048\n", 1151 | " input_width = 299\n", 1152 | " input_height = 299\n", 1153 | " input_depth = 3\n", 1154 | " resized_input_tensor_name = 'Mul:0'\n", 1155 | " model_file_name = 'classify_image_graph_def.pb'\n", 1156 | " input_mean = 128\n", 1157 | " input_std = 128\n", 1158 | " elif architecture.startswith('mobilenet_'):\n", 1159 | " parts = architecture.split('_')\n", 1160 | " if len(parts) != 3 and len(parts) != 4:\n", 1161 | " tf.logging.error(\"Couldn't understand architecture name '%s'\",\n", 1162 | " architecture)\n", 1163 | " return 
None\n", 1164 | " version_string = parts[1]\n", 1165 | " if (version_string != '1.0' and version_string != '0.75' and\n", 1166 | " version_string != '0.50' and version_string != '0.25'):\n", 1167 | " tf.logging.error(\n", 1168 | " \"\"\"\"The Mobilenet version should be '1.0', '0.75', '0.50', or '0.25',\n", 1169 | " but found '%s' for architecture '%s'\"\"\",\n", 1170 | " version_string, architecture)\n", 1171 | " return None\n", 1172 | " size_string = parts[2]\n", 1173 | " if (size_string != '224' and size_string != '192' and\n", 1174 | " size_string != '160' and size_string != '128'):\n", 1175 | " tf.logging.error(\n", 1176 | " \"\"\"The Mobilenet input size should be '224', '192', '160', or '128',\n", 1177 | " but found '%s' for architecture '%s'\"\"\",\n", 1178 | " size_string, architecture)\n", 1179 | " return None\n", 1180 | " if len(parts) == 3:\n", 1181 | " is_quantized = False\n", 1182 | " else:\n", 1183 | " if parts[3] != 'quantized':\n", 1184 | " tf.logging.error(\n", 1185 | " \"Couldn't understand architecture suffix '%s' for '%s'\", parts[3],\n", 1186 | " architecture)\n", 1187 | " return None\n", 1188 | " is_quantized = True\n", 1189 | " data_url = 'http://download.tensorflow.org/models/mobilenet_v1_'\n", 1190 | " data_url += version_string + '_' + size_string + '_frozen.tgz'\n", 1191 | " bottleneck_tensor_name = 'MobilenetV1/Predictions/Reshape:0'\n", 1192 | " bottleneck_tensor_size = 1001\n", 1193 | " input_width = int(size_string)\n", 1194 | " input_height = int(size_string)\n", 1195 | " input_depth = 3\n", 1196 | " resized_input_tensor_name = 'input:0'\n", 1197 | " if is_quantized:\n", 1198 | " model_base_name = 'quantized_graph.pb'\n", 1199 | " else:\n", 1200 | " model_base_name = 'frozen_graph.pb'\n", 1201 | " model_dir_name = 'mobilenet_v1_' + version_string + '_' + size_string\n", 1202 | " model_file_name = os.path.join(model_dir_name, model_base_name)\n", 1203 | " input_mean = 127.5\n", 1204 | " input_std = 127.5\n", 1205 | " else:\n", 1206 | " tf.logging.error(\"Couldn't understand architecture name '%s'\", architecture)\n", 1207 | " raise ValueError('Unknown architecture', architecture)\n", 1208 | "\n", 1209 | " return {\n", 1210 | " 'data_url': data_url,\n", 1211 | " 'bottleneck_tensor_name': bottleneck_tensor_name,\n", 1212 | " 'bottleneck_tensor_size': bottleneck_tensor_size,\n", 1213 | " 'input_width': input_width,\n", 1214 | " 'input_height': input_height,\n", 1215 | " 'input_depth': input_depth,\n", 1216 | " 'resized_input_tensor_name': resized_input_tensor_name,\n", 1217 | " 'model_file_name': model_file_name,\n", 1218 | " 'input_mean': input_mean,\n", 1219 | " 'input_std': input_std,\n", 1220 | " }\n", 1221 | "\n", 1222 | "\n", 1223 | "def add_jpeg_decoding(input_width, input_height, input_depth, input_mean,\n", 1224 | " input_std):\n", 1225 | " \"\"\"Adds operations that perform JPEG decoding and resizing to the graph..\n", 1226 | "\n", 1227 | " Args:\n", 1228 | " input_width: Desired width of the image fed into the recognizer graph.\n", 1229 | " input_height: Desired width of the image fed into the recognizer graph.\n", 1230 | " input_depth: Desired channels of the image fed into the recognizer graph.\n", 1231 | " input_mean: Pixel value that should be zero in the image for the graph.\n", 1232 | " input_std: How much to divide the pixel values by before recognition.\n", 1233 | "\n", 1234 | " Returns:\n", 1235 | " Tensors for the node to feed JPEG data into, and the output of the\n", 1236 | " preprocessing steps.\n", 1237 | " \"\"\"\n", 1238 | " jpeg_data 
= tf.placeholder(tf.string, name='DecodeJPGInput')\n", 1239 | " decoded_image = tf.image.decode_jpeg(jpeg_data, channels=input_depth)\n", 1240 | " decoded_image_as_float = tf.cast(decoded_image, dtype=tf.float32)\n", 1241 | " decoded_image_4d = tf.expand_dims(decoded_image_as_float, 0)\n", 1242 | " resize_shape = tf.stack([input_height, input_width])\n", 1243 | " resize_shape_as_int = tf.cast(resize_shape, dtype=tf.int32)\n", 1244 | " resized_image = tf.image.resize_bilinear(decoded_image_4d,\n", 1245 | " resize_shape_as_int)\n", 1246 | " offset_image = tf.subtract(resized_image, input_mean)\n", 1247 | " mul_image = tf.multiply(offset_image, 1.0 / input_std)\n", 1248 | " return jpeg_data, mul_image\n", 1249 | "\n", 1250 | "\n", 1251 | "def main(_):\n", 1252 | " # Needed to make sure the logging output is visible.\n", 1253 | " # See https://github.com/tensorflow/tensorflow/issues/3047\n", 1254 | " tf.logging.set_verbosity(tf.logging.INFO)\n", 1255 | "\n", 1256 | " # Prepare necessary directories that can be used during training\n", 1257 | " prepare_file_system()\n", 1258 | "\n", 1259 | " # Gather information about the model architecture we'll be using.\n", 1260 | " model_info = create_model_info(FLAGS.architecture)\n", 1261 | " if not model_info:\n", 1262 | " tf.logging.error('Did not recognize architecture flag')\n", 1263 | " return -1\n", 1264 | "\n", 1265 | " # Set up the pre-trained graph.\n", 1266 | " maybe_download_and_extract(model_info['data_url'])\n", 1267 | " graph, bottleneck_tensor, resized_image_tensor = (\n", 1268 | " create_model_graph(model_info))\n", 1269 | "\n", 1270 | " # Look at the folder structure, and create lists of all the images.\n", 1271 | " image_lists = create_image_lists(FLAGS.image_dir, FLAGS.testing_percentage,\n", 1272 | " FLAGS.validation_percentage)\n", 1273 | " class_count = len(image_lists.keys())\n", 1274 | " if class_count == 0:\n", 1275 | " tf.logging.error('No valid folders of images found at ' + FLAGS.image_dir)\n", 1276 | " return -1\n", 1277 | " if class_count == 1:\n", 1278 | " tf.logging.error('Only one valid folder of images found at ' +\n", 1279 | " FLAGS.image_dir +\n", 1280 | " ' - multiple classes are needed for classification.')\n", 1281 | " return -1\n", 1282 | "\n", 1283 | " # See if the command-line flags mean we're applying any distortions.\n", 1284 | " do_distort_images = should_distort_images(\n", 1285 | " FLAGS.flip_left_right, FLAGS.random_crop, FLAGS.random_scale,\n", 1286 | " FLAGS.random_brightness)\n", 1287 | "\n", 1288 | " with tf.Session(graph=graph) as sess:\n", 1289 | " # Set up the image decoding sub-graph.\n", 1290 | " jpeg_data_tensor, decoded_image_tensor = add_jpeg_decoding(\n", 1291 | " model_info['input_width'], model_info['input_height'],\n", 1292 | " model_info['input_depth'], model_info['input_mean'],\n", 1293 | " model_info['input_std'])\n", 1294 | "\n", 1295 | " if do_distort_images:\n", 1296 | " # We will be applying distortions, so setup the operations we'll need.\n", 1297 | " (distorted_jpeg_data_tensor,\n", 1298 | " distorted_image_tensor) = add_input_distortions(\n", 1299 | " FLAGS.flip_left_right, FLAGS.random_crop, FLAGS.random_scale,\n", 1300 | " FLAGS.random_brightness, model_info['input_width'],\n", 1301 | " model_info['input_height'], model_info['input_depth'],\n", 1302 | " model_info['input_mean'], model_info['input_std'])\n", 1303 | " else:\n", 1304 | " # We'll make sure we've calculated the 'bottleneck' image summaries and\n", 1305 | " # cached them on disk.\n", 1306 | " cache_bottlenecks(sess, 
image_lists, FLAGS.image_dir,\n", 1307 | " FLAGS.bottleneck_dir, jpeg_data_tensor,\n", 1308 | " decoded_image_tensor, resized_image_tensor,\n", 1309 | " bottleneck_tensor, FLAGS.architecture)\n", 1310 | "\n", 1311 | " # Add the new layer that we'll be training.\n", 1312 | " (train_step, cross_entropy, bottleneck_input, ground_truth_input,\n", 1313 | " final_tensor) = add_final_training_ops(\n", 1314 | " len(image_lists.keys()), FLAGS.final_tensor_name, bottleneck_tensor,\n", 1315 | " model_info['bottleneck_tensor_size'])\n", 1316 | "\n", 1317 | " # Create the operations we need to evaluate the accuracy of our new layer.\n", 1318 | " evaluation_step, prediction = add_evaluation_step(\n", 1319 | " final_tensor, ground_truth_input)\n", 1320 | "\n", 1321 | " # Merge all the summaries and write them out to the summaries_dir\n", 1322 | " merged = tf.summary.merge_all()\n", 1323 | " train_writer = tf.summary.FileWriter(FLAGS.summaries_dir + '/train',\n", 1324 | " sess.graph)\n", 1325 | "\n", 1326 | " validation_writer = tf.summary.FileWriter(\n", 1327 | " FLAGS.summaries_dir + '/validation')\n", 1328 | "\n", 1329 | " # Set up all our weights to their initial default values.\n", 1330 | " init = tf.global_variables_initializer()\n", 1331 | " sess.run(init)\n", 1332 | "\n", 1333 | " # Run the training for as many cycles as requested on the command line.\n", 1334 | " for i in range(FLAGS.how_many_training_steps):\n", 1335 | " # Get a batch of input bottleneck values, either calculated fresh every\n", 1336 | " # time with distortions applied, or from the cache stored on disk.\n", 1337 | " if do_distort_images:\n", 1338 | " (train_bottlenecks,\n", 1339 | " train_ground_truth) = get_random_distorted_bottlenecks(\n", 1340 | " sess, image_lists, FLAGS.train_batch_size, 'training',\n", 1341 | " FLAGS.image_dir, distorted_jpeg_data_tensor,\n", 1342 | " distorted_image_tensor, resized_image_tensor, bottleneck_tensor)\n", 1343 | " else:\n", 1344 | " (train_bottlenecks,\n", 1345 | " train_ground_truth, _) = get_random_cached_bottlenecks(\n", 1346 | " sess, image_lists, FLAGS.train_batch_size, 'training',\n", 1347 | " FLAGS.bottleneck_dir, FLAGS.image_dir, jpeg_data_tensor,\n", 1348 | " decoded_image_tensor, resized_image_tensor, bottleneck_tensor,\n", 1349 | " FLAGS.architecture)\n", 1350 | " # Feed the bottlenecks and ground truth into the graph, and run a training\n", 1351 | " # step. 
Capture training summaries for TensorBoard with the `merged` op.\n", 1352 | " train_summary, _ = sess.run(\n", 1353 | " [merged, train_step],\n", 1354 | " feed_dict={bottleneck_input: train_bottlenecks,\n", 1355 | " ground_truth_input: train_ground_truth})\n", 1356 | " train_writer.add_summary(train_summary, i)\n", 1357 | "\n", 1358 | " # Every so often, print out how well the graph is training.\n", 1359 | " is_last_step = (i + 1 == FLAGS.how_many_training_steps)\n", 1360 | " if (i % FLAGS.eval_step_interval) == 0 or is_last_step:\n", 1361 | " train_accuracy, cross_entropy_value = sess.run(\n", 1362 | " [evaluation_step, cross_entropy],\n", 1363 | " feed_dict={bottleneck_input: train_bottlenecks,\n", 1364 | " ground_truth_input: train_ground_truth})\n", 1365 | " tf.logging.info('%s: Step %d: Train accuracy = %.1f%%' %\n", 1366 | " (datetime.now(), i, train_accuracy * 100))\n", 1367 | " tf.logging.info('%s: Step %d: Cross entropy = %f' %\n", 1368 | " (datetime.now(), i, cross_entropy_value))\n", 1369 | " validation_bottlenecks, validation_ground_truth, _ = (\n", 1370 | " get_random_cached_bottlenecks(\n", 1371 | " sess, image_lists, FLAGS.validation_batch_size, 'validation',\n", 1372 | " FLAGS.bottleneck_dir, FLAGS.image_dir, jpeg_data_tensor,\n", 1373 | " decoded_image_tensor, resized_image_tensor, bottleneck_tensor,\n", 1374 | " FLAGS.architecture))\n", 1375 | " # Run a validation step and capture training summaries for TensorBoard\n", 1376 | " # with the `merged` op.\n", 1377 | " validation_summary, validation_accuracy = sess.run(\n", 1378 | " [merged, evaluation_step],\n", 1379 | " feed_dict={bottleneck_input: validation_bottlenecks,\n", 1380 | " ground_truth_input: validation_ground_truth})\n", 1381 | " validation_writer.add_summary(validation_summary, i)\n", 1382 | " tf.logging.info('%s: Step %d: Validation accuracy = %.1f%% (N=%d)' %\n", 1383 | " (datetime.now(), i, validation_accuracy * 100,\n", 1384 | " len(validation_bottlenecks)))\n", 1385 | "\n", 1386 | " # Store intermediate results\n", 1387 | " intermediate_frequency = FLAGS.intermediate_store_frequency\n", 1388 | "\n", 1389 | " if (intermediate_frequency > 0 and (i % intermediate_frequency == 0)\n", 1390 | " and i > 0):\n", 1391 | " intermediate_file_name = (FLAGS.intermediate_output_graphs_dir +\n", 1392 | " 'intermediate_' + str(i) + '.pb')\n", 1393 | " tf.logging.info('Save intermediate result to : ' +\n", 1394 | " intermediate_file_name)\n", 1395 | " save_graph_to_file(sess, graph, intermediate_file_name)\n", 1396 | "\n", 1397 | " # We've completed all our training, so run a final test evaluation on\n", 1398 | " # some new images we haven't used before.\n", 1399 | " test_bottlenecks, test_ground_truth, test_filenames = (\n", 1400 | " get_random_cached_bottlenecks(\n", 1401 | " sess, image_lists, FLAGS.test_batch_size, 'testing',\n", 1402 | " FLAGS.bottleneck_dir, FLAGS.image_dir, jpeg_data_tensor,\n", 1403 | " decoded_image_tensor, resized_image_tensor, bottleneck_tensor,\n", 1404 | " FLAGS.architecture))\n", 1405 | " test_accuracy, predictions = sess.run(\n", 1406 | " [evaluation_step, prediction],\n", 1407 | " feed_dict={bottleneck_input: test_bottlenecks,\n", 1408 | " ground_truth_input: test_ground_truth})\n", 1409 | " tf.logging.info('Final test accuracy = %.1f%% (N=%d)' %\n", 1410 | " (test_accuracy * 100, len(test_bottlenecks)))\n", 1411 | "\n", 1412 | " if FLAGS.print_misclassified_test_images:\n", 1413 | " tf.logging.info('=== MISCLASSIFIED TEST IMAGES ===')\n", 1414 | " for i, test_filename in 
enumerate(test_filenames):\n", 1415 | " if predictions[i] != test_ground_truth[i].argmax():\n", 1416 | " tf.logging.info('%70s %s' %\n", 1417 | " (test_filename,\n", 1418 | " list(image_lists.keys())[predictions[i]]))\n", 1419 | "\n", 1420 | " # Write out the trained graph and labels with the weights stored as\n", 1421 | " # constants.\n", 1422 | " save_graph_to_file(sess, graph, FLAGS.output_graph)\n", 1423 | " with gfile.FastGFile(FLAGS.output_labels, 'w') as f:\n", 1424 | " f.write('\\n'.join(image_lists.keys()) + '\\n')\n", 1425 | "\n", 1426 | "\n", 1427 | "if __name__ == '__main__':\n", 1428 | " parser = argparse.ArgumentParser()\n", 1429 | " parser.add_argument(\n", 1430 | " '--image_dir',\n", 1431 | " type=str,\n", 1432 | " default='',\n", 1433 | " help='Path to folders of labeled images.'\n", 1434 | " )\n", 1435 | " parser.add_argument(\n", 1436 | " '--output_graph',\n", 1437 | " type=str,\n", 1438 | " default='/tmp/output_graph.pb',\n", 1439 | " help='Where to save the trained graph.'\n", 1440 | " )\n", 1441 | " parser.add_argument(\n", 1442 | " '--intermediate_output_graphs_dir',\n", 1443 | " type=str,\n", 1444 | " default='/tmp/intermediate_graph/',\n", 1445 | " help='Where to save the intermediate graphs.'\n", 1446 | " )\n", 1447 | " parser.add_argument(\n", 1448 | " '--intermediate_store_frequency',\n", 1449 | " type=int,\n", 1450 | " default=0,\n", 1451 | " help=\"\"\"\\\n", 1452 | " How many steps to store intermediate graph. If \"0\" then will not\n", 1453 | " store.\\\n", 1454 | " \"\"\"\n", 1455 | " )\n", 1456 | " parser.add_argument(\n", 1457 | " '--output_labels',\n", 1458 | " type=str,\n", 1459 | " default='/tmp/output_labels.txt',\n", 1460 | " help='Where to save the trained graph\\'s labels.'\n", 1461 | " )\n", 1462 | " parser.add_argument(\n", 1463 | " '--summaries_dir',\n", 1464 | " type=str,\n", 1465 | " default='/tmp/retrain_logs',\n", 1466 | " help='Where to save summary logs for TensorBoard.'\n", 1467 | " )\n", 1468 | " parser.add_argument(\n", 1469 | " '--how_many_training_steps',\n", 1470 | " type=int,\n", 1471 | " default=6000,\n", 1472 | " help='How many training steps to run before ending.'\n", 1473 | " )\n", 1474 | " parser.add_argument(\n", 1475 | " '--learning_rate',\n", 1476 | " type=float,\n", 1477 | " default=0.01,\n", 1478 | " help='How large a learning rate to use when training.'\n", 1479 | " )\n", 1480 | " parser.add_argument(\n", 1481 | " '--testing_percentage',\n", 1482 | " type=int,\n", 1483 | " default=10,\n", 1484 | " help='What percentage of images to use as a test set.'\n", 1485 | " )\n", 1486 | " parser.add_argument(\n", 1487 | " '--validation_percentage',\n", 1488 | " type=int,\n", 1489 | " default=10,\n", 1490 | " help='What percentage of images to use as a validation set.'\n", 1491 | " )\n", 1492 | " parser.add_argument(\n", 1493 | " '--eval_step_interval',\n", 1494 | " type=int,\n", 1495 | " default=10,\n", 1496 | " help='How often to evaluate the training results.'\n", 1497 | " )\n", 1498 | " parser.add_argument(\n", 1499 | " '--train_batch_size',\n", 1500 | " type=int,\n", 1501 | " default=100,\n", 1502 | " help='How many images to train on at a time.'\n", 1503 | " )\n", 1504 | " parser.add_argument(\n", 1505 | " '--test_batch_size',\n", 1506 | " type=int,\n", 1507 | " default=-1,\n", 1508 | " help=\"\"\"\\\n", 1509 | " How many images to test on. 
This test set is only used once, to evaluate\n", 1510 | " the final accuracy of the model after training completes.\n", 1511 | " A value of -1 causes the entire test set to be used, which leads to more\n", 1512 | " stable results across runs.\\\n", 1513 | " \"\"\"\n", 1514 | " )\n", 1515 | " parser.add_argument(\n", 1516 | " '--validation_batch_size',\n", 1517 | " type=int,\n", 1518 | " default=100,\n", 1519 | " help=\"\"\"\\\n", 1520 | " How many images to use in an evaluation batch. This validation set is\n", 1521 | " used much more often than the test set, and is an early indicator of how\n", 1522 | " accurate the model is during training.\n", 1523 | " A value of -1 causes the entire validation set to be used, which leads to\n", 1524 | " more stable results across training iterations, but may be slower on large\n", 1525 | " training sets.\\\n", 1526 | " \"\"\"\n", 1527 | " )\n", 1528 | " parser.add_argument(\n", 1529 | " '--print_misclassified_test_images',\n", 1530 | " default=False,\n", 1531 | " help=\"\"\"\\\n", 1532 | " Whether to print out a list of all misclassified test images.\\\n", 1533 | " \"\"\",\n", 1534 | " action='store_true'\n", 1535 | " )\n", 1536 | " parser.add_argument(\n", 1537 | " '--model_dir',\n", 1538 | " type=str,\n", 1539 | " default='/tmp/imagenet',\n", 1540 | " help=\"\"\"\\\n", 1541 | " Path to classify_image_graph_def.pb,\n", 1542 | " imagenet_synset_to_human_label_map.txt, and\n", 1543 | " imagenet_2012_challenge_label_map_proto.pbtxt.\\\n", 1544 | " \"\"\"\n", 1545 | " )\n", 1546 | " parser.add_argument(\n", 1547 | " '--bottleneck_dir',\n", 1548 | " type=str,\n", 1549 | " default='/tmp/bottleneck',\n", 1550 | " help='Path to cache bottleneck layer values as files.'\n", 1551 | " )\n", 1552 | " parser.add_argument(\n", 1553 | " '--final_tensor_name',\n", 1554 | " type=str,\n", 1555 | " default='final_result',\n", 1556 | " help=\"\"\"\\\n", 1557 | " The name of the output classification layer in the retrained graph.\\\n", 1558 | " \"\"\"\n", 1559 | " )\n", 1560 | " parser.add_argument(\n", 1561 | " '--flip_left_right',\n", 1562 | " default=False,\n", 1563 | " help=\"\"\"\\\n", 1564 | " Whether to randomly flip half of the training images horizontally.\\\n", 1565 | " \"\"\",\n", 1566 | " action='store_true'\n", 1567 | " )\n", 1568 | " parser.add_argument(\n", 1569 | " '--random_crop',\n", 1570 | " type=int,\n", 1571 | " default=0,\n", 1572 | " help=\"\"\"\\\n", 1573 | " A percentage determining how much of a margin to randomly crop off the\n", 1574 | " training images.\\\n", 1575 | " \"\"\"\n", 1576 | " )\n", 1577 | " parser.add_argument(\n", 1578 | " '--random_scale',\n", 1579 | " type=int,\n", 1580 | " default=0,\n", 1581 | " help=\"\"\"\\\n", 1582 | " A percentage determining how much to randomly scale up the size of the\n", 1583 | " training images by.\\\n", 1584 | " \"\"\"\n", 1585 | " )\n", 1586 | " parser.add_argument(\n", 1587 | " '--random_brightness',\n", 1588 | " type=int,\n", 1589 | " default=0,\n", 1590 | " help=\"\"\"\\\n", 1591 | " A percentage determining how much to randomly multiply the training image\n", 1592 | " input pixels up or down by.\\\n", 1593 | " \"\"\"\n", 1594 | " )\n", 1595 | " parser.add_argument(\n", 1596 | " '--architecture',\n", 1597 | " type=str,\n", 1598 | " default='inception_v3',\n", 1599 | " help=\"\"\"\\\n", 1600 | " Which model architecture to use. 'inception_v3' is the most accurate, but\n", 1601 | " also the slowest. For faster or smaller models, chose a MobileNet with the\n", 1602 | " form 'mobilenet__[_quantized]'. 
For example,\n", 1603 | " 'mobilenet_1.0_224' will pick a model that is 17 MB in size and takes 224\n", 1604 | " pixel input images, while 'mobilenet_0.25_128_quantized' will choose a much\n", 1605 | " less accurate, but smaller and faster network that's 920 KB on disk and\n", 1606 | " takes 128x128 images. See https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html\n", 1607 | " for more information on Mobilenet.\\\n", 1608 | " \"\"\")\n", 1609 | " FLAGS, unparsed = parser.parse_known_args()\n", 1610 | " tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)" 1611 | ], 1612 | "metadata": { 1613 | "id": "q4iUnZdilBTt" 1614 | }, 1615 | "execution_count": null, 1616 | "outputs": [] 1617 | } 1618 | ] 1619 | } -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023 MauryaRitesh 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ### Read this in other languages 2 | 3 |
4 | 5 | [हिंदी](translation/README.Hindi.md)   6 | [日本語](translation/README.Japanese.md) 7 |
8 | 9 | 10 | # Facial-Expression-Detection 11 | 12 | This is the code for [this video](https://youtu.be/Dqa-3N8VZbw) by Ritesh on YouTube. 13 | 14 | A Facial Expression or Facial Emotion Detector can be used to tell whether a person is sad, happy, angry, and so on, from his/her face alone. This repository can be used to carry out such a task. It uses your webcam and identifies your expression in real time. Yeah, in real time! 15 | 16 | # PLAN 17 | 18 | This is a three-step process. In the first step, we load the XML file for detecting the presence of faces, and then we retrain our network on our images across five different categories. After that, we import the label_image.py program from the [last video]() and set everything up to run in real time. 19 | 20 | # DEPENDENCIES 21 | 22 | Hit the following in CMD/Terminal if you don't have them already installed: 23 | 24 |     pip install tensorflow 25 |     pip install opencv-python 26 | 27 | That's it for now. 28 | 29 | So let's take a brief look at each step. 30 | 31 | ## STEP 1 - Implementation of OpenCV HAAR CASCADES 32 | 33 | I'm using the "Frontal Face Alt" classifier for detecting the presence of a face in the webcam feed. This file is included with this repository. You can find the other classifiers [here](https://github.com/opencv/opencv/tree/master/data/haarcascades). 34 | 35 | Next, we have the task of loading this file; the loading code can be found in the [label.py](https://github.com/MauryaRitesh/Facial-Expression-Detection/blob/master/label.py) program. E.g.: 36 | 37 |     # We load the xml file 38 |     classifier = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml') 39 | 40 | Now, everything can be set up with the label.py program. So, let's move to the next step. 41 | 42 | ## STEP 2 - ReTraining the Network - Tensorflow Image Classifier 43 | 44 | We are going to create an image classifier that identifies whether a person is sad, happy, and so on, and then shows this text on the OpenCV window. 45 | This step consists of several sub-steps: 46 | 47 | - We first need to create a directory named images. In this directory, create five or six subdirectories with names like Happy, Sad, Angry, Calm and Neutral. You can add more than these. 48 | - Now fill these directories with the respective images by downloading them from the Internet. E.g., in the "Happy" directory, put only images of happy faces. 49 | - Now run the "face_crop.py" program as suggested in the [video](https://youtu.be/Dqa-3N8VZbw). 50 | - Once you have only the cleaned (cropped) images left, you are ready to retrain the network. For this purpose, I'm using the MobileNet model, which is quite fast and accurate. To run the training, go to the parent folder, open CMD/Terminal there, and hit the following: 51 | 52 |     python retrain.py --output_graph=retrained_graph.pb --output_labels=retrained_labels.txt --architecture=MobileNet_1.0_224 --image_dir=images 53 | 54 | That's it for this step. 55 | 56 | ## STEP 3 - Importing the ReTrained Model and Setting Everything Up 57 | 58 | Finally, I've put everything under the "label_image.py" file, which contains everything needed for classification. 59 | Now run the "label.py" program by typing the following in CMD/Terminal: 60 | 61 |     python label.py 62 | 63 | It'll open a new OpenCV window and then identify your facial expression. 64 | We are done now! 65 | 66 | ## Contributing Guidelines 67 | Thank you for considering contributing to the "Facial-Expression-Detection" project! 68 | 69 | ### Code of Conduct 70 | Before you start contributing, please read and adhere to our [Code of Conduct](CODE_OF_CONDUCT.md). 
We expect all contributors to follow these rules and maintain a respectful and inclusive environment for everyone. 71 | 72 | ### Getting Started 73 | 74 | 1.Fork the Repository: To contribute, fork the main repository to your GitHub account. 75 | 76 | 2.Clone the Repository: Clone your forked repository to your local machine: 77 | 78 | ```bash 79 | git clone https://github.com/your-username/Facial-Expression-Detection.git 80 | ``` 81 | 3.Set Up Development Environment: Install the necessary dependencies if you haven't already. You can do this by running the following commands: 82 | ```python 83 | pip install tensorflow 84 | pip install opencv-python 85 | ``` 86 | 4.Create a Branch: Create a new branch for your contribution. Choose a descriptive name for the branch that reflects the nature of your contribution. 87 | ```bash 88 | git checkout -b feature/your-feature-name 89 | ``` 90 | 5.Make Your Changes: Make the necessary changes and additions in your branch. 91 | 92 | 6.Commit Your Changes: Make clear, concise, and well-documented commit messages. Reference any relevant issues or pull requests in your commits. 93 | 94 | ```bash 95 | git commit -m "Add new feature" 96 | ``` 97 | 7.Push Your Changes: Push your branch to your GitHub repository: 98 | 99 | ```bash 100 | git push origin feature/your-feature-name 101 | ``` 102 | 8.Create a Pull Request: Create a pull request from your forked repository to the main repository. 103 | 104 | 105 | PLEASE DO STAR THIS REPO IF YOU FOUND SOMETHING INTERESTING. <3 106 | -------------------------------------------------------------------------------- /face_crop.py: -------------------------------------------------------------------------------- 1 | ## This program first ensures if the face of a person exists in the given image or not then if it exists, it crops 2 | ## the image of the face and saves it in the given directory. 3 | 4 | ## Importing Modules 5 | import cv2 6 | import os 7 | 8 | 9 | ##Make changes to these lines for getting the desired results. 10 | 11 | ## DIRECTORY of the images 12 | directory = "E:/data/vids/a" 13 | 14 | ## directory where the images to be saved: 15 | f_directory = "E:/data/vids/Yawning/" 16 | 17 | 18 | def facecrop(image): 19 | ## Crops the face of a person from any image! 20 | 21 | ## OpenCV XML FILE for Frontal Facial Detection using HAAR CASCADES. 22 | facedata = "haarcascade_frontalface_alt.xml" 23 | cascade = cv2.CascadeClassifier(facedata) 24 | 25 | ## Reading the given Image with OpenCV 26 | img = cv2.imread(image) 27 | 28 | try: 29 | ## Some downloaded images are of unsupported type and should be ignored while raising Exception, so for that 30 | ## I'm using the try/except functions. 31 | 32 | minisize = (img.shape[1],img.shape[0]) 33 | miniframe = cv2.resize(img, minisize) 34 | 35 | faces = cascade.detectMultiScale(miniframe) 36 | 37 | for f in faces: 38 | x, y, w, h = [ v for v in f ] 39 | cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2) 40 | 41 | sub_face = img[y:y+h, x:x+w] 42 | 43 | f_name = image.split('/') 44 | f_name = f_name[-1] 45 | 46 | ## Change here the Desired directory. 
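    ## Note: f_directory must end with a '/', because the output path below is built by plain string concatenation (f_directory + f_name).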
47 | cv2.imwrite(f_directory + f_name, sub_face) 48 | print ("Writing: " + image) 49 | 50 | except: 51 | pass 52 | 53 | if __name__ == '__main__': 54 | images = os.listdir(directory) 55 | i = 0 56 | 57 | for img in images: 58 | file = directory + img 59 | print (i) 60 | facecrop(file) 61 | i += 1 62 | -------------------------------------------------------------------------------- /label.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import label_image 3 | 4 | size = 4 5 | 6 | 7 | #Load the xml file 8 | classifier = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml') 9 | 10 | webcam = cv2.VideoCapture(0) #Using default WebCam connected to PC. 11 | 12 | while True: 13 | (rval, im) = webcam.read() 14 | im=cv2.flip(im,1,0) #Flip to act as a mirror 15 | 16 | # Resize the image to speed up detection 17 | mini = cv2.resize(im, (int(im.shape[1]/size), int(im.shape[0]/size))) 18 | 19 | # detect MultiScale / faces 20 | faces = classifier.detectMultiScale(mini) 21 | 22 | # Draw rectangles around each face 23 | for f in faces: 24 | (x, y, w, h) = [v * size for v in f] #Scale the shapesize backup 25 | cv2.rectangle(im, (x,y), (x+w,y+h), (0,255,0), 4) 26 | 27 | #Save just the rectangle faces in SubRecFaces 28 | sub_face = im[y:y+h, x:x+w] 29 | 30 | FaceFileName = "test.jpg" #Saving the current image from the webcam for testing. 31 | cv2.imwrite(FaceFileName, sub_face) 32 | 33 | text = label_image.main(FaceFileName)# Getting the Result from the label_image file, i.e., Classification Result. 34 | text = text.title()# Title Case looks Stunning. 35 | font = cv2.FONT_HERSHEY_TRIPLEX 36 | cv2.putText(im, text,(x+w,y), font, 1, (0,0,255), 2) 37 | 38 | # Show the image 39 | cv2.imshow('Capture', im) 40 | key = cv2.waitKey(10) 41 | # if Esc key is press then break out of the loop 42 | if key == 27: #The Esc key 43 | break 44 | -------------------------------------------------------------------------------- /label_image.py: -------------------------------------------------------------------------------- 1 | ## Credit: Some parts of the program has been taken from OpenCV documentation 2 | #importing required libraries 3 | from __future__ import absolute_import 4 | from __future__ import division 5 | from __future__ import print_function 6 | 7 | import argparse 8 | import sys 9 | import time 10 | 11 | import numpy as np 12 | import tensorflow as tf 13 | 14 | #function to load TensorFlow graph from a model file 15 | def load_graph(model_file): 16 | graph = tf.Graph() #creating a tensorflow computation graph 17 | graph_def = tf.GraphDef() 18 | 19 | with open(model_file, "rb") as f: 20 | graph_def.ParseFromString(f.read()) #parsing binary graph definition 21 | with graph.as_default(): #setting this graph as default computation graph 22 | tf.import_graph_def(graph_def) #importing graph definitions into current graph 23 | 24 | return graph 25 | 26 | #function to read and pre-process the image 27 | def read_tensor_from_image_file(file_name, input_height=299, input_width=299, 28 | input_mean=0, input_std=255): 29 | input_name = "file_reader" 30 | output_name = "normalized" 31 | file_reader = tf.read_file(file_name, input_name) 32 | if file_name.endswith(".png"): # if a PNG image, setting the number of color channels to 3 33 | image_reader = tf.image.decode_png(file_reader, channels = 3, 34 | name='png_reader') 35 | elif file_name.endswith(".gif"): # if a GIF image, removing the singleton dimension 36 | image_reader = tf.squeeze(tf.image.decode_gif(file_reader, 
37 | name='gif_reader')) 38 | elif file_name.endswith(".bmp"): # if bmp, then decoding a BMP image 39 | image_reader = tf.image.decode_bmp(file_reader, name='bmp_reader') 40 | else: #default: decoding the image as a JPEG with 3 color channels 41 | image_reader = tf.image.decode_jpeg(file_reader, channels = 3, 42 | name='jpeg_reader') 43 | float_caster = tf.cast(image_reader, tf.float32) #converting the image into float32 dtype 44 | dims_expander = tf.expand_dims(float_caster, 0); #adding batch dimension 45 | resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width]) #resizing the image 46 | normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std]) #normalizing the image 47 | sess = tf.Session() 48 | result = sess.run(normalized) 49 | 50 | return result 51 | 52 | #function for loading labels from a file 53 | def load_labels(label_file): 54 | label = [] 55 | proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines() 56 | for l in proto_as_ascii_lines: 57 | label.append(l.rstrip()) #appending labels after stripping newline characters 58 | return label 59 | 60 | #main function for image classification 61 | def main(img): 62 | file_name = img 63 | model_file = "retrained_graph.pb" 64 | label_file = "retrained_labels.txt" 65 | input_height = 224 66 | input_width = 224 67 | input_mean = 128 68 | input_std = 128 69 | input_layer = "input" 70 | output_layer = "final_result" 71 | 72 | #parsing command-line arguments 73 | parser = argparse.ArgumentParser() 74 | parser.add_argument("--image", help="image to be processed") 75 | parser.add_argument("--graph", help="graph/model to be executed") 76 | parser.add_argument("--labels", help="name of file containing labels") 77 | parser.add_argument("--input_height", type=int, help="input height") 78 | parser.add_argument("--input_width", type=int, help="input width") 79 | parser.add_argument("--input_mean", type=int, help="input mean") 80 | parser.add_argument("--input_std", type=int, help="input std") 81 | parser.add_argument("--input_layer", help="name of input layer") 82 | parser.add_argument("--output_layer", help="name of output layer") 83 | args = parser.parse_args() 84 | 85 | #over-riding default values with command line arguments(if provided) 86 | if args.graph: 87 | model_file = args.graph 88 | if args.image: 89 | file_name = args.image 90 | if args.labels: 91 | label_file = args.labels 92 | if args.input_height: 93 | input_height = args.input_height 94 | if args.input_width: 95 | input_width = args.input_width 96 | if args.input_mean: 97 | input_mean = args.input_mean 98 | if args.input_std: 99 | input_std = args.input_std 100 | if args.input_layer: 101 | input_layer = args.input_layer 102 | if args.output_layer: 103 | output_layer = args.output_layer 104 | 105 | graph = load_graph(model_file) 106 | t = read_tensor_from_image_file(file_name, #reading and pre-processing the image input 107 | input_height=input_height, 108 | input_width=input_width, 109 | input_mean=input_mean, 110 | input_std=input_std) 111 | 112 | input_name = "import/" + input_layer 113 | output_name = "import/" + output_layer 114 | input_operation = graph.get_operation_by_name(input_name); # obtaining references to the input and output operations within the graph 115 | output_operation = graph.get_operation_by_name(output_name); 116 | 117 | #running the image through the model 118 | with tf.Session(graph=graph) as sess: 119 | start = time.time() #starting the timer 120 | results = sess.run(output_operation.outputs[0], 121 | 
{input_operation.outputs[0]: t}) 122 | end=time.time() #recording the end time for measuring performance 123 | results = np.squeeze(results) #removing dimensions of size 1, making it a 1D Array 124 | 125 | #identifying the top k results 126 | top_k = results.argsort()[-5:][::-1] 127 | labels = load_labels(label_file) 128 | 129 | for i in top_k: 130 | return labels[i] #returning the label with highest confidence 131 | -------------------------------------------------------------------------------- /retrain.py: -------------------------------------------------------------------------------- 1 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved. 2 | # 3 | # Acknowledgement: This is a work by Tesnsorflow authors at Google. 4 | 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | # ============================================================================== 17 | r"""Simple transfer learning with Inception v3 or Mobilenet models. 18 | 19 | With support for TensorBoard. 20 | 21 | This example shows how to take a Inception v3 or Mobilenet model trained on 22 | ImageNet images, and train a new top layer that can recognize other classes of 23 | images. 24 | 25 | The top layer receives as input a 2048-dimensional vector (1001-dimensional for 26 | Mobilenet) for each image. We train a softmax layer on top of this 27 | representation. Assuming the softmax layer contains N labels, this corresponds 28 | to learning N + 2048*N (or 1001*N) model parameters corresponding to the 29 | learned biases and weights. 30 | 31 | Here's an example, which assumes you have a folder containing class-named 32 | subfolders, each full of images for each label. The example folder flower_photos 33 | should have a structure like this: 34 | 35 | ~/flower_photos/daisy/photo1.jpg 36 | ~/flower_photos/daisy/photo2.jpg 37 | ... 38 | ~/flower_photos/rose/anotherphoto77.jpg 39 | ... 40 | ~/flower_photos/sunflower/somepicture.jpg 41 | 42 | The subfolder names are important, since they define what label is applied to 43 | each image, but the filenames themselves don't matter. Once your images are 44 | prepared, you can run the training with a command like this: 45 | 46 | 47 | ```bash 48 | bazel build tensorflow/examples/image_retraining:retrain && \ 49 | bazel-bin/tensorflow/examples/image_retraining/retrain \ 50 | --image_dir ~/flower_photos 51 | ``` 52 | 53 | Or, if you have a pip installation of tensorflow, `retrain.py` can be run 54 | without bazel: 55 | 56 | ```bash 57 | python tensorflow/examples/image_retraining/retrain.py \ 58 | --image_dir ~/flower_photos 59 | ``` 60 | 61 | You can replace the image_dir argument with any folder containing subfolders of 62 | images. The label for each image is taken from the name of the subfolder it's 63 | in. 64 | 65 | This produces a new model file that can be loaded and run by any TensorFlow 66 | program, for example the label_image sample code. 
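For illustration, here is a minimal sketch of loading and running the graph this
script produces. It mirrors what label_image.py in this repository does; the
'input' tensor name assumes a MobileNet architecture, 'final_result' assumes the
default --final_tensor_name, and `image_batch` is assumed to be a preprocessed
image batch of shape [1, height, width, 3]:

```python
import tensorflow as tf

# Load the frozen GraphDef written by this script.
graph = tf.Graph()
graph_def = tf.GraphDef()
with open('/tmp/output_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
with graph.as_default():
    tf.import_graph_def(graph_def)  # imported tensor names get an 'import/' prefix

with tf.Session(graph=graph) as sess:
    input_op = graph.get_operation_by_name('import/input')
    output_op = graph.get_operation_by_name('import/final_result')
    # image_batch must already be decoded, resized, and normalized for the model.
    probabilities = sess.run(output_op.outputs[0],
                             {input_op.outputs[0]: image_batch})
```

The order of the returned probabilities matches the order of the lines in the
output_labels file.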
67 | 68 | By default this script will use the high accuracy, but comparatively large and 69 | slow Inception v3 model architecture. It's recommended that you start with this 70 | to validate that you have gathered good training data, but if you want to deploy 71 | on resource-limited platforms, you can try the `--architecture` flag with a 72 | Mobilenet model. For example: 73 | 74 | ```bash 75 | python tensorflow/examples/image_retraining/retrain.py \ 76 | --image_dir ~/flower_photos --architecture mobilenet_1.0_224 77 | ``` 78 | 79 | There are 32 different Mobilenet models to choose from, with a variety of file 80 | size and latency options. The first number can be '1.0', '0.75', '0.50', or 81 | '0.25' to control the size, and the second controls the input image size, either 82 | '224', '192', '160', or '128', with smaller sizes running faster. See 83 | https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html 84 | for more information on Mobilenet. 85 | 86 | To use with TensorBoard: 87 | 88 | By default, this script will log summaries to /tmp/retrain_logs directory 89 | 90 | Visualize the summaries with this command: 91 | 92 | tensorboard --logdir /tmp/retrain_logs 93 | 94 | """ 95 | from __future__ import absolute_import 96 | from __future__ import division 97 | from __future__ import print_function 98 | 99 | import argparse 100 | import collections 101 | from datetime import datetime 102 | import hashlib 103 | import os.path 104 | import random 105 | import re 106 | import sys 107 | import tarfile 108 | 109 | import numpy as np 110 | from six.moves import urllib 111 | import tensorflow as tf 112 | 113 | from tensorflow.python.framework import graph_util 114 | from tensorflow.python.framework import tensor_shape 115 | from tensorflow.python.platform import gfile 116 | from tensorflow.python.util import compat 117 | 118 | FLAGS = None 119 | 120 | # These are all parameters that are tied to the particular model architecture 121 | # we're using for Inception v3. These include things like tensor names and their 122 | # sizes. If you want to adapt this script to work with another model, you will 123 | # need to update these to reflect the values in the network you're using. 124 | MAX_NUM_IMAGES_PER_CLASS = 2 ** 27 - 1 # ~134M 125 | 126 | 127 | def create_image_lists(image_dir, testing_percentage, validation_percentage): 128 | """Builds a list of training images from the file system. 129 | 130 | Analyzes the sub folders in the image directory, splits them into stable 131 | training, testing, and validation sets, and returns a data structure 132 | describing the lists of images for each label and their paths. 133 | 134 | Args: 135 | image_dir: String path to a folder containing subfolders of images. 136 | testing_percentage: Integer percentage of the images to reserve for tests. 137 | validation_percentage: Integer percentage of images reserved for validation. 138 | 139 | Returns: 140 | A dictionary containing an entry for each label subfolder, with images split 141 | into training, testing, and validation sets within each label. 
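    For example, with an image_dir that contains 'Happy' and 'Sad' subfolders,
    the returned structure looks roughly like this (file names are illustrative):

        {'happy': {'dir': 'Happy',
                   'training': ['happy1.jpg', ...],
                   'testing': ['happy2.jpg', ...],
                   'validation': ['happy3.jpg', ...]},
         'sad': {...}}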
142 | """ 143 | if not gfile.Exists(image_dir): 144 | tf.logging.error("Image directory '" + image_dir + "' not found.") 145 | return None 146 | result = collections.OrderedDict() 147 | sub_dirs = [ 148 | os.path.join(image_dir,item) 149 | for item in gfile.ListDirectory(image_dir)] 150 | sub_dirs = sorted(item for item in sub_dirs 151 | if gfile.IsDirectory(item)) 152 | for sub_dir in sub_dirs: 153 | extensions = ['jpg', 'jpeg', 'JPG', 'JPEG'] 154 | file_list = [] 155 | dir_name = os.path.basename(sub_dir) 156 | if dir_name == image_dir: 157 | continue 158 | tf.logging.info("Looking for images in '" + dir_name + "'") 159 | for extension in extensions: 160 | file_glob = os.path.join(image_dir, dir_name, '*.' + extension) 161 | file_list.extend(gfile.Glob(file_glob)) 162 | if not file_list: 163 | tf.logging.warning('No files found') 164 | continue 165 | if len(file_list) < 20: 166 | tf.logging.warning( 167 | 'WARNING: Folder has less than 20 images, which may cause issues.') 168 | elif len(file_list) > MAX_NUM_IMAGES_PER_CLASS: 169 | tf.logging.warning( 170 | 'WARNING: Folder {} has more than {} images. Some images will ' 171 | 'never be selected.'.format(dir_name, MAX_NUM_IMAGES_PER_CLASS)) 172 | label_name = re.sub(r'[^a-z0-9]+', ' ', dir_name.lower()) 173 | training_images = [] 174 | testing_images = [] 175 | validation_images = [] 176 | for file_name in file_list: 177 | base_name = os.path.basename(file_name) 178 | # We want to ignore anything after '_nohash_' in the file name when 179 | # deciding which set to put an image in, the data set creator has a way of 180 | # grouping photos that are close variations of each other. For example 181 | # this is used in the plant disease data set to group multiple pictures of 182 | # the same leaf. 183 | hash_name = re.sub(r'_nohash_.*$', '', file_name) 184 | # This looks a bit magical, but we need to decide whether this file should 185 | # go into the training, testing, or validation sets, and we want to keep 186 | # existing files in the same set even if more files are subsequently 187 | # added. 188 | # To do that, we need a stable way of deciding based on just the file name 189 | # itself, so we do a hash of that and then use that to generate a 190 | # probability value that we use to assign it. 191 | hash_name_hashed = hashlib.sha1(compat.as_bytes(hash_name)).hexdigest() 192 | percentage_hash = ((int(hash_name_hashed, 16) % 193 | (MAX_NUM_IMAGES_PER_CLASS + 1)) * 194 | (100.0 / MAX_NUM_IMAGES_PER_CLASS)) 195 | if percentage_hash < validation_percentage: 196 | validation_images.append(base_name) 197 | elif percentage_hash < (testing_percentage + validation_percentage): 198 | testing_images.append(base_name) 199 | else: 200 | training_images.append(base_name) 201 | result[label_name] = { 202 | 'dir': dir_name, 203 | 'training': training_images, 204 | 'testing': testing_images, 205 | 'validation': validation_images, 206 | } 207 | return result 208 | 209 | 210 | def get_image_path(image_lists, label_name, index, image_dir, category): 211 | """"Returns a path to an image for a label at the given index. 212 | 213 | Args: 214 | image_lists: Dictionary of training images for each label. 215 | label_name: Label string we want to get an image for. 216 | index: Int offset of the image we want. This will be moduloed by the 217 | available number of images for the label, so it can be arbitrarily large. 218 | image_dir: Root folder string of the subfolders containing the training 219 | images. 
220 | category: Name string of set to pull images from - training, testing, or 221 | validation. 222 | 223 | Returns: 224 | File system path string to an image that meets the requested parameters. 225 | 226 | """ 227 | if label_name not in image_lists: 228 | tf.logging.fatal('Label does not exist %s.', label_name) 229 | label_lists = image_lists[label_name] 230 | if category not in label_lists: 231 | tf.logging.fatal('Category does not exist %s.', category) 232 | category_list = label_lists[category] 233 | if not category_list: 234 | tf.logging.fatal('Label %s has no images in the category %s.', 235 | label_name, category) 236 | mod_index = index % len(category_list) 237 | base_name = category_list[mod_index] 238 | sub_dir = label_lists['dir'] 239 | full_path = os.path.join(image_dir, sub_dir, base_name) 240 | return full_path 241 | 242 | 243 | def get_bottleneck_path(image_lists, label_name, index, bottleneck_dir, 244 | category, architecture): 245 | """"Returns a path to a bottleneck file for a label at the given index. 246 | 247 | Args: 248 | image_lists: Dictionary of training images for each label. 249 | label_name: Label string we want to get an image for. 250 | index: Integer offset of the image we want. This will be moduloed by the 251 | available number of images for the label, so it can be arbitrarily large. 252 | bottleneck_dir: Folder string holding cached files of bottleneck values. 253 | category: Name string of set to pull images from - training, testing, or 254 | validation. 255 | architecture: The name of the model architecture. 256 | 257 | Returns: 258 | File system path string to an image that meets the requested parameters. 259 | """ 260 | return get_image_path(image_lists, label_name, index, bottleneck_dir, 261 | category) + '_' + architecture + '.txt' 262 | 263 | 264 | def create_model_graph(model_info): 265 | """"Creates a graph from saved GraphDef file and returns a Graph object. 266 | 267 | Args: 268 | model_info: Dictionary containing information about the model architecture. 269 | 270 | Returns: 271 | Graph holding the trained Inception network, and various tensors we'll be 272 | manipulating. 273 | """ 274 | with tf.Graph().as_default() as graph: 275 | model_path = os.path.join(FLAGS.model_dir, model_info['model_file_name']) 276 | with gfile.FastGFile(model_path, 'rb') as f: 277 | graph_def = tf.GraphDef() 278 | graph_def.ParseFromString(f.read()) 279 | bottleneck_tensor, resized_input_tensor = (tf.import_graph_def( 280 | graph_def, 281 | name='', 282 | return_elements=[ 283 | model_info['bottleneck_tensor_name'], 284 | model_info['resized_input_tensor_name'], 285 | ])) 286 | return graph, bottleneck_tensor, resized_input_tensor 287 | 288 | 289 | def run_bottleneck_on_image(sess, image_data, image_data_tensor, 290 | decoded_image_tensor, resized_input_tensor, 291 | bottleneck_tensor): 292 | """Runs inference on an image to extract the 'bottleneck' summary layer. 293 | 294 | Args: 295 | sess: Current active TensorFlow Session. 296 | image_data: String of raw JPEG data. 297 | image_data_tensor: Input data layer in the graph. 298 | decoded_image_tensor: Output of initial image resizing and preprocessing. 299 | resized_input_tensor: The input node of the recognition graph. 300 | bottleneck_tensor: Layer before the final softmax. 301 | 302 | Returns: 303 | Numpy array of bottleneck values. 304 | """ 305 | # First decode the JPEG image, resize it, and rescale the pixel values. 
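  # (decoded_image_tensor here is the output of the small decode-and-resize
  # sub-graph built by add_jpeg_decoding(); its result is then fed into the
  # recognition network below.)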
306 | resized_input_values = sess.run(decoded_image_tensor, 307 | {image_data_tensor: image_data}) 308 | # Then run it through the recognition network. 309 | bottleneck_values = sess.run(bottleneck_tensor, 310 | {resized_input_tensor: resized_input_values}) 311 | bottleneck_values = np.squeeze(bottleneck_values) 312 | return bottleneck_values 313 | 314 | 315 | def maybe_download_and_extract(data_url): 316 | """Download and extract model tar file. 317 | 318 | If the pretrained model we're using doesn't already exist, this function 319 | downloads it from the TensorFlow.org website and unpacks it into a directory. 320 | 321 | Args: 322 | data_url: Web location of the tar file containing the pretrained model. 323 | """ 324 | dest_directory = FLAGS.model_dir 325 | if not os.path.exists(dest_directory): 326 | os.makedirs(dest_directory) 327 | filename = data_url.split('/')[-1] 328 | filepath = os.path.join(dest_directory, filename) 329 | if not os.path.exists(filepath): 330 | 331 | def _progress(count, block_size, total_size): 332 | sys.stdout.write('\r>> Downloading %s %.1f%%' % 333 | (filename, 334 | float(count * block_size) / float(total_size) * 100.0)) 335 | sys.stdout.flush() 336 | 337 | filepath, _ = urllib.request.urlretrieve(data_url, filepath, _progress) 338 | print() 339 | statinfo = os.stat(filepath) 340 | tf.logging.info('Successfully downloaded', filename, statinfo.st_size, 341 | 'bytes.') 342 | tarfile.open(filepath, 'r:gz').extractall(dest_directory) 343 | 344 | 345 | def ensure_dir_exists(dir_name): 346 | """Makes sure the folder exists on disk. 347 | 348 | Args: 349 | dir_name: Path string to the folder we want to create. 350 | """ 351 | if not os.path.exists(dir_name): 352 | os.makedirs(dir_name) 353 | 354 | 355 | bottleneck_path_2_bottleneck_values = {} 356 | 357 | 358 | def create_bottleneck_file(bottleneck_path, image_lists, label_name, index, 359 | image_dir, category, sess, jpeg_data_tensor, 360 | decoded_image_tensor, resized_input_tensor, 361 | bottleneck_tensor): 362 | """Create a single bottleneck file.""" 363 | tf.logging.info('Creating bottleneck at ' + bottleneck_path) 364 | image_path = get_image_path(image_lists, label_name, index, 365 | image_dir, category) 366 | if not gfile.Exists(image_path): 367 | tf.logging.fatal('File does not exist %s', image_path) 368 | image_data = gfile.FastGFile(image_path, 'rb').read() 369 | try: 370 | bottleneck_values = run_bottleneck_on_image( 371 | sess, image_data, jpeg_data_tensor, decoded_image_tensor, 372 | resized_input_tensor, bottleneck_tensor) 373 | except Exception as e: 374 | raise RuntimeError('Error during processing file %s (%s)' % (image_path, 375 | str(e))) 376 | bottleneck_string = ','.join(str(x) for x in bottleneck_values) 377 | with open(bottleneck_path, 'w') as bottleneck_file: 378 | bottleneck_file.write(bottleneck_string) 379 | 380 | 381 | def get_or_create_bottleneck(sess, image_lists, label_name, index, image_dir, 382 | category, bottleneck_dir, jpeg_data_tensor, 383 | decoded_image_tensor, resized_input_tensor, 384 | bottleneck_tensor, architecture): 385 | """Retrieves or calculates bottleneck values for an image. 386 | 387 | If a cached version of the bottleneck data exists on-disk, return that, 388 | otherwise calculate the data and save it to disk for future use. 389 | 390 | Args: 391 | sess: The current active TensorFlow Session. 392 | image_lists: Dictionary of training images for each label. 393 | label_name: Label string we want to get an image for. 
394 | index: Integer offset of the image we want. This will be modulo-ed by the 395 | available number of images for the label, so it can be arbitrarily large. 396 | image_dir: Root folder string of the subfolders containing the training 397 | images. 398 | category: Name string of which set to pull images from - training, testing, 399 | or validation. 400 | bottleneck_dir: Folder string holding cached files of bottleneck values. 401 | jpeg_data_tensor: The tensor to feed loaded jpeg data into. 402 | decoded_image_tensor: The output of decoding and resizing the image. 403 | resized_input_tensor: The input node of the recognition graph. 404 | bottleneck_tensor: The output tensor for the bottleneck values. 405 | architecture: The name of the model architecture. 406 | 407 | Returns: 408 | Numpy array of values produced by the bottleneck layer for the image. 409 | """ 410 | label_lists = image_lists[label_name] 411 | sub_dir = label_lists['dir'] 412 | sub_dir_path = os.path.join(bottleneck_dir, sub_dir) 413 | ensure_dir_exists(sub_dir_path) 414 | bottleneck_path = get_bottleneck_path(image_lists, label_name, index, 415 | bottleneck_dir, category, architecture) 416 | if not os.path.exists(bottleneck_path): 417 | create_bottleneck_file(bottleneck_path, image_lists, label_name, index, 418 | image_dir, category, sess, jpeg_data_tensor, 419 | decoded_image_tensor, resized_input_tensor, 420 | bottleneck_tensor) 421 | with open(bottleneck_path, 'r') as bottleneck_file: 422 | bottleneck_string = bottleneck_file.read() 423 | did_hit_error = False 424 | try: 425 | bottleneck_values = [float(x) for x in bottleneck_string.split(',')] 426 | except ValueError: 427 | tf.logging.warning('Invalid float found, recreating bottleneck') 428 | did_hit_error = True 429 | if did_hit_error: 430 | create_bottleneck_file(bottleneck_path, image_lists, label_name, index, 431 | image_dir, category, sess, jpeg_data_tensor, 432 | decoded_image_tensor, resized_input_tensor, 433 | bottleneck_tensor) 434 | with open(bottleneck_path, 'r') as bottleneck_file: 435 | bottleneck_string = bottleneck_file.read() 436 | # Allow exceptions to propagate here, since they shouldn't happen after a 437 | # fresh creation 438 | bottleneck_values = [float(x) for x in bottleneck_string.split(',')] 439 | return bottleneck_values 440 | 441 | 442 | def cache_bottlenecks(sess, image_lists, image_dir, bottleneck_dir, 443 | jpeg_data_tensor, decoded_image_tensor, 444 | resized_input_tensor, bottleneck_tensor, architecture): 445 | """Ensures all the training, testing, and validation bottlenecks are cached. 446 | 447 | Because we're likely to read the same image multiple times (if there are no 448 | distortions applied during training) it can speed things up a lot if we 449 | calculate the bottleneck layer values once for each image during 450 | preprocessing, and then just read those cached values repeatedly during 451 | training. Here we go through all the images we've found, calculate those 452 | values, and save them off. 453 | 454 | Args: 455 | sess: The current active TensorFlow Session. 456 | image_lists: Dictionary of training images for each label. 457 | image_dir: Root folder string of the subfolders containing the training 458 | images. 459 | bottleneck_dir: Folder string holding cached files of bottleneck values. 460 | jpeg_data_tensor: Input tensor for jpeg data from file. 461 | decoded_image_tensor: The output of decoding and resizing the image. 462 | resized_input_tensor: The input node of the recognition graph. 
463 | bottleneck_tensor: The penultimate output layer of the graph. 464 | architecture: The name of the model architecture. 465 | 466 | Returns: 467 | Nothing. 468 | """ 469 | how_many_bottlenecks = 0 470 | ensure_dir_exists(bottleneck_dir) 471 | for label_name, label_lists in image_lists.items(): 472 | for category in ['training', 'testing', 'validation']: 473 | category_list = label_lists[category] 474 | for index, unused_base_name in enumerate(category_list): 475 | get_or_create_bottleneck( 476 | sess, image_lists, label_name, index, image_dir, category, 477 | bottleneck_dir, jpeg_data_tensor, decoded_image_tensor, 478 | resized_input_tensor, bottleneck_tensor, architecture) 479 | 480 | how_many_bottlenecks += 1 481 | if how_many_bottlenecks % 100 == 0: 482 | tf.logging.info( 483 | str(how_many_bottlenecks) + ' bottleneck files created.') 484 | 485 | 486 | def get_random_cached_bottlenecks(sess, image_lists, how_many, category, 487 | bottleneck_dir, image_dir, jpeg_data_tensor, 488 | decoded_image_tensor, resized_input_tensor, 489 | bottleneck_tensor, architecture): 490 | """Retrieves bottleneck values for cached images. 491 | 492 | If no distortions are being applied, this function can retrieve the cached 493 | bottleneck values directly from disk for images. It picks a random set of 494 | images from the specified category. 495 | 496 | Args: 497 | sess: Current TensorFlow Session. 498 | image_lists: Dictionary of training images for each label. 499 | how_many: If positive, a random sample of this size will be chosen. 500 | If negative, all bottlenecks will be retrieved. 501 | category: Name string of which set to pull from - training, testing, or 502 | validation. 503 | bottleneck_dir: Folder string holding cached files of bottleneck values. 504 | image_dir: Root folder string of the subfolders containing the training 505 | images. 506 | jpeg_data_tensor: The layer to feed jpeg image data into. 507 | decoded_image_tensor: The output of decoding and resizing the image. 508 | resized_input_tensor: The input node of the recognition graph. 509 | bottleneck_tensor: The bottleneck output layer of the CNN graph. 510 | architecture: The name of the model architecture. 511 | 512 | Returns: 513 | List of bottleneck arrays, their corresponding ground truths, and the 514 | relevant filenames. 515 | """ 516 | class_count = len(image_lists.keys()) 517 | bottlenecks = [] 518 | ground_truths = [] 519 | filenames = [] 520 | if how_many >= 0: 521 | # Retrieve a random sample of bottlenecks. 522 | for unused_i in range(how_many): 523 | label_index = random.randrange(class_count) 524 | label_name = list(image_lists.keys())[label_index] 525 | image_index = random.randrange(MAX_NUM_IMAGES_PER_CLASS + 1) 526 | image_name = get_image_path(image_lists, label_name, image_index, 527 | image_dir, category) 528 | bottleneck = get_or_create_bottleneck( 529 | sess, image_lists, label_name, image_index, image_dir, category, 530 | bottleneck_dir, jpeg_data_tensor, decoded_image_tensor, 531 | resized_input_tensor, bottleneck_tensor, architecture) 532 | ground_truth = np.zeros(class_count, dtype=np.float32) 533 | ground_truth[label_index] = 1.0 534 | bottlenecks.append(bottleneck) 535 | ground_truths.append(ground_truth) 536 | filenames.append(image_name) 537 | else: 538 | # Retrieve all bottlenecks. 
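    # (Reached when how_many is negative: every image of every label in the
    # chosen category is returned, in order.)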
539 | for label_index, label_name in enumerate(image_lists.keys()): 540 | for image_index, image_name in enumerate( 541 | image_lists[label_name][category]): 542 | image_name = get_image_path(image_lists, label_name, image_index, 543 | image_dir, category) 544 | bottleneck = get_or_create_bottleneck( 545 | sess, image_lists, label_name, image_index, image_dir, category, 546 | bottleneck_dir, jpeg_data_tensor, decoded_image_tensor, 547 | resized_input_tensor, bottleneck_tensor, architecture) 548 | ground_truth = np.zeros(class_count, dtype=np.float32) 549 | ground_truth[label_index] = 1.0 550 | bottlenecks.append(bottleneck) 551 | ground_truths.append(ground_truth) 552 | filenames.append(image_name) 553 | return bottlenecks, ground_truths, filenames 554 | 555 | 556 | def get_random_distorted_bottlenecks( 557 | sess, image_lists, how_many, category, image_dir, input_jpeg_tensor, 558 | distorted_image, resized_input_tensor, bottleneck_tensor): 559 | """Retrieves bottleneck values for training images, after distortions. 560 | 561 | If we're training with distortions like crops, scales, or flips, we have to 562 | recalculate the full model for every image, and so we can't use cached 563 | bottleneck values. Instead we find random images for the requested category, 564 | run them through the distortion graph, and then the full graph to get the 565 | bottleneck results for each. 566 | 567 | Args: 568 | sess: Current TensorFlow Session. 569 | image_lists: Dictionary of training images for each label. 570 | how_many: The integer number of bottleneck values to return. 571 | category: Name string of which set of images to fetch - training, testing, 572 | or validation. 573 | image_dir: Root folder string of the subfolders containing the training 574 | images. 575 | input_jpeg_tensor: The input layer we feed the image data to. 576 | distorted_image: The output node of the distortion graph. 577 | resized_input_tensor: The input node of the recognition graph. 578 | bottleneck_tensor: The bottleneck output layer of the CNN graph. 579 | 580 | Returns: 581 | List of bottleneck arrays and their corresponding ground truths. 582 | """ 583 | class_count = len(image_lists.keys()) 584 | bottlenecks = [] 585 | ground_truths = [] 586 | for unused_i in range(how_many): 587 | label_index = random.randrange(class_count) 588 | label_name = list(image_lists.keys())[label_index] 589 | image_index = random.randrange(MAX_NUM_IMAGES_PER_CLASS + 1) 590 | image_path = get_image_path(image_lists, label_name, image_index, image_dir, 591 | category) 592 | if not gfile.Exists(image_path): 593 | tf.logging.fatal('File does not exist %s', image_path) 594 | jpeg_data = gfile.FastGFile(image_path, 'rb').read() 595 | # Note that we materialize the distorted_image_data as a numpy array before 596 | # sending running inference on the image. This involves 2 memory copies and 597 | # might be optimized in other implementations. 
598 | distorted_image_data = sess.run(distorted_image, 599 | {input_jpeg_tensor: jpeg_data}) 600 | bottleneck_values = sess.run(bottleneck_tensor, 601 | {resized_input_tensor: distorted_image_data}) 602 | bottleneck_values = np.squeeze(bottleneck_values) 603 | ground_truth = np.zeros(class_count, dtype=np.float32) 604 | ground_truth[label_index] = 1.0 605 | bottlenecks.append(bottleneck_values) 606 | ground_truths.append(ground_truth) 607 | return bottlenecks, ground_truths 608 | 609 | 610 | def should_distort_images(flip_left_right, random_crop, random_scale, 611 | random_brightness): 612 | """Whether any distortions are enabled, from the input flags. 613 | 614 | Args: 615 | flip_left_right: Boolean whether to randomly mirror images horizontally. 616 | random_crop: Integer percentage setting the total margin used around the 617 | crop box. 618 | random_scale: Integer percentage of how much to vary the scale by. 619 | random_brightness: Integer range to randomly multiply the pixel values by. 620 | 621 | Returns: 622 | Boolean value indicating whether any distortions should be applied. 623 | """ 624 | return (flip_left_right or (random_crop != 0) or (random_scale != 0) or 625 | (random_brightness != 0)) 626 | 627 | 628 | def add_input_distortions(flip_left_right, random_crop, random_scale, 629 | random_brightness, input_width, input_height, 630 | input_depth, input_mean, input_std): 631 | """Creates the operations to apply the specified distortions. 632 | 633 | During training it can help to improve the results if we run the images 634 | through simple distortions like crops, scales, and flips. These reflect the 635 | kind of variations we expect in the real world, and so can help train the 636 | model to cope with natural data more effectively. Here we take the supplied 637 | parameters and construct a network of operations to apply them to an image. 638 | 639 | Cropping 640 | ~~~~~~~~ 641 | 642 | Cropping is done by placing a bounding box at a random position in the full 643 | image. The cropping parameter controls the size of that box relative to the 644 | input image. If it's zero, then the box is the same size as the input and no 645 | cropping is performed. If the value is 50%, then the crop box will be half the 646 | width and height of the input. In a diagram it looks like this: 647 | 648 | < width > 649 | +---------------------+ 650 | | | 651 | | width - crop% | 652 | | < > | 653 | | +------+ | 654 | | | | | 655 | | | | | 656 | | | | | 657 | | +------+ | 658 | | | 659 | | | 660 | +---------------------+ 661 | 662 | Scaling 663 | ~~~~~~~ 664 | 665 | Scaling is a lot like cropping, except that the bounding box is always 666 | centered and its size varies randomly within the given range. For example if 667 | the scale percentage is zero, then the bounding box is the same size as the 668 | input and no scaling is applied. If it's 50%, then the bounding box will be in 669 | a random range between half the width and height and full size. 670 | 671 | Args: 672 | flip_left_right: Boolean whether to randomly mirror images horizontally. 673 | random_crop: Integer percentage setting the total margin used around the 674 | crop box. 675 | random_scale: Integer percentage of how much to vary the scale by. 676 | random_brightness: Integer range to randomly multiply the pixel values by. 677 | graph. 678 | input_width: Horizontal size of expected input image to model. 679 | input_height: Vertical size of expected input image to model. 
680 | input_depth: How many channels the expected input image should have. 681 | input_mean: Pixel value that should be zero in the image for the graph. 682 | input_std: How much to divide the pixel values by before recognition. 683 | 684 | Returns: 685 | The jpeg input layer and the distorted result tensor. 686 | """ 687 | 688 | jpeg_data = tf.placeholder(tf.string, name='DistortJPGInput') 689 | decoded_image = tf.image.decode_jpeg(jpeg_data, channels=input_depth) 690 | decoded_image_as_float = tf.cast(decoded_image, dtype=tf.float32) 691 | decoded_image_4d = tf.expand_dims(decoded_image_as_float, 0) 692 | margin_scale = 1.0 + (random_crop / 100.0) 693 | resize_scale = 1.0 + (random_scale / 100.0) 694 | margin_scale_value = tf.constant(margin_scale) 695 | resize_scale_value = tf.random_uniform(tensor_shape.scalar(), 696 | minval=1.0, 697 | maxval=resize_scale) 698 | scale_value = tf.multiply(margin_scale_value, resize_scale_value) 699 | precrop_width = tf.multiply(scale_value, input_width) 700 | precrop_height = tf.multiply(scale_value, input_height) 701 | precrop_shape = tf.stack([precrop_height, precrop_width]) 702 | precrop_shape_as_int = tf.cast(precrop_shape, dtype=tf.int32) 703 | precropped_image = tf.image.resize_bilinear(decoded_image_4d, 704 | precrop_shape_as_int) 705 | precropped_image_3d = tf.squeeze(precropped_image, squeeze_dims=[0]) 706 | cropped_image = tf.random_crop(precropped_image_3d, 707 | [input_height, input_width, input_depth]) 708 | if flip_left_right: 709 | flipped_image = tf.image.random_flip_left_right(cropped_image) 710 | else: 711 | flipped_image = cropped_image 712 | brightness_min = 1.0 - (random_brightness / 100.0) 713 | brightness_max = 1.0 + (random_brightness / 100.0) 714 | brightness_value = tf.random_uniform(tensor_shape.scalar(), 715 | minval=brightness_min, 716 | maxval=brightness_max) 717 | brightened_image = tf.multiply(flipped_image, brightness_value) 718 | offset_image = tf.subtract(brightened_image, input_mean) 719 | mul_image = tf.multiply(offset_image, 1.0 / input_std) 720 | distort_result = tf.expand_dims(mul_image, 0, name='DistortResult') 721 | return jpeg_data, distort_result 722 | 723 | 724 | def variable_summaries(var): 725 | """Attach a lot of summaries to a Tensor (for TensorBoard visualization).""" 726 | with tf.name_scope('summaries'): 727 | mean = tf.reduce_mean(var) 728 | tf.summary.scalar('mean', mean) 729 | with tf.name_scope('stddev'): 730 | stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean))) 731 | tf.summary.scalar('stddev', stddev) 732 | tf.summary.scalar('max', tf.reduce_max(var)) 733 | tf.summary.scalar('min', tf.reduce_min(var)) 734 | tf.summary.histogram('histogram', var) 735 | 736 | 737 | def add_final_training_ops(class_count, final_tensor_name, bottleneck_tensor, 738 | bottleneck_tensor_size): 739 | """Adds a new softmax and fully-connected layer for training. 740 | 741 | We need to retrain the top layer to identify our new classes, so this function 742 | adds the right operations to the graph, along with some variables to hold the 743 | weights, and then sets up all the gradients for the backward pass. 744 | 745 | The set up for the softmax and fully-connected layers is based on: 746 | https://www.tensorflow.org/versions/master/tutorials/mnist/beginners/index.html 747 | 748 | Args: 749 | class_count: Integer of how many categories of things we're trying to 750 | recognize. 751 | final_tensor_name: Name string for the new final node that produces results. 752 | bottleneck_tensor: The output of the main CNN graph. 
753 | bottleneck_tensor_size: How many entries in the bottleneck vector. 754 | 755 | Returns: 756 | The tensors for the training and cross entropy results, and tensors for the 757 | bottleneck input and ground truth input. 758 | """ 759 | with tf.name_scope('input'): 760 | bottleneck_input = tf.placeholder_with_default( 761 | bottleneck_tensor, 762 | shape=[None, bottleneck_tensor_size], 763 | name='BottleneckInputPlaceholder') 764 | 765 | ground_truth_input = tf.placeholder(tf.float32, 766 | [None, class_count], 767 | name='GroundTruthInput') 768 | 769 | # Organizing the following ops as `final_training_ops` so they're easier 770 | # to see in TensorBoard 771 | layer_name = 'final_training_ops' 772 | with tf.name_scope(layer_name): 773 | with tf.name_scope('weights'): 774 | initial_value = tf.truncated_normal( 775 | [bottleneck_tensor_size, class_count], stddev=0.001) 776 | 777 | layer_weights = tf.Variable(initial_value, name='final_weights') 778 | 779 | variable_summaries(layer_weights) 780 | with tf.name_scope('biases'): 781 | layer_biases = tf.Variable(tf.zeros([class_count]), name='final_biases') 782 | variable_summaries(layer_biases) 783 | with tf.name_scope('Wx_plus_b'): 784 | logits = tf.matmul(bottleneck_input, layer_weights) + layer_biases 785 | tf.summary.histogram('pre_activations', logits) 786 | 787 | final_tensor = tf.nn.softmax(logits, name=final_tensor_name) 788 | tf.summary.histogram('activations', final_tensor) 789 | 790 | with tf.name_scope('cross_entropy'): 791 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits( 792 | labels=ground_truth_input, logits=logits) 793 | with tf.name_scope('total'): 794 | cross_entropy_mean = tf.reduce_mean(cross_entropy) 795 | tf.summary.scalar('cross_entropy', cross_entropy_mean) 796 | 797 | with tf.name_scope('train'): 798 | optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate) 799 | train_step = optimizer.minimize(cross_entropy_mean) 800 | 801 | return (train_step, cross_entropy_mean, bottleneck_input, ground_truth_input, 802 | final_tensor) 803 | 804 | 805 | def add_evaluation_step(result_tensor, ground_truth_tensor): 806 | """Inserts the operations we need to evaluate the accuracy of our results. 807 | 808 | Args: 809 | result_tensor: The new final node that produces results. 810 | ground_truth_tensor: The node we feed ground truth data 811 | into. 812 | 813 | Returns: 814 | Tuple of (evaluation step, prediction). 
815 | """ 816 | with tf.name_scope('accuracy'): 817 | with tf.name_scope('correct_prediction'): 818 | prediction = tf.argmax(result_tensor, 1) 819 | correct_prediction = tf.equal( 820 | prediction, tf.argmax(ground_truth_tensor, 1)) 821 | with tf.name_scope('accuracy'): 822 | evaluation_step = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 823 | tf.summary.scalar('accuracy', evaluation_step) 824 | return evaluation_step, prediction 825 | 826 | 827 | def save_graph_to_file(sess, graph, graph_file_name): 828 | output_graph_def = graph_util.convert_variables_to_constants( 829 | sess, graph.as_graph_def(), [FLAGS.final_tensor_name]) 830 | with gfile.FastGFile(graph_file_name, 'wb') as f: 831 | f.write(output_graph_def.SerializeToString()) 832 | return 833 | 834 | 835 | def prepare_file_system(): 836 | # Setup the directory we'll write summaries to for TensorBoard 837 | if tf.gfile.Exists(FLAGS.summaries_dir): 838 | tf.gfile.DeleteRecursively(FLAGS.summaries_dir) 839 | tf.gfile.MakeDirs(FLAGS.summaries_dir) 840 | if FLAGS.intermediate_store_frequency > 0: 841 | ensure_dir_exists(FLAGS.intermediate_output_graphs_dir) 842 | return 843 | 844 | 845 | def create_model_info(architecture): 846 | """Given the name of a model architecture, returns information about it. 847 | 848 | There are different base image recognition pretrained models that can be 849 | retrained using transfer learning, and this function translates from the name 850 | of a model to the attributes that are needed to download and train with it. 851 | 852 | Args: 853 | architecture: Name of a model architecture. 854 | 855 | Returns: 856 | Dictionary of information about the model, or None if the name isn't 857 | recognized 858 | 859 | Raises: 860 | ValueError: If architecture name is unknown. 
861 | """ 862 | architecture = architecture.lower() 863 | if architecture == 'inception_v3': 864 | # pylint: disable=line-too-long 865 | data_url = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz' 866 | # pylint: enable=line-too-long 867 | bottleneck_tensor_name = 'pool_3/_reshape:0' 868 | bottleneck_tensor_size = 2048 869 | input_width = 299 870 | input_height = 299 871 | input_depth = 3 872 | resized_input_tensor_name = 'Mul:0' 873 | model_file_name = 'classify_image_graph_def.pb' 874 | input_mean = 128 875 | input_std = 128 876 | elif architecture.startswith('mobilenet_'): 877 | parts = architecture.split('_') 878 | if len(parts) != 3 and len(parts) != 4: 879 | tf.logging.error("Couldn't understand architecture name '%s'", 880 | architecture) 881 | return None 882 | version_string = parts[1] 883 | if (version_string != '1.0' and version_string != '0.75' and 884 | version_string != '0.50' and version_string != '0.25'): 885 | tf.logging.error( 886 | """"The Mobilenet version should be '1.0', '0.75', '0.50', or '0.25', 887 | but found '%s' for architecture '%s'""", 888 | version_string, architecture) 889 | return None 890 | size_string = parts[2] 891 | if (size_string != '224' and size_string != '192' and 892 | size_string != '160' and size_string != '128'): 893 | tf.logging.error( 894 | """The Mobilenet input size should be '224', '192', '160', or '128', 895 | but found '%s' for architecture '%s'""", 896 | size_string, architecture) 897 | return None 898 | if len(parts) == 3: 899 | is_quantized = False 900 | else: 901 | if parts[3] != 'quantized': 902 | tf.logging.error( 903 | "Couldn't understand architecture suffix '%s' for '%s'", parts[3], 904 | architecture) 905 | return None 906 | is_quantized = True 907 | data_url = 'http://download.tensorflow.org/models/mobilenet_v1_' 908 | data_url += version_string + '_' + size_string + '_frozen.tgz' 909 | bottleneck_tensor_name = 'MobilenetV1/Predictions/Reshape:0' 910 | bottleneck_tensor_size = 1001 911 | input_width = int(size_string) 912 | input_height = int(size_string) 913 | input_depth = 3 914 | resized_input_tensor_name = 'input:0' 915 | if is_quantized: 916 | model_base_name = 'quantized_graph.pb' 917 | else: 918 | model_base_name = 'frozen_graph.pb' 919 | model_dir_name = 'mobilenet_v1_' + version_string + '_' + size_string 920 | model_file_name = os.path.join(model_dir_name, model_base_name) 921 | input_mean = 127.5 922 | input_std = 127.5 923 | else: 924 | tf.logging.error("Couldn't understand architecture name '%s'", architecture) 925 | raise ValueError('Unknown architecture', architecture) 926 | 927 | return { 928 | 'data_url': data_url, 929 | 'bottleneck_tensor_name': bottleneck_tensor_name, 930 | 'bottleneck_tensor_size': bottleneck_tensor_size, 931 | 'input_width': input_width, 932 | 'input_height': input_height, 933 | 'input_depth': input_depth, 934 | 'resized_input_tensor_name': resized_input_tensor_name, 935 | 'model_file_name': model_file_name, 936 | 'input_mean': input_mean, 937 | 'input_std': input_std, 938 | } 939 | 940 | 941 | def add_jpeg_decoding(input_width, input_height, input_depth, input_mean, 942 | input_std): 943 | """Adds operations that perform JPEG decoding and resizing to the graph.. 944 | 945 | Args: 946 | input_width: Desired width of the image fed into the recognizer graph. 947 | input_height: Desired width of the image fed into the recognizer graph. 948 | input_depth: Desired channels of the image fed into the recognizer graph. 
949 | input_mean: Pixel value that should be zero in the image for the graph. 950 | input_std: How much to divide the pixel values by before recognition. 951 | 952 | Returns: 953 | Tensors for the node to feed JPEG data into, and the output of the 954 | preprocessing steps. 955 | """ 956 | jpeg_data = tf.placeholder(tf.string, name='DecodeJPGInput') 957 | decoded_image = tf.image.decode_jpeg(jpeg_data, channels=input_depth) 958 | decoded_image_as_float = tf.cast(decoded_image, dtype=tf.float32) 959 | decoded_image_4d = tf.expand_dims(decoded_image_as_float, 0) 960 | resize_shape = tf.stack([input_height, input_width]) 961 | resize_shape_as_int = tf.cast(resize_shape, dtype=tf.int32) 962 | resized_image = tf.image.resize_bilinear(decoded_image_4d, 963 | resize_shape_as_int) 964 | offset_image = tf.subtract(resized_image, input_mean) 965 | mul_image = tf.multiply(offset_image, 1.0 / input_std) 966 | return jpeg_data, mul_image 967 | 968 | 969 | def main(_): 970 | # Needed to make sure the logging output is visible. 971 | # See https://github.com/tensorflow/tensorflow/issues/3047 972 | tf.logging.set_verbosity(tf.logging.INFO) 973 | 974 | # Prepare necessary directories that can be used during training 975 | prepare_file_system() 976 | 977 | # Gather information about the model architecture we'll be using. 978 | model_info = create_model_info(FLAGS.architecture) 979 | if not model_info: 980 | tf.logging.error('Did not recognize architecture flag') 981 | return -1 982 | 983 | # Set up the pre-trained graph. 984 | maybe_download_and_extract(model_info['data_url']) 985 | graph, bottleneck_tensor, resized_image_tensor = ( 986 | create_model_graph(model_info)) 987 | 988 | # Look at the folder structure, and create lists of all the images. 989 | image_lists = create_image_lists(FLAGS.image_dir, FLAGS.testing_percentage, 990 | FLAGS.validation_percentage) 991 | class_count = len(image_lists.keys()) 992 | if class_count == 0: 993 | tf.logging.error('No valid folders of images found at ' + FLAGS.image_dir) 994 | return -1 995 | if class_count == 1: 996 | tf.logging.error('Only one valid folder of images found at ' + 997 | FLAGS.image_dir + 998 | ' - multiple classes are needed for classification.') 999 | return -1 1000 | 1001 | # See if the command-line flags mean we're applying any distortions. 1002 | do_distort_images = should_distort_images( 1003 | FLAGS.flip_left_right, FLAGS.random_crop, FLAGS.random_scale, 1004 | FLAGS.random_brightness) 1005 | 1006 | with tf.Session(graph=graph) as sess: 1007 | # Set up the image decoding sub-graph. 1008 | jpeg_data_tensor, decoded_image_tensor = add_jpeg_decoding( 1009 | model_info['input_width'], model_info['input_height'], 1010 | model_info['input_depth'], model_info['input_mean'], 1011 | model_info['input_std']) 1012 | 1013 | if do_distort_images: 1014 | # We will be applying distortions, so setup the operations we'll need. 1015 | (distorted_jpeg_data_tensor, 1016 | distorted_image_tensor) = add_input_distortions( 1017 | FLAGS.flip_left_right, FLAGS.random_crop, FLAGS.random_scale, 1018 | FLAGS.random_brightness, model_info['input_width'], 1019 | model_info['input_height'], model_info['input_depth'], 1020 | model_info['input_mean'], model_info['input_std']) 1021 | else: 1022 | # We'll make sure we've calculated the 'bottleneck' image summaries and 1023 | # cached them on disk. 
1024 | cache_bottlenecks(sess, image_lists, FLAGS.image_dir, 1025 | FLAGS.bottleneck_dir, jpeg_data_tensor, 1026 | decoded_image_tensor, resized_image_tensor, 1027 | bottleneck_tensor, FLAGS.architecture) 1028 | 1029 | # Add the new layer that we'll be training. 1030 | (train_step, cross_entropy, bottleneck_input, ground_truth_input, 1031 | final_tensor) = add_final_training_ops( 1032 | len(image_lists.keys()), FLAGS.final_tensor_name, bottleneck_tensor, 1033 | model_info['bottleneck_tensor_size']) 1034 | 1035 | # Create the operations we need to evaluate the accuracy of our new layer. 1036 | evaluation_step, prediction = add_evaluation_step( 1037 | final_tensor, ground_truth_input) 1038 | 1039 | # Merge all the summaries and write them out to the summaries_dir 1040 | merged = tf.summary.merge_all() 1041 | train_writer = tf.summary.FileWriter(FLAGS.summaries_dir + '/train', 1042 | sess.graph) 1043 | 1044 | validation_writer = tf.summary.FileWriter( 1045 | FLAGS.summaries_dir + '/validation') 1046 | 1047 | # Set up all our weights to their initial default values. 1048 | init = tf.global_variables_initializer() 1049 | sess.run(init) 1050 | 1051 | # Run the training for as many cycles as requested on the command line. 1052 | for i in range(FLAGS.how_many_training_steps): 1053 | # Get a batch of input bottleneck values, either calculated fresh every 1054 | # time with distortions applied, or from the cache stored on disk. 1055 | if do_distort_images: 1056 | (train_bottlenecks, 1057 | train_ground_truth) = get_random_distorted_bottlenecks( 1058 | sess, image_lists, FLAGS.train_batch_size, 'training', 1059 | FLAGS.image_dir, distorted_jpeg_data_tensor, 1060 | distorted_image_tensor, resized_image_tensor, bottleneck_tensor) 1061 | else: 1062 | (train_bottlenecks, 1063 | train_ground_truth, _) = get_random_cached_bottlenecks( 1064 | sess, image_lists, FLAGS.train_batch_size, 'training', 1065 | FLAGS.bottleneck_dir, FLAGS.image_dir, jpeg_data_tensor, 1066 | decoded_image_tensor, resized_image_tensor, bottleneck_tensor, 1067 | FLAGS.architecture) 1068 | # Feed the bottlenecks and ground truth into the graph, and run a training 1069 | # step. Capture training summaries for TensorBoard with the `merged` op. 1070 | train_summary, _ = sess.run( 1071 | [merged, train_step], 1072 | feed_dict={bottleneck_input: train_bottlenecks, 1073 | ground_truth_input: train_ground_truth}) 1074 | train_writer.add_summary(train_summary, i) 1075 | 1076 | # Every so often, print out how well the graph is training. 1077 | is_last_step = (i + 1 == FLAGS.how_many_training_steps) 1078 | if (i % FLAGS.eval_step_interval) == 0 or is_last_step: 1079 | train_accuracy, cross_entropy_value = sess.run( 1080 | [evaluation_step, cross_entropy], 1081 | feed_dict={bottleneck_input: train_bottlenecks, 1082 | ground_truth_input: train_ground_truth}) 1083 | tf.logging.info('%s: Step %d: Train accuracy = %.1f%%' % 1084 | (datetime.now(), i, train_accuracy * 100)) 1085 | tf.logging.info('%s: Step %d: Cross entropy = %f' % 1086 | (datetime.now(), i, cross_entropy_value)) 1087 | validation_bottlenecks, validation_ground_truth, _ = ( 1088 | get_random_cached_bottlenecks( 1089 | sess, image_lists, FLAGS.validation_batch_size, 'validation', 1090 | FLAGS.bottleneck_dir, FLAGS.image_dir, jpeg_data_tensor, 1091 | decoded_image_tensor, resized_image_tensor, bottleneck_tensor, 1092 | FLAGS.architecture)) 1093 | # Run a validation step and capture training summaries for TensorBoard 1094 | # with the `merged` op. 
1095 | validation_summary, validation_accuracy = sess.run( 1096 | [merged, evaluation_step], 1097 | feed_dict={bottleneck_input: validation_bottlenecks, 1098 | ground_truth_input: validation_ground_truth}) 1099 | validation_writer.add_summary(validation_summary, i) 1100 | tf.logging.info('%s: Step %d: Validation accuracy = %.1f%% (N=%d)' % 1101 | (datetime.now(), i, validation_accuracy * 100, 1102 | len(validation_bottlenecks))) 1103 | 1104 | # Store intermediate results 1105 | intermediate_frequency = FLAGS.intermediate_store_frequency 1106 | 1107 | if (intermediate_frequency > 0 and (i % intermediate_frequency == 0) 1108 | and i > 0): 1109 | intermediate_file_name = (FLAGS.intermediate_output_graphs_dir + 1110 | 'intermediate_' + str(i) + '.pb') 1111 | tf.logging.info('Save intermediate result to : ' + 1112 | intermediate_file_name) 1113 | save_graph_to_file(sess, graph, intermediate_file_name) 1114 | 1115 | # We've completed all our training, so run a final test evaluation on 1116 | # some new images we haven't used before. 1117 | test_bottlenecks, test_ground_truth, test_filenames = ( 1118 | get_random_cached_bottlenecks( 1119 | sess, image_lists, FLAGS.test_batch_size, 'testing', 1120 | FLAGS.bottleneck_dir, FLAGS.image_dir, jpeg_data_tensor, 1121 | decoded_image_tensor, resized_image_tensor, bottleneck_tensor, 1122 | FLAGS.architecture)) 1123 | test_accuracy, predictions = sess.run( 1124 | [evaluation_step, prediction], 1125 | feed_dict={bottleneck_input: test_bottlenecks, 1126 | ground_truth_input: test_ground_truth}) 1127 | tf.logging.info('Final test accuracy = %.1f%% (N=%d)' % 1128 | (test_accuracy * 100, len(test_bottlenecks))) 1129 | 1130 | if FLAGS.print_misclassified_test_images: 1131 | tf.logging.info('=== MISCLASSIFIED TEST IMAGES ===') 1132 | for i, test_filename in enumerate(test_filenames): 1133 | if predictions[i] != test_ground_truth[i].argmax(): 1134 | tf.logging.info('%70s %s' % 1135 | (test_filename, 1136 | list(image_lists.keys())[predictions[i]])) 1137 | 1138 | # Write out the trained graph and labels with the weights stored as 1139 | # constants. 1140 | save_graph_to_file(sess, graph, FLAGS.output_graph) 1141 | with gfile.FastGFile(FLAGS.output_labels, 'w') as f: 1142 | f.write('\n'.join(image_lists.keys()) + '\n') 1143 | 1144 | 1145 | if __name__ == '__main__': 1146 | parser = argparse.ArgumentParser() 1147 | parser.add_argument( 1148 | '--image_dir', 1149 | type=str, 1150 | default='', 1151 | help='Path to folders of labeled images.' 1152 | ) 1153 | parser.add_argument( 1154 | '--output_graph', 1155 | type=str, 1156 | default='/tmp/output_graph.pb', 1157 | help='Where to save the trained graph.' 1158 | ) 1159 | parser.add_argument( 1160 | '--intermediate_output_graphs_dir', 1161 | type=str, 1162 | default='/tmp/intermediate_graph/', 1163 | help='Where to save the intermediate graphs.' 1164 | ) 1165 | parser.add_argument( 1166 | '--intermediate_store_frequency', 1167 | type=int, 1168 | default=0, 1169 | help="""\ 1170 | How many steps to store intermediate graph. If "0" then will not 1171 | store.\ 1172 | """ 1173 | ) 1174 | parser.add_argument( 1175 | '--output_labels', 1176 | type=str, 1177 | default='/tmp/output_labels.txt', 1178 | help='Where to save the trained graph\'s labels.' 1179 | ) 1180 | parser.add_argument( 1181 | '--summaries_dir', 1182 | type=str, 1183 | default='/tmp/retrain_logs', 1184 | help='Where to save summary logs for TensorBoard.' 
1185 | ) 1186 | parser.add_argument( 1187 | '--how_many_training_steps', 1188 | type=int, 1189 | default=6000, 1190 | help='How many training steps to run before ending.' 1191 | ) 1192 | parser.add_argument( 1193 | '--learning_rate', 1194 | type=float, 1195 | default=0.01, 1196 | help='How large a learning rate to use when training.' 1197 | ) 1198 | parser.add_argument( 1199 | '--testing_percentage', 1200 | type=int, 1201 | default=10, 1202 | help='What percentage of images to use as a test set.' 1203 | ) 1204 | parser.add_argument( 1205 | '--validation_percentage', 1206 | type=int, 1207 | default=10, 1208 | help='What percentage of images to use as a validation set.' 1209 | ) 1210 | parser.add_argument( 1211 | '--eval_step_interval', 1212 | type=int, 1213 | default=10, 1214 | help='How often to evaluate the training results.' 1215 | ) 1216 | parser.add_argument( 1217 | '--train_batch_size', 1218 | type=int, 1219 | default=100, 1220 | help='How many images to train on at a time.' 1221 | ) 1222 | parser.add_argument( 1223 | '--test_batch_size', 1224 | type=int, 1225 | default=-1, 1226 | help="""\ 1227 | How many images to test on. This test set is only used once, to evaluate 1228 | the final accuracy of the model after training completes. 1229 | A value of -1 causes the entire test set to be used, which leads to more 1230 | stable results across runs.\ 1231 | """ 1232 | ) 1233 | parser.add_argument( 1234 | '--validation_batch_size', 1235 | type=int, 1236 | default=100, 1237 | help="""\ 1238 | How many images to use in an evaluation batch. This validation set is 1239 | used much more often than the test set, and is an early indicator of how 1240 | accurate the model is during training. 1241 | A value of -1 causes the entire validation set to be used, which leads to 1242 | more stable results across training iterations, but may be slower on large 1243 | training sets.\ 1244 | """ 1245 | ) 1246 | parser.add_argument( 1247 | '--print_misclassified_test_images', 1248 | default=False, 1249 | help="""\ 1250 | Whether to print out a list of all misclassified test images.\ 1251 | """, 1252 | action='store_true' 1253 | ) 1254 | parser.add_argument( 1255 | '--model_dir', 1256 | type=str, 1257 | default='/tmp/imagenet', 1258 | help="""\ 1259 | Path to classify_image_graph_def.pb, 1260 | imagenet_synset_to_human_label_map.txt, and 1261 | imagenet_2012_challenge_label_map_proto.pbtxt.\ 1262 | """ 1263 | ) 1264 | parser.add_argument( 1265 | '--bottleneck_dir', 1266 | type=str, 1267 | default='/tmp/bottleneck', 1268 | help='Path to cache bottleneck layer values as files.' 
1269 |   )
1270 |   parser.add_argument(
1271 |       '--final_tensor_name',
1272 |       type=str,
1273 |       default='final_result',
1274 |       help="""\
1275 |       The name of the output classification layer in the retrained graph.\
1276 |       """
1277 |   )
1278 |   parser.add_argument(
1279 |       '--flip_left_right',
1280 |       default=False,
1281 |       help="""\
1282 |       Whether to randomly flip half of the training images horizontally.\
1283 |       """,
1284 |       action='store_true'
1285 |   )
1286 |   parser.add_argument(
1287 |       '--random_crop',
1288 |       type=int,
1289 |       default=0,
1290 |       help="""\
1291 |       A percentage determining how much of a margin to randomly crop off the
1292 |       training images.\
1293 |       """
1294 |   )
1295 |   parser.add_argument(
1296 |       '--random_scale',
1297 |       type=int,
1298 |       default=0,
1299 |       help="""\
1300 |       A percentage determining how much to randomly scale up the size of the
1301 |       training images by.\
1302 |       """
1303 |   )
1304 |   parser.add_argument(
1305 |       '--random_brightness',
1306 |       type=int,
1307 |       default=0,
1308 |       help="""\
1309 |       A percentage determining how much to randomly multiply the training image
1310 |       input pixels up or down by.\
1311 |       """
1312 |   )
1313 |   parser.add_argument(
1314 |       '--architecture',
1315 |       type=str,
1316 |       default='inception_v3',
1317 |       help="""\
1318 |       Which model architecture to use. 'inception_v3' is the most accurate, but
1319 |       also the slowest. For faster or smaller models, choose a MobileNet with the
1320 |       form 'mobilenet_<parameter size>_<input_size>[_quantized]'. For example,
1321 |       'mobilenet_1.0_224' will pick a model that is 17 MB in size and takes 224
1322 |       pixel input images, while 'mobilenet_0.25_128_quantized' will choose a much
1323 |       less accurate, but smaller and faster network that's 920 KB on disk and
1324 |       takes 128x128 images. See https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html
1325 |       for more information on Mobilenet.\
1326 |       """)
1327 |   FLAGS, unparsed = parser.parse_known_args()
1328 |   tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
1329 |
--------------------------------------------------------------------------------
/translation/README.Hindi.md:
--------------------------------------------------------------------------------
1 | # Facial Expression Detection
2 |
3 | This is the code for [this video](https://youtu.be/Dqa-3N8VZbw) by Ritesh on YouTube.
4 |
5 | A facial expression or facial emotion detector can be used to tell whether a person is sad, happy, angry and so on, just from their face. This repository can be used to carry out such a task. It uses your webcam and then identifies your expression in real time. Yes, in real time!
6 |
7 | # The Plan
8 |
9 | This is a three-step process.
First, we load the XML file to detect the presence of faces, and then we retrain our network with our images in five different categories. After that, we import the label_image.py program from the [last video]() and set everything up in real time.
10 |
11 | # Dependencies
12 |
13 | If you haven't installed these already, run the following in CMD/Terminal:
14 |
15 |     pip install tensorflow
16 |     pip install opencv-python
17 |
18 | That's all for now.
19 |
20 | So let's take a brief look at each step.
21 |
22 | ## Step 1 - Implementation of OpenCV HAAR Cascades
23 |
24 | I'm using the "Frontal Face Alt" classifier to detect the presence of a face in the webcam. This file is included with this repository. You can find other classifiers [here](https://github.com/opencv/opencv/tree/master/data/haarcascades).
25 |
26 | Next, we have the task of loading this file, which can be found in the [label.py](https://github.com/MauryaRitesh/Facial-Expression-Detection/blob/master/label.py) program. For example:
27 |
28 |     # We load the xml file
29 |     classifier = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
30 |
31 | Now everything can be set up from the Label.py program. So let's move on to the next step.
32 |
33 | ## Step 2 - Retraining the Network - TensorFlow Image Classifier
34 |
35 | We are going to create an image classifier that identifies whether a person is sad, happy and so on, and then shows this text on the OpenCV window.
36 | This step will consist of several sub-steps:
37 |
38 | - We first need to create a directory named images. In this directory, create five or six sub-directories with names like Happy, Sad, Angry, Calm and Neutral. You can add more than these.
39 | - Now fill these directories with the corresponding images downloaded from the internet. For example, in the "Happy" directory, keep only images of people who are happy.
40 | - Now run the "face-crop.py" program as suggested in the [video](https://youtu.be/Dqa-3N8VZbw).
41 | - Once you have cleaned up the images, you are ready to retrain the network. For this purpose I'm using a MobileNet model, which is quite fast and accurate. To run the training, go to the parent folder, open CMD/Terminal there and run the following:
42 |
43 |     python retrain.py --output_graph=retrained_graph.pb --output_labels=retrained_labels.txt --architecture=MobileNet_1.0_224 --image_dir=images
44 |
45 | That's all for this step.
46 |
47 | ## Step 3 - Importing the Retrained Model and Setting Everything Up
48 |
49 | Finally, I've put everything under the "label_image.py" file, from which you can get everything.
50 | Now run the "label.py" program by typing the following in CMD/Terminal:
51 |
52 |     python label.py
53 |
54 | This will open a new OpenCV window and then identify your facial expression.
55 | Now we're done!
56 |
57 | ## Contribution Guidelines
58 | Thank you for considering contributing to the "Facial Expression Detection" project!
59 | ### Getting Started
60 |
61 | 1. Fork the repository: To contribute, fork the main repository to your GitHub account.
62 |
63 | 2. Clone the repository: Clone your forked repository to your local machine:
64 | ```bash
65 | git clone https://github.com/MauryaRitesh/Facial-Expression-Detection.git
66 | ```
67 |
68 | 3.
Set up the development environment: Install the required dependencies if you haven't already. You can do so by running the following commands:
69 | ```bash
70 | pip install tensorflow
71 | pip install opencv-python
72 | ```
73 |
74 | 4. Create a branch: Create a new branch for your contribution. Choose a descriptive name for the branch that reflects the nature of your contribution.
75 | ```bash
76 | git checkout -b feature/your-feature-name
77 | ```
78 |
79 | 5. Make your changes: Make the necessary changes and additions in your branch.
80 |
81 | 6. Commit your changes: Write clear, concise and well-documented commit messages. Reference any relevant issues or pull requests in your commits.
82 | ```bash
83 | git commit -m "Add new feature"
84 | ```
85 |
86 | 7. Push your changes: Push your branch to your GitHub repository:
87 | ```bash
88 | git push origin feature/your-feature-name
89 | ```
90 |
91 | 8. Create a pull request: Create a pull request from your forked repository to the main repository.
92 |
93 |
94 | Please star this repo if you found something interesting. <3
95 |
--------------------------------------------------------------------------------
/translation/README.Japanese.md:
--------------------------------------------------------------------------------
1 | # Facial Expression Detection
2 |
3 | This is the code for [this video](https://youtu.be/Dqa-3N8VZbw) by Ritesh on YouTube.
4 |
5 | A facial expression or facial emotion detector can be used to tell whether a person is sad, happy, angry and so on, just from their face. This repository can be used for such a task. It uses your webcam and identifies your expression in real time. Yes, in real time!
6 |
7 | # The Plan
8 |
9 | This is a three-step process. First, we load the XML file to detect the presence of faces, and then we retrain our network with images in five different categories. After that, we import the label_image.py program from the [last video]() and set everything up in real time.
10 |
11 | # Dependencies
12 |
13 | If you haven't installed these already, run the following in CMD/Terminal:
14 |
15 |     pip install tensorflow
16 |     pip install opencv-python
17 |
18 | That's all for now.
19 |
20 | So let's take a brief look at each step.
21 |
22 | ## Step 1 - Implementation of OpenCV HAAR Cascades
23 |
24 | I'm using the "Frontal Face Alt" classifier to detect the presence of a face in the webcam. This file is included in this repository. You can find other classifiers [here](https://github.com/opencv/opencv/tree/master/data/haarcascades).
25 |
26 | Next, we have the task of loading this file, which can be found in the [label.py](https://github.com/MauryaRitesh/Facial-Expression-Detection/blob/master/label.py) program. For example:
27 |
28 |     # We load the xml file
29 |     classifier = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
30 |
31 | Now everything can be set up from the Label.py program. So let's move on to the next step.
32 |
33 | ## Step 2 - Retraining the Network - TensorFlow Image Classifier
34 |
35 | We are going to create an image classifier that identifies whether a person is sad, happy and so on, and then shows this text on the OpenCV window.
36 | This step consists of several sub-steps:
37 |
38 | - First, we need to create a directory named images. In this directory, create five or six sub-directories with names like Happy, Sad, Angry, Calm and Neutral. You can add more than these.
39 | - Next, fill these directories with the corresponding images downloaded from the internet. For example, in the "Happy" directory, keep only images of people who are happy.
40 | - Run the "face-crop.py" program as suggested in the [video](https://youtu.be/Dqa-3N8VZbw).
41 | - Once you have cleaned up the images, you are ready to retrain the network. For this purpose I'm using a MobileNet model, which is quite fast and accurate. To run the training, go to the parent folder, open CMD/Terminal there and run the following command:
42 |
43 |     python retrain.py --output_graph=retrained_graph.pb --output_labels=retrained_labels.txt --architecture=MobileNet_1.0_224 --image_dir=images
44 |
45 | That's all for this step.
46 |
47 | ## Step 3 - Importing the Retrained Model and Setting Everything Up
48 |
49 | Finally, I've put everything under the "label_image.py" file, from which you can get everything.
50 | Now run the "label.py" program by typing the following in CMD/Terminal:
51 |
52 |     python label.py
53 |
54 | A new OpenCV window will open and your facial expression will be identified.
55 | Now we're done!
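If you want a feel for how these pieces fit together before opening label.py, here is a minimal sketch. It is not the repository's actual label.py / label_image.py code; it assumes TensorFlow 1.x (like retrain.py), the MobileNet defaults from retrain.py (tensor names `input:0` and `final_result:0`, 224x224 input, mean/std of 127.5), and the retrained_graph.pb / retrained_labels.txt files produced by the command in Step 2:

```python
import cv2
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x, same as retrain.py

# Load the retrained graph and labels written by retrain.py.
with tf.gfile.FastGFile('retrained_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with open('retrained_labels.txt') as label_file:
    labels = [line.strip() for line in label_file]

# Same Haar cascade and webcam setup as in Step 1.
classifier = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
webcam = cv2.VideoCapture(0)

with tf.Session() as sess:
    # Tensor names follow the MobileNet defaults in retrain.py.
    input_tensor = sess.graph.get_tensor_by_name('input:0')
    output_tensor = sess.graph.get_tensor_by_name('final_result:0')
    while True:
        ret, frame = webcam.read()
        if not ret:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in classifier.detectMultiScale(gray):
            # Crop the face and match the 224x224 / 127.5 preprocessing
            # used during retraining.
            face = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
            face = cv2.resize(face, (224, 224)).astype(np.float32)
            face = (face - 127.5) / 127.5
            scores = sess.run(output_tensor,
                              {input_tensor: np.expand_dims(face, 0)})
            label = labels[int(np.argmax(scores))]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow('Facial Expression Detection', frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break

webcam.release()
cv2.destroyAllWindows()
```

For actual use, prefer the repository's own label.py and label_image.py; the sketch above is only meant to show how the retrained graph, the labels file and the Haar cascade come together.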
56 |
57 | ## Contribution Guidelines
58 | Thank you for considering contributing to the "Facial Expression Detection" project.
59 | ### Getting Started
60 |
61 | 1. Fork the repository: To contribute, fork the main repository to your GitHub account.
62 |
63 | 2. Clone the repository: Clone your forked repository to your local machine:
64 |
65 |
66 |     git clone https://github.com/MauryaRitesh/Facial-Expression-Detection.git
67 |
68 | 3. Set up the development environment: Install the required dependencies if you haven't already. You can do so by running the following commands:
69 | ```bash
70 | pip install tensorflow
71 | pip install opencv-python
72 | ```
73 |
74 | 4. Create a branch: Create a new branch for your contribution. Choose a descriptive name for the branch that reflects the nature of your contribution.
75 | ```bash
76 | git checkout -b feature/your-feature-name
77 | ```
78 |
79 | 5. Make your changes: Make the necessary changes and additions in your branch.
80 |
81 | 6. Commit your changes: Write clear, concise and well-documented commit messages. Reference any relevant issues or pull requests in your commits.
82 |
83 | ```bash
84 | git commit -m "Add new feature"
85 | ```
86 |
87 | 7. Push your changes: Push your branch to your GitHub repository:
88 | ```bash
89 | git push origin feature/your-feature-name
90 | ```
91 |
92 | 8. Create a pull request: Create a pull request from your forked repository to the main repository.
93 |
94 |
95 | Please star this repo if you found something interesting. <3
96 |
--------------------------------------------------------------------------------
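A closing note on the distortion flags documented in retrain.py above (`--flip_left_right`, `--random_crop`, `--random_scale`, `--random_brightness`): the crop and scale percentages combine through the `margin_scale` / `resize_scale` arithmetic in `add_input_distortions`. A tiny numeric sketch, using illustrative flag values rather than the defaults:

```python
# Example flag values (not defaults): --random_crop=10 --random_scale=20
random_crop = 10     # percent margin kept around the crop box
random_scale = 20    # maximum percent of extra random scaling
input_width = 224    # MobileNet 224 input size

margin_scale = 1.0 + random_crop / 100.0       # 1.10
max_resize_scale = 1.0 + random_scale / 100.0  # 1.20

# Each image is first resized to somewhere in this range, then randomly
# cropped back down to the model's input size.
print(margin_scale * input_width)                     # 246.4  -> smallest pre-crop width
print(margin_scale * max_resize_scale * input_width)  # 295.68 -> largest pre-crop width
```

Keep in mind that enabling any of these flags disables the bottleneck cache, so every training step runs the full model and training becomes much slower.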