├── .gitignore
├── README.md
├── detector
│   ├── generate_dataset.ipynb
│   ├── models
│   │   ├── faster_rcnn_v1.config
│   │   └── ssd_mobilenet_v1.config
│   ├── run_model.ipynb
│   └── train_model.ipynb
├── horizon
│   └── horizon_line.ipynb
└── output_images
    ├── detection
    │   ├── image10.png
    │   ├── image15.png
    │   ├── image16.png
    │   ├── image17.png
    │   ├── image22.png
    │   └── image29.png
    └── horizon
        ├── image10.png
        ├── image11.png
        ├── image13.png
        ├── image2.png
        └── image26.png

/.gitignore:
--------------------------------------------------------------------------------
1 | # =========== #
2 | #     Dir     #
3 | # =========== #
4 | detector/models/*
5 | !detector/models/*.config
6 | detector/data/*
7 | 
8 | # =========== #
9 | #  Notebooks  #
10 | # =========== #
11 | .idea*
12 | __pycache__*
13 | .ipynb_checkpoints*
14 | 
15 | # =========== #
16 | #   Images    #
17 | # =========== #
18 | *.ppm
19 | *.jpg
20 | *.jpeg
21 | *.png
22 | !output_images/detection/*.png
23 | !output_images/horizon/*.png
24 | 
25 | # =========== #
26 | #   CSV/XML   #
27 | # =========== #
28 | *.csv
29 | *.xml
30 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Iris Drone Challenge
2 | 
3 | This is a submission to the [Iris Challenge](https://www.irisonboard.com/challenge/). Taken from the challenge instructions:
4 | 
5 | ***There are 31 images in the dataset. They are all taken from drones in a range of different environments. Provide us with as much contextual awareness about these environments as you possibly can. We want to automatically understand everything about the
6 | scene in the way a human pilot might.***
7 | 
8 | This repo contains two parts:
9 | - Object detection model for clouds, sun, houses, and trees
10 | - Horizon line detector
11 | 
12 | *Detector*
13 | 
14 | The detector (`detector/*`) uses the TF Object Detection API. The procedure is as follows:
15 | - Find images on the internet, use [LabelImg](https://github.com/tzutalin/labelImg) to label them (I used a tiny dataset of 20 images)
16 | - Generate TFRecord files for training using `detector/generate_dataset.ipynb`
17 | - Train a model from a pre-trained COCO object detection model using `detector/train_model.ipynb`
18 | - Run the saved model on the challenge test_images using `detector/run_model.ipynb`
19 | 
20 | *Horizon Angle*
21 | 
22 | The horizon angle estimator (`horizon/*`) uses OpenCV built-ins. The procedure is as follows:
23 | - Preprocess the image (grayscale conversion, blurring)
24 | - Use Canny edge detection on the image, then dilate the resulting edges
25 | - Use the Hough Line Transform to get candidate lines, fine-tuning parameters as needed
26 | - Average the resulting Hough lines to get the horizon
27 | 
28 | ## Results
29 | 
30 | Below are some *hand-picked* results. (Find more in `output_images/*`)
31 | 
32 | ![alt text](output_images/detection/image15.png "Clouds, Houses, Trees, Suns")
33 | ![alt text](output_images/horizon/image13.png "Horizon Line Detected")
34 | ![alt text](output_images/detection/image16.png "Clouds, Houses, Trees, Suns")
35 | ![alt text](output_images/horizon/image11.png "Horizon Line Detected")
36 | ![alt text](output_images/detection/image29.png "Clouds, Houses, Trees, Suns")
37 | ![alt text](output_images/horizon/image10.png "Horizon Line Detected")
38 | ![alt text](output_images/detection/image22.png "Clouds, Houses, Trees, Suns")
39 | ![alt text](output_images/horizon/image26.png "Horizon Line Detected")
40 | 
41 | ## Requirements
42 | 
43 | - Python 3.5
44 | - [TensorFlow](https://www.tensorflow.org/) 1.4
45 | - [Jupyter Notebooks](https://github.com/jupyter/notebook)
46 | - [OpenCV](https://opencv.org/opencv-3-3.html) 3.3
47 | 
48 | ## Author
49 | 
50 | Hugo Ponte
--------------------------------------------------------------------------------
/detector/generate_dataset.ipynb:
--------------------------------------------------------------------------------
1 | {
2 |  "cells": [
3 |   {
4 |    "cell_type": "markdown",
5 |    "metadata": {},
6 |    "source": [
7 |     "### Object Detector: Dataset Creation\n",
8 |     "\n",
9 |     "This notebook generates the datasets for creating an object detector using the TF models repo. This object detector will detect 4 separate classes:\n",
10 |     "\n",
11 |     " - 1 - Clouds\n",
12 |     " - 2 - Sun\n",
13 |     " - 3 - Houses\n",
14 |     " - 4 - Trees\n",
15 |     "\n",
16 |     "Sources:\n",
17 |     "- [1] https://www.oreilly.com/ideas/object-detection-with-tensorflow\n",
18 |     "- [2] https://github.com/tzutalin/labelImg"
19 |    ]
20 |   },
21 |   {
22 |    "cell_type": "code",
23 |    "execution_count": 3,
24 |    "metadata": {
25 |     "ExecuteTime": {
26 |      "end_time": "2018-02-03T22:16:30.508902Z",
27 |      "start_time": "2018-02-03T22:16:30.475429Z"
28 |     },
29 |     "scrolled": true
30 |    },
31 |    "outputs": [],
32 |    "source": [
33 |     "from pathlib import Path\n",
34 |     "import numpy as np\n",
35 |     "\n",
36 |     "# Image sizes\n",
37 |     "MAX_WIDTH = 1280\n",
38 |     "MAX_HEIGHT = 300\n",
39 |     "\n",
40 |     "# Training params\n",
41 |     "TRAIN_TEST_SPLIT = 0.9\n",
42 |     "\n",
43 |     "# Class dictionary\n",
44 |     "CLASS_NAMES = {1:'cloud', 2:'sun', 3:'house', 4:'tree'}\n",
45 |     "\n",
46 |     "# Define paths to sub-folders\n",
47 |     "root_dir = Path.cwd()\n",
48 |     "images_path = root_dir / 'images'\n",
49 |     "labels_path = root_dir / 'labels'\n",
50 |     "train_path = root_dir / 'models'\n",
51 |     "data_path = root_dir / 'data'\n",
52 |     "\n",
53 |     "# Output file paths\n",
54 |     "train_tfrecord_path = data_path / 'train.record'\n",
55 |     "test_tfrecord_path = data_path / 'test.record'\n",
56 |     "labels_csv_path = data_path / 'labels.csv'\n"
57 |    ]
58 |   },
59 |   {
60 |    "cell_type": "markdown",
61 |    "metadata": {},
62 |    "source": [
63 |     "### Label images\n",
64 |     "\n",
65 |     "Label the images using LabelImg [2]. This creates an XML file for each image with the bounding boxes and classes.",
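    "\n",
    "The training configs under `detector/models/` also reference a `label_map.pbtxt` in `detector/data/`, which is gitignored. A minimal sketch of writing one from the `CLASS_NAMES` dict above (this snippet is an added illustration, not part of the original notebook; `label_map_path` is a new name):\n",
    "\n",
    "```python\n",
    "label_map_path = data_path / 'label_map.pbtxt'\n",
    "with open(str(label_map_path), 'w') as f:\n",
    "    for class_id, class_name in sorted(CLASS_NAMES.items()):\n",
    "        # Standard TF Object Detection API label map entry\n",
    "        f.write(\"item {\\n  id: %d\\n  name: '%s'\\n}\\n\" % (class_id, class_name))\n",
    "```"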
66 |    ]
67 |   },
68 |   {
69 |    "cell_type": "code",
70 |    "execution_count": 4,
71 |    "metadata": {
72 |     "ExecuteTime": {
73 |      "end_time": "2018-02-03T22:16:33.822355Z",
74 |      "start_time": "2018-02-03T22:16:33.780820Z"
75 |     }
76 |    },
77 |    "outputs": [
78 |     {
79 |      "name": "stdout",
80 |      "output_type": "stream",
81 |      "text": [
82 |       "Converted xmls to csv file\n"
83 |      ]
84 |     }
85 |    ],
86 |    "source": [
87 |     "import pandas as pd\n",
88 |     "import xml.etree.ElementTree as ET\n",
89 |     "\n",
90 |     "# Convert the XMLs into a single CSV file\n",
91 |     "xml_list = []\n",
92 |     "for xml_path in list(labels_path.glob('*.xml')):\n",
93 |     "    tree = ET.parse(str(xml_path))\n",
94 |     "    root = tree.getroot()\n",
95 |     "    for member in root.findall('object'):\n",
96 |     "        # Unpack each object (bounding box) from the XML\n",
97 |     "        value = (root.find('filename').text,\n",
98 |     "                 int(root.find('size')[0].text),\n",
99 |     "                 int(root.find('size')[1].text),\n",
100 |     "                 member[0].text,\n",
101 |     "                 int(member[4][0].text),\n",
102 |     "                 int(member[4][1].text),\n",
103 |     "                 int(member[4][2].text),\n",
104 |     "                 int(member[4][3].text))\n",
105 |     "        xml_list.append(value)\n",
106 |     "# Create pandas dataframe from the labels in the XML\n",
107 |     "column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']\n",
108 |     "xml_df = pd.DataFrame(xml_list, columns=column_name)\n",
109 |     "xml_df.to_csv(str(labels_csv_path), index=None)\n",
110 |     "print('Converted xmls to csv file')"
111 |    ]
112 |   },
113 |   {
114 |    "cell_type": "markdown",
115 |    "metadata": {},
116 |    "source": [
117 |     "### Convert to TFRecords\n",
118 |     "\n",
119 |     "Convert the labeled images into TFRecord files, the input format expected by the TF Object Detection API training pipeline."
120 |    ]
121 |   },
122 |   {
123 |    "cell_type": "code",
124 |    "execution_count": 7,
125 |    "metadata": {
126 |     "ExecuteTime": {
127 |      "end_time": "2018-02-03T22:17:07.009500Z",
128 |      "start_time": "2018-02-03T22:17:06.895407Z"
129 |     }
130 |    },
131 |    "outputs": [],
132 |    "source": [
133 |     "import io\n",
134 |     "import tensorflow as tf\n",
135 |     "from PIL import Image\n",
136 |     "from object_detection.utils import dataset_util\n",
137 |     "\n",
138 |     "def to_tfrecords(image_paths, labels_path, tfrecord_path):\n",
139 |     "    if tfrecord_path.exists():\n",
140 |     "        print('TFRecord already created, delete it before making a new one')\n",
141 |     "        return\n",
142 |     "    writer = tf.python_io.TFRecordWriter(str(tfrecord_path))\n",
143 |     "    # Read labels from csv\n",
144 |     "    label_df = pd.read_csv(str(labels_path))\n",
145 |     "    gb = label_df.groupby('filename')\n",
146 |     "    # Convert each image to a tf.train.Example, then write it out\n",
147 |     "    for image_path in image_paths:\n",
148 |     "        try:\n",
149 |     "            group = gb.get_group(image_path.name)\n",
150 |     "        except KeyError:\n",
151 |     "            print('Could not find labels for %s' % image_path.name)\n",
152 |     "            continue\n",
153 |     "        # Write each serialized example to writer\n",
154 |     "        writer.write(_create_tf_example(image_path, group).SerializeToString())\n",
155 |     "    writer.close()\n",
156 |     "    print('TFRecord created at %s' % str(tfrecord_path))\n",
157 |     "\n",
158 |     "def _create_tf_example(image_path, groups):\n",
159 |     "    # Read image and encode it\n",
160 |     "    with tf.gfile.GFile(str(image_path), 'rb') as fid:\n",
161 |     "        encoded_jpg = fid.read()\n",
162 |     "    encoded_jpg_io = io.BytesIO(encoded_jpg)\n",
163 |     "    image = Image.open(encoded_jpg_io)\n",
164 |     "    width, height = image.size\n",
165 |     "    # Each feature below becomes one field of the tf.train.Example\n",
166 |     "    filename = image_path.name.encode('utf8')\n",
167 |     "    image_format = b'jpg'\n",
168 |     "    xmins = 
[]\n", 169 | " xmaxs = []\n", 170 | " ymins = []\n", 171 | " ymaxs = []\n", 172 | " classes_text = []\n", 173 | " classes = []\n", 174 | " print('groups: ', groups)\n", 175 | " for index, row in groups.iterrows():\n", 176 | " xmins.append(row['xmin'] / width)\n", 177 | " xmaxs.append(row['xmax'] / width)\n", 178 | " ymins.append(row['ymin'] / height)\n", 179 | " ymaxs.append(row['ymax'] / height)\n", 180 | " classes.append(row['class'])\n", 181 | " classes_text.append(CLASS_NAMES[row['class']].encode('utf8'))\n", 182 | " example = tf.train.Example(features=tf.train.Features(feature={\n", 183 | " 'image/height': dataset_util.int64_feature(height),\n", 184 | " 'image/width': dataset_util.int64_feature(width),\n", 185 | " 'image/filename': dataset_util.bytes_feature(filename),\n", 186 | " 'image/source_id': dataset_util.bytes_feature(filename),\n", 187 | " 'image/encoded': dataset_util.bytes_feature(encoded_jpg),\n", 188 | " 'image/format': dataset_util.bytes_feature(image_format),\n", 189 | " 'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),\n", 190 | " 'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),\n", 191 | " 'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),\n", 192 | " 'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),\n", 193 | " 'image/object/class/text': dataset_util.bytes_list_feature(classes_text),\n", 194 | " 'image/object/class/label': dataset_util.int64_list_feature(classes),\n", 195 | " }))\n", 196 | " return example" 197 | ] 198 | }, 199 | { 200 | "cell_type": "code", 201 | "execution_count": 9, 202 | "metadata": { 203 | "ExecuteTime": { 204 | "end_time": "2018-02-03T22:17:21.777665Z", 205 | "start_time": "2018-02-03T22:17:21.622114Z" 206 | } 207 | }, 208 | "outputs": [ 209 | { 210 | "name": "stdout", 211 | "output_type": "stream", 212 | "text": [ 213 | "There are 20 images total, split into 18 train and 2 test\n", 214 | "groups: filename width height class xmin ymin xmax ymax\n", 215 | "50 18.jpg 1280 853 2 563 357 603 395\n", 216 | "51 18.jpg 1280 853 4 227 320 309 414\n", 217 | "groups: filename width height class xmin ymin xmax ymax\n", 218 | "92 12.jpg 275 183 3 140 122 153 142\n", 219 | "93 12.jpg 275 183 3 33 115 53 136\n", 220 | "94 12.jpg 275 183 1 93 26 139 41\n", 221 | "95 12.jpg 275 183 1 213 14 272 27\n", 222 | "96 12.jpg 275 183 4 190 153 215 163\n", 223 | "groups: filename width height class xmin ymin xmax ymax\n", 224 | "10 10.jpg 1000 751 1 135 85 319 142\n", 225 | "11 10.jpg 1000 751 1 547 63 656 107\n", 226 | "12 10.jpg 1000 751 1 683 108 793 147\n", 227 | "13 10.jpg 1000 751 1 708 1 944 76\n", 228 | "14 10.jpg 1000 751 1 424 147 546 182\n", 229 | "15 10.jpg 1000 751 3 763 307 857 445\n", 230 | "16 10.jpg 1000 751 4 743 616 806 679\n", 231 | "17 10.jpg 1000 751 4 241 652 317 723\n", 232 | "18 10.jpg 1000 751 3 420 303 518 475\n", 233 | "19 10.jpg 1000 751 3 233 453 323 578\n", 234 | "20 10.jpg 1000 751 3 614 441 700 572\n", 235 | "21 10.jpg 1000 751 3 201 292 286 363\n", 236 | "22 10.jpg 1000 751 4 148 410 216 468\n", 237 | "groups: filename width height class xmin ymin xmax ymax\n", 238 | "0 8.jpg 1200 900 3 375 523 606 749\n", 239 | "1 8.jpg 1200 900 3 704 456 876 612\n", 240 | "2 8.jpg 1200 900 3 607 111 710 634\n", 241 | "3 8.jpg 1200 900 4 483 447 540 485\n", 242 | "4 8.jpg 1200 900 4 289 497 368 585\n", 243 | "5 8.jpg 1200 900 1 632 27 760 67\n", 244 | "6 8.jpg 1200 900 1 369 176 531 237\n", 245 | "groups: filename width height class xmin ymin xmax ymax\n", 246 | "28 11.jpg 750 422 1 313 
61 402 97\n", 247 | "29 11.jpg 750 422 1 188 88 294 134\n", 248 | "30 11.jpg 750 422 1 393 83 535 116\n", 249 | "31 11.jpg 750 422 4 519 278 541 311\n", 250 | "32 11.jpg 750 422 4 257 372 285 422\n", 251 | "33 11.jpg 750 422 4 654 213 690 267\n", 252 | "groups: filename width height class xmin ymin xmax ymax\n", 253 | "68 9.jpg 900 600 3 291 221 364 247\n", 254 | "69 9.jpg 900 600 3 582 209 664 262\n", 255 | "70 9.jpg 900 600 4 705 240 743 280\n", 256 | "71 9.jpg 900 600 4 549 402 764 578\n", 257 | "72 9.jpg 900 600 4 584 162 622 190\n", 258 | "73 9.jpg 900 600 1 482 68 576 112\n", 259 | "74 9.jpg 900 600 1 795 48 900 93\n", 260 | "groups: filename width height class xmin ymin xmax ymax\n", 261 | "52 17.jpg 1600 1067 1 116 271 346 373\n", 262 | "53 17.jpg 1600 1067 2 1511 331 1600 452\n", 263 | "54 17.jpg 1600 1067 3 1165 575 1351 733\n", 264 | "55 17.jpg 1600 1067 3 1415 463 1462 584\n", 265 | "56 17.jpg 1600 1067 4 757 865 874 1015\n", 266 | "57 17.jpg 1600 1067 4 373 892 569 991\n", 267 | "58 17.jpg 1600 1067 3 258 504 384 565\n", 268 | "59 17.jpg 1600 1067 1 462 188 619 299\n", 269 | "groups: filename width height class xmin ymin xmax ymax\n", 270 | "34 6.jpg 1282 722 1 474 259 713 496\n", 271 | "35 6.jpg 1282 722 4 311 518 472 722\n", 272 | "36 6.jpg 1282 722 1 836 116 973 222\n", 273 | "groups: filename width height class xmin ymin xmax ymax\n", 274 | "43 19.jpg 275 183 4 126 144 156 175\n", 275 | "44 19.jpg 275 183 3 227 92 249 120\n", 276 | "45 19.jpg 275 183 1 80 30 116 52\n", 277 | "46 19.jpg 275 183 1 99 4 130 28\n", 278 | "47 19.jpg 275 183 1 218 40 251 60\n", 279 | "48 19.jpg 275 183 3 45 73 69 93\n", 280 | "49 19.jpg 275 183 4 115 103 132 118\n", 281 | "groups: filename width height class xmin ymin xmax ymax\n", 282 | "97 2.jpg 960 540 1 454 226 644 284\n", 283 | "98 2.jpg 960 540 1 789 12 911 58\n", 284 | "99 2.jpg 960 540 1 640 153 810 209\n", 285 | "100 2.jpg 960 540 1 210 266 348 326\n", 286 | "101 2.jpg 960 540 1 186 39 349 148\n", 287 | "groups: filename width height class xmin ymin xmax ymax\n", 288 | "40 20.jpg 259 194 2 133 29 151 43\n", 289 | "41 20.jpg 259 194 4 39 25 50 52\n", 290 | "42 20.jpg 259 194 4 209 34 230 53\n", 291 | "groups: filename width height class xmin ymin xmax ymax\n", 292 | "7 15.jpg 275 183 2 97 45 122 66\n", 293 | "8 15.jpg 275 183 3 136 97 203 124\n", 294 | "9 15.jpg 275 183 3 224 111 266 168\n", 295 | "groups: filename width height class xmin ymin xmax ymax\n", 296 | "85 3.jpg 276 183 1 50 20 80 37\n", 297 | "86 3.jpg 276 183 1 122 102 153 114\n", 298 | "groups: filename width height class xmin ymin xmax ymax\n", 299 | "37 5.jpg 852 480 1 240 20 420 195\n", 300 | "38 5.jpg 852 480 1 560 50 655 169\n", 301 | "39 5.jpg 852 480 1 227 246 429 314\n", 302 | "groups: filename width height class xmin ymin xmax ymax\n", 303 | "75 14.jpg 3640 2153 4 2780 1333 2996 1895\n", 304 | "76 14.jpg 3640 2153 3 1255 851 2402 1645\n", 305 | "77 14.jpg 3640 2153 4 1630 554 1896 776\n", 306 | "78 14.jpg 3640 2153 1 1834 145 2184 264\n", 307 | "79 14.jpg 3640 2153 3 190 948 777 1533\n", 308 | "80 14.jpg 3640 2153 4 2677 676 2830 883\n", 309 | "groups: filename width height class xmin ymin xmax ymax\n", 310 | "60 16.jpg 1000 550 2 1 57 67 162\n", 311 | "61 16.jpg 1000 550 3 689 110 719 189\n", 312 | "62 16.jpg 1000 550 3 146 254 303 462\n", 313 | "63 16.jpg 1000 550 3 823 164 907 194\n", 314 | "groups: filename width height class xmin ymin xmax ymax\n", 315 | "87 13.jpg 2698 2706 3 247 853 1182 1823\n", 316 | "88 13.jpg 2698 2706 4 1152 1371 1360 1728\n", 317 | "89 
13.jpg 2698 2706 4 1379 1299 1574 1623\n", 318 | "90 13.jpg 2698 2706 3 2244 1201 2698 1591\n", 319 | "91 13.jpg 2698 2706 4 385 2315 850 2696\n", 320 | "groups: filename width height class xmin ymin xmax ymax\n", 321 | "64 4.jpg 300 168 1 113 33 160 72\n", 322 | "65 4.jpg 300 168 1 254 25 298 44\n", 323 | "66 4.jpg 300 168 1 190 48 223 68\n", 324 | "67 4.jpg 300 168 1 270 53 300 76\n", 325 | "TFRecord created at /home/ook/repos/iris_challenge/detector/data/train.record\n", 326 | "groups: filename width height class xmin ymin xmax ymax\n", 327 | "23 1.jpg 275 183 1 49 21 147 58\n", 328 | "24 1.jpg 275 183 1 174 68 202 86\n", 329 | "25 1.jpg 275 183 1 209 46 236 61\n", 330 | "26 1.jpg 275 183 1 9 85 73 104\n", 331 | "27 1.jpg 275 183 1 90 92 130 105\n", 332 | "groups: filename width height class xmin ymin xmax ymax\n", 333 | "81 7.jpg 279 180 1 43 80 96 101\n", 334 | "82 7.jpg 279 180 1 227 58 260 75\n", 335 | "83 7.jpg 279 180 1 70 35 115 72\n", 336 | "84 7.jpg 279 180 1 155 91 192 101\n", 337 | "TFRecord created at /home/ook/repos/iris_challenge/detector/data/test.record\n" 338 | ] 339 | } 340 | ], 341 | "source": [ 342 | "# Split data into test and train\n", 343 | "image_paths = list(images_path.glob('*.jpg'))\n", 344 | "num_images = len(image_paths)\n", 345 | "num_train = int(TRAIN_TEST_SPLIT * num_images)\n", 346 | "train_index = np.random.choice(num_images, size=num_train, replace=False)\n", 347 | "test_index = np.setdiff1d(list(range(num_images)), train_index)\n", 348 | "train_image_paths = [image_paths[i] for i in train_index]\n", 349 | "test_image_paths = [image_paths[i] for i in test_index]\n", 350 | "print('There are %d images total, split into %s train and %s test' % (num_images,\n", 351 | " len(train_image_paths),\n", 352 | " len(test_image_paths)))\n", 353 | "# Convert list of train and test images into a tfrecord\n", 354 | "to_tfrecords(train_image_paths, labels_csv_path, train_tfrecord_path)\n", 355 | "to_tfrecords(test_image_paths, labels_csv_path, test_tfrecord_path)" 356 | ] 357 | }, 358 | { 359 | "cell_type": "code", 360 | "execution_count": null, 361 | "metadata": {}, 362 | "outputs": [], 363 | "source": [] 364 | } 365 | ], 366 | "metadata": { 367 | "kernelspec": { 368 | "display_name": "Python [conda env:vectorize]", 369 | "language": "python", 370 | "name": "conda-env-vectorize-py" 371 | }, 372 | "language_info": { 373 | "codemirror_mode": { 374 | "name": "ipython", 375 | "version": 3 376 | }, 377 | "file_extension": ".py", 378 | "mimetype": "text/x-python", 379 | "name": "python", 380 | "nbconvert_exporter": "python", 381 | "pygments_lexer": "ipython3", 382 | "version": "3.5.4" 383 | } 384 | }, 385 | "nbformat": 4, 386 | "nbformat_minor": 2 387 | } 388 | -------------------------------------------------------------------------------- /detector/models/faster_rcnn_v1.config: -------------------------------------------------------------------------------- 1 | model { 2 | faster_rcnn { 3 | num_classes: 4 4 | image_resizer { 5 | keep_aspect_ratio_resizer { 6 | min_dimension: 600 7 | max_dimension: 1024 8 | } 9 | } 10 | feature_extractor { 11 | type: "faster_rcnn_inception_v2" 12 | first_stage_features_stride: 16 13 | } 14 | first_stage_anchor_generator { 15 | grid_anchor_generator { 16 | height_stride: 16 17 | width_stride: 16 18 | scales: 0.25 19 | scales: 0.5 20 | scales: 1.0 21 | scales: 2.0 22 | aspect_ratios: 0.5 23 | aspect_ratios: 1.0 24 | aspect_ratios: 2.0 25 | } 26 | } 27 | first_stage_box_predictor_conv_hyperparams { 28 | op: CONV 29 | regularizer { 30 | 
l2_regularizer { 31 | weight: 0.0 32 | } 33 | } 34 | initializer { 35 | truncated_normal_initializer { 36 | stddev: 0.00999999977648 37 | } 38 | } 39 | } 40 | first_stage_nms_score_threshold: 0.0 41 | first_stage_nms_iou_threshold: 0.699999988079 42 | first_stage_max_proposals: 300 43 | first_stage_localization_loss_weight: 2.0 44 | first_stage_objectness_loss_weight: 1.0 45 | initial_crop_size: 14 46 | maxpool_kernel_size: 2 47 | maxpool_stride: 2 48 | second_stage_box_predictor { 49 | mask_rcnn_box_predictor { 50 | fc_hyperparams { 51 | op: FC 52 | regularizer { 53 | l2_regularizer { 54 | weight: 0.0 55 | } 56 | } 57 | initializer { 58 | variance_scaling_initializer { 59 | factor: 1.0 60 | uniform: true 61 | mode: FAN_AVG 62 | } 63 | } 64 | } 65 | use_dropout: false 66 | dropout_keep_probability: 1.0 67 | } 68 | } 69 | second_stage_post_processing { 70 | batch_non_max_suppression { 71 | score_threshold: 0.0 72 | iou_threshold: 0.600000023842 73 | max_detections_per_class: 100 74 | max_total_detections: 300 75 | } 76 | score_converter: SOFTMAX 77 | } 78 | second_stage_localization_loss_weight: 2.0 79 | second_stage_classification_loss_weight: 1.0 80 | } 81 | } 82 | train_config { 83 | batch_size: 1 84 | data_augmentation_options { 85 | random_horizontal_flip { 86 | } 87 | } 88 | optimizer { 89 | momentum_optimizer { 90 | learning_rate { 91 | manual_step_learning_rate { 92 | initial_learning_rate: 0.000199999994948 93 | schedule { 94 | step: 0 95 | learning_rate: 0.000199999994948 96 | } 97 | schedule { 98 | step: 900000 99 | learning_rate: 1.99999994948e-05 100 | } 101 | schedule { 102 | step: 1200000 103 | learning_rate: 1.99999999495e-06 104 | } 105 | } 106 | } 107 | momentum_optimizer_value: 0.899999976158 108 | } 109 | use_moving_average: false 110 | } 111 | gradient_clipping_by_norm: 10.0 112 | fine_tune_checkpoint: "/home/ook/repos/iris_challenge/detector/models/faster_rcnn_inception_v2_coco_2017_11_08/model.ckpt" 113 | from_detection_checkpoint: true 114 | num_steps: 200000 115 | } 116 | train_input_reader { 117 | label_map_path: "/home/ook/repos/iris_challenge/detector/data/label_map.pbtxt" 118 | tf_record_input_reader { 119 | input_path: "/home/ook/repos/iris_challenge/detector/data/train.record" 120 | } 121 | } 122 | eval_config { 123 | num_examples: 8000 124 | max_evals: 10 125 | use_moving_averages: false 126 | } 127 | eval_input_reader { 128 | label_map_path: "/home/ook/repos/iris_challenge/detector/data/label_map.pbtxt" 129 | shuffle: false 130 | num_readers: 1 131 | tf_record_input_reader { 132 | input_path: "/home/ook/repos/iris_challenge/detector/data/test.record" 133 | } 134 | } 135 | -------------------------------------------------------------------------------- /detector/models/ssd_mobilenet_v1.config: -------------------------------------------------------------------------------- 1 | # SSD with Mobilenet v1 configuration for MSCOCO Dataset. 2 | # Users should configure the fine_tune_checkpoint field in the train config as 3 | # well as the label_map_path and input_path fields in the train_input_reader and 4 | # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that 5 | # should be configured. 
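# Repo-specific note: the README's detector has 4 classes (cloud, sun, house,
# tree), so num_classes below would presumably be dropped from the COCO
# default of 90 to 4 when fine-tuning; the fine_tune_checkpoint and
# train/test record paths later in this file are already pointed at this
# repo's detector/ directory.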
6 | 7 | model { 8 | ssd { 9 | num_classes: 90 10 | box_coder { 11 | faster_rcnn_box_coder { 12 | y_scale: 10.0 13 | x_scale: 10.0 14 | height_scale: 5.0 15 | width_scale: 5.0 16 | } 17 | } 18 | matcher { 19 | argmax_matcher { 20 | matched_threshold: 0.5 21 | unmatched_threshold: 0.5 22 | ignore_thresholds: false 23 | negatives_lower_than_unmatched: true 24 | force_match_for_each_row: true 25 | } 26 | } 27 | similarity_calculator { 28 | iou_similarity { 29 | } 30 | } 31 | anchor_generator { 32 | ssd_anchor_generator { 33 | num_layers: 6 34 | min_scale: 0.2 35 | max_scale: 0.95 36 | aspect_ratios: 1.0 37 | aspect_ratios: 2.0 38 | aspect_ratios: 0.5 39 | aspect_ratios: 3.0 40 | aspect_ratios: 0.3333 41 | } 42 | } 43 | image_resizer { 44 | fixed_shape_resizer { 45 | height: 300 46 | width: 300 47 | } 48 | } 49 | box_predictor { 50 | convolutional_box_predictor { 51 | min_depth: 0 52 | max_depth: 0 53 | num_layers_before_predictor: 0 54 | use_dropout: false 55 | dropout_keep_probability: 0.8 56 | kernel_size: 1 57 | box_code_size: 4 58 | apply_sigmoid_to_scores: false 59 | conv_hyperparams { 60 | activation: RELU_6, 61 | regularizer { 62 | l2_regularizer { 63 | weight: 0.00004 64 | } 65 | } 66 | initializer { 67 | truncated_normal_initializer { 68 | stddev: 0.03 69 | mean: 0.0 70 | } 71 | } 72 | batch_norm { 73 | train: true, 74 | scale: true, 75 | center: true, 76 | decay: 0.9997, 77 | epsilon: 0.001, 78 | } 79 | } 80 | } 81 | } 82 | feature_extractor { 83 | type: 'ssd_mobilenet_v1' 84 | min_depth: 16 85 | depth_multiplier: 1.0 86 | conv_hyperparams { 87 | activation: RELU_6, 88 | regularizer { 89 | l2_regularizer { 90 | weight: 0.00004 91 | } 92 | } 93 | initializer { 94 | truncated_normal_initializer { 95 | stddev: 0.03 96 | mean: 0.0 97 | } 98 | } 99 | batch_norm { 100 | train: true, 101 | scale: true, 102 | center: true, 103 | decay: 0.9997, 104 | epsilon: 0.001, 105 | } 106 | } 107 | } 108 | loss { 109 | classification_loss { 110 | weighted_sigmoid { 111 | anchorwise_output: true 112 | } 113 | } 114 | localization_loss { 115 | weighted_smooth_l1 { 116 | anchorwise_output: true 117 | } 118 | } 119 | hard_example_miner { 120 | num_hard_examples: 3000 121 | iou_threshold: 0.99 122 | loss_type: CLASSIFICATION 123 | max_negatives_per_positive: 3 124 | min_negatives_per_image: 0 125 | } 126 | classification_weight: 1.0 127 | localization_weight: 1.0 128 | } 129 | normalize_loss_by_num_matches: true 130 | post_processing { 131 | batch_non_max_suppression { 132 | score_threshold: 1e-8 133 | iou_threshold: 0.6 134 | max_detections_per_class: 100 135 | max_total_detections: 100 136 | } 137 | score_converter: SIGMOID 138 | } 139 | } 140 | } 141 | 142 | train_config: { 143 | batch_size: 24 144 | optimizer { 145 | rms_prop_optimizer: { 146 | learning_rate: { 147 | exponential_decay_learning_rate { 148 | initial_learning_rate: 0.004 149 | decay_steps: 800720 150 | decay_factor: 0.95 151 | } 152 | } 153 | momentum_optimizer_value: 0.9 154 | decay: 0.9 155 | epsilon: 1.0 156 | } 157 | } 158 | fine_tune_checkpoint: "/home/ook/repos/iris_challenge/detector/models/ssd_mobilenet_v1_coco_2017_11_17/model.ckpt" 159 | from_detection_checkpoint: true 160 | # Note: The below line limits the training process to 200K steps, which we 161 | # empirically found to be sufficient enough to train the pets dataset. This 162 | # effectively bypasses the learning rate schedule (the learning rate will 163 | # never decay). Remove the below line to train indefinitely. 
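# (With only 18 training images in this repo's dataset, far fewer steps are
# likely sufficient in practice.)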
164 |   num_steps: 200000
165 |   data_augmentation_options {
166 |     random_horizontal_flip {
167 |     }
168 |   }
169 |   data_augmentation_options {
170 |     ssd_random_crop {
171 |     }
172 |   }
173 | }
174 | 
175 | train_input_reader: {
176 |   tf_record_input_reader {
177 |     input_path: "/home/ook/repos/iris_challenge/detector/data/train.record"
178 |   }
179 |   label_map_path: "/home/ook/repos/iris_challenge/detector/data/label_map.pbtxt"
180 | }
181 | 
182 | eval_config: {
183 |   num_examples: 8000
184 |   # Note: The below line limits the evaluation process to 10 evaluations.
185 |   # Remove the below line to evaluate indefinitely.
186 |   max_evals: 10
187 | }
188 | 
189 | eval_input_reader: {
190 |   tf_record_input_reader {
191 |     input_path: "/home/ook/repos/iris_challenge/detector/data/test.record"
192 |   }
193 |   label_map_path: "/home/ook/repos/iris_challenge/detector/data/label_map.pbtxt"
194 |   shuffle: false
195 |   num_readers: 1
196 |   num_epochs: 1
197 | }
--------------------------------------------------------------------------------
/detector/run_model.ipynb:
--------------------------------------------------------------------------------
1 | {
2 |  "cells": [
3 |   {
4 |    "cell_type": "markdown",
5 |    "metadata": {},
6 |    "source": [
7 |     "### Object Detector: Run the model\n",
8 |     "\n",
9 |     "This notebook runs the saved model (that was trained with the training notebook) on the iris_challenge images.\n",
10 |     "\n",
11 |     "Sources:\n",
12 |     "- [1] https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb"
13 |    ]
14 |   },
15 |   {
16 |    "cell_type": "code",
17 |    "execution_count": null,
18 |    "metadata": {},
19 |    "outputs": [],
20 |    "source": [
21 |     "from pathlib import Path\n",
22 |     "from PIL import Image\n",
23 |     "import numpy as np\n",
24 |     "import tensorflow as tf\n",
25 |     "\n",
26 |     "# Object detection imports\n",
27 |     "from object_detection.utils import label_map_util\n",
28 |     "from object_detection.utils import visualization_utils as vis_util\n",
29 |     "\n",
30 |     "# Matplotlib for plotting\n",
31 |     "from matplotlib import pyplot as plt\n",
32 |     "%matplotlib inline\n",
33 |     "\n",
34 |     "# Size, in inches, of the output images.\n",
35 |     "IMAGE_SIZE = (12, 8)\n",
36 |     "\n",
37 |     "# Define paths to image sub-folders\n",
38 |     "root_dir = Path.cwd()\n",
39 |     "images_path = root_dir / '..' / 'test_images'\n",
40 |     "output_path = root_dir / '..' / 'output_images'\n",
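    "\n",
    "# test_images/ (the 31 challenge images) is covered by the repo's *.png\n",
    "# gitignore rule, so it presumably has to be placed at the repo root,\n",
    "# next to detector/ and horizon/, before running this notebook\n",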
41 |     "\n",
42 |     "# Frozen graph file\n",
43 |     "FROZEN_GRAPH = root_dir / 'models' / 'output_inference_graph.pb' / 'frozen_inference_graph.pb'\n",
44 |     "\n",
45 |     "# Class labels file\n",
46 |     "PATH_TO_LABELS = root_dir / 'data' / 'label_map.pbtxt'"
47 |    ]
48 |   },
49 |   {
50 |    "cell_type": "code",
51 |    "execution_count": null,
52 |    "metadata": {},
53 |    "outputs": [],
54 |    "source": [
55 |     "# Load frozen model graph from file\n",
56 |     "detection_graph = tf.Graph()\n",
57 |     "with detection_graph.as_default():\n",
58 |     "    od_graph_def = tf.GraphDef()\n",
59 |     "    with tf.gfile.GFile(str(FROZEN_GRAPH), 'rb') as fid:\n",
60 |     "        serialized_graph = fid.read()\n",
61 |     "        od_graph_def.ParseFromString(serialized_graph)\n",
62 |     "        tf.import_graph_def(od_graph_def, name='')\n",
63 |     "\n",
64 |     "# Load label map from file\n",
65 |     "label_map = label_map_util.load_labelmap(str(PATH_TO_LABELS))\n",
66 |     "categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=4, use_display_name=True)\n",
67 |     "category_index = label_map_util.create_category_index(categories)"
68 |    ]
69 |   },
70 |   {
71 |    "cell_type": "code",
72 |    "execution_count": null,
73 |    "metadata": {
74 |     "scrolled": false
75 |    },
76 |    "outputs": [],
77 |    "source": [
78 |     "def load_image_into_numpy_array(image):\n",
79 |     "    (im_width, im_height) = image.size\n",
80 |     "    print('Image loaded of size (%s, %s)' % (im_width, im_height))\n",
81 |     "    return np.array(image.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)\n",
82 |     "\n",
83 |     "with detection_graph.as_default():\n",
84 |     "    with tf.Session(graph=detection_graph) as sess:\n",
85 |     "        # Define input and output Tensors for detection_graph\n",
86 |     "        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')\n",
87 |     "        # Each box represents a part of the image where a particular object was detected.\n",
88 |     "        detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')\n",
89 |     "        # Each score represents the level of confidence for each of the objects.\n",
90 |     "        # Score is shown on the result image, together with the class label.\n",
91 |     "        detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')\n",
92 |     "        detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')\n",
93 |     "        num_detections = detection_graph.get_tensor_by_name('num_detections:0')\n",
94 |     "        for image_path in list(images_path.glob('*.png')):\n",
95 |     "            image = Image.open(str(image_path))\n",
96 |     "            # The array-based representation of the image will be used later in order to prepare the\n",
97 |     "            # result image with boxes and labels on it.\n",
98 |     "            image_np = load_image_into_numpy_array(image)\n",
99 |     "            # Expand dimensions since the model expects images to have shape: [1, None, None, 3]\n",
100 |     "            image_np_expanded = np.expand_dims(image_np, axis=0)\n",
101 |     "            # Actual detection.\n",
102 |     "            (boxes, scores, classes, num) = sess.run(\n",
103 |     "                [detection_boxes, detection_scores, detection_classes, num_detections],\n",
104 |     "                feed_dict={image_tensor: image_np_expanded})\n",
105 |     "            print('Detected %s objects in image' % int(num[0]))\n",
106 |     "            # Visualization of the results of a detection.\n",
107 |     "            vis_util.visualize_boxes_and_labels_on_image_array(\n",
108 |     "                image_np,\n",
109 |     "                np.squeeze(boxes),\n",
110 |     "                np.squeeze(classes).astype(np.int32),\n",
111 |     "                np.squeeze(scores),\n",
112 |     "                category_index,\n",
113 |     "                use_normalized_coordinates=True,\n",
114 |     "                line_thickness=8)\n",
115 |     "            
plt.figure(figsize=IMAGE_SIZE)\n", 116 | " plt.imshow(image_np)\n", 117 | " plt.show()\n", 118 | " # Save image to output directory\n", 119 | " Image.fromarray(image_np).save(str(output_path / image_path.name))" 120 | ] 121 | } 122 | ], 123 | "metadata": { 124 | "kernelspec": { 125 | "display_name": "Python [conda env:vectorize]", 126 | "language": "python", 127 | "name": "conda-env-vectorize-py" 128 | }, 129 | "language_info": { 130 | "codemirror_mode": { 131 | "name": "ipython", 132 | "version": 3 133 | }, 134 | "file_extension": ".py", 135 | "mimetype": "text/x-python", 136 | "name": "python", 137 | "nbconvert_exporter": "python", 138 | "pygments_lexer": "ipython3", 139 | "version": "3.5.4" 140 | } 141 | }, 142 | "nbformat": 4, 143 | "nbformat_minor": 2 144 | } 145 | -------------------------------------------------------------------------------- /detector/train_model.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "### Object Detector: Train the model\n", 8 | "\n", 9 | "This notebook contains instructions on how to train the model locally.\n", 10 | "\n", 11 | "Sources:\n", 12 | "- [1] https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_locally.md" 13 | ] 14 | }, 15 | { 16 | "cell_type": "markdown", 17 | "metadata": {}, 18 | "source": [ 19 | "Make sure to add the `IRIS_DIR` variable to your session. It should be the location of the iris_challenge repo. Then add the remainder of the path variables. For training on my local computer these paths are as follows:\n", 20 | "\n", 21 | "```\n", 22 | "IRIS_DIR=/home/ook/repos/iris_challenge\n", 23 | "PATH_TO_YOUR_PIPELINE_CONFIG=${IRIS_DIR}/detector/models/ssd_mobilenet_v1.config\n", 24 | "PATH_TO_TRAIN_DIR=${IRIS_DIR}/detector/models/model/train\n", 25 | "PATH_TO_EVAL_DIR=${IRIS_DIR}/detector/models/model/eval\n", 26 | "TRAIN_PATH=${PATH_TO_TRAIN_DIR}/model.ckpt-30788\n", 27 | "PATH_TO_SAVED_MODEL=${IRIS_DIR}/detector/models/output_inference_graph.pb\n", 28 | "PIPELINE_CONFIG_PATH=${PATH_TO_TRAIN_DIR}/pipeline.config\n", 29 | "```\n", 30 | "\n", 31 | "Then run the following commands from the `tensorflow/models/research/` directory to train and test.\n", 32 | "\n", 33 | "*Train*\n", 34 | "\n", 35 | "```\n", 36 | "python object_detection/train.py \\\n", 37 | " --logtostderr \\\n", 38 | " --pipeline_config_path=${PATH_TO_YOUR_PIPELINE_CONFIG} \\\n", 39 | " --train_dir=${PATH_TO_TRAIN_DIR}\n", 40 | "```\n", 41 | "\n", 42 | "*Test*\n", 43 | "\n", 44 | "```\n", 45 | "python object_detection/eval.py \\\n", 46 | " --logtostderr \\\n", 47 | " --pipeline_config_path=${PATH_TO_YOUR_PIPELINE_CONFIG} \\\n", 48 | " --checkpoint_dir=${PATH_TO_TRAIN_DIR} \\\n", 49 | " --eval_dir=${PATH_TO_EVAL_DIR}\n", 50 | "```\n", 51 | "\n", 52 | "*Save Model*\n", 53 | "\n", 54 | "```\n", 55 | "python object_detection/export_inference_graph.py \\\n", 56 | " --input_type image_tensor \\\n", 57 | " --pipeline_config_path ${PIPELINE_CONFIG_PATH} \\\n", 58 | " --trained_checkpoint_prefix ${TRAIN_PATH} \\\n", 59 | " --output_directory ${PATH_TO_SAVED_MODEL}\n", 60 | "```" 61 | ] 62 | } 63 | ], 64 | "metadata": { 65 | "kernelspec": { 66 | "display_name": "Python [conda env:vectorize]", 67 | "language": "python", 68 | "name": "conda-env-vectorize-py" 69 | }, 70 | "language_info": { 71 | "codemirror_mode": { 72 | "name": "ipython", 73 | "version": 3 74 | }, 75 | "file_extension": ".py", 76 | "mimetype": 
"text/x-python", 77 | "name": "python", 78 | "nbconvert_exporter": "python", 79 | "pygments_lexer": "ipython3", 80 | "version": "3.5.4" 81 | } 82 | }, 83 | "nbformat": 4, 84 | "nbformat_minor": 2 85 | } 86 | -------------------------------------------------------------------------------- /horizon/horizon_line.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "### Horizon Line Detection\n", 8 | "\n", 9 | "This notebook carries out horizon line detection using the Canny Edge Detection that comes with OpenCV.\n", 10 | "\n", 11 | "Sources:\n", 12 | "- [1] https://www.pyimagesearch.com/2015/04/06/zero-parameter-automatic-canny-edge-detection-with-python-and-opencv/\n", 13 | "- [2] https://stackoverflow.com/questions/44449871/fine-tuning-hough-line-function-parameters-opencv\n", 14 | "- [3] https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html#hough-tranform-in-opencv" 15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": null, 20 | "metadata": {}, 21 | "outputs": [], 22 | "source": [ 23 | "from pathlib import Path\n", 24 | "import cv2\n", 25 | "import numpy as np\n", 26 | "import math\n", 27 | "\n", 28 | "# Paths to images\n", 29 | "root_dir = Path.cwd()\n", 30 | "images_path = root_dir / '..' / 'test_images'\n", 31 | "output_path = root_dir / '..' / 'output_images'" 32 | ] 33 | }, 34 | { 35 | "cell_type": "code", 36 | "execution_count": null, 37 | "metadata": {}, 38 | "outputs": [], 39 | "source": [ 40 | "def custom_canny(img, sigma=0.33):\n", 41 | " # compute the median of the single channel pixel intensities\n", 42 | " v = np.median(image)\n", 43 | " # apply automatic Canny edge detection using the computed median\n", 44 | " lower = int(max(0, (1.0 - sigma) * v))\n", 45 | " upper = int(min(255, (1.0 + sigma) * v))\n", 46 | " edged = cv2.Canny(img, lower, upper)\n", 47 | " return edged\n", 48 | "\n", 49 | "\n", 50 | "def plot_line(img, rho, theta):\n", 51 | " # Plots the line coming out of a Hough Line Transform\n", 52 | " a = math.cos(theta)\n", 53 | " b = math.sin(theta)\n", 54 | " x0 = a * rho\n", 55 | " y0 = b * rho\n", 56 | " pt1 = (int(x0 + 10000 * (-b)), int(y0 + 10000 * (a)))\n", 57 | " pt2 = (int(x0 - 10000 * (-b)), int(y0 - 10000 * (a)))\n", 58 | " cv2.line(img, pt1, pt2, (255, 0, 0), 3)" 59 | ] 60 | }, 61 | { 62 | "cell_type": "code", 63 | "execution_count": null, 64 | "metadata": {}, 65 | "outputs": [], 66 | "source": [ 67 | "for image_path in list(images_path.glob('*.png')):\n", 68 | " image = cv2.imread(str(image_path))\n", 69 | " # Blur image and convert to grayscale\n", 70 | " gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n", 71 | " blurred = cv2.GaussianBlur(gray, (3, 3), 0)\n", 72 | " # Use Canny edge detection to find edges\n", 73 | " edges = custom_canny(blurred)\n", 74 | " # Dilate edges of lines\n", 75 | " dilated = cv2.dilate(edges, np.ones((3, 3), dtype=np.uint8))\n", 76 | " # Use Hough Line Transform to get lines\n", 77 | " lines = cv2.HoughLines(dilated, 1, np.pi / 100,\n", 78 | " threshold = 400,\n", 79 | " min_theta=np.pi / 3,\n", 80 | " max_theta=2 * np.pi / 3)\n", 81 | " if lines is not None:\n", 82 | " print('Found %s lines' % (len(lines)))\n", 83 | " # # Print all lines\n", 84 | " # for line in lines:\n", 85 | " # for rho, theta in line:\n", 86 | " # plot_line(edges, rho, theta)\n", 87 | " # Average line only\n", 88 | " avg_rho = np.mean([line[0][0] for line in 
lines])\n", 89 | " avg_theta = np.mean([line[0][1] for line in lines])\n", 90 | " plot_line(image, avg_rho, avg_theta)\n", 91 | " else:\n", 92 | " print('No Horizon Found')\n", 93 | "\n", 94 | " # Output images to window\n", 95 | " # cv2.imshow(\"Original\", image)\n", 96 | " # cv2.imshow(\"Edges\", edges)\n", 97 | " # cv2.waitKey(0)\n", 98 | " \n", 99 | " # Save images to file\n", 100 | " cv2.imwrite(str(output_path / image_path.name), image)" 101 | ] 102 | } 103 | ], 104 | "metadata": { 105 | "kernelspec": { 106 | "display_name": "Python [conda env:vectorize]", 107 | "language": "python", 108 | "name": "conda-env-vectorize-py" 109 | }, 110 | "language_info": { 111 | "codemirror_mode": { 112 | "name": "ipython", 113 | "version": 3 114 | }, 115 | "file_extension": ".py", 116 | "mimetype": "text/x-python", 117 | "name": "python", 118 | "nbconvert_exporter": "python", 119 | "pygments_lexer": "ipython3", 120 | "version": "3.5.4" 121 | } 122 | }, 123 | "nbformat": 4, 124 | "nbformat_minor": 2 125 | } 126 | -------------------------------------------------------------------------------- /output_images/detection/image10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hu-po/iris_challenge/2635b1a19653d598ef2219e340ac5065c3d942a6/output_images/detection/image10.png -------------------------------------------------------------------------------- /output_images/detection/image15.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hu-po/iris_challenge/2635b1a19653d598ef2219e340ac5065c3d942a6/output_images/detection/image15.png -------------------------------------------------------------------------------- /output_images/detection/image16.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hu-po/iris_challenge/2635b1a19653d598ef2219e340ac5065c3d942a6/output_images/detection/image16.png -------------------------------------------------------------------------------- /output_images/detection/image17.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hu-po/iris_challenge/2635b1a19653d598ef2219e340ac5065c3d942a6/output_images/detection/image17.png -------------------------------------------------------------------------------- /output_images/detection/image22.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hu-po/iris_challenge/2635b1a19653d598ef2219e340ac5065c3d942a6/output_images/detection/image22.png -------------------------------------------------------------------------------- /output_images/detection/image29.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hu-po/iris_challenge/2635b1a19653d598ef2219e340ac5065c3d942a6/output_images/detection/image29.png -------------------------------------------------------------------------------- /output_images/horizon/image10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hu-po/iris_challenge/2635b1a19653d598ef2219e340ac5065c3d942a6/output_images/horizon/image10.png -------------------------------------------------------------------------------- /output_images/horizon/image11.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/hu-po/iris_challenge/2635b1a19653d598ef2219e340ac5065c3d942a6/output_images/horizon/image11.png -------------------------------------------------------------------------------- /output_images/horizon/image13.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hu-po/iris_challenge/2635b1a19653d598ef2219e340ac5065c3d942a6/output_images/horizon/image13.png -------------------------------------------------------------------------------- /output_images/horizon/image2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hu-po/iris_challenge/2635b1a19653d598ef2219e340ac5065c3d942a6/output_images/horizon/image2.png -------------------------------------------------------------------------------- /output_images/horizon/image26.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hu-po/iris_challenge/2635b1a19653d598ef2219e340ac5065c3d942a6/output_images/horizon/image26.png --------------------------------------------------------------------------------