├── LICENSE
├── README.md
├── assets
│   ├── 2.png
│   ├── coco.jpeg
│   ├── detection_anchors.png
│   ├── detection_final.png
│   ├── detection_masks.png
│   ├── detection_refinement.png
│   ├── park.png
│   ├── street.png
│   ├── synthia.jpeg
│   └── synthia2.jpeg
├── coco.py
├── config.py
├── demo_coco.py
├── demo_synthia.py
├── images
│   ├── 1045023827_4ec3e8ba5c_z.jpg
│   ├── 12283150_12d37e6389_z.jpg
│   ├── 2383514521_1fc8d7b0de_z.jpg
│   ├── 2502287818_41e4b0c4fb_z.jpg
│   ├── 2516944023_d00345997d_z.jpg
│   ├── 25691390_f9944f61b5_z.jpg
│   ├── 262985539_1709e54576_z.jpg
│   ├── 3132016470_c27baa00e8_z.jpg
│   ├── 3627527276_6fe8cd9bfe_z.jpg
│   ├── 3651581213_f81963d1dd_z.jpg
│   ├── 3800883468_12af3c0b50_z.jpg
│   ├── 3862500489_6fd195d183_z.jpg
│   ├── 3878153025_8fde829928_z.jpg
│   ├── 4410436637_7b0ca36ee7_z.jpg
│   ├── 4782628554_668bc31826_z.jpg
│   ├── 5951960966_d4e1cda5d0_z.jpg
│   ├── 6584515005_fce9cec486_z.jpg
│   ├── 6821351586_59aa0dc110_z.jpg
│   ├── 7581246086_cf7bbb7255_z.jpg
│   ├── 7933423348_c30bd9bd4e_z.jpg
│   ├── 8053677163_d4c8f416be_z.jpg
│   ├── 8239308689_efa6c11b08_z.jpg
│   ├── 8433365521_9252889f9a_z.jpg
│   ├── 8512296263_5fc5458e20_z.jpg
│   ├── 8699757338_c3941051b6_z.jpg
│   ├── 8734543718_37f6b8bd45_z.jpg
│   ├── 8829708882_48f263491e_z.jpg
│   ├── 9118579087_f9ffa19e63_z.jpg
│   └── 9247489789_132c0d534a_z.jpg
├── model.py
├── nms
│   ├── build.py
│   ├── nms_wrapper.py
│   ├── pth_nms.py
│   └── src
│       ├── cuda
│       │   ├── nms_kernel.cu
│       │   └── nms_kernel.h
│       ├── nms.c
│       ├── nms.h
│       ├── nms_cuda.c
│       └── nms_cuda.h
├── roialign
│   └── roi_align
│       ├── build.py
│       ├── crop_and_resize.py
│       ├── roi_align.py
│       └── src
│           ├── crop_and_resize.c
│           ├── crop_and_resize.h
│           ├── crop_and_resize_gpu.c
│           ├── crop_and_resize_gpu.h
│           └── cuda
│               ├── crop_and_resize_kernel.cu
│               └── crop_and_resize_kernel.h
├── synthia.py
├── test.txt
├── train.txt
├── utils.py
└── visualize.py
/LICENSE:
--------------------------------------------------------------------------------
1 | Mask R-CNN
2 |
3 | The MIT License (MIT)
4 |
5 | Copyright (c) 2017 Matterport, Inc.
6 |
7 | Permission is hereby granted, free of charge, to any person obtaining a copy
8 | of this software and associated documentation files (the "Software"), to deal
9 | in the Software without restriction, including without limitation the rights
10 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
11 | copies of the Software, and to permit persons to whom the Software is
12 | furnished to do so, subject to the following conditions:
13 |
14 | The above copyright notice and this permission notice shall be included in
15 | all copies or substantial portions of the Software.
16 |
17 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
18 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
19 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
20 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
21 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
22 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
23 | THE SOFTWARE.
24 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Mask_RCNN_Pytorch
2 |
3 | This is an implementation of the instance segmentation model [Mask R-CNN](https://arxiv.org/abs/1703.06870) in PyTorch, based on the previous work of [Matterport](https://github.com/matterport/Mask_RCNN) and [lasseha](https://github.com/multimodallearning/pytorch-mask-rcnn). Matterport's repository is a Keras/TensorFlow implementation, while lasseha's is a PyTorch implementation.
4 |
5 | ## Features
6 | Compared with other PyTorch implementations, this repository has the following features:
7 | * It supports multi-image batch training (i.e., batch size > 1).
8 | * It supports PyTorch 0.4.0 (PyTorch 1.0 and later are currently not supported).
9 | * It supports both GPU and CPU, so you can visualize the results even on a machine without a GPU.
10 | * It supports multi-GPU training (please see the instructions [here](https://github.com/jytime/Mask_RCNN_Pytorch/blob/05053cbd00d1dde2ae7edd59f276d1560ce9fe1f/synthia.py#L278)).
11 | * You can train Mask R-CNN on your own dataset (please see [synthia.py](https://github.com/jytime/Mask_RCNN_Pytorch/blob/master/synthia.py), which demonstrates how we trained a model on the [Synthia Dataset](http://synthia-dataset.net/), starting from a model pre-trained on the COCO dataset).
12 | * You can use a model pre-trained on COCO or ImageNet to segment objects in your own images (please see [demo_coco.py](https://github.com/jytime/Mask_RCNN_Pytorch/blob/master/demo_coco.py) or [demo_synthia.py](https://github.com/jytime/Mask_RCNN_Pytorch/blob/master/demo_synthia.py); a minimal usage sketch follows below).
13 |
14 |
15 |
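For orientation, here is a minimal inference sketch in the spirit of demo_coco.py (the weights file name and folder layout are assumptions; the compilation steps below must be completed first):

    import os
    import random
    import skimage.io
    import matplotlib.pyplot as plt
    import torch

    import coco
    import model as modellib
    import visualize

    class InferenceConfig(coco.CocoConfig):
        GPU_COUNT = 1       # set to 0 to run on the CPU
        IMAGES_PER_GPU = 1  # batch size = GPU_COUNT * IMAGES_PER_GPU

    config = InferenceConfig()
    model = modellib.MaskRCNN(model_dir="logs", config=config)
    if config.GPU_COUNT:
        model = model.cuda()
    model.load_state_dict(torch.load("mask_rcnn_coco.pth"))  # pre-trained weights (see link below)

    # Placeholder labels; use the 81 COCO class names from demo_coco.py in practice.
    class_names = ['BG'] + ['class_{}'.format(i) for i in range(1, 81)]

    image = skimage.io.imread(os.path.join("images", random.choice(os.listdir("images"))))
    r = model.detect([image])[0]
    visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                                class_names, r['scores'])
    plt.show()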
16 | ## Requirements
17 | * Python 3
18 | * Linux
19 | * PyTorch 0.4.0
20 | * matplotlib, scipy, skimage, h5py, numpy
21 |
22 | ## Demo
23 | ### [Synthia Dataset](http://synthia-dataset.net/)
24 |
25 |
26 |
27 | ### [COCO dataset](http://cocodataset.org/#home)
28 |
29 |
30 |
31 | ## Compilation
32 | The instructions come from lasseha's repository.
33 | * We use the [Non-Maximum Suppression](https://github.com/ruotianluo/pytorch-faster-rcnn) from ruotianluo and the [RoiAlign](https://github.com/longcw/RoIAlign.pytorch) from longcw. Please follow the instructions below to build the functions.
34 |
35 | cd nms/src/cuda/
36 | nvcc -c -o nms_kernel.cu.o nms_kernel.cu -x cu -Xcompiler -fPIC -arch=arch
37 | cd ../../
38 | python build.py
39 | cd ../
40 |
41 | cd roialign/roi_align/src/cuda/
42 | nvcc -c -o crop_and_resize_kernel.cu.o crop_and_resize_kernel.cu -x cu -Xcompiler -fPIC -arch=arch
43 | cd ../../
44 | python build.py
45 | cd ../../
46 |
47 |
48 | where 'arch' is determined by your GPU model:
49 |
50 | | GPU | TitanX | GTX 960M | GTX 1070 | GTX 1080 (Ti) |
51 | | :--: | :--: | :--: | :--: | :--: |
52 | | arch | sm_52 |sm_50 |sm_61 |sm_61 |
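If your GPU is not listed, you can query its compute capability directly from PyTorch and map it to the corresponding `sm_XX` value (a small helper sketch, assuming a CUDA-enabled PyTorch install):

    import torch

    # Prints the value to pass as -arch, e.g. "sm_61" for a GTX 1080 (Ti).
    major, minor = torch.cuda.get_device_capability(0)
    print("sm_{}{}".format(major, minor))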
53 | * If you want to train the network on the [COCO dataset](http://cocodataset.org/#home), please install the [Python COCO API](https://github.com/cocodataset/cocoapi) and create a symlink.
54 |
55 | ln -s /path/to/coco/cocoapi/PythonAPI/pycocotools/ pycocotools
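After creating the symlink, a quick import check verifies the install (a trivial smoke test, not part of the repo):

    # Succeeds only if pycocotools is importable from the project root.
    from pycocotools.coco import COCO
    print("pycocotools OK")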
56 | * The pretrained models on COCO and ImageNet are available [here](https://drive.google.com/open?id=1LXUgC2IZUYNEoXr05tdqyKFZY0pZyPDc).
57 |
58 | ## Results (COCO)
59 | The training and evaluation are based on the COCO 2014 dataset. To understand the indicators below, please have a look at [pycocotools](https://github.com/cocodataset/cocoapi/tree/master/PythonAPI/pycocotools).
60 | Note that only one GTX 1080 (Ti) was used; performance would likely improve with more GPUs.
61 |
62 |
63 | | Indicator | IoU | area | maxDets | Value|
64 | | :--: | :--: | :--: |:--: | :--: |
65 | |Average Precision (AP) | 0.50:0.95 | all | 100 | 0.392|
66 | |Average Precision (AP) | 0.50 | all | 100 | 0.574|
67 | |Average Precision (AP) | 0.75 | all | 100 | 0.434|
68 | |Average Precision (AP) | 0.50:0.95 | small | 100 | 0.199|
69 | |Average Precision (AP) | 0.50:0.95 | medium | 100 | 0.448|
70 | |Average Precision (AP) | 0.50:0.95 | large | 100 | 0.575|
71 | |Average Recall (AR) | 0.50:0.95 | all | 1 | 0.321|
72 | |Average Recall (AR) | 0.50:0.95 | all | 10 | 0.445|
73 | |Average Recall (AR) | 0.50:0.95 | all | 100 | 0.457|
74 | |Average Recall (AR) | 0.50:0.95 | small | 100 | 0.231|
75 | |Average Recall (AR) | 0.50:0.95 | medium | 100 | 0.508|
76 | |Average Recall (AR) | 0.50:0.95 | large | 100 | 0.645|
77 |
78 |
--------------------------------------------------------------------------------
/assets/2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/assets/2.png
--------------------------------------------------------------------------------
/assets/coco.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/assets/coco.jpeg
--------------------------------------------------------------------------------
/assets/detection_anchors.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/assets/detection_anchors.png
--------------------------------------------------------------------------------
/assets/detection_final.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/assets/detection_final.png
--------------------------------------------------------------------------------
/assets/detection_masks.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/assets/detection_masks.png
--------------------------------------------------------------------------------
/assets/detection_refinement.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/assets/detection_refinement.png
--------------------------------------------------------------------------------
/assets/park.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/assets/park.png
--------------------------------------------------------------------------------
/assets/street.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/assets/street.png
--------------------------------------------------------------------------------
/assets/synthia.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/assets/synthia.jpeg
--------------------------------------------------------------------------------
/assets/synthia2.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/assets/synthia2.jpeg
--------------------------------------------------------------------------------
/coco.py:
--------------------------------------------------------------------------------
1 | """
2 | Mask R-CNN
3 | Configurations and data loading code for MS COCO.
4 |
5 | Copyright (c) 2017 Matterport, Inc.
6 | Licensed under the MIT License (see LICENSE for details)
7 | Written by Waleed Abdulla
8 |
9 | ------------------------------------------------------------
10 |
11 | Usage: import the module (see Jupyter notebooks for examples), or run from
12 | the command line as such:
13 |
14 | # Train a new model starting from pre-trained COCO weights
15 | python3 coco.py train --dataset=/path/to/coco/ --model=coco
16 |
17 | # Train a new model starting from ImageNet weights
18 | python3 coco.py train --dataset=/path/to/coco/ --model=imagenet
19 |
20 | # Continue training a model that you had trained earlier
21 | python3 coco.py train --dataset=/path/to/coco/ --model=/path/to/weights.h5
22 |
23 | # Continue training the last model you trained
24 | python3 coco.py train --dataset=/path/to/coco/ --model=last
25 |
26 | # Run COCO evaluation on the last model you trained
27 | python3 coco.py evaluate --dataset=/path/to/coco/ --model=last
28 | """
29 |
30 | import os
31 | import time
32 | import numpy as np
33 |
34 | # Download and install the Python COCO tools from https://github.com/waleedka/coco
35 | # That's a fork from the original https://github.com/pdollar/coco with a bug
36 | # fix for Python 3.
37 | # I submitted a pull request https://github.com/cocodataset/cocoapi/pull/50
38 | # If the PR is merged then use the original repo.
39 | # Note: Edit PythonAPI/Makefile and replace "python" with "python3".
40 | from pycocotools.coco import COCO
41 | from pycocotools.cocoeval import COCOeval
42 | from pycocotools import mask as maskUtils
43 |
44 | import zipfile
45 | import urllib.request
46 | import shutil
47 |
48 | from config import Config
49 | import utils
50 | import model as modellib
51 |
52 | import torch
53 |
54 | # Root directory of the project
55 | ROOT_DIR = os.getcwd()
56 |
57 | # Path to trained weights file
58 | COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.pth")
59 |
60 | # Directory to save logs and model checkpoints, if not provided
61 | # through the command line argument --logs
62 | DEFAULT_LOGS_DIR = os.path.join(ROOT_DIR, "logs")
63 | DEFAULT_DATASET_YEAR = "2014"
64 |
65 | ############################################################
66 | # Configurations
67 | ############################################################
68 |
69 | class CocoConfig(Config):
70 | """Configuration for training on MS COCO.
71 | Derives from the base Config class and overrides values specific
72 | to the COCO dataset.
73 | """
74 | # Give the configuration a recognizable name
75 | NAME = "coco"
76 |
77 | # Number of images to train with on each GPU. Adjust based on your GPU
78 | # memory; the effective batch size is GPU_COUNT * IMAGES_PER_GPU.
79 | IMAGES_PER_GPU = 16
80 |
81 | # Uncomment to train on 8 GPUs (default is 1)
82 | # GPU_COUNT = 8
83 |
84 | # Number of classes (including background)
85 | NUM_CLASSES = 1 + 80 # COCO has 80 classes
86 |
87 |
88 | ############################################################
89 | # Dataset
90 | ############################################################
91 |
92 | class CocoDataset(utils.Dataset):
93 | def load_coco(self, dataset_dir, subset, year=DEFAULT_DATASET_YEAR, class_ids=None,
94 | class_map=None, return_coco=False, auto_download=False):
95 | """Load a subset of the COCO dataset.
96 | dataset_dir: The root directory of the COCO dataset.
97 | subset: What to load (train, val, minival, valminusminival)
98 | year: What dataset year to load (2014, 2017) as a string, not an integer
99 | class_ids: If provided, only loads images that have the given classes.
100 | class_map: TODO: Not implemented yet. Supports mapping classes from
101 | different datasets to the same class ID.
102 | return_coco: If True, returns the COCO object.
103 | auto_download: Automatically download and unzip MS-COCO images and annotations
104 | """
105 |
106 | if auto_download is True:
107 | self.auto_download(dataset_dir, subset, year)
108 |
109 | coco = COCO("{}/annotations/instances_{}{}.json".format(dataset_dir, subset, year))
110 | if subset == "minival" or subset == "valminusminival":
111 | subset = "val"
112 | image_dir = "{}/{}{}".format(dataset_dir, subset, year)
113 |
114 | # Load all classes or a subset?
115 | if not class_ids:
116 | # All classes
117 | class_ids = sorted(coco.getCatIds())
118 |
119 | # All images or a subset?
120 | if class_ids:
121 | image_ids = []
122 | for id in class_ids:
123 | image_ids.extend(list(coco.getImgIds(catIds=[id])))
124 | # Remove duplicates
125 | image_ids = list(set(image_ids))
126 | else:
127 | # All images
128 | image_ids = list(coco.imgs.keys())
129 |
130 | # Add classes
131 | for i in class_ids:
132 | self.add_class("coco", i, coco.loadCats(i)[0]["name"])
133 |
134 | # Add images
135 | for i in image_ids:
136 | self.add_image(
137 | "coco", image_id=i,
138 | path=os.path.join(image_dir, coco.imgs[i]['file_name']),
139 | width=coco.imgs[i]["width"],
140 | height=coco.imgs[i]["height"],
141 | annotations=coco.loadAnns(coco.getAnnIds(
142 | imgIds=[i], catIds=class_ids, iscrowd=None)))
143 | if return_coco:
144 | return coco
145 |
146 | def auto_download(self, dataDir, dataType, dataYear):
147 | """Download the COCO dataset/annotations if requested.
148 | dataDir: The root directory of the COCO dataset.
149 | dataType: What to load (train, val, minival, valminusminival)
150 | dataYear: What dataset year to load (2014, 2017) as a string, not an integer
151 | Note:
152 | For 2014, use "train", "val", "minival", or "valminusminival"
153 | For 2017, only "train" and "val" annotations are available
154 | """
155 |
156 | # Setup paths and file names
157 | if dataType == "minival" or dataType == "valminusminival":
158 | imgDir = "{}/{}{}".format(dataDir, "val", dataYear)
159 | imgZipFile = "{}/{}{}.zip".format(dataDir, "val", dataYear)
160 | imgURL = "http://images.cocodataset.org/zips/{}{}.zip".format("val", dataYear)
161 | else:
162 | imgDir = "{}/{}{}".format(dataDir, dataType, dataYear)
163 | imgZipFile = "{}/{}{}.zip".format(dataDir, dataType, dataYear)
164 | imgURL = "http://images.cocodataset.org/zips/{}{}.zip".format(dataType, dataYear)
165 | # print("Image paths:"); print(imgDir); print(imgZipFile); print(imgURL)
166 |
167 | # Create main folder if it doesn't exist yet
168 | if not os.path.exists(dataDir):
169 | os.makedirs(dataDir)
170 |
171 | # Download images if not available locally
172 | if not os.path.exists(imgDir):
173 | os.makedirs(imgDir)
174 | print("Downloading images to " + imgZipFile + " ...")
175 | with urllib.request.urlopen(imgURL) as resp, open(imgZipFile, 'wb') as out:
176 | shutil.copyfileobj(resp, out)
177 | print("... done downloading.")
178 | print("Unzipping " + imgZipFile)
179 | with zipfile.ZipFile(imgZipFile, "r") as zip_ref:
180 | zip_ref.extractall(dataDir)
181 | print("... done unzipping")
182 | print("Will use images in " + imgDir)
183 |
184 | # Setup annotations data paths
185 | annDir = "{}/annotations".format(dataDir)
186 | if dataType == "minival":
187 | annZipFile = "{}/instances_minival2014.json.zip".format(dataDir)
188 | annFile = "{}/instances_minival2014.json".format(annDir)
189 | annURL = "https://dl.dropboxusercontent.com/s/o43o90bna78omob/instances_minival2014.json.zip?dl=0"
190 | unZipDir = annDir
191 | elif dataType == "valminusminival":
192 | annZipFile = "{}/instances_valminusminival2014.json.zip".format(dataDir)
193 | annFile = "{}/instances_valminusminival2014.json".format(annDir)
194 | annURL = "https://dl.dropboxusercontent.com/s/s3tw5zcg7395368/instances_valminusminival2014.json.zip?dl=0"
195 | unZipDir = annDir
196 | else:
197 | annZipFile = "{}/annotations_trainval{}.zip".format(dataDir, dataYear)
198 | annFile = "{}/instances_{}{}.json".format(annDir, dataType, dataYear)
199 | annURL = "http://images.cocodataset.org/annotations/annotations_trainval{}.zip".format(dataYear)
200 | unZipDir = dataDir
201 | # print("Annotations paths:"); print(annDir); print(annFile); print(annZipFile); print(annURL)
202 |
203 | # Download annotations if not available locally
204 | if not os.path.exists(annDir):
205 | os.makedirs(annDir)
206 | if not os.path.exists(annFile):
207 | if not os.path.exists(annZipFile):
208 | print("Downloading zipped annotations to " + annZipFile + " ...")
209 | with urllib.request.urlopen(annURL) as resp, open(annZipFile, 'wb') as out:
210 | shutil.copyfileobj(resp, out)
211 | print("... done downloading.")
212 | print("Unzipping " + annZipFile)
213 | with zipfile.ZipFile(annZipFile, "r") as zip_ref:
214 | zip_ref.extractall(unZipDir)
215 | print("... done unzipping")
216 | print("Will use annotations in " + annFile)
217 |
218 | def load_mask(self, image_id):
219 | """Load instance masks for the given image.
220 |
221 | Different datasets use different ways to store masks. This
222 | function converts the different mask format to one format
223 | in the form of a bitmap [height, width, instances].
224 |
225 | Returns:
226 | masks: A bool array of shape [height, width, instance count] with
227 | one mask per instance.
228 | class_ids: a 1D array of class IDs of the instance masks.
229 | """
230 | # If not a COCO image, delegate to parent class.
231 | image_info = self.image_info[image_id]
232 | if image_info["source"] != "coco":
233 | return super(CocoDataset, self).load_mask(image_id)
234 |
235 | instance_masks = []
236 | class_ids = []
237 | annotations = self.image_info[image_id]["annotations"]
238 | # Build mask of shape [height, width, instance_count] and list
239 | # of class IDs that correspond to each channel of the mask.
240 | for annotation in annotations:
241 | class_id = self.map_source_class_id(
242 | "coco.{}".format(annotation['category_id']))
243 | if class_id:
244 | m = self.annToMask(annotation, image_info["height"],
245 | image_info["width"])
246 | # Some objects are so small that they're less than 1 pixel area
247 | # and end up rounded out. Skip those objects.
248 | if m.max() < 1:
249 | continue
250 | # Is it a crowd? If so, use a negative class ID.
251 | if annotation['iscrowd']:
252 | # Use negative class ID for crowds
253 | class_id *= -1
254 | # For crowd masks, annToMask() sometimes returns a mask
255 | # smaller than the given dimensions. If so, resize it.
256 | if m.shape[0] != image_info["height"] or m.shape[1] != image_info["width"]:
257 | m = np.ones([image_info["height"], image_info["width"]], dtype=bool)
258 | instance_masks.append(m)
259 | class_ids.append(class_id)
260 |
261 | # Pack instance masks into an array
262 | if class_ids:
263 | mask = np.stack(instance_masks, axis=2)
264 | class_ids = np.array(class_ids, dtype=np.int32)
265 | return mask, class_ids
266 | else:
267 | # Call super class to return an empty mask
268 | return super(CocoDataset, self).load_mask(image_id)
269 |
270 | def image_reference(self, image_id):
271 | """Return a link to the image in the COCO Website."""
272 | info = self.image_info[image_id]
273 | if info["source"] == "coco":
274 | return "http://cocodataset.org/#explore?id={}".format(info["id"])
275 | else:
276 | return super(CocoDataset, self).image_reference(image_id)
277 |
278 | # The following two functions are from pycocotools with a few changes.
279 |
280 | def annToRLE(self, ann, height, width):
281 | """
282 | Convert annotation which can be polygons, uncompressed RLE to RLE.
283 | :return: binary mask (numpy 2D array)
284 | """
285 | segm = ann['segmentation']
286 | if isinstance(segm, list):
287 | # polygon -- a single object might consist of multiple parts
288 | # we merge all parts into one mask rle code
289 | rles = maskUtils.frPyObjects(segm, height, width)
290 | rle = maskUtils.merge(rles)
291 | elif isinstance(segm['counts'], list):
292 | # uncompressed RLE
293 | rle = maskUtils.frPyObjects(segm, height, width)
294 | else:
295 | # rle
296 | rle = ann['segmentation']
297 | return rle
298 |
299 | def annToMask(self, ann, height, width):
300 | """
301 | Convert annotation which can be polygons, uncompressed RLE, or RLE to binary mask.
302 | :return: binary mask (numpy 2D array)
303 | """
304 | rle = self.annToRLE(ann, height, width)
305 | m = maskUtils.decode(rle)
306 | return m
307 |
308 |
309 | ############################################################
310 | # COCO Evaluation
311 | ############################################################
312 |
313 | def build_coco_results(dataset, image_ids, rois, class_ids, scores, masks):
314 | """Arrange resutls to match COCO specs in http://cocodataset.org/#format
315 | """
316 | # If no results, return an empty list
317 | if rois is None:
318 | return []
319 |
320 | results = []
321 | for image_id in image_ids:
322 | # Loop through detections
323 | for i in range(rois.shape[0]):
324 | class_id = class_ids[i]
325 | score = scores[i]
326 | bbox = np.around(rois[i], 1)
327 | mask = masks[:, :, i]
328 |
329 | result = {
330 | "image_id": image_id,
331 | "category_id": dataset.get_source_class_id(class_id, "coco"),
332 | "bbox": [bbox[1], bbox[0], bbox[3] - bbox[1], bbox[2] - bbox[0]],
333 | "score": score,
334 | "segmentation": maskUtils.encode(np.asfortranarray(mask))
335 | }
336 | results.append(result)
337 | return results
338 |
339 |
340 | def evaluate_coco(model, dataset, coco, eval_type="bbox", limit=0, image_ids=None):
341 | """Runs official COCO evaluation.
342 | dataset: A Dataset object with validation data
343 | eval_type: "bbox" or "segm" for bounding box or segmentation evaluation
344 | limit: if not 0, it's the number of images to use for evaluation
345 | """
346 | # Pick COCO images from the dataset
347 | image_ids = image_ids or dataset.image_ids
348 |
349 | # Limit to a subset
350 | if limit:
351 | image_ids = image_ids[:limit]
352 |
353 | # Get corresponding COCO image IDs.
354 | coco_image_ids = [dataset.image_info[id]["id"] for id in image_ids]
355 |
356 | t_prediction = 0
357 | t_start = time.time()
358 |
359 | results = []
360 | for i, image_id in enumerate(image_ids):
361 | # Load image
362 | image = dataset.load_image(image_id)
363 |
364 | # Run detection
365 | t = time.time()
366 | r = model.detect([image])[0]
367 | t_prediction += (time.time() - t)
368 |
369 | # Convert results to COCO format
370 | image_results = build_coco_results(dataset, coco_image_ids[i:i + 1],
371 | r["rois"], r["class_ids"],
372 | r["scores"], r["masks"])
373 | results.extend(image_results)
374 |
375 | # Load results. This modifies results with additional attributes.
376 | coco_results = coco.loadRes(results)
377 |
378 | # Evaluate
379 | cocoEval = COCOeval(coco, coco_results, eval_type)
380 | cocoEval.params.imgIds = coco_image_ids
381 | cocoEval.evaluate()
382 | cocoEval.accumulate()
383 | cocoEval.summarize()
384 |
385 | print("Prediction time: {}. Average {}/image".format(
386 | t_prediction, t_prediction / len(image_ids)))
387 | print("Total time: ", time.time() - t_start)
388 |
389 |
390 | ############################################################
391 | # Training
392 | ############################################################
393 |
394 |
395 | if __name__ == '__main__':
396 | import argparse
397 |
398 | # Parse command line arguments
399 | parser = argparse.ArgumentParser(
400 | description='Train Mask R-CNN on MS COCO.')
401 | parser.add_argument("command",
402 | metavar="",
403 | help="'train' or 'evaluate' on MS COCO")
404 | parser.add_argument('--dataset', required=True,
405 | metavar="/path/to/coco/",
406 | help='Directory of the MS-COCO dataset')
407 | parser.add_argument('--year', required=False,
408 | default=DEFAULT_DATASET_YEAR,
409 | metavar="",
410 | help='Year of the MS-COCO dataset (2014 or 2017) (default=2014)')
411 | parser.add_argument('--model', required=False,
412 | metavar="/path/to/weights.pth",
413 | help="Path to weights .pth file or 'coco'")
414 | parser.add_argument('--logs', required=False,
415 | default=DEFAULT_LOGS_DIR,
416 | metavar="/path/to/logs/",
417 | help='Logs and checkpoints directory (default=logs/)')
418 | parser.add_argument('--limit', required=False,
419 | default=500,
420 | metavar="",
421 | help='Images to use for evaluation (default=500)')
422 | parser.add_argument('--download', required=False,
423 | default=False,
424 | metavar="",
425 | help='Automatically download and unzip MS-COCO files (default=False)',
426 | type=bool)
427 | parser.add_argument('--lr', required=False,
428 | default=0.001,
429 | help='Learning rate')
430 | parser.add_argument('--batchsize', required=False,
431 | default=4,
432 | help='Batch size')
433 | parser.add_argument('--steps', required=False,
434 | default=200,
435 | help='steps per epoch')
436 | parser.add_argument('--device', required=False,
437 | default="gpu",
438 | help='gpu or cpu')
439 | args = parser.parse_args()
440 |
441 | print("Command: ", args.command)
442 | print("Model: ", args.model)
443 | print("Dataset: ", args.dataset)
444 | print("Year: ", args.year)
445 | print("Logs: ", args.logs)
446 | print("Auto Download: ", args.download)
447 |
448 | # Configurations
449 | if args.command == "train":
450 | config = CocoConfig()
451 | else:
452 | class InferenceConfig(CocoConfig):
453 | # Set batch size to 1 since we'll be running inference on
454 | # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
455 | GPU_COUNT = 1
456 | IMAGES_PER_GPU = 1
457 | DETECTION_MIN_CONFIDENCE = 0
458 | config = InferenceConfig()
459 | config.display()
460 |
461 | # Create model. The same constructor is used for training and
462 | # evaluation; the behavior is controlled entirely by the config.
463 | model = modellib.MaskRCNN(config=config,
464 | model_dir=args.logs)
468 |
469 | # Select Device
470 | if args.device == "gpu":
471 | device = torch.device("cuda")
472 | else:
473 | device = torch.device("cpu")
474 |
475 | model = model.to(device)
476 |
477 | # Select weights file to load
478 | if args.model:
479 | if args.model.lower() == "coco":
480 | model_path = COCO_MODEL_PATH
481 | elif args.model.lower() == "last":
482 | # Find last trained weights
483 | model_path = model.find_last()[1]
484 | elif args.model.lower() == "imagenet":
485 | # Start from ImageNet trained weights
486 | model_path = config.IMAGENET_MODEL_PATH
487 | else:
488 | model_path = args.model
489 | else:
490 | model_path = ""
491 |
492 | # Load weights
493 | print("Loading weights ", model_path)
494 | model.load_weights(model_path)
495 |
496 | # input parameters
497 | lr=float(args.lr)
498 | batchsize=int(args.batchsize)
499 | steps=int(args.steps)
500 |
501 | # Train or evaluate
502 | if args.command == "train":
503 | # Training dataset. Use the training set and 35K from the
504 | # validation set, as in the Mask R-CNN paper.
505 | dataset_train = CocoDataset()
506 | dataset_train.load_coco(args.dataset, "train", year=args.year, auto_download=args.download)
507 | dataset_train.load_coco(args.dataset, "valminusminival", year=args.year, auto_download=args.download)
508 | dataset_train.prepare()
509 |
510 | # Validation dataset
511 | dataset_val = CocoDataset()
512 | dataset_val.load_coco(args.dataset, "minival", year=args.year, auto_download=args.download)
513 | dataset_val.prepare()
514 |
515 | # *** This training schedule is an example. Update to your needs ***
516 |
517 | # Training - Stage 1
518 | print("Training network heads")
519 | model.train_model(dataset_train, dataset_val,
520 | learning_rate=config.LEARNING_RATE,
521 | epochs=40,
522 | BatchSize=batchsize,
523 | steps=steps,
524 | layers='heads')
525 |
526 | # Training - Stage 2
527 | # Finetune layers from ResNet stage 4 and up
528 | print("Fine tune Resnet stage 4 and up")
529 | model.train_model(dataset_train, dataset_val,
530 | learning_rate=config.LEARNING_RATE,
531 | epochs=120,
532 | BatchSize=batchsize,
533 | steps=steps,
534 | layers='4+')
535 |
536 | # Training - Stage 3
537 | # Fine tune all layers
538 | print("Fine tune all layers")
539 | model.train_model(dataset_train, dataset_val,
540 | learning_rate=config.LEARNING_RATE / 10,
541 | epochs=160,
542 | BatchSize=batchsize,
543 | steps=steps,
544 | layers='all')
545 |
546 | elif args.command == "evaluate":
547 | # Validation dataset
548 | dataset_val = CocoDataset()
549 | coco = dataset_val.load_coco(args.dataset, "minival", year=args.year, return_coco=True, auto_download=args.download)
550 | dataset_val.prepare()
551 | print("Running COCO evaluation on {} images.".format(args.limit))
552 | evaluate_coco(model, dataset_val, coco, "bbox", limit=int(args.limit))
553 | evaluate_coco(model, dataset_val, coco, "segm", limit=int(args.limit))
554 | else:
555 | print("'{}' is not recognized. "
556 | "Use 'train' or 'evaluate'".format(args.command))
557 |
--------------------------------------------------------------------------------
/config.py:
--------------------------------------------------------------------------------
1 | """
2 | Mask R-CNN
3 | Base Configurations class.
4 |
5 | Copyright (c) 2017 Matterport, Inc.
6 | Licensed under the MIT License (see LICENSE for details)
7 | Written by Waleed Abdulla
8 | """
9 |
10 | import math
11 | import numpy as np
12 | import os
13 |
14 |
15 | # Base Configuration Class
16 | # Don't use this class directly. Instead, sub-class it and override
17 | # the configurations you need to change.
18 |
19 | class Config(object):
20 | """Base configuration class. For custom configurations, create a
21 | sub-class that inherits from this one and override properties
22 | that need to be changed.
23 | """
24 | # Name the configurations. For example, 'COCO', 'Experiment 3', ...etc.
25 | # Useful if your code needs to do things differently depending on which
26 | # experiment is running.
27 | NAME = None # Override in sub-classes
28 |
29 | # Path to pretrained imagenet model
30 | IMAGENET_MODEL_PATH = os.path.join(os.getcwd(), "resnet50_imagenet.pth")
31 |
32 | # NUMBER OF GPUs to use. For CPU use 0
33 | GPU_COUNT = 1
34 |
35 | # Number of images to train with on each GPU. A 12GB GPU can typically
36 | # handle 2 images of 1024x1024px.
37 | # Adjust based on your GPU memory and image sizes. Use the highest
38 | # number that your GPU can handle for best performance.
39 | IMAGES_PER_GPU = 1
40 |
41 | # Number of training steps per epoch
42 | # This doesn't need to match the size of the training set. Tensorboard
43 | # updates are saved at the end of each epoch, so setting this to a
44 | # smaller number means getting more frequent TensorBoard updates.
45 | # Validation stats are also calculated at each epoch end and they
46 | # might take a while, so don't set this too small to avoid spending
47 | # a lot of time on validation stats.
48 | STEPS_PER_EPOCH = 1000
49 |
50 | # Number of validation steps to run at the end of every training epoch.
51 | # A bigger number improves accuracy of validation stats, but slows
52 | # down the training.
53 | VALIDATION_STEPS = 50
54 |
55 | # The strides of each layer of the FPN Pyramid. These values
56 | # are based on a Resnet101 backbone.
57 | BACKBONE_STRIDES = [4, 8, 16, 32, 64]
58 |
59 | # Number of classification classes (including background)
60 | NUM_CLASSES = 1 # Override in sub-classes
61 |
62 | # Length of square anchor side in pixels
63 | RPN_ANCHOR_SCALES = (32, 64, 128, 256, 512)
64 |
65 | # Ratios of anchors at each cell (width/height)
66 | # A value of 1 represents a square anchor, and 0.5 is a tall anchor (width half the height)
67 | RPN_ANCHOR_RATIOS = [0.5, 1, 2]
68 |
69 | # Anchor stride
70 | # If 1 then anchors are created for each cell in the backbone feature map.
71 | # If 2, then anchors are created for every other cell, and so on.
72 | RPN_ANCHOR_STRIDE = 1
73 |
74 | # Non-max suppression threshold to filter RPN proposals.
75 | # You can reduce this during training to generate more proposals.
76 | RPN_NMS_THRESHOLD = 0.7
77 |
78 | # How many anchors per image to use for RPN training
79 | RPN_TRAIN_ANCHORS_PER_IMAGE = 256
80 |
81 | # ROIs kept after non-maximum supression (training and inference)
82 | POST_NMS_ROIS_TRAINING = 2000
83 | POST_NMS_ROIS_INFERENCE = 1000
84 |
85 | # If enabled, resizes instance masks to a smaller size to reduce
86 | # memory load. Recommended when using high-resolution images.
87 | USE_MINI_MASK = True
88 | MINI_MASK_SHAPE = (56, 56) # (height, width) of the mini-mask
89 |
90 | # Input image resizing
91 | # Images are resized such that the smallest side is >= IMAGE_MIN_DIM and
92 | # the longest side is <= IMAGE_MAX_DIM. In case both conditions can't
93 | # be satisfied together the IMAGE_MAX_DIM is enforced.
94 | IMAGE_MIN_DIM = 800
95 | IMAGE_MAX_DIM = 1024
96 | # If True, pad images with zeros such that they're (max_dim by max_dim)
97 | IMAGE_PADDING = True # currently, the False option is not supported
98 |
99 | # Image mean (RGB)
100 | MEAN_PIXEL = np.array([123.7, 116.8, 103.9])
101 |
102 | # Number of ROIs per image to feed to classifier/mask heads
103 | # The Mask RCNN paper uses 512 but often the RPN doesn't generate
104 | # enough positive proposals to fill this and keep a positive:negative
105 | # ratio of 1:3. You can increase the number of proposals by adjusting
106 | # the RPN NMS threshold.
107 | TRAIN_ROIS_PER_IMAGE = 200
108 |
109 | # Percent of positive ROIs used to train classifier/mask heads
110 | ROI_POSITIVE_RATIO = 0.33
111 |
112 | # Pooled ROIs
113 | POOL_SIZE = 7
114 | MASK_POOL_SIZE = 14
115 | MASK_SHAPE = [28, 28]
116 |
117 | # Maximum number of ground truth instances to use in one image
118 | MAX_GT_INSTANCES = 100
119 |
120 | # Bounding box refinement standard deviation for RPN and final detections.
121 | RPN_BBOX_STD_DEV = np.array([0.1, 0.1, 0.2, 0.2])
122 | BBOX_STD_DEV = np.array([0.1, 0.1, 0.2, 0.2])
123 |
124 | # Max number of final detections
125 | DETECTION_MAX_INSTANCES = 100
126 |
127 | # Minimum probability value to accept a detected instance
128 | # ROIs below this threshold are skipped
129 | DETECTION_MIN_CONFIDENCE = 0.7
130 |
131 | # Non-maximum suppression threshold for detection
132 | DETECTION_NMS_THRESHOLD = 0.3
133 |
134 | # Learning rate and momentum
135 | # The Mask RCNN paper uses lr=0.02, but on TensorFlow it causes
136 | # weights to explode. Likely due to differences in optimizer
137 | # implementation.
138 | LEARNING_RATE = 0.001
139 | LEARNING_MOMENTUM = 0.9
140 |
141 | # Weight decay regularization
142 | WEIGHT_DECAY = 0.0001
143 |
144 | # Use RPN ROIs or externally generated ROIs for training
145 | # Keep this True for most situations. Set to False if you want to train
146 | # the head branches on ROI generated by code rather than the ROIs from
147 | # the RPN. For example, to debug the classifier head without having to
148 | # train the RPN.
149 | USE_RPN_ROIS = True
150 |
151 | def __init__(self):
152 | """Set values of computed attributes."""
153 | # Effective batch size
154 | if self.GPU_COUNT > 0:
155 | self.BATCH_SIZE = self.IMAGES_PER_GPU * self.GPU_COUNT
156 | else:
157 | self.BATCH_SIZE = self.IMAGES_PER_GPU
158 |
159 | # Adjust step size based on batch size
160 | self.STEPS_PER_EPOCH = self.BATCH_SIZE * self.STEPS_PER_EPOCH
161 |
162 | # Input image size
163 | self.IMAGE_SHAPE = np.array(
164 | [self.IMAGE_MAX_DIM, self.IMAGE_MAX_DIM, 3])
165 |
166 | # Compute backbone size from input image size
167 | self.BACKBONE_SHAPES = np.array(
168 | [[int(math.ceil(self.IMAGE_SHAPE[0] / stride)),
169 | int(math.ceil(self.IMAGE_SHAPE[1] / stride))]
170 | for stride in self.BACKBONE_STRIDES])
171 |
172 | def display(self):
173 | """Display Configuration values."""
174 | print("\nConfigurations:")
175 | for a in dir(self):
176 | if not a.startswith("__") and not callable(getattr(self, a)):
177 | print("{:30} {}".format(a, getattr(self, a)))
178 | print("\n")
179 |
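# A minimal subclassing sketch (illustrative only; the class name and values
# below are placeholders, not part of this repo):
#
#     class TinyConfig(Config):
#         NAME = "tiny"
#         IMAGES_PER_GPU = 2
#         NUM_CLASSES = 1 + 3   # background + 3 object classes
#
#     TinyConfig().display()    # BATCH_SIZE = GPU_COUNT * IMAGES_PER_GPU = 2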
--------------------------------------------------------------------------------
/demo_coco.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import random
4 | import math
5 | import numpy as np
6 | import skimage.io
7 | import matplotlib
8 | import matplotlib.pyplot as plt
9 |
10 | import coco
11 | import utils
12 | import model as modellib
13 | import visualize
14 |
15 | import torch
16 |
17 |
18 | # Root directory of the project
19 | ROOT_DIR = os.getcwd()
20 |
21 | # Directory to save logs and trained model
22 | MODEL_DIR = os.path.join(ROOT_DIR, "logs")
23 |
24 | # Path to trained weights file
25 | # Download this file and place in the root of your
26 | # project (See README file for details)
27 | COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.pth")
28 |
29 | # Directory of images to run detection on
30 | IMAGE_DIR = os.path.join(ROOT_DIR, "images")
31 |
32 | class InferenceConfig(coco.CocoConfig):
33 | # Set batch size to 1 since we'll be running inference on
34 | # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
35 | # GPU_COUNT = 0 for CPU
36 | GPU_COUNT = 1
37 | IMAGES_PER_GPU = 1
38 |
39 | config = InferenceConfig()
40 | config.display()
41 |
42 | # Create model object.
43 | model = modellib.MaskRCNN(model_dir=MODEL_DIR, config=config)
44 | if config.GPU_COUNT:
45 | model = model.cuda()
46 |
47 | # Load weights trained on MS-COCO
48 | model.load_state_dict(torch.load(COCO_MODEL_PATH))
49 |
50 | # COCO Class names
51 | # Index of the class in the list is its ID. For example, to get ID of
52 | # the teddy bear class, use: class_names.index('teddy bear')
53 | class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
54 | 'bus', 'train', 'truck', 'boat', 'traffic light',
55 | 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
56 | 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
57 | 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
58 | 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
59 | 'kite', 'baseball bat', 'baseball glove', 'skateboard',
60 | 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
61 | 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
62 | 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
63 | 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
64 | 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
65 | 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
66 | 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
67 | 'teddy bear', 'hair drier', 'toothbrush']
68 |
69 | # Load a random image from the images folder
70 | file_names = next(os.walk(IMAGE_DIR))[2]
71 | image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))
72 |
73 | # Run detection
74 | results = model.detect([image])
75 |
76 | # Visualize results
77 | r = results[0]
78 | visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
79 | class_names, r['scores'])
80 | plt.show()
--------------------------------------------------------------------------------
/demo_synthia.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import random
4 | import math
5 | import numpy as np
6 | import skimage.io
7 | import matplotlib
8 | import matplotlib.pyplot as plt
9 |
10 | import coco
11 | import utils
12 | import model as modellib
13 | import visualize
14 |
15 | import torch
16 | from config import Config
17 |
18 | # Root directory of the project
19 | ROOT_DIR = os.getcwd()
20 |
21 | # Directory to save logs and trained model
22 | MODEL_DIR = os.path.join(ROOT_DIR, "logs")
23 |
24 | device=torch.device("cuda")
25 |
26 | # Path to trained weights file
27 | # Download this file and place in the root of your
28 | # project (See README file for details)
29 | #COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.pth")
30 |
31 | # Directory of images to run detection on
32 | IMAGE_DIR = os.path.join(ROOT_DIR, "images")
33 |
34 | class synthiaConfig(Config):
35 | """Configuration for training on the toy shapes dataset.
36 | Derives from the base Config class and overrides values specific
37 | to the toy shapes dataset.
38 | """
39 | # Give the configuration a recognizable name
40 | NAME = "synthia"
41 |
42 | # Train on 1 GPU and 10 images per GPU. We can put multiple images on each
43 | # GPU because the images are small. Batch size is 10 (GPUs * images/GPU).
44 | GPU_COUNT = 1
45 | IMAGES_PER_GPU = 10
46 |
47 | # Number of classes (including background)
48 | NUM_CLASSES = 1 + 22 # background + 22 classes
49 |
50 | # Use small images for faster training. Set the limits of the small side
51 | # and the large side; together they determine the image shape.
52 | #IMAGE_MIN_DIM = 512
53 | #IMAGE_MAX_DIM = 768
54 | IMAGE_MIN_DIM = 760
55 | IMAGE_MAX_DIM = 1280
56 |
57 | #MEAN_PIXEL = np.array([123.7, 116.8, 103.9,123.7, 116.8, 103.9])
58 | # MEAN_PIXEL = np.array([123.7, 116.8, 103.9,1000])
59 | # Use smaller anchors because our image and objects are small
60 | # RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128) # anchor side in pixels
61 |
62 | # Reduce training ROIs per image because the images are small and have
63 | # few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
64 | #TRAIN_ROIS_PER_IMAGE =
65 |
66 | # Use a small epoch since the data is simple
67 | STEPS_PER_EPOCH = 100
68 |
69 | # use small validation steps since the epoch is small
70 | VALIDATION_STEPS = 20
71 |
72 | class InferenceConfig(synthiaConfig):
73 | # Set batch size to 1 since we'll be running inference on
74 | # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
75 | GPU_COUNT = 1
76 | IMAGES_PER_GPU = 1
77 | #DETECTION_MIN_CONFIDENCE = 0
78 |
79 | config = InferenceConfig()
80 | config.display()
81 |
82 | # Create model object.
83 | model = modellib.MaskRCNN(model_dir=MODEL_DIR, config=config)
84 | if config.GPU_COUNT:
85 | model = model.cuda()
86 |
87 | # Load weights trained on MS-COCO
88 | #model.load_state_dict(torch.load(COCO_MODEL_PATH))
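# Path to a trained SYNTHIA checkpoint; replace with your own weights file.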
89 | model_path = "/mnt/backup/jianyuan/pytorch-mask-rcnn/logs/synthia20180907T2148/mask_rcnn_synthia_0002.pth"
90 | #model.find_last()[1]
91 | model.load_weights(model_path)
92 | # SYNTHIA class names
93 | # Index of the class in the list is its ID. For example, to get the ID of
94 | # the Car class, use: class_names.index('Car')
95 | class_names = ["BG", "sky","Building","Road", "Sidewalk","Fence", "Vegetation","Pole", "Car","Traffic sign","Pedestrian","Bicycle","Motorcycle","Parking-slot" ,"Road-work","Traffic light","Terrain","Rider","Truck", "Bus", "Train", "Wall","Lanemarking"]
96 |
97 | # Load a random image from the images folder
98 | file_names = next(os.walk(IMAGE_DIR))[2]
99 | image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))
100 |
101 | # Run detection
102 | results = model.detect([image],device)
103 |
104 | # Visualize results
105 | r = results[0]
106 | visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
107 | class_names, r['scores'])
108 | plt.show()
--------------------------------------------------------------------------------
/images/1045023827_4ec3e8ba5c_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/1045023827_4ec3e8ba5c_z.jpg
--------------------------------------------------------------------------------
/images/12283150_12d37e6389_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/12283150_12d37e6389_z.jpg
--------------------------------------------------------------------------------
/images/2383514521_1fc8d7b0de_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/2383514521_1fc8d7b0de_z.jpg
--------------------------------------------------------------------------------
/images/2502287818_41e4b0c4fb_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/2502287818_41e4b0c4fb_z.jpg
--------------------------------------------------------------------------------
/images/2516944023_d00345997d_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/2516944023_d00345997d_z.jpg
--------------------------------------------------------------------------------
/images/25691390_f9944f61b5_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/25691390_f9944f61b5_z.jpg
--------------------------------------------------------------------------------
/images/262985539_1709e54576_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/262985539_1709e54576_z.jpg
--------------------------------------------------------------------------------
/images/3132016470_c27baa00e8_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/3132016470_c27baa00e8_z.jpg
--------------------------------------------------------------------------------
/images/3627527276_6fe8cd9bfe_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/3627527276_6fe8cd9bfe_z.jpg
--------------------------------------------------------------------------------
/images/3651581213_f81963d1dd_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/3651581213_f81963d1dd_z.jpg
--------------------------------------------------------------------------------
/images/3800883468_12af3c0b50_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/3800883468_12af3c0b50_z.jpg
--------------------------------------------------------------------------------
/images/3862500489_6fd195d183_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/3862500489_6fd195d183_z.jpg
--------------------------------------------------------------------------------
/images/3878153025_8fde829928_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/3878153025_8fde829928_z.jpg
--------------------------------------------------------------------------------
/images/4410436637_7b0ca36ee7_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/4410436637_7b0ca36ee7_z.jpg
--------------------------------------------------------------------------------
/images/4782628554_668bc31826_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/4782628554_668bc31826_z.jpg
--------------------------------------------------------------------------------
/images/5951960966_d4e1cda5d0_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/5951960966_d4e1cda5d0_z.jpg
--------------------------------------------------------------------------------
/images/6584515005_fce9cec486_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/6584515005_fce9cec486_z.jpg
--------------------------------------------------------------------------------
/images/6821351586_59aa0dc110_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/6821351586_59aa0dc110_z.jpg
--------------------------------------------------------------------------------
/images/7581246086_cf7bbb7255_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/7581246086_cf7bbb7255_z.jpg
--------------------------------------------------------------------------------
/images/7933423348_c30bd9bd4e_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/7933423348_c30bd9bd4e_z.jpg
--------------------------------------------------------------------------------
/images/8053677163_d4c8f416be_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/8053677163_d4c8f416be_z.jpg
--------------------------------------------------------------------------------
/images/8239308689_efa6c11b08_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/8239308689_efa6c11b08_z.jpg
--------------------------------------------------------------------------------
/images/8433365521_9252889f9a_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/8433365521_9252889f9a_z.jpg
--------------------------------------------------------------------------------
/images/8512296263_5fc5458e20_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/8512296263_5fc5458e20_z.jpg
--------------------------------------------------------------------------------
/images/8699757338_c3941051b6_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/8699757338_c3941051b6_z.jpg
--------------------------------------------------------------------------------
/images/8734543718_37f6b8bd45_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/8734543718_37f6b8bd45_z.jpg
--------------------------------------------------------------------------------
/images/8829708882_48f263491e_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/8829708882_48f263491e_z.jpg
--------------------------------------------------------------------------------
/images/9118579087_f9ffa19e63_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/9118579087_f9ffa19e63_z.jpg
--------------------------------------------------------------------------------
/images/9247489789_132c0d534a_z.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jytime/Mask_RCNN_Pytorch/3f5de29e1cc7fca852b8c7e04007d9f461202636/images/9247489789_132c0d534a_z.jpg
--------------------------------------------------------------------------------
/nms/build.py:
--------------------------------------------------------------------------------
1 | import os
2 | import torch
3 | from torch.utils.ffi import create_extension
4 |
5 |
6 | sources = ['src/nms.c']
7 | headers = ['src/nms.h']
8 | defines = []
9 | with_cuda = False
10 |
11 | if torch.cuda.is_available():
12 | print('Including CUDA code.')
13 | sources += ['src/nms_cuda.c']
14 | headers += ['src/nms_cuda.h']
15 | defines += [('WITH_CUDA', None)]
16 | with_cuda = True
17 |
18 | this_file = os.path.dirname(os.path.realpath(__file__))
19 | print(this_file)
20 | extra_objects = ['src/cuda/nms_kernel.cu.o']
21 | extra_objects = [os.path.join(this_file, fname) for fname in extra_objects]
22 |
23 | ffi = create_extension(
24 | '_ext.nms',
25 | headers=headers,
26 | sources=sources,
27 | define_macros=defines,
28 | relative_to=__file__,
29 | with_cuda=with_cuda,
30 | extra_objects=extra_objects
31 | )
32 |
33 | if __name__ == '__main__':
34 | ffi.build()
35 |
--------------------------------------------------------------------------------
/nms/nms_wrapper.py:
--------------------------------------------------------------------------------
1 | # --------------------------------------------------------
2 | # Fast R-CNN
3 | # Copyright (c) 2015 Microsoft
4 | # Licensed under The MIT License [see LICENSE for details]
5 | # Written by Ross Girshick
6 | # --------------------------------------------------------
7 | from __future__ import absolute_import
8 | from __future__ import division
9 | from __future__ import print_function
10 |
11 | from nms.pth_nms import pth_nms
12 |
13 |
14 | def nms(dets, thresh):
15 |     """Dispatch to either the CPU or GPU NMS implementation.
16 |     Accepts dets as a tensor."""
17 | return pth_nms(dets, thresh)
18 |
--------------------------------------------------------------------------------
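
For reference, a minimal sketch of how `nms()` is called once the `_ext` extension has been built. The boxes are hypothetical and use the `(y1, x1, y2, x2, score)` row layout that `pth_nms` expects:

```python
import torch
from nms.nms_wrapper import nms

# Three toy detections; the second heavily overlaps the first.
dets = torch.tensor([[ 10.,  10., 100., 100., 0.9],
                     [ 12.,  12., 102., 102., 0.8],
                     [200., 200., 260., 260., 0.7]])

keep = nms(dets, 0.5)  # indices of the boxes that survive suppression
print(keep)            # expected: tensor([0, 2])
```
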
/nms/pth_nms.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from ._ext import nms
3 | import numpy as np
4 |
5 | def pth_nms(dets, thresh):
6 |     """Performs non-maximum suppression.
7 |     dets has to be an (N, 5) tensor of (y1, x1, y2, x2, score) rows.
8 |     """
9 | if not dets.is_cuda:
10 | x1 = dets[:, 1]
11 | y1 = dets[:, 0]
12 | x2 = dets[:, 3]
13 | y2 = dets[:, 2]
14 | scores = dets[:, 4]
15 |
16 | areas = (x2 - x1 + 1) * (y2 - y1 + 1)
17 | order = scores.sort(0, descending=True)[1]
18 | # order = torch.from_numpy(np.ascontiguousarray(scores.numpy().argsort()[::-1])).long()
19 |
20 | keep = torch.LongTensor(dets.size(0))
21 | num_out = torch.LongTensor(1)
22 | nms.cpu_nms(keep, num_out, dets, order, areas, thresh)
23 |
24 | return keep[:num_out[0]]
25 | else:
26 | x1 = dets[:, 1]
27 | y1 = dets[:, 0]
28 | x2 = dets[:, 3]
29 | y2 = dets[:, 2]
30 | scores = dets[:, 4]
31 |
32 |         areas = (x2 - x1 + 1) * (y2 - y1 + 1)
33 |         order = scores.sort(0, descending=True)[1]
34 |         # order = torch.from_numpy(np.ascontiguousarray(scores.cpu().numpy().argsort()[::-1])).long().cuda()
35 |
36 |         # gpu_nms expects score-sorted boxes, so sort first, then build the
37 |         # (x1, y1, x2, y2, score) view that is handed to the CUDA kernel
38 |         dets = dets[order].contiguous()
39 |
40 |         dets_temp = torch.FloatTensor(dets.size()).cuda()
41 |         dets_temp[:, 0] = dets[:, 1]
42 |         dets_temp[:, 1] = dets[:, 0]
43 |         dets_temp[:, 2] = dets[:, 3]
44 |         dets_temp[:, 3] = dets[:, 2]
45 |         dets_temp[:, 4] = dets[:, 4]
46 |
47 |         keep = torch.LongTensor(dets.size(0))
48 |         num_out = torch.LongTensor(1)
49 |         # keep = torch.cuda.LongTensor(dets.size(0))
50 |         # num_out = torch.cuda.LongTensor(1)
51 |         nms.gpu_nms(keep, num_out, dets_temp, thresh)
52 |
53 |         return order[keep[:num_out[0]].cuda()].contiguous()
54 |         # return order[keep[:num_out[0]]].contiguous()
55 |
56 |
--------------------------------------------------------------------------------
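
To check the suppression logic without compiling the extension, here is a pure-PyTorch re-implementation of the same greedy algorithm (a verification sketch, not part of the repo; it mirrors the CPU path, which suppresses at `ovr >= thresh`):

```python
import torch

def nms_slow(dets, thresh):
    """Greedy NMS over (y1, x1, y2, x2, score) rows, mirroring pth_nms."""
    y1, x1, y2, x2, scores = dets.unbind(dim=1)
    areas = (y2 - y1 + 1) * (x2 - x1 + 1)
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0].item()
        keep.append(i)
        rest = order[1:]
        if rest.numel() == 0:
            break
        # Intersection of box i with every remaining box.
        yy1 = torch.max(y1[i], y1[rest]); xx1 = torch.max(x1[i], x1[rest])
        yy2 = torch.min(y2[i], y2[rest]); xx2 = torch.min(x2[i], x2[rest])
        inter = (yy2 - yy1 + 1).clamp(min=0) * (xx2 - xx1 + 1).clamp(min=0)
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou < thresh]  # keep only boxes below the overlap threshold
    return torch.tensor(keep, dtype=torch.long)
```
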
/nms/src/cuda/nms_kernel.cu:
--------------------------------------------------------------------------------
1 | // ------------------------------------------------------------------
2 | // Faster R-CNN
3 | // Copyright (c) 2015 Microsoft
4 | // Licensed under The MIT License [see fast-rcnn/LICENSE for details]
5 | // Written by Shaoqing Ren
6 | // ------------------------------------------------------------------
7 | #ifdef __cplusplus
8 | extern "C" {
9 | #endif
10 |
11 | #include <stdio.h>
12 | #include <math.h>
13 | #include <float.h>
14 | #include "nms_kernel.h"
15 |
16 | __device__ inline float devIoU(float const * const a, float const * const b) {
17 | float left = fmaxf(a[0], b[0]), right = fminf(a[2], b[2]);
18 | float top = fmaxf(a[1], b[1]), bottom = fminf(a[3], b[3]);
19 | float width = fmaxf(right - left + 1, 0.f), height = fmaxf(bottom - top + 1, 0.f);
20 | float interS = width * height;
21 | float Sa = (a[2] - a[0] + 1) * (a[3] - a[1] + 1);
22 | float Sb = (b[2] - b[0] + 1) * (b[3] - b[1] + 1);
23 | return interS / (Sa + Sb - interS);
24 | }
25 |
26 | __global__ void nms_kernel(const int n_boxes, const float nms_overlap_thresh,
27 | const float *dev_boxes, unsigned long long *dev_mask) {
28 | const int row_start = blockIdx.y;
29 | const int col_start = blockIdx.x;
30 |
31 | // if (row_start > col_start) return;
32 |
33 | const int row_size =
34 | fminf(n_boxes - row_start * threadsPerBlock, threadsPerBlock);
35 | const int col_size =
36 | fminf(n_boxes - col_start * threadsPerBlock, threadsPerBlock);
37 |
38 | __shared__ float block_boxes[threadsPerBlock * 5];
39 | if (threadIdx.x < col_size) {
40 | block_boxes[threadIdx.x * 5 + 0] =
41 | dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 0];
42 | block_boxes[threadIdx.x * 5 + 1] =
43 | dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 1];
44 | block_boxes[threadIdx.x * 5 + 2] =
45 | dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 2];
46 | block_boxes[threadIdx.x * 5 + 3] =
47 | dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 3];
48 | block_boxes[threadIdx.x * 5 + 4] =
49 | dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 4];
50 | }
51 | __syncthreads();
52 |
53 | if (threadIdx.x < row_size) {
54 | const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x;
55 | const float *cur_box = dev_boxes + cur_box_idx * 5;
56 | int i = 0;
57 | unsigned long long t = 0;
58 | int start = 0;
59 | if (row_start == col_start) {
60 | start = threadIdx.x + 1;
61 | }
62 | for (i = start; i < col_size; i++) {
63 | if (devIoU(cur_box, block_boxes + i * 5) > nms_overlap_thresh) {
64 | t |= 1ULL << i;
65 | }
66 | }
67 | const int col_blocks = DIVUP(n_boxes, threadsPerBlock);
68 | dev_mask[cur_box_idx * col_blocks + col_start] = t;
69 | }
70 | }
71 |
72 |
73 | void _nms(int boxes_num, float * boxes_dev,
74 | unsigned long long * mask_dev, float nms_overlap_thresh) {
75 |
76 | dim3 blocks(DIVUP(boxes_num, threadsPerBlock),
77 | DIVUP(boxes_num, threadsPerBlock));
78 | dim3 threads(threadsPerBlock);
79 |   nms_kernel<<<blocks, threads>>>(boxes_num,
80 | nms_overlap_thresh,
81 | boxes_dev,
82 | mask_dev);
83 | }
84 |
85 | #ifdef __cplusplus
86 | }
87 | #endif
88 |
--------------------------------------------------------------------------------
/nms/src/cuda/nms_kernel.h:
--------------------------------------------------------------------------------
1 | #ifndef _NMS_KERNEL
2 | #define _NMS_KERNEL
3 |
4 | #ifdef __cplusplus
5 | extern "C" {
6 | #endif
7 |
8 | #define DIVUP(m,n) ((m) / (n) + ((m) % (n) > 0))
9 | int const threadsPerBlock = sizeof(unsigned long long) * 8;
10 |
11 | void _nms(int boxes_num, float * boxes_dev,
12 | unsigned long long * mask_dev, float nms_overlap_thresh);
13 |
14 | #ifdef __cplusplus
15 | }
16 | #endif
17 |
18 | #endif
19 |
20 |
--------------------------------------------------------------------------------
/nms/src/nms.c:
--------------------------------------------------------------------------------
1 | #include <TH/TH.h>
2 | #include <math.h>
3 |
4 | int cpu_nms(THLongTensor * keep_out, THLongTensor * num_out, THFloatTensor * boxes, THLongTensor * order, THFloatTensor * areas, float nms_overlap_thresh) {
5 | // boxes has to be sorted
6 | THArgCheck(THLongTensor_isContiguous(keep_out), 0, "keep_out must be contiguous");
7 |     THArgCheck(THFloatTensor_isContiguous(boxes), 2, "boxes must be contiguous");
8 |     THArgCheck(THLongTensor_isContiguous(order), 3, "order must be contiguous");
9 |     THArgCheck(THFloatTensor_isContiguous(areas), 4, "areas must be contiguous");
10 | // Number of ROIs
11 | long boxes_num = THFloatTensor_size(boxes, 0);
12 | long boxes_dim = THFloatTensor_size(boxes, 1);
13 |
14 | long * keep_out_flat = THLongTensor_data(keep_out);
15 | float * boxes_flat = THFloatTensor_data(boxes);
16 | long * order_flat = THLongTensor_data(order);
17 | float * areas_flat = THFloatTensor_data(areas);
18 |
19 | THByteTensor* suppressed = THByteTensor_newWithSize1d(boxes_num);
20 | THByteTensor_fill(suppressed, 0);
21 | unsigned char * suppressed_flat = THByteTensor_data(suppressed);
22 |
23 | // nominal indices
24 | int i, j;
25 | // sorted indices
26 | int _i, _j;
27 | // temp variables for box i's (the box currently under consideration)
28 | float ix1, iy1, ix2, iy2, iarea;
29 | // variables for computing overlap with box j (lower scoring box)
30 | float xx1, yy1, xx2, yy2;
31 | float w, h;
32 | float inter, ovr;
33 |
34 | long num_to_keep = 0;
35 | for (_i=0; _i < boxes_num; ++_i) {
36 | i = order_flat[_i];
37 | if (suppressed_flat[i] == 1) {
38 | continue;
39 | }
40 | keep_out_flat[num_to_keep++] = i;
41 | ix1 = boxes_flat[i * boxes_dim];
42 | iy1 = boxes_flat[i * boxes_dim + 1];
43 | ix2 = boxes_flat[i * boxes_dim + 2];
44 | iy2 = boxes_flat[i * boxes_dim + 3];
45 | iarea = areas_flat[i];
46 | for (_j = _i + 1; _j < boxes_num; ++_j) {
47 | j = order_flat[_j];
48 | if (suppressed_flat[j] == 1) {
49 | continue;
50 | }
51 | xx1 = fmaxf(ix1, boxes_flat[j * boxes_dim]);
52 | yy1 = fmaxf(iy1, boxes_flat[j * boxes_dim + 1]);
53 | xx2 = fminf(ix2, boxes_flat[j * boxes_dim + 2]);
54 | yy2 = fminf(iy2, boxes_flat[j * boxes_dim + 3]);
55 | w = fmaxf(0.0, xx2 - xx1 + 1);
56 | h = fmaxf(0.0, yy2 - yy1 + 1);
57 | inter = w * h;
58 | ovr = inter / (iarea + areas_flat[j] - inter);
59 | if (ovr >= nms_overlap_thresh) {
60 | suppressed_flat[j] = 1;
61 | }
62 | }
63 | }
64 |
65 | long *num_out_flat = THLongTensor_data(num_out);
66 | *num_out_flat = num_to_keep;
67 | THByteTensor_free(suppressed);
68 | return 1;
69 | }
--------------------------------------------------------------------------------
/nms/src/nms.h:
--------------------------------------------------------------------------------
1 | int cpu_nms(THLongTensor * keep_out, THLongTensor * num_out, THFloatTensor * boxes, THLongTensor * order, THFloatTensor * areas, float nms_overlap_thresh);
--------------------------------------------------------------------------------
/nms/src/nms_cuda.c:
--------------------------------------------------------------------------------
1 | // ------------------------------------------------------------------
2 | // Faster R-CNN
3 | // Copyright (c) 2015 Microsoft
4 | // Licensed under The MIT License [see fast-rcnn/LICENSE for details]
5 | // Written by Shaoqing Ren
6 | // ------------------------------------------------------------------
7 | #include <TH/TH.h>
8 | #include <THC/THC.h>
9 | #include <stdio.h>
10 | #include <math.h>
11 |
12 | #include "cuda/nms_kernel.h"
13 |
14 |
15 | extern THCState *state;
16 |
17 | int gpu_nms(THLongTensor * keep, THLongTensor* num_out, THCudaTensor * boxes, float nms_overlap_thresh) {
18 | // boxes has to be sorted
19 |     THArgCheck(THLongTensor_isContiguous(keep), 0, "keep must be contiguous");
20 | THArgCheck(THCudaTensor_isContiguous(state, boxes), 2, "boxes must be contiguous");
21 | // Number of ROIs
22 | int boxes_num = THCudaTensor_size(state, boxes, 0);
23 | int boxes_dim = THCudaTensor_size(state, boxes, 1);
24 |
25 | float* boxes_flat = THCudaTensor_data(state, boxes);
26 |
27 | const int col_blocks = DIVUP(boxes_num, threadsPerBlock);
28 | THCudaLongTensor * mask = THCudaLongTensor_newWithSize2d(state, boxes_num, col_blocks);
29 | unsigned long long* mask_flat = THCudaLongTensor_data(state, mask);
30 |
31 | _nms(boxes_num, boxes_flat, mask_flat, nms_overlap_thresh);
32 |
33 | THLongTensor * mask_cpu = THLongTensor_newWithSize2d(boxes_num, col_blocks);
34 | THLongTensor_copyCuda(state, mask_cpu, mask);
35 | THCudaLongTensor_free(state, mask);
36 |
37 | unsigned long long * mask_cpu_flat = THLongTensor_data(mask_cpu);
38 |
39 | THLongTensor * remv_cpu = THLongTensor_newWithSize1d(col_blocks);
40 | unsigned long long* remv_cpu_flat = THLongTensor_data(remv_cpu);
41 | THLongTensor_fill(remv_cpu, 0);
42 |
43 | long * keep_flat = THLongTensor_data(keep);
44 | long num_to_keep = 0;
45 |
46 | int i, j;
47 | for (i = 0; i < boxes_num; i++) {
48 | int nblock = i / threadsPerBlock;
49 | int inblock = i % threadsPerBlock;
50 |
51 | if (!(remv_cpu_flat[nblock] & (1ULL << inblock))) {
52 | keep_flat[num_to_keep++] = i;
53 | unsigned long long *p = &mask_cpu_flat[0] + i * col_blocks;
54 | for (j = nblock; j < col_blocks; j++) {
55 | remv_cpu_flat[j] |= p[j];
56 | }
57 | }
58 | }
59 |
60 | long * num_out_flat = THLongTensor_data(num_out);
61 | * num_out_flat = num_to_keep;
62 |
63 | THLongTensor_free(mask_cpu);
64 | THLongTensor_free(remv_cpu);
65 |
66 | return 1;
67 | }
68 |
--------------------------------------------------------------------------------
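
The host-side loop above (lines 46–58) is the subtle part of the GPU path: `mask[i][j]` is a 64-bit word whose bit `k` says that box `i` suppresses box `j*64 + k`, and the scan folds those words into `remv` in score order. A pure-Python analogue of that scan, as an illustrative sketch:

```python
def scan_suppression_masks(mask, n_boxes, threads_per_block=64):
    """mask[i][j]: 64-bit word; bit k set => box i suppresses box j*64 + k."""
    col_blocks = (n_boxes + threads_per_block - 1) // threads_per_block  # DIVUP
    remv = [0] * col_blocks      # accumulated "already suppressed" bits
    keep = []
    for i in range(n_boxes):     # boxes are assumed sorted by score
        nblock, inblock = divmod(i, threads_per_block)
        if not (remv[nblock] >> inblock) & 1:    # box i has not been suppressed
            keep.append(i)
            for j in range(nblock, col_blocks):  # fold in everything i suppresses
                remv[j] |= mask[i][j]
    return keep
```
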
/nms/src/nms_cuda.h:
--------------------------------------------------------------------------------
1 | int gpu_nms(THLongTensor * keep_out, THLongTensor* num_out, THCudaTensor * boxes, float nms_overlap_thresh);
--------------------------------------------------------------------------------
/roialign/roi_align/build.py:
--------------------------------------------------------------------------------
1 | import os
2 | import torch
3 | from torch.utils.ffi import create_extension
4 |
5 |
6 | sources = ['src/crop_and_resize.c']
7 | headers = ['src/crop_and_resize.h']
8 | defines = []
9 | with_cuda = False
10 |
11 | extra_objects = []
12 | if torch.cuda.is_available():
13 | print('Including CUDA code.')
14 | sources += ['src/crop_and_resize_gpu.c']
15 | headers += ['src/crop_and_resize_gpu.h']
16 | defines += [('WITH_CUDA', None)]
17 | extra_objects += ['src/cuda/crop_and_resize_kernel.cu.o']
18 | with_cuda = True
19 |
20 | extra_compile_args = ['-fopenmp', '-std=c99']
21 |
22 | this_file = os.path.dirname(os.path.realpath(__file__))
23 | print(this_file)
24 | sources = [os.path.join(this_file, fname) for fname in sources]
25 | headers = [os.path.join(this_file, fname) for fname in headers]
26 | extra_objects = [os.path.join(this_file, fname) for fname in extra_objects]
27 |
28 | ffi = create_extension(
29 | '_ext.crop_and_resize',
30 | headers=headers,
31 | sources=sources,
32 | define_macros=defines,
33 | relative_to=__file__,
34 | with_cuda=with_cuda,
35 | extra_objects=extra_objects,
36 | extra_compile_args=extra_compile_args
37 | )
38 |
39 | if __name__ == '__main__':
40 | ffi.build()
41 |
--------------------------------------------------------------------------------
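
As with the NMS extension, the CUDA kernel here is linked as a precompiled object: `crop_and_resize_kernel.cu` has to be compiled to `crop_and_resize_kernel.cu.o` first (the same kind of `nvcc -c ... -x cu -Xcompiler -fPIC` invocation, run from `roialign/roi_align/src/cuda/`) before this `build.py` can link it.
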
/roialign/roi_align/crop_and_resize.py:
--------------------------------------------------------------------------------
1 | import math
2 | import torch
3 | import torch.nn as nn
4 | import torch.nn.functional as F
5 | from torch.autograd import Function
6 |
7 | from ._ext import crop_and_resize as _backend
8 |
9 |
10 | class CropAndResizeFunction(Function):
11 |
12 | def __init__(self, crop_height, crop_width, extrapolation_value=0):
13 | self.crop_height = crop_height
14 | self.crop_width = crop_width
15 | self.extrapolation_value = extrapolation_value
16 |
17 | def forward(self, image, boxes, box_ind):
18 | crops = torch.zeros_like(image)
19 |
20 | if image.is_cuda:
21 | _backend.crop_and_resize_gpu_forward(
22 | image, boxes, box_ind,
23 | self.extrapolation_value, self.crop_height, self.crop_width, crops)
24 | else:
25 | _backend.crop_and_resize_forward(
26 | image, boxes, box_ind,
27 | self.extrapolation_value, self.crop_height, self.crop_width, crops)
28 |
29 | # save for backward
30 | self.im_size = image.size()
31 | self.save_for_backward(boxes, box_ind)
32 |
33 | return crops
34 |
35 | def backward(self, grad_outputs):
36 | boxes, box_ind = self.saved_tensors
37 |
38 | grad_outputs = grad_outputs.contiguous()
39 | grad_image = torch.zeros_like(grad_outputs).resize_(*self.im_size)
40 |
41 | if grad_outputs.is_cuda:
42 | _backend.crop_and_resize_gpu_backward(
43 | grad_outputs, boxes, box_ind, grad_image
44 | )
45 | else:
46 | _backend.crop_and_resize_backward(
47 | grad_outputs, boxes, box_ind, grad_image
48 | )
49 |
50 | return grad_image, None, None
51 |
52 |
53 | class CropAndResize(nn.Module):
54 | """
55 | Crop and resize ported from tensorflow
56 | See more details on https://www.tensorflow.org/api_docs/python/tf/image/crop_and_resize
57 | """
58 |
59 | def __init__(self, crop_height, crop_width, extrapolation_value=0):
60 | super(CropAndResize, self).__init__()
61 |
62 | self.crop_height = crop_height
63 | self.crop_width = crop_width
64 | self.extrapolation_value = extrapolation_value
65 |
66 | def forward(self, image, boxes, box_ind):
67 | return CropAndResizeFunction(self.crop_height, self.crop_width, self.extrapolation_value)(image, boxes, box_ind)
68 |
--------------------------------------------------------------------------------
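
A minimal sketch of calling `CropAndResizeFunction` directly, assuming the `_ext` backend has been built and an old-style-autograd PyTorch version (the class uses the legacy `Function` API with a non-static `forward`). Boxes here are normalized `(y1, x1, y2, x2)`, which is what the backend expects:

```python
import torch
from roialign.roi_align.crop_and_resize import CropAndResizeFunction

image = torch.randn(1, 3, 32, 32)                 # NxCxHxW feature map
boxes = torch.tensor([[0.10, 0.10, 0.60, 0.60]])  # normalized (y1, x1, y2, x2)
box_ind = torch.tensor([0], dtype=torch.int32)    # batch index of each box

crops = CropAndResizeFunction(7, 7)(image, boxes, box_ind)
print(crops.shape)  # torch.Size([1, 3, 7, 7])
```
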
/roialign/roi_align/roi_align.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torch import nn
3 |
4 | from .crop_and_resize import CropAndResizeFunction, CropAndResize
5 |
6 |
7 | class RoIAlign(nn.Module):
8 |
9 | def __init__(self, crop_height, crop_width, extrapolation_value=0, transform_fpcoor=True):
10 | super(RoIAlign, self).__init__()
11 |
12 | self.crop_height = crop_height
13 | self.crop_width = crop_width
14 | self.extrapolation_value = extrapolation_value
15 | self.transform_fpcoor = transform_fpcoor
16 |
17 | def forward(self, featuremap, boxes, box_ind):
18 | """
19 | RoIAlign based on crop_and_resize.
20 | See more details on https://github.com/ppwwyyxx/tensorpack/blob/6d5ba6a970710eaaa14b89d24aace179eb8ee1af/examples/FasterRCNN/model.py#L301
21 | :param featuremap: NxCxHxW
22 | :param boxes: Mx4 float box with (x1, y1, x2, y2) **without normalization**
23 | :param box_ind: M
24 | :return: MxCxoHxoW
25 | """
26 | x1, y1, x2, y2 = torch.split(boxes, 1, dim=1)
27 | image_height, image_width = featuremap.size()[2:4]
28 |
29 | if self.transform_fpcoor:
30 | spacing_w = (x2 - x1) / float(self.crop_width)
31 | spacing_h = (y2 - y1) / float(self.crop_height)
32 |
33 | nx0 = (x1 + spacing_w / 2 - 0.5) / float(image_width - 1)
34 | ny0 = (y1 + spacing_h / 2 - 0.5) / float(image_height - 1)
35 | nw = spacing_w * float(self.crop_width - 1) / float(image_width - 1)
36 | nh = spacing_h * float(self.crop_height - 1) / float(image_height - 1)
37 |
38 | boxes = torch.cat((ny0, nx0, ny0 + nh, nx0 + nw), 1)
39 | else:
40 | x1 = x1 / float(image_width - 1)
41 | x2 = x2 / float(image_width - 1)
42 | y1 = y1 / float(image_height - 1)
43 | y2 = y2 / float(image_height - 1)
44 | boxes = torch.cat((y1, x1, y2, x2), 1)
45 |
46 | boxes = boxes.detach().contiguous()
47 | box_ind = box_ind.detach()
48 | return CropAndResizeFunction(self.crop_height, self.crop_width, self.extrapolation_value)(featuremap, boxes, box_ind)
49 |
--------------------------------------------------------------------------------
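
Usage sketch for the module above, under the same assumptions (built `_ext` backend, legacy-autograd PyTorch). Unlike `CropAndResize`, `RoIAlign` takes unnormalized `(x1, y1, x2, y2)` pixel boxes and normalizes them internally:

```python
import torch
from roialign.roi_align.roi_align import RoIAlign

roi_align = RoIAlign(crop_height=7, crop_width=7)   # transform_fpcoor=True

featuremap = torch.randn(1, 256, 64, 64)            # NxCxHxW
boxes = torch.tensor([[4.0, 4.0, 36.0, 36.0]])      # (x1, y1, x2, y2) in pixels
box_ind = torch.tensor([0], dtype=torch.int32)      # which image each box crops

pooled = roi_align(featuremap, boxes, box_ind)      # -> [1, 256, 7, 7]
print(pooled.shape)
```
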
/roialign/roi_align/src/crop_and_resize.c:
--------------------------------------------------------------------------------
1 | #include <TH/TH.h>
2 | #include <stdio.h>
3 | #include <math.h>
4 |
5 |
6 | void CropAndResizePerBox(
7 | const float * image_data,
8 | const int batch_size,
9 | const int depth,
10 | const int image_height,
11 | const int image_width,
12 |
13 | const float * boxes_data,
14 | const int * box_index_data,
15 | const int start_box,
16 | const int limit_box,
17 |
18 | float * corps_data,
19 | const int crop_height,
20 | const int crop_width,
21 | const float extrapolation_value
22 | ) {
23 | const int image_channel_elements = image_height * image_width;
24 | const int image_elements = depth * image_channel_elements;
25 |
26 | const int channel_elements = crop_height * crop_width;
27 | const int crop_elements = depth * channel_elements;
28 |
29 | int b;
30 | #pragma omp parallel for
31 | for (b = start_box; b < limit_box; ++b) {
32 | const float * box = boxes_data + b * 4;
33 | const float y1 = box[0];
34 | const float x1 = box[1];
35 | const float y2 = box[2];
36 | const float x2 = box[3];
37 |
38 | const int b_in = box_index_data[b];
39 | if (b_in < 0 || b_in >= batch_size) {
40 | printf("Error: batch_index %d out of range [0, %d)\n", b_in, batch_size);
41 | exit(-1);
42 | }
43 |
44 | const float height_scale =
45 | (crop_height > 1)
46 | ? (y2 - y1) * (image_height - 1) / (crop_height - 1)
47 | : 0;
48 | const float width_scale =
49 | (crop_width > 1) ? (x2 - x1) * (image_width - 1) / (crop_width - 1)
50 | : 0;
51 |
52 | for (int y = 0; y < crop_height; ++y)
53 | {
54 | const float in_y = (crop_height > 1)
55 | ? y1 * (image_height - 1) + y * height_scale
56 | : 0.5 * (y1 + y2) * (image_height - 1);
57 |
58 | if (in_y < 0 || in_y > image_height - 1)
59 | {
60 | for (int x = 0; x < crop_width; ++x)
61 | {
62 | for (int d = 0; d < depth; ++d)
63 | {
64 | // crops(b, y, x, d) = extrapolation_value;
65 | corps_data[crop_elements * b + channel_elements * d + y * crop_width + x] = extrapolation_value;
66 | }
67 | }
68 | continue;
69 | }
70 |
71 | const int top_y_index = floorf(in_y);
72 | const int bottom_y_index = ceilf(in_y);
73 | const float y_lerp = in_y - top_y_index;
74 |
75 | for (int x = 0; x < crop_width; ++x)
76 | {
77 | const float in_x = (crop_width > 1)
78 | ? x1 * (image_width - 1) + x * width_scale
79 | : 0.5 * (x1 + x2) * (image_width - 1);
80 | if (in_x < 0 || in_x > image_width - 1)
81 | {
82 | for (int d = 0; d < depth; ++d)
83 | {
84 | corps_data[crop_elements * b + channel_elements * d + y * crop_width + x] = extrapolation_value;
85 | }
86 | continue;
87 | }
88 |
89 | const int left_x_index = floorf(in_x);
90 | const int right_x_index = ceilf(in_x);
91 | const float x_lerp = in_x - left_x_index;
92 |
93 | for (int d = 0; d < depth; ++d)
94 | {
95 | const float *pimage = image_data + b_in * image_elements + d * image_channel_elements;
96 |
97 | const float top_left = pimage[top_y_index * image_width + left_x_index];
98 | const float top_right = pimage[top_y_index * image_width + right_x_index];
99 | const float bottom_left = pimage[bottom_y_index * image_width + left_x_index];
100 | const float bottom_right = pimage[bottom_y_index * image_width + right_x_index];
101 |
102 | const float top = top_left + (top_right - top_left) * x_lerp;
103 | const float bottom =
104 | bottom_left + (bottom_right - bottom_left) * x_lerp;
105 |
106 | corps_data[crop_elements * b + channel_elements * d + y * crop_width + x] = top + (bottom - top) * y_lerp;
107 | }
108 | } // end for x
109 | } // end for y
110 | } // end for b
111 |
112 | }
113 |
114 |
115 | void crop_and_resize_forward(
116 | THFloatTensor * image,
117 | THFloatTensor * boxes, // [y1, x1, y2, x2]
118 | THIntTensor * box_index, // range in [0, batch_size)
119 | const float extrapolation_value,
120 | const int crop_height,
121 | const int crop_width,
122 | THFloatTensor * crops
123 | ) {
124 | const int batch_size = image->size[0];
125 | const int depth = image->size[1];
126 | const int image_height = image->size[2];
127 | const int image_width = image->size[3];
128 |
129 | const int num_boxes = boxes->size[0];
130 |
131 | // init output space
132 | THFloatTensor_resize4d(crops, num_boxes, depth, crop_height, crop_width);
133 | THFloatTensor_zero(crops);
134 |
135 | // crop_and_resize for each box
136 | CropAndResizePerBox(
137 | THFloatTensor_data(image),
138 | batch_size,
139 | depth,
140 | image_height,
141 | image_width,
142 |
143 | THFloatTensor_data(boxes),
144 | THIntTensor_data(box_index),
145 | 0,
146 | num_boxes,
147 |
148 | THFloatTensor_data(crops),
149 | crop_height,
150 | crop_width,
151 | extrapolation_value
152 | );
153 |
154 | }
155 |
156 |
157 | void crop_and_resize_backward(
158 | THFloatTensor * grads,
159 | THFloatTensor * boxes, // [y1, x1, y2, x2]
160 | THIntTensor * box_index, // range in [0, batch_size)
161 | THFloatTensor * grads_image // resize to [bsize, c, hc, wc]
162 | )
163 | {
164 | // shape
165 | const int batch_size = grads_image->size[0];
166 | const int depth = grads_image->size[1];
167 | const int image_height = grads_image->size[2];
168 | const int image_width = grads_image->size[3];
169 |
170 | const int num_boxes = grads->size[0];
171 | const int crop_height = grads->size[2];
172 | const int crop_width = grads->size[3];
173 |
174 | // n_elements
175 | const int image_channel_elements = image_height * image_width;
176 | const int image_elements = depth * image_channel_elements;
177 |
178 | const int channel_elements = crop_height * crop_width;
179 | const int crop_elements = depth * channel_elements;
180 |
181 | // init output space
182 | THFloatTensor_zero(grads_image);
183 |
184 | // data pointer
185 | const float * grads_data = THFloatTensor_data(grads);
186 | const float * boxes_data = THFloatTensor_data(boxes);
187 | const int * box_index_data = THIntTensor_data(box_index);
188 | float * grads_image_data = THFloatTensor_data(grads_image);
189 |
190 | for (int b = 0; b < num_boxes; ++b) {
191 | const float * box = boxes_data + b * 4;
192 | const float y1 = box[0];
193 | const float x1 = box[1];
194 | const float y2 = box[2];
195 | const float x2 = box[3];
196 |
197 | const int b_in = box_index_data[b];
198 | if (b_in < 0 || b_in >= batch_size) {
199 | printf("Error: batch_index %d out of range [0, %d)\n", b_in, batch_size);
200 | exit(-1);
201 | }
202 |
203 | const float height_scale =
204 | (crop_height > 1) ? (y2 - y1) * (image_height - 1) / (crop_height - 1)
205 | : 0;
206 | const float width_scale =
207 | (crop_width > 1) ? (x2 - x1) * (image_width - 1) / (crop_width - 1)
208 | : 0;
209 |
210 | for (int y = 0; y < crop_height; ++y)
211 | {
212 | const float in_y = (crop_height > 1)
213 | ? y1 * (image_height - 1) + y * height_scale
214 | : 0.5 * (y1 + y2) * (image_height - 1);
215 | if (in_y < 0 || in_y > image_height - 1)
216 | {
217 | continue;
218 | }
219 | const int top_y_index = floorf(in_y);
220 | const int bottom_y_index = ceilf(in_y);
221 | const float y_lerp = in_y - top_y_index;
222 |
223 | for (int x = 0; x < crop_width; ++x)
224 | {
225 | const float in_x = (crop_width > 1)
226 | ? x1 * (image_width - 1) + x * width_scale
227 | : 0.5 * (x1 + x2) * (image_width - 1);
228 | if (in_x < 0 || in_x > image_width - 1)
229 | {
230 | continue;
231 | }
232 | const int left_x_index = floorf(in_x);
233 | const int right_x_index = ceilf(in_x);
234 | const float x_lerp = in_x - left_x_index;
235 |
236 | for (int d = 0; d < depth; ++d)
237 | {
238 | float *pimage = grads_image_data + b_in * image_elements + d * image_channel_elements;
239 | const float grad_val = grads_data[crop_elements * b + channel_elements * d + y * crop_width + x];
240 |
241 | const float dtop = (1 - y_lerp) * grad_val;
242 | pimage[top_y_index * image_width + left_x_index] += (1 - x_lerp) * dtop;
243 | pimage[top_y_index * image_width + right_x_index] += x_lerp * dtop;
244 |
245 | const float dbottom = y_lerp * grad_val;
246 | pimage[bottom_y_index * image_width + left_x_index] += (1 - x_lerp) * dbottom;
247 | pimage[bottom_y_index * image_width + right_x_index] += x_lerp * dbottom;
248 | } // end d
249 | } // end x
250 | } // end y
251 | } // end b
252 | }
--------------------------------------------------------------------------------
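
The inner loop above is plain bilinear interpolation. A minimal NumPy rendering of the per-pixel blend (the same top/bottom/left/right lerp that `CropAndResizePerBox` performs for each output pixel):

```python
import numpy as np

def bilinear_sample(channel, in_y, in_x):
    """Sample channel[in_y, in_x] with bilinear interpolation."""
    top, left = int(np.floor(in_y)), int(np.floor(in_x))
    bottom, right = int(np.ceil(in_y)), int(np.ceil(in_x))
    y_lerp, x_lerp = in_y - top, in_x - left
    top_val = channel[top, left] + (channel[top, right] - channel[top, left]) * x_lerp
    bot_val = channel[bottom, left] + (channel[bottom, right] - channel[bottom, left]) * x_lerp
    return top_val + (bot_val - top_val) * y_lerp

img = np.arange(16, dtype=np.float32).reshape(4, 4)  # img[y, x] = 4*y + x
print(bilinear_sample(img, 1.5, 2.25))               # 8.25
```
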
/roialign/roi_align/src/crop_and_resize.h:
--------------------------------------------------------------------------------
1 | void crop_and_resize_forward(
2 | THFloatTensor * image,
3 | THFloatTensor * boxes, // [y1, x1, y2, x2]
4 | THIntTensor * box_index, // range in [0, batch_size)
5 | const float extrapolation_value,
6 | const int crop_height,
7 | const int crop_width,
8 | THFloatTensor * crops
9 | );
10 |
11 | void crop_and_resize_backward(
12 | THFloatTensor * grads,
13 | THFloatTensor * boxes, // [y1, x1, y2, x2]
14 | THIntTensor * box_index, // range in [0, batch_size)
15 | THFloatTensor * grads_image // resize to [bsize, c, hc, wc]
16 | );
--------------------------------------------------------------------------------
/roialign/roi_align/src/crop_and_resize_gpu.c:
--------------------------------------------------------------------------------
1 | #include <THC/THC.h>
2 | #include "cuda/crop_and_resize_kernel.h"
3 |
4 | extern THCState *state;
5 |
6 |
7 | void crop_and_resize_gpu_forward(
8 | THCudaTensor * image,
9 | THCudaTensor * boxes, // [y1, x1, y2, x2]
10 | THCudaIntTensor * box_index, // range in [0, batch_size)
11 | const float extrapolation_value,
12 | const int crop_height,
13 | const int crop_width,
14 | THCudaTensor * crops
15 | ) {
16 | const int batch_size = THCudaTensor_size(state, image, 0);
17 | const int depth = THCudaTensor_size(state, image, 1);
18 | const int image_height = THCudaTensor_size(state, image, 2);
19 | const int image_width = THCudaTensor_size(state, image, 3);
20 |
21 | const int num_boxes = THCudaTensor_size(state, boxes, 0);
22 |
23 | // init output space
24 | THCudaTensor_resize4d(state, crops, num_boxes, depth, crop_height, crop_width);
25 | THCudaTensor_zero(state, crops);
26 |
27 | cudaStream_t stream = THCState_getCurrentStream(state);
28 | CropAndResizeLaucher(
29 | THCudaTensor_data(state, image),
30 | THCudaTensor_data(state, boxes),
31 | THCudaIntTensor_data(state, box_index),
32 | num_boxes, batch_size, image_height, image_width,
33 | crop_height, crop_width, depth, extrapolation_value,
34 | THCudaTensor_data(state, crops),
35 | stream
36 | );
37 | }
38 |
39 |
40 | void crop_and_resize_gpu_backward(
41 | THCudaTensor * grads,
42 | THCudaTensor * boxes, // [y1, x1, y2, x2]
43 | THCudaIntTensor * box_index, // range in [0, batch_size)
44 | THCudaTensor * grads_image // resize to [bsize, c, hc, wc]
45 | ) {
46 | // shape
47 | const int batch_size = THCudaTensor_size(state, grads_image, 0);
48 | const int depth = THCudaTensor_size(state, grads_image, 1);
49 | const int image_height = THCudaTensor_size(state, grads_image, 2);
50 | const int image_width = THCudaTensor_size(state, grads_image, 3);
51 |
52 | const int num_boxes = THCudaTensor_size(state, grads, 0);
53 | const int crop_height = THCudaTensor_size(state, grads, 2);
54 | const int crop_width = THCudaTensor_size(state, grads, 3);
55 |
56 | // init output space
57 | THCudaTensor_zero(state, grads_image);
58 |
59 | cudaStream_t stream = THCState_getCurrentStream(state);
60 | CropAndResizeBackpropImageLaucher(
61 | THCudaTensor_data(state, grads),
62 | THCudaTensor_data(state, boxes),
63 | THCudaIntTensor_data(state, box_index),
64 | num_boxes, batch_size, image_height, image_width,
65 | crop_height, crop_width, depth,
66 | THCudaTensor_data(state, grads_image),
67 | stream
68 | );
69 | }
--------------------------------------------------------------------------------
/roialign/roi_align/src/crop_and_resize_gpu.h:
--------------------------------------------------------------------------------
1 | void crop_and_resize_gpu_forward(
2 | THCudaTensor * image,
3 | THCudaTensor * boxes, // [y1, x1, y2, x2]
4 | THCudaIntTensor * box_index, // range in [0, batch_size)
5 | const float extrapolation_value,
6 | const int crop_height,
7 | const int crop_width,
8 | THCudaTensor * crops
9 | );
10 |
11 | void crop_and_resize_gpu_backward(
12 | THCudaTensor * grads,
13 | THCudaTensor * boxes, // [y1, x1, y2, x2]
14 | THCudaIntTensor * box_index, // range in [0, batch_size)
15 | THCudaTensor * grads_image // resize to [bsize, c, hc, wc]
16 | );
--------------------------------------------------------------------------------
/roialign/roi_align/src/cuda/crop_and_resize_kernel.cu:
--------------------------------------------------------------------------------
1 | #include <math.h>
2 | #include <stdio.h>
3 | #include "crop_and_resize_kernel.h"
4 |
5 | #define CUDA_1D_KERNEL_LOOP(i, n) \
6 | for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; \
7 | i += blockDim.x * gridDim.x)
8 |
9 |
10 | __global__
11 | void CropAndResizeKernel(
12 | const int nthreads, const float *image_ptr, const float *boxes_ptr,
13 | const int *box_ind_ptr, int num_boxes, int batch, int image_height,
14 | int image_width, int crop_height, int crop_width, int depth,
15 | float extrapolation_value, float *crops_ptr)
16 | {
17 | CUDA_1D_KERNEL_LOOP(out_idx, nthreads)
18 | {
19 | // NHWC: out_idx = d + depth * (w + crop_width * (h + crop_height * b))
20 | // NCHW: out_idx = w + crop_width * (h + crop_height * (d + depth * b))
21 | int idx = out_idx;
22 | const int x = idx % crop_width;
23 | idx /= crop_width;
24 | const int y = idx % crop_height;
25 | idx /= crop_height;
26 | const int d = idx % depth;
27 | const int b = idx / depth;
28 |
29 | const float y1 = boxes_ptr[b * 4];
30 | const float x1 = boxes_ptr[b * 4 + 1];
31 | const float y2 = boxes_ptr[b * 4 + 2];
32 | const float x2 = boxes_ptr[b * 4 + 3];
33 |
34 | const int b_in = box_ind_ptr[b];
35 | if (b_in < 0 || b_in >= batch)
36 | {
37 | continue;
38 | }
39 |
40 | const float height_scale =
41 | (crop_height > 1) ? (y2 - y1) * (image_height - 1) / (crop_height - 1)
42 | : 0;
43 | const float width_scale =
44 | (crop_width > 1) ? (x2 - x1) * (image_width - 1) / (crop_width - 1) : 0;
45 |
46 | const float in_y = (crop_height > 1)
47 | ? y1 * (image_height - 1) + y * height_scale
48 | : 0.5 * (y1 + y2) * (image_height - 1);
49 | if (in_y < 0 || in_y > image_height - 1)
50 | {
51 | crops_ptr[out_idx] = extrapolation_value;
52 | continue;
53 | }
54 |
55 | const float in_x = (crop_width > 1)
56 | ? x1 * (image_width - 1) + x * width_scale
57 | : 0.5 * (x1 + x2) * (image_width - 1);
58 | if (in_x < 0 || in_x > image_width - 1)
59 | {
60 | crops_ptr[out_idx] = extrapolation_value;
61 | continue;
62 | }
63 |
64 | const int top_y_index = floorf(in_y);
65 | const int bottom_y_index = ceilf(in_y);
66 | const float y_lerp = in_y - top_y_index;
67 |
68 | const int left_x_index = floorf(in_x);
69 | const int right_x_index = ceilf(in_x);
70 | const float x_lerp = in_x - left_x_index;
71 |
72 | const float *pimage = image_ptr + (b_in * depth + d) * image_height * image_width;
73 | const float top_left = pimage[top_y_index * image_width + left_x_index];
74 | const float top_right = pimage[top_y_index * image_width + right_x_index];
75 | const float bottom_left = pimage[bottom_y_index * image_width + left_x_index];
76 | const float bottom_right = pimage[bottom_y_index * image_width + right_x_index];
77 |
78 | const float top = top_left + (top_right - top_left) * x_lerp;
79 | const float bottom = bottom_left + (bottom_right - bottom_left) * x_lerp;
80 | crops_ptr[out_idx] = top + (bottom - top) * y_lerp;
81 | }
82 | }
83 |
84 | __global__
85 | void CropAndResizeBackpropImageKernel(
86 | const int nthreads, const float *grads_ptr, const float *boxes_ptr,
87 | const int *box_ind_ptr, int num_boxes, int batch, int image_height,
88 | int image_width, int crop_height, int crop_width, int depth,
89 | float *grads_image_ptr)
90 | {
91 | CUDA_1D_KERNEL_LOOP(out_idx, nthreads)
92 | {
93 | // NHWC: out_idx = d + depth * (w + crop_width * (h + crop_height * b))
94 | // NCHW: out_idx = w + crop_width * (h + crop_height * (d + depth * b))
95 | int idx = out_idx;
96 | const int x = idx % crop_width;
97 | idx /= crop_width;
98 | const int y = idx % crop_height;
99 | idx /= crop_height;
100 | const int d = idx % depth;
101 | const int b = idx / depth;
102 |
103 | const float y1 = boxes_ptr[b * 4];
104 | const float x1 = boxes_ptr[b * 4 + 1];
105 | const float y2 = boxes_ptr[b * 4 + 2];
106 | const float x2 = boxes_ptr[b * 4 + 3];
107 |
108 | const int b_in = box_ind_ptr[b];
109 | if (b_in < 0 || b_in >= batch)
110 | {
111 | continue;
112 | }
113 |
114 | const float height_scale =
115 | (crop_height > 1) ? (y2 - y1) * (image_height - 1) / (crop_height - 1)
116 | : 0;
117 | const float width_scale =
118 | (crop_width > 1) ? (x2 - x1) * (image_width - 1) / (crop_width - 1) : 0;
119 |
120 | const float in_y = (crop_height > 1)
121 | ? y1 * (image_height - 1) + y * height_scale
122 | : 0.5 * (y1 + y2) * (image_height - 1);
123 | if (in_y < 0 || in_y > image_height - 1)
124 | {
125 | continue;
126 | }
127 |
128 | const float in_x = (crop_width > 1)
129 | ? x1 * (image_width - 1) + x * width_scale
130 | : 0.5 * (x1 + x2) * (image_width - 1);
131 | if (in_x < 0 || in_x > image_width - 1)
132 | {
133 | continue;
134 | }
135 |
136 | const int top_y_index = floorf(in_y);
137 | const int bottom_y_index = ceilf(in_y);
138 | const float y_lerp = in_y - top_y_index;
139 |
140 | const int left_x_index = floorf(in_x);
141 | const int right_x_index = ceilf(in_x);
142 | const float x_lerp = in_x - left_x_index;
143 |
144 | float *pimage = grads_image_ptr + (b_in * depth + d) * image_height * image_width;
145 | const float dtop = (1 - y_lerp) * grads_ptr[out_idx];
146 | atomicAdd(
147 | pimage + top_y_index * image_width + left_x_index,
148 | (1 - x_lerp) * dtop
149 | );
150 | atomicAdd(
151 | pimage + top_y_index * image_width + right_x_index,
152 | x_lerp * dtop
153 | );
154 |
155 | const float dbottom = y_lerp * grads_ptr[out_idx];
156 | atomicAdd(
157 | pimage + bottom_y_index * image_width + left_x_index,
158 | (1 - x_lerp) * dbottom
159 | );
160 | atomicAdd(
161 | pimage + bottom_y_index * image_width + right_x_index,
162 | x_lerp * dbottom
163 | );
164 | }
165 | }
166 |
167 |
168 | void CropAndResizeLaucher(
169 | const float *image_ptr, const float *boxes_ptr,
170 | const int *box_ind_ptr, int num_boxes, int batch, int image_height,
171 | int image_width, int crop_height, int crop_width, int depth,
172 | float extrapolation_value, float *crops_ptr, cudaStream_t stream)
173 | {
174 | const int total_count = num_boxes * crop_height * crop_width * depth;
175 | const int thread_per_block = 1024;
176 | const int block_count = (total_count + thread_per_block - 1) / thread_per_block;
177 | cudaError_t err;
178 |
179 | if (total_count > 0)
180 | {
181 |         CropAndResizeKernel<<<block_count, thread_per_block, 0, stream>>>(
182 | total_count, image_ptr, boxes_ptr,
183 | box_ind_ptr, num_boxes, batch, image_height, image_width,
184 | crop_height, crop_width, depth, extrapolation_value, crops_ptr);
185 |
186 | err = cudaGetLastError();
187 | if (cudaSuccess != err)
188 | {
189 | fprintf(stderr, "cudaCheckError() failed : %s\n", cudaGetErrorString(err));
190 | exit(-1);
191 | }
192 | }
193 | }
194 |
195 |
196 | void CropAndResizeBackpropImageLaucher(
197 | const float *grads_ptr, const float *boxes_ptr,
198 | const int *box_ind_ptr, int num_boxes, int batch, int image_height,
199 | int image_width, int crop_height, int crop_width, int depth,
200 | float *grads_image_ptr, cudaStream_t stream)
201 | {
202 | const int total_count = num_boxes * crop_height * crop_width * depth;
203 | const int thread_per_block = 1024;
204 | const int block_count = (total_count + thread_per_block - 1) / thread_per_block;
205 | cudaError_t err;
206 |
207 | if (total_count > 0)
208 | {
209 |         CropAndResizeBackpropImageKernel<<<block_count, thread_per_block, 0, stream>>>(
210 | total_count, grads_ptr, boxes_ptr,
211 | box_ind_ptr, num_boxes, batch, image_height, image_width,
212 | crop_height, crop_width, depth, grads_image_ptr);
213 |
214 | err = cudaGetLastError();
215 | if (cudaSuccess != err)
216 | {
217 | fprintf(stderr, "cudaCheckError() failed : %s\n", cudaGetErrorString(err));
218 | exit(-1);
219 | }
220 | }
221 | }
--------------------------------------------------------------------------------
/roialign/roi_align/src/cuda/crop_and_resize_kernel.h:
--------------------------------------------------------------------------------
1 | #ifndef _CropAndResize_Kernel
2 | #define _CropAndResize_Kernel
3 |
4 | #ifdef __cplusplus
5 | extern "C" {
6 | #endif
7 |
8 | void CropAndResizeLaucher(
9 | const float *image_ptr, const float *boxes_ptr,
10 | const int *box_ind_ptr, int num_boxes, int batch, int image_height,
11 | int image_width, int crop_height, int crop_width, int depth,
12 | float extrapolation_value, float *crops_ptr, cudaStream_t stream);
13 |
14 | void CropAndResizeBackpropImageLaucher(
15 | const float *grads_ptr, const float *boxes_ptr,
16 | const int *box_ind_ptr, int num_boxes, int batch, int image_height,
17 | int image_width, int crop_height, int crop_width, int depth,
18 | float *grads_image_ptr, cudaStream_t stream);
19 |
20 | #ifdef __cplusplus
21 | }
22 | #endif
23 |
24 | #endif
--------------------------------------------------------------------------------
/synthia.py:
--------------------------------------------------------------------------------
1 | """
2 | Mask R-CNN
3 | Configurations and data loading code for the SYNTHIA dataset.
4 |
5 | Copyright (c) 2017 Matterport, Inc.
6 | Licensed under the MIT License (see LICENSE for details)
7 | Written by Waleed Abdulla
8 |
9 | ------------------------------------------------------------
10 |
11 | Usage: run from the command line as such:
12 |
13 | # Train a new model starting from pre-trained COCO weights
14 | python synthia.py train --dataset=/path/to/synthia/ --model=coco
15 |
16 | # Train a new model starting from ImageNet weights
17 | python synthia.py train --dataset=/path/to/synthia/ --model=imagenet
18 |
19 | # Continue training a model that you had trained earlier
20 | python synthia.py train --dataset=/path/to/synthia/ --model=/path/to/weights.pth
21 |
22 | # Continue training the last model you trained
23 | python synthia.py train --dataset=/path/to/synthia/ --model=last
24 |
25 | # Run evaluation on the last model you trained
26 | python synthia.py evaluate --dataset=/path/to/synthia/ --model=last
27 | """
28 |
29 | import os
30 | import sys
31 | import random
32 | import math
33 | import re
34 | import time
35 | import json
36 | import numpy as np
37 | import cv2
38 | import matplotlib
39 | import matplotlib.pyplot as plt
40 | import skimage.color
41 | import skimage.io
42 | import skimage.transform
43 |
44 |
45 | import zipfile
46 | import urllib.request
47 | import shutil
48 |
49 | from config import Config
50 | import utils
51 | import model as modellib
52 |
53 | import torch
54 |
55 | # Root directory of the project
56 | ROOT_DIR = os.getcwd()
57 |
58 | # Path to trained weights file
59 | COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.pth")
60 |
61 | # Directory to save logs and model checkpoints, if not provided
62 | # through the command line argument --logs
63 | DEFAULT_LOGS_DIR = os.path.join(ROOT_DIR, "logs")
64 |
65 | ############################################################
66 | # Configurations
67 | ############################################################
68 |
69 | class synthiaConfig(Config):
70 |     """Configuration for training on the SYNTHIA dataset.
71 |     Derives from the base Config class and overrides values specific
72 |     to SYNTHIA.
73 |     """
74 | # Give the configuration a recognizable name
75 | NAME = "synthia"
76 |
77 | # Number of classes (including background)
78 |     NUM_CLASSES = 1 + 22  # background + 22 classes
79 |
80 | IMAGE_MIN_DIM = 760
81 | IMAGE_MAX_DIM = 1280
82 |
83 |
84 | ############################################################
85 | # Dataset
86 | ############################################################
87 |
88 | class synthiaDataset(utils.Dataset):
89 |     """Loads the SYNTHIA street-scene dataset.
90 |     Image ids are read from train.txt / test.txt and instance masks are
91 |     derived from the corresponding GT label images.
92 |     """
93 |
94 | def load_synthia(self, dataset_dir,subset):
95 |         """Load a subset of the SYNTHIA dataset.
96 |         dataset_dir: root directory of the dataset images.
97 |         subset: "train" or "test".
98 |         """
99 | # Add classes
100 | self.add_class("synthia", 1, "sky")
101 | self.add_class("synthia", 2, "Building")
102 | self.add_class("synthia", 3, "Road")
103 | self.add_class("synthia", 4, "Sidewalk")
104 | self.add_class("synthia", 5, "Fence")
105 | self.add_class("synthia", 6, "Vegetation")
106 | self.add_class("synthia", 7, "Pole")
107 | self.add_class("synthia", 8, "Car")
108 | self.add_class("synthia", 9, "Traffic sign")
109 | self.add_class("synthia", 10, "Pedestrian")
110 | self.add_class("synthia", 11, "Bicycle")
111 | self.add_class("synthia", 12, "Motorcycle")
112 | self.add_class("synthia", 13, "Parking-slot")
113 | self.add_class("synthia", 14, "Road-work")
114 | self.add_class("synthia", 15, "Traffic light")
115 | self.add_class("synthia", 16, "Terrain")
116 | self.add_class("synthia", 17, "Rider")
117 | self.add_class("synthia", 18, "Truck")
118 | self.add_class("synthia", 19, "Bus")
119 | self.add_class("synthia", 20, "Train")
120 | self.add_class("synthia", 21, "Wall")
121 | self.add_class("synthia", 22, "Lanemarking")
122 |
123 | if subset == "test":
124 | fname="test.txt"
125 | else:
126 | fname="train.txt"
127 |
128 | # obtain the image ids
129 | with open(fname) as f:
130 | content = f.readlines()
131 | image_ids = [x.strip() for x in content]
132 |
133 | for image_id in image_ids:
134 | if int(image_id)<201:
135 | Path=os.path.join(dataset_dir, "val","{}.png".format(image_id))
136 | self.add_image(
137 | "synthia",
138 | image_id=image_id,
139 | path=Path)
140 | else:
141 | Path=os.path.join(dataset_dir, "train","{}.png".format(image_id))
142 | self.add_image(
143 | "synthia",
144 | image_id=image_id,
145 | path=Path)
146 |
147 | def load_image(self, image_id):
148 | """Load the specified image and return a [H,W,4] Numpy array.
149 | """
150 | # Load image
151 | imgPath = self.image_info[image_id]['path']
152 | img=skimage.io.imread(imgPath)
153 | return img
154 |
155 | def image_reference(self, image_id):
156 |         """Return the SYNTHIA data of the image."""
157 | info = self.image_info[image_id]
158 | if info["source"] == "synthia":
159 | return info["synthia"]
160 | else:
161 |             super(synthiaDataset, self).image_reference(image_id)
162 |
163 | def load_mask(self, image_id):
164 |         """Generate instance masks for the given image ID.
165 |         """
166 | info = self.image_info[image_id]
167 | path=info['path']
168 | mpath=path.replace("RGB","GT")
169 | label=cv2.imread(mpath,cv2.IMREAD_UNCHANGED)
170 | raw_mask=label[:,:,1]
171 | number=np.unique(raw_mask)
172 | number=number[1:]
173 | # you should change the mask shape according to the image shape
174 | mask = np.zeros([760, 1280, len(number)],dtype=np.uint8)
175 | class_ids=np.zeros([len(number)],dtype=np.uint32)
176 | for i,p in enumerate(number):
177 | location=np.argwhere(raw_mask==p)
178 | mask[location[:,0], location[:,1], i] = 1
179 | class_ids[i]=label[location[0,0],location[0,1],2]
180 | # mask = [m for m in mask if set(np.unique(m).flatten()) != {0}]
181 | return mask.astype(np.bool), class_ids.astype(np.int32)
182 |
183 | ############################################################
184 | # Training
185 | ############################################################
186 |
187 | if __name__ == '__main__':
188 | import argparse
189 |
190 | # Parse command line arguments
191 | parser = argparse.ArgumentParser(
192 | description='Train Mask R-CNN on Synthia.')
193 | parser.add_argument("command",
194 |                         metavar="<command>",
195 | help="'train' or 'evaluate' on Synthia")
196 | parser.add_argument('--dataset', required=False,
197 | default="/mnt/backup/jianyuan/synthia/RAND_CITYSCAPES/RGB",
198 |                         metavar="/path/to/synthia/",
199 | help='Directory of the Synthia dataset')
200 | parser.add_argument('--model', required=False,
201 | metavar="/path/to/weights.pth",
202 | help="Path to weights .pth file ")
203 | parser.add_argument('--logs', required=False,
204 | default=DEFAULT_LOGS_DIR,
205 | metavar="/mnt/backup/jianyuan/pytorch-mask-rcnn/logs",
206 | help='Logs and checkpoints directory (default=logs/)')
207 | parser.add_argument('--lr', required=False,
208 | default=0.001,
209 | # metavar="/mnt/backup/jianyuan/pytorch-mask-rcnn/logs",
210 | help='Learning rate')
211 | parser.add_argument('--batchsize', required=False,
212 | default=4,
213 | help='Batch size')
214 | parser.add_argument('--steps', required=False,
215 | default=200,
216 | help='steps per epoch')
217 | parser.add_argument('--device', required=False,
218 | default="gpu",
219 | help='gpu or cpu')
220 | args = parser.parse_args()
221 |
222 | # Configurations
223 | if args.command == "train":
224 | config = synthiaConfig()
225 | else:
226 | class InferenceConfig(synthiaConfig):
227 | # Set batch size to 1 since we'll be running inference on
228 | # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
229 | GPU_COUNT = 1
230 | IMAGES_PER_GPU = 1
231 | DETECTION_MIN_CONFIDENCE = 0
232 | config = InferenceConfig()
233 | inference_config = InferenceConfig()
234 | config.display()
235 |
236 | # Select Device
237 | if args.device == "gpu":
238 | device = torch.device("cuda")
239 | else:
240 | device = torch.device("cpu")
241 |
242 | # Create model
243 | if args.command == "train":
244 | model = modellib.MaskRCNN(config=config,
245 | model_dir=args.logs)
246 | else:
247 | model = modellib.MaskRCNN(config=config,
248 | model_dir=args.logs)
249 |
250 | model = model.to(device)
251 |
252 | # Select weights file to load
253 | if args.model:
254 | if args.model.lower() == "coco":
255 | model_path = COCO_MODEL_PATH
256 | # load pre-trained weights from coco or imagenet
257 | model.load_pre_weights(model_path)
258 | elif args.model.lower() == "last":
259 | # Find last trained weights
260 | model_path = model.find_last()[1]
261 | model.load_weights(model_path)
262 | elif args.model.lower() == "imagenet":
263 | # Start from ImageNet trained weights
264 | model_path = config.IMAGENET_MODEL_PATH
265 | # load pre-trained weights from coco or imagenet
266 | model.load_pre_weights(model_path)
267 | else:
268 | model_path = args.model
269 | model.load_weights(model_path)
270 | else:
271 | model_path = ""
272 | model.load_weights(model_path)
273 | #
274 | ## Load weights
275 | print("Loading weights ", model_path)
276 |
277 |
278 | # For multi-GPU training, please uncomment the following part.
279 | # Note that the model will then be wrapped in DataParallel(), so its
280 | # methods have to be reached through model.module,
281 | # e.g. model.train_model --> model.module.train_model
282 | #if torch.cuda.device_count() > 1:
283 | # print("Let's use", torch.cuda.device_count(), "GPUs!")
284 | # # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
285 | # model = torch.nn.DataParallel(model)
286 |
287 |
288 | data_dir=args.dataset
289 | # Training dataset
290 | dataset_train = synthiaDataset()
291 | dataset_train.load_synthia(data_dir,"train")
292 | dataset_train.prepare()
293 |
294 | # Validation dataset
295 | dataset_val = synthiaDataset()
296 | dataset_val.load_synthia(data_dir,"test")
297 | dataset_val.prepare()
298 |
299 | # input parameters
300 | lr=float(args.lr)
301 | batchsize=int(args.batchsize)
302 | steps=int(args.steps)
303 |
304 | # Train or evaluate
305 | if args.command == "train":
306 |
307 |
308 |         print("Training Image Count: {}".format(len(dataset_train.image_ids)))
309 | print("Class Count: {}".format(dataset_train.num_classes))
310 | print("Validation Image Count: {}".format(len(dataset_val.image_ids)))
311 | print("Class Count: {}".format(dataset_val.num_classes))
312 | # *** This training schedule is an example. Update to your needs ***
313 |
314 | # Training - Stage 1
315 | print("Training network heads")
316 | model.train_model(dataset_train, dataset_val,
317 | learning_rate=lr,
318 | epochs=1,
319 | BatchSize=batchsize,
320 | steps=steps,
321 | layers='heads')
322 |
323 | # Training - Stage 2
324 | # Finetune layers from ResNet stage 4 and up
325 | print("Fine tune Resnet stage 4 and up")
326 | model.train_model(dataset_train, dataset_val,
327 | learning_rate=lr/2,
328 | epochs=3,
329 | BatchSize=batchsize,
330 | steps=steps,
331 | layers='4+')
332 |
333 | # Training - Stage 3
334 | # Fine tune all layers
335 | print("Fine tune all layers")
336 | model.train_model(dataset_train, dataset_val,
337 | learning_rate=lr / 10,
338 | epochs=15,
339 | BatchSize=batchsize,
340 | steps=steps,
341 | layers='all')
342 |
343 | elif args.command == "evaluate":
344 | # Validation dataset
345 | image_ids = np.random.choice(dataset_val.image_ids, 1)
346 | model.eval()
347 | APs = []
348 | for image_id in image_ids:
349 | # Load image and ground truth data
350 | image, image_meta, gt_class_id, gt_bbox, gt_mask =\
351 | modellib.load_image_gt(dataset_val, inference_config,
352 | image_id, use_mini_mask=False)
353 | molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0)
354 | # Run object detection
355 | results = model.detect([image],device)
356 | r = results[0]
357 | # Compute AP
358 | AP, precisions, recalls, overlaps =\
359 | utils.compute_ap(gt_bbox, gt_class_id, gt_mask,
360 | r["rois"], r["class_ids"], r["scores"], r['masks'])
361 | APs.append(AP)
362 |
363 | print("mAP: ", np.mean(APs))
364 |
365 |
366 | else:
367 | print("'{}' is not recognized. "
368 | "Use 'train' or 'evaluate'".format(args.command))
369 |
--------------------------------------------------------------------------------
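
`load_mask()` above depends on a specific GT encoding: channel 1 of the label image carries per-pixel instance ids and channel 2 the class id. A small NumPy sketch of that decoding on toy data (the ids here are made up):

```python
import numpy as np

# Toy 4x6 "GT" image holding one instance (id 7) of class 8 ("Car").
label = np.zeros((4, 6, 3), dtype=np.uint8)
label[1:3, 1:3, 1] = 7   # channel 1: instance id
label[1:3, 1:3, 2] = 8   # channel 2: class id

raw_mask = label[:, :, 1]
instance_ids = np.unique(raw_mask)[1:]   # drop the background id 0
masks = np.stack([raw_mask == i for i in instance_ids], axis=-1)
class_ids = np.array([label[raw_mask == i][0, 2] for i in instance_ids])
print(masks.shape, class_ids)            # (4, 6, 1) [8]
```
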
/test.txt:
--------------------------------------------------------------------------------
1 | 0005859
2 | 0000086
3 | 0002226
4 | 0003311
5 | 0003558
6 | 0000021
7 | 0005524
8 | 0002649
9 | 0007285
10 | 0000447
11 | 0007680
12 | 0007979
13 | 0007173
14 | 0007483
15 | 0003648
16 | 0000487
17 | 0006047
18 | 0004681
19 | 0003148
20 | 0009284
21 | 0002919
22 | 0002279
23 | 0004499
24 | 0006669
25 | 0000867
26 | 0005112
27 | 0007339
28 | 0005151
29 | 0003375
30 | 0008358
31 | 0005870
32 | 0004119
33 | 0003335
34 | 0003224
35 | 0002317
36 | 0005406
37 | 0003015
38 | 0007939
39 | 0008150
40 | 0000173
41 | 0002923
42 | 0001759
43 | 0001207
44 | 0001085
45 | 0000152
46 | 0007358
47 | 0003542
48 | 0001316
49 | 0008018
50 | 0007671
51 | 0000906
52 | 0008700
53 | 0004190
54 | 0004407
55 | 0007089
56 | 0004871
57 | 0000671
58 | 0000839
59 | 0006609
60 | 0006642
61 | 0007700
62 | 0008378
63 | 0003829
64 | 0006178
65 | 0002961
66 | 0004692
67 | 0004414
68 | 0003625
69 | 0000858
70 | 0007020
71 | 0004396
72 | 0001828
73 | 0001205
74 | 0006064
75 | 0002911
76 | 0007030
77 | 0008455
78 | 0004851
79 | 0004039
80 | 0001648
81 | 0005962
82 | 0009321
83 | 0000028
84 | 0007001
85 | 0006546
86 | 0003684
87 | 0004553
88 | 0002986
89 | 0009278
90 | 0006798
91 | 0001614
92 | 0000094
93 | 0000790
94 | 0005892
95 | 0008310
96 | 0001665
97 | 0005503
98 | 0005293
99 | 0001597
100 | 0003815
101 | 0007268
102 | 0003834
103 | 0008618
104 | 0005127
105 | 0007378
106 | 0009149
107 | 0002644
108 | 0007864
109 | 0008780
110 | 0001434
111 | 0003599
112 | 0005905
113 | 0003994
114 | 0006539
115 | 0001664
116 | 0007468
117 | 0001916
118 | 0007341
119 | 0000794
120 | 0007169
121 | 0004815
122 | 0003351
123 | 0001408
124 | 0003248
125 | 0005472
126 | 0009377
127 | 0000043
128 | 0009102
129 | 0002343
130 | 0009360
131 | 0002277
132 | 0003462
133 | 0004904
134 | 0004165
135 | 0000411
136 | 0005627
137 | 0001333
138 | 0007381
139 | 0007124
140 | 0003392
141 | 0007556
142 | 0004547
143 | 0002316
144 | 0001689
145 | 0003548
146 | 0008249
147 | 0004713
148 | 0004491
149 | 0003949
150 | 0000523
151 | 0001852
152 | 0002606
153 | 0006366
154 | 0006016
155 | 0003885
156 | 0005401
157 | 0003490
158 | 0007302
159 | 0004117
160 | 0003581
161 | 0007482
162 | 0000401
163 | 0006199
164 | 0004514
165 | 0000890
166 | 0002809
167 | 0004845
168 | 0005233
169 | 0003092
170 | 0003876
171 | 0005419
172 | 0006194
173 | 0005562
174 | 0002070
175 | 0000118
176 | 0005488
177 | 0006784
178 | 0006558
179 | 0007818
180 | 0005035
181 | 0002314
182 | 0000735
183 | 0001384
184 | 0005584
185 | 0004521
186 | 0004096
187 | 0007749
188 | 0002259
189 | 0001383
190 | 0007488
191 | 0002842
192 | 0009376
193 | 0000502
194 | 0002162
195 | 0000167
196 | 0003017
197 | 0002907
198 | 0004655
199 | 0002305
200 | 0000658
201 | 0008465
202 | 0002974
203 | 0008313
204 | 0006309
205 | 0007687
206 | 0007509
207 | 0007519
208 | 0007159
209 | 0007612
210 | 0008394
211 | 0001813
212 | 0005242
213 | 0003877
214 | 0007830
215 | 0007386
216 | 0000694
217 | 0000255
218 | 0005797
219 | 0002046
220 | 0004377
221 | 0008846
222 | 0005107
223 | 0003389
224 | 0007627
225 | 0006529
226 | 0008680
227 | 0008253
228 | 0005087
229 | 0001886
230 | 0000585
231 | 0001865
232 | 0006536
233 | 0003537
234 | 0003767
235 | 0004410
236 | 0005098
237 | 0002821
238 | 0002506
239 | 0008657
240 | 0008574
241 | 0007245
242 | 0002096
243 | 0002373
244 | 0006817
245 | 0006916
246 | 0004256
247 | 0002020
248 | 0004876
249 | 0002405
250 | 0001272
251 | 0001976
252 | 0002181
253 | 0006715
254 | 0004646
255 | 0005598
256 | 0000915
257 | 0003060
258 | 0007481
259 | 0001561
260 | 0001548
261 | 0005454
262 | 0002927
263 | 0004171
264 | 0001071
265 | 0000072
266 | 0002589
267 | 0000458
268 | 0007453
269 | 0000229
270 | 0006456
271 | 0003917
272 | 0009220
273 | 0000829
274 | 0006273
275 | 0003511
276 | 0002764
277 | 0006666
278 | 0001013
279 | 0001463
280 | 0005381
281 | 0003631
282 | 0007654
283 | 0006214
284 | 0007806
285 | 0004254
286 | 0009108
287 | 0008819
288 | 0009242
289 | 0002290
290 | 0000155
291 | 0003287
292 | 0006288
293 | 0007987
294 | 0006355
295 | 0008842
296 | 0006474
297 | 0004124
298 | 0005263
299 | 0004504
300 | 0008280
301 | 0006613
302 | 0008451
303 | 0000973
304 | 0008151
305 | 0002737
306 | 0001888
307 | 0002687
308 | 0001036
309 | 0006140
310 | 0002946
311 | 0004616
312 | 0001949
313 | 0000437
314 | 0007269
315 | 0001110
316 | 0008532
317 | 0008133
318 | 0008948
319 | 0007279
320 | 0005339
321 | 0007669
322 | 0008724
323 | 0002159
324 | 0003105
325 | 0006923
326 | 0006118
327 | 0001963
328 | 0000291
329 | 0001984
330 | 0004105
331 | 0003129
332 | 0000338
333 | 0003669
334 | 0009110
335 | 0003986
336 | 0008304
337 | 0000628
338 | 0004077
339 | 0004908
340 | 0002381
341 | 0002033
342 | 0009243
343 | 0005139
344 | 0003494
345 | 0007116
346 | 0004438
347 | 0002777
348 | 0005133
349 | 0002714
350 | 0004418
351 | 0000792
352 | 0001148
353 | 0004421
354 | 0001891
355 | 0007300
356 | 0002163
357 | 0000635
358 | 0003398
359 | 0005931
360 | 0001577
361 | 0007828
362 | 0004373
363 | 0003520
364 | 0003826
365 | 0008987
366 | 0004673
367 | 0009144
368 | 0001296
369 | 0008124
370 | 0004133
371 | 0004151
372 | 0009043
373 | 0000381
374 | 0008642
375 | 0002114
376 | 0001385
377 | 0001388
378 | 0007980
379 | 0000232
380 | 0008561
381 | 0000982
382 | 0006903
383 | 0006866
384 | 0002425
385 | 0009039
386 | 0004567
387 | 0008511
388 | 0005736
389 | 0002420
390 | 0001506
391 | 0006634
392 | 0002043
393 | 0006518
394 | 0007881
395 | 0007478
396 | 0006230
397 | 0006050
398 | 0004103
399 | 0004966
400 | 0006189
401 | 0006006
402 | 0007831
403 | 0007581
404 | 0001008
405 | 0002559
406 | 0008107
407 | 0004766
408 | 0003235
409 | 0001424
410 | 0000552
411 | 0006855
412 | 0003193
413 | 0001709
414 | 0006496
415 | 0001400
416 | 0005144
417 | 0005742
418 | 0004920
419 | 0002595
420 | 0003366
421 | 0001168
422 | 0007361
423 | 0008841
424 | 0006542
425 | 0009119
426 | 0002308
427 | 0006552
428 | 0007521
429 | 0008518
430 | 0000524
431 | 0000293
432 | 0000460
433 | 0002389
434 | 0002126
435 | 0008349
436 | 0000679
437 | 0007102
438 | 0003444
439 | 0004637
440 | 0005227
441 | 0004371
442 | 0004726
443 | 0003554
444 | 0001519
445 | 0005820
446 | 0003051
447 | 0005825
448 | 0003098
449 | 0007385
450 | 0004621
451 | 0000809
452 | 0002571
453 | 0000092
454 | 0004737
455 | 0006808
456 | 0001953
457 | 0000446
458 | 0005732
459 | 0005640
460 | 0007726
461 | 0006425
462 | 0004011
463 | 0004739
464 | 0000035
465 | 0002833
466 | 0000005
467 | 0005196
468 | 0009261
469 | 0007966
470 | 0008672
471 | 0008849
472 | 0000720
473 | 0005612
474 | 0006183
475 | 0007370
476 | 0003095
477 | 0008898
478 | 0001390
479 | 0008168
480 | 0002376
481 | 0003189
482 | 0004276
483 | 0003888
484 | 0009025
485 | 0002899
486 | 0001185
487 | 0001623
488 | 0003487
489 | 0002873
490 | 0009286
491 | 0002024
492 | 0000098
493 | 0008270
494 | 0008750
495 | 0001756
496 | 0005508
497 | 0004031
498 | 0006443
499 | 0000274
500 | 0006639
501 | 0001370
502 | 0004412
503 | 0002134
504 | 0003140
505 | 0007525
506 | 0003501
507 | 0008994
508 | 0006020
509 | 0004322
510 | 0001579
511 | 0007676
512 | 0002171
513 | 0005617
514 | 0003699
515 | 0004287
516 | 0002184
517 | 0007293
518 | 0001024
519 | 0005026
520 | 0006921
521 | 0001211
522 | 0004036
523 | 0003377
524 | 0000353
525 | 0005266
526 | 0009287
527 | 0002032
528 | 0003041
529 | 0006565
530 | 0007513
531 | 0006439
532 | 0001683
533 | 0004035
534 | 0003721
535 | 0002549
536 | 0000205
537 | 0000699
538 | 0000622
539 | 0000668
540 | 0006653
541 | 0008649
542 | 0001552
543 | 0004758
544 | 0006833
545 | 0009090
546 | 0003445
547 | 0003532
548 | 0003983
549 | 0007113
550 | 0002636
551 | 0008230
552 | 0005137
553 | 0006773
554 | 0006757
555 | 0002704
556 | 0008444
557 | 0008765
558 | 0001229
559 | 0002102
560 | 0001109
561 | 0009363
562 | 0008060
563 | 0002390
564 | 0004405
565 | 0000878
566 | 0003783
567 | 0007248
568 | 0000423
569 | 0001344
570 | 0004569
571 | 0002064
572 | 0006589
573 | 0003857
574 | 0001419
575 | 0003267
576 | 0006787
577 | 0005855
578 | 0004787
579 | 0001697
580 | 0001666
581 | 0002210
582 | 0002638
583 | 0006473
584 | 0006924
585 | 0004501
586 | 0001929
587 | 0008460
588 | 0007189
589 | 0006778
590 | 0000762
591 | 0001744
592 | 0000275
593 | 0003646
594 | 0004176
595 | 0000802
596 | 0003062
597 | 0001698
598 | 0001858
599 | 0007668
600 | 0006161
601 | 0007451
602 | 0001092
603 | 0005670
604 | 0000027
605 | 0005625
606 | 0001329
607 | 0003505
608 | 0008472
609 | 0001402
610 | 0004360
611 | 0004028
612 | 0008678
613 | 0008676
614 | 0001985
615 | 0009296
616 | 0005316
617 | 0006684
618 | 0003711
619 | 0002447
620 | 0006202
621 | 0000532
622 | 0000740
623 | 0003323
624 | 0001477
625 | 0002654
626 | 0007795
627 | 0008653
628 | 0003209
629 | 0001174
630 | 0008639
631 | 0004227
632 | 0000400
633 | 0000413
634 | 0002839
635 | 0000038
636 | 0004971
637 | 0004535
638 | 0000254
639 | 0001531
640 | 0003613
641 | 0008016
642 | 0008281
643 | 0006432
644 | 0007771
645 | 0007595
646 | 0003752
647 | 0000419
648 | 0002963
649 | 0007305
650 | 0008969
651 | 0008343
652 | 0008714
653 | 0009336
654 | 0005481
655 | 0003591
656 | 0005377
657 | 0008544
658 | 0000805
659 | 0004657
660 | 0000899
661 | 0005652
662 | 0006857
663 | 0008214
664 | 0008587
665 | 0000452
666 | 0006182
667 | 0002848
668 | 0008622
669 | 0004144
670 | 0005699
671 | 0002635
672 | 0005505
673 | 0005725
674 | 0005271
675 | 0004639
676 | 0008526
677 | 0002800
678 | 0007781
679 | 0005519
680 | 0002012
681 | 0006423
682 | 0003182
683 | 0001673
684 | 0008473
685 | 0002456
686 | 0008764
687 | 0006146
688 | 0006533
689 | 0005624
690 | 0006349
691 | 0004796
692 | 0000581
693 | 0000824
694 | 0007000
695 | 0005193
696 | 0008259
697 | 0008922
698 | 0008528
699 | 0007946
700 | 0008674
701 | 0007948
702 | 0003970
703 | 0001240
704 | 0001045
705 | 0004391
706 | 0004902
707 | 0000143
708 | 0005817
709 | 0007716
710 | 0006094
711 | 0008342
712 | 0009079
713 | 0007492
714 | 0001062
715 | 0002669
716 | 0000187
717 | 0006897
718 | 0003240
719 | 0008337
720 | 0006122
721 | 0008302
722 | 0002508
723 | 0005041
724 | 0003417
725 | 0002422
726 | 0002489
727 | 0005944
728 | 0001526
729 | 0007079
730 | 0007337
731 | 0005194
732 | 0005163
733 | 0007399
734 | 0004981
735 | 0001661
736 | 0006678
737 | 0008065
738 | 0008937
739 | 0009209
740 | 0007576
741 | 0008418
742 | 0007575
743 | 0002457
744 | 0003178
745 | 0008471
746 | 0005556
747 | 0001738
748 | 0007853
749 | 0007976
750 | 0001877
751 | 0005447
752 | 0002132
753 | 0008226
754 | 0006804
755 | 0003282
756 | 0006311
757 | 0003619
758 | 0006667
759 | 0002933
760 | 0001238
761 | 0004872
762 | 0003076
763 | 0001332
764 | 0004212
765 | 0005383
766 | 0004784
767 | 0007208
768 | 0004953
769 | 0007005
770 | 0006700
771 | 0005150
772 | 0007323
773 | 0008827
774 | 0002663
775 | 0000292
776 | 0008095
777 | 0006328
778 | 0004073
779 | 0001269
780 | 0003099
781 | 0000920
782 | 0007070
783 | 0004390
784 | 0003940
785 | 0001195
786 | 0004423
787 | 0009098
788 | 0003381
789 | 0000557
790 | 0004108
791 | 0007807
792 | 0006975
793 | 0007418
794 | 0000571
795 | 0000319
796 | 0001121
797 | 0001217
798 | 0008258
799 | 0001814
800 | 0007560
801 | 0008193
802 | 0007652
803 | 0006846
804 | 0000365
805 | 0001017
806 | 0005328
807 | 0005685
808 | 0004834
809 | 0005149
810 | 0006592
811 | 0008038
812 | 0008801
813 | 0006208
814 | 0003481
815 | 0005211
816 | 0007636
817 | 0003552
818 | 0004839
819 | 0007922
820 | 0001026
821 | 0003904
822 | 0001541
823 | 0008782
824 | 0006495
825 | 0000613
826 | 0005897
827 | 0006240
828 | 0001366
829 | 0007717
830 | 0001612
831 | 0000307
832 | 0005877
833 | 0003571
834 | 0005982
835 | 0006026
836 | 0005869
837 | 0000597
838 | 0008486
839 | 0005493
840 | 0003833
841 | 0002097
842 | 0002744
843 | 0004102
844 | 0007096
845 | 0002798
846 | 0003965
847 | 0004992
848 | 0009176
849 | 0001087
850 | 0005744
851 | 0006793
852 | 0007289
853 | 0006499
854 | 0002599
855 | 0007899
856 | 0007787
857 | 0006765
858 | 0006737
859 | 0004437
860 | 0004880
861 | 0008379
862 | 0008661
863 | 0006960
864 | 0001086
865 | 0000358
866 | 0000077
867 | 0004023
868 | 0007929
869 | 0003160
870 | 0009301
871 | 0007635
872 | 0001054
873 | 0002120
874 | 0008208
875 | 0005220
876 | 0008742
877 | 0006465
878 | 0002100
879 | 0002013
880 | 0008002
881 | 0008318
882 | 0009082
883 | 0002483
884 | 0004245
885 | 0001259
886 | 0007174
887 | 0008816
888 | 0000405
889 | 0005678
890 | 0001882
891 | 0005917
892 | 0008554
893 | 0000230
894 | 0004728
895 | 0002382
896 | 0004044
897 | 0008702
898 | 0004881
899 | 0005626
900 | 0006181
901 | 0008261
902 | 0006176
903 | 0005441
904 | 0001498
905 | 0002962
906 | 0003782
907 | 0001027
908 | 0003167
909 | 0005605
910 | 0007991
911 | 0006462
912 | 0004209
913 | 0004798
914 | 0004361
915 | 0003655
916 | 0000713
917 | 0007367
918 | 0004836
919 | 0004543
920 | 0009236
921 | 0003561
922 | 0004364
923 | 0003365
924 | 0001680
925 | 0003159
926 | 0002796
927 | 0001899
928 | 0002500
929 | 0005749
930 | 0004846
931 | 0000653
932 | 0005413
933 | 0004302
934 | 0007625
935 | 0002883
936 | 0007080
937 | 0002238
938 | 0003694
939 | 0004476
940 | 0007887
941 | 0007820
942 | 0008601
943 | 0006354
944 | 0007870
945 | 0000454
946 | 0005195
947 | 0002440
948 | 0002209
949 | 0002360
950 | 0003027
951 | 0001590
952 | 0002906
953 | 0001840
954 | 0005027
955 | 0004940
956 | 0008228
957 | 0000120
958 | 0003269
959 | 0001539
960 | 0004723
961 | 0009212
962 | 0000693
963 | 0001177
964 | 0003354
965 | 0006100
966 | 0008030
967 | 0005201
968 | 0003623
969 | 0004708
970 | 0005750
971 | 0005767
972 | 0005819
973 | 0008717
974 | 0000749
975 | 0006154
976 | 0006582
977 | 0003087
978 | 0006803
979 | 0005363
980 | 0002211
981 | 0007512
982 | 0000796
983 | 0004707
984 | 0008252
985 | 0009393
986 | 0006944
987 | 0001593
988 | 0009218
989 | 0005616
990 | 0001701
991 | 0007129
992 | 0008087
993 | 0006075
994 | 0001170
995 | 0001303
996 | 0002031
997 | 0000640
998 | 0007869
999 | 0004821
1000 | 0006021
1001 | 0006364
1002 | 0002678
1003 | 0006269
1004 | 0009189
1005 | 0000351
1006 | 0007499
1007 | 0008272
1008 | 0002596
1009 | 0004986
1010 | 0003452
1011 | 0000528
1012 | 0000132
1013 | 0005681
1014 | 0008536
1015 | 0007630
1016 | 0005085
1017 | 0006996
1018 | 0006674
1019 | 0007241
1020 | 0002285
1021 | 0005623
1022 | 0007759
1023 | 0001974
1024 | 0006672
1025 | 0008982
1026 | 0008069
1027 | 0002855
1028 | 0007270
1029 | 0005249
1030 | 0002886
1031 | 0007260
1032 | 0008867
1033 | 0006977
1034 | 0004223
1035 | 0007056
1036 | 0000535
1037 | 0001603
1038 | 0001015
1039 | 0002573
1040 | 0006756
1041 | 0007851
1042 | 0000465
1043 | 0004428
1044 | 0008834
1045 | 0005292
1046 | 0006523
1047 | 0005331
1048 | 0005934
1049 | 0003894
1050 | 0005810
1051 | 0009395
1052 | 0009103
1053 | 0004419
1054 | 0007438
1055 | 0007450
1056 | 0000208
1057 | 0001736
1058 | 0001857
1059 | 0004762
1060 | 0003698
1061 | 0006618
1062 | 0000639
1063 | 0001849
1064 | 0008543
1065 | 0000455
1066 | 0003489
1067 | 0002645
1068 | 0003139
1069 | 0001911
1070 | 0006430
1071 | 0008739
1072 | 0002763
1073 | 0005437
1074 | 0000296
1075 | 0007324
1076 | 0005967
1077 | 0003012
1078 | 0003321
1079 | 0001267
1080 | 0001797
1081 | 0002888
1082 | 0006795
1083 | 0002204
1084 | 0004200
1085 | 0000265
1086 | 0005673
1087 | 0003837
1088 | 0003205
1089 | 0006071
1090 | 0007813
1091 | 0006254
1092 | 0001876
1093 | 0002739
1094 | 0004694
1095 | 0002498
1096 | 0007928
1097 | 0006256
1098 | 0005463
1099 | 0008970
1100 | 0002400
1101 | 0008995
1102 | 0008135
1103 | 0003420
1104 | 0002362
1105 | 0002947
1106 | 0009012
1107 | 0008020
1108 | 0004720
1109 | 0006132
1110 | 0008071
1111 | 0000011
1112 | 0001277
1113 | 0008615
1114 | 0003556
1115 | 0001793
1116 | 0007375
1117 | 0007261
1118 | 0000736
1119 | 0004589
1120 | 0003578
1121 | 0008698
1122 | 0003104
1123 | 0005318
1124 | 0000127
1125 | 0001478
1126 | 0004530
1127 | 0001134
1128 | 0001944
1129 | 0009355
1130 | 0004686
1131 | 0001020
1132 | 0000970
1133 | 0006630
1134 | 0003564
1135 | 0004497
1136 | 0006501
1137 | 0007284
1138 | 0000210
1139 | 0004805
1140 | 0003514
1141 | 0008570
1142 | 0001802
1143 | 0003849
1144 | 0008887
1145 | 0001739
1146 | 0000960
1147 | 0004913
1148 | 0001647
1149 | 0003245
1150 | 0008712
1151 | 0004236
1152 | 0003997
1153 | 0007141
1154 | 0003929
1155 | 0004804
1156 | 0003383
1157 | 0002136
1158 | 0005462
1159 | 0005430
1160 | 0005888
1161 | 0002387
1162 | 0008744
1163 | 0007332
1164 | 0007276
1165 | 0006594
1166 | 0000444
1167 | 0009388
1168 | 0006461
1169 | 0009196
1170 | 0000302
1171 | 0000998
1172 | 0007170
1173 | 0004353
1174 | 0005040
1175 | 0002169
1176 | 0003517
1177 | 0006934
1178 | 0004925
1179 | 0001754
1180 | 0003152
1181 | 0004574
1182 | 0002487
1183 | 0007314
1184 | 0001521
1185 | 0003726
1186 | 0006271
1187 | 0007424
1188 | 0009188
1189 | 0005320
1190 | 0006234
1191 | 0002770
1192 | 0002030
1193 | 0009297
1194 | 0009386
1195 | 0007322
1196 | 0003008
1197 | 0001122
1198 | 0001124
1199 | 0005248
1200 | 0003814
1201 | 0004779
1202 | 0003418
1203 | 0003938
1204 | 0006963
1205 | 0001440
1206 | 0002665
1207 | 0005443
1208 | 0005298
1209 | 0002781
1210 | 0003723
1211 | 0003026
1212 | 0000095
1213 | 0000198
1214 | 0008446
1215 | 0004590
1216 | 0004578
1217 | 0000258
1218 | 0008504
1219 | 0001295
1220 | 0002950
1221 | 0003204
1222 | 0004634
1223 | 0007734
1224 | 0000377
1225 | 0006932
1226 | 0007876
1227 | 0009290
1228 | 0006838
1229 | 0003183
1230 | 0008752
1231 | 0003141
1232 | 0007354
1233 | 0005069
1234 | 0007251
1235 | 0001780
1236 | 0005669
1237 | 0002221
1238 | 0005128
1239 | 0008549
1240 | 0004076
1241 | 0006515
1242 | 0007232
1243 | 0002695
1244 | 0003094
1245 | 0004346
1246 | 0004232
1247 | 0003657
1248 | 0008255
1249 | 0007739
1250 | 0007541
1251 | 0002133
1252 | 0005142
1253 | 0003966
1254 | 0007886
1255 | 0000642
1256 | 0008863
1257 | 0008666
1258 | 0001414
1259 | 0009167
1260 | 0008651
1261 | 0000637
1262 | 0005023
1263 | 0000917
1264 | 0002419
1265 | 0006828
1266 | 0009258
1267 | 0003864
1268 | 0006697
1269 | 0004889
1270 | 0008215
1271 | 0000036
1272 | 0000054
1273 | 0002812
1274 | 0009089
1275 | 0004879
1276 | 0002700
1277 | 0003310
1278 | 0002661
1279 | 0008422
1280 | 0002467
1281 | 0001161
1282 | 0006593
1283 | 0008900
1284 | 0003344
1285 | 0004757
1286 | 0009375
1287 | 0003855
1288 | 0001578
1289 | 0003899
1290 | 0000313
1291 | 0007338
1292 | 0009380
1293 | 0004061
1294 | 0008373
1295 | 0003122
1296 | 0001114
1297 | 0001538
1298 | 0008977
1299 | 0006702
1300 | 0002610
1301 | 0006380
1302 | 0008741
1303 | 0008110
1304 | 0005842
1305 | 0002439
1306 | 0007138
1307 | 0008527
1308 | 0008412
1309 | 0007310
1310 | 0008231
1311 | 0009362
1312 | 0004352
1313 | 0002353
1314 | 0004169
1315 | 0009153
1316 | 0008894
1317 | 0003047
1318 | 0005563
1319 | 0005613
1320 | 0005574
1321 | 0006790
1322 | 0005416
1323 | 0005928
1324 | 0000343
1325 | 0003545
1326 | 0003645
1327 | 0001951
1328 | 0001358
1329 | 0000690
1330 | 0007708
1331 | 0003003
1332 | 0006053
1333 | 0002082
1334 | 0005802
1335 | 0004323
1336 | 0004042
1337 | 0007350
1338 | 0001093
1339 | 0006511
1340 | 0007477
1341 | 0004284
1342 | 0007042
1343 | 0001103
1344 | 0007318
1345 | 0003666
1346 | 0008691
1347 | 0002007
1348 | 0002393
1349 | 0006175
1350 | 0004434
1351 | 0003011
1352 | 0008088
1353 | 0002566
1354 | 0000716
1355 | 0007641
1356 | 0005924
1357 | 0001068
1358 | 0004754
1359 | 0004024
1360 | 0004448
1361 | 0002458
1362 | 0004121
1363 | 0006284
1364 | 0006292
1365 | 0006061
1366 | 0004795
1367 | 0002372
1368 | 0008879
1369 | 0004299
1370 | 0007176
1371 | 0008975
1372 | 0008998
1373 | 0008501
1374 | 0004460
1375 | 0005310
1376 | 0006852
1377 | 0007194
1378 | 0007824
1379 | 0005610
1380 | 0001190
1381 | 0006744
1382 | 0003956
1383 | 0006028
1384 | 0006337
1385 | 0005816
1386 | 0005867
1387 | 0006051
1388 | 0003918
1389 | 0008381
1390 | 0007592
1391 | 0004402
1392 | 0008626
1393 | 0009076
1394 | 0006655
1395 | 0002650
1396 | 0002273
1397 | 0000168
1398 | 0008359
1399 | 0006995
1400 | 0000281
1401 | 0006982
1402 | 0004560
1403 | 0005942
1404 | 0001842
1405 | 0003928
1406 | 0003747
1407 | 0007767
1408 | 0003436
1409 | 0002465
1410 | 0005186
1411 | 0000891
1412 | 0004949
1413 | 0005075
1414 | 0003719
1415 | 0003757
1416 | 0003796
1417 | 0006371
1418 | 0000112
1419 | 0003127
1420 | 0006575
1421 | 0004217
1422 | 0001582
1423 | 0006322
1424 | 0000816
1425 | 0005380
1426 | 0001176
1427 | 0001395
1428 | 0002768
1429 | 0001871
1430 | 0003442
1431 | 0004605
1432 | 0005238
1433 | 0001567
1434 | 0007021
1435 | 0009277
1436 | 0000768
1437 | 0002129
1438 | 0006305
1439 | 0005290
1440 | 0000334
1441 | 0001491
1442 | 0004580
1443 | 0000468
1444 | 0004422
1445 | 0004359
1446 | 0002881
1447 | 0001412
1448 | 0007559
1449 | 0001522
1450 | 0000037
1451 | 0006177
1452 | 0003612
1453 | 0009126
1454 | 0000012
1455 | 0004258
1456 | 0008229
1457 | 0007355
1458 | 0009342
1459 | 0000374
1460 | 0008548
1461 | 0001305
1462 | 0004989
1463 | 0003049
1464 | 0000434
1465 | 0002696
1466 | 0009054
1467 | 0007004
1468 | 0004279
1469 | 0008385
1470 | 0004268
1471 | 0004045
1472 | 0002790
1473 | 0001160
1474 | 0004888
1475 | 0009351
1476 | 0000688
1477 | 0005121
1478 | 0003572
1479 | 0005480
1480 | 0000817
1481 | 0001543
1482 | 0002110
1483 | 0008770
1484 | 0003931
1485 | 0000480
1486 | 0001622
1487 | 0003968
1488 | 0009133
1489 | 0000497
1490 | 0002879
1491 | 0003643
1492 | 0006286
1493 | 0005777
1494 | 0007839
1495 | 0009128
1496 | 0004700
1497 | 0002976
1498 | 0005336
1499 | 0000294
1500 | 0000562
1501 | 0006981
1502 | 0005932
1503 | 0003435
1504 | 0006420
1505 | 0009334
1506 | 0002988
1507 | 0005236
1508 | 0004563
1509 | 0001559
1510 | 0005458
1511 | 0008734
1512 | 0008217
1513 | 0007960
1514 | 0004188
1515 | 0008426
1516 | 0007101
1517 | 0008820
1518 | 0000033
1519 | 0003350
1520 | 0004307
1521 | 0006685
1522 | 0001317
1523 | 0009055
1524 | 0001834
1525 | 0000706
1526 | 0000122
1527 | 0005603
1528 | 0006647
1529 | 0002461
1530 | 0006762
1531 | 0000457
1532 | 0008183
1533 | 0007686
1534 | 0001702
1535 | 0007755
1536 | 0005053
1537 | 0009041
1538 | 0000828
1539 | 0008609
1540 | 0004819
1541 | 0003957
1542 | 0006619
1543 | 0006111
1544 | 0003244
1545 | 0005839
1546 | 0005500
1547 | 0009068
1548 | 0007792
1549 | 0005695
1550 | 0002401
1551 | 0003570
1552 | 0001321
1553 | 0001532
1554 | 0003777
1555 | 0003089
1556 | 0002402
1557 | 0009052
1558 | 0008514
1559 | 0005219
1560 | 0002322
1561 | 0000128
1562 | 0005457
1563 | 0005350
1564 | 0000592
1565 | 0000129
1566 | 0000948
1567 | 0007272
1568 | 0004517
1569 | 0006800
1570 | 0000425
1571 | 0007538
1572 | 0001374
1573 | 0005984
1574 | 0006740
1575 | 0008768
1576 | 0008237
1577 | 0008298
1578 | 0004559
1579 | 0005576
1580 | 0006760
1581 | 0008227
1582 | 0006902
1583 | 0000862
1584 | 0001351
1585 | 0000728
1586 | 0007059
1587 | 0003053
1588 | 0002019
1589 | 0001824
1590 | 0003197
1591 | 0006188
1592 | 0008336
1593 | 0008916
1594 | 0003663
1595 | 0009270
1596 | 0007752
1597 | 0002485
1598 | 0005635
1599 | 0007133
1600 | 0005522
1601 | 0008145
1602 | 0001291
1603 | 0004084
1604 | 0001453
1605 | 0002044
1606 | 0004997
1607 | 0007303
1608 | 0007364
1609 | 0009109
1610 | 0008295
1611 | 0004263
1612 | 0007234
1613 | 0001264
1614 | 0007496
1615 | 0003212
1616 | 0007463
1617 | 0002944
1618 | 0000901
1619 | 0007537
1620 | 0007210
1621 | 0007810
1622 | 0001136
1623 | 0005881
1624 | 0004365
1625 | 0006895
1626 | 0003194
1627 | 0000819
1628 | 0000492
1629 | 0004079
1630 | 0002505
1631 | 0008438
1632 | 0003874
1633 | 0006531
1634 | 0005832
1635 | 0000226
1636 | 0004788
1637 | 0002623
1638 | 0000287
1639 | 0008824
1640 | 0002037
1641 | 0000660
1642 | 0004829
1643 | 0002630
1644 | 0002533
1645 | 0005740
1646 | 0004827
1647 | 0003595
1648 | 0006148
1649 | 0001033
1650 | 0004824
1651 | 0009064
1652 | 0006657
1653 | 0001373
1654 | 0003973
1655 | 0001800
1656 | 0003161
1657 | 0008213
1658 | 0002092
1659 | 0007031
1660 | 0005183
1661 | 0007398
1662 | 0000378
1663 | 0000589
1664 | 0001401
1665 | 0005789
1666 | 0004049
1667 | 0001115
1668 | 0008427
1669 | 0001769
1670 | 0002041
1671 | 0004426
1672 | 0003706
1673 | 0005351
1674 | 0008630
1675 | 0001203
1676 | 0003708
1677 | 0004435
1678 | 0002151
1679 | 0000430
1680 | 0006692
1681 | 0000974
1682 | 0007466
1683 | 0002681
1684 | 0006851
1685 | 0001265
1686 | 0000463
1687 | 0008059
1688 | 0000231
1689 | 0006484
1690 | 0002641
1691 | 0006370
1692 | 0000064
1693 | 0001427
1694 | 0005656
1695 | 0000655
1696 | 0002959
1697 | 0000110
1698 | 0002218
1699 | 0007219
1700 | 0003502
1701 | 0006733
1702 | 0000448
1703 | 0004135
1704 | 0005226
1705 | 0007614
1706 | 0007485
1707 | 0005916
1708 | 0003390
1709 | 0000339
1710 | 0006586
1711 | 0001883
1712 | 0007455
1713 | 0004927
1714 | 0007240
1715 | 0004608
1716 | 0002574
1717 | 0006785
1718 | 0002085
1719 | 0007561
1720 | 0001143
1721 | 0004651
1722 | 0002719
1723 | 0001099
1724 | 0004052
1725 | 0002264
1726 | 0004205
1727 | 0000701
1728 | 0004248
1729 | 0006073
1730 | 0000947
1731 | 0008188
1732 | 0005232
1733 | 0008146
1734 | 0005715
1735 | 0004043
1736 | 0005172
1737 | 0005110
1738 | 0001450
1739 | 0002628
1740 | 0004385
1741 | 0006458
1742 | 0002066
1743 | 0002655
1744 | 0003470
1745 | 0005156
1746 | 0000515
1747 | 0004568
1748 | 0003399
1749 | 0006910
1750 | 0000221
1751 | 0006324
1752 | 0007602
1753 | 0007085
1754 | 0003155
1755 | 0001201
1756 | 0007443
1757 | 0001489
1758 | 0005521
1759 | 0000317
1760 | 0009305
1761 | 0003821
1762 | 0001067
1763 | 0006726
1764 | 0008276
1765 | 0000225
1766 | 0006985
1767 | 0004538
1768 | 0002068
1769 | 0008826
1770 | 0007422
1771 | 0000652
1772 | 0002235
1773 | 0002736
1774 | 0007157
1775 | 0004288
1776 | 0008175
1777 | 0008099
1778 | 0006741
1779 | 0001191
1780 | 0000621
1781 | 0006957
1782 | 0004479
1783 | 0007874
1784 | 0007186
1785 | 0008631
1786 | 0001990
1787 | 0001887
1788 | 0002331
1789 | 0000821
1790 | 0003858
1791 | 0006304
1792 | 0008705
1793 | 0005015
1794 | 0002794
1795 | 0003881
1796 | 0001575
1797 | 0005018
1798 | 0001994
1799 | 0000256
1800 | 0004010
1801 | 0009289
1802 | 0002787
1803 | 0005726
1804 | 0008521
1805 | 0009000
1806 | 0002955
1807 | 0005791
1808 | 0001654
1809 | 0008843
1810 | 0006591
1811 | 0006931
1812 | 0006859
1813 | 0003668
1814 | 0004706
1815 | 0008305
1816 | 0002118
1817 | 0004146
1818 | 0001032
1819 | 0001988
1820 | 0003125
1821 | 0002205
1822 | 0001139
1823 | 0003168
1824 | 0008926
1825 | 0004719
1826 | 0003430
1827 | 0000407
1828 | 0001298
1829 | 0009063
1830 | 0006867
1831 | 0007181
1832 | 0005955
1833 | 0006045
1834 | 0002269
1835 | 0001245
1836 | 0004139
1837 | 0002001
1838 | 0006390
1839 | 0004034
1840 | 0007750
1841 | 0003993
1842 | 0002909
1843 | 0003279
1844 | 0003169
1845 | 0008972
1846 | 0000691
1847 | 0001349
1848 | 0002005
1849 | 0008361
1850 | 0005007
1851 | 0008973
1852 | 0008711
1853 | 0007735
1854 | 0002468
1855 | 0009132
1856 | 0003893
1857 | 0003523
1858 | 0009122
1859 | 0003050
1860 | 0000237
1861 | 0006110
1862 | 0000872
1863 | 0009171
1864 | 0005487
1865 | 0002709
1866 | 0001972
1867 | 0005021
1868 | 0003519
1869 | 0002702
1870 | 0003035
1871 | 0005411
1872 | 0007402
1873 | 0006512
1874 | 0007213
1875 | 0005871
1876 | 0009003
1877 | 0008176
1878 | 0001588
1879 | 0007816
1880 | 0007286
1881 | 0007497
1882 | 0002344
1883 | 0007383
1884 | 0008893
1885 | 0001805
1886 | 0006922
1887 | 0005253
1888 | 0009384
1889 | 0004214
1890 | 0004800
1891 | 0003795
1892 | 0007390
1893 | 0003898
1894 | 0003102
1895 | 0006844
1896 | 0007474
1897 | 0009353
1898 | 0000058
1899 | 0004618
1900 | 0007764
1901 | 0005729
1902 | 0004159
1903 | 0007988
1904 | 0000385
1905 | 0006999
1906 | 0001181
1907 | 0007772
1908 | 0000926
1909 | 0006130
1910 | 0002861
1911 | 0009269
1912 | 0000201
1913 | 0004629
1914 | 0000530
1915 | 0001856
1916 | 0005830
1917 | 0005929
1918 | 0005113
1919 | 0007837
1920 | 0007811
1921 | 0001130
1922 | 0004870
1923 | 0003679
1924 | 0008245
1925 | 0004097
1926 | 0006493
1927 | 0008696
1928 | 0004919
1929 | 0003100
1930 | 0006138
1931 | 0000687
1932 | 0001180
1933 | 0009361
1934 | 0005561
1935 | 0003908
1936 | 0008266
1937 | 0003897
1938 | 0008432
1939 | 0008907
1940 | 0007122
1941 | 0004020
1942 | 0004249
1943 | 0003111
1944 | 0007598
1945 | 0009252
1946 | 0002502
1947 | 0004831
1948 | 0008414
1949 | 0008098
1950 | 0005132
1951 | 0001072
1952 | 0001323
1953 | 0007320
1954 | 0006538
1955 | 0004471
1956 | 0007132
1957 | 0005886
1958 | 0002905
1959 | 0004584
1960 | 0008421
1961 | 0001624
1962 | 0005780
1963 | 0002724
1964 | 0005662
1965 | 0009058
1966 | 0001799
1967 | 0006724
1968 | 0003319
1969 | 0006611
1970 | 0003295
1971 | 0001386
1972 | 0001682
1973 | 0005592
1974 | 0003173
1975 | 0005295
1976 | 0003901
1977 | 0009400
1978 | 0003746
1979 | 0008818
1980 | 0007221
1981 | 0005154
1982 | 0006201
1983 | 0001311
1984 | 0009304
1985 | 0009146
1986 | 0003406
1987 | 0006675
1988 | 0007651
1989 | 0000300
1990 | 0001549
1991 | 0003959
1992 | 0009219
1993 | 0003255
1994 | 0006046
1995 | 0004370
1996 | 0005490
1997 | 0002718
1998 | 0006103
1999 | 0005368
2000 | 0001456
2001 | 0006625
2002 | 0001347
2003 | 0004037
2004 | 0001708
2005 | 0007088
2006 | 0006569
2007 | 0009309
2008 | 0008396
2009 | 0003891
2010 | 0002225
2011 | 0006453
2012 | 0004982
2013 | 0006406
2014 | 0001004
2015 | 0006750
2016 | 0005168
2017 | 0001626
2018 | 0002892
2019 | 0009265
2020 | 0001705
2021 | 0007879
2022 | 0004612
2023 | 0003944
2024 | 0001175
2025 | 0004765
2026 | 0003432
2027 | 0000882
2028 | 0003317
2029 | 0004979
2030 | 0002659
2031 | 0008235
2032 | 0007304
2033 | 0008945
2034 | 0008187
2035 | 0004741
2036 | 0001912
2037 | 0006805
2038 | 0005641
2039 | 0004803
2040 | 0001830
2041 | 0006359
2042 | 0003118
2043 | 0008179
2044 | 0008629
2045 | 0001018
2046 | 0003810
2047 | 0007433
2048 | 0007629
2049 | 0003503
2050 | 0003068
2051 | 0006081
2052 | 0006212
2053 | 0001472
2054 | 0005907
2055 | 0007659
2056 | 0003467
2057 | 0000382
2058 | 0006718
2059 | 0008207
2060 | 0007784
2061 | 0004855
2062 | 0005834
2063 | 0000242
2064 | 0000451
2065 | 0000941
2066 | 0008800
2067 | 0006768
2068 | 0005335
2069 | 0007517
2070 | 0008340
2071 | 0003621
2072 | 0007094
2073 | 0005659
2074 | 0006752
2075 | 0007051
2076 | 0008032
2077 | 0008182
2078 | 0005734
2079 | 0007655
2080 | 0000803
2081 | 0002631
2082 | 0000440
2083 | 0006323
2084 | 0001851
2085 | 0002825
2086 | 0004066
2087 | 0000595
2088 | 0008697
2089 | 0009346
2090 | 0006170
2091 | 0004179
2092 | 0003792
2093 | 0005223
2094 | 0009136
2095 | 0009266
2096 | 0008323
2097 | 0004164
2098 | 0002276
2099 | 0006823
2100 | 0006917
2101 | 0001776
2102 | 0006229
2103 | 0003996
2104 | 0000006
2105 | 0007130
2106 | 0007396
2107 | 0000991
2108 | 0008905
2109 | 0001252
2110 | 0003283
2111 | 0004545
2112 | 0007608
2113 | 0005167
2114 | 0001970
2115 | 0006899
2116 | 0008480
2117 | 0003763
2118 | 0001299
2119 | 0002711
2120 | 0006517
2121 | 0004721
2122 | 0004266
2123 | 0008531
2124 | 0008806
2125 | 0005012
2126 | 0004789
2127 | 0003951
2128 | 0002832
2129 | 0002324
2130 | 0001895
2131 | 0006658
2132 | 0004837
2133 | 0008723
2134 | 0004774
2135 | 0005658
2136 | 0007343
2137 | 0003338
2138 | 0000997
2139 | 0006979
2140 | 0008901
2141 | 0009229
2142 | 0008771
2143 | 0002816
2144 | 0000429
2145 | 0005068
2146 | 0008788
2147 | 0007765
2148 | 0006849
2149 | 0004449
2150 | 0008134
2151 | 0000509
2152 | 0003995
2153 | 0004454
2154 | 0006545
2155 | 0004006
2156 | 0008345
2157 | 0009037
2158 | 0006147
2159 | 0003072
2160 | 0008160
2161 | 0002089
2162 | 0004970
2163 | 0008798
2164 | 0000719
2165 | 0003397
2166 | 0000261
2167 | 0002745
2168 | 0008401
2169 | 0005647
2170 | 0008517
2171 | 0009186
2172 | 0004397
2173 | 0007246
2174 | 0002446
2175 | 0006450
2176 | 0004730
2177 | 0006920
2178 | 0003455
2179 | 0005554
2180 | 0000383
2181 | 0005218
2182 | 0006329
2183 | 0004532
2184 | 0008989
2185 | 0004869
2186 | 0007256
2187 | 0009254
2188 | 0007162
2189 | 0006037
2190 | 0008424
2191 | 0002239
2192 | 0001594
2193 | 0003185
2194 | 0007841
2195 | 0002050
2196 | 0007207
2197 | 0001784
2198 | 0000436
2199 | 0004492
2200 | 0007003
2201 | 0002212
2202 | 0000140
2203 | 0007867
2204 | 0006246
2205 | 0004537
2206 | 0007017
2207 | 0005062
2208 | 0000976
2209 | 0002086
2210 | 0003106
2211 | 0000931
2212 | 0009302
2213 | 0004189
2214 | 0008445
2215 | 0000889
2216 | 0001922
2217 | 0008224
2218 | 0006287
2219 | 0004210
2220 | 0004029
2221 | 0005329
2222 | 0000566
2223 | 0008220
2224 | 0000603
2225 | 0005404
2226 | 0002729
2227 | 0005849
2228 | 0002403
2229 | 0005058
2230 | 0001335
2231 | 0002313
2232 | 0000534
2233 | 0005247
2234 | 0000245
2235 | 0004763
2236 | 0007255
2237 | 0007084
2238 | 0002653
2239 | 0007410
2240 | 0007588
2241 | 0003540
2242 | 0000139
2243 | 0000666
2244 | 0008625
2245 | 0000346
2246 | 0007697
2247 | 0003294
2248 | 0008409
2249 | 0005557
2250 | 0005535
2251 | 0008007
2252 | 0000309
2253 | 0000285
2254 | 0001156
2255 | 0001822
2256 | 0007898
2257 | 0003644
2258 | 0005912
2259 | 0006898
2260 | 0003592
2261 | 0003656
2262 | 0008776
2263 | 0004181
2264 | 0005808
2265 | 0009306
2266 | 0005705
2267 | 0003128
2268 | 0008817
2269 | 0007779
2270 | 0001485
2271 | 0007860
2272 | 0003422
2273 | 0003301
2274 | 0005042
2275 | 0000748
2276 | 0000211
2277 | 0007616
2278 | 0003156
2279 | 0004270
2280 | 0000362
2281 | 0004470
2282 | 0004131
2283 | 0001285
2284 | 0002319
2285 | 0000846
2286 | 0007435
2287 | 0005384
2288 | 0004408
2289 | 0007515
2290 | 0006587
2291 | 0004068
2292 | 0004630
2293 | 0008044
2294 | 0005937
2295 | 0000191
2296 | 0008402
2297 | 0001873
2298 | 0005398
2299 | 0003473
2300 | 0005407
2301 | 0002564
2302 | 0004695
2303 | 0000682
2304 | 0000550
2305 | 0000477
2306 | 0007087
2307 | 0001500
2308 | 0001924
2309 | 0000490
2310 | 0008961
2311 | 0009086
2312 | 0008287
2313 | 0000838
2314 | 0005244
2315 | 0006716
2316 | 0006237
2317 | 0006821
2318 | 0007212
2319 | 0002396
2320 | 0009183
2321 | 0004781
2322 | 0005807
2323 | 0000966
2324 | 0000572
2325 | 0003576
2326 | 0007710
2327 | 0001471
2328 | 0008563
2329 | 0004910
2330 | 0001118
2331 | 0000935
2332 | 0006206
2333 | 0006282
2334 | 0005229
2335 | 0006617
2336 | 0000070
2337 | 0008264
2338 | 0004790
2339 | 0006856
2340 | 0000062
2341 | 0008611
2342 | 0005588
2343 | 0006263
2344 | 0003477
2345 | 0001369
2346 | 0002648
2347 | 0008658
2348 | 0005515
2349 | 0006431
2350 | 0002819
2351 | 0002981
2352 | 0002188
2353 | 0008003
2354 | 0005102
2355 | 0004140
2356 | 0004503
2357 | 0004297
2358 | 0007933
2359 | 0000216
2360 | 0008936
2361 | 0004678
2362 | 0006360
2363 | 0005628
2364 | 0004524
2365 | 0001422
2366 | 0007878
2367 | 0000556
2368 | 0007421
2369 | 0008263
2370 | 0005001
2371 | 0006478
2372 | 0001639
2373 | 0001314
2374 | 0008599
2375 | 0004825
2376 | 0001926
2377 | 0008577
2378 | 0007014
2379 | 0000673
2380 | 0006340
2381 | 0000363
2382 | 0007353
2383 | 0001081
2384 | 0002637
2385 | 0005918
2386 | 0002587
2387 | 0009398
2388 | 0004764
2389 | 0004294
2390 | 0004485
2391 | 0003303
2392 | 0004914
2393 | 0008784
2394 | 0001671
2395 | 0008390
2396 | 0004260
2397 | 0002664
2398 | 0004602
2399 | 0006505
2400 | 0006734
2401 | 0005953
2402 | 0006925
2403 | 0004696
2404 | 0001391
2405 | 0006779
2406 | 0004961
2407 | 0006868
2408 | 0002901
2409 | 0008978
2410 | 0005200
2411 | 0008553
2412 | 0006585
2413 | 0002753
2414 | 0009207
2415 | 0005105
2416 | 0001159
2417 | 0000384
2418 | 0002318
2419 | 0001244
2420 | 0004663
2421 | 0004952
2422 | 0000199
2423 | 0005882
2424 | 0005961
2425 | 0001131
2426 | 0000903
2427 | 0006879
2428 | 0000786
2429 | 0008162
2430 | 0000971
2431 | 0001123
2432 | 0009046
2433 | 0006597
2434 | 0000314
2435 | 0008122
2436 | 0004216
2437 | 0007349
2438 | 0001210
2439 | 0008185
2440 | 0003131
2441 | 0008140
2442 | 0004885
2443 | 0006641
2444 | 0002466
2445 | 0000894
2446 | 0002552
2447 | 0005983
2448 | 0003347
2449 | 0001288
2450 | 0007943
2451 | 0005644
2452 | 0005019
2453 | 0000689
2454 | 0002815
2455 | 0007801
2456 | 0000936
2457 | 0005642
2458 | 0000598
2459 | 0006792
2460 | 0004894
2461 | 0004536
2462 | 0005323
2463 | 0008031
2464 | 0008338
2465 | 0008049
2466 | 0003892
2467 | 0000238
2468 | 0007826
2469 | 0001341
2470 | 0004786
2471 | 0001279
2472 | 0006410
2473 | 0002838
2474 | 0007763
2475 | 0005424
2476 | 0003577
2477 | 0004170
2478 | 0001052
2479 | 0002080
2480 | 0005312
2481 | 0006477
2482 | 0000707
2483 | 0006889
2484 | 0003115
2485 | 0009192
2486 | 0001811
2487 | 0005894
2488 | 0006401
2489 | 0002558
2490 | 0000367
2491 | 0005668
2492 | 0008328
2493 | 0002912
2494 | 0009300
2495 | 0004466
2496 | 0005215
2497 | 0007555
2498 | 0007931
2499 | 0004752
2500 | 0002028
2501 | 0006005
2502 | 0001677
2503 | 0007520
2504 | 0007404
2505 | 0002917
2506 | 0000856
2507 | 0007804
2508 | 0002369
2509 | 0004540
2510 | 0002303
2511 | 0001574
2512 | 0001361
2513 | 0007835
2514 | 0003247
2515 | 0007963
2516 | 0000508
2517 | 0003701
2518 | 0000298
2519 | 0006777
2520 | 0004772
2521 | 0001714
2522 | 0000751
2523 | 0004221
2524 | 0008393
2525 | 0000356
2526 | 0007888
2527 | 0006283
2528 | 0004799
2529 | 0004759
2530 | 0002730
2531 | 0007736
2532 | 0007611
2533 | 0008996
2534 | 0000875
2535 | 0003982
2536 | 0000697
2537 | 0002980
2538 | 0002117
2539 | 0004218
2540 | 0002863
2541 | 0006759
2542 | 0009045
2543 | 0000732
2544 | 0005523
2545 | 0006993
2546 | 0000333
2547 | 0007365
2548 | 0006486
2549 | 0000024
2550 | 0004498
2551 | 0008233
2552 | 0004429
2553 | 0008492
2554 | 0009158
2555 | 0006471
2556 | 0009160
2557 | 0005960
2558 | 0005188
2559 | 0001533
2560 | 0004293
2561 | 0006681
2562 | 0007657
2563 | 0002572
2564 | 0000781
2565 | 0004463
2566 | 0006032
2567 | 0006346
2568 | 0008068
2569 | 0000340
2570 | 0005981
2571 | 0002889
2572 | 0003363
2573 | 0006179
2574 | 0005594
2575 | 0003945
2576 | 0008347
2577 | 0001534
2578 | 0003171
2579 | 0007027
2580 | 0005741
2581 | 0003142
2582 | 0004290
2583 | 0008991
2584 | 0007745
2585 | 0000366
2586 | 0001356
2587 | 0000290
2588 | 0005718
2589 | 0000080
2590 | 0006854
2591 | 0004416
2592 | 0001243
2593 | 0002738
2594 | 0001495
2595 | 0006774
2596 | 0000504
2597 | 0008634
2598 | 0002486
2599 | 0001576
2600 | 0001376
2601 | 0001328
2602 | 0004461
2603 | 0008104
2604 | 0005034
2605 | 0000252
2606 | 0002780
2607 | 0004856
2608 | 0001149
2609 | 0000172
2610 | 0003472
2611 | 0005388
2612 | 0008333
2613 | 0004750
2614 | 0008147
2615 | 0008677
2616 | 0008435
2617 | 0002895
2618 | 0004333
2619 | 0008314
2620 | 0001794
2621 | 0005080
2622 | 0003916
2623 | 0004386
2624 | 0006213
2625 | 0004116
2626 | 0004552
2627 | 0002289
2628 | 0008874
2629 | 0007845
2630 | 0003498
2631 | 0004291
2632 | 0004246
2633 | 0002125
2634 | 0008294
2635 | 0007691
2636 | 0002872
2637 | 0001755
2638 | 0007153
2639 | 0002215
2640 | 0006010
2641 | 0006758
2642 | 0006806
2643 | 0005527
2644 | 0009369
2645 | 0003627
2646 | 0006184
2647 | 0002618
2648 | 0000580
2649 | 0003529
2650 | 0008196
2651 | 0001513
2652 | 0001573
2653 | 0005935
2654 | 0002771
2655 | 0003869
2656 | 0009018
2657 | 0006887
2658 | 0005551
2659 | 0008729
2660 | 0005806
2661 | 0008275
2662 | 0000672
2663 | 0002565
2664 | 0009150
2665 | 0005510
2666 | 0006384
2667 | 0008792
2668 | 0000488
2669 | 0008326
2670 | 0008026
2671 | 0002521
2672 | 0005713
2673 | 0002708
2674 | 0008139
2675 | 0003042
2676 | 0007705
2677 | 0003495
2678 | 0002323
2679 | 0004531
2680 | 0001443
2681 | 0007872
2682 | 0001845
2683 | 0006342
2684 | 0005122
2685 | 0006896
2686 | 0004813
2687 | 0007924
2688 | 0007582
2689 | 0002327
2690 | 0008382
2691 | 0007166
2692 | 0003358
2693 | 0002185
2694 | 0006470
2695 | 0001910
2696 | 0008958
2697 | 0005322
2698 | 0007850
2699 | 0005846
2700 | 0000607
2701 | 0008807
2702 | 0005225
2703 | 0003428
2704 | 0003617
2705 | 0009394
2706 | 0002452
2707 | 0002284
2708 | 0009235
2709 | 0003790
2710 | 0002074
2711 | 0005992
2712 | 0004968
2713 | 0009038
2714 | 0006489
2715 | 0005724
2716 | 0003419
2717 | 0003096
2718 | 0008607
2719 | 0002386
2720 | 0008154
2721 | 0005161
2722 | 0003300
2723 | 0009253
2724 | 0006799
2725 | 0008582
2726 | 0008155
2727 | 0008906
2728 | 0004631
2729 | 0007533
2730 | 0005475
2731 | 0002245
2732 | 0000680
2733 | 0006408
2734 | 0007242
2735 | 0003902
2736 | 0003500
2737 | 0008718
2738 | 0005552
2739 | 0004273
2740 | 0002198
2741 | 0007412
2742 | 0005432
2743 | 0005775
2744 | 0008919
2745 | 0002067
2746 | 0001137
2747 | 0002806
2748 | 0001869
2749 | 0009111
2750 | 0004467
2751 | 0007294
2752 | 0009354
2753 | 0001946
2754 | 0008648
2755 | 0006435
2756 | 0001132
2757 | 0005602
2758 | 0002052
2759 | 0001920
2760 | 0008406
2761 | 0007067
2762 | 0008354
2763 | 0006297
2764 | 0005070
2765 | 0003278
2766 | 0005755
2767 | 0003446
2768 | 0009170
2769 | 0005512
2770 | 0007940
2771 | 0001223
2772 | 0007487
2773 | 0003010
2774 | 0004141
2775 | 0001637
2776 | 0001940
2777 | 0006069
2778 | 0000272
2779 | 0006572
2780 | 0008856
2781 | 0002111
2782 | 0000484
2783 | 0006335
2784 | 0007667
2785 | 0000125
2786 | 0004369
2787 | 0005260
2788 | 0008376
2789 | 0001006
2790 | 0000983
2791 | 0005971
2792 | 0005714
2793 | 0003930
2794 | 0003569
2795 | 0001692
2796 | 0003058
2797 | 0003827
2798 | 0001430
2799 | 0002177
2800 | 0006125
2801 | 0008915
2802 | 0005341
2803 | 0001591
2804 | 0002578
2805 | 0003774
2806 | 0007413
2807 | 0005703
2808 | 0003243
2809 | 0003530
2810 | 0005547
2811 | 0005731
2812 | 0007562
2813 | 0003266
2814 | 0001823
2815 | 0006781
2816 | 0000320
2817 | 0000831
2818 | 0006358
2819 | 0009134
2820 | 0002530
2821 | 0006325
2822 | 0000978
2823 | 0007126
2824 | 0006070
2825 | 0002166
2826 | 0003361
2827 | 0002639
2828 | 0005572
2829 | 0004512
2830 | 0001785
2831 | 0004019
2832 | 0008699
2833 | 0009035
2834 | 0002826
2835 | 0001354
2836 | 0000799
2837 | 0001209
2838 | 0009008
2839 | 0008854
2840 | 0004192
2841 | 0008627
2842 | 0008571
2843 | 0001135
2844 | 0007184
2845 | 0002196
2846 | 0004931
2847 | 0005653
2848 | 0007805
2849 | 0006192
2850 | 0001975
2851 | 0003486
2852 | 0006564
2853 | 0005887
2854 | 0005651
2855 | 0008317
2856 | 0000213
2857 | 0008881
2858 | 0002765
2859 | 0000149
2860 | 0001987
2861 | 0001371
2862 | 0008851
2863 | 0007704
2864 | 0000271
2865 | 0005950
2866 | 0005090
2867 | 0001011
2868 | 0008477
2869 | 0000501
2870 | 0000048
2871 | 0007843
2872 | 0007013
2873 | 0008886
2874 | 0006637
2875 | 0003238
2876 | 0007677
2877 | 0004198
2878 | 0002459
2879 | 0009323
2880 | 0004972
2881 | 0006275
2882 | 0009264
2883 | 0003718
2884 | 0002233
2885 | 0001831
2886 | 0002208
2887 | 0005997
2888 | 0003603
2889 | 0001908
2890 | 0002256
2891 | 0006894
2892 | 0000121
2893 | 0007891
2894 | 0004957
2895 | 0002722
2896 | 0000206
2897 | 0008932
2898 | 0000433
2899 | 0001063
2900 | 0004041
2901 | 0007895
2902 | 0005768
2903 | 0008791
2904 | 0006034
2905 | 0008966
2906 | 0001716
2907 | 0006365
2908 | 0001550
2909 | 0000044
2910 | 0003400
2911 | 0008810
2912 | 0008158
2913 | 0006488
2914 | 0005707
2915 | 0000039
2916 | 0000754
2917 | 0001125
2918 | 0007345
2919 | 0000283
2920 | 0006905
2921 | 0003896
2922 | 0009276
2923 | 0007382
2924 | 0003842
2925 | 0003975
2926 | 0000984
2927 | 0003550
2928 | 0007863
2929 | 0002301
2930 | 0001694
2931 | 0004007
2932 | 0003249
2933 | 0006494
2934 | 0006780
2935 | 0006719
2936 | 0007083
2937 | 0001808
2938 | 0004863
2939 | 0000106
2940 | 0004053
2941 | 0005735
2942 | 0006353
2943 | 0000986
2944 | 0000100
2945 | 0000975
2946 | 0003342
2947 | 0003990
2948 | 0008420
2949 | 0001051
2950 | 0000636
2951 | 0007098
2952 | 0002201
2953 | 0002885
2954 | 0001679
2955 | 0003291
2956 | 0006984
2957 | 0005784
2958 | 0007775
2959 | 0008538
2960 | 0005850
2961 | 0000910
2962 | 0006682
2963 | 0006106
2964 | 0009081
2965 | 0008106
2966 | 0004138
2967 | 0008808
2968 | 0000355
2969 | 0006906
2970 | 0008496
2971 | 0005530
2972 | 0005365
2973 | 0005643
2974 | 0005370
2975 | 0006301
2976 | 0001222
2977 | 0000376
2978 | 0007016
2979 | 0008356
2980 | 0003038
2981 | 0007981
2982 | 0008408
2983 | 0009310
2984 | 0002532
2985 | 0007589
2986 | 0004525
2987 | 0002306
2988 | 0008404
2989 | 0007397
2990 | 0006964
2991 | 0000659
2992 | 0000227
2993 | 0000308
2994 | 0000461
2995 | 0008619
2996 | 0007254
2997 | 0002496
2998 | 0006308
2999 | 0001204
3000 | 0004204
3001 | 0002998
3002 | 0001796
3003 | 0001420
3004 | 0004089
3005 | 0008979
3006 | 0007622
3007 | 0000526
3008 | 0004785
3009 | 0000022
3010 | 0000049
3011 | 0007516
3012 | 0005455
3013 | 0009215
3014 | 0008789
3015 | 0007732
3016 | 0008236
3017 | 0005378
3018 | 0001986
3019 | 0005084
3020 | 0005672
3021 | 0008583
3022 | 0007777
3023 | 0001224
3024 | 0003281
3025 | 0001088
3026 | 0002690
3027 | 0003475
3028 | 0003705
3029 | 0008608
3030 | 0003198
3031 | 0004597
3032 | 0001343
3033 | 0004725
3034 | 0004978
3035 | 0002965
3036 | 0001025
3037 | 0006534
3038 | 0006067
3039 | 0002243
3040 | 0005518
3041 | 0003991
3042 | 0004842
3043 | 0000764
3044 | 0001981
3045 | 0001127
3046 | 0003286
3047 | 0003565
3048 | 0009198
3049 | 0003380
3050 | 0002292
3051 | 0001049
3052 | 0008684
3053 | 0006123
3054 | 0006151
3055 | 0008290
3056 | 0000774
3057 | 0007788
3058 | 0005258
3059 | 0009117
3060 | 0005464
3061 | 0001442
3062 | 0006976
3063 | 0006216
3064 | 0000170
3065 | 0001145
3066 | 0008748
3067 | 0000602
3068 | 0001983
3069 | 0002392
3070 | 0008070
3071 | 0008283
3072 | 0004166
3073 | 0005631
3074 | 0001952
3075 | 0005728
3076 | 0004533
3077 | 0001855
3078 | 0002741
3079 | 0008884
3080 | 0002213
3081 | 0008307
3082 | 0006936
3083 | 0006874
3084 | 0005840
3085 | 0007855
3086 | 0004509
3087 | 0004747
3088 | 0005148
3089 | 0000224
3090 | 0002996
3091 | 0008743
3092 | 0008965
3093 | 0009173
3094 | 0005889
3095 | 0005431
3096 | 0001480
3097 | 0006378
3098 | 0002786
3099 | 0001078
3100 | 0009152
3101 | 0001966
3102 | 0000069
3103 | 0004592
3104 | 0006503
3105 | 0000311
3106 | 0000842
3107 | 0001331
3108 | 0008918
3109 | 0007911
3110 | 0006603
3111 | 0009139
3112 | 0009221
3113 | 0002683
3114 | 0001825
3115 | 0005126
3116 | 0002539
3117 | 0005494
3118 | 0006185
3119 | 0003382
3120 | 0000352
3121 | 0002926
3122 | 0005809
3123 | 0001757
3124 | 0001413
3125 | 0001293
3126 | 0006914
3127 | 0004801
3128 | 0000270
3129 | 0006601
3130 | 0005045
3131 | 0007331
3132 | 0006252
3133 | 0007725
3134 | 0005583
3135 | 0004439
3136 | 0003372
3137 | 0000262
3138 | 0006709
3139 | 0001283
3140 | 0009061
3141 | 0006313
3142 | 0003862
3143 | 0005893
3144 | 0001866
3145 | 0008857
3146 | 0008033
3147 | 0003889
3148 | 0003368
3149 | 0004551
3150 | 0001312
3151 | 0001565
3152 | 0004211
3153 | 0004178
3154 | 0008763
3155 | 0001875
3156 | 0000554
3157 | 0006407
3158 | 0003340
3159 | 0001544
3160 | 0008025
3161 | 0004326
3162 | 0003633
3163 | 0003789
3164 | 0008488
3165 | 0009279
3166 | 0001847
3167 | 0000244
3168 | 0004892
3169 | 0005119
3170 | 0000812
3171 | 0009295
3172 | 0007838
3173 | 0004059
3174 | 0006886
3175 | 0008152
3176 | 0006257
3177 | 0002608
3178 | 0004848
3179 | 0002053
3180 | 0001241
3181 | 0000073
3182 | 0003909
3183 | 0007328
3184 | 0005073
3185 | 0001745
3186 | 0006519
3187 | 0008296
3188 | 0006107
3189 | 0000288
3190 | 0008312
3191 | 0008116
3192 | 0005147
3193 | 0001893
3194 | 0001128
3195 | 0008327
3196 | 0006226
3197 | 0006640
3198 | 0003602
3199 | 0001787
3200 | 0004095
3201 | 0006049
3202 | 0000811
3203 | 0003289
3204 | 0004861
3205 | 0005991
3206 | 0003460
3207 | 0006940
3208 | 0007333
3209 | 0005060
3210 | 0001046
3211 | 0004427
3212 | 0000868
3213 | 0002267
3214 | 0005348
3215 | 0005039
3216 | 0003671
3217 | 0006820
3218 | 0005611
3219 | 0008282
3220 | 0001315
3221 | 0005856
3222 | 0003604
3223 | 0006395
3224 | 0000182
3225 | 0001016
3226 | 0000046
3227 | 0000677
3228 | 0002214
3229 | 0002682
3230 | 0005364
3231 | 0006058
3232 | 0009182
3233 | 0005230
3234 | 0005425
3235 | 0003019
3236 | 0006251
3237 | 0005499
3238 | 0004935
3239 | 0007722
3240 | 0009029
3241 | 0000830
3242 | 0006643
3243 | 0003618
3244 | 0009178
3245 | 0003153
3246 | 0007803
3247 | 0000273
3248 | 0003256
3249 | 0003187
3250 | 0002802
3251 | 0008136
3252 | 0005300
3253 | 0003907
3254 | 0004860
3255 | 0007916
3256 | 0000686
3257 | 0005926
3258 | 0006048
3259 | 0008868
3260 | 0003022
3261 | 0004456
3262 | 0002101
3263 | 0006843
3264 | 0007105
3265 | 0001454
3266 | 0008448
3267 | 0005782
3268 | 0008440
3269 | 0008388
3270 | 0002027
3271 | 0001846
3272 | 0005739
3273 | 0008367
3274 | 0008500
3275 | 0008552
3276 | 0008206
3277 | 0009291
3278 | 0002109
3279 | 0005861
3280 | 0008713
3281 | 0002984
3282 | 0003404
3283 | 0007758
3284 | 0005367
3285 | 0001140
3286 | 0001906
3287 | 0002040
3288 | 0007918
3289 | 0004528
3290 | 0007015
3291 | 0006507
3292 | 0009251
3293 | 0000347
3294 | 0000712
3295 | 0006989
3296 | 0000240
3297 | 0003704
3298 | 0003065
3299 | 0009069
3300 | 0001097
3301 | 0004487
3302 | 0004667
3303 | 0002523
3304 | 0002932
3305 | 0008584
3306 | 0006126
3307 | 0002982
3308 | 0003744
3309 | 0003992
3310 | 0007673
3311 | 0007844
3312 | 0009091
3313 | 0001044
3314 | 0007796
3315 | 0002789
3316 | 0004577
3317 | 0009114
3318 | 0007914
3319 | 0007319
3320 | 0008024
3321 | 0002517
3322 | 0000791
3323 | 0008730
3324 | 0001324
3325 | 0001166
3326 | 0008759
3327 | 0001019
3328 | 0004933
3329 | 0008387
3330 | 0005812
3331 | 0006093
3332 | 0003825
3333 | 0003206
3334 | 0003689
3335 | 0006891
3336 | 0005706
3337 | 0001237
3338 | 0002829
3339 | 0001199
3340 | 0005429
3341 | 0006207
3342 | 0005504
3343 | 0003199
3344 | 0000360
3345 | 0001404
3346 | 0001971
3347 | 0005362
3348 | 0000332
3349 | 0006116
3350 | 0004225
3351 | 0008662
3352 | 0003325
3353 | 0005936
3354 | 0002547
3355 | 0006928
3356 | 0009121
3357 | 0007526
3358 | 0002436
3359 | 0005459
3360 | 0000739
3361 | 0004005
3362 | 0000836
3363 | 0006376
3364 | 0006343
3365 | 0006735
3366 | 0000770
3367 | 0000041
3368 | 0008701
3369 | 0004659
3370 | 0006320
3371 | 0000963
3372 | 0007977
3373 | 0004296
3374 | 0002242
3375 | 0000815
3376 | 0008656
3377 | 0007968
3378 | 0006661
3379 | 0006782
3380 | 0000674
3381 | 0007821
3382 | 0007007
3383 | 0008156
3384 | 0000780
3385 | 0003137
3386 | 0002079
3387 | 0001353
3388 | 0002087
3389 | 0003822
3390 | 0003751
3391 | 0003980
3392 | 0001363
3393 | 0005389
3394 | 0008428
3395 | 0003259
3396 | 0007522
3397 | 0001255
3398 | 0005951
3399 | 0006761
3400 | 0006235
3401 |
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
1 | """
2 | Mask R-CNN
3 | Common utility functions and classes.
4 |
5 | Copyright (c) 2017 Matterport, Inc.
6 | Licensed under the MIT License (see LICENSE for details)
7 | Written by Waleed Abdulla
8 | """
9 |
10 | import sys
11 | import os
12 | import math
13 | import random
14 | import numpy as np
15 | import scipy.misc
16 | import scipy.ndimage
17 | import skimage.color
18 | import skimage.io
19 | import torch
20 |
21 | ############################################################
22 | # Bounding Boxes
23 | ############################################################
24 |
25 | def extract_bboxes(mask):
26 | """Compute bounding boxes from masks.
27 | mask: [height, width, num_instances]. Mask pixels are either 1 or 0.
28 |
29 | Returns: bbox array [num_instances, (y1, x1, y2, x2)].
30 | """
31 | boxes = np.zeros([mask.shape[-1], 4], dtype=np.int32)
32 | for i in range(mask.shape[-1]):
33 | m = mask[:, :, i]
34 | # Bounding box.
35 | horizontal_indicies = np.where(np.any(m, axis=0))[0]
36 | vertical_indicies = np.where(np.any(m, axis=1))[0]
37 | if horizontal_indicies.shape[0]:
38 | x1, x2 = horizontal_indicies[[0, -1]]
39 | y1, y2 = vertical_indicies[[0, -1]]
40 | # x2 and y2 should not be part of the box. Increment by 1.
41 | x2 += 1
42 | y2 += 1
43 | else:
44 | # No mask for this instance. Might happen due to
45 | # resizing or cropping. Set bbox to zeros
46 | x1, x2, y1, y2 = 0, 0, 0, 0
47 | boxes[i] = np.array([y1, x1, y2, x2])
48 | return boxes.astype(np.int32)
49 |
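# Example (a minimal sketch with a made-up 4x4 mask): a single instance
# occupying rows 1-2 and columns 1-3 yields the half-open box [1, 1, 3, 4]:
#
#   m = np.zeros([4, 4, 1], dtype=np.int32)
#   m[1:3, 1:4, 0] = 1
#   extract_bboxes(m)  # -> array([[1, 1, 3, 4]], dtype=int32)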
50 |
51 | def compute_iou(box, boxes, box_area, boxes_area):
52 | """Calculates IoU of the given box with the array of the given boxes.
53 | box: 1D vector [y1, x1, y2, x2]
54 | boxes: [boxes_count, (y1, x1, y2, x2)]
55 | box_area: float. the area of 'box'
56 | boxes_area: array of length boxes_count.
57 |
58 | Note: the areas are passed in rather than calculated here for
59 | efficiency. Calculate once in the caller to avoid duplicate work.
60 | """
61 | # Calculate intersection areas
62 | y1 = np.maximum(box[0], boxes[:, 0])
63 | y2 = np.minimum(box[2], boxes[:, 2])
64 | x1 = np.maximum(box[1], boxes[:, 1])
65 | x2 = np.minimum(box[3], boxes[:, 3])
66 | intersection = np.maximum(x2 - x1, 0) * np.maximum(y2 - y1, 0)
67 | union = box_area + boxes_area[:] - intersection[:]
68 | iou = intersection / union
69 | return iou
70 |
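# Example (made-up coordinates): two 10x10 boxes overlapping on a 5x5
# region give IoU = 25 / (100 + 100 - 25) ~= 0.143:
#
#   box = np.array([0, 0, 10, 10])
#   boxes = np.array([[0, 0, 10, 10], [5, 5, 15, 15]])
#   areas = np.array([100.0, 100.0])
#   compute_iou(box, boxes, 100.0, areas)  # -> array([1.0, 0.1428...])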
71 |
72 | def compute_overlaps(boxes1, boxes2):
73 | """Computes IoU overlaps between two sets of boxes.
74 | boxes1, boxes2: [N, (y1, x1, y2, x2)].
75 |
76 | For better performance, pass the largest set first and the smaller second.
77 | """
78 | # Areas of anchors and GT boxes
79 | area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
80 | area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
81 |
82 | # Compute overlaps to generate matrix [boxes1 count, boxes2 count]
83 | # Each cell contains the IoU value.
84 | overlaps = np.zeros((boxes1.shape[0], boxes2.shape[0]))
85 | for i in range(overlaps.shape[1]):
86 | box2 = boxes2[i]
87 | overlaps[:, i] = compute_iou(box2, boxes1, area2[i], area1)
88 | return overlaps
89 |
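# Example (made-up boxes): the result is an [N1, N2] matrix where cell
# (i, j) holds IoU(boxes1[i], boxes2[j]):
#
#   anchors = np.array([[0, 0, 10, 10], [10, 10, 20, 20]])
#   gt = np.array([[0, 0, 10, 10]])
#   compute_overlaps(anchors, gt)  # -> array([[1.0], [0.0]])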
90 | def box_refinement(box, gt_box):
91 | """Compute refinement needed to transform box to gt_box.
92 | box and gt_box are [N, (y1, x1, y2, x2)]
93 | """
94 |
95 | height = box[:, 2] - box[:, 0]
96 | width = box[:, 3] - box[:, 1]
97 | center_y = box[:, 0] + 0.5 * height
98 | center_x = box[:, 1] + 0.5 * width
99 |
100 | gt_height = gt_box[:, 2] - gt_box[:, 0]
101 | gt_width = gt_box[:, 3] - gt_box[:, 1]
102 | gt_center_y = gt_box[:, 0] + 0.5 * gt_height
103 | gt_center_x = gt_box[:, 1] + 0.5 * gt_width
104 |
105 | dy = (gt_center_y - center_y) / height
106 | dx = (gt_center_x - center_x) / width
107 | dh = torch.log(gt_height / height)
108 | dw = torch.log(gt_width / width)
109 |
110 | result = torch.stack([dy, dx, dh, dw], dim=1)
111 | return result
112 |
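# Example (made-up boxes): a proposal that matches its ground-truth box in
# size but is offset by 1 pixel in y and x produces the regression targets
# (dy, dx, dh, dw) = (0.1, 0.1, 0.0, 0.0):
#
#   box = torch.Tensor([[0., 0., 10., 10.]])
#   gt_box = torch.Tensor([[1., 1., 11., 11.]])
#   box_refinement(box, gt_box)  # -> tensor([[0.1, 0.1, 0.0, 0.0]])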
113 |
114 |
115 | ############################################################
116 | # Dataset
117 | ############################################################
118 |
119 | class Dataset(object):
120 | """The base class for dataset classes.
121 | To use it, create a new class that adds functions specific to the dataset
122 | you want to use. For example:
123 |
124 | class CatsAndDogsDataset(Dataset):
125 | def load_cats_and_dogs(self):
126 | ...
127 | def load_mask(self, image_id):
128 | ...
129 | def image_reference(self, image_id):
130 | ...
131 |
132 | See COCODataset and ShapesDataset as examples.
133 | """
134 |
135 | def __init__(self, class_map=None):
136 | self._image_ids = []
137 | self.image_info = []
138 | # Background is always the first class
139 | self.class_info = [{"source": "", "id": 0, "name": "BG"}]
140 | self.source_class_ids = {}
141 |
142 | def add_class(self, source, class_id, class_name):
143 | assert "." not in source, "Source name cannot contain a dot"
144 | # Does the class exist already?
145 | for info in self.class_info:
146 | if info['source'] == source and info["id"] == class_id:
147 | # source.class_id combination already available, skip
148 | return
149 | # Add the class
150 | self.class_info.append({
151 | "source": source,
152 | "id": class_id,
153 | "name": class_name,
154 | })
155 |
156 | def add_image(self, source, image_id, path, **kwargs):
157 | image_info = {
158 | "id": image_id,
159 | "source": source,
160 | "path": path,
161 | }
162 | image_info.update(kwargs)
163 | self.image_info.append(image_info)
164 |
165 | def image_reference(self, image_id):
166 | """Return a link to the image in its source Website or details about
167 | the image that help looking it up or debugging it.
168 |
169 | Override for your dataset, but pass to this function
170 | if you encounter images not in your dataset.
171 | """
172 | return ""
173 |
174 | def prepare(self, class_map=None):
175 | """Prepares the Dataset class for use.
176 |
177 | TODO: class map is not supported yet. When done, it should handle mapping
178 | classes from different datasets to the same class ID.
179 | """
180 | def clean_name(name):
181 | """Returns a shorter version of object names for cleaner display."""
182 | return ",".join(name.split(",")[:1])
183 |
184 | # Build (or rebuild) everything else from the info dicts.
185 | self.num_classes = len(self.class_info)
186 | self.class_ids = np.arange(self.num_classes)
187 | self.class_names = [clean_name(c["name"]) for c in self.class_info]
188 | self.num_images = len(self.image_info)
189 | self._image_ids = np.arange(self.num_images)
190 |
191 | self.class_from_source_map = {"{}.{}".format(info['source'], info['id']): id
192 | for info, id in zip(self.class_info, self.class_ids)}
193 |
194 | # Map sources to class_ids they support
195 | self.sources = list(set([i['source'] for i in self.class_info]))
196 | self.source_class_ids = {}
197 | # Loop over datasets
198 | for source in self.sources:
199 | self.source_class_ids[source] = []
200 | # Find classes that belong to this dataset
201 | for i, info in enumerate(self.class_info):
202 | # Include BG class in all datasets
203 | if i == 0 or source == info['source']:
204 | self.source_class_ids[source].append(i)
205 |
206 | def map_source_class_id(self, source_class_id):
207 | """Takes a source class ID and returns the int class ID assigned to it.
208 |
209 | For example:
210 | dataset.map_source_class_id("coco.12") -> 23
211 | """
212 | return self.class_from_source_map[source_class_id]
213 |
214 | def get_source_class_id(self, class_id, source):
215 | """Map an internal class ID to the corresponding class ID in the source dataset."""
216 | info = self.class_info[class_id]
217 | assert info['source'] == source
218 | return info['id']
219 |
220 | def append_data(self, class_info, image_info):
221 | self.external_to_class_id = {}
222 | for i, c in enumerate(self.class_info):
223 | for ds, id in c["map"]:
224 | self.external_to_class_id[ds + str(id)] = i
225 |
226 | # Map external image IDs to internal ones.
227 | self.external_to_image_id = {}
228 | for i, info in enumerate(self.image_info):
229 | self.external_to_image_id[info["ds"] + str(info["id"])] = i
230 |
231 | @property
232 | def image_ids(self):
233 | return self._image_ids
234 |
235 | def source_image_link(self, image_id):
236 | """Returns the path or URL to the image.
237 | Override this to return a URL to the image if it's available online for easy
238 | debugging.
239 | """
240 | return self.image_info[image_id]["path"]
241 |
242 | def load_image(self, image_id):
243 | """Load the specified image and return a [H,W,3] Numpy array.
244 | """
245 | # Load image
246 | image = skimage.io.imread(self.image_info[image_id]['path'])
247 | # If grayscale, convert to RGB for consistency.
248 | if image.ndim != 3:
249 | image = skimage.color.gray2rgb(image)
250 | return image
251 |
252 | def load_mask(self, image_id):
253 | """Load instance masks for the given image.
254 |
255 | Different datasets use different ways to store masks. Override this
256 | method to load instance masks and return them in the form of an
257 | array of binary masks of shape [height, width, instances].
258 |
259 | Returns:
260 | masks: A bool array of shape [height, width, instance count] with
261 | a binary mask per instance.
262 | class_ids: a 1D array of class IDs of the instance masks.
263 | """
264 | # Override this function to load a mask from your dataset.
265 | # Otherwise, it returns an empty mask.
266 | mask = np.empty([0, 0, 0])
267 | class_ids = np.empty([0], np.int32)
268 | return mask, class_ids
269 |
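# A minimal subclass sketch (the "pets" source name, class IDs, and image
# directory below are hypothetical; load_mask is left to the base class):
#
#   class CatsAndDogsDataset(Dataset):
#       def load_cats_and_dogs(self, image_dir):
#           self.add_class("pets", 1, "cat")
#           self.add_class("pets", 2, "dog")
#           for i, fname in enumerate(sorted(os.listdir(image_dir))):
#               self.add_image("pets", image_id=i,
#                              path=os.path.join(image_dir, fname))
#
#   dataset = CatsAndDogsDataset()
#   dataset.load_cats_and_dogs("/path/to/images")
#   dataset.prepare()  # builds class_ids, class_names, image_ids, ...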
270 |
271 | def resize_image(image, min_dim=None, max_dim=None, padding=False):
272 | """
273 | Resizes an image keeping the aspect ratio.
274 |
275 | min_dim: if provided, resizes the image such that its smaller
276 | dimension == min_dim
277 | max_dim: if provided, ensures that the image's longest side doesn't
278 | exceed this value.
279 | padding: If true, pads the image with zeros so its size is max_dim x max_dim
280 |
281 | Returns:
282 | image: the resized image
283 | window: (y1, x1, y2, x2). If max_dim is provided, padding might
284 | be inserted in the returned image. If so, this window gives the
285 | coordinates of the actual image inside the padded canvas
286 | (excluding the padding). The x2, y2 pixels are not included.
287 | scale: The scale factor used to resize the image
288 | padding: Padding added to the image [(top, bottom), (left, right), (0, 0)]
289 | """
290 | # Default window (y1, x1, y2, x2) and default scale == 1.
291 | h, w = image.shape[:2]
292 | window = (0, 0, h, w)
293 | scale = 1
294 |
295 | # Scale?
296 | if min_dim:
297 | # Scale up but not down
298 | scale = max(1, min_dim / min(h, w))
299 | # Does it exceed max dim?
300 | if max_dim:
301 | image_max = max(h, w)
302 | if round(image_max * scale) > max_dim:
303 | scale = max_dim / image_max
304 | # Resize image and mask
305 | if scale != 1:
306 | image = scipy.misc.imresize(
307 | image, (round(h * scale), round(w * scale)))
308 | # Need padding?
309 | if padding:
310 | # Get new height and width
311 | h, w = image.shape[:2]
312 | top_pad = (max_dim - h) // 2
313 | bottom_pad = max_dim - h - top_pad
314 | left_pad = (max_dim - w) // 2
315 | right_pad = max_dim - w - left_pad
316 | padding = [(top_pad, bottom_pad), (left_pad, right_pad), (0, 0)]
317 | image = np.pad(image, padding, mode='constant', constant_values=0)
318 | window = (top_pad, left_pad, h + top_pad, w + left_pad)
319 | return image, window, scale, padding
320 |
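# Example (hypothetical 600x400 input with min_dim=800, max_dim=1024,
# padding=True): min_dim first suggests scale = 800/400 = 2.0, but since
# round(600 * 2.0) > 1024 the scale is reduced to 1024/600 ~= 1.71, giving
# a 1024x683 image that is then zero-padded to 1024x1024; `window` marks
# where the real pixels sit inside the padded canvas:
#
#   image, window, scale, padding = resize_image(
#       image, min_dim=800, max_dim=1024, padding=True)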
321 |
322 | def resize_mask(mask, scale, padding):
323 | """Resizes a mask using the given scale and padding.
324 | Typically, you get the scale and padding from resize_image() to
325 | ensure both the image and the mask are resized consistently.
326 |
327 | scale: mask scaling factor
328 | padding: Padding to add to the mask in the form
329 | [(top, bottom), (left, right), (0, 0)]
330 | """
331 | h, w = mask.shape[:2]
332 | mask = scipy.ndimage.zoom(mask, zoom=[scale, scale, 1], order=0)
333 | mask = np.pad(mask, padding, mode='constant', constant_values=0)
334 | return mask
335 |
336 |
337 | def minimize_mask(bbox, mask, mini_shape):
338 | """Resize masks to a smaller version to cut memory load.
339 | Mini-masks can then be resized back to image scale using expand_mask().
340 |
341 | See inspect_data.ipynb notebook for more details.
342 | """
343 | mini_mask = np.zeros(mini_shape + (mask.shape[-1],), dtype=bool)
344 | for i in range(mask.shape[-1]):
345 | m = mask[:, :, i]
346 | y1, x1, y2, x2 = bbox[i][:4]
347 | m = m[y1:y2, x1:x2]
348 | if m.size == 0:
349 | raise Exception("Invalid bounding box with area of zero")
350 | m = scipy.misc.imresize(m.astype(float), mini_shape, interp='bilinear')
351 | mini_mask[:, :, i] = np.where(m >= 128, 1, 0)
352 | return mini_mask
353 |
354 |
355 | def expand_mask(bbox, mini_mask, image_shape):
356 | """Resizes mini masks back to image size. Reverses the change
357 | of minimize_mask().
358 |
359 | See inspect_data.ipynb notebook for more details.
360 | """
361 | mask = np.zeros(image_shape[:2] + (mini_mask.shape[-1],), dtype=bool)
362 | for i in range(mask.shape[-1]):
363 | m = mini_mask[:, :, i]
364 | y1, x1, y2, x2 = bbox[i][:4]
365 | h = y2 - y1
366 | w = x2 - x1
367 | m = scipy.misc.imresize(m.astype(float), (h, w), interp='bilinear')
368 | mask[y1:y2, x1:x2, i] = np.where(m >= 128, 1, 0)
369 | return mask
370 |
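# Illustrative round trip: mini-masks trade mask fidelity for memory. The
# (56, 56) shape is an example mini-mask size (config.MINI_MASK_SHAPE).
#
#   mini = minimize_mask(bbox, mask, mini_shape=(56, 56))    # [56, 56, N]
#   full = expand_mask(bbox, mini, image_shape=image.shape)  # [H, W, N]
#
# The round trip is lossy: detail finer than the 56x56 grid is discarded.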
371 |
372 | # TODO: Build and use this function to reduce code duplication
373 | def mold_mask(mask, config):
374 | pass
375 |
376 |
377 | def unmold_mask(mask, bbox, image_shape):
378 | """Converts a mask generated by the neural network into a format similar
379 | to it's original shape.
380 | mask: [height, width] of type float. A small, typically 28x28 mask.
381 | bbox: [y1, x1, y2, x2]. The box to fit the mask in.
382 |
383 | Returns a binary mask with the same size as the original image.
384 | """
385 | threshold = 0.5
386 | y1, x1, y2, x2 = bbox
387 | mask = scipy.misc.imresize(
388 | mask, (y2 - y1, x2 - x1), interp='bilinear').astype(np.float32) / 255.0
389 | mask = np.where(mask >= threshold, 1, 0).astype(np.uint8)
390 |
391 | # Put the mask in the right location.
392 | full_mask = np.zeros(image_shape[:2], dtype=np.uint8)
393 | full_mask[y1:y2, x1:x2] = mask
394 | return full_mask
395 |
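# Illustrative usage, pasting a 28x28 soft mask from the mask head back into
# the full image:
#
#   full_mask = unmold_mask(soft_mask, (y1, x1, y2, x2), image.shape)
#
# The mask is resized to the box, thresholded at 0.5, and written into
# [y1:y2, x1:x2] of an otherwise-zero [H, W] uint8 array.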
396 |
397 | ############################################################
398 | # Anchors
399 | ############################################################
400 |
401 | def generate_anchors(scales, ratios, shape, feature_stride, anchor_stride):
402 | """
403 | scales: 1D array of anchor sizes in pixels. Example: [32, 64, 128]
404 | ratios: 1D array of anchor ratios of width/height. Example: [0.5, 1, 2]
405 | shape: [height, width] spatial shape of the feature map over which
406 | to generate anchors.
407 | feature_stride: Stride of the feature map relative to the image in pixels.
408 | anchor_stride: Stride of anchors on the feature map. For example, if the
409 | value is 2 then generate anchors for every other feature map pixel.
410 | """
411 | # Get all combinations of scales and ratios
412 | scales, ratios = np.meshgrid(np.array(scales), np.array(ratios))
413 | scales = scales.flatten()
414 | ratios = ratios.flatten()
415 |
416 | # Enumerate heights and widths from scales and ratios
417 | heights = scales / np.sqrt(ratios)
418 | widths = scales * np.sqrt(ratios)
419 |
420 | # Enumerate shifts in feature space
421 | shifts_y = np.arange(0, shape[0], anchor_stride) * feature_stride
422 | shifts_x = np.arange(0, shape[1], anchor_stride) * feature_stride
423 | shifts_x, shifts_y = np.meshgrid(shifts_x, shifts_y)
424 |
425 | # Enumerate combinations of shifts, widths, and heights
426 | box_widths, box_centers_x = np.meshgrid(widths, shifts_x)
427 | box_heights, box_centers_y = np.meshgrid(heights, shifts_y)
428 |
429 | # Reshape to get a list of (y, x) and a list of (h, w)
430 | box_centers = np.stack(
431 | [box_centers_y, box_centers_x], axis=2).reshape([-1, 2])
432 | box_sizes = np.stack([box_heights, box_widths], axis=2).reshape([-1, 2])
433 |
434 | # Convert to corner coordinates (y1, x1, y2, x2)
435 | boxes = np.concatenate([box_centers - 0.5 * box_sizes,
436 | box_centers + 0.5 * box_sizes], axis=1)
437 | return boxes
438 |
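# Worked example: with scales=[32], ratios=[0.5, 1, 2], shape=[4, 4],
# feature_stride=16 and anchor_stride=1, generate_anchors() returns
# 4 * 4 * 3 = 48 boxes. The ratio-0.5 anchor is 32/sqrt(0.5) ~= 45.3 tall
# and 32*sqrt(0.5) ~= 22.6 wide, so every anchor keeps an area of roughly
# 32^2 regardless of its aspect ratio.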
439 |
440 | def generate_pyramid_anchors(scales, ratios, feature_shapes, feature_strides,
441 | anchor_stride):
442 | """Generate anchors at different levels of a feature pyramid. Each scale
443 | is associated with a level of the pyramid, but each ratio is used in
444 | all levels of the pyramid.
445 |
446 | Returns:
447 | anchors: [N, (y1, x1, y2, x2)]. All generated anchors in one array. Sorted
448 | with the same order of the given scales. So, anchors of scale[0] come
449 | first, then anchors of scale[1], and so on.
450 | """
451 | # Anchors
452 | # [anchor_count, (y1, x1, y2, x2)]
453 | anchors = []
454 | for i in range(len(scales)):
455 | anchors.append(generate_anchors(scales[i], ratios, feature_shapes[i],
456 | feature_strides[i], anchor_stride))
457 | return np.concatenate(anchors, axis=0)
458 |
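# Worked example: for a 1024x1024 input with the usual FPN setup (scales
# 32, 64, 128, 256, 512 on feature maps of side 256, 128, 64, 32, 16, i.e.
# strides 4 through 64) and 3 ratios, the total anchor count is
# 3 * (256^2 + 128^2 + 64^2 + 32^2 + 16^2) = 261,888.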
459 |
460 | ########################
461 |
462 | """
463 | Mask R-CNN
464 | Common utility functions and classes.
465 |
466 | Copyright (c) 2017 Matterport, Inc.
467 | Licensed under the MIT License (see LICENSE for details)
468 | Written by Waleed Abdulla
469 | """
470 |
471 |
472 |
473 | # URL from which to download the latest COCO trained weights
474 | COCO_MODEL_URL = "https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5"
475 |
476 |
477 | ############################################################
478 | # Bounding Boxes
479 | ############################################################
480 |
481 |
482 | def compute_overlaps_masks(masks1, masks2):
483 | '''Computes IoU overlaps between two sets of masks.
484 | masks1, masks2: [Height, Width, instances]
485 | '''
486 | # flatten masks
487 | masks1 = np.reshape(masks1 > .5, (-1, masks1.shape[-1])).astype(np.float32)
488 | masks2 = np.reshape(masks2 > .5, (-1, masks2.shape[-1])).astype(np.float32)
489 | area1 = np.sum(masks1, axis=0)
490 | area2 = np.sum(masks2, axis=0)
491 |
492 | # intersections and union
493 | intersections = np.dot(masks1.T, masks2)
494 | union = area1[:, None] + area2[None, :] - intersections
495 | overlaps = intersections / union
496 |
497 | return overlaps
498 |
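# Tiny illustrative example with two single-instance mask stacks of
# shape [2, 2, 1]:
#
#   a = np.array([[1, 1], [0, 0]]).reshape(2, 2, 1)  # area 2
#   b = np.array([[1, 0], [1, 0]]).reshape(2, 2, 1)  # area 2
#   compute_overlaps_masks(a, b)  # intersection 1, union 3 -> [[0.3333]]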
499 |
500 | def non_max_suppression(boxes, scores, threshold):
501 | """Performs non-maximum supression and returns indicies of kept boxes.
502 | boxes: [N, (y1, x1, y2, x2)]. Notice that (y2, x2) lays outside the box.
503 | scores: 1-D array of box scores.
504 | threshold: Float. IoU threshold to use for filtering.
505 | """
506 | assert boxes.shape[0] > 0
507 | if boxes.dtype.kind != "f":
508 | boxes = boxes.astype(np.float32)
509 |
510 | # Compute box areas
511 | y1 = boxes[:, 0]
512 | x1 = boxes[:, 1]
513 | y2 = boxes[:, 2]
514 | x2 = boxes[:, 3]
515 | area = (y2 - y1) * (x2 - x1)
516 |
517 | # Get indices of boxes sorted by scores (highest first)
518 | ixs = scores.argsort()[::-1]
519 |
520 | pick = []
521 | while len(ixs) > 0:
522 | # Pick top box and add its index to the list
523 | i = ixs[0]
524 | pick.append(i)
525 | # Compute IoU of the picked box with the rest
526 | iou = compute_iou(boxes[i], boxes[ixs[1:]], area[i], area[ixs[1:]])
527 | # Identify boxes with IoU over the threshold. This
528 | # returns indices into ixs[1:], so add 1 to get
529 | # indices into ixs.
530 | remove_ixs = np.where(iou > threshold)[0] + 1
531 | # Remove indices of the picked and overlapped boxes.
532 | ixs = np.delete(ixs, remove_ixs)
533 | ixs = np.delete(ixs, 0)
534 | return np.array(pick, dtype=np.int32)
535 |
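# Illustrative example: three boxes, the first two overlapping with IoU 0.81:
#
#   boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]])
#   scores = np.array([0.9, 0.8, 0.7])
#   non_max_suppression(boxes, scores, threshold=0.5)  # -> [0, 2]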
536 |
537 | def apply_box_deltas(boxes, deltas):
538 | """Applies the given deltas to the given boxes.
539 | boxes: [N, (y1, x1, y2, x2)]. Note that (y2, x2) is outside the box.
540 | deltas: [N, (dy, dx, log(dh), log(dw))]
541 | """
542 | boxes = boxes.astype(np.float32)
543 | # Convert to y, x, h, w
544 | height = boxes[:, 2] - boxes[:, 0]
545 | width = boxes[:, 3] - boxes[:, 1]
546 | center_y = boxes[:, 0] + 0.5 * height
547 | center_x = boxes[:, 1] + 0.5 * width
548 | # Apply deltas
549 | center_y += deltas[:, 0] * height
550 | center_x += deltas[:, 1] * width
551 | height *= np.exp(deltas[:, 2])
552 | width *= np.exp(deltas[:, 3])
553 | # Convert back to y1, x1, y2, x2
554 | y1 = center_y - 0.5 * height
555 | x1 = center_x - 0.5 * width
556 | y2 = y1 + height
557 | x2 = x1 + width
558 | return np.stack([y1, x1, y2, x2], axis=1)
559 |
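# Worked example: a delta of 0.1 shifts the box by 10% of its size, and
# log-space size deltas of 0 leave width and height unchanged:
#
#   boxes = np.array([[0, 0, 100, 100]], dtype=np.float32)
#   deltas = np.array([[0.1, 0.1, 0.0, 0.0]])
#   apply_box_deltas(boxes, deltas)  # -> [[10., 10., 110., 110.]]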
560 |
561 |
562 |
563 |
564 |
565 | ############################################################
566 | # Miscellaneous
567 | ############################################################
568 |
569 | def trim_zeros(x):
570 | """It's common to have tensors larger than the available data and
571 | pad with zeros. This function removes rows that are all zeros.
572 |
573 | x: [rows, columns].
574 | """
575 | assert len(x.shape) == 2
576 | return x[~np.all(x == 0, axis=1)]
577 |
578 |
579 | def compute_matches(gt_boxes, gt_class_ids, gt_masks,
580 | pred_boxes, pred_class_ids, pred_scores, pred_masks,
581 | iou_threshold=0.5, score_threshold=0.0):
582 | """Finds matches between prediction and ground truth instances.
583 |
584 | Returns:
585 | gt_match: 1-D array. For each GT box it has the index of the matched
586 | predicted box, or -1 if unmatched.
587 | pred_match: 1-D array. For each predicted box, it has the index of
588 | the matched ground truth box, or -1 if unmatched.
589 | overlaps: [pred_boxes, gt_boxes] IoU overlaps.
590 | """
591 | # Trim zero padding
592 | # TODO: cleaner to do zero unpadding upstream
593 | gt_boxes = trim_zeros(gt_boxes)
594 | gt_masks = gt_masks[..., :gt_boxes.shape[0]]
595 | pred_boxes = trim_zeros(pred_boxes)
596 | pred_scores = pred_scores[:pred_boxes.shape[0]]
597 | # Sort predictions by score from high to low
598 | indices = np.argsort(pred_scores)[::-1]
599 | pred_boxes = pred_boxes[indices]
600 | pred_class_ids = pred_class_ids[indices]
601 | pred_scores = pred_scores[indices]
602 | pred_masks = pred_masks[..., indices]
603 |
604 | # Compute IoU overlaps [pred_masks, gt_masks]
605 | overlaps = compute_overlaps_masks(pred_masks, gt_masks)
606 |
607 | # Loop through predictions and find matching ground truth boxes
608 | match_count = 0
609 | pred_match = -1 * np.ones([pred_boxes.shape[0]])
610 | gt_match = -1 * np.ones([gt_boxes.shape[0]])
611 | for i in range(len(pred_boxes)):
612 | # Find best matching ground truth box
613 | # 1. Sort matches by score
614 | sorted_ixs = np.argsort(overlaps[i])[::-1]
615 | # 2. Remove low scores
616 | low_score_idx = np.where(overlaps[i, sorted_ixs] < score_threshold)[0]
617 | if low_score_idx.size > 0:
618 | sorted_ixs = sorted_ixs[:low_score_idx[0]]
619 | # 3. Find the match
620 | for j in sorted_ixs:
621 | # If ground truth box is already matched, go to next one.
622 | # (Matched entries hold a prediction index, which can be 0,
623 | # so compare against the -1 sentinel rather than 0.)
624 | if gt_match[j] > -1:
625 | continue
624 | # If we reach IoU smaller than the threshold, end the loop
625 | iou = overlaps[i, j]
626 | if iou < iou_threshold:
627 | break
628 | # Do we have a match?
629 | if pred_class_ids[i] == gt_class_ids[j]:
630 | match_count += 1
631 | gt_match[j] = i
632 | pred_match[i] = j
633 | break
634 |
635 | return gt_match, pred_match, overlaps
636 |
637 |
638 | def compute_ap(gt_boxes, gt_class_ids, gt_masks,
639 | pred_boxes, pred_class_ids, pred_scores, pred_masks,
640 | iou_threshold=0.5):
641 | """Compute Average Precision at a set IoU threshold (default 0.5).
642 |
643 | Returns:
644 | mAP: Mean Average Precision
645 | precisions: List of precisions at different class score thresholds.
646 | recalls: List of recall values at different class score thresholds.
647 | overlaps: [pred_boxes, gt_boxes] IoU overlaps.
648 | """
649 | # Get matches and overlaps
650 | gt_match, pred_match, overlaps = compute_matches(
651 | gt_boxes, gt_class_ids, gt_masks,
652 | pred_boxes, pred_class_ids, pred_scores, pred_masks,
653 | iou_threshold)
654 |
655 | # Compute precision and recall at each prediction box step
656 | precisions = np.cumsum(pred_match > -1) / (np.arange(len(pred_match)) + 1)
657 | recalls = np.cumsum(pred_match > -1).astype(np.float32) / len(gt_match)
658 |
659 | # Pad with start and end values to simplify the math
660 | precisions = np.concatenate([[0], precisions, [0]])
661 | recalls = np.concatenate([[0], recalls, [1]])
662 |
663 | # Ensure precision values are monotonically decreasing. This way, the
664 | # precision value at each recall threshold is the maximum it can be
665 | # for all following recall thresholds, as specified by the VOC paper.
666 | for i in range(len(precisions) - 2, -1, -1):
667 | precisions[i] = np.maximum(precisions[i], precisions[i + 1])
668 |
669 | # Compute mean AP over recall range
670 | indices = np.where(recalls[:-1] != recalls[1:])[0] + 1
671 | mAP = np.sum((recalls[indices] - recalls[indices - 1]) *
672 | precisions[indices])
673 |
674 | return mAP, precisions, recalls, overlaps
675 |
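# Worked example: with pred_match = [0, -1, 1] over 2 GT instances, the
# running precisions are [1, 1/2, 2/3] and the recalls [1/2, 1/2, 1].
# After padding and the monotone precision envelope,
# AP = 0.5 * 1.0 + 0.5 * (2/3) ~= 0.833.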
676 |
677 | def compute_ap_range(gt_box, gt_class_id, gt_mask,
678 | pred_box, pred_class_id, pred_score, pred_mask,
679 | iou_thresholds=None, verbose=1):
680 | """Compute AP over a range or IoU thresholds. Default range is 0.5-0.95."""
681 | # Default is 0.5 to 0.95 with increments of 0.05
682 | iou_thresholds = iou_thresholds or np.arange(0.5, 1.0, 0.05)
683 |
684 | # Compute AP over range of IoU thresholds
685 | AP = []
686 | for iou_threshold in iou_thresholds:
687 | ap, precisions, recalls, overlaps =\
688 | compute_ap(gt_box, gt_class_id, gt_mask,
689 | pred_box, pred_class_id, pred_score, pred_mask,
690 | iou_threshold=iou_threshold)
691 | if verbose:
692 | print("AP @{:.2f}:\t {:.3f}".format(iou_threshold, ap))
693 | AP.append(ap)
694 | AP = np.array(AP).mean()
695 | if verbose:
696 | print("AP @{:.2f}-{:.2f}:\t {:.3f}".format(
697 | iou_thresholds[0], iou_thresholds[-1], AP))
698 | return AP
699 |
700 |
701 | def compute_recall(pred_boxes, gt_boxes, iou):
702 | """Compute the recall at the given IoU threshold. It's an indication
703 | of how many GT boxes were found by the given prediction boxes.
704 |
705 | pred_boxes: [N, (y1, x1, y2, x2)] in image coordinates
706 | gt_boxes: [N, (y1, x1, y2, x2)] in image coordinates
707 | """
708 | # Measure overlaps
709 | overlaps = compute_overlaps(pred_boxes, gt_boxes)
710 | iou_max = np.max(overlaps, axis=1)
711 | iou_argmax = np.argmax(overlaps, axis=1)
712 | positive_ids = np.where(iou_max >= iou)[0]
713 | matched_gt_boxes = iou_argmax[positive_ids]
714 |
715 | recall = len(set(matched_gt_boxes)) / gt_boxes.shape[0]
716 | return recall, positive_ids
717 |
718 |
719 | # ## Batch Slicing
720 | # Some custom layers support a batch size of 1 only, and require a lot of work
721 | # to support batches greater than 1. A batch-slicing helper slices an input
722 | # tensor across the batch dimension and feeds batches of size 1: effectively,
723 | # an easy way to support batches > 1 quickly with little code modification.
724 | # In the long run, it's more efficient to modify the code to support large
725 | # batches and get rid of the helper. Consider this a temporary solution.
726 |
727 |
728 |
729 |
730 | def norm_boxes(boxes, shape):
731 | """Converts boxes from pixel coordinates to normalized coordinates.
732 | boxes: [N, (y1, x1, y2, x2)] in pixel coordinates
733 | shape: [..., (height, width)] in pixels
734 |
735 | Note: In pixel coordinates (y2, x2) is outside the box. But in normalized
736 | coordinates it's inside the box.
737 |
738 | Returns:
739 | [N, (y1, x1, y2, x2)] in normalized coordinates
740 | """
741 | h, w = shape
742 | scale = np.array([h - 1, w - 1, h - 1, w - 1])
743 | shift = np.array([0, 0, 1, 1])
744 | return np.divide((boxes - shift), scale).astype(np.float32)
745 |
746 |
747 | def denorm_boxes(boxes, shape):
748 | """Converts boxes from normalized coordinates to pixel coordinates.
749 | boxes: [N, (y1, x1, y2, x2)] in normalized coordinates
750 | shape: [..., (height, width)] in pixels
751 |
752 | Note: In pixel coordinates (y2, x2) is outside the box. But in normalized
753 | coordinates it's inside the box.
754 |
755 | Returns:
756 | [N, (y1, x1, y2, x2)] in pixel coordinates
757 | """
758 | h, w = shape
759 | scale = np.array([h - 1, w - 1, h - 1, w - 1])
760 | shift = np.array([0, 0, 1, 1])
761 | return np.around(np.multiply(boxes, scale) + shift).astype(np.int32)
762 |
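# Illustrative round trip for a 1024x1024 image. The full-image box in pixel
# coordinates is [0, 0, 1024, 1024] ((y2, x2) lies outside the box):
#
#   norm_boxes(np.array([[0, 0, 1024, 1024]]), (1024, 1024))
#   # -> [[0., 0., 1., 1.]]; denorm_boxes() maps it back exactly.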
763 |
764 |
--------------------------------------------------------------------------------
/visualize.py:
--------------------------------------------------------------------------------
1 | """
2 | Mask R-CNN
3 | Display and Visualization Functions.
4 |
5 | Copyright (c) 2017 Matterport, Inc.
6 | Licensed under the MIT License (see LICENSE for details)
7 | Written by Waleed Abdulla
8 | """
9 |
10 | import os
11 | import random
12 | import itertools
13 | import colorsys
14 | import numpy as np
15 | from skimage.measure import find_contours
16 | import matplotlib.pyplot as plt
17 | if "DISPLAY" not in os.environ:
18 | plt.switch_backend('agg')
19 | import matplotlib.patches as patches
20 | import matplotlib.lines as lines
21 | from matplotlib.patches import Polygon
22 |
23 | import utils
24 |
25 |
26 | ############################################################
27 | # Visualization
28 | ############################################################
29 |
30 | def display_images(images, titles=None, cols=4, cmap=None, norm=None,
31 | interpolation=None):
32 | """Display the given set of images, optionally with titles.
33 | images: list or array of image tensors in HWC format.
34 | titles: optional. A list of titles to display with each image.
35 | cols: number of images per row
36 | cmap: Optional. Color map to use. For example, "Blues".
37 | norm: Optional. A Normalize instance to map values to colors.
38 | interpolation: Optional. Image interpolation to use for display.
39 | """
40 | titles = titles if titles is not None else [""] * len(images)
41 | rows = (len(images) + cols - 1) // cols  # ceil division; avoids a blank extra row
42 | plt.figure(figsize=(14, 14 * rows // cols))
43 | i = 1
44 | for image, title in zip(images, titles):
45 | plt.subplot(rows, cols, i)
46 | plt.title(title, fontsize=9)
47 | plt.axis('off')
48 | plt.imshow(image.astype(np.uint8), cmap=cmap,
49 | norm=norm, interpolation=interpolation)
50 | i += 1
51 | plt.show()
52 |
53 |
54 | def random_colors(N, bright=True):
55 | """
56 | Generate random colors.
57 | To get visually distinct colors, generate them in HSV space then
58 | convert to RGB.
59 | """
60 | brightness = 1.0 if bright else 0.7
61 | hsv = [(i / N, 1, brightness) for i in range(N)]
62 | colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv))
63 | random.shuffle(colors)
64 | return colors
65 |
66 |
67 | def apply_mask(image, mask, color, alpha=0.5):
68 | """Apply the given mask to the image.
69 | """
70 | for c in range(3):
71 | image[:, :, c] = np.where(mask == 1,
72 | image[:, :, c] *
73 | (1 - alpha) + alpha * color[c] * 255,
74 | image[:, :, c])
75 | return image
76 |
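# apply_mask() alpha-blends each masked pixel toward `color`. For example,
# with alpha=0.5 a black pixel under a pure-red mask (color=(1, 0, 0))
# becomes (0 * 0.5 + 0.5 * 1.0 * 255, 0, 0) = (127.5, 0, 0).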
77 |
78 | def display_instances(image, boxes, masks, class_ids, class_names,
79 | scores=None, title="",
80 | figsize=(16, 16), ax=None):
81 | """
82 | boxes: [num_instance, (y1, x1, y2, x2, class_id)] in image coordinates.
83 | masks: [height, width, num_instances]
84 | class_ids: [num_instances]
85 | class_names: list of class names of the dataset
86 | scores: (optional) confidence scores for each box
87 | figsize: (optional) the size of the image.
88 | """
89 | # Number of instances
90 | N = boxes.shape[0]
91 | if not N:
92 | print("\n*** No instances to display *** \n")
93 | else:
94 | assert boxes.shape[0] == masks.shape[-1] == class_ids.shape[0]
95 |
96 | if not ax:
97 | _, ax = plt.subplots(1, figsize=figsize)
98 |
99 | # Generate random colors
100 | colors = random_colors(N)
101 |
102 | # Show area outside image boundaries.
103 | height, width = image.shape[:2]
104 | ax.set_ylim(height + 10, -10)
105 | ax.set_xlim(-10, width + 10)
106 | ax.axis('off')
107 | ax.set_title(title)
108 |
109 | masked_image = image.astype(np.uint32).copy()
110 | for i in range(N):
111 | color = colors[i]
112 |
113 | # Bounding box
114 | if not np.any(boxes[i]):
115 | # Skip this instance. Has no bbox. Likely lost in image cropping.
116 | continue
117 | y1, x1, y2, x2 = boxes[i]
118 | p = patches.Rectangle((x1, y1), x2 - x1, y2 - y1, linewidth=2,
119 | alpha=0.7, linestyle="dashed",
120 | edgecolor=color, facecolor='none')
121 | ax.add_patch(p)
122 |
123 | # Label
124 | class_id = class_ids[i]
125 | score = scores[i] if scores is not None else None
126 | label = class_names[class_id]
127 | # Compare against None so a legitimate 0.0 score is still shown.
128 | caption = "{} {:.3f}".format(label, score) if score is not None else label
129 | ax.text(x1, y1 + 8, caption,
130 | color='w', size=11, backgroundcolor="none")
131 |
132 | # Mask
133 | mask = masks[:, :, i]
134 | masked_image = apply_mask(masked_image, mask, color)
135 |
136 | # Mask Polygon
137 | # Pad to ensure proper polygons for masks that touch image edges.
138 | padded_mask = np.zeros(
139 | (mask.shape[0] + 2, mask.shape[1] + 2), dtype=np.uint8)
140 | padded_mask[1:-1, 1:-1] = mask
141 | contours = find_contours(padded_mask, 0.5)
142 | for verts in contours:
143 | # Subtract the padding and flip (y, x) to (x, y)
144 | verts = np.fliplr(verts) - 1
145 | p = Polygon(verts, facecolor="none", edgecolor=color)
146 | ax.add_patch(p)
147 | ax.imshow(masked_image.astype(np.uint8))
148 | plt.show()
149 |
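# Typical usage sketch; the result-dict keys follow this repo's demo
# scripts (e.g. demo_coco.py):
#
#   r = results[0]  # output of model.detect([image])
#   display_instances(image, r['rois'], r['masks'], r['class_ids'],
#                     class_names, r['scores'])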
150 |
151 | def draw_rois(image, rois, refined_rois, mask, class_ids, class_names, limit=10):
152 | """
153 | anchors: [n, (y1, x1, y2, x2)] list of anchors in image coordinates.
154 | proposals: [n, 4] the same anchors but refined to fit objects better.
155 | """
156 | masked_image = image.copy()
157 |
158 | # Pick random anchors in case there are too many.
159 | ids = np.arange(rois.shape[0], dtype=np.int32)
160 | ids = np.random.choice(
161 | ids, limit, replace=False) if ids.shape[0] > limit else ids
162 |
163 | fig, ax = plt.subplots(1, figsize=(12, 12))
164 | if rois.shape[0] > limit:
165 | plt.title("Showing {} random ROIs out of {}".format(
166 | len(ids), rois.shape[0]))
167 | else:
168 | plt.title("{} ROIs".format(len(ids)))
169 |
170 | # Show area outside image boundaries.
171 | ax.set_ylim(image.shape[0] + 20, -20)
172 | ax.set_xlim(-50, image.shape[1] + 20)
173 | ax.axis('off')
174 |
175 | for i, id in enumerate(ids):
176 | color = np.random.rand(3)
177 | class_id = class_ids[id]
178 | # ROI
179 | y1, x1, y2, x2 = rois[id]
180 | p = patches.Rectangle((x1, y1), x2 - x1, y2 - y1, linewidth=2,
181 | edgecolor=color if class_id else "gray",
182 | facecolor='none', linestyle="dashed")
183 | ax.add_patch(p)
184 | # Refined ROI
185 | if class_id:
186 | ry1, rx1, ry2, rx2 = refined_rois[id]
187 | p = patches.Rectangle((rx1, ry1), rx2 - rx1, ry2 - ry1, linewidth=2,
188 | edgecolor=color, facecolor='none')
189 | ax.add_patch(p)
190 | # Connect the top-left corners of the anchor and proposal for easy visualization
191 | ax.add_line(lines.Line2D([x1, rx1], [y1, ry1], color=color))
192 |
193 | # Label
194 | label = class_names[class_id]
195 | ax.text(rx1, ry1 + 8, "{}".format(label),
196 | color='w', size=11, backgroundcolor="none")
197 |
198 | # Mask
199 | m = utils.unmold_mask(mask[id], rois[id][:4].astype(np.int32),
200 | image.shape)
201 | masked_image = apply_mask(masked_image, m, color)
202 |
203 | ax.imshow(masked_image)
204 |
205 | # Print stats
206 | print("Positive ROIs: ", class_ids[class_ids > 0].shape[0])
207 | print("Negative ROIs: ", class_ids[class_ids == 0].shape[0])
208 | print("Positive Ratio: {:.2f}".format(
209 | class_ids[class_ids > 0].shape[0] / class_ids.shape[0]))
210 |
211 |
212 | # TODO: Replace with matplotlib equivalent?
213 | def draw_box(image, box, color):
214 | """Draw 3-pixel width bounding boxes on the given image array.
215 | color: list of 3 int values for RGB.
216 | """
217 | y1, x1, y2, x2 = box
218 | image[y1:y1 + 2, x1:x2] = color
219 | image[y2:y2 + 2, x1:x2] = color
220 | image[y1:y2, x1:x1 + 2] = color
221 | image[y1:y2, x2:x2 + 2] = color
222 | return image
223 |
224 |
225 | def display_top_masks(image, mask, class_ids, class_names, limit=4):
226 | """Display the given image and the top few class masks."""
227 | to_display = []
228 | titles = []
229 | to_display.append(image)
230 | titles.append("H x W={}x{}".format(image.shape[0], image.shape[1]))
231 | # Pick top prominent classes in this image
232 | unique_class_ids = np.unique(class_ids)
233 | mask_area = [np.sum(mask[:, :, np.where(class_ids == i)[0]])
234 | for i in unique_class_ids]
235 | top_ids = [v[0] for v in sorted(zip(unique_class_ids, mask_area),
236 | key=lambda r: r[1], reverse=True) if v[1] > 0]
237 | # Generate images and titles
238 | for i in range(limit):
239 | class_id = top_ids[i] if i < len(top_ids) else -1
240 | # Pull masks of instances belonging to the same class.
241 | m = mask[:, :, np.where(class_ids == class_id)[0]]
242 | m = np.sum(m * np.arange(1, m.shape[-1] + 1), -1)
243 | to_display.append(m)
244 | titles.append(class_names[class_id] if class_id != -1 else "-")
245 | display_images(to_display, titles=titles, cols=limit + 1, cmap="Blues_r")
246 |
247 |
248 | def plot_precision_recall(AP, precisions, recalls):
249 | """Draw the precision-recall curve.
250 |
251 | AP: Average precision at IoU >= 0.5
252 | precisions: list of precision values
253 | recalls: list of recall values
254 | """
255 | # Plot the Precision-Recall curve
256 | _, ax = plt.subplots(1)
257 | ax.set_title("Precision-Recall Curve. AP@50 = {:.3f}".format(AP))
258 | ax.set_ylim(0, 1.1)
259 | ax.set_xlim(0, 1.1)
260 | _ = ax.plot(recalls, precisions)
261 |
262 |
263 | def plot_overlaps(gt_class_ids, pred_class_ids, pred_scores,
264 | overlaps, class_names, threshold=0.5):
265 | """Draw a grid showing how ground truth objects are classified.
266 | gt_class_ids: [N] int. Ground truth class IDs
267 | pred_class_id: [N] int. Predicted class IDs
268 | pred_scores: [N] float. The probability scores of predicted classes
269 | overlaps: [pred_boxes, gt_boxes] IoU overlaps of predictions and GT boxes.
270 | class_names: list of all class names in the dataset
271 | threshold: Float. The prediction probability required to predict a class
272 | """
273 | gt_class_ids = gt_class_ids[gt_class_ids != 0]
274 | pred_class_ids = pred_class_ids[pred_class_ids != 0]
275 |
276 | plt.figure(figsize=(12, 10))
277 | plt.imshow(overlaps, interpolation='nearest', cmap=plt.cm.Blues)
278 | plt.yticks(np.arange(len(pred_class_ids)),
279 | ["{} ({:.2f})".format(class_names[int(id)], pred_scores[i])
280 | for i, id in enumerate(pred_class_ids)])
281 | plt.xticks(np.arange(len(gt_class_ids)),
282 | [class_names[int(id)] for id in gt_class_ids], rotation=90)
283 |
284 | thresh = overlaps.max() / 2.
285 | for i, j in itertools.product(range(overlaps.shape[0]),
286 | range(overlaps.shape[1])):
287 | text = ""
288 | if overlaps[i, j] > threshold:
289 | text = "match" if gt_class_ids[j] == pred_class_ids[i] else "wrong"
290 | color = ("white" if overlaps[i, j] > thresh
291 | else "black" if overlaps[i, j] > 0
292 | else "grey")
293 | plt.text(j, i, "{:.3f}\n{}".format(overlaps[i, j], text),
294 | horizontalalignment="center", verticalalignment="center",
295 | fontsize=9, color=color)
296 |
297 | plt.tight_layout()
298 | plt.xlabel("Ground Truth")
299 | plt.ylabel("Predictions")
300 |
301 |
302 | def draw_boxes(image, boxes=None, refined_boxes=None,
303 | masks=None, captions=None, visibilities=None,
304 | title="", ax=None):
305 | """Draw bounding boxes and segmentation masks with differnt
306 | customizations.
307 |
308 | boxes: [N, (y1, x1, y2, x2, class_id)] in image coordinates.
309 | refined_boxes: Like boxes, but draw with solid lines to show
310 | that they're the result of refining 'boxes'.
311 | masks: [N, height, width]
312 | captions: List of N titles to display on each box
313 | visibilities: (optional) List of values of 0, 1, or 2. Determines how
314 | prominent each bounding box should be.
315 | title: An optional title to show over the image
316 | ax: (optional) Matplotlib axis to draw on.
317 | """
318 | # Number of boxes
319 | assert boxes is not None or refined_boxes is not None
320 | N = boxes.shape[0] if boxes is not None else refined_boxes.shape[0]
321 |
322 | # Matplotlib Axis
323 | if not ax:
324 | _, ax = plt.subplots(1, figsize=(12, 12))
325 |
326 | # Generate random colors
327 | colors = random_colors(N)
328 |
329 | # Show area outside image boundaries.
330 | margin = image.shape[0] // 10
331 | ax.set_ylim(image.shape[0] + margin, -margin)
332 | ax.set_xlim(-margin, image.shape[1] + margin)
333 | ax.axis('off')
334 |
335 | ax.set_title(title)
336 |
337 | masked_image = image.astype(np.uint32).copy()
338 | for i in range(N):
339 | # Box visibility
340 | visibility = visibilities[i] if visibilities is not None else 1
341 | if visibility == 0:
342 | color = "gray"
343 | style = "dotted"
344 | alpha = 0.5
345 | elif visibility == 1:
346 | color = colors[i]
347 | style = "dotted"
348 | alpha = 1
349 | elif visibility == 2:
350 | color = colors[i]
351 | style = "solid"
352 | alpha = 1
353 |
354 | # Boxes
355 | if boxes is not None:
356 | if not np.any(boxes[i]):
357 | # Skip this instance. Has no bbox. Likely lost in cropping.
358 | continue
359 | y1, x1, y2, x2 = boxes[i]
360 | p = patches.Rectangle((x1, y1), x2 - x1, y2 - y1, linewidth=2,
361 | alpha=alpha, linestyle=style,
362 | edgecolor=color, facecolor='none')
363 | ax.add_patch(p)
364 |
365 | # Refined boxes
366 | if refined_boxes is not None and visibility > 0:
367 | ry1, rx1, ry2, rx2 = refined_boxes[i].astype(np.int32)
368 | p = patches.Rectangle((rx1, ry1), rx2 - rx1, ry2 - ry1, linewidth=2,
369 | edgecolor=color, facecolor='none')
370 | ax.add_patch(p)
371 | # Connect the top-left corners of the anchor and proposal
372 | if boxes is not None:
373 | ax.add_line(lines.Line2D([x1, rx1], [y1, ry1], color=color))
374 |
375 | # Captions
376 | if captions is not None:
377 | caption = captions[i]
378 | # If there are refined boxes, display captions on them
379 | if refined_boxes is not None:
380 | y1, x1, y2, x2 = ry1, rx1, ry2, rx2
381 | # Captions are anchored at the box's top-left corner.
382 | ax.text(x1, y1, caption, size=11, verticalalignment='top',
383 | color='w', backgroundcolor="none",
384 | bbox={'facecolor': color, 'alpha': 0.5,
385 | 'pad': 2, 'edgecolor': 'none'})
386 |
387 | # Masks
388 | if masks is not None:
389 | mask = masks[:, :, i]
390 | masked_image = apply_mask(masked_image, mask, color)
391 | # Mask Polygon
392 | # Pad to ensure proper polygons for masks that touch image edges.
393 | padded_mask = np.zeros(
394 | (mask.shape[0] + 2, mask.shape[1] + 2), dtype=np.uint8)
395 | padded_mask[1:-1, 1:-1] = mask
396 | contours = find_contours(padded_mask, 0.5)
397 | for verts in contours:
398 | # Subtract the padding and flip (y, x) to (x, y)
399 | verts = np.fliplr(verts) - 1
400 | p = Polygon(verts, facecolor="none", edgecolor=color)
401 | ax.add_patch(p)
402 | ax.imshow(masked_image.astype(np.uint8))
403 |
404 | def plot_loss(loss, val_loss, save=True, log_dir=None):
405 | """Plot the total loss and each of its five component losses, saving
406 | each figure to log_dir when save is True, otherwise showing it."""
407 | loss = np.array(loss)
408 | val_loss = np.array(val_loss)
409 |
410 | # Column i of the loss arrays holds the i-th loss term below.
411 | names = ["loss", "rpn_class_loss", "rpn_bbox_loss",
412 | "mrcnn_class_loss", "mrcnn_bbox_loss", "mrcnn_mask_loss"]
413 | for i, name in enumerate(names):
414 | plt.figure(name)
415 | plt.gcf().clear()
416 | plt.plot(loss[:, i], label='train')
417 | plt.plot(val_loss[:, i], label='valid')
418 | plt.xlabel('epoch')
419 | plt.ylabel('loss')
420 | plt.legend()
421 | if save:
422 | save_path = os.path.join(log_dir, "{}.png".format(name))
423 | plt.savefig(save_path)
424 | else:
425 | plt.show(block=False)
426 | plt.pause(0.1)
427 |
428 |
--------------------------------------------------------------------------------