├── .gitignore
├── LICENSE
├── README.md
├── annotations
│   ├── wider_train_annotation.txt
│   └── wider_val_annotation.txt
├── convert.py
├── convert_annotation.ipynb
├── darknet53.cfg
├── font
│   ├── FiraMono-Medium.otf
│   └── SIL Open Font License.txt
├── image
│   └── thumbnail_image.png
├── kmeans.py
├── model_data
│   ├── coco_classes.txt
│   ├── tiny_yolo_anchors.txt
│   ├── voc_classes.txt
│   ├── wider_anchors.txt
│   ├── wider_classes.txt
│   └── yolo_anchors.txt
├── train.py
├── train_bottleneck.py
├── wider_face_split
│   ├── readme.txt
│   ├── wider_face_test.mat
│   ├── wider_face_test_filelist.txt
│   ├── wider_face_train.mat
│   ├── wider_face_train_bbx_gt.txt
│   ├── wider_face_val.mat
│   └── wider_face_val_bbx_gt.txt
├── yolo.py
├── yolo3
│   ├── __init__.py
│   ├── model.py
│   └── utils.py
├── yolo_video.py
├── yolov3-tiny.cfg
└── yolov3.cfg
/.gitignore:
--------------------------------------------------------------------------------
1 | *.jpg
2 | *.png
3 | !image/thumbnail_image.png
4 | *.weights
5 | *.h5
6 | logs/
7 | *_test.py
8 |
9 | WIDER_train/
10 | *.mp4
11 | *.ini
12 |
13 | # Byte-compiled / optimized / DLL files
14 | __pycache__/
15 | *.py[cod]
16 | *$py.class
17 |
18 | # C extensions
19 | *.so
20 |
21 | # Distribution / packaging
22 | .Python
23 | env/
24 | build/
25 | develop-eggs/
26 | dist/
27 | downloads/
28 | eggs/
29 | .eggs/
30 | lib/
31 | lib64/
32 | parts/
33 | sdist/
34 | var/
35 | wheels/
36 | *.egg-info/
37 | .installed.cfg
38 | *.egg
39 |
40 | # PyInstaller
41 | # Usually these files are written by a python script from a template
42 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
43 | *.manifest
44 | *.spec
45 |
46 | # Installer logs
47 | pip-log.txt
48 | pip-delete-this-directory.txt
49 |
50 | # Unit test / coverage reports
51 | htmlcov/
52 | .tox/
53 | .coverage
54 | .coverage.*
55 | .cache
56 | nosetests.xml
57 | coverage.xml
58 | *.cover
59 | .hypothesis/
60 |
61 | # Translations
62 | *.mo
63 | *.pot
64 |
65 | # Django stuff:
66 | *.log
67 | local_settings.py
68 |
69 | # Flask stuff:
70 | instance/
71 | .webassets-cache
72 |
73 | # Scrapy stuff:
74 | .scrapy
75 |
76 | # Sphinx documentation
77 | docs/_build/
78 |
79 | # PyBuilder
80 | target/
81 |
82 | # Jupyter Notebook
83 | .ipynb_checkpoints
84 |
85 | # pyenv
86 | .python-version
87 |
88 | # celery beat schedule file
89 | celerybeat-schedule
90 |
91 | # SageMath parsed files
92 | *.sage.py
93 |
94 | # dotenv
95 | .env
96 |
97 | # virtualenv
98 | .venv
99 | venv/
100 | ENV/
101 |
102 | # Spyder project settings
103 | .spyderproject
104 | .spyproject
105 |
106 | # Rope project settings
107 | .ropeproject
108 |
109 | # mkdocs documentation
110 | /site
111 |
112 | # mypy
113 | .mypy_cache/
114 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2018 qqwweee
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # keras-yolo3-facedetection
2 |
3 | [License](LICENSE)
4 |
5 | ## Introduction
6 |
7 | This is a real-time face detection model using YOLOv3 with Keras.
8 |
9 | The Keras implementation of YOLOv3 is by [qqwweee](https://github.com/qqwweee/keras-yolo3).
10 | The face dataset comes from [WIDER Face](http://shuoyang1213.me/WIDERFACE/).
11 |
12 | ---
13 |
14 | ## Quick Start
15 |
16 | 1. Download YOLOv3-Face model from [HERE](https://drive.google.com/file/d/1zU_n5CwnGfYgFNLQ1JZlsl-rHjPV-kmp/view?usp=sharing)
17 | 2. Place `wider_face_yolo.h5` into `model_data/`
18 | 3. Run YOLO detection.
19 |
20 | ```
21 | python yolo_video.py [OPTIONS...] --image, for image detection mode, OR
22 | python yolo_video.py [video_path] [output_path (optional)]
23 | ```
24 |
25 | ### Usage
26 | Use `--help` to see the usage of `yolo_video.py`:
27 | ```
28 | usage: yolo_video.py [-h] [--model MODEL] [--anchors ANCHORS]
29 | [--classes CLASSES] [--gpu_num GPU_NUM] [--image]
30 | [--input] [--output]
31 |
32 | positional arguments:
33 | --input Video input path
34 | --output Video output path
35 |
36 | optional arguments:
37 | -h, --help show this help message and exit
38 | --model MODEL path to model weight file, default model_data/yolo.h5
39 | --anchors ANCHORS path to anchor definitions, default
40 | model_data/yolo_anchors.txt
41 | --classes CLASSES path to class definitions, default
42 | model_data/coco_classes.txt
43 | --gpu_num GPU_NUM Number of GPU to use, default 1
44 | --image Image detection mode, will ignore all positional arguments
45 | ```
46 | ---
47 |
48 | ## Training
49 |
50 | 1. Generate your own annotation file and class names file.
51 | One row per image;
52 | Row format: `image_file_path box1 box2 ... boxN`;
53 | Box format: `x_min,y_min,x_max,y_max,class_id` (no space).
54 | For the VOC dataset, try `python voc_annotation.py`.
55 | Here is an example:
56 | ```
57 | path/to/img1.jpg 50,100,150,200,0 30,50,200,120,3
58 | path/to/img2.jpg 120,300,250,600,2
59 | ...
60 | ```
61 |
62 | 2. Make sure you have run `python convert.py -w yolov3.cfg yolov3.weights model_data/yolo_weights.h5`
63 | The file `model_data/yolo_weights.h5` is used to load pretrained weights.
64 |
65 | 3. Modify `train.py` and start training.
66 | `python train.py`
67 | Use your trained weights or checkpoint weights with the command-line option `--model model_file` when running `yolo_video.py`.
68 | Remember to modify the class path and anchor path with `--classes class_file` and `--anchors anchor_file`.
69 |
70 | If you want to use the original pretrained weights for YOLOv3:
71 | 1. `wget https://pjreddie.com/media/files/darknet53.conv.74`
72 | 2. Rename it to `darknet53.weights`
73 | 3. `python convert.py -w darknet53.cfg darknet53.weights model_data/darknet53_weights.h5`
74 | 4. Use `model_data/darknet53_weights.h5` in `train.py`
75 |
76 | ---
77 |
78 | ## Result
79 |
80 | #### Video Inference Result
81 | [](https://youtu.be/XPMrPiRkBUc)
82 |
--------------------------------------------------------------------------------
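
The annotation row format described in the README's training step 1 is straightforward to parse. Below is a minimal sketch (not part of the repository; the helper name is illustrative) that splits one row into an image path and an (N, 5) array of `x_min, y_min, x_max, y_max, class_id` boxes, i.e. the same rows that `train.py` later feeds through its data generator:

```python
import numpy as np

def parse_annotation_line(line):
    """Illustrative helper: split one annotation row into an image path
    and an (N, 5) int array of [x_min, y_min, x_max, y_max, class_id]."""
    parts = line.strip().split()
    image_path = parts[0]
    boxes = np.array([list(map(int, box.split(','))) for box in parts[1:]])
    return image_path, boxes

path, boxes = parse_annotation_line('path/to/img1.jpg 50,100,150,200,0 30,50,200,120,3')
print(path)         # path/to/img1.jpg
print(boxes.shape)  # (2, 5)
```
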
/convert.py:
--------------------------------------------------------------------------------
1 | #! /usr/bin/env python
2 | """
3 | Reads Darknet config and weights and creates Keras model with TF backend.
4 |
5 | """
6 |
7 | import argparse
8 | import configparser
9 | import io
10 | import os
11 | from collections import defaultdict
12 |
13 | import numpy as np
14 | from keras import backend as K
15 | from keras.layers import (Conv2D, Input, ZeroPadding2D, Add,
16 | UpSampling2D, MaxPooling2D, Concatenate)
17 | from keras.layers.advanced_activations import LeakyReLU
18 | from keras.layers.normalization import BatchNormalization
19 | from keras.models import Model
20 | from keras.regularizers import l2
21 | from keras.utils.vis_utils import plot_model as plot
22 |
23 |
24 | parser = argparse.ArgumentParser(description='Darknet To Keras Converter.')
25 | parser.add_argument('config_path', help='Path to Darknet cfg file.')
26 | parser.add_argument('weights_path', help='Path to Darknet weights file.')
27 | parser.add_argument('output_path', help='Path to output Keras model file.')
28 | parser.add_argument(
29 | '-p',
30 | '--plot_model',
31 | help='Plot generated Keras model and save as image.',
32 | action='store_true')
33 | parser.add_argument(
34 | '-w',
35 | '--weights_only',
36 | help='Save as Keras weights file instead of model file.',
37 | action='store_true')
38 |
39 | def unique_config_sections(config_file):
40 | """Convert all config sections to have unique names.
41 |
42 | Adds unique suffixes to config sections for compatibility with configparser.
43 | """
44 | section_counters = defaultdict(int)
45 | output_stream = io.StringIO()
46 | with open(config_file) as fin:
47 | for line in fin:
48 | if line.startswith('['):
49 | section = line.strip().strip('[]')
50 | _section = section + '_' + str(section_counters[section])
51 | section_counters[section] += 1
52 | line = line.replace(section, _section)
53 | output_stream.write(line)
54 | output_stream.seek(0)
55 | return output_stream
56 |
57 | # %%
58 | def _main(args):
59 | config_path = os.path.expanduser(args.config_path)
60 | weights_path = os.path.expanduser(args.weights_path)
61 | assert config_path.endswith('.cfg'), '{} is not a .cfg file'.format(
62 | config_path)
63 | assert weights_path.endswith(
64 | '.weights'), '{} is not a .weights file'.format(weights_path)
65 |
66 | output_path = os.path.expanduser(args.output_path)
67 | assert output_path.endswith(
68 | '.h5'), 'output path {} is not a .h5 file'.format(output_path)
69 | output_root = os.path.splitext(output_path)[0]
70 |
71 | # Load weights and config.
72 | print('Loading weights.')
73 | weights_file = open(weights_path, 'rb')
74 | major, minor, revision = np.ndarray(
75 | shape=(3, ), dtype='int32', buffer=weights_file.read(12))
76 | if (major*10+minor)>=2 and major<1000 and minor<1000:
77 | seen = np.ndarray(shape=(1,), dtype='int64', buffer=weights_file.read(8))
78 | else:
79 | seen = np.ndarray(shape=(1,), dtype='int32', buffer=weights_file.read(4))
80 | print('Weights Header: ', major, minor, revision, seen)
81 |
82 | print('Parsing Darknet config.')
83 | unique_config_file = unique_config_sections(config_path)
84 | cfg_parser = configparser.ConfigParser()
85 | cfg_parser.read_file(unique_config_file)
86 |
87 | print('Creating Keras model.')
88 | input_layer = Input(shape=(None, None, 3))
89 | prev_layer = input_layer
90 | all_layers = []
91 |
92 | weight_decay = float(cfg_parser['net_0']['decay']
93 | ) if 'net_0' in cfg_parser.sections() else 5e-4
94 | count = 0
95 | out_index = []
96 | for section in cfg_parser.sections():
97 | print('Parsing section {}'.format(section))
98 | if section.startswith('convolutional'):
99 | filters = int(cfg_parser[section]['filters'])
100 | size = int(cfg_parser[section]['size'])
101 | stride = int(cfg_parser[section]['stride'])
102 | pad = int(cfg_parser[section]['pad'])
103 | activation = cfg_parser[section]['activation']
104 | batch_normalize = 'batch_normalize' in cfg_parser[section]
105 |
106 | padding = 'same' if pad == 1 and stride == 1 else 'valid'
107 |
108 | # Setting weights.
109 | # Darknet serializes convolutional weights as:
110 | # [bias/beta, [gamma, mean, variance], conv_weights]
111 | prev_layer_shape = K.int_shape(prev_layer)
112 |
113 | weights_shape = (size, size, prev_layer_shape[-1], filters)
114 | darknet_w_shape = (filters, weights_shape[2], size, size)
115 | weights_size = np.product(weights_shape)
116 |
117 | print('conv2d', 'bn'
118 | if batch_normalize else ' ', activation, weights_shape)
119 |
120 | conv_bias = np.ndarray(
121 | shape=(filters, ),
122 | dtype='float32',
123 | buffer=weights_file.read(filters * 4))
124 | count += filters
125 |
126 | if batch_normalize:
127 | bn_weights = np.ndarray(
128 | shape=(3, filters),
129 | dtype='float32',
130 | buffer=weights_file.read(filters * 12))
131 | count += 3 * filters
132 |
133 | bn_weight_list = [
134 | bn_weights[0], # scale gamma
135 | conv_bias, # shift beta
136 | bn_weights[1], # running mean
137 | bn_weights[2] # running var
138 | ]
139 |
140 | conv_weights = np.ndarray(
141 | shape=darknet_w_shape,
142 | dtype='float32',
143 | buffer=weights_file.read(weights_size * 4))
144 | count += weights_size
145 |
146 | # DarkNet conv_weights are serialized Caffe-style:
147 | # (out_dim, in_dim, height, width)
148 | # We would like to set these to Tensorflow order:
149 | # (height, width, in_dim, out_dim)
150 | conv_weights = np.transpose(conv_weights, [2, 3, 1, 0])
151 | conv_weights = [conv_weights] if batch_normalize else [
152 | conv_weights, conv_bias
153 | ]
154 |
155 | # Handle activation.
156 | act_fn = None
157 | if activation == 'leaky':
158 | pass # Add advanced activation later.
159 | elif activation != 'linear':
160 | raise ValueError(
161 | 'Unknown activation function `{}` in section {}'.format(
162 | activation, section))
163 |
164 | # Create Conv2D layer
165 | if stride>1:
166 | # Darknet uses left and top padding instead of 'same' mode
167 | prev_layer = ZeroPadding2D(((1,0),(1,0)))(prev_layer)
168 | conv_layer = (Conv2D(
169 | filters, (size, size),
170 | strides=(stride, stride),
171 | kernel_regularizer=l2(weight_decay),
172 | use_bias=not batch_normalize,
173 | weights=conv_weights,
174 | activation=act_fn,
175 | padding=padding))(prev_layer)
176 |
177 | if batch_normalize:
178 | conv_layer = (BatchNormalization(
179 | weights=bn_weight_list))(conv_layer)
180 | prev_layer = conv_layer
181 |
182 | if activation == 'linear':
183 | all_layers.append(prev_layer)
184 | elif activation == 'leaky':
185 | act_layer = LeakyReLU(alpha=0.1)(prev_layer)
186 | prev_layer = act_layer
187 | all_layers.append(act_layer)
188 |
189 | elif section.startswith('route'):
190 | ids = [int(i) for i in cfg_parser[section]['layers'].split(',')]
191 | layers = [all_layers[i] for i in ids]
192 | if len(layers) > 1:
193 | print('Concatenating route layers:', layers)
194 | concatenate_layer = Concatenate()(layers)
195 | all_layers.append(concatenate_layer)
196 | prev_layer = concatenate_layer
197 | else:
198 | skip_layer = layers[0] # only one layer to route
199 | all_layers.append(skip_layer)
200 | prev_layer = skip_layer
201 |
202 | elif section.startswith('maxpool'):
203 | size = int(cfg_parser[section]['size'])
204 | stride = int(cfg_parser[section]['stride'])
205 | all_layers.append(
206 | MaxPooling2D(
207 | pool_size=(size, size),
208 | strides=(stride, stride),
209 | padding='same')(prev_layer))
210 | prev_layer = all_layers[-1]
211 |
212 | elif section.startswith('shortcut'):
213 | index = int(cfg_parser[section]['from'])
214 | activation = cfg_parser[section]['activation']
215 | assert activation == 'linear', 'Only linear activation supported.'
216 | all_layers.append(Add()([all_layers[index], prev_layer]))
217 | prev_layer = all_layers[-1]
218 |
219 | elif section.startswith('upsample'):
220 | stride = int(cfg_parser[section]['stride'])
221 | assert stride == 2, 'Only stride=2 supported.'
222 | all_layers.append(UpSampling2D(stride)(prev_layer))
223 | prev_layer = all_layers[-1]
224 |
225 | elif section.startswith('yolo'):
226 | out_index.append(len(all_layers)-1)
227 | all_layers.append(None)
228 | prev_layer = all_layers[-1]
229 |
230 | elif section.startswith('net'):
231 | pass
232 |
233 | else:
234 | raise ValueError(
235 | 'Unsupported section header type: {}'.format(section))
236 |
237 | # Create and save model.
238 | if len(out_index)==0: out_index.append(len(all_layers)-1)
239 | model = Model(inputs=input_layer, outputs=[all_layers[i] for i in out_index])
240 | print(model.summary())
241 | if args.weights_only:
242 | model.save_weights('{}'.format(output_path))
243 | print('Saved Keras weights to {}'.format(output_path))
244 | else:
245 | model.save('{}'.format(output_path))
246 | print('Saved Keras model to {}'.format(output_path))
247 |
248 | # Check to see if all weights have been read.
249 | remaining_weights = len(weights_file.read()) / 4
250 | weights_file.close()
251 | print('Read {} of {} from Darknet weights.'.format(count, count +
252 | remaining_weights))
253 | if remaining_weights > 0:
254 | print('Warning: {} unused weights'.format(remaining_weights))
255 |
256 | if args.plot_model:
257 | plot(model, to_file='{}.png'.format(output_root), show_shapes=True)
258 | print('Saved model plot to {}.png'.format(output_root))
259 |
260 |
261 | if __name__ == '__main__':
262 | _main(parser.parse_args())
263 |
--------------------------------------------------------------------------------
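
As a quick sanity check on a conversion, the resulting `.h5` file can be loaded back into the YOLOv3 body defined in `yolo3/model.py`, using the same by-name loading that `train.py` relies on. A minimal sketch, assuming the weights file produced by the README's `convert.py` command and the single-class setup of this repository:

```python
from keras.layers import Input
from yolo3.model import yolo_body

num_anchors = 9   # model_data/wider_anchors.txt lists 9 width,height pairs
num_classes = 1   # model_data/wider_classes.txt contains the single class "face"

# Build the full YOLOv3 body and load the converted Darknet weights by layer name,
# mirroring the pretrained-weight loading in train.py.
model_body = yolo_body(Input(shape=(None, None, 3)), num_anchors // 3, num_classes)
model_body.load_weights('model_data/yolo_weights.h5', by_name=True, skip_mismatch=True)
print('Loaded weights into a body with {} layers.'.format(len(model_body.layers)))
```
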
/convert_annotation.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 1,
6 | "metadata": {},
7 | "outputs": [],
8 | "source": [
9 | "train_dir = './WIDER_train/images/'\n",
10 | "val_dir = './WIDER_val/images/'\n",
11 | "annotation_txt_location = './wider_face_split/'\n",
12 | "annotation_file_txt = 'wider_face_train_bbx_gt.txt'\n",
13 | "val_annotation_file_txt = 'wider_face_val_bbx_gt.txt'\n",
14 | "wider_yolo_annotation = 'wider_train_annotation.txt'\n",
15 | "wider_val_annotation = 'wider_val_annotation.txt'"
16 | ]
17 | },
18 | {
19 | "cell_type": "markdown",
20 | "metadata": {},
21 | "source": [
22 | "## YOLO ANNOTATION"
23 | ]
24 | },
25 | {
26 | "cell_type": "markdown",
27 | "metadata": {},
28 | "source": [
29 | "Generate your own annotation file and class names file. \n",
30 | "One row for one image; \n",
31 | "Row format: `image_file_path box1 box2 ... boxN`; \n",
32 | "Box format: `x_min,y_min,x_max,y_max,class_id` (no space). \n",
33 | "For VOC dataset, try `python voc_annotation.py` \n",
34 | "Here is an example:\n",
35 | "```\n",
36 | "path/to/img1.jpg 50,100,150,200,0 30,50,200,120,3\n",
37 | "path/to/img2.jpg 120,300,250,600,2\n",
38 | "...\n",
39 | "```"
40 | ]
41 | },
42 | {
43 | "cell_type": "markdown",
44 | "metadata": {},
45 | "source": [
46 | "## WIDER ANNOTATION"
47 | ]
48 | },
49 | {
50 | "cell_type": "markdown",
51 | "metadata": {},
52 | "source": [
53 | "Attached the mappings between attribute names and label values.\n",
54 | "\n",
55 | "blur:\n",
56 | " clear->0\n",
57 | " normal blur->1\n",
58 | " heavy blur->2\n",
59 | "\n",
60 | "expression:\n",
61 | " typical expression->0\n",
62 | " exaggerate expression->1\n",
63 | "\n",
64 | "illumination:\n",
65 | " normal illumination->0\n",
66 | " extreme illumination->1\n",
67 | "\n",
68 | "occlusion:\n",
69 | " no occlusion->0\n",
70 | " partial occlusion->1\n",
71 | " heavy occlusion->2\n",
72 | "\n",
73 | "pose:\n",
74 | " typical pose->0\n",
75 | " atypical pose->1\n",
76 | "\n",
77 | "invalid:\n",
78 | " false->0(valid image)\n",
79 | " true->1(invalid image)\n",
80 | "\n",
81 | "The format of txt ground truth.\n",
82 | "File name\n",
83 | "Number of bounding box\n",
84 | "x1(left), y1(top), w, h, blur, expression, illumination, invalid, occlusion, pose"
85 | ]
86 | },
87 | {
88 | "cell_type": "code",
89 | "execution_count": 4,
90 | "metadata": {},
91 | "outputs": [
92 | {
93 | "name": "stdout",
94 | "output_type": "stream",
95 | "text": [
96 | "867 164 0 1 1 0 0 0 0 0 \n",
97 | "713 34 1 0 1 0 0 0 0 0 \n",
98 | "299 116 3 0 2 0 0 0 0 0 \n",
99 | "done\n"
100 | ]
101 | }
102 | ],
103 | "source": [
104 | "annotation = annotation_txt_location + annotation_file_txt\n",
105 | "\n",
106 | "yolo_ann = open(wider_yolo_annotation, 'w')\n",
107 | "\n",
108 | "string_buffer = ''\n",
109 | "read_flag = False\n",
110 | "\n",
111 | "with open(annotation) as f:\n",
112 | " for line in f:\n",
113 | " line = line.splitlines()[0]\n",
114 | " if('.jpg' in line):\n",
115 | " if(read_flag):\n",
116 | " yolo_ann.write(string_buffer)\n",
117 | "\n",
118 | " string_buffer = ''\n",
119 | " string_buffer = '\\n' + train_dir + line\n",
120 | " read_flag = False\n",
121 | " #print(line)\n",
122 | " else:\n",
123 | " if(len(line) < 2):\n",
124 | " if(line == '0'):\n",
125 | " string_buffer = ''\n",
126 | " read_flag = False\n",
127 | " else:\n",
128 | " read_flag = True\n",
129 | " else:\n",
130 | " if(read_flag):\n",
131 | " # get the WIDER annotation formats\n",
132 | " get_line_elements = line.split(' ')\n",
133 | " x = int(get_line_elements[0])\n",
134 | " y = int(get_line_elements[1])\n",
135 | " width = int(get_line_elements[2])\n",
136 | " height = int(get_line_elements[3])\n",
137 | " invalid = get_line_elements[7]\n",
138 | "\n",
139 | " # convert to yolo friendly\n",
140 | " x_min = str(x)\n",
141 | " x_max = str(x + width)\n",
142 | " y_min = str(y)\n",
143 | " y_max = str(y + height)\n",
144 | "\n",
145 | " new_annot = ' ' + x_min + ',' + y_min + ',' + x_max + ',' + y_max + ',' + '0'\n",
146 | "\n",
147 | " if(int(x_min) >= int(x_max) or int(y_min) >= int(y_max)):\n",
148 | " #ignore\n",
149 | " print(line)\n",
150 | " else:\n",
151 | " string_buffer = string_buffer + new_annot\n",
152 | " else:\n",
153 | " string_buffer = ''\n",
154 | " \n",
155 | "\n",
156 | "f.close()\n",
157 | "yolo_ann.close()\n",
158 | "print(\"done\")"
159 | ]
160 | },
161 | {
162 | "cell_type": "code",
163 | "execution_count": 3,
164 | "metadata": {},
165 | "outputs": [
166 | {
167 | "name": "stdout",
168 | "output_type": "stream",
169 | "text": [
170 | "0 0 0 0 0 0 0 0 0 0 \n",
171 | "0 0 0 0 0 0 0 0 0 0 \n",
172 | "0 0 0 0 0 0 0 0 0 0 \n",
173 | "0 0 0 0 0 0 0 0 0 0 \n",
174 | "done\n"
175 | ]
176 | }
177 | ],
178 | "source": [
179 | "annotation_val = annotation_txt_location + val_annotation_file_txt\n",
180 | "\n",
181 | "yolo_ann = open(wider_val_annotation, 'w')\n",
182 | "\n",
183 | "string_buffer = ''\n",
184 | "read_flag = False\n",
185 | "\n",
186 | "with open(annotation_val) as f:\n",
187 | " for line in f:\n",
188 | " line = line.splitlines()[0]\n",
189 | " if('.jpg' in line):\n",
190 | " if(read_flag):\n",
191 | " yolo_ann.write(string_buffer)\n",
192 | "\n",
193 | " string_buffer = ''\n",
194 | " string_buffer = '\\n' + val_dir + line\n",
195 | " read_flag = False\n",
196 | " else:\n",
197 | " if(len(line) < 2):\n",
198 | " if(line == '0'):\n",
199 | " string_buffer = ''\n",
200 | " read_flag = False\n",
201 | " else:\n",
202 | " read_flag = True\n",
203 | " else:\n",
204 | " if(read_flag):\n",
205 | " # get the WIDER annotation formats\n",
206 | " get_line_elements = line.split(' ')\n",
207 | " x = int(get_line_elements[0])\n",
208 | " y = int(get_line_elements[1])\n",
209 | " width = int(get_line_elements[2])\n",
210 | " height = int(get_line_elements[3])\n",
211 | " invalid = get_line_elements[7]\n",
212 | "\n",
213 | " # convert to yolo friendly\n",
214 | " x_min = str(x)\n",
215 | " x_max = str(x + width)\n",
216 | " y_min = str(y)\n",
217 | " y_max = str(y + height)\n",
218 | "\n",
219 | " new_annot = ' ' + x_min + ',' + y_min + ',' + x_max + ',' + y_max + ',' + '0'\n",
220 | "\n",
221 | " if(int(x_min) >= int(x_max) or int(y_min) >= int(y_max)):\n",
222 | " #ignore\n",
223 | " print(line)\n",
224 | " else:\n",
225 | " string_buffer = string_buffer + new_annot\n",
226 | " else:\n",
227 | " string_buffer = ''\n",
228 | "\n",
229 | "f.close()\n",
230 | "yolo_ann.close()\n",
231 | "print(\"done\")"
232 | ]
233 | },
234 | {
235 | "cell_type": "code",
236 | "execution_count": null,
237 | "metadata": {},
238 | "outputs": [],
239 | "source": []
240 | }
241 | ],
242 | "metadata": {
243 | "kernelspec": {
244 | "display_name": "Python 3",
245 | "language": "python",
246 | "name": "python3"
247 | },
248 | "language_info": {
249 | "codemirror_mode": {
250 | "name": "ipython",
251 | "version": 3
252 | },
253 | "file_extension": ".py",
254 | "mimetype": "text/x-python",
255 | "name": "python",
256 | "nbconvert_exporter": "python",
257 | "pygments_lexer": "ipython3",
258 | "version": "3.6.8"
259 | }
260 | },
261 | "nbformat": 4,
262 | "nbformat_minor": 2
263 | }
264 |
--------------------------------------------------------------------------------
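
The cell-by-cell conversion above can also be written as one standalone function. The sketch below (the function name and paths are illustrative) follows the notebook's logic: each `x y w h ...` ground-truth line becomes an `x_min,y_min,x_max,y_max,0` box, and degenerate boxes are skipped. In addition it flushes the final image and treats any single-token line as the face count, since the notebook's `len(line) < 2` check only matches single-digit counts:

```python
def convert_wider_to_yolo(gt_path, image_dir, out_path):
    """Illustrative standalone version of the conversion in the notebook above:
    turn a WIDER *_bbx_gt.txt file into YOLO-style annotation rows."""
    with open(gt_path) as gt, open(out_path, 'w') as out:
        image_path, boxes = None, []
        for raw in gt:
            line = raw.strip()
            if not line:
                continue
            if line.endswith('.jpg'):              # new image: flush the previous one
                if image_path and boxes:
                    out.write(image_path + ' ' + ' '.join(boxes) + '\n')
                image_path, boxes = image_dir + line, []
            elif len(line.split()) == 1:           # face-count line (any number of digits)
                continue
            else:                                  # one "x y w h blur ... pose" line per face
                x, y, w, h = (int(v) for v in line.split()[:4])
                if w > 0 and h > 0:                # skip degenerate boxes, as the notebook does
                    boxes.append('{},{},{},{},0'.format(x, y, x + w, y + h))
        if image_path and boxes:                   # flush the last image
            out.write(image_path + ' ' + ' '.join(boxes) + '\n')

# e.g. convert_wider_to_yolo('./wider_face_split/wider_face_train_bbx_gt.txt',
#                            './WIDER_train/images/', 'wider_train_annotation.txt')
```
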
/darknet53.cfg:
--------------------------------------------------------------------------------
1 | [net]
2 | # Testing
3 | batch=1
4 | subdivisions=1
5 | # Training
6 | # batch=64
7 | # subdivisions=16
8 | width=416
9 | height=416
10 | channels=3
11 | momentum=0.9
12 | decay=0.0005
13 | angle=0
14 | saturation = 1.5
15 | exposure = 1.5
16 | hue=.1
17 |
18 | learning_rate=0.001
19 | burn_in=1000
20 | max_batches = 500200
21 | policy=steps
22 | steps=400000,450000
23 | scales=.1,.1
24 |
25 | [convolutional]
26 | batch_normalize=1
27 | filters=32
28 | size=3
29 | stride=1
30 | pad=1
31 | activation=leaky
32 |
33 | # Downsample
34 |
35 | [convolutional]
36 | batch_normalize=1
37 | filters=64
38 | size=3
39 | stride=2
40 | pad=1
41 | activation=leaky
42 |
43 | [convolutional]
44 | batch_normalize=1
45 | filters=32
46 | size=1
47 | stride=1
48 | pad=1
49 | activation=leaky
50 |
51 | [convolutional]
52 | batch_normalize=1
53 | filters=64
54 | size=3
55 | stride=1
56 | pad=1
57 | activation=leaky
58 |
59 | [shortcut]
60 | from=-3
61 | activation=linear
62 |
63 | # Downsample
64 |
65 | [convolutional]
66 | batch_normalize=1
67 | filters=128
68 | size=3
69 | stride=2
70 | pad=1
71 | activation=leaky
72 |
73 | [convolutional]
74 | batch_normalize=1
75 | filters=64
76 | size=1
77 | stride=1
78 | pad=1
79 | activation=leaky
80 |
81 | [convolutional]
82 | batch_normalize=1
83 | filters=128
84 | size=3
85 | stride=1
86 | pad=1
87 | activation=leaky
88 |
89 | [shortcut]
90 | from=-3
91 | activation=linear
92 |
93 | [convolutional]
94 | batch_normalize=1
95 | filters=64
96 | size=1
97 | stride=1
98 | pad=1
99 | activation=leaky
100 |
101 | [convolutional]
102 | batch_normalize=1
103 | filters=128
104 | size=3
105 | stride=1
106 | pad=1
107 | activation=leaky
108 |
109 | [shortcut]
110 | from=-3
111 | activation=linear
112 |
113 | # Downsample
114 |
115 | [convolutional]
116 | batch_normalize=1
117 | filters=256
118 | size=3
119 | stride=2
120 | pad=1
121 | activation=leaky
122 |
123 | [convolutional]
124 | batch_normalize=1
125 | filters=128
126 | size=1
127 | stride=1
128 | pad=1
129 | activation=leaky
130 |
131 | [convolutional]
132 | batch_normalize=1
133 | filters=256
134 | size=3
135 | stride=1
136 | pad=1
137 | activation=leaky
138 |
139 | [shortcut]
140 | from=-3
141 | activation=linear
142 |
143 | [convolutional]
144 | batch_normalize=1
145 | filters=128
146 | size=1
147 | stride=1
148 | pad=1
149 | activation=leaky
150 |
151 | [convolutional]
152 | batch_normalize=1
153 | filters=256
154 | size=3
155 | stride=1
156 | pad=1
157 | activation=leaky
158 |
159 | [shortcut]
160 | from=-3
161 | activation=linear
162 |
163 | [convolutional]
164 | batch_normalize=1
165 | filters=128
166 | size=1
167 | stride=1
168 | pad=1
169 | activation=leaky
170 |
171 | [convolutional]
172 | batch_normalize=1
173 | filters=256
174 | size=3
175 | stride=1
176 | pad=1
177 | activation=leaky
178 |
179 | [shortcut]
180 | from=-3
181 | activation=linear
182 |
183 | [convolutional]
184 | batch_normalize=1
185 | filters=128
186 | size=1
187 | stride=1
188 | pad=1
189 | activation=leaky
190 |
191 | [convolutional]
192 | batch_normalize=1
193 | filters=256
194 | size=3
195 | stride=1
196 | pad=1
197 | activation=leaky
198 |
199 | [shortcut]
200 | from=-3
201 | activation=linear
202 |
203 |
204 | [convolutional]
205 | batch_normalize=1
206 | filters=128
207 | size=1
208 | stride=1
209 | pad=1
210 | activation=leaky
211 |
212 | [convolutional]
213 | batch_normalize=1
214 | filters=256
215 | size=3
216 | stride=1
217 | pad=1
218 | activation=leaky
219 |
220 | [shortcut]
221 | from=-3
222 | activation=linear
223 |
224 | [convolutional]
225 | batch_normalize=1
226 | filters=128
227 | size=1
228 | stride=1
229 | pad=1
230 | activation=leaky
231 |
232 | [convolutional]
233 | batch_normalize=1
234 | filters=256
235 | size=3
236 | stride=1
237 | pad=1
238 | activation=leaky
239 |
240 | [shortcut]
241 | from=-3
242 | activation=linear
243 |
244 | [convolutional]
245 | batch_normalize=1
246 | filters=128
247 | size=1
248 | stride=1
249 | pad=1
250 | activation=leaky
251 |
252 | [convolutional]
253 | batch_normalize=1
254 | filters=256
255 | size=3
256 | stride=1
257 | pad=1
258 | activation=leaky
259 |
260 | [shortcut]
261 | from=-3
262 | activation=linear
263 |
264 | [convolutional]
265 | batch_normalize=1
266 | filters=128
267 | size=1
268 | stride=1
269 | pad=1
270 | activation=leaky
271 |
272 | [convolutional]
273 | batch_normalize=1
274 | filters=256
275 | size=3
276 | stride=1
277 | pad=1
278 | activation=leaky
279 |
280 | [shortcut]
281 | from=-3
282 | activation=linear
283 |
284 | # Downsample
285 |
286 | [convolutional]
287 | batch_normalize=1
288 | filters=512
289 | size=3
290 | stride=2
291 | pad=1
292 | activation=leaky
293 |
294 | [convolutional]
295 | batch_normalize=1
296 | filters=256
297 | size=1
298 | stride=1
299 | pad=1
300 | activation=leaky
301 |
302 | [convolutional]
303 | batch_normalize=1
304 | filters=512
305 | size=3
306 | stride=1
307 | pad=1
308 | activation=leaky
309 |
310 | [shortcut]
311 | from=-3
312 | activation=linear
313 |
314 |
315 | [convolutional]
316 | batch_normalize=1
317 | filters=256
318 | size=1
319 | stride=1
320 | pad=1
321 | activation=leaky
322 |
323 | [convolutional]
324 | batch_normalize=1
325 | filters=512
326 | size=3
327 | stride=1
328 | pad=1
329 | activation=leaky
330 |
331 | [shortcut]
332 | from=-3
333 | activation=linear
334 |
335 |
336 | [convolutional]
337 | batch_normalize=1
338 | filters=256
339 | size=1
340 | stride=1
341 | pad=1
342 | activation=leaky
343 |
344 | [convolutional]
345 | batch_normalize=1
346 | filters=512
347 | size=3
348 | stride=1
349 | pad=1
350 | activation=leaky
351 |
352 | [shortcut]
353 | from=-3
354 | activation=linear
355 |
356 |
357 | [convolutional]
358 | batch_normalize=1
359 | filters=256
360 | size=1
361 | stride=1
362 | pad=1
363 | activation=leaky
364 |
365 | [convolutional]
366 | batch_normalize=1
367 | filters=512
368 | size=3
369 | stride=1
370 | pad=1
371 | activation=leaky
372 |
373 | [shortcut]
374 | from=-3
375 | activation=linear
376 |
377 | [convolutional]
378 | batch_normalize=1
379 | filters=256
380 | size=1
381 | stride=1
382 | pad=1
383 | activation=leaky
384 |
385 | [convolutional]
386 | batch_normalize=1
387 | filters=512
388 | size=3
389 | stride=1
390 | pad=1
391 | activation=leaky
392 |
393 | [shortcut]
394 | from=-3
395 | activation=linear
396 |
397 |
398 | [convolutional]
399 | batch_normalize=1
400 | filters=256
401 | size=1
402 | stride=1
403 | pad=1
404 | activation=leaky
405 |
406 | [convolutional]
407 | batch_normalize=1
408 | filters=512
409 | size=3
410 | stride=1
411 | pad=1
412 | activation=leaky
413 |
414 | [shortcut]
415 | from=-3
416 | activation=linear
417 |
418 |
419 | [convolutional]
420 | batch_normalize=1
421 | filters=256
422 | size=1
423 | stride=1
424 | pad=1
425 | activation=leaky
426 |
427 | [convolutional]
428 | batch_normalize=1
429 | filters=512
430 | size=3
431 | stride=1
432 | pad=1
433 | activation=leaky
434 |
435 | [shortcut]
436 | from=-3
437 | activation=linear
438 |
439 | [convolutional]
440 | batch_normalize=1
441 | filters=256
442 | size=1
443 | stride=1
444 | pad=1
445 | activation=leaky
446 |
447 | [convolutional]
448 | batch_normalize=1
449 | filters=512
450 | size=3
451 | stride=1
452 | pad=1
453 | activation=leaky
454 |
455 | [shortcut]
456 | from=-3
457 | activation=linear
458 |
459 | # Downsample
460 |
461 | [convolutional]
462 | batch_normalize=1
463 | filters=1024
464 | size=3
465 | stride=2
466 | pad=1
467 | activation=leaky
468 |
469 | [convolutional]
470 | batch_normalize=1
471 | filters=512
472 | size=1
473 | stride=1
474 | pad=1
475 | activation=leaky
476 |
477 | [convolutional]
478 | batch_normalize=1
479 | filters=1024
480 | size=3
481 | stride=1
482 | pad=1
483 | activation=leaky
484 |
485 | [shortcut]
486 | from=-3
487 | activation=linear
488 |
489 | [convolutional]
490 | batch_normalize=1
491 | filters=512
492 | size=1
493 | stride=1
494 | pad=1
495 | activation=leaky
496 |
497 | [convolutional]
498 | batch_normalize=1
499 | filters=1024
500 | size=3
501 | stride=1
502 | pad=1
503 | activation=leaky
504 |
505 | [shortcut]
506 | from=-3
507 | activation=linear
508 |
509 | [convolutional]
510 | batch_normalize=1
511 | filters=512
512 | size=1
513 | stride=1
514 | pad=1
515 | activation=leaky
516 |
517 | [convolutional]
518 | batch_normalize=1
519 | filters=1024
520 | size=3
521 | stride=1
522 | pad=1
523 | activation=leaky
524 |
525 | [shortcut]
526 | from=-3
527 | activation=linear
528 |
529 | [convolutional]
530 | batch_normalize=1
531 | filters=512
532 | size=1
533 | stride=1
534 | pad=1
535 | activation=leaky
536 |
537 | [convolutional]
538 | batch_normalize=1
539 | filters=1024
540 | size=3
541 | stride=1
542 | pad=1
543 | activation=leaky
544 |
545 | [shortcut]
546 | from=-3
547 | activation=linear
548 |
549 |
--------------------------------------------------------------------------------
/font/FiraMono-Medium.otf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/swdev1202/keras-yolo3-facedetection/e51b6b159f428add904bec4c2b6ed58a3db41ca6/font/FiraMono-Medium.otf
--------------------------------------------------------------------------------
/font/SIL Open Font License.txt:
--------------------------------------------------------------------------------
1 | Copyright (c) 2014, Mozilla Foundation https://mozilla.org/ with Reserved Font Name Fira Mono.
2 |
3 | Copyright (c) 2014, Telefonica S.A.
4 |
5 | This Font Software is licensed under the SIL Open Font License, Version 1.1.
6 | This license is copied below, and is also available with a FAQ at: http://scripts.sil.org/OFL
7 |
8 | -----------------------------------------------------------
9 | SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
10 | -----------------------------------------------------------
11 |
12 | PREAMBLE
13 | The goals of the Open Font License (OFL) are to stimulate worldwide development of collaborative font projects, to support the font creation efforts of academic and linguistic communities, and to provide a free and open framework in which fonts may be shared and improved in partnership with others.
14 |
15 | The OFL allows the licensed fonts to be used, studied, modified and redistributed freely as long as they are not sold by themselves. The fonts, including any derivative works, can be bundled, embedded, redistributed and/or sold with any software provided that any reserved names are not used by derivative works. The fonts and derivatives, however, cannot be released under any other type of license. The requirement for fonts to remain under this license does not apply to any document created using the fonts or their derivatives.
16 |
17 | DEFINITIONS
18 | "Font Software" refers to the set of files released by the Copyright Holder(s) under this license and clearly marked as such. This may include source files, build scripts and documentation.
19 |
20 | "Reserved Font Name" refers to any names specified as such after the copyright statement(s).
21 |
22 | "Original Version" refers to the collection of Font Software components as distributed by the Copyright Holder(s).
23 |
24 | "Modified Version" refers to any derivative made by adding to, deleting, or substituting -- in part or in whole -- any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment.
25 |
26 | "Author" refers to any designer, engineer, programmer, technical writer or other person who contributed to the Font Software.
27 |
28 | PERMISSION & CONDITIONS
29 | Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to use, study, copy, merge, embed, modify, redistribute, and sell modified and unmodified copies of the Font Software, subject to the following conditions:
30 |
31 | 1) Neither the Font Software nor any of its individual components, in Original or Modified Versions, may be sold by itself.
32 |
33 | 2) Original or Modified Versions of the Font Software may be bundled, redistributed and/or sold with any software, provided that each copy contains the above copyright notice and this license. These can be included either as stand-alone text files, human-readable headers or in the appropriate machine-readable metadata fields within text or binary files as long as those fields can be easily viewed by the user.
34 |
35 | 3) No Modified Version of the Font Software may use the Reserved Font Name(s) unless explicit written permission is granted by the corresponding Copyright Holder. This restriction only applies to the primary font name as presented to the users.
36 |
37 | 4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font Software shall not be used to promote, endorse or advertise any Modified Version, except to acknowledge the contribution(s) of the Copyright Holder(s) and the Author(s) or with their explicit written permission.
38 |
39 | 5) The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this license, and must not be distributed under any other license. The requirement for fonts to remain under this license does not apply to any document created using the Font Software.
40 |
41 | TERMINATION
42 | This license becomes null and void if any of the above conditions are not met.
43 |
44 | DISCLAIMER
45 | THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE.
--------------------------------------------------------------------------------
/image/thumbnail_image.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/swdev1202/keras-yolo3-facedetection/e51b6b159f428add904bec4c2b6ed58a3db41ca6/image/thumbnail_image.png
--------------------------------------------------------------------------------
/kmeans.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 |
4 | class YOLO_Kmeans:
5 |
6 | def __init__(self, cluster_number, filename):
7 | self.cluster_number = cluster_number
8 | self.filename = filename
9 |
10 | def iou(self, boxes, clusters): # 1 box -> k clusters
11 | n = boxes.shape[0]
12 | k = self.cluster_number
13 |
14 | box_area = boxes[:, 0] * boxes[:, 1]
15 | box_area = box_area.repeat(k)
16 | box_area = np.reshape(box_area, (n, k))
17 |
18 | cluster_area = clusters[:, 0] * clusters[:, 1]
19 | cluster_area = np.tile(cluster_area, [1, n])
20 | cluster_area = np.reshape(cluster_area, (n, k))
21 |
22 | box_w_matrix = np.reshape(boxes[:, 0].repeat(k), (n, k))
23 | cluster_w_matrix = np.reshape(np.tile(clusters[:, 0], (1, n)), (n, k))
24 | min_w_matrix = np.minimum(cluster_w_matrix, box_w_matrix)
25 |
26 | box_h_matrix = np.reshape(boxes[:, 1].repeat(k), (n, k))
27 | cluster_h_matrix = np.reshape(np.tile(clusters[:, 1], (1, n)), (n, k))
28 | min_h_matrix = np.minimum(cluster_h_matrix, box_h_matrix)
29 | inter_area = np.multiply(min_w_matrix, min_h_matrix)
30 |
31 | result = inter_area / (box_area + cluster_area - inter_area)
32 | return result
33 |
34 | def avg_iou(self, boxes, clusters):
35 | accuracy = np.mean([np.max(self.iou(boxes, clusters), axis=1)])
36 | return accuracy
37 |
38 | def kmeans(self, boxes, k, dist=np.median):
39 | box_number = boxes.shape[0]
40 | distances = np.empty((box_number, k))
41 | last_nearest = np.zeros((box_number,))
42 | np.random.seed()
43 | clusters = boxes[np.random.choice(
44 | box_number, k, replace=False)] # init k clusters
45 | while True:
46 |
47 | distances = 1 - self.iou(boxes, clusters)
48 |
49 | current_nearest = np.argmin(distances, axis=1)
50 | if (last_nearest == current_nearest).all():
51 | break # clusters won't change
52 | for cluster in range(k):
53 | clusters[cluster] = dist( # update clusters
54 | boxes[current_nearest == cluster], axis=0)
55 |
56 | last_nearest = current_nearest
57 |
58 | return clusters
59 |
60 | def result2txt(self, data):
61 | f = open("yolo_anchors.txt", 'w')
62 | row = np.shape(data)[0]
63 | for i in range(row):
64 | if i == 0:
65 | x_y = "%d,%d" % (data[i][0], data[i][1])
66 | else:
67 | x_y = ", %d,%d" % (data[i][0], data[i][1])
68 | f.write(x_y)
69 | f.close()
70 |
71 | def txt2boxes(self):
72 | f = open(self.filename, 'r')
73 | dataSet = []
74 | for line in f:
75 | infos = line.split(" ")
76 | length = len(infos)
77 | for i in range(1, length):
78 | width = int(infos[i].split(",")[2]) - \
79 | int(infos[i].split(",")[0])
80 | height = int(infos[i].split(",")[3]) - \
81 | int(infos[i].split(",")[1])
82 | dataSet.append([width, height])
83 | result = np.array(dataSet)
84 | f.close()
85 | return result
86 |
87 | def txt2clusters(self):
88 | all_boxes = self.txt2boxes()
89 | result = self.kmeans(all_boxes, k=self.cluster_number)
90 | result = result[np.lexsort(result.T[0, None])]
91 | self.result2txt(result)
92 | print("K anchors:\n {}".format(result))
93 | print("Accuracy: {:.2f}%".format(
94 | self.avg_iou(all_boxes, result) * 100))
95 |
96 |
97 | if __name__ == "__main__":
98 | cluster_number = 9
99 | filename = "2012_train.txt"
100 | kmeans = YOLO_Kmeans(cluster_number, filename)
101 | kmeans.txt2clusters()
102 |
--------------------------------------------------------------------------------
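
`model_data/wider_anchors.txt` currently holds the default YOLOv3 anchors (identical to `model_data/yolo_anchors.txt`); `kmeans.py` can instead cluster dataset-specific anchors from a generated annotation file. A minimal usage sketch, assuming the WIDER training annotation path used by `train.py` (note that `result2txt` writes its output to `yolo_anchors.txt` in the working directory):

```python
from kmeans import YOLO_Kmeans

# Cluster 9 anchor boxes from the WIDER training annotation rows;
# txt2clusters() prints the anchors and the average IoU and writes yolo_anchors.txt.
kmeans = YOLO_Kmeans(cluster_number=9,
                     filename='./annotations/wider_train_annotation.txt')
kmeans.txt2clusters()
```
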
/model_data/coco_classes.txt:
--------------------------------------------------------------------------------
1 | person
2 | bicycle
3 | car
4 | motorbike
5 | aeroplane
6 | bus
7 | train
8 | truck
9 | boat
10 | traffic light
11 | fire hydrant
12 | stop sign
13 | parking meter
14 | bench
15 | bird
16 | cat
17 | dog
18 | horse
19 | sheep
20 | cow
21 | elephant
22 | bear
23 | zebra
24 | giraffe
25 | backpack
26 | umbrella
27 | handbag
28 | tie
29 | suitcase
30 | frisbee
31 | skis
32 | snowboard
33 | sports ball
34 | kite
35 | baseball bat
36 | baseball glove
37 | skateboard
38 | surfboard
39 | tennis racket
40 | bottle
41 | wine glass
42 | cup
43 | fork
44 | knife
45 | spoon
46 | bowl
47 | banana
48 | apple
49 | sandwich
50 | orange
51 | broccoli
52 | carrot
53 | hot dog
54 | pizza
55 | donut
56 | cake
57 | chair
58 | sofa
59 | pottedplant
60 | bed
61 | diningtable
62 | toilet
63 | tvmonitor
64 | laptop
65 | mouse
66 | remote
67 | keyboard
68 | cell phone
69 | microwave
70 | oven
71 | toaster
72 | sink
73 | refrigerator
74 | book
75 | clock
76 | vase
77 | scissors
78 | teddy bear
79 | hair drier
80 | toothbrush
81 |
--------------------------------------------------------------------------------
/model_data/tiny_yolo_anchors.txt:
--------------------------------------------------------------------------------
1 | 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
2 |
--------------------------------------------------------------------------------
/model_data/voc_classes.txt:
--------------------------------------------------------------------------------
1 | aeroplane
2 | bicycle
3 | bird
4 | boat
5 | bottle
6 | bus
7 | car
8 | cat
9 | chair
10 | cow
11 | diningtable
12 | dog
13 | horse
14 | motorbike
15 | person
16 | pottedplant
17 | sheep
18 | sofa
19 | train
20 | tvmonitor
21 |
--------------------------------------------------------------------------------
/model_data/wider_anchors.txt:
--------------------------------------------------------------------------------
1 | 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
2 |
--------------------------------------------------------------------------------
/model_data/wider_classes.txt:
--------------------------------------------------------------------------------
1 | face
--------------------------------------------------------------------------------
/model_data/yolo_anchors.txt:
--------------------------------------------------------------------------------
1 | 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
2 |
--------------------------------------------------------------------------------
/train.py:
--------------------------------------------------------------------------------
1 | """
2 | Retrain the YOLO model for your own dataset.
3 | """
4 |
5 | import numpy as np
6 | import keras.backend as K
7 | from keras.layers import Input, Lambda
8 | from keras.models import Model
9 | from keras.optimizers import Adam
10 | from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau, EarlyStopping
11 |
12 | from yolo3.model import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss
13 | from yolo3.utils import get_random_data
14 |
15 |
16 | def _main():
17 | train_annotation_path = './annotations/wider_train_annotation.txt'
18 | val_annotation_path = './annotations/wider_val_annotation.txt'
19 | log_dir = './logs/000/'
20 | classes_path = './model_data/wider_classes.txt'
21 | anchors_path = './model_data/wider_anchors.txt'
22 | class_names = get_classes(classes_path)
23 | num_classes = len(class_names)
24 | anchors = get_anchors(anchors_path)
25 |
26 | input_shape = (608,608) # multiple of 32, hw
27 |
28 | is_tiny_version = len(anchors)==6 # default setting
29 | if is_tiny_version:
30 | model = create_tiny_model(input_shape, anchors, num_classes,
31 | freeze_body=2, weights_path='./model_data/tiny_yolo_weights.h5')
32 | else:
33 | model = create_model(input_shape, anchors, num_classes,
34 | freeze_body=2, weights_path='./model_data/wider_face_yolo.h5') # make sure you know what you freeze
35 |
36 | logging = TensorBoard(log_dir=log_dir)
37 | checkpoint = ModelCheckpoint(log_dir + 'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5',
38 | monitor='val_loss', save_weights_only=True, save_best_only=True, period=3)
39 | reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1)
40 | early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1)
41 |
42 | val_split = 0.1  # the validation split is taken from the training annotations; val_annotation_path is not used here
43 | with open(train_annotation_path) as f:
44 | lines = f.readlines()
45 |
46 | np.random.seed(10101)
47 | np.random.shuffle(lines)
48 | np.random.seed(None)
49 | num_val = int(len(lines)*val_split)
50 | num_train = len(lines) - num_val
51 |
52 | if True:
53 | for i in range(len(model.layers)):
54 | model.layers[i].trainable = True
55 | model.compile(optimizer=Adam(lr=1e-4), loss={'yolo_loss': lambda y_true, y_pred: y_pred}) # recompile to apply the change
56 | print('Unfreeze all of the layers.')
57 |
58 | batch_size = 4 # note that more GPU memory is required after unfreezing the body
59 | print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
60 | model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
61 | steps_per_epoch=max(1, num_train//batch_size),
62 | validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
63 | validation_steps=max(1, num_val//batch_size),
64 | epochs=100,
65 | initial_epoch=60,
66 | callbacks=[logging, checkpoint, reduce_lr, early_stopping])
67 | model.save_weights(log_dir + 'trained_weights_final.h5')
68 |
69 | # Further training if needed.
70 |
71 |
72 | def get_classes(classes_path):
73 | '''loads the classes'''
74 | with open(classes_path) as f:
75 | class_names = f.readlines()
76 | class_names = [c.strip() for c in class_names]
77 | return class_names
78 |
79 | def get_anchors(anchors_path):
80 | '''loads the anchors from a file'''
81 | with open(anchors_path) as f:
82 | anchors = f.readline()
83 | anchors = [float(x) for x in anchors.split(',')]
84 | return np.array(anchors).reshape(-1, 2)
85 |
86 |
87 | def create_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2,
88 | weights_path='model_data/wider_face_yolo.h5'):
89 | '''create the training model'''
90 | K.clear_session() # get a new session
91 | image_input = Input(shape=(None, None, 3))
92 | h, w = input_shape
93 | num_anchors = len(anchors)
94 |
95 | y_true = [Input(shape=(h//{0:32, 1:16, 2:8}[l], w//{0:32, 1:16, 2:8}[l], \
96 | num_anchors//3, num_classes+5)) for l in range(3)]
97 |
98 | model_body = yolo_body(image_input, num_anchors//3, num_classes)
99 | print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))
100 |
101 | if load_pretrained:
102 | model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
103 | print('Load weights {}.'.format(weights_path))
104 | if freeze_body in [1, 2]:
105 | # Freeze darknet53 body or freeze all but 3 output layers.
106 | num = (185, len(model_body.layers)-3)[freeze_body-1]
107 | for i in range(num): model_body.layers[i].trainable = False
108 | print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))
109 |
110 | model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
111 | arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})(
112 | [*model_body.output, *y_true])
113 | model = Model([model_body.input, *y_true], model_loss)
114 |
115 | return model
116 |
117 | def create_tiny_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2,
118 | weights_path='model_data/tiny_yolo_weights.h5'):
119 | '''create the training model, for Tiny YOLOv3'''
120 | K.clear_session() # get a new session
121 | image_input = Input(shape=(None, None, 3))
122 | h, w = input_shape
123 | num_anchors = len(anchors)
124 |
125 | y_true = [Input(shape=(h//{0:32, 1:16}[l], w//{0:32, 1:16}[l], \
126 | num_anchors//2, num_classes+5)) for l in range(2)]
127 |
128 | model_body = tiny_yolo_body(image_input, num_anchors//2, num_classes)
129 | print('Create Tiny YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))
130 |
131 | if load_pretrained:
132 | model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
133 | print('Load weights {}.'.format(weights_path))
134 | if freeze_body in [1, 2]:
135 | # Freeze the darknet body or freeze all but 2 output layers.
136 | num = (20, len(model_body.layers)-2)[freeze_body-1]
137 | for i in range(num): model_body.layers[i].trainable = False
138 | print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))
139 |
140 | model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
141 | arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.7})(
142 | [*model_body.output, *y_true])
143 | model = Model([model_body.input, *y_true], model_loss)
144 |
145 | return model
146 |
147 | def data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes):
148 | '''data generator for fit_generator'''
149 | n = len(annotation_lines)
150 | i = 0
151 | while True:
152 | image_data = []
153 | box_data = []
154 | for b in range(batch_size):
155 | if i==0:
156 | np.random.shuffle(annotation_lines)
157 | image, box = get_random_data(annotation_lines[i], input_shape, random=True)
158 | image_data.append(image)
159 | box_data.append(box)
160 | i = (i+1) % n
161 | image_data = np.array(image_data)
162 | box_data = np.array(box_data)
163 | y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes)
164 | yield [image_data, *y_true], np.zeros(batch_size)
165 |
166 | def data_generator_wrapper(annotation_lines, batch_size, input_shape, anchors, num_classes):
167 | n = len(annotation_lines)
168 | if n==0 or batch_size<=0: return None
169 | return data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes)
170 |
171 | if __name__ == '__main__':
172 | _main()
173 |
--------------------------------------------------------------------------------
/train_bottleneck.py:
--------------------------------------------------------------------------------
1 | """
2 | Retrain the YOLO model for your own dataset.
3 | """
4 | import os
5 | import numpy as np
6 | import keras.backend as K
7 | from keras.layers import Input, Lambda
8 | from keras.models import Model
9 | from keras.optimizers import Adam
10 | from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau, EarlyStopping
11 |
12 | from yolo3.model import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss
13 | from yolo3.utils import get_random_data
14 |
15 |
16 | def _main():
17 | annotation_path = 'train.txt'
18 | log_dir = 'logs/000/'
19 | classes_path = 'model_data/coco_classes.txt'
20 | anchors_path = 'model_data/yolo_anchors.txt'
21 | class_names = get_classes(classes_path)
22 | num_classes = len(class_names)
23 | anchors = get_anchors(anchors_path)
24 |
25 | input_shape = (416,416) # multiple of 32, hw
26 |
27 | model, bottleneck_model, last_layer_model = create_model(input_shape, anchors, num_classes,
28 | freeze_body=2, weights_path='model_data/yolo_weights.h5') # make sure you know what you freeze
29 |
30 | logging = TensorBoard(log_dir=log_dir)
31 | checkpoint = ModelCheckpoint(log_dir + 'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5',
32 | monitor='val_loss', save_weights_only=True, save_best_only=True, period=3)
33 | reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1)
34 | early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1)
35 |
36 | val_split = 0.1
37 | with open(annotation_path) as f:
38 | lines = f.readlines()
39 | np.random.seed(10101)
40 | np.random.shuffle(lines)
41 | np.random.seed(None)
42 | num_val = int(len(lines)*val_split)
43 | num_train = len(lines) - num_val
44 |
45 | # Train with frozen layers first, to get a stable loss.
46 | # Adjust num epochs to your dataset. This step is enough to obtain a reasonably good model.
47 | if True:
48 | # perform bottleneck training
49 | if not os.path.isfile("bottlenecks.npz"):
50 | print("calculating bottlenecks")
51 | batch_size=8
52 | bottlenecks=bottleneck_model.predict_generator(data_generator_wrapper(lines, batch_size, input_shape, anchors, num_classes, random=False, verbose=True),
53 | steps=(len(lines)//batch_size)+1, max_queue_size=1)
54 | np.savez("bottlenecks.npz", bot0=bottlenecks[0], bot1=bottlenecks[1], bot2=bottlenecks[2])
55 |
56 | # load bottleneck features from file
57 | dict_bot=np.load("bottlenecks.npz")
58 | bottlenecks_train=[dict_bot["bot0"][:num_train], dict_bot["bot1"][:num_train], dict_bot["bot2"][:num_train]]
59 | bottlenecks_val=[dict_bot["bot0"][num_train:], dict_bot["bot1"][num_train:], dict_bot["bot2"][num_train:]]
60 |
61 | # train last layers with fixed bottleneck features
62 | batch_size=8
63 | print("Training last layers with bottleneck features")
64 | print('with {} samples, val on {} samples and batch size {}.'.format(num_train, num_val, batch_size))
65 | last_layer_model.compile(optimizer='adam', loss={'yolo_loss': lambda y_true, y_pred: y_pred})
66 | last_layer_model.fit_generator(bottleneck_generator(lines[:num_train], batch_size, input_shape, anchors, num_classes, bottlenecks_train),
67 | steps_per_epoch=max(1, num_train//batch_size),
68 | validation_data=bottleneck_generator(lines[num_train:], batch_size, input_shape, anchors, num_classes, bottlenecks_val),
69 | validation_steps=max(1, num_val//batch_size),
70 | epochs=30,
71 | initial_epoch=0, max_queue_size=1)
72 | model.save_weights(log_dir + 'trained_weights_stage_0.h5')
73 |
74 | # train last layers with random augmented data
75 | model.compile(optimizer=Adam(lr=1e-3), loss={
76 | # use custom yolo_loss Lambda layer.
77 | 'yolo_loss': lambda y_true, y_pred: y_pred})
78 | batch_size = 16
79 | print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
80 | model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
81 | steps_per_epoch=max(1, num_train//batch_size),
82 | validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
83 | validation_steps=max(1, num_val//batch_size),
84 | epochs=50,
85 | initial_epoch=0,
86 | callbacks=[logging, checkpoint])
87 | model.save_weights(log_dir + 'trained_weights_stage_1.h5')
88 |
89 | # Unfreeze and continue training, to fine-tune.
90 | # Train longer if the result is not good.
91 | if True:
92 | for i in range(len(model.layers)):
93 | model.layers[i].trainable = True
94 | model.compile(optimizer=Adam(lr=1e-4), loss={'yolo_loss': lambda y_true, y_pred: y_pred}) # recompile to apply the change
95 | print('Unfreeze all of the layers.')
96 |
97 | batch_size = 4 # note that more GPU memory is required after unfreezing the body
98 | print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
99 | model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
100 | steps_per_epoch=max(1, num_train//batch_size),
101 | validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
102 | validation_steps=max(1, num_val//batch_size),
103 | epochs=100,
104 | initial_epoch=50,
105 | callbacks=[logging, checkpoint, reduce_lr, early_stopping])
106 | model.save_weights(log_dir + 'trained_weights_final.h5')
107 |
108 | # Further training if needed.
109 |
110 |
111 | def get_classes(classes_path):
112 | '''loads the classes'''
113 | with open(classes_path) as f:
114 | class_names = f.readlines()
115 | class_names = [c.strip() for c in class_names]
116 | return class_names
117 |
118 | def get_anchors(anchors_path):
119 | '''loads the anchors from a file'''
120 | with open(anchors_path) as f:
121 | anchors = f.readline()
122 | anchors = [float(x) for x in anchors.split(',')]
123 | return np.array(anchors).reshape(-1, 2)
124 |
125 |
126 | def create_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2,
127 | weights_path='model_data/yolo_weights.h5'):
128 | '''create the training model'''
129 | K.clear_session() # get a new session
130 | image_input = Input(shape=(None, None, 3))
131 | h, w = input_shape
132 | num_anchors = len(anchors)
133 |
134 | y_true = [Input(shape=(h//{0:32, 1:16, 2:8}[l], w//{0:32, 1:16, 2:8}[l], \
135 | num_anchors//3, num_classes+5)) for l in range(3)]
136 |
137 | model_body = yolo_body(image_input, num_anchors//3, num_classes)
138 | print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))
139 |
140 | if load_pretrained:
141 | model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
142 | print('Load weights {}.'.format(weights_path))
143 | if freeze_body in [1, 2]:
144 | # Freeze darknet53 body or freeze all but 3 output layers.
145 | num = (185, len(model_body.layers)-3)[freeze_body-1]
146 | for i in range(num): model_body.layers[i].trainable = False
147 | print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))
148 |
149 |     # take the outputs of the three layers just before the final output convolutions and build the bottleneck model from them
150 | out1=model_body.layers[246].output
151 | out2=model_body.layers[247].output
152 | out3=model_body.layers[248].output
153 | bottleneck_model = Model([model_body.input, *y_true], [out1, out2, out3])
154 |
155 |     # build a model for the final output layers of the yolo model
156 | in0 = Input(shape=bottleneck_model.output[0].shape[1:].as_list())
157 | in1 = Input(shape=bottleneck_model.output[1].shape[1:].as_list())
158 | in2 = Input(shape=bottleneck_model.output[2].shape[1:].as_list())
159 | last_out0=model_body.layers[249](in0)
160 | last_out1=model_body.layers[250](in1)
161 | last_out2=model_body.layers[251](in2)
162 | model_last=Model(inputs=[in0, in1, in2], outputs=[last_out0, last_out1, last_out2])
163 | model_loss_last =Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
164 | arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})(
165 | [*model_last.output, *y_true])
166 | last_layer_model = Model([in0,in1,in2, *y_true], model_loss_last)
167 |
168 |
169 | model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
170 | arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})(
171 | [*model_body.output, *y_true])
172 | model = Model([model_body.input, *y_true], model_loss)
173 |
174 | return model, bottleneck_model, last_layer_model
175 |
176 | def data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes, random=True, verbose=False):
177 | '''data generator for fit_generator'''
178 | n = len(annotation_lines)
179 | i = 0
180 | while True:
181 | image_data = []
182 | box_data = []
183 | for b in range(batch_size):
184 | if i==0 and random:
185 | np.random.shuffle(annotation_lines)
186 | image, box = get_random_data(annotation_lines[i], input_shape, random=random)
187 | image_data.append(image)
188 | box_data.append(box)
189 | i = (i+1) % n
190 | image_data = np.array(image_data)
191 | if verbose:
192 | print("Progress: ",i,"/",n)
193 | box_data = np.array(box_data)
194 | y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes)
195 | yield [image_data, *y_true], np.zeros(batch_size)
196 |
197 | def data_generator_wrapper(annotation_lines, batch_size, input_shape, anchors, num_classes, random=True, verbose=False):
198 | n = len(annotation_lines)
199 | if n==0 or batch_size<=0: return None
200 | return data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes, random, verbose)
201 |
202 | def bottleneck_generator(annotation_lines, batch_size, input_shape, anchors, num_classes, bottlenecks):
203 | n = len(annotation_lines)
204 | i = 0
205 | while True:
206 | box_data = []
207 | b0=np.zeros((batch_size,bottlenecks[0].shape[1],bottlenecks[0].shape[2],bottlenecks[0].shape[3]))
208 | b1=np.zeros((batch_size,bottlenecks[1].shape[1],bottlenecks[1].shape[2],bottlenecks[1].shape[3]))
209 | b2=np.zeros((batch_size,bottlenecks[2].shape[1],bottlenecks[2].shape[2],bottlenecks[2].shape[3]))
210 | for b in range(batch_size):
211 | _, box = get_random_data(annotation_lines[i], input_shape, random=False, proc_img=False)
212 | box_data.append(box)
213 | b0[b]=bottlenecks[0][i]
214 | b1[b]=bottlenecks[1][i]
215 | b2[b]=bottlenecks[2][i]
216 | i = (i+1) % n
217 | box_data = np.array(box_data)
218 | y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes)
219 | yield [b0, b1, b2, *y_true], np.zeros(batch_size)
220 |
221 | if __name__ == '__main__':
222 | _main()
223 |
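224 | # Note on the outputs of the three stages above (file names taken from the
225 | # model.save_weights calls in _main; log_dir is set earlier in this script):
226 | #   trained_weights_stage_0.h5 - last layers trained on cached bottleneck features
227 | #   trained_weights_stage_1.h5 - last layers trained on randomly augmented images, body still frozen
228 | #   trained_weights_final.h5   - full model fine-tuned with all layers unfrozen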
--------------------------------------------------------------------------------
/wider_face_split/readme.txt:
--------------------------------------------------------------------------------
1 | Attached are the mappings between attribute names and label values.
2 |
3 | blur:
4 | clear->0
5 | normal blur->1
6 | heavy blur->2
7 |
8 | expression:
9 | typical expression->0
10 | exaggerate expression->1
11 |
12 | illumination:
13 | normal illumination->0
14 | extreme illumination->1
15 |
16 | occlusion:
17 | no occlusion->0
18 | partial occlusion->1
19 | heavy occlusion->2
20 |
21 | pose:
22 | typical pose->0
23 | atypical pose->1
24 |
25 | invalid:
26 | false->0(valid image)
27 | true->1(invalid image)
28 |
29 | The format of the txt ground truth is:
30 | File name
31 | Number of bounding boxes
32 | x1, y1, w, h, blur, expression, illumination, invalid, occlusion, pose
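33 | 
34 | For example, a single entry with one face could look like this (illustrative values):
35 | 0--Parade/0_Parade_marchingband_1_849.jpg
36 | 1
37 | 449 330 122 149 0 0 0 0 0 0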
--------------------------------------------------------------------------------
/wider_face_split/wider_face_test.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/swdev1202/keras-yolo3-facedetection/e51b6b159f428add904bec4c2b6ed58a3db41ca6/wider_face_split/wider_face_test.mat
--------------------------------------------------------------------------------
/wider_face_split/wider_face_train.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/swdev1202/keras-yolo3-facedetection/e51b6b159f428add904bec4c2b6ed58a3db41ca6/wider_face_split/wider_face_train.mat
--------------------------------------------------------------------------------
/wider_face_split/wider_face_val.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/swdev1202/keras-yolo3-facedetection/e51b6b159f428add904bec4c2b6ed58a3db41ca6/wider_face_split/wider_face_val.mat
--------------------------------------------------------------------------------
/yolo.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Class definition of YOLO_v3 style detection model on image and video
4 | """
5 |
6 | import colorsys
7 | import os
8 | from timeit import default_timer as timer
9 |
10 | import numpy as np
11 | from keras import backend as K
12 | from keras.models import load_model
13 | from keras.layers import Input
14 | from PIL import Image, ImageFont, ImageDraw
15 |
16 | from yolo3.model import yolo_eval, yolo_body, tiny_yolo_body
17 | from yolo3.utils import letterbox_image
18 | import os
19 | from keras.utils import multi_gpu_model
20 |
21 | class YOLO(object):
22 | # _defaults = {
23 | # "model_path": 'model_data/yolo.h5',
24 | # "anchors_path": 'model_data/yolo_anchors.txt',
25 | # "classes_path": 'model_data/coco_classes.txt',
26 | # "score" : 0.3,
27 | # "iou" : 0.45,
28 | # "model_image_size" : (416, 416),
29 | # "gpu_num" : 1,
30 | # }
31 | _defaults = {
32 | "model_path": 'model_data/wider_face_yolo.h5',
33 | "anchors_path": 'model_data/wider_anchors.txt',
34 | "classes_path": 'model_data/wider_classes.txt',
35 | "score" : 0.3,
36 | "iou" : 0.45,
37 | "model_image_size" : (608, 608),
38 | "gpu_num" : 1,
39 | }
40 |
41 | @classmethod
42 | def get_defaults(cls, n):
43 | if n in cls._defaults:
44 | return cls._defaults[n]
45 | else:
46 | return "Unrecognized attribute name '" + n + "'"
47 |
48 | def __init__(self, **kwargs):
49 | self.__dict__.update(self._defaults) # set up default values
50 | self.__dict__.update(kwargs) # and update with user overrides
51 | self.class_names = self._get_class()
52 | self.anchors = self._get_anchors()
53 | self.sess = K.get_session()
54 | self.boxes, self.scores, self.classes = self.generate()
55 |
56 | def _get_class(self):
57 | classes_path = os.path.expanduser(self.classes_path)
58 | with open(classes_path) as f:
59 | class_names = f.readlines()
60 | class_names = [c.strip() for c in class_names]
61 | return class_names
62 |
63 | def _get_anchors(self):
64 | anchors_path = os.path.expanduser(self.anchors_path)
65 | with open(anchors_path) as f:
66 | anchors = f.readline()
67 | anchors = [float(x) for x in anchors.split(',')]
68 | return np.array(anchors).reshape(-1, 2)
69 |
70 | def generate(self):
71 | model_path = os.path.expanduser(self.model_path)
72 | assert model_path.endswith('.h5'), 'Keras model or weights must be a .h5 file.'
73 |
74 | # Load model, or construct model and load weights.
75 | num_anchors = len(self.anchors)
76 | num_classes = len(self.class_names)
77 | is_tiny_version = num_anchors==6 # default setting
78 | try:
79 | self.yolo_model = load_model(model_path, compile=False)
80 | except:
81 | self.yolo_model = tiny_yolo_body(Input(shape=(None,None,3)), num_anchors//2, num_classes) \
82 | if is_tiny_version else yolo_body(Input(shape=(None,None,3)), num_anchors//3, num_classes)
83 | self.yolo_model.load_weights(self.model_path) # make sure model, anchors and classes match
84 | else:
85 | assert self.yolo_model.layers[-1].output_shape[-1] == \
86 | num_anchors/len(self.yolo_model.output) * (num_classes + 5), \
87 | 'Mismatch between model and given anchor and class sizes'
88 |
89 | print('{} model, anchors, and classes loaded.'.format(model_path))
90 |
91 | # Generate colors for drawing bounding boxes.
92 | hsv_tuples = [(x / len(self.class_names), 1., 1.)
93 | for x in range(len(self.class_names))]
94 | self.colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
95 | self.colors = list(
96 | map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)),
97 | self.colors))
98 | np.random.seed(10101) # Fixed seed for consistent colors across runs.
99 | np.random.shuffle(self.colors) # Shuffle colors to decorrelate adjacent classes.
100 | np.random.seed(None) # Reset seed to default.
101 |
102 | # Generate output tensor targets for filtered bounding boxes.
103 | self.input_image_shape = K.placeholder(shape=(2, ))
104 | if self.gpu_num>=2:
105 | self.yolo_model = multi_gpu_model(self.yolo_model, gpus=self.gpu_num)
106 | boxes, scores, classes = yolo_eval(self.yolo_model.output, self.anchors,
107 | len(self.class_names), self.input_image_shape,
108 | score_threshold=self.score, iou_threshold=self.iou)
109 | return boxes, scores, classes
110 |
111 | def detect_image(self, image):
112 | start = timer()
113 |
114 | if self.model_image_size != (None, None):
115 | assert self.model_image_size[0]%32 == 0, 'Multiples of 32 required'
116 | assert self.model_image_size[1]%32 == 0, 'Multiples of 32 required'
117 | boxed_image = letterbox_image(image, tuple(reversed(self.model_image_size)))
118 | else:
119 | new_image_size = (image.width - (image.width % 32),
120 | image.height - (image.height % 32))
121 | boxed_image = letterbox_image(image, new_image_size)
122 | image_data = np.array(boxed_image, dtype='float32')
123 |
124 | print(image_data.shape)
125 | image_data /= 255.
126 | image_data = np.expand_dims(image_data, 0) # Add batch dimension.
127 |
128 | out_boxes, out_scores, out_classes = self.sess.run(
129 | [self.boxes, self.scores, self.classes],
130 | feed_dict={
131 | self.yolo_model.input: image_data,
132 | self.input_image_shape: [image.size[1], image.size[0]],
133 | K.learning_phase(): 0
134 | })
135 |
136 | print('Found {} boxes for {}'.format(len(out_boxes), 'img'))
137 |
138 | font = ImageFont.truetype(font='font/FiraMono-Medium.otf',
139 | size=np.floor(3e-2 * image.size[1] + 0.5).astype('int32'))
140 | thickness = (image.size[0] + image.size[1]) // 300
141 |
142 | for i, c in reversed(list(enumerate(out_classes))):
143 | predicted_class = self.class_names[c]
144 | box = out_boxes[i]
145 | score = out_scores[i]
146 |
147 | label = '{} {:.2f}'.format(predicted_class, score)
148 | draw = ImageDraw.Draw(image)
149 | label_size = draw.textsize(label, font)
150 |
151 | top, left, bottom, right = box
152 | top = max(0, np.floor(top + 0.5).astype('int32'))
153 | left = max(0, np.floor(left + 0.5).astype('int32'))
154 | bottom = min(image.size[1], np.floor(bottom + 0.5).astype('int32'))
155 | right = min(image.size[0], np.floor(right + 0.5).astype('int32'))
156 | print(label, (left, top), (right, bottom))
157 |
158 | # if top - label_size[1] >= 0:
159 | # text_origin = np.array([left, top - label_size[1]])
160 | # else:
161 | # text_origin = np.array([left, top + 1])
162 |
163 | # # My kingdom for a good redistributable image drawing library.
164 | # for i in range(thickness):
165 | # draw.rectangle(
166 | # [left + i, top + i, right - i, bottom - i],
167 | # outline=self.colors[c])
168 | # draw.rectangle(
169 | # [tuple(text_origin), tuple(text_origin + label_size)],
170 | # fill=self.colors[c])
171 | # draw.text(text_origin, label, fill=(0, 0, 0), font=font)
172 |
173 | ############### FOR PIXELIZATION ################
174 | area = (left, top, right, bottom)
175 | filter = image.crop(area)
176 | filter = filter.resize((8, 8), Image.ANTIALIAS)
177 | filter = filter.resize(((right-left),(bottom-top)), Image.ANTIALIAS)
178 | image.paste(filter, (left,top))
179 |
180 | del draw
181 |
182 | end = timer()
183 | print(end - start)
184 | return image
185 |
186 | def close_session(self):
187 | self.sess.close()
188 |
189 | def detect_video(yolo, video_path, output_path=""):
190 | import cv2
191 | vid = cv2.VideoCapture(video_path)
192 | if not vid.isOpened():
193 | raise IOError("Couldn't open webcam or video")
194 | #video_FourCC = int(vid.get(cv2.CAP_PROP_FOURCC))
195 | video_FourCC = cv2.VideoWriter_fourcc(*"XVID")
196 | video_fps = vid.get(cv2.CAP_PROP_FPS)
197 | video_size = (int(vid.get(cv2.CAP_PROP_FRAME_WIDTH)),
198 | int(vid.get(cv2.CAP_PROP_FRAME_HEIGHT)))
199 | isOutput = True if output_path != "" else False
200 | if isOutput:
201 | print("!!! TYPE:", type(output_path), type(video_FourCC), type(video_fps), type(video_size))
202 | out = cv2.VideoWriter(output_path, video_FourCC, 10, video_size)
203 | accum_time = 0
204 | curr_fps = 0
205 | fps = "FPS: ??"
206 | prev_time = timer()
207 | while True:
208 | return_value, frame = vid.read()
209 | if frame is None:
210 | break
211 | #frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
212 | image = Image.fromarray(frame)
213 | image = yolo.detect_image(image)
214 | result = np.asarray(image)
215 | curr_time = timer()
216 | exec_time = curr_time - prev_time
217 | prev_time = curr_time
218 | accum_time = accum_time + exec_time
219 | curr_fps = curr_fps + 1
220 | if accum_time > 1:
221 | accum_time = accum_time - 1
222 | fps = "FPS: " + str(curr_fps)
223 | curr_fps = 0
224 | #cv2.putText(result, text=fps, org=(3, 15), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
225 | # fontScale=0.50, color=(255, 0, 0), thickness=2)
226 | #cv2.namedWindow("result", cv2.WINDOW_NORMAL)
227 | # cv2.imshow("result", result)
228 | if isOutput:
229 | #output_name = '/video_out/' + str(curr_time)
230 | #image.save(output_name, "JPG")
231 | out.write(result)
232 | if cv2.waitKey(1) & 0xFF == ord('q'):
233 | break
234 | yolo.close_session()
235 |
236 |
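237 | # Minimal usage sketch (the image path below is illustrative; the defaults above
238 | # load model_data/wider_face_yolo.h5 with the WIDER anchors and single face class):
239 | #
240 | #   from yolo import YOLO
241 | #   from PIL import Image
242 | #
243 | #   yolo = YOLO()                  # keyword overrides are merged into _defaults, e.g. YOLO(score=0.5)
244 | #   result = yolo.detect_image(Image.open('group_photo.jpg'))   # detected faces are pixelized
245 | #   result.save('group_photo_anonymized.jpg')
246 | #   yolo.close_session()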
--------------------------------------------------------------------------------
/yolo3/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/swdev1202/keras-yolo3-facedetection/e51b6b159f428add904bec4c2b6ed58a3db41ca6/yolo3/__init__.py
--------------------------------------------------------------------------------
/yolo3/model.py:
--------------------------------------------------------------------------------
1 | """YOLO_v3 Model Defined in Keras."""
2 |
3 | from functools import wraps
4 |
5 | import numpy as np
6 | import tensorflow as tf
7 | from keras import backend as K
8 | from keras.layers import Conv2D, Add, ZeroPadding2D, UpSampling2D, Concatenate, MaxPooling2D
9 | from keras.layers.advanced_activations import LeakyReLU
10 | from keras.layers.normalization import BatchNormalization
11 | from keras.models import Model
12 | from keras.regularizers import l2
13 |
14 | from yolo3.utils import compose
15 |
16 |
17 | @wraps(Conv2D)
18 | def DarknetConv2D(*args, **kwargs):
19 | """Wrapper to set Darknet parameters for Convolution2D."""
20 | darknet_conv_kwargs = {'kernel_regularizer': l2(5e-4)}
21 | darknet_conv_kwargs['padding'] = 'valid' if kwargs.get('strides')==(2,2) else 'same'
22 | darknet_conv_kwargs.update(kwargs)
23 | return Conv2D(*args, **darknet_conv_kwargs)
24 |
25 | def DarknetConv2D_BN_Leaky(*args, **kwargs):
26 | """Darknet Convolution2D followed by BatchNormalization and LeakyReLU."""
27 | no_bias_kwargs = {'use_bias': False}
28 | no_bias_kwargs.update(kwargs)
29 | return compose(
30 | DarknetConv2D(*args, **no_bias_kwargs),
31 | BatchNormalization(),
32 | LeakyReLU(alpha=0.1))
33 |
34 | def resblock_body(x, num_filters, num_blocks):
35 | '''A series of resblocks starting with a downsampling Convolution2D'''
36 | # Darknet uses left and top padding instead of 'same' mode
37 | x = ZeroPadding2D(((1,0),(1,0)))(x)
38 | x = DarknetConv2D_BN_Leaky(num_filters, (3,3), strides=(2,2))(x)
39 | for i in range(num_blocks):
40 | y = compose(
41 | DarknetConv2D_BN_Leaky(num_filters//2, (1,1)),
42 | DarknetConv2D_BN_Leaky(num_filters, (3,3)))(x)
43 | x = Add()([x,y])
44 | return x
45 |
46 | def darknet_body(x):
47 |     '''Darknet body having 52 Convolution2D layers'''
48 | x = DarknetConv2D_BN_Leaky(32, (3,3))(x)
49 | x = resblock_body(x, 64, 1)
50 | x = resblock_body(x, 128, 2)
51 | x = resblock_body(x, 256, 8)
52 | x = resblock_body(x, 512, 8)
53 | x = resblock_body(x, 1024, 4)
54 | return x
55 |
56 | def make_last_layers(x, num_filters, out_filters):
57 | '''6 Conv2D_BN_Leaky layers followed by a Conv2D_linear layer'''
58 | x = compose(
59 | DarknetConv2D_BN_Leaky(num_filters, (1,1)),
60 | DarknetConv2D_BN_Leaky(num_filters*2, (3,3)),
61 | DarknetConv2D_BN_Leaky(num_filters, (1,1)),
62 | DarknetConv2D_BN_Leaky(num_filters*2, (3,3)),
63 | DarknetConv2D_BN_Leaky(num_filters, (1,1)))(x)
64 | y = compose(
65 | DarknetConv2D_BN_Leaky(num_filters*2, (3,3)),
66 | DarknetConv2D(out_filters, (1,1)))(x)
67 | return x, y
68 |
69 |
70 | def yolo_body(inputs, num_anchors, num_classes):
71 | """Create YOLO_V3 model CNN body in Keras."""
72 | darknet = Model(inputs, darknet_body(inputs))
73 | x, y1 = make_last_layers(darknet.output, 512, num_anchors*(num_classes+5))
74 |
75 | x = compose(
76 | DarknetConv2D_BN_Leaky(256, (1,1)),
77 | UpSampling2D(2))(x)
78 | x = Concatenate()([x,darknet.layers[152].output])
79 | x, y2 = make_last_layers(x, 256, num_anchors*(num_classes+5))
80 |
81 | x = compose(
82 | DarknetConv2D_BN_Leaky(128, (1,1)),
83 | UpSampling2D(2))(x)
84 | x = Concatenate()([x,darknet.layers[92].output])
85 | x, y3 = make_last_layers(x, 128, num_anchors*(num_classes+5))
86 |
87 | return Model(inputs, [y1,y2,y3])
88 |
89 | def tiny_yolo_body(inputs, num_anchors, num_classes):
90 | '''Create Tiny YOLO_v3 model CNN body in keras.'''
91 | x1 = compose(
92 | DarknetConv2D_BN_Leaky(16, (3,3)),
93 | MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
94 | DarknetConv2D_BN_Leaky(32, (3,3)),
95 | MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
96 | DarknetConv2D_BN_Leaky(64, (3,3)),
97 | MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
98 | DarknetConv2D_BN_Leaky(128, (3,3)),
99 | MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
100 | DarknetConv2D_BN_Leaky(256, (3,3)))(inputs)
101 | x2 = compose(
102 | MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
103 | DarknetConv2D_BN_Leaky(512, (3,3)),
104 | MaxPooling2D(pool_size=(2,2), strides=(1,1), padding='same'),
105 | DarknetConv2D_BN_Leaky(1024, (3,3)),
106 | DarknetConv2D_BN_Leaky(256, (1,1)))(x1)
107 | y1 = compose(
108 | DarknetConv2D_BN_Leaky(512, (3,3)),
109 | DarknetConv2D(num_anchors*(num_classes+5), (1,1)))(x2)
110 |
111 | x2 = compose(
112 | DarknetConv2D_BN_Leaky(128, (1,1)),
113 | UpSampling2D(2))(x2)
114 | y2 = compose(
115 | Concatenate(),
116 | DarknetConv2D_BN_Leaky(256, (3,3)),
117 | DarknetConv2D(num_anchors*(num_classes+5), (1,1)))([x2,x1])
118 |
119 | return Model(inputs, [y1,y2])
120 |
121 |
122 | def yolo_head(feats, anchors, num_classes, input_shape, calc_loss=False):
123 | """Convert final layer features to bounding box parameters."""
124 | num_anchors = len(anchors)
125 | # Reshape to batch, height, width, num_anchors, box_params.
126 | anchors_tensor = K.reshape(K.constant(anchors), [1, 1, 1, num_anchors, 2])
127 |
128 | grid_shape = K.shape(feats)[1:3] # height, width
129 | grid_y = K.tile(K.reshape(K.arange(0, stop=grid_shape[0]), [-1, 1, 1, 1]),
130 | [1, grid_shape[1], 1, 1])
131 | grid_x = K.tile(K.reshape(K.arange(0, stop=grid_shape[1]), [1, -1, 1, 1]),
132 | [grid_shape[0], 1, 1, 1])
133 | grid = K.concatenate([grid_x, grid_y])
134 | grid = K.cast(grid, K.dtype(feats))
135 |
136 | feats = K.reshape(
137 | feats, [-1, grid_shape[0], grid_shape[1], num_anchors, num_classes + 5])
138 |
139 |     # Adjust predictions to each spatial grid point and anchor size.
140 | box_xy = (K.sigmoid(feats[..., :2]) + grid) / K.cast(grid_shape[::-1], K.dtype(feats))
141 | box_wh = K.exp(feats[..., 2:4]) * anchors_tensor / K.cast(input_shape[::-1], K.dtype(feats))
142 | box_confidence = K.sigmoid(feats[..., 4:5])
143 | box_class_probs = K.sigmoid(feats[..., 5:])
144 |
145 | if calc_loss == True:
146 | return grid, feats, box_xy, box_wh
147 | return box_xy, box_wh, box_confidence, box_class_probs
148 |
149 |
150 | def yolo_correct_boxes(box_xy, box_wh, input_shape, image_shape):
151 | '''Get corrected boxes'''
152 | box_yx = box_xy[..., ::-1]
153 | box_hw = box_wh[..., ::-1]
154 | input_shape = K.cast(input_shape, K.dtype(box_yx))
155 | image_shape = K.cast(image_shape, K.dtype(box_yx))
156 | new_shape = K.round(image_shape * K.min(input_shape/image_shape))
157 | offset = (input_shape-new_shape)/2./input_shape
158 | scale = input_shape/new_shape
159 | box_yx = (box_yx - offset) * scale
160 | box_hw *= scale
161 |
162 | box_mins = box_yx - (box_hw / 2.)
163 | box_maxes = box_yx + (box_hw / 2.)
164 | boxes = K.concatenate([
165 | box_mins[..., 0:1], # y_min
166 | box_mins[..., 1:2], # x_min
167 | box_maxes[..., 0:1], # y_max
168 | box_maxes[..., 1:2] # x_max
169 | ])
170 |
171 | # Scale boxes back to original image shape.
172 | boxes *= K.concatenate([image_shape, image_shape])
173 | return boxes
174 |
175 |
176 | def yolo_boxes_and_scores(feats, anchors, num_classes, input_shape, image_shape):
177 | '''Process Conv layer output'''
178 | box_xy, box_wh, box_confidence, box_class_probs = yolo_head(feats,
179 | anchors, num_classes, input_shape)
180 | boxes = yolo_correct_boxes(box_xy, box_wh, input_shape, image_shape)
181 | boxes = K.reshape(boxes, [-1, 4])
182 | box_scores = box_confidence * box_class_probs
183 | box_scores = K.reshape(box_scores, [-1, num_classes])
184 | return boxes, box_scores
185 |
186 |
187 | def yolo_eval(yolo_outputs,
188 | anchors,
189 | num_classes,
190 | image_shape,
191 | max_boxes=20,
192 | score_threshold=.6,
193 | iou_threshold=.5):
194 | """Evaluate YOLO model on given input and return filtered boxes."""
195 | num_layers = len(yolo_outputs)
196 | anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]] # default setting
197 | input_shape = K.shape(yolo_outputs[0])[1:3] * 32
198 | boxes = []
199 | box_scores = []
200 | for l in range(num_layers):
201 | _boxes, _box_scores = yolo_boxes_and_scores(yolo_outputs[l],
202 | anchors[anchor_mask[l]], num_classes, input_shape, image_shape)
203 | boxes.append(_boxes)
204 | box_scores.append(_box_scores)
205 | boxes = K.concatenate(boxes, axis=0)
206 | box_scores = K.concatenate(box_scores, axis=0)
207 |
208 | mask = box_scores >= score_threshold
209 | max_boxes_tensor = K.constant(max_boxes, dtype='int32')
210 | boxes_ = []
211 | scores_ = []
212 | classes_ = []
213 | for c in range(num_classes):
214 | # TODO: use keras backend instead of tf.
215 | class_boxes = tf.boolean_mask(boxes, mask[:, c])
216 | class_box_scores = tf.boolean_mask(box_scores[:, c], mask[:, c])
217 | nms_index = tf.image.non_max_suppression(
218 | class_boxes, class_box_scores, max_boxes_tensor, iou_threshold=iou_threshold)
219 | class_boxes = K.gather(class_boxes, nms_index)
220 | class_box_scores = K.gather(class_box_scores, nms_index)
221 | classes = K.ones_like(class_box_scores, 'int32') * c
222 | boxes_.append(class_boxes)
223 | scores_.append(class_box_scores)
224 | classes_.append(classes)
225 | boxes_ = K.concatenate(boxes_, axis=0)
226 | scores_ = K.concatenate(scores_, axis=0)
227 | classes_ = K.concatenate(classes_, axis=0)
228 |
229 | return boxes_, scores_, classes_
230 |
231 |
232 | def preprocess_true_boxes(true_boxes, input_shape, anchors, num_classes):
233 | '''Preprocess true boxes to training input format
234 |
235 | Parameters
236 | ----------
237 | true_boxes: array, shape=(m, T, 5)
238 | Absolute x_min, y_min, x_max, y_max, class_id relative to input_shape.
239 | input_shape: array-like, hw, multiples of 32
240 | anchors: array, shape=(N, 2), wh
241 | num_classes: integer
242 |
243 | Returns
244 | -------
245 |     y_true: list of array, shape like yolo_outputs, xywh are relative values
246 |
247 | '''
248 |     assert (true_boxes[..., 4]<num_classes).all(), 'class id must be less than num_classes'
249 |     num_layers = len(anchors)//3 # default setting
250 |     anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]]
251 | 
252 |     true_boxes = np.array(true_boxes, dtype='float32')
253 |     input_shape = np.array(input_shape, dtype='int32')
254 |     boxes_xy = (true_boxes[..., 0:2] + true_boxes[..., 2:4]) // 2
255 |     boxes_wh = true_boxes[..., 2:4] - true_boxes[..., 0:2]
256 |     true_boxes[..., 0:2] = boxes_xy/input_shape[::-1]
257 |     true_boxes[..., 2:4] = boxes_wh/input_shape[::-1]
258 | 
259 |     m = true_boxes.shape[0]
260 |     grid_shapes = [input_shape//{0:32, 1:16, 2:8}[l] for l in range(num_layers)]
261 |     y_true = [np.zeros((m,grid_shapes[l][0],grid_shapes[l][1],len(anchor_mask[l]),5+num_classes),
262 |         dtype='float32') for l in range(num_layers)]
263 | 
264 |     # Expand dim to apply broadcasting.
265 |     anchors = np.expand_dims(anchors, 0)
266 |     anchor_maxes = anchors / 2.
267 |     anchor_mins = -anchor_maxes
268 |     valid_mask = boxes_wh[..., 0]>0
269 |
270 | for b in range(m):
271 | # Discard zero rows.
272 | wh = boxes_wh[b, valid_mask[b]]
273 | if len(wh)==0: continue
274 | # Expand dim to apply broadcasting.
275 | wh = np.expand_dims(wh, -2)
276 | box_maxes = wh / 2.
277 | box_mins = -box_maxes
278 |
279 | intersect_mins = np.maximum(box_mins, anchor_mins)
280 | intersect_maxes = np.minimum(box_maxes, anchor_maxes)
281 | intersect_wh = np.maximum(intersect_maxes - intersect_mins, 0.)
282 | intersect_area = intersect_wh[..., 0] * intersect_wh[..., 1]
283 | box_area = wh[..., 0] * wh[..., 1]
284 | anchor_area = anchors[..., 0] * anchors[..., 1]
285 | iou = intersect_area / (box_area + anchor_area - intersect_area)
286 |
287 | # Find best anchor for each true box
288 | best_anchor = np.argmax(iou, axis=-1)
289 |
290 | for t, n in enumerate(best_anchor):
291 | for l in range(num_layers):
292 | if n in anchor_mask[l]:
293 | i = np.floor(true_boxes[b,t,0]*grid_shapes[l][1]).astype('int32')
294 | j = np.floor(true_boxes[b,t,1]*grid_shapes[l][0]).astype('int32')
295 | k = anchor_mask[l].index(n)
296 | c = true_boxes[b,t, 4].astype('int32')
297 | y_true[l][b, j, i, k, 0:4] = true_boxes[b,t, 0:4]
298 | y_true[l][b, j, i, k, 4] = 1
299 | y_true[l][b, j, i, k, 5+c] = 1
300 |
301 | return y_true
302 |
303 |
304 | def box_iou(b1, b2):
305 | '''Return iou tensor
306 |
307 | Parameters
308 | ----------
309 | b1: tensor, shape=(i1,...,iN, 4), xywh
310 | b2: tensor, shape=(j, 4), xywh
311 |
312 | Returns
313 | -------
314 | iou: tensor, shape=(i1,...,iN, j)
315 |
316 | '''
317 |
318 | # Expand dim to apply broadcasting.
319 | b1 = K.expand_dims(b1, -2)
320 | b1_xy = b1[..., :2]
321 | b1_wh = b1[..., 2:4]
322 | b1_wh_half = b1_wh/2.
323 | b1_mins = b1_xy - b1_wh_half
324 | b1_maxes = b1_xy + b1_wh_half
325 |
326 | # Expand dim to apply broadcasting.
327 | b2 = K.expand_dims(b2, 0)
328 | b2_xy = b2[..., :2]
329 | b2_wh = b2[..., 2:4]
330 | b2_wh_half = b2_wh/2.
331 | b2_mins = b2_xy - b2_wh_half
332 | b2_maxes = b2_xy + b2_wh_half
333 |
334 | intersect_mins = K.maximum(b1_mins, b2_mins)
335 | intersect_maxes = K.minimum(b1_maxes, b2_maxes)
336 | intersect_wh = K.maximum(intersect_maxes - intersect_mins, 0.)
337 | intersect_area = intersect_wh[..., 0] * intersect_wh[..., 1]
338 | b1_area = b1_wh[..., 0] * b1_wh[..., 1]
339 | b2_area = b2_wh[..., 0] * b2_wh[..., 1]
340 | iou = intersect_area / (b1_area + b2_area - intersect_area)
341 |
342 | return iou
343 |
344 |
345 | def yolo_loss(args, anchors, num_classes, ignore_thresh=.5, print_loss=False):
346 | '''Return yolo_loss tensor
347 |
348 | Parameters
349 | ----------
350 | yolo_outputs: list of tensor, the output of yolo_body or tiny_yolo_body
351 | y_true: list of array, the output of preprocess_true_boxes
352 | anchors: array, shape=(N, 2), wh
353 | num_classes: integer
354 |     ignore_thresh: float, predictions whose best IoU with the ground truth exceeds this threshold are ignored in the object confidence loss
355 |
356 | Returns
357 | -------
358 | loss: tensor, shape=(1,)
359 |
360 | '''
361 | num_layers = len(anchors)//3 # default setting
362 | yolo_outputs = args[:num_layers]
363 | y_true = args[num_layers:]
364 | anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]]
365 | input_shape = K.cast(K.shape(yolo_outputs[0])[1:3] * 32, K.dtype(y_true[0]))
366 | grid_shapes = [K.cast(K.shape(yolo_outputs[l])[1:3], K.dtype(y_true[0])) for l in range(num_layers)]
367 | loss = 0
368 | m = K.shape(yolo_outputs[0])[0] # batch size, tensor
369 | mf = K.cast(m, K.dtype(yolo_outputs[0]))
370 |
371 | for l in range(num_layers):
372 | object_mask = y_true[l][..., 4:5]
373 | true_class_probs = y_true[l][..., 5:]
374 |
375 | grid, raw_pred, pred_xy, pred_wh = yolo_head(yolo_outputs[l],
376 | anchors[anchor_mask[l]], num_classes, input_shape, calc_loss=True)
377 | pred_box = K.concatenate([pred_xy, pred_wh])
378 |
379 | # Darknet raw box to calculate loss.
380 | raw_true_xy = y_true[l][..., :2]*grid_shapes[l][::-1] - grid
381 | raw_true_wh = K.log(y_true[l][..., 2:4] / anchors[anchor_mask[l]] * input_shape[::-1])
382 | raw_true_wh = K.switch(object_mask, raw_true_wh, K.zeros_like(raw_true_wh)) # avoid log(0)=-inf
383 | box_loss_scale = 2 - y_true[l][...,2:3]*y_true[l][...,3:4]
384 |
385 | # Find ignore mask, iterate over each of batch.
386 | ignore_mask = tf.TensorArray(K.dtype(y_true[0]), size=1, dynamic_size=True)
387 | object_mask_bool = K.cast(object_mask, 'bool')
388 | def loop_body(b, ignore_mask):
389 | true_box = tf.boolean_mask(y_true[l][b,...,0:4], object_mask_bool[b,...,0])
390 | iou = box_iou(pred_box[b], true_box)
391 | best_iou = K.max(iou, axis=-1)
392 |             ignore_mask = ignore_mask.write(b, K.cast(best_iou<ignore_thresh, K.dtype(true_box)))
393 |             return b+1, ignore_mask
394 |         _, ignore_mask = K.control_flow_ops.while_loop(lambda b,*args: b<m, loop_body, [0, ignore_mask])
395 |         ignore_mask = ignore_mask.stack()
396 |         ignore_mask = K.expand_dims(ignore_mask, -1)
397 | 
398 |         # K.binary_crossentropy is helpful to avoid exp overflow.
399 |         xy_loss = object_mask * box_loss_scale * K.binary_crossentropy(raw_true_xy, raw_pred[...,0:2], from_logits=True)
400 |         wh_loss = object_mask * box_loss_scale * 0.5 * K.square(raw_true_wh-raw_pred[...,2:4])
401 |         confidence_loss = object_mask * K.binary_crossentropy(object_mask, raw_pred[...,4:5], from_logits=True)+ \
402 |             (1-object_mask) * K.binary_crossentropy(object_mask, raw_pred[...,4:5], from_logits=True) * ignore_mask
403 |         class_loss = object_mask * K.binary_crossentropy(true_class_probs, raw_pred[...,5:], from_logits=True)
404 | 
405 |         xy_loss = K.sum(xy_loss) / mf
406 |         wh_loss = K.sum(wh_loss) / mf
407 |         confidence_loss = K.sum(confidence_loss) / mf
408 |         class_loss = K.sum(class_loss) / mf
409 |         loss += xy_loss + wh_loss + confidence_loss + class_loss
410 |         if print_loss:
411 |             loss = tf.Print(loss, [loss, xy_loss, wh_loss, confidence_loss, class_loss, K.sum(ignore_mask)], message='loss: ')
412 |     return loss
413 | 
--------------------------------------------------------------------------------
/yolo3/utils.py:
--------------------------------------------------------------------------------
1 | """Miscellaneous utility functions."""
2 | 
3 | from functools import reduce
4 | 
5 | from PIL import Image
6 | import numpy as np
7 | from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
8 | 
9 | 
10 | def compose(*funcs):
11 |     """Compose arbitrarily many functions, evaluated left to right.
12 | 
13 |     Reference: https://mathieularose.com/function-composition-in-python/
14 |     """
15 |     # return lambda x: reduce(lambda v, f: f(v), funcs, x)
16 |     if funcs:
17 |         return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
18 |     else:
19 |         raise ValueError('Composition of empty sequence not supported.')
20 | 
21 | def letterbox_image(image, size):
22 |     '''resize image with unchanged aspect ratio using padding'''
23 |     iw, ih = image.size
24 |     w, h = size
25 |     scale = min(w/iw, h/ih)
26 |     nw = int(iw*scale)
27 |     nh = int(ih*scale)
28 | 
29 |     image = image.resize((nw,nh), Image.BICUBIC)
30 |     new_image = Image.new('RGB', size, (128,128,128))
31 |     new_image.paste(image, ((w-nw)//2, (h-nh)//2))
32 |     return new_image
33 | 
34 | def rand(a=0, b=1):
35 |     return np.random.rand()*(b-a) + a
36 | 
37 | def get_random_data(annotation_line, input_shape, random=True, max_boxes=20, jitter=.3, hue=.1, sat=1.5, val=1.5, proc_img=True):
38 |     '''random preprocessing for real-time data augmentation'''
39 |     line = annotation_line.split()
40 |     image = Image.open(line[0])
41 |     iw, ih = image.size
42 |     h, w = input_shape
43 |     box = np.array([np.array(list(map(int,box.split(',')))) for box in line[1:]])
44 | 
45 |     if not random:
46 |         # resize image
47 |         scale = min(w/iw, h/ih)
48 |         nw = int(iw*scale)
49 |         nh = int(ih*scale)
50 |         dx = (w-nw)//2
51 |         dy = (h-nh)//2
52 |         image_data=0
53 |         if proc_img:
54 |             image = image.resize((nw,nh), Image.BICUBIC)
55 |             new_image = Image.new('RGB', (w,h), (128,128,128))
56 |             new_image.paste(image, (dx, dy))
57 |             image_data = np.array(new_image)/255.
58 | 
59 |         # correct boxes
60 |         box_data = np.zeros((max_boxes,5))
61 |         if len(box)>0:
62 | np.random.shuffle(box)
63 | if len(box)>max_boxes: box = box[:max_boxes]
64 | box[:, [0,2]] = box[:, [0,2]]*scale + dx
65 | box[:, [1,3]] = box[:, [1,3]]*scale + dy
66 | box_data[:len(box)] = box
67 |
68 | return image_data, box_data
69 |
70 | # resize image
71 | new_ar = w/h * rand(1-jitter,1+jitter)/rand(1-jitter,1+jitter)
72 | scale = rand(.25, 2)
73 | if new_ar < 1:
74 | nh = int(scale*h)
75 | nw = int(nh*new_ar)
76 | else:
77 | nw = int(scale*w)
78 | nh = int(nw/new_ar)
79 | image = image.resize((nw,nh), Image.BICUBIC)
80 |
81 | # place image
82 | dx = int(rand(0, w-nw))
83 | dy = int(rand(0, h-nh))
84 | new_image = Image.new('RGB', (w,h), (128,128,128))
85 | new_image.paste(image, (dx, dy))
86 | image = new_image
87 |
88 | # flip image or not
89 | flip = rand()<.5
90 | if flip: image = image.transpose(Image.FLIP_LEFT_RIGHT)
91 |
92 | # distort image
93 | hue = rand(-hue, hue)
94 | sat = rand(1, sat) if rand()<.5 else 1/rand(1, sat)
95 | val = rand(1, val) if rand()<.5 else 1/rand(1, val)
96 | x = rgb_to_hsv(np.array(image)/255.)
97 | x[..., 0] += hue
98 | x[..., 0][x[..., 0]>1] -= 1
99 | x[..., 0][x[..., 0]<0] += 1
100 | x[..., 1] *= sat
101 | x[..., 2] *= val
102 | x[x>1] = 1
103 | x[x<0] = 0
104 | image_data = hsv_to_rgb(x) # numpy array, 0 to 1
105 |
106 | # correct boxes
107 | box_data = np.zeros((max_boxes,5))
108 | if len(box)>0:
109 | np.random.shuffle(box)
110 | box[:, [0,2]] = box[:, [0,2]]*nw/iw + dx
111 | box[:, [1,3]] = box[:, [1,3]]*nh/ih + dy
112 | if flip: box[:, [0,2]] = w - box[:, [2,0]]
113 | box[:, 0:2][box[:, 0:2]<0] = 0
114 | box[:, 2][box[:, 2]>w] = w
115 | box[:, 3][box[:, 3]>h] = h
116 | box_w = box[:, 2] - box[:, 0]
117 | box_h = box[:, 3] - box[:, 1]
118 | box = box[np.logical_and(box_w>1, box_h>1)] # discard invalid box
119 | if len(box)>max_boxes: box = box[:max_boxes]
120 | box_data[:len(box)] = box
121 |
122 | return image_data, box_data
123 |
--------------------------------------------------------------------------------
/yolo_video.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import argparse
3 | from yolo import YOLO, detect_video
4 | from PIL import Image
5 |
6 | def detect_img(yolo):
7 | while True:
8 | img = input('Input image filename:')
9 | try:
10 | image = Image.open(img)
11 | except:
12 | print('Open Error! Try again!')
13 | continue
14 | else:
15 | r_image = yolo.detect_image(image)
16 | r_image.show()
17 | yolo.close_session()
18 |
19 | FLAGS = None
20 |
21 | if __name__ == '__main__':
22 | # class YOLO defines the default value, so suppress any default here
23 | parser = argparse.ArgumentParser(argument_default=argparse.SUPPRESS)
24 | '''
25 | Command line options
26 | '''
27 | parser.add_argument(
28 | '--model', type=str,
29 | help='path to model weight file, default ' + YOLO.get_defaults("model_path")
30 | )
31 |
32 | parser.add_argument(
33 | '--anchors', type=str,
34 | help='path to anchor definitions, default ' + YOLO.get_defaults("anchors_path")
35 | )
36 |
37 | parser.add_argument(
38 | '--classes', type=str,
39 | help='path to class definitions, default ' + YOLO.get_defaults("classes_path")
40 | )
41 |
42 | parser.add_argument(
43 | '--gpu_num', type=int,
44 | help='Number of GPU to use, default ' + str(YOLO.get_defaults("gpu_num"))
45 | )
46 |
47 | parser.add_argument(
48 | '--image', default=False, action="store_true",
49 | help='Image detection mode, will ignore all positional arguments'
50 | )
51 | '''
52 | Command line positional arguments -- for video detection mode
53 | '''
54 | parser.add_argument(
55 | "--input", nargs='?', type=str,required=False,default='./path2your_video',
56 | help = "Video input path"
57 | )
58 |
59 | parser.add_argument(
60 | "--output", nargs='?', type=str, default="",
61 | help = "[Optional] Video output path"
62 | )
63 |
64 | FLAGS = parser.parse_args()
65 |
66 | if FLAGS.image:
67 | """
68 | Image detection mode, disregard any remaining command line arguments
69 | """
70 | print("Image detection mode")
71 | if "input" in FLAGS:
72 | print(" Ignoring remaining command line arguments: " + FLAGS.input + "," + FLAGS.output)
73 | detect_img(YOLO(**vars(FLAGS)))
74 | elif "input" in FLAGS:
75 |         print("Video detection mode")
76 | detect_video(YOLO(**vars(FLAGS)), FLAGS.input, FLAGS.output)
77 | else:
78 | print("Must specify at least video_input_path. See usage with --help.")
79 |
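80 | # Example invocations (video file names are illustrative):
81 | #   python yolo_video.py --image
82 | #   python yolo_video.py --input clip.mp4 --output anonymized.avi
83 | #   python yolo_video.py --input clip.mp4 --model model_data/wider_face_yolo.h5 \
84 | #       --anchors model_data/wider_anchors.txt --classes model_data/wider_classes.txt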
--------------------------------------------------------------------------------
/yolov3-tiny.cfg:
--------------------------------------------------------------------------------
1 | [net]
2 | # Testing
3 | batch=1
4 | subdivisions=1
5 | # Training
6 | # batch=64
7 | # subdivisions=2
8 | width=416
9 | height=416
10 | channels=3
11 | momentum=0.9
12 | decay=0.0005
13 | angle=0
14 | saturation = 1.5
15 | exposure = 1.5
16 | hue=.1
17 |
18 | learning_rate=0.001
19 | burn_in=1000
20 | max_batches = 500200
21 | policy=steps
22 | steps=400000,450000
23 | scales=.1,.1
24 |
25 | [convolutional]
26 | batch_normalize=1
27 | filters=16
28 | size=3
29 | stride=1
30 | pad=1
31 | activation=leaky
32 |
33 | [maxpool]
34 | size=2
35 | stride=2
36 |
37 | [convolutional]
38 | batch_normalize=1
39 | filters=32
40 | size=3
41 | stride=1
42 | pad=1
43 | activation=leaky
44 |
45 | [maxpool]
46 | size=2
47 | stride=2
48 |
49 | [convolutional]
50 | batch_normalize=1
51 | filters=64
52 | size=3
53 | stride=1
54 | pad=1
55 | activation=leaky
56 |
57 | [maxpool]
58 | size=2
59 | stride=2
60 |
61 | [convolutional]
62 | batch_normalize=1
63 | filters=128
64 | size=3
65 | stride=1
66 | pad=1
67 | activation=leaky
68 |
69 | [maxpool]
70 | size=2
71 | stride=2
72 |
73 | [convolutional]
74 | batch_normalize=1
75 | filters=256
76 | size=3
77 | stride=1
78 | pad=1
79 | activation=leaky
80 |
81 | [maxpool]
82 | size=2
83 | stride=2
84 |
85 | [convolutional]
86 | batch_normalize=1
87 | filters=512
88 | size=3
89 | stride=1
90 | pad=1
91 | activation=leaky
92 |
93 | [maxpool]
94 | size=2
95 | stride=1
96 |
97 | [convolutional]
98 | batch_normalize=1
99 | filters=1024
100 | size=3
101 | stride=1
102 | pad=1
103 | activation=leaky
104 |
105 | ###########
106 |
107 | [convolutional]
108 | batch_normalize=1
109 | filters=256
110 | size=1
111 | stride=1
112 | pad=1
113 | activation=leaky
114 |
115 | [convolutional]
116 | batch_normalize=1
117 | filters=512
118 | size=3
119 | stride=1
120 | pad=1
121 | activation=leaky
122 |
123 | [convolutional]
124 | size=1
125 | stride=1
126 | pad=1
127 | filters=255
128 | activation=linear
129 |
130 |
131 |
132 | [yolo]
133 | mask = 3,4,5
134 | anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
135 | classes=80
136 | num=6
137 | jitter=.3
138 | ignore_thresh = .7
139 | truth_thresh = 1
140 | random=1
141 |
142 | [route]
143 | layers = -4
144 |
145 | [convolutional]
146 | batch_normalize=1
147 | filters=128
148 | size=1
149 | stride=1
150 | pad=1
151 | activation=leaky
152 |
153 | [upsample]
154 | stride=2
155 |
156 | [route]
157 | layers = -1, 8
158 |
159 | [convolutional]
160 | batch_normalize=1
161 | filters=256
162 | size=3
163 | stride=1
164 | pad=1
165 | activation=leaky
166 |
167 | [convolutional]
168 | size=1
169 | stride=1
170 | pad=1
171 | filters=255
172 | activation=linear
173 |
174 | [yolo]
175 | mask = 1,2,3
176 | anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
177 | classes=1
178 | num=6
179 | jitter=.3
180 | ignore_thresh = .7
181 | truth_thresh = 1
182 | random=1
183 |
--------------------------------------------------------------------------------
/yolov3.cfg:
--------------------------------------------------------------------------------
1 | [net]
2 | # Testing
3 | #batch=1
4 | #subdivisions=1
5 | # Training
6 | batch=4
7 | subdivisions=8
8 | width=608
9 | height=608
10 | channels=3
11 | momentum=0.9
12 | decay=0.0005
13 | angle=0
14 | saturation = 1.5
15 | exposure = 1.5
16 | hue=.1
17 |
18 | learning_rate=0.001
19 | burn_in=1000
20 | max_batches = 2000
21 | policy=steps
22 | steps=1600,1800
23 | scales=.1,.1
24 |
25 | [convolutional]
26 | batch_normalize=1
27 | filters=32
28 | size=3
29 | stride=1
30 | pad=1
31 | activation=leaky
32 |
33 | # Downsample
34 |
35 | [convolutional]
36 | batch_normalize=1
37 | filters=64
38 | size=3
39 | stride=2
40 | pad=1
41 | activation=leaky
42 |
43 | [convolutional]
44 | batch_normalize=1
45 | filters=32
46 | size=1
47 | stride=1
48 | pad=1
49 | activation=leaky
50 |
51 | [convolutional]
52 | batch_normalize=1
53 | filters=64
54 | size=3
55 | stride=1
56 | pad=1
57 | activation=leaky
58 |
59 | [shortcut]
60 | from=-3
61 | activation=linear
62 |
63 | # Downsample
64 |
65 | [convolutional]
66 | batch_normalize=1
67 | filters=128
68 | size=3
69 | stride=2
70 | pad=1
71 | activation=leaky
72 |
73 | [convolutional]
74 | batch_normalize=1
75 | filters=64
76 | size=1
77 | stride=1
78 | pad=1
79 | activation=leaky
80 |
81 | [convolutional]
82 | batch_normalize=1
83 | filters=128
84 | size=3
85 | stride=1
86 | pad=1
87 | activation=leaky
88 |
89 | [shortcut]
90 | from=-3
91 | activation=linear
92 |
93 | [convolutional]
94 | batch_normalize=1
95 | filters=64
96 | size=1
97 | stride=1
98 | pad=1
99 | activation=leaky
100 |
101 | [convolutional]
102 | batch_normalize=1
103 | filters=128
104 | size=3
105 | stride=1
106 | pad=1
107 | activation=leaky
108 |
109 | [shortcut]
110 | from=-3
111 | activation=linear
112 |
113 | # Downsample
114 |
115 | [convolutional]
116 | batch_normalize=1
117 | filters=256
118 | size=3
119 | stride=2
120 | pad=1
121 | activation=leaky
122 |
123 | [convolutional]
124 | batch_normalize=1
125 | filters=128
126 | size=1
127 | stride=1
128 | pad=1
129 | activation=leaky
130 |
131 | [convolutional]
132 | batch_normalize=1
133 | filters=256
134 | size=3
135 | stride=1
136 | pad=1
137 | activation=leaky
138 |
139 | [shortcut]
140 | from=-3
141 | activation=linear
142 |
143 | [convolutional]
144 | batch_normalize=1
145 | filters=128
146 | size=1
147 | stride=1
148 | pad=1
149 | activation=leaky
150 |
151 | [convolutional]
152 | batch_normalize=1
153 | filters=256
154 | size=3
155 | stride=1
156 | pad=1
157 | activation=leaky
158 |
159 | [shortcut]
160 | from=-3
161 | activation=linear
162 |
163 | [convolutional]
164 | batch_normalize=1
165 | filters=128
166 | size=1
167 | stride=1
168 | pad=1
169 | activation=leaky
170 |
171 | [convolutional]
172 | batch_normalize=1
173 | filters=256
174 | size=3
175 | stride=1
176 | pad=1
177 | activation=leaky
178 |
179 | [shortcut]
180 | from=-3
181 | activation=linear
182 |
183 | [convolutional]
184 | batch_normalize=1
185 | filters=128
186 | size=1
187 | stride=1
188 | pad=1
189 | activation=leaky
190 |
191 | [convolutional]
192 | batch_normalize=1
193 | filters=256
194 | size=3
195 | stride=1
196 | pad=1
197 | activation=leaky
198 |
199 | [shortcut]
200 | from=-3
201 | activation=linear
202 |
203 |
204 | [convolutional]
205 | batch_normalize=1
206 | filters=128
207 | size=1
208 | stride=1
209 | pad=1
210 | activation=leaky
211 |
212 | [convolutional]
213 | batch_normalize=1
214 | filters=256
215 | size=3
216 | stride=1
217 | pad=1
218 | activation=leaky
219 |
220 | [shortcut]
221 | from=-3
222 | activation=linear
223 |
224 | [convolutional]
225 | batch_normalize=1
226 | filters=128
227 | size=1
228 | stride=1
229 | pad=1
230 | activation=leaky
231 |
232 | [convolutional]
233 | batch_normalize=1
234 | filters=256
235 | size=3
236 | stride=1
237 | pad=1
238 | activation=leaky
239 |
240 | [shortcut]
241 | from=-3
242 | activation=linear
243 |
244 | [convolutional]
245 | batch_normalize=1
246 | filters=128
247 | size=1
248 | stride=1
249 | pad=1
250 | activation=leaky
251 |
252 | [convolutional]
253 | batch_normalize=1
254 | filters=256
255 | size=3
256 | stride=1
257 | pad=1
258 | activation=leaky
259 |
260 | [shortcut]
261 | from=-3
262 | activation=linear
263 |
264 | [convolutional]
265 | batch_normalize=1
266 | filters=128
267 | size=1
268 | stride=1
269 | pad=1
270 | activation=leaky
271 |
272 | [convolutional]
273 | batch_normalize=1
274 | filters=256
275 | size=3
276 | stride=1
277 | pad=1
278 | activation=leaky
279 |
280 | [shortcut]
281 | from=-3
282 | activation=linear
283 |
284 | # Downsample
285 |
286 | [convolutional]
287 | batch_normalize=1
288 | filters=512
289 | size=3
290 | stride=2
291 | pad=1
292 | activation=leaky
293 |
294 | [convolutional]
295 | batch_normalize=1
296 | filters=256
297 | size=1
298 | stride=1
299 | pad=1
300 | activation=leaky
301 |
302 | [convolutional]
303 | batch_normalize=1
304 | filters=512
305 | size=3
306 | stride=1
307 | pad=1
308 | activation=leaky
309 |
310 | [shortcut]
311 | from=-3
312 | activation=linear
313 |
314 |
315 | [convolutional]
316 | batch_normalize=1
317 | filters=256
318 | size=1
319 | stride=1
320 | pad=1
321 | activation=leaky
322 |
323 | [convolutional]
324 | batch_normalize=1
325 | filters=512
326 | size=3
327 | stride=1
328 | pad=1
329 | activation=leaky
330 |
331 | [shortcut]
332 | from=-3
333 | activation=linear
334 |
335 |
336 | [convolutional]
337 | batch_normalize=1
338 | filters=256
339 | size=1
340 | stride=1
341 | pad=1
342 | activation=leaky
343 |
344 | [convolutional]
345 | batch_normalize=1
346 | filters=512
347 | size=3
348 | stride=1
349 | pad=1
350 | activation=leaky
351 |
352 | [shortcut]
353 | from=-3
354 | activation=linear
355 |
356 |
357 | [convolutional]
358 | batch_normalize=1
359 | filters=256
360 | size=1
361 | stride=1
362 | pad=1
363 | activation=leaky
364 |
365 | [convolutional]
366 | batch_normalize=1
367 | filters=512
368 | size=3
369 | stride=1
370 | pad=1
371 | activation=leaky
372 |
373 | [shortcut]
374 | from=-3
375 | activation=linear
376 |
377 | [convolutional]
378 | batch_normalize=1
379 | filters=256
380 | size=1
381 | stride=1
382 | pad=1
383 | activation=leaky
384 |
385 | [convolutional]
386 | batch_normalize=1
387 | filters=512
388 | size=3
389 | stride=1
390 | pad=1
391 | activation=leaky
392 |
393 | [shortcut]
394 | from=-3
395 | activation=linear
396 |
397 |
398 | [convolutional]
399 | batch_normalize=1
400 | filters=256
401 | size=1
402 | stride=1
403 | pad=1
404 | activation=leaky
405 |
406 | [convolutional]
407 | batch_normalize=1
408 | filters=512
409 | size=3
410 | stride=1
411 | pad=1
412 | activation=leaky
413 |
414 | [shortcut]
415 | from=-3
416 | activation=linear
417 |
418 |
419 | [convolutional]
420 | batch_normalize=1
421 | filters=256
422 | size=1
423 | stride=1
424 | pad=1
425 | activation=leaky
426 |
427 | [convolutional]
428 | batch_normalize=1
429 | filters=512
430 | size=3
431 | stride=1
432 | pad=1
433 | activation=leaky
434 |
435 | [shortcut]
436 | from=-3
437 | activation=linear
438 |
439 | [convolutional]
440 | batch_normalize=1
441 | filters=256
442 | size=1
443 | stride=1
444 | pad=1
445 | activation=leaky
446 |
447 | [convolutional]
448 | batch_normalize=1
449 | filters=512
450 | size=3
451 | stride=1
452 | pad=1
453 | activation=leaky
454 |
455 | [shortcut]
456 | from=-3
457 | activation=linear
458 |
459 | # Downsample
460 |
461 | [convolutional]
462 | batch_normalize=1
463 | filters=1024
464 | size=3
465 | stride=2
466 | pad=1
467 | activation=leaky
468 |
469 | [convolutional]
470 | batch_normalize=1
471 | filters=512
472 | size=1
473 | stride=1
474 | pad=1
475 | activation=leaky
476 |
477 | [convolutional]
478 | batch_normalize=1
479 | filters=1024
480 | size=3
481 | stride=1
482 | pad=1
483 | activation=leaky
484 |
485 | [shortcut]
486 | from=-3
487 | activation=linear
488 |
489 | [convolutional]
490 | batch_normalize=1
491 | filters=512
492 | size=1
493 | stride=1
494 | pad=1
495 | activation=leaky
496 |
497 | [convolutional]
498 | batch_normalize=1
499 | filters=1024
500 | size=3
501 | stride=1
502 | pad=1
503 | activation=leaky
504 |
505 | [shortcut]
506 | from=-3
507 | activation=linear
508 |
509 | [convolutional]
510 | batch_normalize=1
511 | filters=512
512 | size=1
513 | stride=1
514 | pad=1
515 | activation=leaky
516 |
517 | [convolutional]
518 | batch_normalize=1
519 | filters=1024
520 | size=3
521 | stride=1
522 | pad=1
523 | activation=leaky
524 |
525 | [shortcut]
526 | from=-3
527 | activation=linear
528 |
529 | [convolutional]
530 | batch_normalize=1
531 | filters=512
532 | size=1
533 | stride=1
534 | pad=1
535 | activation=leaky
536 |
537 | [convolutional]
538 | batch_normalize=1
539 | filters=1024
540 | size=3
541 | stride=1
542 | pad=1
543 | activation=leaky
544 |
545 | [shortcut]
546 | from=-3
547 | activation=linear
548 |
549 | ######################
550 |
551 | [convolutional]
552 | batch_normalize=1
553 | filters=512
554 | size=1
555 | stride=1
556 | pad=1
557 | activation=leaky
558 |
559 | [convolutional]
560 | batch_normalize=1
561 | size=3
562 | stride=1
563 | pad=1
564 | filters=1024
565 | activation=leaky
566 |
567 | [convolutional]
568 | batch_normalize=1
569 | filters=512
570 | size=1
571 | stride=1
572 | pad=1
573 | activation=leaky
574 |
575 | [convolutional]
576 | batch_normalize=1
577 | size=3
578 | stride=1
579 | pad=1
580 | filters=1024
581 | activation=leaky
582 |
583 | [convolutional]
584 | batch_normalize=1
585 | filters=512
586 | size=1
587 | stride=1
588 | pad=1
589 | activation=leaky
590 |
591 | [convolutional]
592 | batch_normalize=1
593 | size=3
594 | stride=1
595 | pad=1
596 | filters=1024
597 | activation=leaky
598 |
599 | [convolutional]
600 | size=1
601 | stride=1
602 | pad=1
603 | filters=255
604 | activation=linear
605 |
606 |
607 | [yolo]
608 | mask = 6,7,8
609 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
610 | classes=1
611 | num=9
612 | jitter=.3
613 | ignore_thresh = .5
614 | truth_thresh = 1
615 | random=1
616 |
617 |
618 | [route]
619 | layers = -4
620 |
621 | [convolutional]
622 | batch_normalize=1
623 | filters=256
624 | size=1
625 | stride=1
626 | pad=1
627 | activation=leaky
628 |
629 | [upsample]
630 | stride=2
631 |
632 | [route]
633 | layers = -1, 61
634 |
635 |
636 |
637 | [convolutional]
638 | batch_normalize=1
639 | filters=256
640 | size=1
641 | stride=1
642 | pad=1
643 | activation=leaky
644 |
645 | [convolutional]
646 | batch_normalize=1
647 | size=3
648 | stride=1
649 | pad=1
650 | filters=512
651 | activation=leaky
652 |
653 | [convolutional]
654 | batch_normalize=1
655 | filters=256
656 | size=1
657 | stride=1
658 | pad=1
659 | activation=leaky
660 |
661 | [convolutional]
662 | batch_normalize=1
663 | size=3
664 | stride=1
665 | pad=1
666 | filters=512
667 | activation=leaky
668 |
669 | [convolutional]
670 | batch_normalize=1
671 | filters=256
672 | size=1
673 | stride=1
674 | pad=1
675 | activation=leaky
676 |
677 | [convolutional]
678 | batch_normalize=1
679 | size=3
680 | stride=1
681 | pad=1
682 | filters=512
683 | activation=leaky
684 |
685 | [convolutional]
686 | size=1
687 | stride=1
688 | pad=1
689 | filters=255
690 | activation=linear
691 |
692 |
693 | [yolo]
694 | mask = 3,4,5
695 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
696 | classes=1
697 | num=9
698 | jitter=.3
699 | ignore_thresh = .5
700 | truth_thresh = 1
701 | random=1
702 |
703 |
704 |
705 | [route]
706 | layers = -4
707 |
708 | [convolutional]
709 | batch_normalize=1
710 | filters=128
711 | size=1
712 | stride=1
713 | pad=1
714 | activation=leaky
715 |
716 | [upsample]
717 | stride=2
718 |
719 | [route]
720 | layers = -1, 36
721 |
722 |
723 |
724 | [convolutional]
725 | batch_normalize=1
726 | filters=128
727 | size=1
728 | stride=1
729 | pad=1
730 | activation=leaky
731 |
732 | [convolutional]
733 | batch_normalize=1
734 | size=3
735 | stride=1
736 | pad=1
737 | filters=256
738 | activation=leaky
739 |
740 | [convolutional]
741 | batch_normalize=1
742 | filters=128
743 | size=1
744 | stride=1
745 | pad=1
746 | activation=leaky
747 |
748 | [convolutional]
749 | batch_normalize=1
750 | size=3
751 | stride=1
752 | pad=1
753 | filters=256
754 | activation=leaky
755 |
756 | [convolutional]
757 | batch_normalize=1
758 | filters=128
759 | size=1
760 | stride=1
761 | pad=1
762 | activation=leaky
763 |
764 | [convolutional]
765 | batch_normalize=1
766 | size=3
767 | stride=1
768 | pad=1
769 | filters=256
770 | activation=leaky
771 |
772 | [convolutional]
773 | size=1
774 | stride=1
775 | pad=1
776 | filters=255
777 | activation=linear
778 |
779 |
780 | [yolo]
781 | mask = 0,1,2
782 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
783 | classes=1
784 | num=9
785 | jitter=.3
786 | ignore_thresh = .5
787 | truth_thresh = 1
788 | random=1
789 |
--------------------------------------------------------------------------------