├── .gitignore ├── LICENSE ├── README.md ├── coco_annotation.py ├── convert.py ├── darknet53.cfg ├── font ├── FiraMono-Medium.otf └── SIL Open Font License.txt ├── kmeans.py ├── model_data ├── coco_classes.txt ├── tiny_yolo_anchors.txt ├── voc_classes.txt └── yolo_anchors.txt ├── train.py ├── train_bottleneck.py ├── voc_annotation.py ├── yolo.py ├── yolo3 ├── __init__.py ├── model.py └── utils.py ├── yolo_video.py ├── yolov3-tiny.cfg └── yolov3.cfg /.gitignore: -------------------------------------------------------------------------------- 1 | *.jpg 2 | *.png 3 | *.weights 4 | *.h5 5 | logs/ 6 | *_test.py 7 | 8 | # Byte-compiled / optimized / DLL files 9 | __pycache__/ 10 | *.py[cod] 11 | *$py.class 12 | 13 | # C extensions 14 | *.so 15 | 16 | # Distribution / packaging 17 | .Python 18 | env/ 19 | build/ 20 | develop-eggs/ 21 | dist/ 22 | downloads/ 23 | eggs/ 24 | .eggs/ 25 | lib/ 26 | lib64/ 27 | parts/ 28 | sdist/ 29 | var/ 30 | wheels/ 31 | *.egg-info/ 32 | .installed.cfg 33 | *.egg 34 | 35 | # PyInstaller 36 | # Usually these files are written by a python script from a template 37 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 38 | *.manifest 39 | *.spec 40 | 41 | # Installer logs 42 | pip-log.txt 43 | pip-delete-this-directory.txt 44 | 45 | # Unit test / coverage reports 46 | htmlcov/ 47 | .tox/ 48 | .coverage 49 | .coverage.* 50 | .cache 51 | nosetests.xml 52 | coverage.xml 53 | *.cover 54 | .hypothesis/ 55 | 56 | # Translations 57 | *.mo 58 | *.pot 59 | 60 | # Django stuff: 61 | *.log 62 | local_settings.py 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | target/ 76 | 77 | # Jupyter Notebook 78 | .ipynb_checkpoints 79 | 80 | # pyenv 81 | .python-version 82 | 83 | # celery beat schedule file 84 | celerybeat-schedule 85 | 86 | # SageMath parsed files 87 | *.sage.py 88 | 89 | # dotenv 90 | .env 91 | 92 | # virtualenv 93 | .venv 94 | venv/ 95 | ENV/ 96 | 97 | # Spyder project settings 98 | .spyderproject 99 | .spyproject 100 | 101 | # Rope project settings 102 | .ropeproject 103 | 104 | # mkdocs documentation 105 | /site 106 | 107 | # mypy 108 | .mypy_cache/ 109 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 qqwweee 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # keras-yolo3
2 |
3 | [![license](https://img.shields.io/github/license/mashape/apistatus.svg)](LICENSE)
4 |
5 | ## Introduction
6 |
7 | A Keras implementation of YOLOv3 (TensorFlow backend) inspired by [allanzelener/YAD2K](https://github.com/allanzelener/YAD2K).
8 |
9 |
10 | ---
11 |
12 | ## Quick Start
13 |
14 | 1. Download YOLOv3 weights from the [YOLO website](http://pjreddie.com/darknet/yolo/).
15 | 2. Convert the Darknet YOLO model to a Keras model.
16 | 3. Run YOLO detection.
17 |
18 | ```
19 | wget https://pjreddie.com/media/files/yolov3.weights
20 | python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5
21 | python yolo_video.py [OPTIONS...] --image   # image detection mode, OR
22 | python yolo_video.py [video_path] [output_path (optional)]
23 | ```
24 |
25 | For Tiny YOLOv3, follow the same steps, but specify the model path and anchor path with `--model model_file` and `--anchors anchor_file`.
26 |
27 | ### Usage
28 | Use `--help` to see the usage of yolo_video.py:
29 | ```
30 | usage: yolo_video.py [-h] [--model MODEL] [--anchors ANCHORS]
31 |                      [--classes CLASSES] [--gpu_num GPU_NUM] [--image]
32 |                      [--input] [--output]
33 |
34 | positional arguments:
35 |   --input        Video input path
36 |   --output       Video output path
37 |
38 | optional arguments:
39 |   -h, --help         show this help message and exit
40 |   --model MODEL      path to model weight file, default model_data/yolo.h5
41 |   --anchors ANCHORS  path to anchor definitions, default
42 |                      model_data/yolo_anchors.txt
43 |   --classes CLASSES  path to class definitions, default
44 |                      model_data/coco_classes.txt
45 |   --gpu_num GPU_NUM  Number of GPU to use, default 1
46 |   --image            Image detection mode, will ignore all positional arguments
47 | ```
48 | ---
49 |
50 | 4. Multi-GPU usage: use `--gpu_num N` to run on N GPUs. The value is passed to the [Keras multi_gpu_model()](https://keras.io/utils/#multi_gpu_model).
51 |
52 | ## Training
53 |
54 | 1. Generate your own annotation file and class names file.
55 |     One row per image;
56 |     Row format: `image_file_path box1 box2 ... boxN`;
57 |     Box format: `x_min,y_min,x_max,y_max,class_id` (no spaces).
58 |     For the VOC dataset, try `python voc_annotation.py`.
59 |     Here is an example:
60 |     ```
61 |     path/to/img1.jpg 50,100,150,200,0 30,50,200,120,3
62 |     path/to/img2.jpg 120,300,250,600,2
63 |     ...
64 |     ```
65 |
66 | 2. Make sure you have run `python convert.py -w yolov3.cfg yolov3.weights model_data/yolo_weights.h5`.
67 |     The file model_data/yolo_weights.h5 is used to load the pretrained weights.
68 |
69 | 3. Modify train.py and start training with
70 |     `python train.py`.
71 |     Use your trained weights or checkpoint weights with the command-line option `--model model_file` when running yolo_video.py.
72 |     Remember to modify the class path or anchor path with `--classes class_file` and `--anchors anchor_file`.
73 |
74 | If you want to use the original pretrained weights for YOLOv3:
75 |     1. `wget https://pjreddie.com/media/files/darknet53.conv.74`
76 |     2. Rename it to darknet53.weights.
77 |     3. `python convert.py -w darknet53.cfg darknet53.weights model_data/darknet53_weights.h5`
78 |     4.
Use model_data/darknet53_weights.h5 in train.py.
79 |
80 | ---
81 |
82 | ## Some issues to know
83 |
84 | 1. The test environment is
85 |     - Python 3.5.2
86 |     - Keras 2.1.5
87 |     - tensorflow 1.6.0
88 |
89 | 2. Default anchors are used. If you use your own anchors, some changes will probably be needed.
90 |
91 | 3. The inference result is not exactly the same as Darknet's, but the difference is small.
92 |
93 | 4. Inference is slower than Darknet. Replacing PIL with OpenCV may help a little.
94 |
95 | 5. Always load pretrained weights and freeze layers in the first stage of training, or try Darknet training. It is OK if there is a layer-mismatch warning.
96 |
97 | 6. The training strategy is for reference only. Adjust it to your dataset and your goal, and add further strategies if needed.
98 |
99 | 7. To speed up training with frozen layers, use train_bottleneck.py. It first computes the bottleneck features of the frozen model and then trains only the last layers; this makes training on a CPU possible in a reasonable time. See [this](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html) for more information on bottleneck features.
100 |
--------------------------------------------------------------------------------
/coco_annotation.py:
--------------------------------------------------------------------------------
1 | import json
2 | from collections import defaultdict
3 |
4 | name_box_id = defaultdict(list)
5 | id_name = dict()
6 | f = open(
7 |     "mscoco2017/annotations/instances_train2017.json",
8 |     encoding='utf-8')
9 | data = json.load(f)
10 |
11 | annotations = data['annotations']
12 | for ant in annotations:
13 |     image_id = ant['image_id']  # avoid shadowing the built-in id()
14 |     name = 'mscoco2017/train2017/%012d.jpg' % image_id
15 |     cat = ant['category_id']
16 |
17 |     if cat >= 1 and cat <= 11:  # map sparse COCO ids (1-90) to contiguous 0-79
18 |         cat = cat - 1
19 |     elif cat >= 13 and cat <= 25:
20 |         cat = cat - 2
21 |     elif cat >= 27 and cat <= 28:
22 |         cat = cat - 3
23 |     elif cat >= 31 and cat <= 44:
24 |         cat = cat - 5
25 |     elif cat >= 46 and cat <= 65:
26 |         cat = cat - 6
27 |     elif cat == 67:
28 |         cat = cat - 7
29 |     elif cat == 70:
30 |         cat = cat - 9
31 |     elif cat >= 72 and cat <= 82:
32 |         cat = cat - 10
33 |     elif cat >= 84 and cat <= 90:
34 |         cat = cat - 11
35 |
36 |     name_box_id[name].append([ant['bbox'], cat])
37 |
38 | f = open('train.txt', 'w')
39 | for key in name_box_id.keys():
40 |     f.write(key)
41 |     box_infos = name_box_id[key]
42 |     for info in box_infos:
43 |         x_min = int(info[0][0])  # COCO bbox is [x, y, width, height]
44 |         y_min = int(info[0][1])
45 |         x_max = x_min + int(info[0][2])
46 |         y_max = y_min + int(info[0][3])
47 |
48 |         box_info = " %d,%d,%d,%d,%d" % (
49 |             x_min, y_min, x_max, y_max, int(info[1]))
50 |         f.write(box_info)
51 |     f.write('\n')
52 | f.close()
53 |
--------------------------------------------------------------------------------
/convert.py:
--------------------------------------------------------------------------------
1 | #! /usr/bin/env python
2 | """
3 | Reads Darknet config and weights and creates Keras model with TF backend.
4 | 5 | """ 6 | 7 | import argparse 8 | import configparser 9 | import io 10 | import os 11 | from collections import defaultdict 12 | 13 | import numpy as np 14 | from keras import backend as K 15 | from keras.layers import (Conv2D, Input, ZeroPadding2D, Add, 16 | UpSampling2D, MaxPooling2D, Concatenate) 17 | from keras.layers.advanced_activations import LeakyReLU 18 | from keras.layers.normalization import BatchNormalization 19 | from keras.models import Model 20 | from keras.regularizers import l2 21 | from keras.utils.vis_utils import plot_model as plot 22 | 23 | 24 | parser = argparse.ArgumentParser(description='Darknet To Keras Converter.') 25 | parser.add_argument('config_path', help='Path to Darknet cfg file.') 26 | parser.add_argument('weights_path', help='Path to Darknet weights file.') 27 | parser.add_argument('output_path', help='Path to output Keras model file.') 28 | parser.add_argument( 29 | '-p', 30 | '--plot_model', 31 | help='Plot generated Keras model and save as image.', 32 | action='store_true') 33 | parser.add_argument( 34 | '-w', 35 | '--weights_only', 36 | help='Save as Keras weights file instead of model file.', 37 | action='store_true') 38 | 39 | def unique_config_sections(config_file): 40 | """Convert all config sections to have unique names. 41 | 42 | Adds unique suffixes to config sections for compability with configparser. 43 | """ 44 | section_counters = defaultdict(int) 45 | output_stream = io.StringIO() 46 | with open(config_file) as fin: 47 | for line in fin: 48 | if line.startswith('['): 49 | section = line.strip().strip('[]') 50 | _section = section + '_' + str(section_counters[section]) 51 | section_counters[section] += 1 52 | line = line.replace(section, _section) 53 | output_stream.write(line) 54 | output_stream.seek(0) 55 | return output_stream 56 | 57 | # %% 58 | def _main(args): 59 | config_path = os.path.expanduser(args.config_path) 60 | weights_path = os.path.expanduser(args.weights_path) 61 | assert config_path.endswith('.cfg'), '{} is not a .cfg file'.format( 62 | config_path) 63 | assert weights_path.endswith( 64 | '.weights'), '{} is not a .weights file'.format(weights_path) 65 | 66 | output_path = os.path.expanduser(args.output_path) 67 | assert output_path.endswith( 68 | '.h5'), 'output path {} is not a .h5 file'.format(output_path) 69 | output_root = os.path.splitext(output_path)[0] 70 | 71 | # Load weights and config. 
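    # Note: the Darknet .weights file starts with a small header -- three
    # int32 values (major, minor, revision) followed by an image counter
    # `seen` that is stored as int64 for format versions >= 0.2 and as int32
    # in older files. Everything after the header is a flat stream of float32
    # parameters, so the code below simply consumes bytes in the exact order
    # Darknet wrote them.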
72 | print('Loading weights.') 73 | weights_file = open(weights_path, 'rb') 74 | major, minor, revision = np.ndarray( 75 | shape=(3, ), dtype='int32', buffer=weights_file.read(12)) 76 | if (major*10+minor)>=2 and major<1000 and minor<1000: 77 | seen = np.ndarray(shape=(1,), dtype='int64', buffer=weights_file.read(8)) 78 | else: 79 | seen = np.ndarray(shape=(1,), dtype='int32', buffer=weights_file.read(4)) 80 | print('Weights Header: ', major, minor, revision, seen) 81 | 82 | print('Parsing Darknet config.') 83 | unique_config_file = unique_config_sections(config_path) 84 | cfg_parser = configparser.ConfigParser() 85 | cfg_parser.read_file(unique_config_file) 86 | 87 | print('Creating Keras model.') 88 | input_layer = Input(shape=(None, None, 3)) 89 | prev_layer = input_layer 90 | all_layers = [] 91 | 92 | weight_decay = float(cfg_parser['net_0']['decay'] 93 | ) if 'net_0' in cfg_parser.sections() else 5e-4 94 | count = 0 95 | out_index = [] 96 | for section in cfg_parser.sections(): 97 | print('Parsing section {}'.format(section)) 98 | if section.startswith('convolutional'): 99 | filters = int(cfg_parser[section]['filters']) 100 | size = int(cfg_parser[section]['size']) 101 | stride = int(cfg_parser[section]['stride']) 102 | pad = int(cfg_parser[section]['pad']) 103 | activation = cfg_parser[section]['activation'] 104 | batch_normalize = 'batch_normalize' in cfg_parser[section] 105 | 106 | padding = 'same' if pad == 1 and stride == 1 else 'valid' 107 | 108 | # Setting weights. 109 | # Darknet serializes convolutional weights as: 110 | # [bias/beta, [gamma, mean, variance], conv_weights] 111 | prev_layer_shape = K.int_shape(prev_layer) 112 | 113 | weights_shape = (size, size, prev_layer_shape[-1], filters) 114 | darknet_w_shape = (filters, weights_shape[2], size, size) 115 | weights_size = np.product(weights_shape) 116 | 117 | print('conv2d', 'bn' 118 | if batch_normalize else ' ', activation, weights_shape) 119 | 120 | conv_bias = np.ndarray( 121 | shape=(filters, ), 122 | dtype='float32', 123 | buffer=weights_file.read(filters * 4)) 124 | count += filters 125 | 126 | if batch_normalize: 127 | bn_weights = np.ndarray( 128 | shape=(3, filters), 129 | dtype='float32', 130 | buffer=weights_file.read(filters * 12)) 131 | count += 3 * filters 132 | 133 | bn_weight_list = [ 134 | bn_weights[0], # scale gamma 135 | conv_bias, # shift beta 136 | bn_weights[1], # running mean 137 | bn_weights[2] # running var 138 | ] 139 | 140 | conv_weights = np.ndarray( 141 | shape=darknet_w_shape, 142 | dtype='float32', 143 | buffer=weights_file.read(weights_size * 4)) 144 | count += weights_size 145 | 146 | # DarkNet conv_weights are serialized Caffe-style: 147 | # (out_dim, in_dim, height, width) 148 | # We would like to set these to Tensorflow order: 149 | # (height, width, in_dim, out_dim) 150 | conv_weights = np.transpose(conv_weights, [2, 3, 1, 0]) 151 | conv_weights = [conv_weights] if batch_normalize else [ 152 | conv_weights, conv_bias 153 | ] 154 | 155 | # Handle activation. 156 | act_fn = None 157 | if activation == 'leaky': 158 | pass # Add advanced activation later. 
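            # Darknet's 'leaky' activation is LeakyReLU with alpha=0.1, which
            # cannot be expressed as a Conv2D activation string in Keras, so
            # the conv layer is built with a linear activation here and a
            # separate LeakyReLU layer is appended after the (optional)
            # BatchNormalization further down.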
159 | elif activation != 'linear': 160 | raise ValueError( 161 | 'Unknown activation function `{}` in section {}'.format( 162 | activation, section)) 163 | 164 | # Create Conv2D layer 165 | if stride>1: 166 | # Darknet uses left and top padding instead of 'same' mode 167 | prev_layer = ZeroPadding2D(((1,0),(1,0)))(prev_layer) 168 | conv_layer = (Conv2D( 169 | filters, (size, size), 170 | strides=(stride, stride), 171 | kernel_regularizer=l2(weight_decay), 172 | use_bias=not batch_normalize, 173 | weights=conv_weights, 174 | activation=act_fn, 175 | padding=padding))(prev_layer) 176 | 177 | if batch_normalize: 178 | conv_layer = (BatchNormalization( 179 | weights=bn_weight_list))(conv_layer) 180 | prev_layer = conv_layer 181 | 182 | if activation == 'linear': 183 | all_layers.append(prev_layer) 184 | elif activation == 'leaky': 185 | act_layer = LeakyReLU(alpha=0.1)(prev_layer) 186 | prev_layer = act_layer 187 | all_layers.append(act_layer) 188 | 189 | elif section.startswith('route'): 190 | ids = [int(i) for i in cfg_parser[section]['layers'].split(',')] 191 | layers = [all_layers[i] for i in ids] 192 | if len(layers) > 1: 193 | print('Concatenating route layers:', layers) 194 | concatenate_layer = Concatenate()(layers) 195 | all_layers.append(concatenate_layer) 196 | prev_layer = concatenate_layer 197 | else: 198 | skip_layer = layers[0] # only one layer to route 199 | all_layers.append(skip_layer) 200 | prev_layer = skip_layer 201 | 202 | elif section.startswith('maxpool'): 203 | size = int(cfg_parser[section]['size']) 204 | stride = int(cfg_parser[section]['stride']) 205 | all_layers.append( 206 | MaxPooling2D( 207 | pool_size=(size, size), 208 | strides=(stride, stride), 209 | padding='same')(prev_layer)) 210 | prev_layer = all_layers[-1] 211 | 212 | elif section.startswith('shortcut'): 213 | index = int(cfg_parser[section]['from']) 214 | activation = cfg_parser[section]['activation'] 215 | assert activation == 'linear', 'Only linear activation supported.' 216 | all_layers.append(Add()([all_layers[index], prev_layer])) 217 | prev_layer = all_layers[-1] 218 | 219 | elif section.startswith('upsample'): 220 | stride = int(cfg_parser[section]['stride']) 221 | assert stride == 2, 'Only stride=2 supported.' 222 | all_layers.append(UpSampling2D(stride)(prev_layer)) 223 | prev_layer = all_layers[-1] 224 | 225 | elif section.startswith('yolo'): 226 | out_index.append(len(all_layers)-1) 227 | all_layers.append(None) 228 | prev_layer = all_layers[-1] 229 | 230 | elif section.startswith('net'): 231 | pass 232 | 233 | else: 234 | raise ValueError( 235 | 'Unsupported section header type: {}'.format(section)) 236 | 237 | # Create and save model. 238 | if len(out_index)==0: out_index.append(len(all_layers)-1) 239 | model = Model(inputs=input_layer, outputs=[all_layers[i] for i in out_index]) 240 | print(model.summary()) 241 | if args.weights_only: 242 | model.save_weights('{}'.format(output_path)) 243 | print('Saved Keras weights to {}'.format(output_path)) 244 | else: 245 | model.save('{}'.format(output_path)) 246 | print('Saved Keras model to {}'.format(output_path)) 247 | 248 | # Check to see if all weights have been read. 
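    # Every parameter is a 4-byte float32, so the leftover byte count divided
    # by 4 is the number of unread weight values; a non-zero remainder usually
    # means the .cfg and .weights files do not match.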
249 | remaining_weights = len(weights_file.read()) / 4 250 | weights_file.close() 251 | print('Read {} of {} from Darknet weights.'.format(count, count + 252 | remaining_weights)) 253 | if remaining_weights > 0: 254 | print('Warning: {} unused weights'.format(remaining_weights)) 255 | 256 | if args.plot_model: 257 | plot(model, to_file='{}.png'.format(output_root), show_shapes=True) 258 | print('Saved model plot to {}.png'.format(output_root)) 259 | 260 | 261 | if __name__ == '__main__': 262 | _main(parser.parse_args()) 263 | -------------------------------------------------------------------------------- /darknet53.cfg: -------------------------------------------------------------------------------- 1 | [net] 2 | # Testing 3 | batch=1 4 | subdivisions=1 5 | # Training 6 | # batch=64 7 | # subdivisions=16 8 | width=416 9 | height=416 10 | channels=3 11 | momentum=0.9 12 | decay=0.0005 13 | angle=0 14 | saturation = 1.5 15 | exposure = 1.5 16 | hue=.1 17 | 18 | learning_rate=0.001 19 | burn_in=1000 20 | max_batches = 500200 21 | policy=steps 22 | steps=400000,450000 23 | scales=.1,.1 24 | 25 | [convolutional] 26 | batch_normalize=1 27 | filters=32 28 | size=3 29 | stride=1 30 | pad=1 31 | activation=leaky 32 | 33 | # Downsample 34 | 35 | [convolutional] 36 | batch_normalize=1 37 | filters=64 38 | size=3 39 | stride=2 40 | pad=1 41 | activation=leaky 42 | 43 | [convolutional] 44 | batch_normalize=1 45 | filters=32 46 | size=1 47 | stride=1 48 | pad=1 49 | activation=leaky 50 | 51 | [convolutional] 52 | batch_normalize=1 53 | filters=64 54 | size=3 55 | stride=1 56 | pad=1 57 | activation=leaky 58 | 59 | [shortcut] 60 | from=-3 61 | activation=linear 62 | 63 | # Downsample 64 | 65 | [convolutional] 66 | batch_normalize=1 67 | filters=128 68 | size=3 69 | stride=2 70 | pad=1 71 | activation=leaky 72 | 73 | [convolutional] 74 | batch_normalize=1 75 | filters=64 76 | size=1 77 | stride=1 78 | pad=1 79 | activation=leaky 80 | 81 | [convolutional] 82 | batch_normalize=1 83 | filters=128 84 | size=3 85 | stride=1 86 | pad=1 87 | activation=leaky 88 | 89 | [shortcut] 90 | from=-3 91 | activation=linear 92 | 93 | [convolutional] 94 | batch_normalize=1 95 | filters=64 96 | size=1 97 | stride=1 98 | pad=1 99 | activation=leaky 100 | 101 | [convolutional] 102 | batch_normalize=1 103 | filters=128 104 | size=3 105 | stride=1 106 | pad=1 107 | activation=leaky 108 | 109 | [shortcut] 110 | from=-3 111 | activation=linear 112 | 113 | # Downsample 114 | 115 | [convolutional] 116 | batch_normalize=1 117 | filters=256 118 | size=3 119 | stride=2 120 | pad=1 121 | activation=leaky 122 | 123 | [convolutional] 124 | batch_normalize=1 125 | filters=128 126 | size=1 127 | stride=1 128 | pad=1 129 | activation=leaky 130 | 131 | [convolutional] 132 | batch_normalize=1 133 | filters=256 134 | size=3 135 | stride=1 136 | pad=1 137 | activation=leaky 138 | 139 | [shortcut] 140 | from=-3 141 | activation=linear 142 | 143 | [convolutional] 144 | batch_normalize=1 145 | filters=128 146 | size=1 147 | stride=1 148 | pad=1 149 | activation=leaky 150 | 151 | [convolutional] 152 | batch_normalize=1 153 | filters=256 154 | size=3 155 | stride=1 156 | pad=1 157 | activation=leaky 158 | 159 | [shortcut] 160 | from=-3 161 | activation=linear 162 | 163 | [convolutional] 164 | batch_normalize=1 165 | filters=128 166 | size=1 167 | stride=1 168 | pad=1 169 | activation=leaky 170 | 171 | [convolutional] 172 | batch_normalize=1 173 | filters=256 174 | size=3 175 | stride=1 176 | pad=1 177 | activation=leaky 178 | 179 | [shortcut] 180 | 
from=-3 181 | activation=linear 182 | 183 | [convolutional] 184 | batch_normalize=1 185 | filters=128 186 | size=1 187 | stride=1 188 | pad=1 189 | activation=leaky 190 | 191 | [convolutional] 192 | batch_normalize=1 193 | filters=256 194 | size=3 195 | stride=1 196 | pad=1 197 | activation=leaky 198 | 199 | [shortcut] 200 | from=-3 201 | activation=linear 202 | 203 | 204 | [convolutional] 205 | batch_normalize=1 206 | filters=128 207 | size=1 208 | stride=1 209 | pad=1 210 | activation=leaky 211 | 212 | [convolutional] 213 | batch_normalize=1 214 | filters=256 215 | size=3 216 | stride=1 217 | pad=1 218 | activation=leaky 219 | 220 | [shortcut] 221 | from=-3 222 | activation=linear 223 | 224 | [convolutional] 225 | batch_normalize=1 226 | filters=128 227 | size=1 228 | stride=1 229 | pad=1 230 | activation=leaky 231 | 232 | [convolutional] 233 | batch_normalize=1 234 | filters=256 235 | size=3 236 | stride=1 237 | pad=1 238 | activation=leaky 239 | 240 | [shortcut] 241 | from=-3 242 | activation=linear 243 | 244 | [convolutional] 245 | batch_normalize=1 246 | filters=128 247 | size=1 248 | stride=1 249 | pad=1 250 | activation=leaky 251 | 252 | [convolutional] 253 | batch_normalize=1 254 | filters=256 255 | size=3 256 | stride=1 257 | pad=1 258 | activation=leaky 259 | 260 | [shortcut] 261 | from=-3 262 | activation=linear 263 | 264 | [convolutional] 265 | batch_normalize=1 266 | filters=128 267 | size=1 268 | stride=1 269 | pad=1 270 | activation=leaky 271 | 272 | [convolutional] 273 | batch_normalize=1 274 | filters=256 275 | size=3 276 | stride=1 277 | pad=1 278 | activation=leaky 279 | 280 | [shortcut] 281 | from=-3 282 | activation=linear 283 | 284 | # Downsample 285 | 286 | [convolutional] 287 | batch_normalize=1 288 | filters=512 289 | size=3 290 | stride=2 291 | pad=1 292 | activation=leaky 293 | 294 | [convolutional] 295 | batch_normalize=1 296 | filters=256 297 | size=1 298 | stride=1 299 | pad=1 300 | activation=leaky 301 | 302 | [convolutional] 303 | batch_normalize=1 304 | filters=512 305 | size=3 306 | stride=1 307 | pad=1 308 | activation=leaky 309 | 310 | [shortcut] 311 | from=-3 312 | activation=linear 313 | 314 | 315 | [convolutional] 316 | batch_normalize=1 317 | filters=256 318 | size=1 319 | stride=1 320 | pad=1 321 | activation=leaky 322 | 323 | [convolutional] 324 | batch_normalize=1 325 | filters=512 326 | size=3 327 | stride=1 328 | pad=1 329 | activation=leaky 330 | 331 | [shortcut] 332 | from=-3 333 | activation=linear 334 | 335 | 336 | [convolutional] 337 | batch_normalize=1 338 | filters=256 339 | size=1 340 | stride=1 341 | pad=1 342 | activation=leaky 343 | 344 | [convolutional] 345 | batch_normalize=1 346 | filters=512 347 | size=3 348 | stride=1 349 | pad=1 350 | activation=leaky 351 | 352 | [shortcut] 353 | from=-3 354 | activation=linear 355 | 356 | 357 | [convolutional] 358 | batch_normalize=1 359 | filters=256 360 | size=1 361 | stride=1 362 | pad=1 363 | activation=leaky 364 | 365 | [convolutional] 366 | batch_normalize=1 367 | filters=512 368 | size=3 369 | stride=1 370 | pad=1 371 | activation=leaky 372 | 373 | [shortcut] 374 | from=-3 375 | activation=linear 376 | 377 | [convolutional] 378 | batch_normalize=1 379 | filters=256 380 | size=1 381 | stride=1 382 | pad=1 383 | activation=leaky 384 | 385 | [convolutional] 386 | batch_normalize=1 387 | filters=512 388 | size=3 389 | stride=1 390 | pad=1 391 | activation=leaky 392 | 393 | [shortcut] 394 | from=-3 395 | activation=linear 396 | 397 | 398 | [convolutional] 399 | batch_normalize=1 400 | 
filters=256 401 | size=1 402 | stride=1 403 | pad=1 404 | activation=leaky 405 | 406 | [convolutional] 407 | batch_normalize=1 408 | filters=512 409 | size=3 410 | stride=1 411 | pad=1 412 | activation=leaky 413 | 414 | [shortcut] 415 | from=-3 416 | activation=linear 417 | 418 | 419 | [convolutional] 420 | batch_normalize=1 421 | filters=256 422 | size=1 423 | stride=1 424 | pad=1 425 | activation=leaky 426 | 427 | [convolutional] 428 | batch_normalize=1 429 | filters=512 430 | size=3 431 | stride=1 432 | pad=1 433 | activation=leaky 434 | 435 | [shortcut] 436 | from=-3 437 | activation=linear 438 | 439 | [convolutional] 440 | batch_normalize=1 441 | filters=256 442 | size=1 443 | stride=1 444 | pad=1 445 | activation=leaky 446 | 447 | [convolutional] 448 | batch_normalize=1 449 | filters=512 450 | size=3 451 | stride=1 452 | pad=1 453 | activation=leaky 454 | 455 | [shortcut] 456 | from=-3 457 | activation=linear 458 | 459 | # Downsample 460 | 461 | [convolutional] 462 | batch_normalize=1 463 | filters=1024 464 | size=3 465 | stride=2 466 | pad=1 467 | activation=leaky 468 | 469 | [convolutional] 470 | batch_normalize=1 471 | filters=512 472 | size=1 473 | stride=1 474 | pad=1 475 | activation=leaky 476 | 477 | [convolutional] 478 | batch_normalize=1 479 | filters=1024 480 | size=3 481 | stride=1 482 | pad=1 483 | activation=leaky 484 | 485 | [shortcut] 486 | from=-3 487 | activation=linear 488 | 489 | [convolutional] 490 | batch_normalize=1 491 | filters=512 492 | size=1 493 | stride=1 494 | pad=1 495 | activation=leaky 496 | 497 | [convolutional] 498 | batch_normalize=1 499 | filters=1024 500 | size=3 501 | stride=1 502 | pad=1 503 | activation=leaky 504 | 505 | [shortcut] 506 | from=-3 507 | activation=linear 508 | 509 | [convolutional] 510 | batch_normalize=1 511 | filters=512 512 | size=1 513 | stride=1 514 | pad=1 515 | activation=leaky 516 | 517 | [convolutional] 518 | batch_normalize=1 519 | filters=1024 520 | size=3 521 | stride=1 522 | pad=1 523 | activation=leaky 524 | 525 | [shortcut] 526 | from=-3 527 | activation=linear 528 | 529 | [convolutional] 530 | batch_normalize=1 531 | filters=512 532 | size=1 533 | stride=1 534 | pad=1 535 | activation=leaky 536 | 537 | [convolutional] 538 | batch_normalize=1 539 | filters=1024 540 | size=3 541 | stride=1 542 | pad=1 543 | activation=leaky 544 | 545 | [shortcut] 546 | from=-3 547 | activation=linear 548 | 549 | -------------------------------------------------------------------------------- /font/FiraMono-Medium.otf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qqwweee/keras-yolo3/e6598d13c703029b2686bc2eb8d5c09badf42992/font/FiraMono-Medium.otf -------------------------------------------------------------------------------- /font/SIL Open Font License.txt: -------------------------------------------------------------------------------- 1 | Copyright (c) 2014, Mozilla Foundation https://mozilla.org/ with Reserved Font Name Fira Mono. 2 | 3 | Copyright (c) 2014, Telefonica S.A. 4 | 5 | This Font Software is licensed under the SIL Open Font License, Version 1.1. 
6 | This license is copied below, and is also available with a FAQ at: http://scripts.sil.org/OFL 7 | 8 | ----------------------------------------------------------- 9 | SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007 10 | ----------------------------------------------------------- 11 | 12 | PREAMBLE 13 | The goals of the Open Font License (OFL) are to stimulate worldwide development of collaborative font projects, to support the font creation efforts of academic and linguistic communities, and to provide a free and open framework in which fonts may be shared and improved in partnership with others. 14 | 15 | The OFL allows the licensed fonts to be used, studied, modified and redistributed freely as long as they are not sold by themselves. The fonts, including any derivative works, can be bundled, embedded, redistributed and/or sold with any software provided that any reserved names are not used by derivative works. The fonts and derivatives, however, cannot be released under any other type of license. The requirement for fonts to remain under this license does not apply to any document created using the fonts or their derivatives. 16 | 17 | DEFINITIONS 18 | "Font Software" refers to the set of files released by the Copyright Holder(s) under this license and clearly marked as such. This may include source files, build scripts and documentation. 19 | 20 | "Reserved Font Name" refers to any names specified as such after the copyright statement(s). 21 | 22 | "Original Version" refers to the collection of Font Software components as distributed by the Copyright Holder(s). 23 | 24 | "Modified Version" refers to any derivative made by adding to, deleting, or substituting -- in part or in whole -- any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment. 25 | 26 | "Author" refers to any designer, engineer, programmer, technical writer or other person who contributed to the Font Software. 27 | 28 | PERMISSION & CONDITIONS 29 | Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to use, study, copy, merge, embed, modify, redistribute, and sell modified and unmodified copies of the Font Software, subject to the following conditions: 30 | 31 | 1) Neither the Font Software nor any of its individual components, in Original or Modified Versions, may be sold by itself. 32 | 33 | 2) Original or Modified Versions of the Font Software may be bundled, redistributed and/or sold with any software, provided that each copy contains the above copyright notice and this license. These can be included either as stand-alone text files, human-readable headers or in the appropriate machine-readable metadata fields within text or binary files as long as those fields can be easily viewed by the user. 34 | 35 | 3) No Modified Version of the Font Software may use the Reserved Font Name(s) unless explicit written permission is granted by the corresponding Copyright Holder. This restriction only applies to the primary font name as presented to the users. 36 | 37 | 4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font Software shall not be used to promote, endorse or advertise any Modified Version, except to acknowledge the contribution(s) of the Copyright Holder(s) and the Author(s) or with their explicit written permission. 
38 |
39 | 5) The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this license, and must not be distributed under any other license. The requirement for fonts to remain under this license does not apply to any document created using the Font Software.
40 |
41 | TERMINATION
42 | This license becomes null and void if any of the above conditions are not met.
43 |
44 | DISCLAIMER
45 | THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE.
--------------------------------------------------------------------------------
/kmeans.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 |
4 | class YOLO_Kmeans:
5 |
6 |     def __init__(self, cluster_number, filename):
7 |         self.cluster_number = cluster_number
8 |         self.filename = filename  # use the annotation file passed in (e.g. "2012_train.txt")
9 |
10 |     def iou(self, boxes, clusters):  # 1 box -> k clusters
11 |         n = boxes.shape[0]
12 |         k = self.cluster_number
13 |
14 |         box_area = boxes[:, 0] * boxes[:, 1]
15 |         box_area = box_area.repeat(k)
16 |         box_area = np.reshape(box_area, (n, k))
17 |
18 |         cluster_area = clusters[:, 0] * clusters[:, 1]
19 |         cluster_area = np.tile(cluster_area, [1, n])
20 |         cluster_area = np.reshape(cluster_area, (n, k))
21 |
22 |         box_w_matrix = np.reshape(boxes[:, 0].repeat(k), (n, k))
23 |         cluster_w_matrix = np.reshape(np.tile(clusters[:, 0], (1, n)), (n, k))
24 |         min_w_matrix = np.minimum(cluster_w_matrix, box_w_matrix)
25 |
26 |         box_h_matrix = np.reshape(boxes[:, 1].repeat(k), (n, k))
27 |         cluster_h_matrix = np.reshape(np.tile(clusters[:, 1], (1, n)), (n, k))
28 |         min_h_matrix = np.minimum(cluster_h_matrix, box_h_matrix)
29 |         inter_area = np.multiply(min_w_matrix, min_h_matrix)
30 |
31 |         result = inter_area / (box_area + cluster_area - inter_area)
32 |         return result
33 |
34 |     def avg_iou(self, boxes, clusters):
35 |         accuracy = np.mean([np.max(self.iou(boxes, clusters), axis=1)])
36 |         return accuracy
37 |
38 |     def kmeans(self, boxes, k, dist=np.median):
39 |         box_number = boxes.shape[0]
40 |         distances = np.empty((box_number, k))
41 |         last_nearest = np.zeros((box_number,))
42 |         np.random.seed()
43 |         clusters = boxes[np.random.choice(
44 |             box_number, k, replace=False)]  # init k clusters
45 |         while True:
46 |
47 |             distances = 1 - self.iou(boxes, clusters)
48 |
49 |             current_nearest = np.argmin(distances, axis=1)
50 |             if (last_nearest == current_nearest).all():
51 |                 break  # clusters won't change
52 |             for cluster in range(k):
53 |                 clusters[cluster] = dist(  # update clusters
54 |                     boxes[current_nearest == cluster], axis=0)
55 |
56 |             last_nearest = current_nearest
57 |
58 |         return clusters
59 |
60 |     def result2txt(self, data):
61 |         f = open("yolo_anchors.txt", 'w')
62 |         row = np.shape(data)[0]
63 |         for i in range(row):
64 |             if i == 0:
65 |                 x_y = "%d,%d" % (data[i][0], data[i][1])
66 |             else:
67 |                 x_y = ", %d,%d" % (data[i][0], data[i][1])
68 |             f.write(x_y)
69 |         f.close()
70 |
71 |     def txt2boxes(self):
72 |         f = open(self.filename, 'r')
73 |         dataSet = []
74 |         for
line in f: 75 | infos = line.split(" ") 76 | length = len(infos) 77 | for i in range(1, length): 78 | width = int(infos[i].split(",")[2]) - \ 79 | int(infos[i].split(",")[0]) 80 | height = int(infos[i].split(",")[3]) - \ 81 | int(infos[i].split(",")[1]) 82 | dataSet.append([width, height]) 83 | result = np.array(dataSet) 84 | f.close() 85 | return result 86 | 87 | def txt2clusters(self): 88 | all_boxes = self.txt2boxes() 89 | result = self.kmeans(all_boxes, k=self.cluster_number) 90 | result = result[np.lexsort(result.T[0, None])] 91 | self.result2txt(result) 92 | print("K anchors:\n {}".format(result)) 93 | print("Accuracy: {:.2f}%".format( 94 | self.avg_iou(all_boxes, result) * 100)) 95 | 96 | 97 | if __name__ == "__main__": 98 | cluster_number = 9 99 | filename = "2012_train.txt" 100 | kmeans = YOLO_Kmeans(cluster_number, filename) 101 | kmeans.txt2clusters() 102 | -------------------------------------------------------------------------------- /model_data/coco_classes.txt: -------------------------------------------------------------------------------- 1 | person 2 | bicycle 3 | car 4 | motorbike 5 | aeroplane 6 | bus 7 | train 8 | truck 9 | boat 10 | traffic light 11 | fire hydrant 12 | stop sign 13 | parking meter 14 | bench 15 | bird 16 | cat 17 | dog 18 | horse 19 | sheep 20 | cow 21 | elephant 22 | bear 23 | zebra 24 | giraffe 25 | backpack 26 | umbrella 27 | handbag 28 | tie 29 | suitcase 30 | frisbee 31 | skis 32 | snowboard 33 | sports ball 34 | kite 35 | baseball bat 36 | baseball glove 37 | skateboard 38 | surfboard 39 | tennis racket 40 | bottle 41 | wine glass 42 | cup 43 | fork 44 | knife 45 | spoon 46 | bowl 47 | banana 48 | apple 49 | sandwich 50 | orange 51 | broccoli 52 | carrot 53 | hot dog 54 | pizza 55 | donut 56 | cake 57 | chair 58 | sofa 59 | pottedplant 60 | bed 61 | diningtable 62 | toilet 63 | tvmonitor 64 | laptop 65 | mouse 66 | remote 67 | keyboard 68 | cell phone 69 | microwave 70 | oven 71 | toaster 72 | sink 73 | refrigerator 74 | book 75 | clock 76 | vase 77 | scissors 78 | teddy bear 79 | hair drier 80 | toothbrush 81 | -------------------------------------------------------------------------------- /model_data/tiny_yolo_anchors.txt: -------------------------------------------------------------------------------- 1 | 10,14, 23,27, 37,58, 81,82, 135,169, 344,319 2 | -------------------------------------------------------------------------------- /model_data/voc_classes.txt: -------------------------------------------------------------------------------- 1 | aeroplane 2 | bicycle 3 | bird 4 | boat 5 | bottle 6 | bus 7 | car 8 | cat 9 | chair 10 | cow 11 | diningtable 12 | dog 13 | horse 14 | motorbike 15 | person 16 | pottedplant 17 | sheep 18 | sofa 19 | train 20 | tvmonitor 21 | -------------------------------------------------------------------------------- /model_data/yolo_anchors.txt: -------------------------------------------------------------------------------- 1 | 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326 2 | -------------------------------------------------------------------------------- /train.py: -------------------------------------------------------------------------------- 1 | """ 2 | Retrain the YOLO model for your own dataset. 
3 | """ 4 | 5 | import numpy as np 6 | import keras.backend as K 7 | from keras.layers import Input, Lambda 8 | from keras.models import Model 9 | from keras.optimizers import Adam 10 | from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau, EarlyStopping 11 | 12 | from yolo3.model import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss 13 | from yolo3.utils import get_random_data 14 | 15 | 16 | def _main(): 17 | annotation_path = 'train.txt' 18 | log_dir = 'logs/000/' 19 | classes_path = 'model_data/voc_classes.txt' 20 | anchors_path = 'model_data/yolo_anchors.txt' 21 | class_names = get_classes(classes_path) 22 | num_classes = len(class_names) 23 | anchors = get_anchors(anchors_path) 24 | 25 | input_shape = (416,416) # multiple of 32, hw 26 | 27 | is_tiny_version = len(anchors)==6 # default setting 28 | if is_tiny_version: 29 | model = create_tiny_model(input_shape, anchors, num_classes, 30 | freeze_body=2, weights_path='model_data/tiny_yolo_weights.h5') 31 | else: 32 | model = create_model(input_shape, anchors, num_classes, 33 | freeze_body=2, weights_path='model_data/yolo_weights.h5') # make sure you know what you freeze 34 | 35 | logging = TensorBoard(log_dir=log_dir) 36 | checkpoint = ModelCheckpoint(log_dir + 'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5', 37 | monitor='val_loss', save_weights_only=True, save_best_only=True, period=3) 38 | reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1) 39 | early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1) 40 | 41 | val_split = 0.1 42 | with open(annotation_path) as f: 43 | lines = f.readlines() 44 | np.random.seed(10101) 45 | np.random.shuffle(lines) 46 | np.random.seed(None) 47 | num_val = int(len(lines)*val_split) 48 | num_train = len(lines) - num_val 49 | 50 | # Train with frozen layers first, to get a stable loss. 51 | # Adjust num epochs to your dataset. This step is enough to obtain a not bad model. 52 | if True: 53 | model.compile(optimizer=Adam(lr=1e-3), loss={ 54 | # use custom yolo_loss Lambda layer. 55 | 'yolo_loss': lambda y_true, y_pred: y_pred}) 56 | 57 | batch_size = 32 58 | print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size)) 59 | model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes), 60 | steps_per_epoch=max(1, num_train//batch_size), 61 | validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes), 62 | validation_steps=max(1, num_val//batch_size), 63 | epochs=50, 64 | initial_epoch=0, 65 | callbacks=[logging, checkpoint]) 66 | model.save_weights(log_dir + 'trained_weights_stage_1.h5') 67 | 68 | # Unfreeze and continue training, to fine-tune. 69 | # Train longer if the result is not good. 
70 | if True: 71 | for i in range(len(model.layers)): 72 | model.layers[i].trainable = True 73 | model.compile(optimizer=Adam(lr=1e-4), loss={'yolo_loss': lambda y_true, y_pred: y_pred}) # recompile to apply the change 74 | print('Unfreeze all of the layers.') 75 | 76 | batch_size = 32 # note that more GPU memory is required after unfreezing the body 77 | print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size)) 78 | model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes), 79 | steps_per_epoch=max(1, num_train//batch_size), 80 | validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes), 81 | validation_steps=max(1, num_val//batch_size), 82 | epochs=100, 83 | initial_epoch=50, 84 | callbacks=[logging, checkpoint, reduce_lr, early_stopping]) 85 | model.save_weights(log_dir + 'trained_weights_final.h5') 86 | 87 | # Further training if needed. 88 | 89 | 90 | def get_classes(classes_path): 91 | '''loads the classes''' 92 | with open(classes_path) as f: 93 | class_names = f.readlines() 94 | class_names = [c.strip() for c in class_names] 95 | return class_names 96 | 97 | def get_anchors(anchors_path): 98 | '''loads the anchors from a file''' 99 | with open(anchors_path) as f: 100 | anchors = f.readline() 101 | anchors = [float(x) for x in anchors.split(',')] 102 | return np.array(anchors).reshape(-1, 2) 103 | 104 | 105 | def create_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2, 106 | weights_path='model_data/yolo_weights.h5'): 107 | '''create the training model''' 108 | K.clear_session() # get a new session 109 | image_input = Input(shape=(None, None, 3)) 110 | h, w = input_shape 111 | num_anchors = len(anchors) 112 | 113 | y_true = [Input(shape=(h//{0:32, 1:16, 2:8}[l], w//{0:32, 1:16, 2:8}[l], \ 114 | num_anchors//3, num_classes+5)) for l in range(3)] 115 | 116 | model_body = yolo_body(image_input, num_anchors//3, num_classes) 117 | print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes)) 118 | 119 | if load_pretrained: 120 | model_body.load_weights(weights_path, by_name=True, skip_mismatch=True) 121 | print('Load weights {}.'.format(weights_path)) 122 | if freeze_body in [1, 2]: 123 | # Freeze darknet53 body or freeze all but 3 output layers. 
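            # freeze_body=1 freezes the first 185 layers (the DarkNet-53
            # feature extractor); freeze_body=2 freezes everything except the
            # three output conv layers, one per detection scale.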
124 | num = (185, len(model_body.layers)-3)[freeze_body-1] 125 | for i in range(num): model_body.layers[i].trainable = False 126 | print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers))) 127 | 128 | model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss', 129 | arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})( 130 | [*model_body.output, *y_true]) 131 | model = Model([model_body.input, *y_true], model_loss) 132 | 133 | return model 134 | 135 | def create_tiny_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2, 136 | weights_path='model_data/tiny_yolo_weights.h5'): 137 | '''create the training model, for Tiny YOLOv3''' 138 | K.clear_session() # get a new session 139 | image_input = Input(shape=(None, None, 3)) 140 | h, w = input_shape 141 | num_anchors = len(anchors) 142 | 143 | y_true = [Input(shape=(h//{0:32, 1:16}[l], w//{0:32, 1:16}[l], \ 144 | num_anchors//2, num_classes+5)) for l in range(2)] 145 | 146 | model_body = tiny_yolo_body(image_input, num_anchors//2, num_classes) 147 | print('Create Tiny YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes)) 148 | 149 | if load_pretrained: 150 | model_body.load_weights(weights_path, by_name=True, skip_mismatch=True) 151 | print('Load weights {}.'.format(weights_path)) 152 | if freeze_body in [1, 2]: 153 | # Freeze the darknet body or freeze all but 2 output layers. 154 | num = (20, len(model_body.layers)-2)[freeze_body-1] 155 | for i in range(num): model_body.layers[i].trainable = False 156 | print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers))) 157 | 158 | model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss', 159 | arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.7})( 160 | [*model_body.output, *y_true]) 161 | model = Model([model_body.input, *y_true], model_loss) 162 | 163 | return model 164 | 165 | def data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes): 166 | '''data generator for fit_generator''' 167 | n = len(annotation_lines) 168 | i = 0 169 | while True: 170 | image_data = [] 171 | box_data = [] 172 | for b in range(batch_size): 173 | if i==0: 174 | np.random.shuffle(annotation_lines) 175 | image, box = get_random_data(annotation_lines[i], input_shape, random=True) 176 | image_data.append(image) 177 | box_data.append(box) 178 | i = (i+1) % n 179 | image_data = np.array(image_data) 180 | box_data = np.array(box_data) 181 | y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes) 182 | yield [image_data, *y_true], np.zeros(batch_size) 183 | 184 | def data_generator_wrapper(annotation_lines, batch_size, input_shape, anchors, num_classes): 185 | n = len(annotation_lines) 186 | if n==0 or batch_size<=0: return None 187 | return data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes) 188 | 189 | if __name__ == '__main__': 190 | _main() 191 | -------------------------------------------------------------------------------- /train_bottleneck.py: -------------------------------------------------------------------------------- 1 | """ 2 | Retrain the YOLO model for your own dataset. 
3 | """ 4 | import os 5 | import numpy as np 6 | import keras.backend as K 7 | from keras.layers import Input, Lambda 8 | from keras.models import Model 9 | from keras.optimizers import Adam 10 | from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau, EarlyStopping 11 | 12 | from yolo3.model import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss 13 | from yolo3.utils import get_random_data 14 | 15 | 16 | def _main(): 17 | annotation_path = 'train.txt' 18 | log_dir = 'logs/000/' 19 | classes_path = 'model_data/coco_classes.txt' 20 | anchors_path = 'model_data/yolo_anchors.txt' 21 | class_names = get_classes(classes_path) 22 | num_classes = len(class_names) 23 | anchors = get_anchors(anchors_path) 24 | 25 | input_shape = (416,416) # multiple of 32, hw 26 | 27 | model, bottleneck_model, last_layer_model = create_model(input_shape, anchors, num_classes, 28 | freeze_body=2, weights_path='model_data/yolo_weights.h5') # make sure you know what you freeze 29 | 30 | logging = TensorBoard(log_dir=log_dir) 31 | checkpoint = ModelCheckpoint(log_dir + 'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5', 32 | monitor='val_loss', save_weights_only=True, save_best_only=True, period=3) 33 | reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1) 34 | early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1) 35 | 36 | val_split = 0.1 37 | with open(annotation_path) as f: 38 | lines = f.readlines() 39 | np.random.seed(10101) 40 | np.random.shuffle(lines) 41 | np.random.seed(None) 42 | num_val = int(len(lines)*val_split) 43 | num_train = len(lines) - num_val 44 | 45 | # Train with frozen layers first, to get a stable loss. 46 | # Adjust num epochs to your dataset. This step is enough to obtain a not bad model. 
47 | if True: 48 | # perform bottleneck training 49 | if not os.path.isfile("bottlenecks.npz"): 50 | print("calculating bottlenecks") 51 | batch_size=8 52 | bottlenecks=bottleneck_model.predict_generator(data_generator_wrapper(lines, batch_size, input_shape, anchors, num_classes, random=False, verbose=True), 53 | steps=(len(lines)//batch_size)+1, max_queue_size=1) 54 | np.savez("bottlenecks.npz", bot0=bottlenecks[0], bot1=bottlenecks[1], bot2=bottlenecks[2]) 55 | 56 | # load bottleneck features from file 57 | dict_bot=np.load("bottlenecks.npz") 58 | bottlenecks_train=[dict_bot["bot0"][:num_train], dict_bot["bot1"][:num_train], dict_bot["bot2"][:num_train]] 59 | bottlenecks_val=[dict_bot["bot0"][num_train:], dict_bot["bot1"][num_train:], dict_bot["bot2"][num_train:]] 60 | 61 | # train last layers with fixed bottleneck features 62 | batch_size=8 63 | print("Training last layers with bottleneck features") 64 | print('with {} samples, val on {} samples and batch size {}.'.format(num_train, num_val, batch_size)) 65 | last_layer_model.compile(optimizer='adam', loss={'yolo_loss': lambda y_true, y_pred: y_pred}) 66 | last_layer_model.fit_generator(bottleneck_generator(lines[:num_train], batch_size, input_shape, anchors, num_classes, bottlenecks_train), 67 | steps_per_epoch=max(1, num_train//batch_size), 68 | validation_data=bottleneck_generator(lines[num_train:], batch_size, input_shape, anchors, num_classes, bottlenecks_val), 69 | validation_steps=max(1, num_val//batch_size), 70 | epochs=30, 71 | initial_epoch=0, max_queue_size=1) 72 | model.save_weights(log_dir + 'trained_weights_stage_0.h5') 73 | 74 | # train last layers with random augmented data 75 | model.compile(optimizer=Adam(lr=1e-3), loss={ 76 | # use custom yolo_loss Lambda layer. 77 | 'yolo_loss': lambda y_true, y_pred: y_pred}) 78 | batch_size = 16 79 | print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size)) 80 | model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes), 81 | steps_per_epoch=max(1, num_train//batch_size), 82 | validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes), 83 | validation_steps=max(1, num_val//batch_size), 84 | epochs=50, 85 | initial_epoch=0, 86 | callbacks=[logging, checkpoint]) 87 | model.save_weights(log_dir + 'trained_weights_stage_1.h5') 88 | 89 | # Unfreeze and continue training, to fine-tune. 90 | # Train longer if the result is not good. 91 | if True: 92 | for i in range(len(model.layers)): 93 | model.layers[i].trainable = True 94 | model.compile(optimizer=Adam(lr=1e-4), loss={'yolo_loss': lambda y_true, y_pred: y_pred}) # recompile to apply the change 95 | print('Unfreeze all of the layers.') 96 | 97 | batch_size = 4 # note that more GPU memory is required after unfreezing the body 98 | print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size)) 99 | model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes), 100 | steps_per_epoch=max(1, num_train//batch_size), 101 | validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes), 102 | validation_steps=max(1, num_val//batch_size), 103 | epochs=100, 104 | initial_epoch=50, 105 | callbacks=[logging, checkpoint, reduce_lr, early_stopping]) 106 | model.save_weights(log_dir + 'trained_weights_final.h5') 107 | 108 | # Further training if needed. 
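# The helpers below mirror train.py: the classes file holds one class name
# per line, and the anchors file holds a single comma-separated line of
# numbers that get_anchors reshapes into (N, 2) width/height pairs.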
109 | 110 | 111 | def get_classes(classes_path): 112 | '''loads the classes''' 113 | with open(classes_path) as f: 114 | class_names = f.readlines() 115 | class_names = [c.strip() for c in class_names] 116 | return class_names 117 | 118 | def get_anchors(anchors_path): 119 | '''loads the anchors from a file''' 120 | with open(anchors_path) as f: 121 | anchors = f.readline() 122 | anchors = [float(x) for x in anchors.split(',')] 123 | return np.array(anchors).reshape(-1, 2) 124 | 125 | 126 | def create_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2, 127 | weights_path='model_data/yolo_weights.h5'): 128 | '''create the training model''' 129 | K.clear_session() # get a new session 130 | image_input = Input(shape=(None, None, 3)) 131 | h, w = input_shape 132 | num_anchors = len(anchors) 133 | 134 | y_true = [Input(shape=(h//{0:32, 1:16, 2:8}[l], w//{0:32, 1:16, 2:8}[l], \ 135 | num_anchors//3, num_classes+5)) for l in range(3)] 136 | 137 | model_body = yolo_body(image_input, num_anchors//3, num_classes) 138 | print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes)) 139 | 140 | if load_pretrained: 141 | model_body.load_weights(weights_path, by_name=True, skip_mismatch=True) 142 | print('Load weights {}.'.format(weights_path)) 143 | if freeze_body in [1, 2]: 144 | # Freeze darknet53 body or freeze all but 3 output layers. 145 | num = (185, len(model_body.layers)-3)[freeze_body-1] 146 | for i in range(num): model_body.layers[i].trainable = False 147 | print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers))) 148 | 149 | # get output of second last layers and create bottleneck model of it 150 | out1=model_body.layers[246].output 151 | out2=model_body.layers[247].output 152 | out3=model_body.layers[248].output 153 | bottleneck_model = Model([model_body.input, *y_true], [out1, out2, out3]) 154 | 155 | # create last layer model of last layers from yolo model 156 | in0 = Input(shape=bottleneck_model.output[0].shape[1:].as_list()) 157 | in1 = Input(shape=bottleneck_model.output[1].shape[1:].as_list()) 158 | in2 = Input(shape=bottleneck_model.output[2].shape[1:].as_list()) 159 | last_out0=model_body.layers[249](in0) 160 | last_out1=model_body.layers[250](in1) 161 | last_out2=model_body.layers[251](in2) 162 | model_last=Model(inputs=[in0, in1, in2], outputs=[last_out0, last_out1, last_out2]) 163 | model_loss_last =Lambda(yolo_loss, output_shape=(1,), name='yolo_loss', 164 | arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})( 165 | [*model_last.output, *y_true]) 166 | last_layer_model = Model([in0,in1,in2, *y_true], model_loss_last) 167 | 168 | 169 | model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss', 170 | arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})( 171 | [*model_body.output, *y_true]) 172 | model = Model([model_body.input, *y_true], model_loss) 173 | 174 | return model, bottleneck_model, last_layer_model 175 | 176 | def data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes, random=True, verbose=False): 177 | '''data generator for fit_generator''' 178 | n = len(annotation_lines) 179 | i = 0 180 | while True: 181 | image_data = [] 182 | box_data = [] 183 | for b in range(batch_size): 184 | if i==0 and random: 185 | np.random.shuffle(annotation_lines) 186 | image, box = get_random_data(annotation_lines[i], input_shape, random=random) 187 | image_data.append(image) 188 | box_data.append(box) 189 | i 
= (i+1) % n 190 | image_data = np.array(image_data) 191 | if verbose: 192 | print("Progress: ",i,"/",n) 193 | box_data = np.array(box_data) 194 | y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes) 195 | yield [image_data, *y_true], np.zeros(batch_size) 196 | 197 | def data_generator_wrapper(annotation_lines, batch_size, input_shape, anchors, num_classes, random=True, verbose=False): 198 | n = len(annotation_lines) 199 | if n==0 or batch_size<=0: return None 200 | return data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes, random, verbose) 201 | 202 | def bottleneck_generator(annotation_lines, batch_size, input_shape, anchors, num_classes, bottlenecks): 203 | n = len(annotation_lines) 204 | i = 0 205 | while True: 206 | box_data = [] 207 | b0=np.zeros((batch_size,bottlenecks[0].shape[1],bottlenecks[0].shape[2],bottlenecks[0].shape[3])) 208 | b1=np.zeros((batch_size,bottlenecks[1].shape[1],bottlenecks[1].shape[2],bottlenecks[1].shape[3])) 209 | b2=np.zeros((batch_size,bottlenecks[2].shape[1],bottlenecks[2].shape[2],bottlenecks[2].shape[3])) 210 | for b in range(batch_size): 211 | _, box = get_random_data(annotation_lines[i], input_shape, random=False, proc_img=False) 212 | box_data.append(box) 213 | b0[b]=bottlenecks[0][i] 214 | b1[b]=bottlenecks[1][i] 215 | b2[b]=bottlenecks[2][i] 216 | i = (i+1) % n 217 | box_data = np.array(box_data) 218 | y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes) 219 | yield [b0, b1, b2, *y_true], np.zeros(batch_size) 220 | 221 | if __name__ == '__main__': 222 | _main() 223 | -------------------------------------------------------------------------------- /voc_annotation.py: -------------------------------------------------------------------------------- 1 | import xml.etree.ElementTree as ET 2 | from os import getcwd 3 | 4 | sets=[('2007', 'train'), ('2007', 'val'), ('2007', 'test')] 5 | 6 | classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"] 7 | 8 | 9 | def convert_annotation(year, image_id, list_file): 10 | in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml'%(year, image_id)) 11 | tree=ET.parse(in_file) 12 | root = tree.getroot() 13 | 14 | for obj in root.iter('object'): 15 | difficult = obj.find('difficult').text 16 | cls = obj.find('name').text 17 | if cls not in classes or int(difficult)==1: 18 | continue 19 | cls_id = classes.index(cls) 20 | xmlbox = obj.find('bndbox') 21 | b = (int(xmlbox.find('xmin').text), int(xmlbox.find('ymin').text), int(xmlbox.find('xmax').text), int(xmlbox.find('ymax').text)) 22 | list_file.write(" " + ",".join([str(a) for a in b]) + ',' + str(cls_id)) 23 | 24 | wd = getcwd() 25 | 26 | for year, image_set in sets: 27 | image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt'%(year, image_set)).read().strip().split() 28 | list_file = open('%s_%s.txt'%(year, image_set), 'w') 29 | for image_id in image_ids: 30 | list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg'%(wd, year, image_id)) 31 | convert_annotation(year, image_id, list_file) 32 | list_file.write('\n') 33 | list_file.close() 34 | 35 | -------------------------------------------------------------------------------- /yolo.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Class definition of YOLO_v3 style detection model on image and video 4 | """ 5 | 6 | import colorsys 7 | 
import os 8 | from timeit import default_timer as timer 9 | 10 | import numpy as np 11 | from keras import backend as K 12 | from keras.models import load_model 13 | from keras.layers import Input 14 | from PIL import Image, ImageFont, ImageDraw 15 | 16 | from yolo3.model import yolo_eval, yolo_body, tiny_yolo_body 17 | from yolo3.utils import letterbox_image 18 | import os 19 | from keras.utils import multi_gpu_model 20 | 21 | class YOLO(object): 22 | _defaults = { 23 | "model_path": 'model_data/yolo.h5', 24 | "anchors_path": 'model_data/yolo_anchors.txt', 25 | "classes_path": 'model_data/coco_classes.txt', 26 | "score" : 0.3, 27 | "iou" : 0.45, 28 | "model_image_size" : (416, 416), 29 | "gpu_num" : 1, 30 | } 31 | 32 | @classmethod 33 | def get_defaults(cls, n): 34 | if n in cls._defaults: 35 | return cls._defaults[n] 36 | else: 37 | return "Unrecognized attribute name '" + n + "'" 38 | 39 | def __init__(self, **kwargs): 40 | self.__dict__.update(self._defaults) # set up default values 41 | self.__dict__.update(kwargs) # and update with user overrides 42 | self.class_names = self._get_class() 43 | self.anchors = self._get_anchors() 44 | self.sess = K.get_session() 45 | self.boxes, self.scores, self.classes = self.generate() 46 | 47 | def _get_class(self): 48 | classes_path = os.path.expanduser(self.classes_path) 49 | with open(classes_path) as f: 50 | class_names = f.readlines() 51 | class_names = [c.strip() for c in class_names] 52 | return class_names 53 | 54 | def _get_anchors(self): 55 | anchors_path = os.path.expanduser(self.anchors_path) 56 | with open(anchors_path) as f: 57 | anchors = f.readline() 58 | anchors = [float(x) for x in anchors.split(',')] 59 | return np.array(anchors).reshape(-1, 2) 60 | 61 | def generate(self): 62 | model_path = os.path.expanduser(self.model_path) 63 | assert model_path.endswith('.h5'), 'Keras model or weights must be a .h5 file.' 64 | 65 | # Load model, or construct model and load weights. 66 | num_anchors = len(self.anchors) 67 | num_classes = len(self.class_names) 68 | is_tiny_version = num_anchors==6 # default setting 69 | try: 70 | self.yolo_model = load_model(model_path, compile=False) 71 | except: 72 | self.yolo_model = tiny_yolo_body(Input(shape=(None,None,3)), num_anchors//2, num_classes) \ 73 | if is_tiny_version else yolo_body(Input(shape=(None,None,3)), num_anchors//3, num_classes) 74 | self.yolo_model.load_weights(self.model_path) # make sure model, anchors and classes match 75 | else: 76 | assert self.yolo_model.layers[-1].output_shape[-1] == \ 77 | num_anchors/len(self.yolo_model.output) * (num_classes + 5), \ 78 | 'Mismatch between model and given anchor and class sizes' 79 | 80 | print('{} model, anchors, and classes loaded.'.format(model_path)) 81 | 82 | # Generate colors for drawing bounding boxes. 83 | hsv_tuples = [(x / len(self.class_names), 1., 1.) 84 | for x in range(len(self.class_names))] 85 | self.colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples)) 86 | self.colors = list( 87 | map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), 88 | self.colors)) 89 | np.random.seed(10101) # Fixed seed for consistent colors across runs. 90 | np.random.shuffle(self.colors) # Shuffle colors to decorrelate adjacent classes. 91 | np.random.seed(None) # Reset seed to default. 92 | 93 | # Generate output tensor targets for filtered bounding boxes. 
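`_get_anchors` above expects the whole anchor set on a single comma-separated line, and `generate` then infers the tiny variant from the anchor count. A standalone equivalent of that parsing (illustration only, not repo code; the anchor values are the stock set from yolov3.cfg):

```python
import numpy as np

# model_data/yolo_anchors.txt holds one line such as
# "10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326"
def read_anchors(path):
    with open(path) as f:
        values = [float(x) for x in f.readline().split(',')]
    return np.array(values).reshape(-1, 2)  # rows of (width, height) in pixels

anchors = read_anchors('model_data/yolo_anchors.txt')
print(anchors.shape)  # (9, 2) for full YOLOv3; (6, 2) selects the tiny model
```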
94 | self.input_image_shape = K.placeholder(shape=(2, )) 95 | if self.gpu_num>=2: 96 | self.yolo_model = multi_gpu_model(self.yolo_model, gpus=self.gpu_num) 97 | boxes, scores, classes = yolo_eval(self.yolo_model.output, self.anchors, 98 | len(self.class_names), self.input_image_shape, 99 | score_threshold=self.score, iou_threshold=self.iou) 100 | return boxes, scores, classes 101 | 102 | def detect_image(self, image): 103 | start = timer() 104 | 105 | if self.model_image_size != (None, None): 106 | assert self.model_image_size[0]%32 == 0, 'Multiples of 32 required' 107 | assert self.model_image_size[1]%32 == 0, 'Multiples of 32 required' 108 | boxed_image = letterbox_image(image, tuple(reversed(self.model_image_size))) 109 | else: 110 | new_image_size = (image.width - (image.width % 32), 111 | image.height - (image.height % 32)) 112 | boxed_image = letterbox_image(image, new_image_size) 113 | image_data = np.array(boxed_image, dtype='float32') 114 | 115 | print(image_data.shape) 116 | image_data /= 255. 117 | image_data = np.expand_dims(image_data, 0) # Add batch dimension. 118 | 119 | out_boxes, out_scores, out_classes = self.sess.run( 120 | [self.boxes, self.scores, self.classes], 121 | feed_dict={ 122 | self.yolo_model.input: image_data, 123 | self.input_image_shape: [image.size[1], image.size[0]], 124 | K.learning_phase(): 0 125 | }) 126 | 127 | print('Found {} boxes for {}'.format(len(out_boxes), 'img')) 128 | 129 | font = ImageFont.truetype(font='font/FiraMono-Medium.otf', 130 | size=np.floor(3e-2 * image.size[1] + 0.5).astype('int32')) 131 | thickness = (image.size[0] + image.size[1]) // 300 132 | 133 | for i, c in reversed(list(enumerate(out_classes))): 134 | predicted_class = self.class_names[c] 135 | box = out_boxes[i] 136 | score = out_scores[i] 137 | 138 | label = '{} {:.2f}'.format(predicted_class, score) 139 | draw = ImageDraw.Draw(image) 140 | label_size = draw.textsize(label, font) 141 | 142 | top, left, bottom, right = box 143 | top = max(0, np.floor(top + 0.5).astype('int32')) 144 | left = max(0, np.floor(left + 0.5).astype('int32')) 145 | bottom = min(image.size[1], np.floor(bottom + 0.5).astype('int32')) 146 | right = min(image.size[0], np.floor(right + 0.5).astype('int32')) 147 | print(label, (left, top), (right, bottom)) 148 | 149 | if top - label_size[1] >= 0: 150 | text_origin = np.array([left, top - label_size[1]]) 151 | else: 152 | text_origin = np.array([left, top + 1]) 153 | 154 | # My kingdom for a good redistributable image drawing library. 155 | for i in range(thickness): 156 | draw.rectangle( 157 | [left + i, top + i, right - i, bottom - i], 158 | outline=self.colors[c]) 159 | draw.rectangle( 160 | [tuple(text_origin), tuple(text_origin + label_size)], 161 | fill=self.colors[c]) 162 | draw.text(text_origin, label, fill=(0, 0, 0), font=font) 163 | del draw 164 | 165 | end = timer() 166 | print(end - start) 167 | return image 168 | 169 | def close_session(self): 170 | self.sess.close() 171 | 172 | def detect_video(yolo, video_path, output_path=""): 173 | import cv2 174 | vid = cv2.VideoCapture(video_path) 175 | if not vid.isOpened(): 176 | raise IOError("Couldn't open webcam or video") 177 | video_FourCC = int(vid.get(cv2.CAP_PROP_FOURCC)) 178 | video_fps = vid.get(cv2.CAP_PROP_FPS) 179 | video_size = (int(vid.get(cv2.CAP_PROP_FRAME_WIDTH)), 180 | int(vid.get(cv2.CAP_PROP_FRAME_HEIGHT))) 181 | isOutput = True if output_path != "" else False 182 | if isOutput: 183 | print("!!! 
TYPE:", type(output_path), type(video_FourCC), type(video_fps), type(video_size)) 184 | out = cv2.VideoWriter(output_path, video_FourCC, video_fps, video_size) 185 | accum_time = 0 186 | curr_fps = 0 187 | fps = "FPS: ??" 188 | prev_time = timer() 189 | while True: 190 | return_value, frame = vid.read() 191 | image = Image.fromarray(frame) 192 | image = yolo.detect_image(image) 193 | result = np.asarray(image) 194 | curr_time = timer() 195 | exec_time = curr_time - prev_time 196 | prev_time = curr_time 197 | accum_time = accum_time + exec_time 198 | curr_fps = curr_fps + 1 199 | if accum_time > 1: 200 | accum_time = accum_time - 1 201 | fps = "FPS: " + str(curr_fps) 202 | curr_fps = 0 203 | cv2.putText(result, text=fps, org=(3, 15), fontFace=cv2.FONT_HERSHEY_SIMPLEX, 204 | fontScale=0.50, color=(255, 0, 0), thickness=2) 205 | cv2.namedWindow("result", cv2.WINDOW_NORMAL) 206 | cv2.imshow("result", result) 207 | if isOutput: 208 | out.write(result) 209 | if cv2.waitKey(1) & 0xFF == ord('q'): 210 | break 211 | yolo.close_session() 212 | 213 | -------------------------------------------------------------------------------- /yolo3/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qqwweee/keras-yolo3/e6598d13c703029b2686bc2eb8d5c09badf42992/yolo3/__init__.py -------------------------------------------------------------------------------- /yolo3/model.py: -------------------------------------------------------------------------------- 1 | """YOLO_v3 Model Defined in Keras.""" 2 | 3 | from functools import wraps 4 | 5 | import numpy as np 6 | import tensorflow as tf 7 | from keras import backend as K 8 | from keras.layers import Conv2D, Add, ZeroPadding2D, UpSampling2D, Concatenate, MaxPooling2D 9 | from keras.layers.advanced_activations import LeakyReLU 10 | from keras.layers.normalization import BatchNormalization 11 | from keras.models import Model 12 | from keras.regularizers import l2 13 | 14 | from yolo3.utils import compose 15 | 16 | 17 | @wraps(Conv2D) 18 | def DarknetConv2D(*args, **kwargs): 19 | """Wrapper to set Darknet parameters for Convolution2D.""" 20 | darknet_conv_kwargs = {'kernel_regularizer': l2(5e-4)} 21 | darknet_conv_kwargs['padding'] = 'valid' if kwargs.get('strides')==(2,2) else 'same' 22 | darknet_conv_kwargs.update(kwargs) 23 | return Conv2D(*args, **darknet_conv_kwargs) 24 | 25 | def DarknetConv2D_BN_Leaky(*args, **kwargs): 26 | """Darknet Convolution2D followed by BatchNormalization and LeakyReLU.""" 27 | no_bias_kwargs = {'use_bias': False} 28 | no_bias_kwargs.update(kwargs) 29 | return compose( 30 | DarknetConv2D(*args, **no_bias_kwargs), 31 | BatchNormalization(), 32 | LeakyReLU(alpha=0.1)) 33 | 34 | def resblock_body(x, num_filters, num_blocks): 35 | '''A series of resblocks starting with a downsampling Convolution2D''' 36 | # Darknet uses left and top padding instead of 'same' mode 37 | x = ZeroPadding2D(((1,0),(1,0)))(x) 38 | x = DarknetConv2D_BN_Leaky(num_filters, (3,3), strides=(2,2))(x) 39 | for i in range(num_blocks): 40 | y = compose( 41 | DarknetConv2D_BN_Leaky(num_filters//2, (1,1)), 42 | DarknetConv2D_BN_Leaky(num_filters, (3,3)))(x) 43 | x = Add()([x,y]) 44 | return x 45 | 46 | def darknet_body(x): 47 | '''Darknet body having 52 Convolution2D layers''' 48 | x = DarknetConv2D_BN_Leaky(32, (3,3))(x) 49 | x = resblock_body(x, 64, 1) 50 | x = resblock_body(x, 128, 2) 51 | x = resblock_body(x, 256, 8) 52 | x = resblock_body(x, 512, 8) 53 | x = resblock_body(x, 1024, 4) 54 | return 
x 55 | 56 | def make_last_layers(x, num_filters, out_filters): 57 | '''6 Conv2D_BN_Leaky layers followed by a Conv2D_linear layer''' 58 | x = compose( 59 | DarknetConv2D_BN_Leaky(num_filters, (1,1)), 60 | DarknetConv2D_BN_Leaky(num_filters*2, (3,3)), 61 | DarknetConv2D_BN_Leaky(num_filters, (1,1)), 62 | DarknetConv2D_BN_Leaky(num_filters*2, (3,3)), 63 | DarknetConv2D_BN_Leaky(num_filters, (1,1)))(x) 64 | y = compose( 65 | DarknetConv2D_BN_Leaky(num_filters*2, (3,3)), 66 | DarknetConv2D(out_filters, (1,1)))(x) 67 | return x, y 68 | 69 | 70 | def yolo_body(inputs, num_anchors, num_classes): 71 | """Create YOLO_V3 model CNN body in Keras.""" 72 | darknet = Model(inputs, darknet_body(inputs)) 73 | x, y1 = make_last_layers(darknet.output, 512, num_anchors*(num_classes+5)) 74 | 75 | x = compose( 76 | DarknetConv2D_BN_Leaky(256, (1,1)), 77 | UpSampling2D(2))(x) 78 | x = Concatenate()([x,darknet.layers[152].output]) 79 | x, y2 = make_last_layers(x, 256, num_anchors*(num_classes+5)) 80 | 81 | x = compose( 82 | DarknetConv2D_BN_Leaky(128, (1,1)), 83 | UpSampling2D(2))(x) 84 | x = Concatenate()([x,darknet.layers[92].output]) 85 | x, y3 = make_last_layers(x, 128, num_anchors*(num_classes+5)) 86 | 87 | return Model(inputs, [y1,y2,y3]) 88 | 89 | def tiny_yolo_body(inputs, num_anchors, num_classes): 90 | '''Create Tiny YOLO_v3 model CNN body in Keras.''' 91 | x1 = compose( 92 | DarknetConv2D_BN_Leaky(16, (3,3)), 93 | MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'), 94 | DarknetConv2D_BN_Leaky(32, (3,3)), 95 | MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'), 96 | DarknetConv2D_BN_Leaky(64, (3,3)), 97 | MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'), 98 | DarknetConv2D_BN_Leaky(128, (3,3)), 99 | MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'), 100 | DarknetConv2D_BN_Leaky(256, (3,3)))(inputs) 101 | x2 = compose( 102 | MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'), 103 | DarknetConv2D_BN_Leaky(512, (3,3)), 104 | MaxPooling2D(pool_size=(2,2), strides=(1,1), padding='same'), 105 | DarknetConv2D_BN_Leaky(1024, (3,3)), 106 | DarknetConv2D_BN_Leaky(256, (1,1)))(x1) 107 | y1 = compose( 108 | DarknetConv2D_BN_Leaky(512, (3,3)), 109 | DarknetConv2D(num_anchors*(num_classes+5), (1,1)))(x2) 110 | 111 | x2 = compose( 112 | DarknetConv2D_BN_Leaky(128, (1,1)), 113 | UpSampling2D(2))(x2) 114 | y2 = compose( 115 | Concatenate(), 116 | DarknetConv2D_BN_Leaky(256, (3,3)), 117 | DarknetConv2D(num_anchors*(num_classes+5), (1,1)))([x2,x1]) 118 | 119 | return Model(inputs, [y1,y2]) 120 | 121 | 122 | def yolo_head(feats, anchors, num_classes, input_shape, calc_loss=False): 123 | """Convert final layer features to bounding box parameters.""" 124 | num_anchors = len(anchors) 125 | # Reshape to batch, height, width, num_anchors, box_params. 126 | anchors_tensor = K.reshape(K.constant(anchors), [1, 1, 1, num_anchors, 2]) 127 | 128 | grid_shape = K.shape(feats)[1:3] # height, width 129 | grid_y = K.tile(K.reshape(K.arange(0, stop=grid_shape[0]), [-1, 1, 1, 1]), 130 | [1, grid_shape[1], 1, 1]) 131 | grid_x = K.tile(K.reshape(K.arange(0, stop=grid_shape[1]), [1, -1, 1, 1]), 132 | [grid_shape[0], 1, 1, 1]) 133 | grid = K.concatenate([grid_x, grid_y]) 134 | grid = K.cast(grid, K.dtype(feats)) 135 | 136 | feats = K.reshape( 137 | feats, [-1, grid_shape[0], grid_shape[1], num_anchors, num_classes + 5]) 138 | 139 | # Adjust predictions to each spatial grid point and anchor size. 
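The lines that follow implement the Darknet box decode: the x/y offsets pass through a sigmoid and are added to the cell's grid coordinates, while width/height are exponentiated and scaled by the anchor. The same arithmetic for a single cell in plain NumPy (all raw values made up):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Hypothetical raw outputs for one anchor at grid cell (cx, cy) = (7, 4)
tx, ty, tw, th = 0.2, -0.5, 0.3, 0.1
grid_w, grid_h = 13, 13            # feature map size at the coarsest scale
anchor_w, anchor_h = 116, 90       # one anchor from the stock set, in pixels
input_w, input_h = 416, 416

bx = (sigmoid(tx) + 7) / grid_w    # box center x, normalized to [0, 1]
by = (sigmoid(ty) + 4) / grid_h    # box center y, normalized to [0, 1]
bw = anchor_w * np.exp(tw) / input_w
bh = anchor_h * np.exp(th) / input_h
```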
140 | box_xy = (K.sigmoid(feats[..., :2]) + grid) / K.cast(grid_shape[::-1], K.dtype(feats)) 141 | box_wh = K.exp(feats[..., 2:4]) * anchors_tensor / K.cast(input_shape[::-1], K.dtype(feats)) 142 | box_confidence = K.sigmoid(feats[..., 4:5]) 143 | box_class_probs = K.sigmoid(feats[..., 5:]) 144 | 145 | if calc_loss == True: 146 | return grid, feats, box_xy, box_wh 147 | return box_xy, box_wh, box_confidence, box_class_probs 148 | 149 | 150 | def yolo_correct_boxes(box_xy, box_wh, input_shape, image_shape): 151 | '''Get corrected boxes''' 152 | box_yx = box_xy[..., ::-1] 153 | box_hw = box_wh[..., ::-1] 154 | input_shape = K.cast(input_shape, K.dtype(box_yx)) 155 | image_shape = K.cast(image_shape, K.dtype(box_yx)) 156 | new_shape = K.round(image_shape * K.min(input_shape/image_shape)) 157 | offset = (input_shape-new_shape)/2./input_shape 158 | scale = input_shape/new_shape 159 | box_yx = (box_yx - offset) * scale 160 | box_hw *= scale 161 | 162 | box_mins = box_yx - (box_hw / 2.) 163 | box_maxes = box_yx + (box_hw / 2.) 164 | boxes = K.concatenate([ 165 | box_mins[..., 0:1], # y_min 166 | box_mins[..., 1:2], # x_min 167 | box_maxes[..., 0:1], # y_max 168 | box_maxes[..., 1:2] # x_max 169 | ]) 170 | 171 | # Scale boxes back to original image shape. 172 | boxes *= K.concatenate([image_shape, image_shape]) 173 | return boxes 174 | 175 | 176 | def yolo_boxes_and_scores(feats, anchors, num_classes, input_shape, image_shape): 177 | '''Process Conv layer output''' 178 | box_xy, box_wh, box_confidence, box_class_probs = yolo_head(feats, 179 | anchors, num_classes, input_shape) 180 | boxes = yolo_correct_boxes(box_xy, box_wh, input_shape, image_shape) 181 | boxes = K.reshape(boxes, [-1, 4]) 182 | box_scores = box_confidence * box_class_probs 183 | box_scores = K.reshape(box_scores, [-1, num_classes]) 184 | return boxes, box_scores 185 | 186 | 187 | def yolo_eval(yolo_outputs, 188 | anchors, 189 | num_classes, 190 | image_shape, 191 | max_boxes=20, 192 | score_threshold=.6, 193 | iou_threshold=.5): 194 | """Evaluate YOLO model on given input and return filtered boxes.""" 195 | num_layers = len(yolo_outputs) 196 | anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]] # default setting 197 | input_shape = K.shape(yolo_outputs[0])[1:3] * 32 198 | boxes = [] 199 | box_scores = [] 200 | for l in range(num_layers): 201 | _boxes, _box_scores = yolo_boxes_and_scores(yolo_outputs[l], 202 | anchors[anchor_mask[l]], num_classes, input_shape, image_shape) 203 | boxes.append(_boxes) 204 | box_scores.append(_box_scores) 205 | boxes = K.concatenate(boxes, axis=0) 206 | box_scores = K.concatenate(box_scores, axis=0) 207 | 208 | mask = box_scores >= score_threshold 209 | max_boxes_tensor = K.constant(max_boxes, dtype='int32') 210 | boxes_ = [] 211 | scores_ = [] 212 | classes_ = [] 213 | for c in range(num_classes): 214 | # TODO: use keras backend instead of tf. 
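The per-class loop below filters boxes by the score mask and then delegates to `tf.image.non_max_suppression`. For reference, the greedy NMS it relies on looks roughly like this in NumPy (a sketch under the same (y1, x1, y2, x2) box layout, not code from the repo):

```python
import numpy as np

def greedy_nms(boxes, scores, iou_threshold=0.5, max_boxes=20):
    """Greedy NMS sketch. boxes: (N, 4) as (y1, x1, y2, x2); scores: (N,)."""
    order = np.argsort(scores)[::-1]   # indices, best score first
    keep = []
    while order.size > 0 and len(keep) < max_boxes:
        i = order[0]
        keep.append(i)
        # IoU of the kept box against every box still in play
        yx1 = np.maximum(boxes[i, :2], boxes[order[1:], :2])
        yx2 = np.minimum(boxes[i, 2:], boxes[order[1:], 2:])
        inter = np.prod(np.clip(yx2 - yx1, 0, None), axis=1)
        area_i = np.prod(boxes[i, 2:] - boxes[i, :2])
        areas = np.prod(boxes[order[1:], 2:] - boxes[order[1:], :2], axis=1)
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]  # drop heavy overlaps
    return np.array(keep, dtype=np.int64)
```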
215 | class_boxes = tf.boolean_mask(boxes, mask[:, c]) 216 | class_box_scores = tf.boolean_mask(box_scores[:, c], mask[:, c]) 217 | nms_index = tf.image.non_max_suppression( 218 | class_boxes, class_box_scores, max_boxes_tensor, iou_threshold=iou_threshold) 219 | class_boxes = K.gather(class_boxes, nms_index) 220 | class_box_scores = K.gather(class_box_scores, nms_index) 221 | classes = K.ones_like(class_box_scores, 'int32') * c 222 | boxes_.append(class_boxes) 223 | scores_.append(class_box_scores) 224 | classes_.append(classes) 225 | boxes_ = K.concatenate(boxes_, axis=0) 226 | scores_ = K.concatenate(scores_, axis=0) 227 | classes_ = K.concatenate(classes_, axis=0) 228 | 229 | return boxes_, scores_, classes_ 230 | 231 | 232 | def preprocess_true_boxes(true_boxes, input_shape, anchors, num_classes): 233 | '''Preprocess true boxes to training input format 234 | 235 | Parameters 236 | ---------- 237 | true_boxes: array, shape=(m, T, 5) 238 | Absolute x_min, y_min, x_max, y_max, class_id relative to input_shape. 239 | input_shape: array-like, hw, multiples of 32 240 | anchors: array, shape=(N, 2), wh 241 | num_classes: integer 242 | 243 | Returns 244 | ------- 245 | y_true: list of array, shape like yolo_outputs, xywh are relative value 246 | 247 | ''' 248 | assert (true_boxes[..., 4]<num_classes).all(), 'class id must be less than num_classes' 249 | num_layers = len(anchors)//3 # default setting 250 | anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]] 251 | 252 | true_boxes = np.array(true_boxes, dtype='float32') 253 | input_shape = np.array(input_shape, dtype='int32') 254 | boxes_xy = (true_boxes[..., 0:2] + true_boxes[..., 2:4]) // 2 255 | boxes_wh = true_boxes[..., 2:4] - true_boxes[..., 0:2] 256 | true_boxes[..., 0:2] = boxes_xy/input_shape[::-1] 257 | true_boxes[..., 2:4] = boxes_wh/input_shape[::-1] 258 | 259 | m = true_boxes.shape[0] 260 | grid_shapes = [input_shape//{0:32, 1:16, 2:8}[l] for l in range(num_layers)] 261 | y_true = [np.zeros((m,grid_shapes[l][0],grid_shapes[l][1],len(anchor_mask[l]),5+num_classes), 262 | dtype='float32') for l in range(num_layers)] 263 | 264 | # Expand dim to apply broadcasting. 265 | anchors = np.expand_dims(anchors, 0) 266 | anchor_maxes = anchors / 2. 267 | anchor_mins = -anchor_maxes 268 | valid_mask = boxes_wh[..., 0]>0 269 | 270 | for b in range(m): 271 | # Discard zero rows. 272 | wh = boxes_wh[b, valid_mask[b]] 273 | if len(wh)==0: continue 274 | # Expand dim to apply broadcasting. 275 | wh = np.expand_dims(wh, -2) 276 | box_maxes = wh / 2. 277 | box_mins = -box_maxes 278 | 279 | intersect_mins = np.maximum(box_mins, anchor_mins) 280 | intersect_maxes = np.minimum(box_maxes, anchor_maxes) 281 | intersect_wh = np.maximum(intersect_maxes - intersect_mins, 0.) 282 | intersect_area = intersect_wh[..., 0] * intersect_wh[..., 1] 283 | box_area = wh[..., 0] * wh[..., 1] 284 | anchor_area = anchors[..., 0] * anchors[..., 1] 285 | iou = intersect_area / (box_area + anchor_area - intersect_area) 286 | 287 | # Find best anchor for each true box 288 | best_anchor = np.argmax(iou, axis=-1) 289 | 290 | for t, n in enumerate(best_anchor): 291 | for l in range(num_layers): 292 | if n in anchor_mask[l]: 293 | i = np.floor(true_boxes[b,t,0]*grid_shapes[l][1]).astype('int32') 294 | j = np.floor(true_boxes[b,t,1]*grid_shapes[l][0]).astype('int32') 295 | k = anchor_mask[l].index(n) 296 | c = true_boxes[b,t, 4].astype('int32') 297 | y_true[l][b, j, i, k, 0:4] = true_boxes[b,t, 0:4] 298 | y_true[l][b, j, i, k, 4] = 1 299 | y_true[l][b, j, i, k, 5+c] = 1 300 | 301 | return y_true 302 | 303 | 304 | def box_iou(b1, b2): 305 | '''Return iou tensor 306 | 307 | Parameters 308 | ---------- 309 | b1: tensor, shape=(i1,...,iN, 4), xywh 310 | b2: tensor, shape=(j, 4), xywh 311 | 312 | Returns 313 | ------- 314 | iou: tensor, shape=(i1,...,iN, j) 315 | 316 | ''' 317 | 318 | # Expand dim to apply broadcasting. 319 | b1 = K.expand_dims(b1, -2) 320 | b1_xy = b1[..., :2] 321 | b1_wh = b1[..., 2:4] 322 | b1_wh_half = b1_wh/2. 323 | b1_mins = b1_xy - b1_wh_half 324 | b1_maxes = b1_xy + b1_wh_half 325 | 326 | # Expand dim to apply broadcasting. 327 | b2 = K.expand_dims(b2, 0) 328 | b2_xy = b2[..., :2] 329 | b2_wh = b2[..., 2:4] 330 | b2_wh_half = b2_wh/2. 331 | b2_mins = b2_xy - b2_wh_half 332 | b2_maxes = b2_xy + b2_wh_half 333 | 334 | intersect_mins = K.maximum(b1_mins, b2_mins) 335 | intersect_maxes = K.minimum(b1_maxes, b2_maxes) 336 | intersect_wh = K.maximum(intersect_maxes - intersect_mins, 0.)
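`preprocess_true_boxes` above assigns each ground-truth box to the anchor of highest IoU, with both rectangles centered at the origin, so only widths and heights matter. Isolated as a sketch (the sample box is made up; the anchors are the stock YOLOv3 set as listed in yolov3.cfg):

```python
import numpy as np

def best_anchor_index(box_wh, anchors):
    """box_wh: (2,) width/height in pixels; anchors: (N, 2). Centers are
    assumed to coincide, so IoU depends only on the w/h overlap."""
    inter = np.minimum(box_wh, anchors).prod(axis=1)
    union = box_wh.prod() + anchors.prod(axis=1) - inter
    return int(np.argmax(inter / union))

anchors = np.array([[10,13],[16,30],[33,23],[30,61],[62,45],[59,119],
                    [116,90],[156,198],[373,326]], dtype=float)
print(best_anchor_index(np.array([150.0, 180.0]), anchors))  # -> 7 (156x198)
```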
337 | intersect_area = intersect_wh[..., 0] * intersect_wh[..., 1] 338 | b1_area = b1_wh[..., 0] * b1_wh[..., 1] 339 | b2_area = b2_wh[..., 0] * b2_wh[..., 1] 340 | iou = intersect_area / (b1_area + b2_area - intersect_area) 341 | 342 | return iou 343 | 344 | 345 | def yolo_loss(args, anchors, num_classes, ignore_thresh=.5, print_loss=False): 346 | '''Return yolo_loss tensor 347 | 348 | Parameters 349 | ---------- 350 | yolo_outputs: list of tensor, the output of yolo_body or tiny_yolo_body 351 | y_true: list of array, the output of preprocess_true_boxes 352 | anchors: array, shape=(N, 2), wh 353 | num_classes: integer 354 | ignore_thresh: float, the iou threshold whether to ignore object confidence loss 355 | 356 | Returns 357 | ------- 358 | loss: tensor, shape=(1,) 359 | 360 | ''' 361 | num_layers = len(anchors)//3 # default setting 362 | yolo_outputs = args[:num_layers] 363 | y_true = args[num_layers:] 364 | anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]] 365 | input_shape = K.cast(K.shape(yolo_outputs[0])[1:3] * 32, K.dtype(y_true[0])) 366 | grid_shapes = [K.cast(K.shape(yolo_outputs[l])[1:3], K.dtype(y_true[0])) for l in range(num_layers)] 367 | loss = 0 368 | m = K.shape(yolo_outputs[0])[0] # batch size, tensor 369 | mf = K.cast(m, K.dtype(yolo_outputs[0])) 370 | 371 | for l in range(num_layers): 372 | object_mask = y_true[l][..., 4:5] 373 | true_class_probs = y_true[l][..., 5:] 374 | 375 | grid, raw_pred, pred_xy, pred_wh = yolo_head(yolo_outputs[l], 376 | anchors[anchor_mask[l]], num_classes, input_shape, calc_loss=True) 377 | pred_box = K.concatenate([pred_xy, pred_wh]) 378 | 379 | # Darknet raw box to calculate loss. 380 | raw_true_xy = y_true[l][..., :2]*grid_shapes[l][::-1] - grid 381 | raw_true_wh = K.log(y_true[l][..., 2:4] / anchors[anchor_mask[l]] * input_shape[::-1]) 382 | raw_true_wh = K.switch(object_mask, raw_true_wh, K.zeros_like(raw_true_wh)) # avoid log(0)=-inf 383 | box_loss_scale = 2 - y_true[l][...,2:3]*y_true[l][...,3:4] 384 | 385 | # Find ignore mask, iterate over each of batch. 386 | ignore_mask = tf.TensorArray(K.dtype(y_true[0]), size=1, dynamic_size=True) 387 | object_mask_bool = K.cast(object_mask, 'bool') 388 | def loop_body(b, ignore_mask): 389 | true_box = tf.boolean_mask(y_true[l][b,...,0:4], object_mask_bool[b,...,0]) 390 | iou = box_iou(pred_box[b], true_box) 391 | best_iou = K.max(iou, axis=-1) 392 | ignore_mask = ignore_mask.write(b, K.cast(best_iou<ignore_thresh, K.dtype(true_box))) 393 | return b+1, ignore_mask 394 | _, ignore_mask = K.control_flow_ops.while_loop(lambda b,*args: b<m, loop_body, [0, ignore_mask]) 395 | ignore_mask = ignore_mask.stack() 396 | ignore_mask = K.expand_dims(ignore_mask, -1) 397 | 398 | # K.binary_crossentropy is helpful to avoid exp overflow. 399 | xy_loss = object_mask * box_loss_scale * K.binary_crossentropy(raw_true_xy, raw_pred[...,0:2], from_logits=True) 400 | wh_loss = object_mask * box_loss_scale * 0.5 * K.square(raw_true_wh-raw_pred[...,2:4]) 401 | confidence_loss = object_mask * K.binary_crossentropy(object_mask, raw_pred[...,4:5], from_logits=True)+ \ 402 | (1-object_mask) * K.binary_crossentropy(object_mask, raw_pred[...,4:5], from_logits=True) * ignore_mask 403 | class_loss = object_mask * K.binary_crossentropy(true_class_probs, raw_pred[...,5:], from_logits=True) 404 | 405 | xy_loss = K.sum(xy_loss) / mf 406 | wh_loss = K.sum(wh_loss) / mf 407 | confidence_loss = K.sum(confidence_loss) / mf 408 | class_loss = K.sum(class_loss) / mf 409 | loss += xy_loss + wh_loss + confidence_loss + class_loss 410 | if print_loss: 411 | loss = tf.Print(loss, [loss, xy_loss, wh_loss, confidence_loss, class_loss, K.sum(ignore_mask)], message='loss: ') 412 | return loss 413 | -------------------------------------------------------------------------------- /yolo3/utils.py: -------------------------------------------------------------------------------- 1 | """Miscellaneous utility functions.""" 2 | 3 | from functools import reduce 4 | 5 | from PIL import Image 6 | import numpy as np 7 | from matplotlib.colors import rgb_to_hsv, hsv_to_rgb 8 | 9 | def compose(*funcs): 10 | """Compose arbitrarily many functions, evaluated left to right. 11 | 12 | Reference: https://mathieularose.com/function-composition-in-python/ 13 | """ 14 | # return lambda x: reduce(lambda v, f: f(v), funcs, x) 15 | if funcs: 16 | return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs) 17 | else: 18 | raise ValueError('Composition of empty sequence not supported.') 19 | 20 | def letterbox_image(image, size): 21 | '''resize image with unchanged aspect ratio using padding''' 22 | iw, ih = image.size 23 | w, h = size 24 | scale = min(w/iw, h/ih) 25 | nw = int(iw*scale) 26 | nh = int(ih*scale) 27 | 28 | image = image.resize((nw,nh), Image.BICUBIC) 29 | new_image = Image.new('RGB', size, (128,128,128)) 30 | new_image.paste(image, ((w-nw)//2, (h-nh)//2)) 31 | return new_image 32 | 33 | def rand(a=0, b=1): 34 | return np.random.rand()*(b-a) + a 35 | 36 | def get_random_data(annotation_line, input_shape, random=True, max_boxes=20, jitter=.3, hue=.1, sat=1.5, val=1.5, proc_img=True): 37 | '''random preprocessing for real-time data augmentation''' 38 | line = annotation_line.split() 39 | image = Image.open(line[0]) 40 | iw, ih = image.size 41 | h, w = input_shape 42 | box = np.array([np.array(list(map(int,box.split(',')))) for box in line[1:]]) 43 | 44 | if not random: 45 | # resize image 46 | scale = min(w/iw, h/ih) 47 | nw = int(iw*scale) 48 | nh = int(ih*scale) 49 | dx = (w-nw)//2 50 | dy = (h-nh)//2 51 | image_data=0 52 | if proc_img: 53 | image = image.resize((nw,nh), Image.BICUBIC) 54 | new_image = Image.new('RGB', (w,h), (128,128,128)) 55 | new_image.paste(image, (dx, dy)) 56 | image_data = np.array(new_image)/255. 57 | 58 | # correct boxes 59 | box_data = np.zeros((max_boxes,5)) 60 | if len(box)>0: 61 | np.random.shuffle(box) 62 | if len(box)>max_boxes: box = box[:max_boxes] 63 | box[:, [0,2]] = box[:, [0,2]]*scale + dx 64 | box[:, [1,3]] = box[:, [1,3]]*scale + dy 65 | box_data[:len(box)] = box 66 | 67 | return image_data, box_data 68 | 69 | # resize image 70 | new_ar = w/h * rand(1-jitter,1+jitter)/rand(1-jitter,1+jitter) 71 | scale = rand(.25, 2) 72 | if new_ar < 1: 73 | nh = int(scale*h) 74 | nw = int(nh*new_ar) 75 | else: 76 | nw = int(scale*w) 77 | nh = int(nw/new_ar) 78 | image = image.resize((nw,nh), Image.BICUBIC) 79 | 80 | # place image 81 | dx = int(rand(0, w-nw)) 82 | dy = int(rand(0, h-nh)) 83 | new_image = Image.new('RGB', (w,h), (128,128,128)) 84 | new_image.paste(image, (dx, dy)) 85 | image = new_image 86 | 87 | # flip image or not 88 | flip = rand()<.5 89 | if flip: image = image.transpose(Image.FLIP_LEFT_RIGHT) 90 | 91 | # distort image 92 | hue = rand(-hue, hue) 93 | sat = rand(1, sat) if rand()<.5 else 1/rand(1, sat) 94 | val = rand(1, val) if rand()<.5 else 1/rand(1, val) 95 | x = rgb_to_hsv(np.array(image)/255.)
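matplotlib's `rgb_to_hsv` used on the last line maps every channel into [0, 1], so the hue shift applied on the next lines wraps values around the ends of the range instead of clipping them, hue being circular. A tiny standalone demonstration (the pixel value is made up, and the shift is exaggerated beyond the default `hue=.1` jitter to force a wrap):

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

x = rgb_to_hsv(np.array([[[1.0, 0.7, 0.1]]]))  # one orange pixel, hue ~0.11
x[..., 0] += 0.97                              # exaggerated shift pushes hue past 1.0
x[..., 0][x[..., 0] > 1] -= 1                  # wrap around, as in get_random_data
x[..., 0][x[..., 0] < 0] += 1
print(hsv_to_rgb(x))                           # still a valid color, hue rotated
```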
96 | x[..., 0] += hue 97 | x[..., 0][x[..., 0]>1] -= 1 98 | x[..., 0][x[..., 0]<0] += 1 99 | x[..., 1] *= sat 100 | x[..., 2] *= val 101 | x[x>1] = 1 102 | x[x<0] = 0 103 | image_data = hsv_to_rgb(x) # numpy array, 0 to 1 104 | 105 | # correct boxes 106 | box_data = np.zeros((max_boxes,5)) 107 | if len(box)>0: 108 | np.random.shuffle(box) 109 | box[:, [0,2]] = box[:, [0,2]]*nw/iw + dx 110 | box[:, [1,3]] = box[:, [1,3]]*nh/ih + dy 111 | if flip: box[:, [0,2]] = w - box[:, [2,0]] 112 | box[:, 0:2][box[:, 0:2]<0] = 0 113 | box[:, 2][box[:, 2]>w] = w 114 | box[:, 3][box[:, 3]>h] = h 115 | box_w = box[:, 2] - box[:, 0] 116 | box_h = box[:, 3] - box[:, 1] 117 | box = box[np.logical_and(box_w>1, box_h>1)] # discard invalid box 118 | if len(box)>max_boxes: box = box[:max_boxes] 119 | box_data[:len(box)] = box 120 | 121 | return image_data, box_data 122 | -------------------------------------------------------------------------------- /yolo_video.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import argparse 3 | from yolo import YOLO, detect_video 4 | from PIL import Image 5 | 6 | def detect_img(yolo): 7 | while True: 8 | img = input('Input image filename:') 9 | try: 10 | image = Image.open(img) 11 | except: 12 | print('Open Error! Try again!') 13 | continue 14 | else: 15 | r_image = yolo.detect_image(image) 16 | r_image.show() 17 | yolo.close_session() 18 | 19 | FLAGS = None 20 | 21 | if __name__ == '__main__': 22 | # class YOLO defines the default value, so suppress any default here 23 | parser = argparse.ArgumentParser(argument_default=argparse.SUPPRESS) 24 | ''' 25 | Command line options 26 | ''' 27 | parser.add_argument( 28 | '--model', type=str, 29 | help='path to model weight file, default ' + YOLO.get_defaults("model_path") 30 | ) 31 | 32 | parser.add_argument( 33 | '--anchors', type=str, 34 | help='path to anchor definitions, default ' + YOLO.get_defaults("anchors_path") 35 | ) 36 | 37 | parser.add_argument( 38 | '--classes', type=str, 39 | help='path to class definitions, default ' + YOLO.get_defaults("classes_path") 40 | ) 41 | 42 | parser.add_argument( 43 | '--gpu_num', type=int, 44 | help='Number of GPU to use, default ' + str(YOLO.get_defaults("gpu_num")) 45 | ) 46 | 47 | parser.add_argument( 48 | '--image', default=False, action="store_true", 49 | help='Image detection mode, will ignore all positional arguments' 50 | ) 51 | ''' 52 | Command line positional arguments -- for video detection mode 53 | ''' 54 | parser.add_argument( 55 | "--input", nargs='?', type=str,required=False,default='./path2your_video', 56 | help = "Video input path" 57 | ) 58 | 59 | parser.add_argument( 60 | "--output", nargs='?', type=str, default="", 61 | help = "[Optional] Video output path" 62 | ) 63 | 64 | FLAGS = parser.parse_args() 65 | 66 | if FLAGS.image: 67 | """ 68 | Image detection mode, disregard any remaining command line arguments 69 | """ 70 | print("Image detection mode") 71 | if "input" in FLAGS: 72 | print(" Ignoring remaining command line arguments: " + FLAGS.input + "," + FLAGS.output) 73 | detect_img(YOLO(**vars(FLAGS))) 74 | elif "input" in FLAGS: 75 | detect_video(YOLO(**vars(FLAGS)), FLAGS.input, FLAGS.output) 76 | else: 77 | print("Must specify at least video_input_path. 
See usage with --help.") 78 | -------------------------------------------------------------------------------- /yolov3-tiny.cfg: -------------------------------------------------------------------------------- 1 | [net] 2 | # Testing 3 | batch=1 4 | subdivisions=1 5 | # Training 6 | # batch=64 7 | # subdivisions=2 8 | width=416 9 | height=416 10 | channels=3 11 | momentum=0.9 12 | decay=0.0005 13 | angle=0 14 | saturation = 1.5 15 | exposure = 1.5 16 | hue=.1 17 | 18 | learning_rate=0.001 19 | burn_in=1000 20 | max_batches = 500200 21 | policy=steps 22 | steps=400000,450000 23 | scales=.1,.1 24 | 25 | [convolutional] 26 | batch_normalize=1 27 | filters=16 28 | size=3 29 | stride=1 30 | pad=1 31 | activation=leaky 32 | 33 | [maxpool] 34 | size=2 35 | stride=2 36 | 37 | [convolutional] 38 | batch_normalize=1 39 | filters=32 40 | size=3 41 | stride=1 42 | pad=1 43 | activation=leaky 44 | 45 | [maxpool] 46 | size=2 47 | stride=2 48 | 49 | [convolutional] 50 | batch_normalize=1 51 | filters=64 52 | size=3 53 | stride=1 54 | pad=1 55 | activation=leaky 56 | 57 | [maxpool] 58 | size=2 59 | stride=2 60 | 61 | [convolutional] 62 | batch_normalize=1 63 | filters=128 64 | size=3 65 | stride=1 66 | pad=1 67 | activation=leaky 68 | 69 | [maxpool] 70 | size=2 71 | stride=2 72 | 73 | [convolutional] 74 | batch_normalize=1 75 | filters=256 76 | size=3 77 | stride=1 78 | pad=1 79 | activation=leaky 80 | 81 | [maxpool] 82 | size=2 83 | stride=2 84 | 85 | [convolutional] 86 | batch_normalize=1 87 | filters=512 88 | size=3 89 | stride=1 90 | pad=1 91 | activation=leaky 92 | 93 | [maxpool] 94 | size=2 95 | stride=1 96 | 97 | [convolutional] 98 | batch_normalize=1 99 | filters=1024 100 | size=3 101 | stride=1 102 | pad=1 103 | activation=leaky 104 | 105 | ########### 106 | 107 | [convolutional] 108 | batch_normalize=1 109 | filters=256 110 | size=1 111 | stride=1 112 | pad=1 113 | activation=leaky 114 | 115 | [convolutional] 116 | batch_normalize=1 117 | filters=512 118 | size=3 119 | stride=1 120 | pad=1 121 | activation=leaky 122 | 123 | [convolutional] 124 | size=1 125 | stride=1 126 | pad=1 127 | filters=255 128 | activation=linear 129 | 130 | 131 | 132 | [yolo] 133 | mask = 3,4,5 134 | anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319 135 | classes=80 136 | num=6 137 | jitter=.3 138 | ignore_thresh = .7 139 | truth_thresh = 1 140 | random=1 141 | 142 | [route] 143 | layers = -4 144 | 145 | [convolutional] 146 | batch_normalize=1 147 | filters=128 148 | size=1 149 | stride=1 150 | pad=1 151 | activation=leaky 152 | 153 | [upsample] 154 | stride=2 155 | 156 | [route] 157 | layers = -1, 8 158 | 159 | [convolutional] 160 | batch_normalize=1 161 | filters=256 162 | size=3 163 | stride=1 164 | pad=1 165 | activation=leaky 166 | 167 | [convolutional] 168 | size=1 169 | stride=1 170 | pad=1 171 | filters=255 172 | activation=linear 173 | 174 | [yolo] 175 | mask = 1,2,3 176 | anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319 177 | classes=80 178 | num=6 179 | jitter=.3 180 | ignore_thresh = .7 181 | truth_thresh = 1 182 | random=1 183 | -------------------------------------------------------------------------------- /yolov3.cfg: -------------------------------------------------------------------------------- 1 | [net] 2 | # Testing 3 | batch=1 4 | subdivisions=1 5 | # Training 6 | # batch=64 7 | # subdivisions=16 8 | width=416 9 | height=416 10 | channels=3 11 | momentum=0.9 12 | decay=0.0005 13 | angle=0 14 | saturation = 1.5 15 | exposure = 1.5 16 | hue=.1 17 | 18 | learning_rate=0.001 19 | burn_in=1000 
20 | max_batches = 500200 21 | policy=steps 22 | steps=400000,450000 23 | scales=.1,.1 24 | 25 | [convolutional] 26 | batch_normalize=1 27 | filters=32 28 | size=3 29 | stride=1 30 | pad=1 31 | activation=leaky 32 | 33 | # Downsample 34 | 35 | [convolutional] 36 | batch_normalize=1 37 | filters=64 38 | size=3 39 | stride=2 40 | pad=1 41 | activation=leaky 42 | 43 | [convolutional] 44 | batch_normalize=1 45 | filters=32 46 | size=1 47 | stride=1 48 | pad=1 49 | activation=leaky 50 | 51 | [convolutional] 52 | batch_normalize=1 53 | filters=64 54 | size=3 55 | stride=1 56 | pad=1 57 | activation=leaky 58 | 59 | [shortcut] 60 | from=-3 61 | activation=linear 62 | 63 | # Downsample 64 | 65 | [convolutional] 66 | batch_normalize=1 67 | filters=128 68 | size=3 69 | stride=2 70 | pad=1 71 | activation=leaky 72 | 73 | [convolutional] 74 | batch_normalize=1 75 | filters=64 76 | size=1 77 | stride=1 78 | pad=1 79 | activation=leaky 80 | 81 | [convolutional] 82 | batch_normalize=1 83 | filters=128 84 | size=3 85 | stride=1 86 | pad=1 87 | activation=leaky 88 | 89 | [shortcut] 90 | from=-3 91 | activation=linear 92 | 93 | [convolutional] 94 | batch_normalize=1 95 | filters=64 96 | size=1 97 | stride=1 98 | pad=1 99 | activation=leaky 100 | 101 | [convolutional] 102 | batch_normalize=1 103 | filters=128 104 | size=3 105 | stride=1 106 | pad=1 107 | activation=leaky 108 | 109 | [shortcut] 110 | from=-3 111 | activation=linear 112 | 113 | # Downsample 114 | 115 | [convolutional] 116 | batch_normalize=1 117 | filters=256 118 | size=3 119 | stride=2 120 | pad=1 121 | activation=leaky 122 | 123 | [convolutional] 124 | batch_normalize=1 125 | filters=128 126 | size=1 127 | stride=1 128 | pad=1 129 | activation=leaky 130 | 131 | [convolutional] 132 | batch_normalize=1 133 | filters=256 134 | size=3 135 | stride=1 136 | pad=1 137 | activation=leaky 138 | 139 | [shortcut] 140 | from=-3 141 | activation=linear 142 | 143 | [convolutional] 144 | batch_normalize=1 145 | filters=128 146 | size=1 147 | stride=1 148 | pad=1 149 | activation=leaky 150 | 151 | [convolutional] 152 | batch_normalize=1 153 | filters=256 154 | size=3 155 | stride=1 156 | pad=1 157 | activation=leaky 158 | 159 | [shortcut] 160 | from=-3 161 | activation=linear 162 | 163 | [convolutional] 164 | batch_normalize=1 165 | filters=128 166 | size=1 167 | stride=1 168 | pad=1 169 | activation=leaky 170 | 171 | [convolutional] 172 | batch_normalize=1 173 | filters=256 174 | size=3 175 | stride=1 176 | pad=1 177 | activation=leaky 178 | 179 | [shortcut] 180 | from=-3 181 | activation=linear 182 | 183 | [convolutional] 184 | batch_normalize=1 185 | filters=128 186 | size=1 187 | stride=1 188 | pad=1 189 | activation=leaky 190 | 191 | [convolutional] 192 | batch_normalize=1 193 | filters=256 194 | size=3 195 | stride=1 196 | pad=1 197 | activation=leaky 198 | 199 | [shortcut] 200 | from=-3 201 | activation=linear 202 | 203 | 204 | [convolutional] 205 | batch_normalize=1 206 | filters=128 207 | size=1 208 | stride=1 209 | pad=1 210 | activation=leaky 211 | 212 | [convolutional] 213 | batch_normalize=1 214 | filters=256 215 | size=3 216 | stride=1 217 | pad=1 218 | activation=leaky 219 | 220 | [shortcut] 221 | from=-3 222 | activation=linear 223 | 224 | [convolutional] 225 | batch_normalize=1 226 | filters=128 227 | size=1 228 | stride=1 229 | pad=1 230 | activation=leaky 231 | 232 | [convolutional] 233 | batch_normalize=1 234 | filters=256 235 | size=3 236 | stride=1 237 | pad=1 238 | activation=leaky 239 | 240 | [shortcut] 241 | from=-3 242 | 
activation=linear 243 | 244 | [convolutional] 245 | batch_normalize=1 246 | filters=128 247 | size=1 248 | stride=1 249 | pad=1 250 | activation=leaky 251 | 252 | [convolutional] 253 | batch_normalize=1 254 | filters=256 255 | size=3 256 | stride=1 257 | pad=1 258 | activation=leaky 259 | 260 | [shortcut] 261 | from=-3 262 | activation=linear 263 | 264 | [convolutional] 265 | batch_normalize=1 266 | filters=128 267 | size=1 268 | stride=1 269 | pad=1 270 | activation=leaky 271 | 272 | [convolutional] 273 | batch_normalize=1 274 | filters=256 275 | size=3 276 | stride=1 277 | pad=1 278 | activation=leaky 279 | 280 | [shortcut] 281 | from=-3 282 | activation=linear 283 | 284 | # Downsample 285 | 286 | [convolutional] 287 | batch_normalize=1 288 | filters=512 289 | size=3 290 | stride=2 291 | pad=1 292 | activation=leaky 293 | 294 | [convolutional] 295 | batch_normalize=1 296 | filters=256 297 | size=1 298 | stride=1 299 | pad=1 300 | activation=leaky 301 | 302 | [convolutional] 303 | batch_normalize=1 304 | filters=512 305 | size=3 306 | stride=1 307 | pad=1 308 | activation=leaky 309 | 310 | [shortcut] 311 | from=-3 312 | activation=linear 313 | 314 | 315 | [convolutional] 316 | batch_normalize=1 317 | filters=256 318 | size=1 319 | stride=1 320 | pad=1 321 | activation=leaky 322 | 323 | [convolutional] 324 | batch_normalize=1 325 | filters=512 326 | size=3 327 | stride=1 328 | pad=1 329 | activation=leaky 330 | 331 | [shortcut] 332 | from=-3 333 | activation=linear 334 | 335 | 336 | [convolutional] 337 | batch_normalize=1 338 | filters=256 339 | size=1 340 | stride=1 341 | pad=1 342 | activation=leaky 343 | 344 | [convolutional] 345 | batch_normalize=1 346 | filters=512 347 | size=3 348 | stride=1 349 | pad=1 350 | activation=leaky 351 | 352 | [shortcut] 353 | from=-3 354 | activation=linear 355 | 356 | 357 | [convolutional] 358 | batch_normalize=1 359 | filters=256 360 | size=1 361 | stride=1 362 | pad=1 363 | activation=leaky 364 | 365 | [convolutional] 366 | batch_normalize=1 367 | filters=512 368 | size=3 369 | stride=1 370 | pad=1 371 | activation=leaky 372 | 373 | [shortcut] 374 | from=-3 375 | activation=linear 376 | 377 | [convolutional] 378 | batch_normalize=1 379 | filters=256 380 | size=1 381 | stride=1 382 | pad=1 383 | activation=leaky 384 | 385 | [convolutional] 386 | batch_normalize=1 387 | filters=512 388 | size=3 389 | stride=1 390 | pad=1 391 | activation=leaky 392 | 393 | [shortcut] 394 | from=-3 395 | activation=linear 396 | 397 | 398 | [convolutional] 399 | batch_normalize=1 400 | filters=256 401 | size=1 402 | stride=1 403 | pad=1 404 | activation=leaky 405 | 406 | [convolutional] 407 | batch_normalize=1 408 | filters=512 409 | size=3 410 | stride=1 411 | pad=1 412 | activation=leaky 413 | 414 | [shortcut] 415 | from=-3 416 | activation=linear 417 | 418 | 419 | [convolutional] 420 | batch_normalize=1 421 | filters=256 422 | size=1 423 | stride=1 424 | pad=1 425 | activation=leaky 426 | 427 | [convolutional] 428 | batch_normalize=1 429 | filters=512 430 | size=3 431 | stride=1 432 | pad=1 433 | activation=leaky 434 | 435 | [shortcut] 436 | from=-3 437 | activation=linear 438 | 439 | [convolutional] 440 | batch_normalize=1 441 | filters=256 442 | size=1 443 | stride=1 444 | pad=1 445 | activation=leaky 446 | 447 | [convolutional] 448 | batch_normalize=1 449 | filters=512 450 | size=3 451 | stride=1 452 | pad=1 453 | activation=leaky 454 | 455 | [shortcut] 456 | from=-3 457 | activation=linear 458 | 459 | # Downsample 460 | 461 | [convolutional] 462 | batch_normalize=1 
463 | filters=1024 464 | size=3 465 | stride=2 466 | pad=1 467 | activation=leaky 468 | 469 | [convolutional] 470 | batch_normalize=1 471 | filters=512 472 | size=1 473 | stride=1 474 | pad=1 475 | activation=leaky 476 | 477 | [convolutional] 478 | batch_normalize=1 479 | filters=1024 480 | size=3 481 | stride=1 482 | pad=1 483 | activation=leaky 484 | 485 | [shortcut] 486 | from=-3 487 | activation=linear 488 | 489 | [convolutional] 490 | batch_normalize=1 491 | filters=512 492 | size=1 493 | stride=1 494 | pad=1 495 | activation=leaky 496 | 497 | [convolutional] 498 | batch_normalize=1 499 | filters=1024 500 | size=3 501 | stride=1 502 | pad=1 503 | activation=leaky 504 | 505 | [shortcut] 506 | from=-3 507 | activation=linear 508 | 509 | [convolutional] 510 | batch_normalize=1 511 | filters=512 512 | size=1 513 | stride=1 514 | pad=1 515 | activation=leaky 516 | 517 | [convolutional] 518 | batch_normalize=1 519 | filters=1024 520 | size=3 521 | stride=1 522 | pad=1 523 | activation=leaky 524 | 525 | [shortcut] 526 | from=-3 527 | activation=linear 528 | 529 | [convolutional] 530 | batch_normalize=1 531 | filters=512 532 | size=1 533 | stride=1 534 | pad=1 535 | activation=leaky 536 | 537 | [convolutional] 538 | batch_normalize=1 539 | filters=1024 540 | size=3 541 | stride=1 542 | pad=1 543 | activation=leaky 544 | 545 | [shortcut] 546 | from=-3 547 | activation=linear 548 | 549 | ###################### 550 | 551 | [convolutional] 552 | batch_normalize=1 553 | filters=512 554 | size=1 555 | stride=1 556 | pad=1 557 | activation=leaky 558 | 559 | [convolutional] 560 | batch_normalize=1 561 | size=3 562 | stride=1 563 | pad=1 564 | filters=1024 565 | activation=leaky 566 | 567 | [convolutional] 568 | batch_normalize=1 569 | filters=512 570 | size=1 571 | stride=1 572 | pad=1 573 | activation=leaky 574 | 575 | [convolutional] 576 | batch_normalize=1 577 | size=3 578 | stride=1 579 | pad=1 580 | filters=1024 581 | activation=leaky 582 | 583 | [convolutional] 584 | batch_normalize=1 585 | filters=512 586 | size=1 587 | stride=1 588 | pad=1 589 | activation=leaky 590 | 591 | [convolutional] 592 | batch_normalize=1 593 | size=3 594 | stride=1 595 | pad=1 596 | filters=1024 597 | activation=leaky 598 | 599 | [convolutional] 600 | size=1 601 | stride=1 602 | pad=1 603 | filters=255 604 | activation=linear 605 | 606 | 607 | [yolo] 608 | mask = 6,7,8 609 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326 610 | classes=80 611 | num=9 612 | jitter=.3 613 | ignore_thresh = .5 614 | truth_thresh = 1 615 | random=1 616 | 617 | 618 | [route] 619 | layers = -4 620 | 621 | [convolutional] 622 | batch_normalize=1 623 | filters=256 624 | size=1 625 | stride=1 626 | pad=1 627 | activation=leaky 628 | 629 | [upsample] 630 | stride=2 631 | 632 | [route] 633 | layers = -1, 61 634 | 635 | 636 | 637 | [convolutional] 638 | batch_normalize=1 639 | filters=256 640 | size=1 641 | stride=1 642 | pad=1 643 | activation=leaky 644 | 645 | [convolutional] 646 | batch_normalize=1 647 | size=3 648 | stride=1 649 | pad=1 650 | filters=512 651 | activation=leaky 652 | 653 | [convolutional] 654 | batch_normalize=1 655 | filters=256 656 | size=1 657 | stride=1 658 | pad=1 659 | activation=leaky 660 | 661 | [convolutional] 662 | batch_normalize=1 663 | size=3 664 | stride=1 665 | pad=1 666 | filters=512 667 | activation=leaky 668 | 669 | [convolutional] 670 | batch_normalize=1 671 | filters=256 672 | size=1 673 | stride=1 674 | pad=1 675 | activation=leaky 676 | 677 | [convolutional] 678 | 
batch_normalize=1 679 | size=3 680 | stride=1 681 | pad=1 682 | filters=512 683 | activation=leaky 684 | 685 | [convolutional] 686 | size=1 687 | stride=1 688 | pad=1 689 | filters=255 690 | activation=linear 691 | 692 | 693 | [yolo] 694 | mask = 3,4,5 695 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326 696 | classes=80 697 | num=9 698 | jitter=.3 699 | ignore_thresh = .5 700 | truth_thresh = 1 701 | random=1 702 | 703 | 704 | 705 | [route] 706 | layers = -4 707 | 708 | [convolutional] 709 | batch_normalize=1 710 | filters=128 711 | size=1 712 | stride=1 713 | pad=1 714 | activation=leaky 715 | 716 | [upsample] 717 | stride=2 718 | 719 | [route] 720 | layers = -1, 36 721 | 722 | 723 | 724 | [convolutional] 725 | batch_normalize=1 726 | filters=128 727 | size=1 728 | stride=1 729 | pad=1 730 | activation=leaky 731 | 732 | [convolutional] 733 | batch_normalize=1 734 | size=3 735 | stride=1 736 | pad=1 737 | filters=256 738 | activation=leaky 739 | 740 | [convolutional] 741 | batch_normalize=1 742 | filters=128 743 | size=1 744 | stride=1 745 | pad=1 746 | activation=leaky 747 | 748 | [convolutional] 749 | batch_normalize=1 750 | size=3 751 | stride=1 752 | pad=1 753 | filters=256 754 | activation=leaky 755 | 756 | [convolutional] 757 | batch_normalize=1 758 | filters=128 759 | size=1 760 | stride=1 761 | pad=1 762 | activation=leaky 763 | 764 | [convolutional] 765 | batch_normalize=1 766 | size=3 767 | stride=1 768 | pad=1 769 | filters=256 770 | activation=leaky 771 | 772 | [convolutional] 773 | size=1 774 | stride=1 775 | pad=1 776 | filters=255 777 | activation=linear 778 | 779 | 780 | [yolo] 781 | mask = 0,1,2 782 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326 783 | classes=80 784 | num=9 785 | jitter=.3 786 | ignore_thresh = .5 787 | truth_thresh = 1 788 | random=1 789 | 790 | --------------------------------------------------------------------------------
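One consistency check across both .cfg files: every `[yolo]` head is fed by a 1x1 convolution with `filters=255`, which is exactly `(anchors per scale) * (classes + 5)`, the same quantity `yolo_body` and `tiny_yolo_body` compute as `num_anchors*(num_classes+5)`:

```python
# Each masked anchor predicts 4 box offsets + 1 objectness + num_classes scores.
anchors_per_scale = 3   # len(mask) in every [yolo] section above
num_classes = 80        # classes=80 in both cfg files (COCO)
assert anchors_per_scale * (num_classes + 5) == 255  # matches filters=255
```

Changing `classes` for a custom dataset therefore also requires changing the `filters` value of the convolution directly preceding each `[yolo]` section.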