├── models
├── __init__.py
├── gradcam.py
├── experimental.py
├── yolo_v5_object_detector.py
├── yolo.py
└── common.py
├── utils
├── __init__.py
├── activations.py
├── downloads.py
├── autoanchor.py
├── loss.py
├── augmentations.py
├── metrics.py
├── torch_utils.py
├── plots.py
└── general.py
├── images
├── bus.jpg
├── dog.jpg
├── eagle.jpg
└── cat-dog.jpg
├── outputs
├── bus-res.jpg
├── dog-res.jpg
├── eagle-res.jpg
└── cat-dog-res.jpg
├── requirements.txt
├── LICENSE
├── README.md
├── .gitignore
└── main.py
/models/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/utils/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/images/bus.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pooya-mohammadi/yolov5-gradcam/HEAD/images/bus.jpg
--------------------------------------------------------------------------------
/images/dog.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pooya-mohammadi/yolov5-gradcam/HEAD/images/dog.jpg
--------------------------------------------------------------------------------
/images/eagle.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pooya-mohammadi/yolov5-gradcam/HEAD/images/eagle.jpg
--------------------------------------------------------------------------------
/images/cat-dog.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pooya-mohammadi/yolov5-gradcam/HEAD/images/cat-dog.jpg
--------------------------------------------------------------------------------
/outputs/bus-res.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pooya-mohammadi/yolov5-gradcam/HEAD/outputs/bus-res.jpg
--------------------------------------------------------------------------------
/outputs/dog-res.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pooya-mohammadi/yolov5-gradcam/HEAD/outputs/dog-res.jpg
--------------------------------------------------------------------------------
/outputs/eagle-res.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pooya-mohammadi/yolov5-gradcam/HEAD/outputs/eagle-res.jpg
--------------------------------------------------------------------------------
/outputs/cat-dog-res.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pooya-mohammadi/yolov5-gradcam/HEAD/outputs/cat-dog-res.jpg
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | # pip install -r requirements.txt
2 |
3 | # Base ----------------------------------------
4 | matplotlib>=3.2.2
5 | numpy>=1.18.5
6 | opencv-python>=4.5.4
7 | Pillow>=7.1.2
8 | PyYAML>=5.3.1
9 | requests>=2.23.0
10 | scipy>=1.4.1
11 | torch>=1.7.0,<1.11.0
12 | torchvision>=0.8.1
13 | tqdm>=4.41.0
14 | # Logging -------------------------------------
15 | tensorboard>=2.4.1
16 | # wandb
17 |
18 | # Plotting ------------------------------------
19 | pandas>=1.1.4
20 | seaborn>=0.11.0
21 |
22 | # loading model ---------------------------------------
23 | deep_utils>=0.8.5
24 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2021 Pooya Mohammadi Kazaj
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # YOLO-V5 GRADCAM
2 |
3 | I constantly desired to know to which part of an object the object-detection models pay more attention. So I searched for it, but I didn't find any for Yolov5.
4 | Here is my implementation of Grad-cam for YOLO-v5. To load the model I used the yolov5's main codes, and for computing GradCam I used the codes from the gradcam_plus_plus-pytorch repository.
5 | Please follow my GitHub account and star ⭐ the project if this functionality benefits your research or projects.
6 |
7 | ## Update:
8 | Repo works fine with yolov5-v6.1
9 |
10 |
11 | ## Installation
12 | `pip install -r requirements.txt`
13 |
14 | ## Infer
15 | `python main.py --model-path yolov5s.pt --img-path images/cat-dog.jpg --output-dir outputs`
16 |
17 | **NOTE**: If you don't have any weights and just want to test, don't change the model-path argument. The yolov5s model will be automatically downloaded thanks to the download function from yolov5.
18 |
19 | **NOTE**: For more input arguments, check out the main.py or run the following command:
20 |
21 | ```python main.py -h```
22 |
23 | ### Custom Name
24 | To pass in your custom model you might want to pass in your custom names as well, which be done as below:
25 | ```
26 | python main.py --model-path cutom-model-path.pt --img-path img-path.jpg --output-dir outputs --names obj1,obj2,obj3
27 | ```
28 | ## Examples
29 | [](https://colab.research.google.com/github/pooya-mohammadi/yolov5-gradcam/blob/master/main.ipynb)
30 |
31 |
32 |
33 |
34 |
35 | ## Note
36 | I checked the code, but I couldn't find an explanation for why the truck's heatmap does not show anything. Please inform me or create a pull request if you find the reason.
37 |
38 | This problem is solved in version 6.1
39 |
40 | ## TO Do
41 | 1. Add GradCam++
42 | 2. Add ScoreCam
43 | 3. Add the functionality to the deep_utils library
44 |
45 | # References
46 | 1. https://github.com/1Konny/gradcam_plus_plus-pytorch
47 | 2. https://github.com/ultralytics/yolov5
48 | 3. https://github.com/pooya-mohammadi/deep_utils
49 |
50 | ## Citation
51 |
52 | Please cite **yolov5-gradcam** if it helps your research. You can use the following BibTeX entry:
53 | ```
54 | @misc{deep_utils,
55 | title = {yolov5-gradcam},
56 | author = {Mohammadi Kazaj, Pooya},
57 | howpublished = {\url{github.com/pooya-mohammadi/yolov5-gradcam}},
58 | year = {2021}
59 | }
60 | ```
61 |
--------------------------------------------------------------------------------
/models/gradcam.py:
--------------------------------------------------------------------------------
1 | import time
2 | import torch
3 | import torch.nn.functional as F
4 |
5 |
6 | def find_yolo_layer(model, layer_name):
7 | """Find yolov5 layer to calculate GradCAM and GradCAM++
8 |
9 | Args:
10 | model: yolov5 model.
11 | layer_name (str): the name of layer with its hierarchical information.
12 |
13 | Return:
14 | target_layer: found layer
15 | """
16 | hierarchy = layer_name.split('_')
17 | target_layer = model.model._modules[hierarchy[0]]
18 |
19 | for h in hierarchy[1:]:
20 | target_layer = target_layer._modules[h]
21 | return target_layer
22 |
23 |
24 | class YOLOV5GradCAM:
25 |
26 | def __init__(self, model, layer_name, img_size=(640, 640)):
27 | self.model = model
28 | self.gradients = dict()
29 | self.activations = dict()
30 |
31 | def backward_hook(module, grad_input, grad_output):
32 | self.gradients['value'] = grad_output[0]
33 | return None
34 |
35 | def forward_hook(module, input, output):
36 | self.activations['value'] = output
37 | return None
38 |
39 | target_layer = find_yolo_layer(self.model, layer_name)
40 | target_layer.register_forward_hook(forward_hook)
41 | target_layer.register_backward_hook(backward_hook)
42 |
43 | device = 'cuda' if next(self.model.model.parameters()).is_cuda else 'cpu'
44 | self.model(torch.zeros(1, 3, *img_size, device=device))
45 | print('[INFO] saliency_map size :', self.activations['value'].shape[2:])
46 |
47 | def forward(self, input_img, class_idx=True):
48 | """
49 | Args:
50 | input_img: input image with shape of (1, 3, H, W)
51 | Return:
52 | mask: saliency map of the same spatial dimension with input
53 | logit: model output
54 | preds: The object predictions
55 | """
56 | saliency_maps = []
57 | b, c, h, w = input_img.size()
58 | tic = time.time()
59 | preds, logits = self.model(input_img)
60 | print("[INFO] model-forward took: ", round(time.time() - tic, 4), 'seconds')
61 | for logit, cls, cls_name in zip(logits[0], preds[1][0], preds[2][0]):
62 | if class_idx:
63 | score = logit[cls]
64 | else:
65 | score = logit.max()
66 | self.model.zero_grad()
67 | tic = time.time()
68 | score.backward(retain_graph=True)
69 | print(f"[INFO] {cls_name}, model-backward took: ", round(time.time() - tic, 4), 'seconds')
70 | gradients = self.gradients['value']
71 | activations = self.activations['value']
72 | b, k, u, v = gradients.size()
73 | alpha = gradients.view(b, k, -1).mean(2)
74 | weights = alpha.view(b, k, 1, 1)
75 | saliency_map = (weights * activations).sum(1, keepdim=True)
76 | saliency_map = F.relu(saliency_map)
77 | saliency_map = F.upsample(saliency_map, size=(h, w), mode='bilinear', align_corners=False)
78 | saliency_map_min, saliency_map_max = saliency_map.min(), saliency_map.max()
79 | saliency_map = (saliency_map - saliency_map_min).div(saliency_map_max - saliency_map_min).data
80 | saliency_maps.append(saliency_map)
81 | return saliency_maps, logits, preds
82 |
83 | def __call__(self, input_img):
84 | return self.forward(input_img)
85 |
--------------------------------------------------------------------------------
/utils/activations.py:
--------------------------------------------------------------------------------
1 | # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2 | """
3 | Activation functions
4 | """
5 |
6 | import torch
7 | import torch.nn as nn
8 | import torch.nn.functional as F
9 |
10 |
11 | # SiLU https://arxiv.org/pdf/1606.08415.pdf ----------------------------------------------------------------------------
12 | class SiLU(nn.Module): # export-friendly version of nn.SiLU()
13 | @staticmethod
14 | def forward(x):
15 | return x * torch.sigmoid(x)
16 |
17 |
18 | class Hardswish(nn.Module): # export-friendly version of nn.Hardswish()
19 | @staticmethod
20 | def forward(x):
21 | # return x * F.hardsigmoid(x) # for torchscript and CoreML
22 | return x * F.hardtanh(x + 3, 0., 6.) / 6. # for torchscript, CoreML and ONNX
23 |
24 |
25 | # Mish https://github.com/digantamisra98/Mish --------------------------------------------------------------------------
26 | class Mish(nn.Module):
27 | @staticmethod
28 | def forward(x):
29 | return x * F.softplus(x).tanh()
30 |
31 |
32 | class MemoryEfficientMish(nn.Module):
33 | class F(torch.autograd.Function):
34 | @staticmethod
35 | def forward(ctx, x):
36 | ctx.save_for_backward(x)
37 | return x.mul(torch.tanh(F.softplus(x))) # x * tanh(ln(1 + exp(x)))
38 |
39 | @staticmethod
40 | def backward(ctx, grad_output):
41 | x = ctx.saved_tensors[0]
42 | sx = torch.sigmoid(x)
43 | fx = F.softplus(x).tanh()
44 | return grad_output * (fx + x * sx * (1 - fx * fx))
45 |
46 | def forward(self, x):
47 | return self.F.apply(x)
48 |
49 |
50 | # FReLU https://arxiv.org/abs/2007.11824 -------------------------------------------------------------------------------
51 | class FReLU(nn.Module):
52 | def __init__(self, c1, k=3): # ch_in, kernel
53 | super().__init__()
54 | self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False)
55 | self.bn = nn.BatchNorm2d(c1)
56 |
57 | def forward(self, x):
58 | return torch.max(x, self.bn(self.conv(x)))
59 |
60 |
61 | # ACON https://arxiv.org/pdf/2009.04759.pdf ----------------------------------------------------------------------------
62 | class AconC(nn.Module):
63 | r""" ACON activation (activate or not).
64 | AconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is a learnable parameter
65 | according to "Activate or Not: Learning Customized Activation" .
66 | """
67 |
68 | def __init__(self, c1):
69 | super().__init__()
70 | self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))
71 | self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))
72 | self.beta = nn.Parameter(torch.ones(1, c1, 1, 1))
73 |
74 | def forward(self, x):
75 | dpx = (self.p1 - self.p2) * x
76 | return dpx * torch.sigmoid(self.beta * dpx) + self.p2 * x
77 |
78 |
79 | class MetaAconC(nn.Module):
80 | r""" ACON activation (activate or not).
81 | MetaAconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is generated by a small network
82 | according to "Activate or Not: Learning Customized Activation" .
83 | """
84 |
85 | def __init__(self, c1, k=1, s=1, r=16): # ch_in, kernel, stride, r
86 | super().__init__()
87 | c2 = max(r, c1 // r)
88 | self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))
89 | self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))
90 | self.fc1 = nn.Conv2d(c1, c2, k, s, bias=True)
91 | self.fc2 = nn.Conv2d(c2, c1, k, s, bias=True)
92 | # self.bn1 = nn.BatchNorm2d(c2)
93 | # self.bn2 = nn.BatchNorm2d(c1)
94 |
95 | def forward(self, x):
96 | y = x.mean(dim=2, keepdims=True).mean(dim=3, keepdims=True)
97 | # batch-size 1 bug/instabilities https://github.com/ultralytics/yolov5/issues/2891
98 | # beta = torch.sigmoid(self.bn2(self.fc2(self.bn1(self.fc1(y))))) # bug/unstable
99 | beta = torch.sigmoid(self.fc2(self.fc1(y))) # bug patch BN layers removed
100 | dpx = (self.p1 - self.p2) * x
101 | return dpx * torch.sigmoid(beta * dpx) + self.p2 * x
102 |
--------------------------------------------------------------------------------
/models/experimental.py:
--------------------------------------------------------------------------------
1 | # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2 | """
3 | Experimental modules
4 | """
5 | import math
6 |
7 | import numpy as np
8 | import torch
9 | import torch.nn as nn
10 |
11 | from models.common import Conv
12 | from utils.downloads import attempt_download
13 |
14 |
15 | class Sum(nn.Module):
16 | # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
17 | def __init__(self, n, weight=False): # n: number of inputs
18 | super().__init__()
19 | self.weight = weight # apply weights boolean
20 | self.iter = range(n - 1) # iter object
21 | if weight:
22 | self.w = nn.Parameter(-torch.arange(1.0, n) / 2, requires_grad=True) # layer weights
23 |
24 | def forward(self, x):
25 | y = x[0] # no weight
26 | if self.weight:
27 | w = torch.sigmoid(self.w) * 2
28 | for i in self.iter:
29 | y = y + x[i + 1] * w[i]
30 | else:
31 | for i in self.iter:
32 | y = y + x[i + 1]
33 | return y
34 |
35 |
36 | class MixConv2d(nn.Module):
37 | # Mixed Depth-wise Conv https://arxiv.org/abs/1907.09595
38 | def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): # ch_in, ch_out, kernel, stride, ch_strategy
39 | super().__init__()
40 | n = len(k) # number of convolutions
41 | if equal_ch: # equal c_ per group
42 | i = torch.linspace(0, n - 1E-6, c2).floor() # c2 indices
43 | c_ = [(i == g).sum() for g in range(n)] # intermediate channels
44 | else: # equal weight.numel() per group
45 | b = [c2] + [0] * n
46 | a = np.eye(n + 1, n, k=-1)
47 | a -= np.roll(a, 1, axis=1)
48 | a *= np.array(k) ** 2
49 | a[0] = 1
50 | c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b
51 |
52 | self.m = nn.ModuleList([
53 | nn.Conv2d(c1, int(c_), k, s, k // 2, groups=math.gcd(c1, int(c_)), bias=False) for k, c_ in zip(k, c_)])
54 | self.bn = nn.BatchNorm2d(c2)
55 | self.act = nn.SiLU()
56 |
57 | def forward(self, x):
58 | return self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))
59 |
60 |
61 | class Ensemble(nn.ModuleList):
62 | # Ensemble of models
63 | def __init__(self):
64 | super().__init__()
65 |
66 | def forward(self, x, augment=False, profile=False, visualize=False):
67 | y = [module(x, augment, profile, visualize)[0] for module in self]
68 | # y = torch.stack(y).max(0)[0] # max ensemble
69 | # y = torch.stack(y).mean(0) # mean ensemble
70 | y = torch.cat(y, 1) # nms ensemble
71 | return y, None # inference, train output
72 |
73 |
74 | def attempt_load(weights, device=None, inplace=True, fuse=True):
75 | from models.yolo import Detect, Model
76 |
77 | # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
78 | model = Ensemble()
79 | for w in weights if isinstance(weights, list) else [weights]:
80 | ckpt = torch.load(attempt_download(w), map_location=device)
81 | ckpt = (ckpt.get('ema') or ckpt['model']).float() # FP32 model
82 | model.append(ckpt.fuse().eval() if fuse else ckpt.eval()) # fused or un-fused model in eval mode
83 |
84 | # Compatibility updates
85 | for m in model.modules():
86 | t = type(m)
87 | if t in (nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model):
88 | m.inplace = inplace # torch 1.7.0 compatibility
89 | if t is Detect and not isinstance(m.anchor_grid, list):
90 | delattr(m, 'anchor_grid')
91 | setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl)
92 | elif t is Conv:
93 | m._non_persistent_buffers_set = set() # torch 1.6.0 compatibility
94 | elif t is nn.Upsample and not hasattr(m, 'recompute_scale_factor'):
95 | m.recompute_scale_factor = None # torch 1.11.0 compatibility
96 |
97 | if len(model) == 1:
98 | return model[-1] # return model
99 | print(f'Ensemble created with {weights}\n')
100 | for k in 'names', 'nc', 'yaml':
101 | setattr(model, k, getattr(model[0], k))
102 | model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride
103 | assert all(model[0].nc == m.nc for m in model), f'Models have different class counts: {[m.nc for m in model]}'
104 | return model # return ensemble
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Repo-specific GitIgnore ----------------------------------------------------------------------------------------------
2 | *.jpeg
3 | *.png
4 | *.bmp
5 | *.tif
6 | *.tiff
7 | *.heic
8 | *.JPG
9 | *.JPEG
10 | *.PNG
11 | *.BMP
12 | *.TIF
13 | *.TIFF
14 | *.HEIC
15 | *.mp4
16 | *.mov
17 | *.MOV
18 | *.avi
19 | *.data
20 | *.json
21 | *.cfg
22 | !cfg/yolov3*.cfg
23 |
24 | storage.googleapis.com
25 | runs/*
26 | data/*
27 | !data/hyps/*
28 | !data/images/zidane.jpg
29 | !data/images/bus.jpg
30 | !data/*.sh
31 |
32 | results*.csv
33 |
34 | # Datasets -------------------------------------------------------------------------------------------------------------
35 | coco/
36 | coco128/
37 | VOC/
38 |
39 | # MATLAB GitIgnore -----------------------------------------------------------------------------------------------------
40 | *.m~
41 | *.mat
42 | !targets*.mat
43 |
44 | # Neural Network weights -----------------------------------------------------------------------------------------------
45 | *.weights
46 | *.pt
47 | *.pb
48 | *.onnx
49 | *.mlmodel
50 | *.torchscript
51 | *.tflite
52 | *.h5
53 | *_saved_model/
54 | *_web_model/
55 | darknet53.conv.74
56 | yolov3-tiny.conv.15
57 |
58 | # GitHub Python GitIgnore ----------------------------------------------------------------------------------------------
59 | # Byte-compiled / optimized / DLL files
60 | __pycache__/
61 | *.py[cod]
62 | *$py.class
63 |
64 | # C extensions
65 | *.so
66 |
67 | # Distribution / packaging
68 | .Python
69 | env/
70 | build/
71 | develop-eggs/
72 | dist/
73 | downloads/
74 | eggs/
75 | .eggs/
76 | lib/
77 | lib64/
78 | parts/
79 | sdist/
80 | var/
81 | wheels/
82 | *.egg-info/
83 | /wandb/
84 | .installed.cfg
85 | *.egg
86 |
87 |
88 | # PyInstaller
89 | # Usually these files are written by a python script from a template
90 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
91 | *.manifest
92 | *.spec
93 |
94 | # Installer logs
95 | pip-log.txt
96 | pip-delete-this-directory.txt
97 |
98 | # Unit test / coverage reports
99 | htmlcov/
100 | .tox/
101 | .coverage
102 | .coverage.*
103 | .cache
104 | nosetests.xml
105 | coverage.xml
106 | *.cover
107 | .hypothesis/
108 |
109 | # Translations
110 | *.mo
111 | *.pot
112 |
113 | # Django stuff:
114 | *.log
115 | local_settings.py
116 |
117 | # Flask stuff:
118 | instance/
119 | .webassets-cache
120 |
121 | # Scrapy stuff:
122 | .scrapy
123 |
124 | # Sphinx documentation
125 | docs/_build/
126 |
127 | # PyBuilder
128 | target/
129 |
130 | # Jupyter Notebook
131 | .ipynb_checkpoints
132 |
133 | # pyenv
134 | .python-version
135 |
136 | # celery beat schedule file
137 | celerybeat-schedule
138 |
139 | # SageMath parsed files
140 | *.sage.py
141 |
142 | # dotenv
143 | .env
144 |
145 | # virtualenv
146 | .venv*
147 | venv*/
148 | ENV*/
149 |
150 | # Spyder project settings
151 | .spyderproject
152 | .spyproject
153 |
154 | # Rope project settings
155 | .ropeproject
156 |
157 | # mkdocs documentation
158 | /site
159 |
160 | # mypy
161 | .mypy_cache/
162 |
163 |
164 | # https://github.com/github/gitignore/blob/master/Global/macOS.gitignore -----------------------------------------------
165 |
166 | # General
167 | .DS_Store
168 | .AppleDouble
169 | .LSOverride
170 |
171 | # Icon must end with two \r
172 | Icon
173 | Icon?
174 |
175 | # Thumbnails
176 | ._*
177 |
178 | # Files that might appear in the root of a volume
179 | .DocumentRevisions-V100
180 | .fseventsd
181 | .Spotlight-V100
182 | .TemporaryItems
183 | .Trashes
184 | .VolumeIcon.icns
185 | .com.apple.timemachine.donotpresent
186 |
187 | # Directories potentially created on remote AFP share
188 | .AppleDB
189 | .AppleDesktop
190 | Network Trash Folder
191 | Temporary Items
192 | .apdisk
193 |
194 |
195 | # https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore
196 | # Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm
197 | # Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
198 |
199 | # User-specific stuff:
200 | .idea/*
201 | .idea/**/workspace.xml
202 | .idea/**/tasks.xml
203 | .idea/dictionaries
204 | .html # Bokeh Plots
205 | .pg # TensorFlow Frozen Graphs
206 | .avi # videos
207 |
208 | # Sensitive or high-churn files:
209 | .idea/**/dataSources/
210 | .idea/**/dataSources.ids
211 | .idea/**/dataSources.local.xml
212 | .idea/**/sqlDataSources.xml
213 | .idea/**/dynamic.xml
214 | .idea/**/uiDesigner.xml
215 |
216 | # Gradle:
217 | .idea/**/gradle.xml
218 | .idea/**/libraries
219 |
220 | # CMake
221 | cmake-build-debug/
222 | cmake-build-release/
223 |
224 | # Mongo Explorer plugin:
225 | .idea/**/mongoSettings.xml
226 |
227 | ## File-based project format:
228 | *.iws
229 |
230 | ## Plugin-specific files:
231 |
232 | # IntelliJ
233 | out/
234 |
235 | # mpeltonen/sbt-idea plugin
236 | .idea_modules/
237 |
238 | # JIRA plugin
239 | atlassian-ide-plugin.xml
240 |
241 | # Cursive Clojure plugin
242 | .idea/replstate.xml
243 |
244 | # Crashlytics plugin (for Android Studio and IntelliJ)
245 | com_crashlytics_export_strings.xml
246 | crashlytics.properties
247 | crashlytics-build.properties
248 | fabric.properties
249 |
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | import os
2 | import time
3 | import argparse
4 | import numpy as np
5 | from models.gradcam import YOLOV5GradCAM
6 | from models.yolo_v5_object_detector import YOLOV5TorchObjectDetector
7 | import cv2
8 | from deep_utils import Box, split_extension
9 |
10 | # Arguments
11 | parser = argparse.ArgumentParser()
12 | parser.add_argument('--model-path', type=str, default="yolov5s.pt", help='Path to the model')
13 | parser.add_argument('--img-path', type=str, default='images/', help='input image path')
14 | parser.add_argument('--output-dir', type=str, default='outputs', help='output dir')
15 | parser.add_argument('--img-size', type=int, default=640, help="input image size")
16 | parser.add_argument('--target-layer', type=str, default='model_23_cv3_act',
17 | help='The layer hierarchical address to which gradcam will applied,'
18 | ' the names should be separated by underline')
19 | parser.add_argument('--method', type=str, default='gradcam', help='gradcam or gradcampp')
20 | parser.add_argument('--device', type=str, default='cpu', help='cuda or cpu')
21 | parser.add_argument('--names', type=str, default=None,
22 | help='The name of the classes. The default is set to None and is set to coco classes. Provide your custom names as follow: object1,object2,object3')
23 |
24 | args = parser.parse_args()
25 |
26 |
27 | def get_res_img(bbox, mask, res_img):
28 | mask = mask.squeeze(0).mul(255).add_(0.5).clamp_(0, 255).permute(1, 2, 0).detach().cpu().numpy().astype(
29 | np.uint8)
30 | heatmap = cv2.applyColorMap(mask, cv2.COLORMAP_JET)
31 | n_heatmat = (Box.fill_outer_box(heatmap, bbox) / 255).astype(np.float32)
32 | res_img = res_img / 255
33 | res_img = cv2.add(res_img, n_heatmat)
34 | res_img = (res_img / res_img.max())
35 | return res_img, n_heatmat
36 |
37 |
38 | def put_text_box(bbox, cls_name, res_img):
39 | x1, y1, x2, y2 = bbox
40 | # this is a bug in cv2. It does not put box on a converted image from torch unless it's buffered and read again!
41 | cv2.imwrite('temp.jpg', (res_img * 255).astype(np.uint8))
42 | res_img = cv2.imread('temp.jpg')
43 | res_img = Box.put_box(res_img, bbox)
44 | res_img = Box.put_text(res_img, cls_name, (x1, y1))
45 | return res_img
46 |
47 |
48 | def concat_images(images):
49 | w, h = images[0].shape[:2]
50 | width = w
51 | height = h * len(images)
52 | base_img = np.zeros((width, height, 3), dtype=np.uint8)
53 | for i, img in enumerate(images):
54 | base_img[:, h * i:h * (i + 1), ...] = img
55 | return base_img
56 |
57 |
58 | def main(img_path):
59 | device = args.device
60 | input_size = (args.img_size, args.img_size)
61 | img = cv2.imread(img_path)
62 | print('[INFO] Loading the model')
63 | model = YOLOV5TorchObjectDetector(args.model_path, device, img_size=input_size,
64 | names=None if args.names is None else args.names.strip().split(","))
65 | torch_img = model.preprocessing(img[..., ::-1])
66 | if args.method == 'gradcam':
67 | saliency_method = YOLOV5GradCAM(model=model, layer_name=args.target_layer, img_size=input_size)
68 | tic = time.time()
69 | masks, logits, [boxes, _, class_names, _] = saliency_method(torch_img)
70 | print("total time:", round(time.time() - tic, 4))
71 | result = torch_img.squeeze(0).mul(255).add_(0.5).clamp_(0, 255).permute(1, 2, 0).detach().cpu().numpy()
72 | result = result[..., ::-1] # convert to bgr
73 | images = [result]
74 | for i, mask in enumerate(masks):
75 | res_img = result.copy()
76 | bbox, cls_name = boxes[0][i], class_names[0][i]
77 | res_img, heat_map = get_res_img(bbox, mask, res_img)
78 | res_img = put_text_box(bbox, cls_name, res_img)
79 | images.append(res_img)
80 | final_image = concat_images(images)
81 | img_name = split_extension(os.path.split(img_path)[-1], suffix='-res')
82 | output_path = f'{args.output_dir}/{img_name}'
83 | os.makedirs(args.output_dir, exist_ok=True)
84 | print(f'[INFO] Saving the final image at {output_path}')
85 | cv2.imwrite(output_path, final_image)
86 |
87 |
88 | def folder_main(folder_path):
89 | device = args.device
90 | input_size = (args.img_size, args.img_size)
91 | print('[INFO] Loading the model')
92 | model = YOLOV5TorchObjectDetector(args.model_path, device, img_size=input_size,
93 | names=None if args.names is None else args.names.strip().split(","))
94 | for item in os.listdir(folder_path):
95 | img_path = os.path.join(folder_path, item)
96 | img = cv2.imread(img_path)
97 | torch_img = model.preprocessing(img[..., ::-1])
98 | if args.method == 'gradcam':
99 | saliency_method = YOLOV5GradCAM(model=model, layer_name=args.target_layer, img_size=input_size)
100 | tic = time.time()
101 | masks, logits, [boxes, _, class_names, _] = saliency_method(torch_img)
102 | print("total time:", round(time.time() - tic, 4))
103 | result = torch_img.squeeze(0).mul(255).add_(0.5).clamp_(0, 255).permute(1, 2, 0).detach().cpu().numpy()
104 | result = result[..., ::-1] # convert to bgr
105 | images = [result]
106 | for i, mask in enumerate(masks):
107 | res_img = result.copy()
108 | bbox, cls_name = boxes[0][i], class_names[0][i]
109 | res_img, heat_map = get_res_img(bbox, mask, res_img)
110 | res_img = put_text_box(bbox, cls_name, res_img)
111 | images.append(res_img)
112 | final_image = concat_images(images)
113 | img_name = split_extension(os.path.split(img_path)[-1], suffix='-res')
114 | output_path = f'{args.output_dir}/{img_name}'
115 | os.makedirs(args.output_dir, exist_ok=True)
116 | print(f'[INFO] Saving the final image at {output_path}')
117 | cv2.imwrite(output_path, final_image)
118 |
119 |
120 | if __name__ == '__main__':
121 | if os.path.isdir(args.img_path):
122 | folder_main(args.img_path)
123 | else:
124 | main(args.img_path)
125 |
--------------------------------------------------------------------------------
/utils/downloads.py:
--------------------------------------------------------------------------------
1 | # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2 | """
3 | Download utils
4 | """
5 |
6 | import os
7 | import platform
8 | import subprocess
9 | import time
10 | import urllib
11 | from pathlib import Path
12 | from zipfile import ZipFile
13 |
14 | import requests
15 | import torch
16 |
17 |
18 | def gsutil_getsize(url=''):
19 | # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du
20 | s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8')
21 | return eval(s.split(' ')[0]) if len(s) else 0 # bytes
22 |
23 |
24 | def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''):
25 | # Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes
26 | file = Path(file)
27 | assert_msg = f"Downloaded file '{file}' does not exist or size is < min_bytes={min_bytes}"
28 | try: # url1
29 | print(f'Downloading {url} to {file}...')
30 | torch.hub.download_url_to_file(url, str(file))
31 | assert file.exists() and file.stat().st_size > min_bytes, assert_msg # check
32 | except Exception as e: # url2
33 | file.unlink(missing_ok=True) # remove partial downloads
34 | print(f'ERROR: {e}\nRe-attempting {url2 or url} to {file}...')
35 | os.system(f"curl -L '{url2 or url}' -o '{file}' --retry 3 -C -") # curl download, retry and resume on fail
36 | finally:
37 | if not file.exists() or file.stat().st_size < min_bytes: # check
38 | file.unlink(missing_ok=True) # remove partial downloads
39 | print(f"ERROR: {assert_msg}\n{error_msg}")
40 | print('')
41 |
42 |
43 | def attempt_download(file, repo='ultralytics/yolov5'): # from utils.downloads import *; attempt_download()
44 | # Attempt file download if does not exist
45 | file = Path(str(file).strip().replace("'", ''))
46 |
47 | if not file.exists():
48 | # URL specified
49 | name = Path(urllib.parse.unquote(str(file))).name # decode '%2F' to '/' etc.
50 | if str(file).startswith(('http:/', 'https:/')): # download
51 | url = str(file).replace(':/', '://') # Pathlib turns :// -> :/
52 | name = name.split('?')[0] # parse authentication https://url.com/file.txt?auth...
53 | safe_download(file=name, url=url, min_bytes=1E5)
54 | return name
55 |
56 | # GitHub assets
57 | file.parent.mkdir(parents=True, exist_ok=True) # make parent dir (if required)
58 | try:
59 | response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api
60 | assets = [x['name'] for x in response['assets']] # release assets, i.e. ['yolov5s.pt', 'yolov5m.pt', ...]
61 | tag = response['tag_name'] # i.e. 'v1.0'
62 | except: # fallback plan
63 | assets = ['yolov5n.pt', 'yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt',
64 | 'yolov5n6.pt', 'yolov5s6.pt', 'yolov5m6.pt', 'yolov5l6.pt', 'yolov5x6.pt']
65 | try:
66 | tag = subprocess.check_output('git tag', shell=True, stderr=subprocess.STDOUT).decode().split()[-1]
67 | except:
68 | tag = 'v6.0' # current release
69 |
70 | if name in assets:
71 | safe_download(file,
72 | url=f'https://github.com/{repo}/releases/download/{tag}/{name}',
73 | # url2=f'https://storage.googleapis.com/{repo}/ckpt/{name}', # backup url (optional)
74 | min_bytes=1E5,
75 | error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/')
76 |
77 | return str(file)
78 |
79 |
80 | def gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zip'):
81 | # Downloads a file from Google Drive. from yolov5.utils.downloads import *; gdrive_download()
82 | t = time.time()
83 | file = Path(file)
84 | cookie = Path('cookie') # gdrive cookie
85 | print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='')
86 | file.unlink(missing_ok=True) # remove existing file
87 | cookie.unlink(missing_ok=True) # remove existing cookie
88 |
89 | # Attempt file download
90 | out = "NUL" if platform.system() == "Windows" else "/dev/null"
91 | os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}')
92 | if os.path.exists('cookie'): # large file
93 | s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}'
94 | else: # small file
95 | s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"'
96 | r = os.system(s) # execute, capture return
97 | cookie.unlink(missing_ok=True) # remove existing cookie
98 |
99 | # Error check
100 | if r != 0:
101 | file.unlink(missing_ok=True) # remove partial
102 | print('Download error ') # raise Exception('Download error')
103 | return r
104 |
105 | # Unzip if archive
106 | if file.suffix == '.zip':
107 | print('unzipping... ', end='')
108 | ZipFile(file).extractall(path=file.parent) # unzip
109 | file.unlink() # remove zip
110 |
111 | print(f'Done ({time.time() - t:.1f}s)')
112 | return r
113 |
114 |
115 | def get_token(cookie="./cookie"):
116 | with open(cookie) as f:
117 | for line in f:
118 | if "download" in line:
119 | return line.split()[-1]
120 | return ""
121 |
122 | # Google utils: https://cloud.google.com/storage/docs/reference/libraries ----------------------------------------------
123 | #
124 | #
125 | # def upload_blob(bucket_name, source_file_name, destination_blob_name):
126 | # # Uploads a file to a bucket
127 | # # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python
128 | #
129 | # storage_client = storage.Client()
130 | # bucket = storage_client.get_bucket(bucket_name)
131 | # blob = bucket.blob(destination_blob_name)
132 | #
133 | # blob.upload_from_filename(source_file_name)
134 | #
135 | # print('File {} uploaded to {}.'.format(
136 | # source_file_name,
137 | # destination_blob_name))
138 | #
139 | #
140 | # def download_blob(bucket_name, source_blob_name, destination_file_name):
141 | # # Uploads a blob from a bucket
142 | # storage_client = storage.Client()
143 | # bucket = storage_client.get_bucket(bucket_name)
144 | # blob = bucket.blob(source_blob_name)
145 | #
146 | # blob.download_to_filename(destination_file_name)
147 | #
148 | # print('Blob {} downloaded to {}.'.format(
149 | # source_blob_name,
150 | # destination_file_name))
151 |
--------------------------------------------------------------------------------
/utils/autoanchor.py:
--------------------------------------------------------------------------------
1 | # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2 | """
3 | Auto-anchor utils
4 | """
5 |
6 | import random
7 |
8 | import numpy as np
9 | import torch
10 | import yaml
11 | from tqdm import tqdm
12 |
13 | from utils.general import colorstr
14 |
15 |
16 | def check_anchor_order(m):
17 | # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary
18 | a = m.anchors.prod(-1).view(-1) # anchor area
19 | da = a[-1] - a[0] # delta a
20 | ds = m.stride[-1] - m.stride[0] # delta s
21 | if da.sign() != ds.sign(): # same order
22 | print('Reversing anchor order')
23 | m.anchors[:] = m.anchors.flip(0)
24 |
25 |
26 | def check_anchors(dataset, model, thr=4.0, imgsz=640):
27 | # Check anchor fit to data, recompute if necessary
28 | prefix = colorstr('autoanchor: ')
29 | print(f'\n{prefix}Analyzing anchors... ', end='')
30 | m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect()
31 | shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True)
32 | scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale
33 | wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh
34 |
35 | def metric(k): # compute metric
36 | r = wh[:, None] / k[None]
37 | x = torch.min(r, 1. / r).min(2)[0] # ratio metric
38 | best = x.max(1)[0] # best_x
39 | aat = (x > 1. / thr).float().sum(1).mean() # anchors above threshold
40 | bpr = (best > 1. / thr).float().mean() # best possible recall
41 | return bpr, aat
42 |
43 | anchors = m.anchors.clone() * m.stride.to(m.anchors.device).view(-1, 1, 1) # current anchors
44 | bpr, aat = metric(anchors.cpu().view(-1, 2))
45 | print(f'anchors/target = {aat:.2f}, Best Possible Recall (BPR) = {bpr:.4f}', end='')
46 | if bpr < 0.98: # threshold to recompute
47 | print('. Attempting to improve anchors, please wait...')
48 | na = m.anchors.numel() // 2 # number of anchors
49 | try:
50 | anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False)
51 | except Exception as e:
52 | print(f'{prefix}ERROR: {e}')
53 | new_bpr = metric(anchors)[0]
54 | if new_bpr > bpr: # replace anchors
55 | anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors)
56 | m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss
57 | check_anchor_order(m)
58 | print(f'{prefix}New anchors saved to model. Update model *.yaml to use these anchors in the future.')
59 | else:
60 | print(f'{prefix}Original anchors better than new anchors. Proceeding with original anchors.')
61 | print('') # newline
62 |
63 |
64 | def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True):
65 | """ Creates kmeans-evolved anchors from training dataset
66 |
67 | Arguments:
68 | dataset: path to data.yaml, or a loaded dataset
69 | n: number of anchors
70 | img_size: image size used for training
71 | thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0
72 | gen: generations to evolve anchors using genetic algorithm
73 | verbose: print all results
74 |
75 | Return:
76 | k: kmeans evolved anchors
77 |
78 | Usage:
79 | from utils.autoanchor import *; _ = kmean_anchors()
80 | """
81 | from scipy.cluster.vq import kmeans
82 |
83 | thr = 1. / thr
84 | prefix = colorstr('autoanchor: ')
85 |
86 | def metric(k, wh): # compute metrics
87 | r = wh[:, None] / k[None]
88 | x = torch.min(r, 1. / r).min(2)[0] # ratio metric
89 | # x = wh_iou(wh, torch.tensor(k)) # iou metric
90 | return x, x.max(1)[0] # x, best_x
91 |
92 | def anchor_fitness(k): # mutation fitness
93 | _, best = metric(torch.tensor(k, dtype=torch.float32), wh)
94 | return (best * (best > thr).float()).mean() # fitness
95 |
96 | def print_results(k):
97 | k = k[np.argsort(k.prod(1))] # sort small to large
98 | x, best = metric(k, wh0)
99 | bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr
100 | print(f'{prefix}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr')
101 | print(f'{prefix}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, '
102 | f'past_thr={x[x > thr].mean():.3f}-mean: ', end='')
103 | for i, x in enumerate(k):
104 | print('%i,%i' % (round(x[0]), round(x[1])), end=', ' if i < len(k) - 1 else '\n') # use in *.cfg
105 | return k
106 |
107 | if isinstance(dataset, str): # *.yaml file
108 | with open(dataset, errors='ignore') as f:
109 | data_dict = yaml.safe_load(f) # model dict
110 | from utils.datasets import LoadImagesAndLabels
111 | dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True)
112 |
113 | # Get label wh
114 | shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True)
115 | wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh
116 |
117 | # Filter
118 | i = (wh0 < 3.0).any(1).sum()
119 | if i:
120 | print(f'{prefix}WARNING: Extremely small objects found. {i} of {len(wh0)} labels are < 3 pixels in size.')
121 | wh = wh0[(wh0 >= 2.0).any(1)] # filter > 2 pixels
122 | # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1
123 |
124 | # Kmeans calculation
125 | print(f'{prefix}Running kmeans for {n} anchors on {len(wh)} points...')
126 | s = wh.std(0) # sigmas for whitening
127 | k, dist = kmeans(wh / s, n, iter=30) # points, mean distance
128 | assert len(k) == n, f'{prefix}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}'
129 | k *= s
130 | wh = torch.tensor(wh, dtype=torch.float32) # filtered
131 | wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered
132 | k = print_results(k)
133 |
134 | # Plot
135 | # k, d = [None] * 20, [None] * 20
136 | # for i in tqdm(range(1, 21)):
137 | # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance
138 | # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True)
139 | # ax = ax.ravel()
140 | # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.')
141 | # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh
142 | # ax[0].hist(wh[wh[:, 0]<100, 0],400)
143 | # ax[1].hist(wh[wh[:, 1]<100, 1],400)
144 | # fig.savefig('wh.png', dpi=200)
145 |
146 | # Evolve
147 | npr = np.random
148 | f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1 # fitness, generations, mutation prob, sigma
149 | pbar = tqdm(range(gen), desc=f'{prefix}Evolving anchors with Genetic Algorithm:') # progress bar
150 | for _ in pbar:
151 | v = np.ones(sh)
152 | while (v == 1).all(): # mutate until a change occurs (prevent duplicates)
153 | v = ((npr.random(sh) < mp) * random.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0)
154 | kg = (k.copy() * v).clip(min=2.0)
155 | fg = anchor_fitness(kg)
156 | if fg > f:
157 | f, k = fg, kg.copy()
158 | pbar.desc = f'{prefix}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}'
159 | if verbose:
160 | print_results(k)
161 |
162 | return print_results(k)
163 |
--------------------------------------------------------------------------------
/models/yolo_v5_object_detector.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from deep_utils.utils.box_utils.boxes import Box
3 | import torch
4 | from models.experimental import attempt_load
5 | from utils.general import xywh2xyxy
6 | from utils.datasets import letterbox
7 | import cv2
8 | import time
9 | import torchvision
10 | import torch.nn as nn
11 | from utils.metrics import box_iou
12 |
13 |
14 | class YOLOV5TorchObjectDetector(nn.Module):
15 | def __init__(self,
16 | model_weight,
17 | device,
18 | img_size,
19 | names=None,
20 | mode='eval',
21 | confidence=0.4,
22 | iou_thresh=0.45,
23 | agnostic_nms=False):
24 | super(YOLOV5TorchObjectDetector, self).__init__()
25 | self.device = device
26 | self.model = None
27 | self.img_size = img_size
28 | self.mode = mode
29 | self.confidence = confidence
30 | self.iou_thresh = iou_thresh
31 | self.agnostic = agnostic_nms
32 | self.model = attempt_load(model_weight, device=device)
33 | print("[INFO] Model is loaded")
34 | self.model.requires_grad_(True)
35 | self.model.to(device)
36 | if self.mode == 'train':
37 | self.model.train()
38 | else:
39 | self.model.eval()
40 | # fetch the names
41 | if names is None:
42 | print('[INFO] fetching names from coco file')
43 | self.names = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat',
44 | 'traffic light',
45 | 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep',
46 | 'cow',
47 | 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase',
48 | 'frisbee',
49 | 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard',
50 | 'surfboard',
51 | 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana',
52 | 'apple',
53 | 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair',
54 | 'couch',
55 | 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
56 | 'keyboard', 'cell phone',
57 | 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
58 | 'teddy bear',
59 | 'hair drier', 'toothbrush']
60 | else:
61 | self.names = names
62 |
63 | # preventing cold start
64 | img = torch.zeros((1, 3, *self.img_size), device=device)
65 | self.model(img)
66 |
67 | @staticmethod
68 | def non_max_suppression(prediction, logits, conf_thres=0.6, iou_thres=0.45, classes=None, agnostic=False,
69 | multi_label=False, labels=(), max_det=300):
70 | """Runs Non-Maximum Suppression (NMS) on inference and logits results
71 |
72 | Returns:
73 | list of detections, on (n,6) tensor per image [xyxy, conf, cls] and pruned input logits (n, number-classes)
74 | """
75 |
76 | nc = prediction.shape[2] - 5 # number of classes
77 | xc = prediction[..., 4] > conf_thres # candidates
78 |
79 | # Checks
80 | assert 0 <= conf_thres <= 1, f'Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0'
81 | assert 0 <= iou_thres <= 1, f'Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0'
82 |
83 | # Settings
84 | min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
85 | max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
86 | time_limit = 10.0 # seconds to quit after
87 | redundant = True # require redundant detections
88 | multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
89 | merge = False # use merge-NMS
90 |
91 | t = time.time()
92 | output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
93 | logits_output = [torch.zeros((0, 80), device=logits.device)] * logits.shape[0]
94 | for xi, (x, log_) in enumerate(zip(prediction, logits)): # image index, image inference
95 | # Apply constraints
96 | # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
97 | x = x[xc[xi]] # confidence
98 | log_ = log_[xc[xi]]
99 | # Cat apriori labels if autolabelling
100 | if labels and len(labels[xi]):
101 | l = labels[xi]
102 | v = torch.zeros((len(l), nc + 5), device=x.device)
103 | v[:, :4] = l[:, 1:5] # box
104 | v[:, 4] = 1.0 # conf
105 | v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls
106 | x = torch.cat((x, v), 0)
107 |
108 | # If none remain process next image
109 | if not x.shape[0]:
110 | continue
111 |
112 | # Compute conf
113 | x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
114 | # log_ *= x[:, 4:5]
115 | # Box (center x, center y, width, height) to (x1, y1, x2, y2)
116 | box = xywh2xyxy(x[:, :4])
117 |
118 | # Detections matrix nx6 (xyxy, conf, cls)
119 | if multi_label:
120 | i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
121 | x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
122 | else: # best class only
123 | conf, j = x[:, 5:].max(1, keepdim=True)
124 | # log_ = x[:, 5:]
125 | x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
126 | log_ = log_[conf.view(-1) > conf_thres]
127 | # Filter by class
128 | if classes is not None:
129 | x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
130 |
131 | # Check shape
132 | n = x.shape[0] # number of boxes
133 | if not n: # no boxes
134 | continue
135 | elif n > max_nms: # excess boxes
136 | x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
137 |
138 | # Batched NMS
139 | c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
140 | boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
141 | i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
142 | if i.shape[0] > max_det: # limit detections
143 | i = i[:max_det]
144 | if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
145 | # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
146 | iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
147 | weights = iou * scores[None] # box weights
148 | x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
149 | if redundant:
150 | i = i[iou.sum(1) > 1] # require redundancy
151 |
152 | output[xi] = x[i]
153 | logits_output[xi] = log_[i]
154 | assert log_[i].shape[0] == x[i].shape[0]
155 | if (time.time() - t) > time_limit:
156 | print(f'WARNING: NMS time limit {time_limit}s exceeded')
157 | break # time limit exceeded
158 |
159 | return output, logits_output
160 |
161 | @staticmethod
162 | def yolo_resize(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True):
163 |
164 | return letterbox(img, new_shape=new_shape, color=color, auto=auto, scaleFill=scaleFill, scaleup=scaleup)
165 |
166 | def forward(self, img):
167 | prediction, logits, _ = self.model(img, augment=False)
168 | prediction, logits = self.non_max_suppression(prediction, logits, self.confidence, self.iou_thresh,
169 | classes=None,
170 | agnostic=self.agnostic)
171 | self.boxes, self.class_names, self.classes, self.confidences = [[[] for _ in range(img.shape[0])] for _ in
172 | range(4)]
173 | for i, det in enumerate(prediction): # detections per image
174 | if len(det):
175 | for *xyxy, conf, cls in det:
176 | bbox = Box.box2box(xyxy,
177 | in_source=Box.BoxSource.Torch,
178 | to_source=Box.BoxSource.Numpy,
179 | return_int=True)
180 | self.boxes[i].append(bbox)
181 | self.confidences[i].append(round(conf.item(), 2))
182 | cls = int(cls.item())
183 | self.classes[i].append(cls)
184 | if self.names is not None:
185 | self.class_names[i].append(self.names[cls])
186 | else:
187 | self.class_names[i].append(cls)
188 | return [self.boxes, self.classes, self.class_names, self.confidences], logits
189 |
190 | def preprocessing(self, img):
191 | if len(img.shape) != 4:
192 | img = np.expand_dims(img, axis=0)
193 | im0 = img.astype(np.uint8)
194 | img = np.array([self.yolo_resize(im, new_shape=self.img_size)[0] for im in im0])
195 | img = img.transpose((0, 3, 1, 2))
196 | img = np.ascontiguousarray(img)
197 | img = torch.from_numpy(img).to(self.device)
198 | img = img / 255.0
199 | return img
200 |
201 |
202 | if __name__ == '__main__':
203 | model_path = 'runs/train/cart-detection/weights/best.pt'
204 | img_path = './16_4322071600_101_0_4160379257.jpg'
205 | model = YOLOV5TorchObjectDetector(model_path, 'cpu', img_size=(640, 640)).to('cpu')
206 | img = np.expand_dims(cv2.imread(img_path)[..., ::-1], axis=0)
207 | img = model.preprocessing(img)
208 | a = model(img)
209 | print(model._modules)
210 |
--------------------------------------------------------------------------------
/utils/loss.py:
--------------------------------------------------------------------------------
1 | # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2 | """
3 | Loss functions
4 | """
5 |
6 | import torch
7 | import torch.nn as nn
8 |
9 | from utils.metrics import bbox_iou
10 | from utils.torch_utils import is_parallel
11 |
12 |
13 | def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441
14 | # return positive, negative label smoothing BCE targets
15 | return 1.0 - 0.5 * eps, 0.5 * eps
16 |
17 |
18 | class BCEBlurWithLogitsLoss(nn.Module):
19 | # BCEwithLogitLoss() with reduced missing label effects.
20 | def __init__(self, alpha=0.05):
21 | super(BCEBlurWithLogitsLoss, self).__init__()
22 | self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none') # must be nn.BCEWithLogitsLoss()
23 | self.alpha = alpha
24 |
25 | def forward(self, pred, true):
26 | loss = self.loss_fcn(pred, true)
27 | pred = torch.sigmoid(pred) # prob from logits
28 | dx = pred - true # reduce only missing label effects
29 | # dx = (pred - true).abs() # reduce missing label and false label effects
30 | alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4))
31 | loss *= alpha_factor
32 | return loss.mean()
33 |
34 |
35 | class FocalLoss(nn.Module):
36 | # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
37 | def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
38 | super(FocalLoss, self).__init__()
39 | self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
40 | self.gamma = gamma
41 | self.alpha = alpha
42 | self.reduction = loss_fcn.reduction
43 | self.loss_fcn.reduction = 'none' # required to apply FL to each element
44 |
45 | def forward(self, pred, true):
46 | loss = self.loss_fcn(pred, true)
47 | # p_t = torch.exp(-loss)
48 | # loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability
49 |
50 | # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py
51 | pred_prob = torch.sigmoid(pred) # prob from logits
52 | p_t = true * pred_prob + (1 - true) * (1 - pred_prob)
53 | alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
54 | modulating_factor = (1.0 - p_t) ** self.gamma
55 | loss *= alpha_factor * modulating_factor
56 |
57 | if self.reduction == 'mean':
58 | return loss.mean()
59 | elif self.reduction == 'sum':
60 | return loss.sum()
61 | else: # 'none'
62 | return loss
63 |
64 |
65 | class QFocalLoss(nn.Module):
66 | # Wraps Quality focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
67 | def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
68 | super(QFocalLoss, self).__init__()
69 | self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
70 | self.gamma = gamma
71 | self.alpha = alpha
72 | self.reduction = loss_fcn.reduction
73 | self.loss_fcn.reduction = 'none' # required to apply FL to each element
74 |
75 | def forward(self, pred, true):
76 | loss = self.loss_fcn(pred, true)
77 |
78 | pred_prob = torch.sigmoid(pred) # prob from logits
79 | alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
80 | modulating_factor = torch.abs(true - pred_prob) ** self.gamma
81 | loss *= alpha_factor * modulating_factor
82 |
83 | if self.reduction == 'mean':
84 | return loss.mean()
85 | elif self.reduction == 'sum':
86 | return loss.sum()
87 | else: # 'none'
88 | return loss
89 |
90 |
91 | class ComputeLoss:
92 | # Compute losses
93 | def __init__(self, model, autobalance=False):
94 | self.sort_obj_iou = False
95 | device = next(model.parameters()).device # get model device
96 | h = model.hyp # hyperparameters
97 |
98 | # Define criteria
99 | BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
100 | BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
101 |
102 | # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
103 | self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
104 |
105 | # Focal loss
106 | g = h['fl_gamma'] # focal loss gamma
107 | if g > 0:
108 | BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
109 |
110 | det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module
111 | self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7
112 | self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index
113 | self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, 1.0, h, autobalance
114 | for k in 'na', 'nc', 'nl', 'anchors':
115 | setattr(self, k, getattr(det, k))
116 |
117 | def __call__(self, p, targets): # predictions, targets, model
118 | device = targets.device
119 | lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
120 | tcls, tbox, indices, anchors = self.build_targets(p, targets) # targets
121 |
122 | # Losses
123 | for i, pi in enumerate(p): # layer index, layer predictions
124 | b, a, gj, gi = indices[i] # image, anchor, gridy, gridx
125 | tobj = torch.zeros_like(pi[..., 0], device=device) # target obj
126 |
127 | n = b.shape[0] # number of targets
128 | if n:
129 | ps = pi[b, a, gj, gi] # prediction subset corresponding to targets
130 |
131 | # Regression
132 | pxy = ps[:, :2].sigmoid() * 2. - 0.5
133 | pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
134 | pbox = torch.cat((pxy, pwh), 1) # predicted box
135 | iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True) # iou(prediction, target)
136 | lbox += (1.0 - iou).mean() # iou loss
137 |
138 | # Objectness
139 | score_iou = iou.detach().clamp(0).type(tobj.dtype)
140 | if self.sort_obj_iou:
141 | sort_id = torch.argsort(score_iou)
142 | b, a, gj, gi, score_iou = b[sort_id], a[sort_id], gj[sort_id], gi[sort_id], score_iou[sort_id]
143 | tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * score_iou # iou ratio
144 |
145 | # Classification
146 | if self.nc > 1: # cls loss (only if multiple classes)
147 | t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets
148 | t[range(n), tcls[i]] = self.cp
149 | lcls += self.BCEcls(ps[:, 5:], t) # BCE
150 |
151 | # Append targets to text file
152 | # with open('targets.txt', 'a') as file:
153 | # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
154 |
155 | obji = self.BCEobj(pi[..., 4], tobj)
156 | lobj += obji * self.balance[i] # obj loss
157 | if self.autobalance:
158 | self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
159 |
160 | if self.autobalance:
161 | self.balance = [x / self.balance[self.ssi] for x in self.balance]
162 | lbox *= self.hyp['box']
163 | lobj *= self.hyp['obj']
164 | lcls *= self.hyp['cls']
165 | bs = tobj.shape[0] # batch size
166 |
167 | return (lbox + lobj + lcls) * bs, torch.cat((lbox, lobj, lcls)).detach()
168 |
169 | def build_targets(self, p, targets):
170 | # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
171 | na, nt = self.na, targets.shape[0] # number of anchors, targets
172 | tcls, tbox, indices, anch = [], [], [], []
173 | gain = torch.ones(7, device=targets.device) # normalized to gridspace gain
174 | ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
175 | targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
176 |
177 | g = 0.5 # bias
178 | off = torch.tensor([[0, 0],
179 | [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
180 | # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
181 | ], device=targets.device).float() * g # offsets
182 |
183 | for i in range(self.nl):
184 | anchors = self.anchors[i]
185 | gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
186 |
187 | # Match targets to anchors
188 | t = targets * gain
189 | if nt:
190 | # Matches
191 | r = t[:, :, 4:6] / anchors[:, None] # wh ratio
192 | j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
193 | # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
194 | t = t[j] # filter
195 |
196 | # Offsets
197 | gxy = t[:, 2:4] # grid xy
198 | gxi = gain[[2, 3]] - gxy # inverse
199 | j, k = ((gxy % 1. < g) & (gxy > 1.)).T
200 | l, m = ((gxi % 1. < g) & (gxi > 1.)).T
201 | j = torch.stack((torch.ones_like(j), j, k, l, m))
202 | t = t.repeat((5, 1, 1))[j]
203 | offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
204 | else:
205 | t = targets[0]
206 | offsets = 0
207 |
208 | # Define
209 | b, c = t[:, :2].long().T # image, class
210 | gxy = t[:, 2:4] # grid xy
211 | gwh = t[:, 4:6] # grid wh
212 | gij = (gxy - offsets).long()
213 | gi, gj = gij.T # grid xy indices
214 |
215 | # Append
216 | a = t[:, 6].long() # anchor indices
217 | indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
218 | tbox.append(torch.cat((gxy - gij, gwh), 1)) # box
219 | anch.append(anchors[a]) # anchors
220 | tcls.append(c) # class
221 |
222 | return tcls, tbox, indices, anch
223 |
--------------------------------------------------------------------------------
/utils/augmentations.py:
--------------------------------------------------------------------------------
1 | # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2 | """
3 | Image augmentation functions
4 | """
5 |
6 | import logging
7 | import math
8 | import random
9 |
10 | import cv2
11 | import numpy as np
12 |
13 | from utils.general import colorstr, segment2box, resample_segments, check_version
14 | from utils.metrics import bbox_ioa
15 |
16 |
17 | class Albumentations:
18 | # YOLOv5 Albumentations class (optional, only used if package is installed)
19 | def __init__(self):
20 | self.transform = None
21 | try:
22 | import albumentations as A
23 | check_version(A.__version__, '1.0.3') # version requirement
24 |
25 | self.transform = A.Compose([
26 | A.Blur(p=0.01),
27 | A.MedianBlur(p=0.01),
28 | A.ToGray(p=0.01),
29 | A.CLAHE(p=0.01),
30 | A.RandomBrightnessContrast(p=0.0),
31 | A.RandomGamma(p=0.0),
32 | A.ImageCompression(quality_lower=75, p=0.0)],
33 | bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))
34 |
35 | logging.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p))
36 | except ImportError: # package not installed, skip
37 | pass
38 | except Exception as e:
39 | logging.info(colorstr('albumentations: ') + f'{e}')
40 |
41 | def __call__(self, im, labels, p=1.0):
42 | if self.transform and random.random() < p:
43 | new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]) # transformed
44 | im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])])
45 | return im, labels
46 |
47 |
48 | def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5):
49 | # HSV color-space augmentation
50 | if hgain or sgain or vgain:
51 | r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains
52 | hue, sat, val = cv2.split(cv2.cvtColor(im, cv2.COLOR_BGR2HSV))
53 | dtype = im.dtype # uint8
54 |
55 | x = np.arange(0, 256, dtype=r.dtype)
56 | lut_hue = ((x * r[0]) % 180).astype(dtype)
57 | lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)
58 | lut_val = np.clip(x * r[2], 0, 255).astype(dtype)
59 |
60 | im_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val)))
61 | cv2.cvtColor(im_hsv, cv2.COLOR_HSV2BGR, dst=im) # no return needed
62 |
63 |
64 | def hist_equalize(im, clahe=True, bgr=False):
65 | # Equalize histogram on BGR image 'im' with im.shape(n,m,3) and range 0-255
66 | yuv = cv2.cvtColor(im, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV)
67 | if clahe:
68 | c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
69 | yuv[:, :, 0] = c.apply(yuv[:, :, 0])
70 | else:
71 | yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0]) # equalize Y channel histogram
72 | return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB) # convert YUV image to RGB
73 |
74 |
75 | def replicate(im, labels):
76 | # Replicate labels
77 | h, w = im.shape[:2]
78 | boxes = labels[:, 1:].astype(int)
79 | x1, y1, x2, y2 = boxes.T
80 | s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels)
81 | for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices
82 | x1b, y1b, x2b, y2b = boxes[i]
83 | bh, bw = y2b - y1b, x2b - x1b
84 | yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y
85 | x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh]
86 | im[y1a:y2a, x1a:x2a] = im[y1b:y2b, x1b:x2b] # im4[ymin:ymax, xmin:xmax]
87 | labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0)
88 |
89 | return im, labels
90 |
91 |
92 | def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32):
93 | # Resize and pad image while meeting stride-multiple constraints
94 | shape = im.shape[:2] # current shape [height, width]
95 | if isinstance(new_shape, int):
96 | new_shape = (new_shape, new_shape)
97 |
98 | # Scale ratio (new / old)
99 | r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
100 | if not scaleup: # only scale down, do not scale up (for better val mAP)
101 | r = min(r, 1.0)
102 |
103 | # Compute padding
104 | ratio = r, r # width, height ratios
105 | new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
106 | dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding
107 | if auto: # minimum rectangle
108 | dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding
109 | elif scaleFill: # stretch
110 | dw, dh = 0.0, 0.0
111 | new_unpad = (new_shape[1], new_shape[0])
112 | ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios
113 |
114 | dw /= 2 # divide padding into 2 sides
115 | dh /= 2
116 |
117 | if shape[::-1] != new_unpad: # resize
118 | im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
119 | top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
120 | left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
121 | im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border
122 | return im, ratio, (dw, dh)
123 |
124 |
125 | def random_perspective(im, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0,
126 | border=(0, 0)):
127 | # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10))
128 | # targets = [cls, xyxy]
129 |
130 | height = im.shape[0] + border[0] * 2 # shape(h,w,c)
131 | width = im.shape[1] + border[1] * 2
132 |
133 | # Center
134 | C = np.eye(3)
135 | C[0, 2] = -im.shape[1] / 2 # x translation (pixels)
136 | C[1, 2] = -im.shape[0] / 2 # y translation (pixels)
137 |
138 | # Perspective
139 | P = np.eye(3)
140 | P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y)
141 | P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x)
142 |
143 | # Rotation and Scale
144 | R = np.eye(3)
145 | a = random.uniform(-degrees, degrees)
146 | # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations
147 | s = random.uniform(1 - scale, 1 + scale)
148 | # s = 2 ** random.uniform(-scale, scale)
149 | R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)
150 |
151 | # Shear
152 | S = np.eye(3)
153 | S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg)
154 | S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg)
155 |
156 | # Translation
157 | T = np.eye(3)
158 | T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels)
159 | T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels)
160 |
161 | # Combined rotation matrix
162 | M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT
163 | if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed
164 | if perspective:
165 | im = cv2.warpPerspective(im, M, dsize=(width, height), borderValue=(114, 114, 114))
166 | else: # affine
167 | im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 114, 114))
168 |
169 | # Visualize
170 | # import matplotlib.pyplot as plt
171 | # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
172 | # ax[0].imshow(im[:, :, ::-1]) # base
173 | # ax[1].imshow(im2[:, :, ::-1]) # warped
174 |
175 | # Transform label coordinates
176 | n = len(targets)
177 | if n:
178 | use_segments = any(x.any() for x in segments)
179 | new = np.zeros((n, 4))
180 | if use_segments: # warp segments
181 | segments = resample_segments(segments) # upsample
182 | for i, segment in enumerate(segments):
183 | xy = np.ones((len(segment), 3))
184 | xy[:, :2] = segment
185 | xy = xy @ M.T # transform
186 | xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2] # perspective rescale or affine
187 |
188 | # clip
189 | new[i] = segment2box(xy, width, height)
190 |
191 | else: # warp boxes
192 | xy = np.ones((n * 4, 3))
193 | xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1
194 | xy = xy @ M.T # transform
195 | xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8) # perspective rescale or affine
196 |
197 | # create new boxes
198 | x = xy[:, [0, 2, 4, 6]]
199 | y = xy[:, [1, 3, 5, 7]]
200 | new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
201 |
202 | # clip
203 | new[:, [0, 2]] = new[:, [0, 2]].clip(0, width)
204 | new[:, [1, 3]] = new[:, [1, 3]].clip(0, height)
205 |
206 | # filter candidates
207 | i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10)
208 | targets = targets[i]
209 | targets[:, 1:5] = new[i]
210 |
211 | return im, targets
212 |
213 |
214 | def copy_paste(im, labels, segments, p=0.5):
215 | # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy)
216 | n = len(segments)
217 | if p and n:
218 | h, w, c = im.shape # height, width, channels
219 | im_new = np.zeros(im.shape, np.uint8)
220 | for j in random.sample(range(n), k=round(p * n)):
221 | l, s = labels[j], segments[j]
222 | box = w - l[3], l[2], w - l[1], l[4]
223 | ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
224 | if (ioa < 0.30).all(): # allow 30% obscuration of existing labels
225 | labels = np.concatenate((labels, [[l[0], *box]]), 0)
226 | segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1))
227 | cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED)
228 |
229 | result = cv2.bitwise_and(src1=im, src2=im_new)
230 | result = cv2.flip(result, 1) # augment segments (flip left-right)
231 | i = result > 0 # pixels to replace
232 | # i[:, :] = result.max(2).reshape(h, w, 1) # act over ch
233 | im[i] = result[i] # cv2.imwrite('debug.jpg', im) # debug
234 |
235 | return im, labels, segments
236 |
237 |
238 | def cutout(im, labels, p=0.5):
239 | # Applies image cutout augmentation https://arxiv.org/abs/1708.04552
240 | if random.random() < p:
241 | h, w = im.shape[:2]
242 | scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction
243 | for s in scales:
244 | mask_h = random.randint(1, int(h * s)) # create random masks
245 | mask_w = random.randint(1, int(w * s))
246 |
247 | # box
248 | xmin = max(0, random.randint(0, w) - mask_w // 2)
249 | ymin = max(0, random.randint(0, h) - mask_h // 2)
250 | xmax = min(w, xmin + mask_w)
251 | ymax = min(h, ymin + mask_h)
252 |
253 | # apply random color mask
254 | im[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)]
255 |
256 | # return unobscured labels
257 | if len(labels) and s > 0.03:
258 | box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)
259 | ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
260 | labels = labels[ioa < 0.60] # remove >60% obscured labels
261 |
262 | return labels
263 |
264 |
265 | def mixup(im, labels, im2, labels2):
266 | # Applies MixUp augmentation https://arxiv.org/pdf/1710.09412.pdf
267 | r = np.random.beta(32.0, 32.0) # mixup ratio, alpha=beta=32.0
268 | im = (im * r + im2 * (1 - r)).astype(np.uint8)
269 | labels = np.concatenate((labels, labels2), 0)
270 | return im, labels
271 |
272 |
273 | def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n)
274 | # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio
275 | w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
276 | w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
277 | ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio
278 | return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates
279 |
--------------------------------------------------------------------------------
/utils/metrics.py:
--------------------------------------------------------------------------------
1 | # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2 | """
3 | Model validation metrics
4 | """
5 |
6 | import math
7 | import warnings
8 | from pathlib import Path
9 |
10 | import matplotlib.pyplot as plt
11 | import numpy as np
12 | import torch
13 |
14 |
15 | def fitness(x):
16 | # Model fitness as a weighted combination of metrics
17 | w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
18 | return (x[:, :4] * w).sum(1)
19 |
20 |
21 | def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=()):
22 | """ Compute the average precision, given the recall and precision curves.
23 | Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
24 | # Arguments
25 | tp: True positives (nparray, nx1 or nx10).
26 | conf: Objectness value from 0-1 (nparray).
27 | pred_cls: Predicted object classes (nparray).
28 | target_cls: True object classes (nparray).
29 | plot: Plot precision-recall curve at mAP@0.5
30 | save_dir: Plot save directory
31 | # Returns
32 | The average precision as computed in py-faster-rcnn.
33 | """
34 |
35 | # Sort by objectness
36 | i = np.argsort(-conf)
37 | tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
38 |
39 | # Find unique classes
40 | unique_classes = np.unique(target_cls)
41 | nc = unique_classes.shape[0] # number of classes, number of detections
42 |
43 | # Create Precision-Recall curve and compute AP for each class
44 | px, py = np.linspace(0, 1, 1000), [] # for plotting
45 | ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))
46 | for ci, c in enumerate(unique_classes):
47 | i = pred_cls == c
48 | n_l = (target_cls == c).sum() # number of labels
49 | n_p = i.sum() # number of predictions
50 |
51 | if n_p == 0 or n_l == 0:
52 | continue
53 | else:
54 | # Accumulate FPs and TPs
55 | fpc = (1 - tp[i]).cumsum(0)
56 | tpc = tp[i].cumsum(0)
57 |
58 | # Recall
59 | recall = tpc / (n_l + 1e-16) # recall curve
60 | r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases
61 |
62 | # Precision
63 | precision = tpc / (tpc + fpc) # precision curve
64 | p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score
65 |
66 | # AP from recall-precision curve
67 | for j in range(tp.shape[1]):
68 | ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j])
69 | if plot and j == 0:
70 | py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5
71 |
72 | # Compute F1 (harmonic mean of precision and recall)
73 | f1 = 2 * p * r / (p + r + 1e-16)
74 | names = [v for k, v in names.items() if k in unique_classes] # list: only classes that have data
75 | names = {i: v for i, v in enumerate(names)} # to dict
76 | if plot:
77 | plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names)
78 | plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1')
79 | plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision')
80 | plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall')
81 |
82 | i = f1.mean(0).argmax() # max F1 index
83 | return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32')
84 |
85 |
86 | def compute_ap(recall, precision):
87 | """ Compute the average precision, given the recall and precision curves
88 | # Arguments
89 | recall: The recall curve (list)
90 | precision: The precision curve (list)
91 | # Returns
92 | Average precision, precision curve, recall curve
93 | """
94 |
95 | # Append sentinel values to beginning and end
96 | mrec = np.concatenate(([0.0], recall, [1.0]))
97 | mpre = np.concatenate(([1.0], precision, [0.0]))
98 |
99 | # Compute the precision envelope
100 | mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
101 |
102 | # Integrate area under curve
103 | method = 'interp' # methods: 'continuous', 'interp'
104 | if method == 'interp':
105 | x = np.linspace(0, 1, 101) # 101-point interp (COCO)
106 | ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate
107 | else: # 'continuous'
108 | i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes
109 | ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve
110 |
111 | return ap, mpre, mrec
112 |
113 |
114 | class ConfusionMatrix:
115 | # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix
116 | def __init__(self, nc, conf=0.25, iou_thres=0.45):
117 | self.matrix = np.zeros((nc + 1, nc + 1))
118 | self.nc = nc # number of classes
119 | self.conf = conf
120 | self.iou_thres = iou_thres
121 |
122 | def process_batch(self, detections, labels):
123 | """
124 | Return intersection-over-union (Jaccard index) of boxes.
125 | Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
126 | Arguments:
127 | detections (Array[N, 6]), x1, y1, x2, y2, conf, class
128 | labels (Array[M, 5]), class, x1, y1, x2, y2
129 | Returns:
130 | None, updates confusion matrix accordingly
131 | """
132 | detections = detections[detections[:, 4] > self.conf]
133 | gt_classes = labels[:, 0].int()
134 | detection_classes = detections[:, 5].int()
135 | iou = box_iou(labels[:, 1:], detections[:, :4])
136 |
137 | x = torch.where(iou > self.iou_thres)
138 | if x[0].shape[0]:
139 | matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
140 | if x[0].shape[0] > 1:
141 | matches = matches[matches[:, 2].argsort()[::-1]]
142 | matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
143 | matches = matches[matches[:, 2].argsort()[::-1]]
144 | matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
145 | else:
146 | matches = np.zeros((0, 3))
147 |
148 | n = matches.shape[0] > 0
149 | m0, m1, _ = matches.transpose().astype(np.int16)
150 | for i, gc in enumerate(gt_classes):
151 | j = m0 == i
152 | if n and sum(j) == 1:
153 | self.matrix[detection_classes[m1[j]], gc] += 1 # correct
154 | else:
155 | self.matrix[self.nc, gc] += 1 # background FP
156 |
157 | if n:
158 | for i, dc in enumerate(detection_classes):
159 | if not any(m1 == i):
160 | self.matrix[dc, self.nc] += 1 # background FN
161 |
162 | def matrix(self):
163 | return self.matrix
164 |
165 | def plot(self, normalize=True, save_dir='', names=()):
166 | try:
167 | import seaborn as sn
168 |
169 | array = self.matrix / ((self.matrix.sum(0).reshape(1, -1) + 1E-6) if normalize else 1) # normalize columns
170 | array[array < 0.005] = np.nan # don't annotate (would appear as 0.00)
171 |
172 | fig = plt.figure(figsize=(12, 9), tight_layout=True)
173 | sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size
174 | labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels
175 | with warnings.catch_warnings():
176 | warnings.simplefilter('ignore') # suppress empty matrix RuntimeWarning: All-NaN slice encountered
177 | sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True,
178 | xticklabels=names + ['background FP'] if labels else "auto",
179 | yticklabels=names + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1))
180 | fig.axes[0].set_xlabel('True')
181 | fig.axes[0].set_ylabel('Predicted')
182 | fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)
183 | plt.close()
184 | except Exception as e:
185 | print(f'WARNING: ConfusionMatrix plot failure: {e}')
186 |
187 | def print(self):
188 | for i in range(self.nc + 1):
189 | print(' '.join(map(str, self.matrix[i])))
190 |
191 |
192 | def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):
193 | # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4
194 | box2 = box2.T
195 |
196 | # Get the coordinates of bounding boxes
197 | if x1y1x2y2: # x1, y1, x2, y2 = box1
198 | b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
199 | b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
200 | else: # transform from xywh to xyxy
201 | b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
202 | b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
203 | b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
204 | b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
205 |
206 | # Intersection area
207 | inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
208 | (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
209 |
210 | # Union Area
211 | w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
212 | w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
213 | union = w1 * h1 + w2 * h2 - inter + eps
214 |
215 | iou = inter / union
216 | if GIoU or DIoU or CIoU:
217 | cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
218 | ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
219 | if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
220 | c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
221 | rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
222 | (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared
223 | if DIoU:
224 | return iou - rho2 / c2 # DIoU
225 | elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
226 | v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
227 | with torch.no_grad():
228 | alpha = v / (v - iou + (1 + eps))
229 | return iou - (rho2 / c2 + v * alpha) # CIoU
230 | else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
231 | c_area = cw * ch + eps # convex area
232 | return iou - (c_area - union) / c_area # GIoU
233 | else:
234 | return iou # IoU
235 |
236 |
237 | def box_iou(box1, box2):
238 | # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
239 | """
240 | Return intersection-over-union (Jaccard index) of boxes.
241 | Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
242 | Arguments:
243 | box1 (Tensor[N, 4])
244 | box2 (Tensor[M, 4])
245 | Returns:
246 | iou (Tensor[N, M]): the NxM matrix containing the pairwise
247 | IoU values for every element in boxes1 and boxes2
248 | """
249 |
250 | def box_area(box):
251 | # box = 4xn
252 | return (box[2] - box[0]) * (box[3] - box[1])
253 |
254 | area1 = box_area(box1.T)
255 | area2 = box_area(box2.T)
256 |
257 | # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
258 | inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
259 | return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter)
260 |
261 |
262 | def bbox_ioa(box1, box2, eps=1E-7):
263 | """ Returns the intersection over box2 area given box1, box2. Boxes are x1y1x2y2
264 | box1: np.array of shape(4)
265 | box2: np.array of shape(nx4)
266 | returns: np.array of shape(n)
267 | """
268 |
269 | box2 = box2.transpose()
270 |
271 | # Get the coordinates of bounding boxes
272 | b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
273 | b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
274 |
275 | # Intersection area
276 | inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \
277 | (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0)
278 |
279 | # box2 area
280 | box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + eps
281 |
282 | # Intersection over box2 area
283 | return inter_area / box2_area
284 |
285 |
286 | def wh_iou(wh1, wh2):
287 | # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2
288 | wh1 = wh1[:, None] # [N,1,2]
289 | wh2 = wh2[None] # [1,M,2]
290 | inter = torch.min(wh1, wh2).prod(2) # [N,M]
291 | return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter)
292 |
293 |
294 | # Plots ----------------------------------------------------------------------------------------------------------------
295 |
296 | def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()):
297 | # Precision-recall curve
298 | fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
299 | py = np.stack(py, axis=1)
300 |
301 | if 0 < len(names) < 21: # display per-class legend if < 21 classes
302 | for i, y in enumerate(py.T):
303 | ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision)
304 | else:
305 | ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision)
306 |
307 | ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean())
308 | ax.set_xlabel('Recall')
309 | ax.set_ylabel('Precision')
310 | ax.set_xlim(0, 1)
311 | ax.set_ylim(0, 1)
312 | plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
313 | fig.savefig(Path(save_dir), dpi=250)
314 | plt.close()
315 |
316 |
317 | def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'):
318 | # Metric-confidence curve
319 | fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
320 |
321 | if 0 < len(names) < 21: # display per-class legend if < 21 classes
322 | for i, y in enumerate(py):
323 | ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric)
324 | else:
325 | ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric)
326 |
327 | y = py.mean(0)
328 | ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}')
329 | ax.set_xlabel(xlabel)
330 | ax.set_ylabel(ylabel)
331 | ax.set_xlim(0, 1)
332 | ax.set_ylim(0, 1)
333 | plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
334 | fig.savefig(Path(save_dir), dpi=250)
335 | plt.close()
336 |
--------------------------------------------------------------------------------
/utils/torch_utils.py:
--------------------------------------------------------------------------------
1 | # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2 | """
3 | PyTorch utils
4 | """
5 |
6 | import datetime
7 | import logging
8 | import math
9 | import os
10 | import platform
11 | import subprocess
12 | import time
13 | from contextlib import contextmanager
14 | from copy import deepcopy
15 | from pathlib import Path
16 |
17 | import torch
18 | import torch.distributed as dist
19 | import torch.nn as nn
20 | import torch.nn.functional as F
21 | import torchvision
22 |
23 | try:
24 | import thop # for FLOPs computation
25 | except ImportError:
26 | thop = None
27 |
28 | LOGGER = logging.getLogger(__name__)
29 |
30 |
31 | @contextmanager
32 | def torch_distributed_zero_first(local_rank: int):
33 | """
34 | Decorator to make all processes in distributed training wait for each local_master to do something.
35 | """
36 | if local_rank not in [-1, 0]:
37 | dist.barrier(device_ids=[local_rank])
38 | yield
39 | if local_rank == 0:
40 | dist.barrier(device_ids=[0])
41 |
42 |
43 | def date_modified(path=__file__):
44 | # return human-readable file modification date, i.e. '2021-3-26'
45 | t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime)
46 | return f'{t.year}-{t.month}-{t.day}'
47 |
48 |
49 | def git_describe(path=Path(__file__).parent): # path must be a directory
50 | # return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe
51 | s = f'git -C {path} describe --tags --long --always'
52 | try:
53 | return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1]
54 | except subprocess.CalledProcessError as e:
55 | return '' # not a git repository
56 |
57 |
58 | def select_device(device='', batch_size=None):
59 | # device = 'cpu' or '0' or '0,1,2,3'
60 | s = f'YOLOv5 🚀 {git_describe() or date_modified()} torch {torch.__version__} ' # string
61 | device = str(device).strip().lower().replace('cuda:', '') # to string, 'cuda:0' to '0'
62 | cpu = device == 'cpu'
63 | if cpu:
64 | os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False
65 | elif device: # non-cpu device requested
66 | os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable
67 | assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested' # check availability
68 |
69 | cuda = not cpu and torch.cuda.is_available()
70 | if cuda:
71 | devices = device.split(',') if device else '0' # range(torch.cuda.device_count()) # i.e. 0,1,6,7
72 | n = len(devices) # device count
73 | if n > 1 and batch_size: # check batch_size is divisible by device_count
74 | assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}'
75 | space = ' ' * (len(s) + 1)
76 | for i, d in enumerate(devices):
77 | p = torch.cuda.get_device_properties(i)
78 | s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2}MB)\n" # bytes to MB
79 | else:
80 | s += 'CPU\n'
81 |
82 | LOGGER.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s) # emoji-safe
83 | return torch.device('cuda:0' if cuda else 'cpu')
84 |
85 |
86 | def time_sync():
87 | # pytorch-accurate time
88 | if torch.cuda.is_available():
89 | torch.cuda.synchronize()
90 | return time.time()
91 |
92 |
93 | def profile(input, ops, n=10, device=None):
94 | # YOLOv5 speed/memory/FLOPs profiler
95 | #
96 | # Usage:
97 | # input = torch.randn(16, 3, 640, 640)
98 | # m1 = lambda x: x * torch.sigmoid(x)
99 | # m2 = nn.SiLU()
100 | # profile(input, [m1, m2], n=100) # profile over 100 iterations
101 |
102 | results = []
103 | logging.basicConfig(format="%(message)s", level=logging.INFO)
104 | device = device or select_device()
105 | print(f"{'Params':>12s}{'GFLOPs':>12s}{'GPU_mem (GB)':>14s}{'forward (ms)':>14s}{'backward (ms)':>14s}"
106 | f"{'input':>24s}{'output':>24s}")
107 |
108 | for x in input if isinstance(input, list) else [input]:
109 | x = x.to(device)
110 | x.requires_grad = True
111 | for m in ops if isinstance(ops, list) else [ops]:
112 | m = m.to(device) if hasattr(m, 'to') else m # device
113 | m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m
114 | tf, tb, t = 0., 0., [0., 0., 0.] # dt forward, backward
115 | try:
116 | flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPs
117 | except:
118 | flops = 0
119 |
120 | try:
121 | for _ in range(n):
122 | t[0] = time_sync()
123 | y = m(x)
124 | t[1] = time_sync()
125 | try:
126 | _ = (sum([yi.sum() for yi in y]) if isinstance(y, list) else y).sum().backward()
127 | t[2] = time_sync()
128 | except Exception as e: # no backward method
129 | print(e)
130 | t[2] = float('nan')
131 | tf += (t[1] - t[0]) * 1000 / n # ms per op forward
132 | tb += (t[2] - t[1]) * 1000 / n # ms per op backward
133 | mem = torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0 # (GB)
134 | s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list'
135 | s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list'
136 | p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters
137 | print(f'{p:12}{flops:12.4g}{mem:>14.3f}{tf:14.4g}{tb:14.4g}{str(s_in):>24s}{str(s_out):>24s}')
138 | results.append([p, flops, mem, tf, tb, s_in, s_out])
139 | except Exception as e:
140 | print(e)
141 | results.append(None)
142 | torch.cuda.empty_cache()
143 | return results
144 |
145 |
146 | def is_parallel(model):
147 | # Returns True if model is of type DP or DDP
148 | return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)
149 |
150 |
151 | def de_parallel(model):
152 | # De-parallelize a model: returns single-GPU model if model is of type DP or DDP
153 | return model.module if is_parallel(model) else model
154 |
155 |
156 | def intersect_dicts(da, db, exclude=()):
157 | # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values
158 | return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape}
159 |
160 |
161 | def initialize_weights(model):
162 | for m in model.modules():
163 | t = type(m)
164 | if t is nn.Conv2d:
165 | pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
166 | elif t is nn.BatchNorm2d:
167 | m.eps = 1e-3
168 | m.momentum = 0.03
169 | elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6]:
170 | m.inplace = True
171 |
172 |
173 | def find_modules(model, mclass=nn.Conv2d):
174 | # Finds layer indices matching module class 'mclass'
175 | return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)]
176 |
177 |
178 | def sparsity(model):
179 | # Return global model sparsity
180 | a, b = 0., 0.
181 | for p in model.parameters():
182 | a += p.numel()
183 | b += (p == 0).sum()
184 | return b / a
185 |
186 |
187 | def prune(model, amount=0.3):
188 | # Prune model to requested global sparsity
189 | import torch.nn.utils.prune as prune
190 | print('Pruning model... ', end='')
191 | for name, m in model.named_modules():
192 | if isinstance(m, nn.Conv2d):
193 | prune.l1_unstructured(m, name='weight', amount=amount) # prune
194 | prune.remove(m, 'weight') # make permanent
195 | print(' %.3g global sparsity' % sparsity(model))
196 |
197 |
198 | def fuse_conv_and_bn(conv, bn):
199 | # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/
200 | fusedconv = nn.Conv2d(conv.in_channels,
201 | conv.out_channels,
202 | kernel_size=conv.kernel_size,
203 | stride=conv.stride,
204 | padding=conv.padding,
205 | groups=conv.groups,
206 | bias=True).requires_grad_(False).to(conv.weight.device)
207 |
208 | # prepare filters
209 | w_conv = conv.weight.clone().view(conv.out_channels, -1)
210 | w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
211 | fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
212 |
213 | # prepare spatial bias
214 | b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias
215 | b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))
216 | fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)
217 |
218 | return fusedconv
219 |
220 |
221 | def model_info(model, verbose=False, img_size=640):
222 | # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320]
223 | n_p = sum(x.numel() for x in model.parameters()) # number parameters
224 | n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients
225 | if verbose:
226 | print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma'))
227 | for i, (name, p) in enumerate(model.named_parameters()):
228 | name = name.replace('module_list.', '')
229 | print('%5g %40s %9s %12g %20s %10.3g %10.3g' %
230 | (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std()))
231 |
232 | try: # FLOPs
233 | from thop import profile
234 | stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32
235 | img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device) # input
236 | flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2 # stride GFLOPs
237 | img_size = img_size if isinstance(img_size, list) else [img_size, img_size] # expand if int/float
238 | fs = ', %.1f GFLOPs' % (flops * img_size[0] / stride * img_size[1] / stride) # 640x640 GFLOPs
239 | except (ImportError, Exception):
240 | fs = ''
241 |
242 | LOGGER.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}")
243 |
244 |
245 | def load_classifier(name='resnet101', n=2):
246 | # Loads a pretrained model reshaped to n-class output
247 | model = torchvision.models.__dict__[name](pretrained=True)
248 |
249 | # ResNet model properties
250 | # input_size = [3, 224, 224]
251 | # input_space = 'RGB'
252 | # input_range = [0, 1]
253 | # mean = [0.485, 0.456, 0.406]
254 | # std = [0.229, 0.224, 0.225]
255 |
256 | # Reshape output to n classes
257 | filters = model.fc.weight.shape[1]
258 | model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True)
259 | model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True)
260 | model.fc.out_features = n
261 | return model
262 |
263 |
264 | def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416)
265 | # scales img(bs,3,y,x) by ratio constrained to gs-multiple
266 | if ratio == 1.0:
267 | return img
268 | else:
269 | h, w = img.shape[2:]
270 | s = (int(h * ratio), int(w * ratio)) # new size
271 | img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize
272 | if not same_shape: # pad/crop img
273 | h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)]
274 | return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean
275 |
276 |
277 | def copy_attr(a, b, include=(), exclude=()):
278 | # Copy attributes from b to a, options to only include [...] and to exclude [...]
279 | for k, v in b.__dict__.items():
280 | if (len(include) and k not in include) or k.startswith('_') or k in exclude:
281 | continue
282 | else:
283 | setattr(a, k, v)
284 |
285 |
286 | class EarlyStopping:
287 | # YOLOv5 simple early stopper
288 | def __init__(self, patience=30):
289 | self.best_fitness = 0.0 # i.e. mAP
290 | self.best_epoch = 0
291 | self.patience = patience or float('inf') # epochs to wait after fitness stops improving to stop
292 | self.possible_stop = False # possible stop may occur next epoch
293 |
294 | def __call__(self, epoch, fitness):
295 | if fitness >= self.best_fitness: # >= 0 to allow for early zero-fitness stage of training
296 | self.best_epoch = epoch
297 | self.best_fitness = fitness
298 | delta = epoch - self.best_epoch # epochs without improvement
299 | self.possible_stop = delta >= (self.patience - 1) # possible stop may occur next epoch
300 | stop = delta >= self.patience # stop training if patience exceeded
301 | if stop:
302 | LOGGER.info(f'EarlyStopping patience {self.patience} exceeded, stopping training.')
303 | return stop
304 |
305 |
306 | class ModelEMA:
307 | """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models
308 | Keep a moving average of everything in the model state_dict (parameters and buffers).
309 | This is intended to allow functionality like
310 | https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
311 | A smoothed version of the weights is necessary for some training schemes to perform well.
312 | This class is sensitive where it is initialized in the sequence of model init,
313 | GPU assignment and distributed training wrappers.
314 | """
315 |
316 | def __init__(self, model, decay=0.9999, updates=0):
317 | # Create EMA
318 | self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA
319 | # if next(model.parameters()).device.type != 'cpu':
320 | # self.ema.half() # FP16 EMA
321 | self.updates = updates # number of EMA updates
322 | self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs)
323 | for p in self.ema.parameters():
324 | p.requires_grad_(False)
325 |
326 | def update(self, model):
327 | # Update EMA parameters
328 | with torch.no_grad():
329 | self.updates += 1
330 | d = self.decay(self.updates)
331 |
332 | msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict
333 | for k, v in self.ema.state_dict().items():
334 | if v.dtype.is_floating_point:
335 | v *= d
336 | v += (1. - d) * msd[k].detach()
337 |
338 | def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):
339 | # Update EMA attributes
340 | copy_attr(self.ema, model, include, exclude)
341 |
--------------------------------------------------------------------------------
/models/yolo.py:
--------------------------------------------------------------------------------
1 | # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2 | """
3 | YOLO-specific modules
4 |
5 | Usage:
6 | $ python path/to/models/yolo.py --cfg yolov5s.yaml
7 | """
8 |
9 | import argparse
10 | import sys
11 | from copy import deepcopy
12 | from pathlib import Path
13 |
14 | import torch
15 |
16 | FILE = Path(__file__).resolve()
17 | ROOT = FILE.parents[1] # YOLOv5 root directory
18 | if str(ROOT) not in sys.path:
19 | sys.path.append(str(ROOT)) # add ROOT to PATH
20 | # ROOT = ROOT.relative_to(Path.cwd()) # relative
21 |
22 | from models.common import *
23 | from models.experimental import *
24 | from utils.autoanchor import check_anchor_order
25 | from utils.general import check_yaml, make_divisible, print_args, set_logging
26 | from utils.plots import feature_visualization
27 | from utils.torch_utils import copy_attr, fuse_conv_and_bn, initialize_weights, model_info, scale_img, \
28 | select_device, time_sync
29 |
30 | try:
31 | import thop # for FLOPs computation
32 | except ImportError:
33 | thop = None
34 |
35 | LOGGER = logging.getLogger(__name__)
36 |
37 |
38 | class Detect(nn.Module):
39 | stride = None # strides computed during build
40 | onnx_dynamic = False # ONNX export parameter
41 |
42 | def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer
43 | super().__init__()
44 | self.nc = nc # number of classes
45 | self.no = nc + 5 # number of outputs per anchor
46 | self.nl = len(anchors) # number of detection layers
47 | self.na = len(anchors[0]) // 2 # number of anchors
48 | self.grid = [torch.zeros(1)] * self.nl # init grid
49 | self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid
50 | self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2)
51 | self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
52 | self.inplace = inplace # use in-place ops (e.g. slice assignment)
53 |
54 | def forward(self, x):
55 | z = [] # inference output
56 | logits_ = []
57 | for i in range(self.nl):
58 | x[i] = self.m[i](x[i]) # conv
59 | bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
60 | x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
61 |
62 | if not self.training: # inference
63 | if self.grid[i].shape[2:4] != x[i].shape[2:4] or self.onnx_dynamic:
64 | self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)
65 | logits = x[i][..., 5:]
66 | y = x[i].sigmoid()
67 | if self.inplace:
68 | y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
69 | y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
70 | else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
71 | xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
72 | wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
73 | y = torch.cat((xy, wh, y[..., 4:]), -1)
74 | z.append(y.view(bs, -1, self.no))
75 | logits_.append(logits.view(bs, -1, self.no - 5))
76 | return x if self.training else (torch.cat(z, 1), torch.cat(logits_, 1), x)
77 |
78 | def _make_grid(self, nx=20, ny=20, i=0):
79 | d = self.anchors[i].device
80 | yv, xv = torch.meshgrid([torch.arange(ny).to(d), torch.arange(nx).to(d)])
81 | grid = torch.stack((xv, yv), 2).expand((1, self.na, ny, nx, 2)).float()
82 | anchor_grid = (self.anchors[i].clone() * self.stride[i]) \
83 | .view((1, self.na, 1, 1, 2)).expand((1, self.na, ny, nx, 2)).float()
84 | return grid, anchor_grid
85 |
86 |
87 | class Model(nn.Module):
88 | def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes
89 | super().__init__()
90 | if isinstance(cfg, dict):
91 | self.yaml = cfg # model dict
92 | else: # is *.yaml
93 | import yaml # for torch hub
94 | self.yaml_file = Path(cfg).name
95 | with open(cfg, errors='ignore') as f:
96 | self.yaml = yaml.safe_load(f) # model dict
97 |
98 | # Define model
99 | ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels
100 | if nc and nc != self.yaml['nc']:
101 | LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
102 | self.yaml['nc'] = nc # override yaml value
103 | if anchors:
104 | LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}')
105 | self.yaml['anchors'] = round(anchors) # override yaml value
106 | self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist
107 | self.names = [str(i) for i in range(self.yaml['nc'])] # default names
108 | self.inplace = self.yaml.get('inplace', True)
109 |
110 | # Build strides, anchors
111 | m = self.model[-1] # Detect()
112 | if isinstance(m, Detect):
113 | s = 256 # 2x min stride
114 | m.inplace = self.inplace
115 | m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
116 | m.anchors /= m.stride.view(-1, 1, 1)
117 | check_anchor_order(m)
118 | self.stride = m.stride
119 | self._initialize_biases() # only run once
120 |
121 | # Init weights, biases
122 | initialize_weights(self)
123 | self.info()
124 | LOGGER.info('')
125 |
126 | def forward(self, x, augment=False, profile=False, visualize=False):
127 | if augment:
128 | return self._forward_augment(x) # augmented inference, None
129 | return self._forward_once(x, profile, visualize) # single-scale inference, train
130 |
131 | def _forward_augment(self, x):
132 | img_size = x.shape[-2:] # height, width
133 | s = [1, 0.83, 0.67] # scales
134 | f = [None, 3, None] # flips (2-ud, 3-lr)
135 | y = [] # outputs
136 | for si, fi in zip(s, f):
137 | xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))
138 | yi = self._forward_once(xi)[0] # forward
139 | # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save
140 | yi = self._descale_pred(yi, fi, si, img_size)
141 | y.append(yi)
142 | y = self._clip_augmented(y) # clip augmented tails
143 | return torch.cat(y, 1), None # augmented inference, train
144 |
145 | def _forward_once(self, x, profile=False, visualize=False):
146 | y, dt = [], [] # outputs
147 | for m in self.model:
148 | if m.f != -1: # if not from previous layer
149 | x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
150 | if profile:
151 | self._profile_one_layer(m, x, dt)
152 | x = m(x) # run
153 | y.append(x if m.i in self.save else None) # save output
154 | if visualize:
155 | feature_visualization(x, m.type, m.i, save_dir=visualize)
156 | return x
157 |
158 | def _descale_pred(self, p, flips, scale, img_size):
159 | # de-scale predictions following augmented inference (inverse operation)
160 | if self.inplace:
161 | p[..., :4] /= scale # de-scale
162 | if flips == 2:
163 | p[..., 1] = img_size[0] - p[..., 1] # de-flip ud
164 | elif flips == 3:
165 | p[..., 0] = img_size[1] - p[..., 0] # de-flip lr
166 | else:
167 | x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale
168 | if flips == 2:
169 | y = img_size[0] - y # de-flip ud
170 | elif flips == 3:
171 | x = img_size[1] - x # de-flip lr
172 | p = torch.cat((x, y, wh, p[..., 4:]), -1)
173 | return p
174 |
175 | def _clip_augmented(self, y):
176 | # Clip YOLOv5 augmented inference tails
177 | nl = self.model[-1].nl # number of detection layers (P3-P5)
178 | g = sum(4 ** x for x in range(nl)) # grid points
179 | e = 1 # exclude layer count
180 | i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices
181 | y[0] = y[0][:, :-i] # large
182 | i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices
183 | y[-1] = y[-1][:, i:] # small
184 | return y
185 |
186 | def _profile_one_layer(self, m, x, dt):
187 | c = isinstance(m, Detect) # is final layer, copy input as inplace fix
188 | o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs
189 | t = time_sync()
190 | for _ in range(10):
191 | m(x.copy() if c else x)
192 | dt.append((time_sync() - t) * 100)
193 | if m == self.model[0]:
194 | LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}")
195 | LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}')
196 | if c:
197 | LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total")
198 |
199 | def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency
200 | # https://arxiv.org/abs/1708.02002 section 3.3
201 | # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
202 | m = self.model[-1] # Detect() module
203 | for mi, s in zip(m.m, m.stride): # from
204 | b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
205 | b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
206 | b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
207 | mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
208 |
209 | def _print_biases(self):
210 | m = self.model[-1] # Detect() module
211 | for mi in m.m: # from
212 | b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85)
213 | LOGGER.info(
214 | ('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))
215 |
216 | # def _print_weights(self):
217 | # for m in self.model.modules():
218 | # if type(m) is Bottleneck:
219 | # LOGGER.info('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights
220 |
221 | def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
222 | LOGGER.info('Fusing layers... ')
223 | for m in self.model.modules():
224 | if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'):
225 | m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv
226 | delattr(m, 'bn') # remove batchnorm
227 | m.forward = m.forward_fuse # update forward
228 | self.info()
229 | return self
230 |
231 | def autoshape(self): # add AutoShape module
232 | LOGGER.info('Adding AutoShape... ')
233 | m = AutoShape(self) # wrap model
234 | copy_attr(m, self, include=('yaml', 'nc', 'hyp', 'names', 'stride'), exclude=()) # copy attributes
235 | return m
236 |
237 | def info(self, verbose=False, img_size=640): # print model information
238 | model_info(self, verbose, img_size)
239 |
240 | def _apply(self, fn):
241 | # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
242 | self = super()._apply(fn)
243 | m = self.model[-1] # Detect()
244 | if isinstance(m, Detect):
245 | m.stride = fn(m.stride)
246 | m.grid = list(map(fn, m.grid))
247 | if isinstance(m.anchor_grid, list):
248 | m.anchor_grid = list(map(fn, m.anchor_grid))
249 | return self
250 |
251 |
252 | def parse_model(d, ch): # model_dict, input_channels(3)
253 | LOGGER.info('\n%3s%18s%3s%10s %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments'))
254 | anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
255 | na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
256 | no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
257 |
258 | layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
259 | for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
260 | m = eval(m) if isinstance(m, str) else m # eval strings
261 | for j, a in enumerate(args):
262 | try:
263 | args[j] = eval(a) if isinstance(a, str) else a # eval strings
264 | except NameError:
265 | pass
266 |
267 | n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain
268 | if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
269 | BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]:
270 | c1, c2 = ch[f], args[0]
271 | if c2 != no: # if not output
272 | c2 = make_divisible(c2 * gw, 8)
273 |
274 | args = [c1, c2, *args[1:]]
275 | if m in [BottleneckCSP, C3, C3TR, C3Ghost]:
276 | args.insert(2, n) # number of repeats
277 | n = 1
278 | elif m is nn.BatchNorm2d:
279 | args = [ch[f]]
280 | elif m is Concat:
281 | c2 = sum([ch[x] for x in f])
282 | elif m is Detect:
283 | args.append([ch[x] for x in f])
284 | if isinstance(args[1], int): # number of anchors
285 | args[1] = [list(range(args[1] * 2))] * len(f)
286 | elif m is Contract:
287 | c2 = ch[f] * args[0] ** 2
288 | elif m is Expand:
289 | c2 = ch[f] // args[0] ** 2
290 | else:
291 | c2 = ch[f]
292 |
293 | m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args) # module
294 | t = str(m)[8:-2].replace('__main__.', '') # module type
295 | np = sum([x.numel() for x in m_.parameters()]) # number params
296 | m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
297 | LOGGER.info('%3s%18s%3s%10.0f %-40s%-30s' % (i, f, n_, np, t, args)) # print
298 | save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
299 | layers.append(m_)
300 | if i == 0:
301 | ch = []
302 | ch.append(c2)
303 | return nn.Sequential(*layers), sorted(save)
304 |
305 |
306 | if __name__ == '__main__':
307 | parser = argparse.ArgumentParser()
308 | parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml')
309 | parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
310 | parser.add_argument('--profile', action='store_true', help='profile model speed')
311 | opt = parser.parse_args()
312 | opt.cfg = check_yaml(opt.cfg) # check YAML
313 | print_args(FILE.stem, opt)
314 | set_logging()
315 | device = select_device(opt.device)
316 |
317 | # Create model
318 | model = Model(opt.cfg).to(device)
319 | model.train()
320 |
321 | # Profile
322 | if opt.profile:
323 | img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device)
324 | y = model(img, profile=True)
325 |
326 | # Tensorboard (not working https://github.com/ultralytics/yolov5/issues/2898)
327 | # from torch.utils.tensorboard import SummaryWriter
328 | # tb_writer = SummaryWriter('.')
329 | # LOGGER.info("Run 'tensorboard --logdir=models' to view tensorboard at http://localhost:6006/")
330 | # tb_writer.add_graph(torch.jit.trace(model, img, strict=False), []) # add model graph
331 |
--------------------------------------------------------------------------------
/models/common.py:
--------------------------------------------------------------------------------
1 | # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2 | """
3 | Common modules
4 | """
5 |
6 | import logging
7 | import math
8 | import warnings
9 | from copy import copy
10 | from pathlib import Path
11 |
12 | import numpy as np
13 | import pandas as pd
14 | import requests
15 | import torch
16 | import torch.nn as nn
17 | from PIL import Image
18 | from torch.cuda import amp
19 |
20 | from utils.datasets import exif_transpose, letterbox
21 | from utils.general import (LOGGER, check_requirements, check_suffix, check_version, colorstr, increment_path,
22 | make_divisible, non_max_suppression, scale_coords, xywh2xyxy, xyxy2xywh)
23 | from utils.plots import Annotator, colors, save_one_box
24 | from utils.torch_utils import time_sync
25 |
26 | LOGGER = logging.getLogger(__name__)
27 |
28 |
29 | def autopad(k, p=None): # kernel, padding
30 | # Pad to 'same'
31 | if p is None:
32 | p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
33 | return p
34 |
35 |
36 | class Conv(nn.Module):
37 | # Standard convolution
38 | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
39 | super().__init__()
40 | self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
41 | self.bn = nn.BatchNorm2d(c2)
42 | self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())
43 |
44 | def forward(self, x):
45 | return self.act(self.bn(self.conv(x)))
46 |
47 | def forward_fuse(self, x):
48 | return self.act(self.conv(x))
49 |
50 |
51 | class DWConv(Conv):
52 | # Depth-wise convolution class
53 | def __init__(self, c1, c2, k=1, s=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
54 | super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), act=act)
55 |
56 |
57 | class TransformerLayer(nn.Module):
58 | # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance)
59 | def __init__(self, c, num_heads):
60 | super().__init__()
61 | self.q = nn.Linear(c, c, bias=False)
62 | self.k = nn.Linear(c, c, bias=False)
63 | self.v = nn.Linear(c, c, bias=False)
64 | self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)
65 | self.fc1 = nn.Linear(c, c, bias=False)
66 | self.fc2 = nn.Linear(c, c, bias=False)
67 |
68 | def forward(self, x):
69 | x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x
70 | x = self.fc2(self.fc1(x)) + x
71 | return x
72 |
73 |
74 | class TransformerBlock(nn.Module):
75 | # Vision Transformer https://arxiv.org/abs/2010.11929
76 | def __init__(self, c1, c2, num_heads, num_layers):
77 | super().__init__()
78 | self.conv = None
79 | if c1 != c2:
80 | self.conv = Conv(c1, c2)
81 | self.linear = nn.Linear(c2, c2) # learnable position embedding
82 | self.tr = nn.Sequential(*[TransformerLayer(c2, num_heads) for _ in range(num_layers)])
83 | self.c2 = c2
84 |
85 | def forward(self, x):
86 | if self.conv is not None:
87 | x = self.conv(x)
88 | b, _, w, h = x.shape
89 | p = x.flatten(2).unsqueeze(0).transpose(0, 3).squeeze(3)
90 | return self.tr(p + self.linear(p)).unsqueeze(3).transpose(0, 3).reshape(b, self.c2, w, h)
91 |
92 |
93 | class Bottleneck(nn.Module):
94 | # Standard bottleneck
95 | def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
96 | super().__init__()
97 | c_ = int(c2 * e) # hidden channels
98 | self.cv1 = Conv(c1, c_, 1, 1)
99 | self.cv2 = Conv(c_, c2, 3, 1, g=g)
100 | self.add = shortcut and c1 == c2
101 |
102 | def forward(self, x):
103 | return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
104 |
105 |
106 | class BottleneckCSP(nn.Module):
107 | # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
108 | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
109 | super().__init__()
110 | c_ = int(c2 * e) # hidden channels
111 | self.cv1 = Conv(c1, c_, 1, 1)
112 | self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
113 | self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
114 | self.cv4 = Conv(2 * c_, c2, 1, 1)
115 | self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3)
116 | self.act = nn.LeakyReLU(0.1, inplace=True)
117 | self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
118 |
119 | def forward(self, x):
120 | y1 = self.cv3(self.m(self.cv1(x)))
121 | y2 = self.cv2(x)
122 | return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))
123 |
124 |
125 | class C3(nn.Module):
126 | # CSP Bottleneck with 3 convolutions
127 | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
128 | super().__init__()
129 | c_ = int(c2 * e) # hidden channels
130 | self.cv1 = Conv(c1, c_, 1, 1)
131 | self.cv2 = Conv(c1, c_, 1, 1)
132 | self.cv3 = Conv(2 * c_, c2, 1) # act=FReLU(c2)
133 | self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
134 | # self.m = nn.Sequential(*[CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)])
135 |
136 | def forward(self, x):
137 | return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
138 |
139 |
140 | class C3TR(C3):
141 | # C3 module with TransformerBlock()
142 | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
143 | super().__init__(c1, c2, n, shortcut, g, e)
144 | c_ = int(c2 * e)
145 | self.m = TransformerBlock(c_, c_, 4, n)
146 |
147 |
148 | class C3SPP(C3):
149 | # C3 module with SPP()
150 | def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5):
151 | super().__init__(c1, c2, n, shortcut, g, e)
152 | c_ = int(c2 * e)
153 | self.m = SPP(c_, c_, k)
154 |
155 |
156 | class C3Ghost(C3):
157 | # C3 module with GhostBottleneck()
158 | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
159 | super().__init__(c1, c2, n, shortcut, g, e)
160 | c_ = int(c2 * e) # hidden channels
161 | self.m = nn.Sequential(*[GhostBottleneck(c_, c_) for _ in range(n)])
162 |
163 |
164 | class SPP(nn.Module):
165 | # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729
166 | def __init__(self, c1, c2, k=(5, 9, 13)):
167 | super().__init__()
168 | c_ = c1 // 2 # hidden channels
169 | self.cv1 = Conv(c1, c_, 1, 1)
170 | self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
171 | self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
172 |
173 | def forward(self, x):
174 | x = self.cv1(x)
175 | with warnings.catch_warnings():
176 | warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
177 | return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
178 |
179 |
180 | class SPPF(nn.Module):
181 | # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
182 | def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13))
183 | super().__init__()
184 | c_ = c1 // 2 # hidden channels
185 | self.cv1 = Conv(c1, c_, 1, 1)
186 | self.cv2 = Conv(c_ * 4, c2, 1, 1)
187 | self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
188 |
189 | def forward(self, x):
190 | x = self.cv1(x)
191 | with warnings.catch_warnings():
192 | warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
193 | y1 = self.m(x)
194 | y2 = self.m(y1)
195 | return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
196 |
197 |
198 | class Focus(nn.Module):
199 | # Focus wh information into c-space
200 | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
201 | super().__init__()
202 | self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
203 | # self.contract = Contract(gain=2)
204 |
205 | def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2)
206 | return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
207 | # return self.conv(self.contract(x))
208 |
209 |
210 | class GhostConv(nn.Module):
211 | # Ghost Convolution https://github.com/huawei-noah/ghostnet
212 | def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups
213 | super().__init__()
214 | c_ = c2 // 2 # hidden channels
215 | self.cv1 = Conv(c1, c_, k, s, None, g, act)
216 | self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)
217 |
218 | def forward(self, x):
219 | y = self.cv1(x)
220 | return torch.cat([y, self.cv2(y)], 1)
221 |
222 |
223 | class GhostBottleneck(nn.Module):
224 | # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
225 | def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride
226 | super().__init__()
227 | c_ = c2 // 2
228 | self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw
229 | DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
230 | GhostConv(c_, c2, 1, 1, act=False)) # pw-linear
231 | self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),
232 | Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()
233 |
234 | def forward(self, x):
235 | return self.conv(x) + self.shortcut(x)
236 |
237 |
238 | class Contract(nn.Module):
239 | # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
240 | def __init__(self, gain=2):
241 | super().__init__()
242 | self.gain = gain
243 |
244 | def forward(self, x):
245 | b, c, h, w = x.size() # assert (h / s == 0) and (W / s == 0), 'Indivisible gain'
246 | s = self.gain
247 | x = x.view(b, c, h // s, s, w // s, s) # x(1,64,40,2,40,2)
248 | x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40)
249 | return x.view(b, c * s * s, h // s, w // s) # x(1,256,40,40)
250 |
251 |
252 | class Expand(nn.Module):
253 | # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160)
254 | def __init__(self, gain=2):
255 | super().__init__()
256 | self.gain = gain
257 |
258 | def forward(self, x):
259 | b, c, h, w = x.size() # assert C / s ** 2 == 0, 'Indivisible gain'
260 | s = self.gain
261 | x = x.view(b, s, s, c // s ** 2, h, w) # x(1,2,2,16,80,80)
262 | x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2)
263 | return x.view(b, c // s ** 2, h * s, w * s) # x(1,16,160,160)
264 |
265 |
266 | class Concat(nn.Module):
267 | # Concatenate a list of tensors along dimension
268 | def __init__(self, dimension=1):
269 | super().__init__()
270 | self.d = dimension
271 |
272 | def forward(self, x):
273 | return torch.cat(x, self.d)
274 |
275 |
276 | class AutoShape(nn.Module):
277 | # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
278 | conf = 0.25 # NMS confidence threshold
279 | iou = 0.45 # NMS IoU threshold
280 | classes = None # (optional list) filter by class
281 | multi_label = False # NMS multiple labels per box
282 | max_det = 1000 # maximum number of detections per image
283 |
284 | def __init__(self, model):
285 | super().__init__()
286 | self.model = model.eval()
287 |
288 | def autoshape(self):
289 | LOGGER.info('AutoShape already enabled, skipping... ') # model already converted to model.autoshape()
290 | return self
291 |
292 | def _apply(self, fn):
293 | # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
294 | self = super()._apply(fn)
295 | m = self.model.model[-1] # Detect()
296 | m.stride = fn(m.stride)
297 | m.grid = list(map(fn, m.grid))
298 | if isinstance(m.anchor_grid, list):
299 | m.anchor_grid = list(map(fn, m.anchor_grid))
300 | return self
301 |
302 | @torch.no_grad()
303 | def forward(self, imgs, size=640, augment=False, profile=False):
304 | # Inference from various sources. For height=640, width=1280, RGB images example inputs are:
305 | # file: imgs = 'data/images/zidane.jpg' # str or PosixPath
306 | # URI: = 'https://ultralytics.com/images/zidane.jpg'
307 | # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3)
308 | # PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3)
309 | # numpy: = np.zeros((640,1280,3)) # HWC
310 | # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values)
311 | # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images
312 |
313 | t = [time_sync()]
314 | p = next(self.model.parameters()) # for device and type
315 | if isinstance(imgs, torch.Tensor): # torch
316 | with amp.autocast(enabled=p.device.type != 'cpu'):
317 | return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference
318 |
319 | # Pre-process
320 | n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs]) # number of images, list of images
321 | shape0, shape1, files = [], [], [] # image and inference shapes, filenames
322 | for i, im in enumerate(imgs):
323 | f = f'image{i}' # filename
324 | if isinstance(im, (str, Path)): # filename or uri
325 | im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im
326 | im = np.asarray(exif_transpose(im))
327 | elif isinstance(im, Image.Image): # PIL Image
328 | im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f
329 | files.append(Path(f).with_suffix('.jpg').name)
330 | if im.shape[0] < 5: # image in CHW
331 | im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1)
332 | im = im[..., :3] if im.ndim == 3 else np.tile(im[..., None], 3) # enforce 3ch input
333 | s = im.shape[:2] # HWC
334 | shape0.append(s) # image shape
335 | g = (size / max(s)) # gain
336 | shape1.append([y * g for y in s])
337 | imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update
338 | shape1 = [make_divisible(x, int(self.stride.max())) for x in np.stack(shape1, 0).max(0)] # inference shape
339 | x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs] # pad
340 | x = np.stack(x, 0) if n > 1 else x[0][None] # stack
341 | x = np.ascontiguousarray(x.transpose((0, 3, 1, 2))) # BHWC to BCHW
342 | x = torch.from_numpy(x).to(p.device).type_as(p) / 255. # uint8 to fp16/32
343 | t.append(time_sync())
344 |
345 | with amp.autocast(enabled=p.device.type != 'cpu'):
346 | # Inference
347 | y = self.model(x, augment, profile)[0] # forward
348 | t.append(time_sync())
349 |
350 | # Post-process
351 | y = non_max_suppression(y, self.conf, iou_thres=self.iou, classes=self.classes,
352 | multi_label=self.multi_label, max_det=self.max_det) # NMS
353 | for i in range(n):
354 | scale_coords(shape1, y[i][:, :4], shape0[i])
355 |
356 | t.append(time_sync())
357 | return Detections(imgs, y, files, t, self.names, x.shape)
358 |
359 |
360 | class Detections:
361 | # YOLOv5 detections class for inference results
362 | def __init__(self, imgs, pred, files, times=None, names=None, shape=None):
363 | super().__init__()
364 | d = pred[0].device # device
365 | gn = [torch.tensor([*[im.shape[i] for i in [1, 0, 1, 0]], 1., 1.], device=d) for im in imgs] # normalizations
366 | self.imgs = imgs # list of images as numpy arrays
367 | self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls)
368 | self.names = names # class names
369 | self.files = files # image filenames
370 | self.xyxy = pred # xyxy pixels
371 | self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels
372 | self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized
373 | self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized
374 | self.n = len(self.pred) # number of images (batch size)
375 | self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3)) # timestamps (ms)
376 | self.s = shape # inference BCHW shape
377 |
378 | def display(self, pprint=False, show=False, save=False, crop=False, render=False, save_dir=Path('')):
379 | crops = []
380 | for i, (im, pred) in enumerate(zip(self.imgs, self.pred)):
381 | s = f'image {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # string
382 | if pred.shape[0]:
383 | for c in pred[:, -1].unique():
384 | n = (pred[:, -1] == c).sum() # detections per class
385 | s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string
386 | if show or save or render or crop:
387 | annotator = Annotator(im, example=str(self.names))
388 | for *box, conf, cls in reversed(pred): # xyxy, confidence, class
389 | label = f'{self.names[int(cls)]} {conf:.2f}'
390 | if crop:
391 | file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None
392 | crops.append({'box': box, 'conf': conf, 'cls': cls, 'label': label,
393 | 'im': save_one_box(box, im, file=file, save=save)})
394 | else: # all others
395 | annotator.box_label(box, label, color=colors(cls))
396 | im = annotator.im
397 | else:
398 | s += '(no detections)'
399 |
400 | im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im # from np
401 | if pprint:
402 | LOGGER.info(s.rstrip(', '))
403 | if show:
404 | im.show(self.files[i]) # show
405 | if save:
406 | f = self.files[i]
407 | im.save(save_dir / f) # save
408 | if i == self.n - 1:
409 | LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}")
410 | if render:
411 | self.imgs[i] = np.asarray(im)
412 | if crop:
413 | if save:
414 | LOGGER.info(f'Saved results to {save_dir}\n')
415 | return crops
416 |
417 | def print(self):
418 | self.display(pprint=True) # print results
419 | LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' %
420 | self.t)
421 |
422 | def show(self):
423 | self.display(show=True) # show results
424 |
425 | def save(self, save_dir='runs/detect/exp'):
426 | save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) # increment save_dir
427 | self.display(save=True, save_dir=save_dir) # save results
428 |
429 | def crop(self, save=True, save_dir='runs/detect/exp'):
430 | save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) if save else None
431 | return self.display(crop=True, save=save, save_dir=save_dir) # crop results
432 |
433 | def render(self):
434 | self.display(render=True) # render results
435 | return self.imgs
436 |
437 | def pandas(self):
438 | # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0])
439 | new = copy(self) # return copy
440 | ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns
441 | cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns
442 | for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]):
443 | a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update
444 | setattr(new, k, [pd.DataFrame(x, columns=c) for x in a])
445 | return new
446 |
447 | def tolist(self):
448 | # return a list of Detections objects, i.e. 'for result in results.tolist():'
449 | x = [Detections([self.imgs[i]], [self.pred[i]], self.names, self.s) for i in range(self.n)]
450 | for d in x:
451 | for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
452 | setattr(d, k, getattr(d, k)[0]) # pop out of list
453 | return x
454 |
455 | def __len__(self):
456 | return self.n
457 |
458 |
459 | class Classify(nn.Module):
460 | # Classification head, i.e. x(b,c1,20,20) to x(b,c2)
461 | def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups
462 | super().__init__()
463 | self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1)
464 | self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g) # to x(b,c2,1,1)
465 | self.flat = nn.Flatten()
466 |
467 | def forward(self, x):
468 | z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list
469 | return self.flat(self.conv(z)) # flatten to x(b,c2)
470 |
--------------------------------------------------------------------------------
/utils/plots.py:
--------------------------------------------------------------------------------
1 | # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2 | """
3 | Plotting utils
4 | """
5 |
6 | import math
7 | import os
8 | from copy import copy
9 | from pathlib import Path
10 | from urllib.error import URLError
11 |
12 | import cv2
13 | import matplotlib
14 | import matplotlib.pyplot as plt
15 | import numpy as np
16 | import pandas as pd
17 | import seaborn as sn
18 | import torch
19 | from PIL import Image, ImageDraw, ImageFont
20 |
21 | from utils.general import (CONFIG_DIR, FONT, LOGGER, Timeout, check_font, check_requirements, clip_coords,
22 | increment_path, is_ascii, threaded, try_except, xywh2xyxy, xyxy2xywh)
23 | from utils.metrics import fitness
24 |
25 | # Settings
26 | RANK = int(os.getenv('RANK', -1))
27 | matplotlib.rc('font', **{'size': 11})
28 | matplotlib.use('Agg') # for writing to files only
29 |
30 |
31 | class Colors:
32 | # Ultralytics color palette https://ultralytics.com/
33 | def __init__(self):
34 | # hex = matplotlib.colors.TABLEAU_COLORS.values()
35 | hexs = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB',
36 | '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7')
37 | self.palette = [self.hex2rgb(f'#{c}') for c in hexs]
38 | self.n = len(self.palette)
39 |
40 | def __call__(self, i, bgr=False):
41 | c = self.palette[int(i) % self.n]
42 | return (c[2], c[1], c[0]) if bgr else c
43 |
44 | @staticmethod
45 | def hex2rgb(h): # rgb order (PIL)
46 | return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4))
47 |
48 |
49 | colors = Colors() # create instance for 'from utils.plots import colors'
50 |
51 |
52 | def check_pil_font(font=FONT, size=10):
53 | # Return a PIL TrueType Font, downloading to CONFIG_DIR if necessary
54 | font = Path(font)
55 | font = font if font.exists() else (CONFIG_DIR / font.name)
56 | try:
57 | return ImageFont.truetype(str(font) if font.exists() else font.name, size)
58 | except Exception: # download if missing
59 | try:
60 | check_font(font)
61 | return ImageFont.truetype(str(font), size)
62 | except TypeError:
63 | check_requirements('Pillow>=8.4.0') # known issue https://github.com/ultralytics/yolov5/issues/5374
64 | except URLError: # not online
65 | return ImageFont.load_default()
66 |
67 |
68 | class Annotator:
69 | # YOLOv5 Annotator for train/val mosaics and jpgs and detect/hub inference annotations
70 | def __init__(self, im, line_width=None, font_size=None, font='Arial.ttf', pil=False, example='abc'):
71 | assert im.data.contiguous, 'Image not contiguous. Apply np.ascontiguousarray(im) to Annotator() input images.'
72 | non_ascii = not is_ascii(example) # non-latin labels, i.e. asian, arabic, cyrillic
73 | self.pil = pil or non_ascii
74 | if self.pil: # use PIL
75 | self.im = im if isinstance(im, Image.Image) else Image.fromarray(im)
76 | self.draw = ImageDraw.Draw(self.im)
77 | self.font = check_pil_font(font='Arial.Unicode.ttf' if non_ascii else font,
78 | size=font_size or max(round(sum(self.im.size) / 2 * 0.035), 12))
79 | else: # use cv2
80 | self.im = im
81 | self.lw = line_width or max(round(sum(im.shape) / 2 * 0.003), 2) # line width
82 |
83 | def box_label(self, box, label='', color=(128, 128, 128), txt_color=(255, 255, 255)):
84 | # Add one xyxy box to image with label
85 | if self.pil or not is_ascii(label):
86 | self.draw.rectangle(box, width=self.lw, outline=color) # box
87 | if label:
88 | w, h = self.font.getsize(label) # text width, height
89 | outside = box[1] - h >= 0 # label fits outside box
90 | self.draw.rectangle(
91 | (box[0], box[1] - h if outside else box[1], box[0] + w + 1,
92 | box[1] + 1 if outside else box[1] + h + 1),
93 | fill=color,
94 | )
95 | # self.draw.text((box[0], box[1]), label, fill=txt_color, font=self.font, anchor='ls') # for PIL>8.0
96 | self.draw.text((box[0], box[1] - h if outside else box[1]), label, fill=txt_color, font=self.font)
97 | else: # cv2
98 | p1, p2 = (int(box[0]), int(box[1])), (int(box[2]), int(box[3]))
99 | cv2.rectangle(self.im, p1, p2, color, thickness=self.lw, lineType=cv2.LINE_AA)
100 | if label:
101 | tf = max(self.lw - 1, 1) # font thickness
102 | w, h = cv2.getTextSize(label, 0, fontScale=self.lw / 3, thickness=tf)[0] # text width, height
103 | outside = p1[1] - h >= 3
104 | p2 = p1[0] + w, p1[1] - h - 3 if outside else p1[1] + h + 3
105 | cv2.rectangle(self.im, p1, p2, color, -1, cv2.LINE_AA) # filled
106 | cv2.putText(self.im,
107 | label, (p1[0], p1[1] - 2 if outside else p1[1] + h + 2),
108 | 0,
109 | self.lw / 3,
110 | txt_color,
111 | thickness=tf,
112 | lineType=cv2.LINE_AA)
113 |
114 | def rectangle(self, xy, fill=None, outline=None, width=1):
115 | # Add rectangle to image (PIL-only)
116 | self.draw.rectangle(xy, fill, outline, width)
117 |
118 | def text(self, xy, text, txt_color=(255, 255, 255)):
119 | # Add text to image (PIL-only)
120 | w, h = self.font.getsize(text) # text width, height
121 | self.draw.text((xy[0], xy[1] - h + 1), text, fill=txt_color, font=self.font)
122 |
123 | def result(self):
124 | # Return annotated image as array
125 | return np.asarray(self.im)
126 |
127 |
128 | def feature_visualization(x, module_type, stage, n=32, save_dir=Path('runs/detect/exp')):
129 | """
130 | x: Features to be visualized
131 | module_type: Module type
132 | stage: Module stage within model
133 | n: Maximum number of feature maps to plot
134 | save_dir: Directory to save results
135 | """
136 | if 'Detect' not in module_type:
137 | batch, channels, height, width = x.shape # batch, channels, height, width
138 | if height > 1 and width > 1:
139 | f = save_dir / f"stage{stage}_{module_type.split('.')[-1]}_features.png" # filename
140 |
141 | blocks = torch.chunk(x[0].cpu(), channels, dim=0) # select batch index 0, block by channels
142 | n = min(n, channels) # number of plots
143 | fig, ax = plt.subplots(math.ceil(n / 8), 8, tight_layout=True) # 8 rows x n/8 cols
144 | ax = ax.ravel()
145 | plt.subplots_adjust(wspace=0.05, hspace=0.05)
146 | for i in range(n):
147 | ax[i].imshow(blocks[i].squeeze()) # cmap='gray'
148 | ax[i].axis('off')
149 |
150 | LOGGER.info(f'Saving {f}... ({n}/{channels})')
151 | plt.savefig(f, dpi=300, bbox_inches='tight')
152 | plt.close()
153 | np.save(str(f.with_suffix('.npy')), x[0].cpu().numpy()) # npy save
154 |
155 |
156 | def hist2d(x, y, n=100):
157 | # 2d histogram used in labels.png and evolve.png
158 | xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n)
159 | hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges))
160 | xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1)
161 | yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1)
162 | return np.log(hist[xidx, yidx])
163 |
164 |
165 | def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5):
166 | from scipy.signal import butter, filtfilt
167 |
168 | # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy
169 | def butter_lowpass(cutoff, fs, order):
170 | nyq = 0.5 * fs
171 | normal_cutoff = cutoff / nyq
172 | return butter(order, normal_cutoff, btype='low', analog=False)
173 |
174 | b, a = butter_lowpass(cutoff, fs, order=order)
175 | return filtfilt(b, a, data) # forward-backward filter
176 |
177 |
178 | def output_to_target(output):
179 | # Convert model output to target format [batch_id, class_id, x, y, w, h, conf]
180 | targets = []
181 | for i, o in enumerate(output):
182 | for *box, conf, cls in o.cpu().numpy():
183 | targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf])
184 | return np.array(targets)
185 |
186 |
187 | @threaded
188 | def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=1920, max_subplots=16):
189 | # Plot image grid with labels
190 | if isinstance(images, torch.Tensor):
191 | images = images.cpu().float().numpy()
192 | if isinstance(targets, torch.Tensor):
193 | targets = targets.cpu().numpy()
194 | if np.max(images[0]) <= 1:
195 | images *= 255 # de-normalise (optional)
196 | bs, _, h, w = images.shape # batch size, _, height, width
197 | bs = min(bs, max_subplots) # limit plot images
198 | ns = np.ceil(bs ** 0.5) # number of subplots (square)
199 |
200 | # Build Image
201 | mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init
202 | for i, im in enumerate(images):
203 | if i == max_subplots: # if last batch has fewer images than we expect
204 | break
205 | x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin
206 | im = im.transpose(1, 2, 0)
207 | mosaic[y:y + h, x:x + w, :] = im
208 |
209 | # Resize (optional)
210 | scale = max_size / ns / max(h, w)
211 | if scale < 1:
212 | h = math.ceil(scale * h)
213 | w = math.ceil(scale * w)
214 | mosaic = cv2.resize(mosaic, tuple(int(x * ns) for x in (w, h)))
215 |
216 | # Annotate
217 | fs = int((h + w) * ns * 0.01) # font size
218 | annotator = Annotator(mosaic, line_width=round(fs / 10), font_size=fs, pil=True, example=names)
219 | for i in range(i + 1):
220 | x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin
221 | annotator.rectangle([x, y, x + w, y + h], None, (255, 255, 255), width=2) # borders
222 | if paths:
223 | annotator.text((x + 5, y + 5 + h), text=Path(paths[i]).name[:40], txt_color=(220, 220, 220)) # filenames
224 | if len(targets) > 0:
225 | ti = targets[targets[:, 0] == i] # image targets
226 | boxes = xywh2xyxy(ti[:, 2:6]).T
227 | classes = ti[:, 1].astype('int')
228 | labels = ti.shape[1] == 6 # labels if no conf column
229 | conf = None if labels else ti[:, 6] # check for confidence presence (label vs pred)
230 |
231 | if boxes.shape[1]:
232 | if boxes.max() <= 1.01: # if normalized with tolerance 0.01
233 | boxes[[0, 2]] *= w # scale to pixels
234 | boxes[[1, 3]] *= h
235 | elif scale < 1: # absolute coords need scale if image scales
236 | boxes *= scale
237 | boxes[[0, 2]] += x
238 | boxes[[1, 3]] += y
239 | for j, box in enumerate(boxes.T.tolist()):
240 | cls = classes[j]
241 | color = colors(cls)
242 | cls = names[cls] if names else cls
243 | if labels or conf[j] > 0.25: # 0.25 conf thresh
244 | label = f'{cls}' if labels else f'{cls} {conf[j]:.1f}'
245 | annotator.box_label(box, label, color=color)
246 | annotator.im.save(fname) # save
247 |
248 |
249 | def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''):
250 | # Plot LR simulating training for full epochs
251 | optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals
252 | y = []
253 | for _ in range(epochs):
254 | scheduler.step()
255 | y.append(optimizer.param_groups[0]['lr'])
256 | plt.plot(y, '.-', label='LR')
257 | plt.xlabel('epoch')
258 | plt.ylabel('LR')
259 | plt.grid()
260 | plt.xlim(0, epochs)
261 | plt.ylim(0)
262 | plt.savefig(Path(save_dir) / 'LR.png', dpi=200)
263 | plt.close()
264 |
265 |
266 | def plot_val_txt(): # from utils.plots import *; plot_val()
267 | # Plot val.txt histograms
268 | x = np.loadtxt('val.txt', dtype=np.float32)
269 | box = xyxy2xywh(x[:, :4])
270 | cx, cy = box[:, 0], box[:, 1]
271 |
272 | fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True)
273 | ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0)
274 | ax.set_aspect('equal')
275 | plt.savefig('hist2d.png', dpi=300)
276 |
277 | fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True)
278 | ax[0].hist(cx, bins=600)
279 | ax[1].hist(cy, bins=600)
280 | plt.savefig('hist1d.png', dpi=200)
281 |
282 |
283 | def plot_targets_txt(): # from utils.plots import *; plot_targets_txt()
284 | # Plot targets.txt histograms
285 | x = np.loadtxt('targets.txt', dtype=np.float32).T
286 | s = ['x targets', 'y targets', 'width targets', 'height targets']
287 | fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)
288 | ax = ax.ravel()
289 | for i in range(4):
290 | ax[i].hist(x[i], bins=100, label=f'{x[i].mean():.3g} +/- {x[i].std():.3g}')
291 | ax[i].legend()
292 | ax[i].set_title(s[i])
293 | plt.savefig('targets.jpg', dpi=200)
294 |
295 |
296 | def plot_val_study(file='', dir='', x=None): # from utils.plots import *; plot_val_study()
297 | # Plot file=study.txt generated by val.py (or plot all study*.txt in dir)
298 | save_dir = Path(file).parent if file else Path(dir)
299 | plot2 = False # plot additional results
300 | if plot2:
301 | ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True)[1].ravel()
302 |
303 | fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True)
304 | # for f in [save_dir / f'study_coco_{x}.txt' for x in ['yolov5n6', 'yolov5s6', 'yolov5m6', 'yolov5l6', 'yolov5x6']]:
305 | for f in sorted(save_dir.glob('study*.txt')):
306 | y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T
307 | x = np.arange(y.shape[1]) if x is None else np.array(x)
308 | if plot2:
309 | s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_preprocess (ms/img)', 't_inference (ms/img)', 't_NMS (ms/img)']
310 | for i in range(7):
311 | ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8)
312 | ax[i].set_title(s[i])
313 |
314 | j = y[3].argmax() + 1
315 | ax2.plot(y[5, 1:j],
316 | y[3, 1:j] * 1E2,
317 | '.-',
318 | linewidth=2,
319 | markersize=8,
320 | label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO'))
321 |
322 | ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5],
323 | 'k.-',
324 | linewidth=2,
325 | markersize=8,
326 | alpha=.25,
327 | label='EfficientDet')
328 |
329 | ax2.grid(alpha=0.2)
330 | ax2.set_yticks(np.arange(20, 60, 5))
331 | ax2.set_xlim(0, 57)
332 | ax2.set_ylim(25, 55)
333 | ax2.set_xlabel('GPU Speed (ms/img)')
334 | ax2.set_ylabel('COCO AP val')
335 | ax2.legend(loc='lower right')
336 | f = save_dir / 'study.png'
337 | print(f'Saving {f}...')
338 | plt.savefig(f, dpi=300)
339 |
340 |
341 | @try_except # known issue https://github.com/ultralytics/yolov5/issues/5395
342 | @Timeout(30) # known issue https://github.com/ultralytics/yolov5/issues/5611
343 | def plot_labels(labels, names=(), save_dir=Path('')):
344 | # plot dataset labels
345 | LOGGER.info(f"Plotting labels to {save_dir / 'labels.jpg'}... ")
346 | c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes
347 | nc = int(c.max() + 1) # number of classes
348 | x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height'])
349 |
350 | # seaborn correlogram
351 | sn.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9))
352 | plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200)
353 | plt.close()
354 |
355 | # matplotlib labels
356 | matplotlib.use('svg') # faster
357 | ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel()
358 | y = ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8)
359 | try: # color histogram bars by class
360 | [y[2].patches[i].set_color([x / 255 for x in colors(i)]) for i in range(nc)] # known issue #3195
361 | except Exception:
362 | pass
363 | ax[0].set_ylabel('instances')
364 | if 0 < len(names) < 30:
365 | ax[0].set_xticks(range(len(names)))
366 | ax[0].set_xticklabels(names, rotation=90, fontsize=10)
367 | else:
368 | ax[0].set_xlabel('classes')
369 | sn.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9)
370 | sn.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9)
371 |
372 | # rectangles
373 | labels[:, 1:3] = 0.5 # center
374 | labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000
375 | img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255)
376 | for cls, *box in labels[:1000]:
377 | ImageDraw.Draw(img).rectangle(box, width=1, outline=colors(cls)) # plot
378 | ax[1].imshow(img)
379 | ax[1].axis('off')
380 |
381 | for a in [0, 1, 2, 3]:
382 | for s in ['top', 'right', 'left', 'bottom']:
383 | ax[a].spines[s].set_visible(False)
384 |
385 | plt.savefig(save_dir / 'labels.jpg', dpi=200)
386 | matplotlib.use('Agg')
387 | plt.close()
388 |
389 |
390 | def plot_evolve(evolve_csv='path/to/evolve.csv'): # from utils.plots import *; plot_evolve()
391 | # Plot evolve.csv hyp evolution results
392 | evolve_csv = Path(evolve_csv)
393 | data = pd.read_csv(evolve_csv)
394 | keys = [x.strip() for x in data.columns]
395 | x = data.values
396 | f = fitness(x)
397 | j = np.argmax(f) # max fitness index
398 | plt.figure(figsize=(10, 12), tight_layout=True)
399 | matplotlib.rc('font', **{'size': 8})
400 | print(f'Best results from row {j} of {evolve_csv}:')
401 | for i, k in enumerate(keys[7:]):
402 | v = x[:, 7 + i]
403 | mu = v[j] # best single result
404 | plt.subplot(6, 5, i + 1)
405 | plt.scatter(v, f, c=hist2d(v, f, 20), cmap='viridis', alpha=.8, edgecolors='none')
406 | plt.plot(mu, f.max(), 'k+', markersize=15)
407 | plt.title(f'{k} = {mu:.3g}', fontdict={'size': 9}) # limit to 40 characters
408 | if i % 5 != 0:
409 | plt.yticks([])
410 | print(f'{k:>15}: {mu:.3g}')
411 | f = evolve_csv.with_suffix('.png') # filename
412 | plt.savefig(f, dpi=200)
413 | plt.close()
414 | print(f'Saved {f}')
415 |
416 |
417 | def plot_results(file='path/to/results.csv', dir=''):
418 | # Plot training results.csv. Usage: from utils.plots import *; plot_results('path/to/results.csv')
419 | save_dir = Path(file).parent if file else Path(dir)
420 | fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True)
421 | ax = ax.ravel()
422 | files = list(save_dir.glob('results*.csv'))
423 | assert len(files), f'No results.csv files found in {save_dir.resolve()}, nothing to plot.'
424 | for f in files:
425 | try:
426 | data = pd.read_csv(f)
427 | s = [x.strip() for x in data.columns]
428 | x = data.values[:, 0]
429 | for i, j in enumerate([1, 2, 3, 4, 5, 8, 9, 10, 6, 7]):
430 | y = data.values[:, j].astype('float')
431 | # y[y == 0] = np.nan # don't show zero values
432 | ax[i].plot(x, y, marker='.', label=f.stem, linewidth=2, markersize=8)
433 | ax[i].set_title(s[j], fontsize=12)
434 | # if j in [8, 9, 10]: # share train and val loss y axes
435 | # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5])
436 | except Exception as e:
437 | LOGGER.info(f'Warning: Plotting error for {f}: {e}')
438 | ax[1].legend()
439 | fig.savefig(save_dir / 'results.png', dpi=200)
440 | plt.close()
441 |
442 |
443 | def profile_idetection(start=0, stop=0, labels=(), save_dir=''):
444 | # Plot iDetection '*.txt' per-image logs. from utils.plots import *; profile_idetection()
445 | ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel()
446 | s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS']
447 | files = list(Path(save_dir).glob('frames*.txt'))
448 | for fi, f in enumerate(files):
449 | try:
450 | results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows
451 | n = results.shape[1] # number of rows
452 | x = np.arange(start, min(stop, n) if stop else n)
453 | results = results[:, x]
454 | t = (results[0] - results[0].min()) # set t0=0s
455 | results[0] = x
456 | for i, a in enumerate(ax):
457 | if i < len(results):
458 | label = labels[fi] if len(labels) else f.stem.replace('frames_', '')
459 | a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5)
460 | a.set_title(s[i])
461 | a.set_xlabel('time (s)')
462 | # if fi == len(files) - 1:
463 | # a.set_ylim(bottom=0)
464 | for side in ['top', 'right']:
465 | a.spines[side].set_visible(False)
466 | else:
467 | a.remove()
468 | except Exception as e:
469 | print(f'Warning: Plotting error for {f}; {e}')
470 | ax[1].legend()
471 | plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200)
472 |
473 |
474 | def save_one_box(xyxy, im, file=Path('im.jpg'), gain=1.02, pad=10, square=False, BGR=False, save=True):
475 | # Save image crop as {file} with crop size multiple {gain} and {pad} pixels. Save and/or return crop
476 | xyxy = torch.tensor(xyxy).view(-1, 4)
477 | b = xyxy2xywh(xyxy) # boxes
478 | if square:
479 | b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # attempt rectangle to square
480 | b[:, 2:] = b[:, 2:] * gain + pad # box wh * gain + pad
481 | xyxy = xywh2xyxy(b).long()
482 | clip_coords(xyxy, im.shape)
483 | crop = im[int(xyxy[0, 1]):int(xyxy[0, 3]), int(xyxy[0, 0]):int(xyxy[0, 2]), ::(1 if BGR else -1)]
484 | if save:
485 | file.parent.mkdir(parents=True, exist_ok=True) # make directory
486 | f = str(increment_path(file).with_suffix('.jpg'))
487 | # cv2.imwrite(f, crop) # https://github.com/ultralytics/yolov5/issues/7007 chroma subsampling issue
488 | Image.fromarray(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)).save(f, quality=95, subsampling=0)
489 | return crop
490 |
--------------------------------------------------------------------------------
/utils/general.py:
--------------------------------------------------------------------------------
1 | # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2 | """
3 | General utils
4 | """
5 |
6 | import contextlib
7 | import glob
8 | import inspect
9 | import logging
10 | import math
11 | import os
12 | import platform
13 | import random
14 | import re
15 | import shutil
16 | import signal
17 | import threading
18 | import time
19 | import urllib
20 | from datetime import datetime
21 | from itertools import repeat
22 | from multiprocessing.pool import ThreadPool
23 | from pathlib import Path
24 | from subprocess import check_output
25 | from typing import Optional
26 | from zipfile import ZipFile
27 |
28 | import cv2
29 | import numpy as np
30 | import pandas as pd
31 | import pkg_resources as pkg
32 | import torch
33 | import torchvision
34 | import yaml
35 |
36 | from utils.downloads import gsutil_getsize
37 | from utils.metrics import box_iou, fitness
38 |
39 | FILE = Path(__file__).resolve()
40 | ROOT = FILE.parents[1] # YOLOv5 root directory
41 | RANK = int(os.getenv('RANK', -1))
42 |
43 | # Settings
44 | DATASETS_DIR = ROOT.parent / 'datasets' # YOLOv5 datasets directory
45 | NUM_THREADS = min(8, max(1, os.cpu_count() - 1)) # number of YOLOv5 multiprocessing threads
46 | AUTOINSTALL = str(os.getenv('YOLOv5_AUTOINSTALL', True)).lower() == 'true' # global auto-install mode
47 | VERBOSE = str(os.getenv('YOLOv5_VERBOSE', True)).lower() == 'true' # global verbose mode
48 | FONT = 'Arial.ttf' # https://ultralytics.com/assets/Arial.ttf
49 |
50 | torch.set_printoptions(linewidth=320, precision=5, profile='long')
51 | np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5
52 | pd.options.display.max_columns = 10
53 | cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
54 | os.environ['NUMEXPR_MAX_THREADS'] = str(NUM_THREADS) # NumExpr max threads
55 | os.environ['OMP_NUM_THREADS'] = str(NUM_THREADS) # OpenMP max threads (PyTorch and SciPy)
56 |
57 |
58 | def is_kaggle():
59 | # Is environment a Kaggle Notebook?
60 | try:
61 | assert os.environ.get('PWD') == '/kaggle/working'
62 | assert os.environ.get('KAGGLE_URL_BASE') == 'https://www.kaggle.com'
63 | return True
64 | except AssertionError:
65 | return False
66 |
67 |
68 | def is_writeable(dir, test=False):
69 | # Return True if directory has write permissions, test opening a file with write permissions if test=True
70 | if not test:
71 | return os.access(dir, os.R_OK) # possible issues on Windows
72 | file = Path(dir) / 'tmp.txt'
73 | try:
74 | with open(file, 'w'): # open file with write permissions
75 | pass
76 | file.unlink() # remove file
77 | return True
78 | except OSError:
79 | return False
80 |
81 |
82 | def set_logging(name=None, verbose=VERBOSE):
83 | # Sets level and returns logger
84 | if is_kaggle():
85 | for h in logging.root.handlers:
86 | logging.root.removeHandler(h) # remove all handlers associated with the root logger object
87 | rank = int(os.getenv('RANK', -1)) # rank in world for Multi-GPU trainings
88 | level = logging.INFO if verbose and rank in {-1, 0} else logging.WARNING
89 | log = logging.getLogger(name)
90 | log.setLevel(level)
91 | handler = logging.StreamHandler()
92 | handler.setFormatter(logging.Formatter("%(message)s"))
93 | handler.setLevel(level)
94 | log.addHandler(handler)
95 |
96 |
97 | set_logging() # run before defining LOGGER
98 | LOGGER = logging.getLogger("yolov5") # define globally (used in train.py, val.py, detect.py, etc.)
99 |
100 |
101 | def user_config_dir(dir='Ultralytics', env_var='YOLOV5_CONFIG_DIR'):
102 | # Return path of user configuration directory. Prefer environment variable if exists. Make dir if required.
103 | env = os.getenv(env_var)
104 | if env:
105 | path = Path(env) # use environment variable
106 | else:
107 | cfg = {'Windows': 'AppData/Roaming', 'Linux': '.config', 'Darwin': 'Library/Application Support'} # 3 OS dirs
108 | path = Path.home() / cfg.get(platform.system(), '') # OS-specific config dir
109 | path = (path if is_writeable(path) else Path('/tmp')) / dir # GCP and AWS lambda fix, only /tmp is writeable
110 | path.mkdir(exist_ok=True) # make if required
111 | return path
112 |
113 |
114 | CONFIG_DIR = user_config_dir() # Ultralytics settings dir
115 |
116 |
117 | class Profile(contextlib.ContextDecorator):
118 | # Usage: @Profile() decorator or 'with Profile():' context manager
119 | def __enter__(self):
120 | self.start = time.time()
121 |
122 | def __exit__(self, type, value, traceback):
123 | print(f'Profile results: {time.time() - self.start:.5f}s')
124 |
125 |
126 | class Timeout(contextlib.ContextDecorator):
127 | # Usage: @Timeout(seconds) decorator or 'with Timeout(seconds):' context manager
128 | def __init__(self, seconds, *, timeout_msg='', suppress_timeout_errors=True):
129 | self.seconds = int(seconds)
130 | self.timeout_message = timeout_msg
131 | self.suppress = bool(suppress_timeout_errors)
132 |
133 | def _timeout_handler(self, signum, frame):
134 | raise TimeoutError(self.timeout_message)
135 |
136 | def __enter__(self):
137 | if platform.system() != 'Windows': # not supported on Windows
138 | signal.signal(signal.SIGALRM, self._timeout_handler) # Set handler for SIGALRM
139 | signal.alarm(self.seconds) # start countdown for SIGALRM to be raised
140 |
141 | def __exit__(self, exc_type, exc_val, exc_tb):
142 | if platform.system() != 'Windows':
143 | signal.alarm(0) # Cancel SIGALRM if it's scheduled
144 | if self.suppress and exc_type is TimeoutError: # Suppress TimeoutError
145 | return True
146 |
147 |
148 | class WorkingDirectory(contextlib.ContextDecorator):
149 | # Usage: @WorkingDirectory(dir) decorator or 'with WorkingDirectory(dir):' context manager
150 | def __init__(self, new_dir):
151 | self.dir = new_dir # new dir
152 | self.cwd = Path.cwd().resolve() # current dir
153 |
154 | def __enter__(self):
155 | os.chdir(self.dir)
156 |
157 | def __exit__(self, exc_type, exc_val, exc_tb):
158 | os.chdir(self.cwd)
159 |
160 |
161 | def try_except(func):
162 | # try-except function. Usage: @try_except decorator
163 | def handler(*args, **kwargs):
164 | try:
165 | func(*args, **kwargs)
166 | except Exception as e:
167 | print(e)
168 |
169 | return handler
170 |
171 |
172 | def threaded(func):
173 | # Multi-threads a target function and returns thread. Usage: @threaded decorator
174 | def wrapper(*args, **kwargs):
175 | thread = threading.Thread(target=func, args=args, kwargs=kwargs, daemon=True)
176 | thread.start()
177 | return thread
178 |
179 | return wrapper
180 |
181 |
182 | def methods(instance):
183 | # Get class/instance methods
184 | return [f for f in dir(instance) if callable(getattr(instance, f)) and not f.startswith("__")]
185 |
186 |
187 | def print_args(args: Optional[dict] = None, show_file=True, show_fcn=False):
188 | # Print function arguments (optional args dict)
189 | x = inspect.currentframe().f_back # previous frame
190 | file, _, fcn, _, _ = inspect.getframeinfo(x)
191 | if args is None: # get args automatically
192 | args, _, _, frm = inspect.getargvalues(x)
193 | args = {k: v for k, v in frm.items() if k in args}
194 | s = (f'{Path(file).stem}: ' if show_file else '') + (f'{fcn}: ' if show_fcn else '')
195 | LOGGER.info(colorstr(s) + ', '.join(f'{k}={v}' for k, v in args.items()))
196 |
197 |
198 | def init_seeds(seed=0):
199 | # Initialize random number generator (RNG) seeds https://pytorch.org/docs/stable/notes/randomness.html
200 | # cudnn seed 0 settings are slower and more reproducible, else faster and less reproducible
201 | import torch.backends.cudnn as cudnn
202 | random.seed(seed)
203 | np.random.seed(seed)
204 | torch.manual_seed(seed)
205 | cudnn.benchmark, cudnn.deterministic = (False, True) if seed == 0 else (True, False)
206 |
207 |
208 | def intersect_dicts(da, db, exclude=()):
209 | # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values
210 | return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape}
211 |
212 |
213 | def get_latest_run(search_dir='.'):
214 | # Return path to most recent 'last.pt' in /runs (i.e. to --resume from)
215 | last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True)
216 | return max(last_list, key=os.path.getctime) if last_list else ''
217 |
218 |
219 | def is_docker():
220 | # Is environment a Docker container?
221 | return Path('/workspace').exists() # or Path('/.dockerenv').exists()
222 |
223 |
224 | def is_colab():
225 | # Is environment a Google Colab instance?
226 | try:
227 | import google.colab
228 | return True
229 | except ImportError:
230 | return False
231 |
232 |
233 | def is_pip():
234 | # Is file in a pip package?
235 | return 'site-packages' in Path(__file__).resolve().parts
236 |
237 |
238 | def is_ascii(s=''):
239 | # Is string composed of all ASCII (no UTF) characters? (note str().isascii() introduced in python 3.7)
240 | s = str(s) # convert list, tuple, None, etc. to str
241 | return len(s.encode().decode('ascii', 'ignore')) == len(s)
242 |
243 |
244 | def is_chinese(s='人工智能'):
245 | # Is string composed of any Chinese characters?
246 | return bool(re.search('[\u4e00-\u9fff]', str(s)))
247 |
248 |
249 | def emojis(str=''):
250 | # Return platform-dependent emoji-safe version of string
251 | return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str
252 |
253 |
254 | def file_age(path=__file__):
255 | # Return days since last file update
256 | dt = (datetime.now() - datetime.fromtimestamp(Path(path).stat().st_mtime)) # delta
257 | return dt.days # + dt.seconds / 86400 # fractional days
258 |
259 |
260 | def file_date(path=__file__):
261 | # Return human-readable file modification date, i.e. '2021-3-26'
262 | t = datetime.fromtimestamp(Path(path).stat().st_mtime)
263 | return f'{t.year}-{t.month}-{t.day}'
264 |
265 |
266 | def file_size(path):
267 | # Return file/dir size (MB)
268 | mb = 1 << 20 # bytes to MiB (1024 ** 2)
269 | path = Path(path)
270 | if path.is_file():
271 | return path.stat().st_size / mb
272 | elif path.is_dir():
273 | return sum(f.stat().st_size for f in path.glob('**/*') if f.is_file()) / mb
274 | else:
275 | return 0.0
276 |
277 |
278 | def check_online():
279 | # Check internet connectivity
280 | import socket
281 | try:
282 | socket.create_connection(("1.1.1.1", 443), 5) # check host accessibility
283 | return True
284 | except OSError:
285 | return False
286 |
287 |
288 | def git_describe(path=ROOT): # path must be a directory
289 | # Return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe
290 | try:
291 | assert (Path(path) / '.git').is_dir()
292 | return check_output(f'git -C {path} describe --tags --long --always', shell=True).decode()[:-1]
293 | except Exception:
294 | return ''
295 |
296 |
297 | @try_except
298 | @WorkingDirectory(ROOT)
299 | def check_git_status():
300 | # Recommend 'git pull' if code is out of date
301 | msg = ', for updates see https://github.com/ultralytics/yolov5'
302 | s = colorstr('github: ') # string
303 | assert Path('.git').exists(), s + 'skipping check (not a git repository)' + msg
304 | assert not is_docker(), s + 'skipping check (Docker image)' + msg
305 | assert check_online(), s + 'skipping check (offline)' + msg
306 |
307 | cmd = 'git fetch && git config --get remote.origin.url'
308 | url = check_output(cmd, shell=True, timeout=5).decode().strip().rstrip('.git') # git fetch
309 | branch = check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out
310 | n = int(check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind
311 | if n > 0:
312 | s += f"⚠️ YOLOv5 is out of date by {n} commit{'s' * (n > 1)}. Use `git pull` or `git clone {url}` to update."
313 | else:
314 | s += f'up to date with {url} ✅'
315 | LOGGER.info(emojis(s)) # emoji-safe
316 |
317 |
318 | def check_python(minimum='3.7.0'):
319 | # Check current python version vs. required python version
320 | check_version(platform.python_version(), minimum, name='Python ', hard=True)
321 |
322 |
323 | def check_version(current='0.0.0', minimum='0.0.0', name='version ', pinned=False, hard=False, verbose=False):
324 | # Check version vs. required version
325 | current, minimum = (pkg.parse_version(x) for x in (current, minimum))
326 | result = (current == minimum) if pinned else (current >= minimum) # bool
327 | s = f'{name}{minimum} required by YOLOv5, but {name}{current} is currently installed' # string
328 | if hard:
329 | assert result, s # assert min requirements met
330 | if verbose and not result:
331 | LOGGER.warning(s)
332 | return result
333 |
334 |
335 | @try_except
336 | def check_requirements(requirements=ROOT / 'requirements.txt', exclude=(), install=True, cmds=()):
337 | # Check installed dependencies meet requirements (pass *.txt file or list of packages)
338 | prefix = colorstr('red', 'bold', 'requirements:')
339 | check_python() # check python version
340 | if isinstance(requirements, (str, Path)): # requirements.txt file
341 | file = Path(requirements)
342 | assert file.exists(), f"{prefix} {file.resolve()} not found, check failed."
343 | with file.open() as f:
344 | requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(f) if x.name not in exclude]
345 | else: # list or tuple of packages
346 | requirements = [x for x in requirements if x not in exclude]
347 |
348 | n = 0 # number of packages updates
349 | for i, r in enumerate(requirements):
350 | try:
351 | pkg.require(r)
352 | except Exception: # DistributionNotFound or VersionConflict if requirements not met
353 | s = f"{prefix} {r} not found and is required by YOLOv5"
354 | if install and AUTOINSTALL: # check environment variable
355 | LOGGER.info(f"{s}, attempting auto-update...")
356 | try:
357 | assert check_online(), f"'pip install {r}' skipped (offline)"
358 | LOGGER.info(check_output(f'pip install "{r}" {cmds[i] if cmds else ""}', shell=True).decode())
359 | n += 1
360 | except Exception as e:
361 | LOGGER.warning(f'{prefix} {e}')
362 | else:
363 | LOGGER.info(f'{s}. Please install and rerun your command.')
364 |
365 | if n: # if packages updated
366 | source = file.resolve() if 'file' in locals() else requirements
367 | s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \
368 | f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n"
369 | LOGGER.info(emojis(s))
370 |
371 |
372 | def check_img_size(imgsz, s=32, floor=0):
373 | # Verify image size is a multiple of stride s in each dimension
374 | if isinstance(imgsz, int): # integer i.e. img_size=640
375 | new_size = max(make_divisible(imgsz, int(s)), floor)
376 | else: # list i.e. img_size=[640, 480]
377 | imgsz = list(imgsz) # convert to list if tuple
378 | new_size = [max(make_divisible(x, int(s)), floor) for x in imgsz]
379 | if new_size != imgsz:
380 | LOGGER.warning(f'WARNING: --img-size {imgsz} must be multiple of max stride {s}, updating to {new_size}')
381 | return new_size
382 |
383 |
384 | def check_imshow():
385 | # Check if environment supports image displays
386 | try:
387 | assert not is_docker(), 'cv2.imshow() is disabled in Docker environments'
388 | assert not is_colab(), 'cv2.imshow() is disabled in Google Colab environments'
389 | cv2.imshow('test', np.zeros((1, 1, 3)))
390 | cv2.waitKey(1)
391 | cv2.destroyAllWindows()
392 | cv2.waitKey(1)
393 | return True
394 | except Exception as e:
395 | LOGGER.warning(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}')
396 | return False
397 |
398 |
399 | def check_suffix(file='yolov5s.pt', suffix=('.pt',), msg=''):
400 | # Check file(s) for acceptable suffix
401 | if file and suffix:
402 | if isinstance(suffix, str):
403 | suffix = [suffix]
404 | for f in file if isinstance(file, (list, tuple)) else [file]:
405 | s = Path(f).suffix.lower() # file suffix
406 | if len(s):
407 | assert s in suffix, f"{msg}{f} acceptable suffix is {suffix}"
408 |
409 |
410 | def check_yaml(file, suffix=('.yaml', '.yml')):
411 | # Search/download YAML file (if necessary) and return path, checking suffix
412 | return check_file(file, suffix)
413 |
414 |
415 | def check_file(file, suffix=''):
416 | # Search/download file (if necessary) and return path
417 | check_suffix(file, suffix) # optional
418 | file = str(file) # convert to str()
419 | if Path(file).is_file() or not file: # exists
420 | return file
421 | elif file.startswith(('http:/', 'https:/')): # download
422 | url = file # warning: Pathlib turns :// -> :/
423 | file = Path(urllib.parse.unquote(file).split('?')[0]).name # '%2F' to '/', split https://url.com/file.txt?auth
424 | if Path(file).is_file():
425 | LOGGER.info(f'Found {url} locally at {file}') # file already exists
426 | else:
427 | LOGGER.info(f'Downloading {url} to {file}...')
428 | torch.hub.download_url_to_file(url, file)
429 | assert Path(file).exists() and Path(file).stat().st_size > 0, f'File download failed: {url}' # check
430 | return file
431 | else: # search
432 | files = []
433 | for d in 'data', 'models', 'utils': # search directories
434 | files.extend(glob.glob(str(ROOT / d / '**' / file), recursive=True)) # find file
435 | assert len(files), f'File not found: {file}' # assert file was found
436 | assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique
437 | return files[0] # return file
438 |
439 |
440 | def check_font(font=FONT, progress=False):
441 | # Download font to CONFIG_DIR if necessary
442 | font = Path(font)
443 | file = CONFIG_DIR / font.name
444 | if not font.exists() and not file.exists():
445 | url = "https://ultralytics.com/assets/" + font.name
446 | LOGGER.info(f'Downloading {url} to {file}...')
447 | torch.hub.download_url_to_file(url, str(file), progress=progress)
448 |
449 |
450 | def check_dataset(data, autodownload=True):
451 | # Download, check and/or unzip dataset if not found locally
452 |
453 | # Download (optional)
454 | extract_dir = ''
455 | if isinstance(data, (str, Path)) and str(data).endswith('.zip'): # i.e. gs://bucket/dir/coco128.zip
456 | download(data, dir=DATASETS_DIR, unzip=True, delete=False, curl=False, threads=1)
457 | data = next((DATASETS_DIR / Path(data).stem).rglob('*.yaml'))
458 | extract_dir, autodownload = data.parent, False
459 |
460 | # Read yaml (optional)
461 | if isinstance(data, (str, Path)):
462 | with open(data, errors='ignore') as f:
463 | data = yaml.safe_load(f) # dictionary
464 |
465 | # Checks
466 | for k in 'train', 'val', 'nc':
467 | assert k in data, emojis(f"data.yaml '{k}:' field missing ❌")
468 | if 'names' not in data:
469 | LOGGER.warning(emojis("data.yaml 'names:' field missing ⚠, assigning default names 'class0', 'class1', etc."))
470 | data['names'] = [f'class{i}' for i in range(data['nc'])] # default names
471 |
472 | # Resolve paths
473 | path = Path(extract_dir or data.get('path') or '') # optional 'path' default to '.'
474 | if not path.is_absolute():
475 | path = (ROOT / path).resolve()
476 | for k in 'train', 'val', 'test':
477 | if data.get(k): # prepend path
478 | data[k] = str(path / data[k]) if isinstance(data[k], str) else [str(path / x) for x in data[k]]
479 |
480 | # Parse yaml
481 | train, val, test, s = (data.get(x) for x in ('train', 'val', 'test', 'download'))
482 | if val:
483 | val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path
484 | if not all(x.exists() for x in val):
485 | LOGGER.info(emojis('\nDataset not found ⚠, missing paths %s' % [str(x) for x in val if not x.exists()]))
486 | if not s or not autodownload:
487 | raise Exception(emojis('Dataset not found ❌'))
488 | t = time.time()
489 | root = path.parent if 'path' in data else '..' # unzip directory i.e. '../'
490 | if s.startswith('http') and s.endswith('.zip'): # URL
491 | f = Path(s).name # filename
492 | LOGGER.info(f'Downloading {s} to {f}...')
493 | torch.hub.download_url_to_file(s, f)
494 | Path(root).mkdir(parents=True, exist_ok=True) # create root
495 | ZipFile(f).extractall(path=root) # unzip
496 | Path(f).unlink() # remove zip
497 | r = None # success
498 | elif s.startswith('bash '): # bash script
499 | LOGGER.info(f'Running {s} ...')
500 | r = os.system(s)
501 | else: # python script
502 | r = exec(s, {'yaml': data}) # return None
503 | dt = f'({round(time.time() - t, 1)}s)'
504 | s = f"success ✅ {dt}, saved to {colorstr('bold', root)}" if r in (0, None) else f"failure {dt} ❌"
505 | LOGGER.info(emojis(f"Dataset download {s}"))
506 | check_font('Arial.ttf' if is_ascii(data['names']) else 'Arial.Unicode.ttf', progress=True) # download fonts
507 | return data # dictionary
508 |
509 |
510 | def check_amp(model):
511 | # Check PyTorch Automatic Mixed Precision (AMP) functionality. Return True on correct operation
512 | from models.common import AutoShape, DetectMultiBackend
513 |
514 | def amp_allclose(model, im):
515 | # All close FP32 vs AMP results
516 | m = AutoShape(model, verbose=False) # model
517 | a = m(im).xywhn[0] # FP32 inference
518 | m.amp = True
519 | b = m(im).xywhn[0] # AMP inference
520 | return a.shape == b.shape and torch.allclose(a, b, atol=0.1) # close to 10% absolute tolerance
521 |
522 | prefix = colorstr('AMP: ')
523 | device = next(model.parameters()).device # get model device
524 | if device.type == 'cpu':
525 | return False # AMP disabled on CPU
526 | f = ROOT / 'data' / 'images' / 'bus.jpg' # image to check
527 | im = f if f.exists() else 'https://ultralytics.com/images/bus.jpg' if check_online() else np.ones((640, 640, 3))
528 | try:
529 | assert amp_allclose(model, im) or amp_allclose(DetectMultiBackend('yolov5n.pt', device), im)
530 | LOGGER.info(emojis(f'{prefix}checks passed ✅'))
531 | return True
532 | except Exception:
533 | help_url = 'https://github.com/ultralytics/yolov5/issues/7908'
534 | LOGGER.warning(emojis(f'{prefix}checks failed ❌, disabling Automatic Mixed Precision. See {help_url}'))
535 | return False
536 |
537 |
538 | def url2file(url):
539 | # Convert URL to filename, i.e. https://url.com/file.txt?auth -> file.txt
540 | url = str(Path(url)).replace(':/', '://') # Pathlib turns :// -> :/
541 | return Path(urllib.parse.unquote(url)).name.split('?')[0] # '%2F' to '/', split https://url.com/file.txt?auth
542 |
543 |
544 | def download(url, dir='.', unzip=True, delete=True, curl=False, threads=1, retry=3):
545 | # Multi-threaded file download and unzip function, used in data.yaml for autodownload
546 | def download_one(url, dir):
547 | # Download 1 file
548 | success = True
549 | f = dir / Path(url).name # filename
550 | if Path(url).is_file(): # exists in current path
551 | Path(url).rename(f) # move to dir
552 | elif not f.exists():
553 | LOGGER.info(f'Downloading {url} to {f}...')
554 | for i in range(retry + 1):
555 | if curl:
556 | s = 'sS' if threads > 1 else '' # silent
557 | r = os.system(f'curl -{s}L "{url}" -o "{f}" --retry 9 -C -') # curl download with retry, continue
558 | success = r == 0
559 | else:
560 | torch.hub.download_url_to_file(url, f, progress=threads == 1) # torch download
561 | success = f.is_file()
562 | if success:
563 | break
564 | elif i < retry:
565 | LOGGER.warning(f'Download failure, retrying {i + 1}/{retry} {url}...')
566 | else:
567 | LOGGER.warning(f'Failed to download {url}...')
568 |
569 | if unzip and success and f.suffix in ('.zip', '.gz'):
570 | LOGGER.info(f'Unzipping {f}...')
571 | if f.suffix == '.zip':
572 | ZipFile(f).extractall(path=dir) # unzip
573 | elif f.suffix == '.gz':
574 | os.system(f'tar xfz {f} --directory {f.parent}') # unzip
575 | if delete:
576 | f.unlink() # remove zip
577 |
578 | dir = Path(dir)
579 | dir.mkdir(parents=True, exist_ok=True) # make directory
580 | if threads > 1:
581 | pool = ThreadPool(threads)
582 | pool.imap(lambda x: download_one(*x), zip(url, repeat(dir))) # multi-threaded
583 | pool.close()
584 | pool.join()
585 | else:
586 | for u in [url] if isinstance(url, (str, Path)) else url:
587 | download_one(u, dir)
588 |
589 |
590 | def make_divisible(x, divisor):
591 | # Returns nearest x divisible by divisor
592 | if isinstance(divisor, torch.Tensor):
593 | divisor = int(divisor.max()) # to int
594 | return math.ceil(x / divisor) * divisor
595 |
596 |
597 | def clean_str(s):
598 | # Cleans a string by replacing special characters with underscore _
599 | return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s)
600 |
601 |
602 | def one_cycle(y1=0.0, y2=1.0, steps=100):
603 | # lambda function for sinusoidal ramp from y1 to y2 https://arxiv.org/pdf/1812.01187.pdf
604 | return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1
605 |
606 |
607 | def colorstr(*input):
608 | # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world')
609 | *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string
610 | colors = {
611 | 'black': '\033[30m', # basic colors
612 | 'red': '\033[31m',
613 | 'green': '\033[32m',
614 | 'yellow': '\033[33m',
615 | 'blue': '\033[34m',
616 | 'magenta': '\033[35m',
617 | 'cyan': '\033[36m',
618 | 'white': '\033[37m',
619 | 'bright_black': '\033[90m', # bright colors
620 | 'bright_red': '\033[91m',
621 | 'bright_green': '\033[92m',
622 | 'bright_yellow': '\033[93m',
623 | 'bright_blue': '\033[94m',
624 | 'bright_magenta': '\033[95m',
625 | 'bright_cyan': '\033[96m',
626 | 'bright_white': '\033[97m',
627 | 'end': '\033[0m', # misc
628 | 'bold': '\033[1m',
629 | 'underline': '\033[4m'}
630 | return ''.join(colors[x] for x in args) + f'{string}' + colors['end']
631 |
632 |
633 | def labels_to_class_weights(labels, nc=80):
634 | # Get class weights (inverse frequency) from training labels
635 | if labels[0] is None: # no labels loaded
636 | return torch.Tensor()
637 |
638 | labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO
639 | classes = labels[:, 0].astype(np.int) # labels = [class xywh]
640 | weights = np.bincount(classes, minlength=nc) # occurrences per class
641 |
642 | # Prepend gridpoint count (for uCE training)
643 | # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image
644 | # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start
645 |
646 | weights[weights == 0] = 1 # replace empty bins with 1
647 | weights = 1 / weights # number of targets per class
648 | weights /= weights.sum() # normalize
649 | return torch.from_numpy(weights)
650 |
651 |
652 | def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):
653 | # Produces image weights based on class_weights and image contents
654 | # Usage: index = random.choices(range(n), weights=image_weights, k=1) # weighted image sample
655 | class_counts = np.array([np.bincount(x[:, 0].astype(np.int), minlength=nc) for x in labels])
656 | return (class_weights.reshape(1, nc) * class_counts).sum(1)
657 |
658 |
659 | def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper)
660 | # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
661 | # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n')
662 | # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n')
663 | # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco
664 | # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet
665 | return [
666 | 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34,
667 | 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
668 | 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]
669 |
670 |
671 | def xyxy2xywh(x):
672 | # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
673 | y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
674 | y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center
675 | y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center
676 | y[:, 2] = x[:, 2] - x[:, 0] # width
677 | y[:, 3] = x[:, 3] - x[:, 1] # height
678 | return y
679 |
680 |
681 | def xywh2xyxy(x):
682 | # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
683 | y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
684 | y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x
685 | y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y
686 | y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x
687 | y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y
688 | return y
689 |
690 |
691 | def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
692 | # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
693 | y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
694 | y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x
695 | y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y
696 | y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x
697 | y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y
698 | return y
699 |
700 |
701 | def xyxy2xywhn(x, w=640, h=640, clip=False, eps=0.0):
702 | # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] normalized where xy1=top-left, xy2=bottom-right
703 | if clip:
704 | clip_coords(x, (h - eps, w - eps)) # warning: inplace clip
705 | y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
706 | y[:, 0] = ((x[:, 0] + x[:, 2]) / 2) / w # x center
707 | y[:, 1] = ((x[:, 1] + x[:, 3]) / 2) / h # y center
708 | y[:, 2] = (x[:, 2] - x[:, 0]) / w # width
709 | y[:, 3] = (x[:, 3] - x[:, 1]) / h # height
710 | return y
711 |
712 |
713 | def xyn2xy(x, w=640, h=640, padw=0, padh=0):
714 | # Convert normalized segments into pixel segments, shape (n,2)
715 | y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
716 | y[:, 0] = w * x[:, 0] + padw # top left x
717 | y[:, 1] = h * x[:, 1] + padh # top left y
718 | return y
719 |
720 |
721 | def segment2box(segment, width=640, height=640):
722 | # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy)
723 | x, y = segment.T # segment xy
724 | inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
725 | x, y, = x[inside], y[inside]
726 | return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy
727 |
728 |
729 | def segments2boxes(segments):
730 | # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh)
731 | boxes = []
732 | for s in segments:
733 | x, y = s.T # segment xy
734 | boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy
735 | return xyxy2xywh(np.array(boxes)) # cls, xywh
736 |
737 |
738 | def resample_segments(segments, n=1000):
739 | # Up-sample an (n,2) segment
740 | for i, s in enumerate(segments):
741 | x = np.linspace(0, len(s) - 1, n)
742 | xp = np.arange(len(s))
743 | segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy
744 | return segments
745 |
746 |
747 | def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
748 | # Rescale coords (xyxy) from img1_shape to img0_shape
749 | if ratio_pad is None: # calculate from img0_shape
750 | gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
751 | pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
752 | else:
753 | gain = ratio_pad[0][0]
754 | pad = ratio_pad[1]
755 |
756 | coords[:, [0, 2]] -= pad[0] # x padding
757 | coords[:, [1, 3]] -= pad[1] # y padding
758 | coords[:, :4] /= gain
759 | clip_coords(coords, img0_shape)
760 | return coords
761 |
762 |
763 | def clip_coords(boxes, shape):
764 | # Clip bounding xyxy bounding boxes to image shape (height, width)
765 | if isinstance(boxes, torch.Tensor): # faster individually
766 | boxes[:, 0].clamp_(0, shape[1]) # x1
767 | boxes[:, 1].clamp_(0, shape[0]) # y1
768 | boxes[:, 2].clamp_(0, shape[1]) # x2
769 | boxes[:, 3].clamp_(0, shape[0]) # y2
770 | else: # np.array (faster grouped)
771 | boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, shape[1]) # x1, x2
772 | boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, shape[0]) # y1, y2
773 |
774 |
775 | def non_max_suppression(prediction,
776 | conf_thres=0.25,
777 | iou_thres=0.45,
778 | classes=None,
779 | agnostic=False,
780 | multi_label=False,
781 | labels=(),
782 | max_det=300):
783 | """Non-Maximum Suppression (NMS) on inference results to reject overlapping bounding boxes
784 |
785 | Returns:
786 | list of detections, on (n,6) tensor per image [xyxy, conf, cls]
787 | """
788 |
789 | bs = prediction.shape[0] # batch size
790 | nc = prediction.shape[2] - 5 # number of classes
791 | xc = prediction[..., 4] > conf_thres # candidates
792 |
793 | # Checks
794 | assert 0 <= conf_thres <= 1, f'Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0'
795 | assert 0 <= iou_thres <= 1, f'Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0'
796 |
797 | # Settings
798 | # min_wh = 2 # (pixels) minimum box width and height
799 | max_wh = 7680 # (pixels) maximum box width and height
800 | max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
801 | time_limit = 0.3 + 0.03 * bs # seconds to quit after
802 | redundant = True # require redundant detections
803 | multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
804 | merge = False # use merge-NMS
805 |
806 | t = time.time()
807 | output = [torch.zeros((0, 6), device=prediction.device)] * bs
808 | for xi, x in enumerate(prediction): # image index, image inference
809 | # Apply constraints
810 | # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
811 | x = x[xc[xi]] # confidence
812 |
813 | # Cat apriori labels if autolabelling
814 | if labels and len(labels[xi]):
815 | lb = labels[xi]
816 | v = torch.zeros((len(lb), nc + 5), device=x.device)
817 | v[:, :4] = lb[:, 1:5] # box
818 | v[:, 4] = 1.0 # conf
819 | v[range(len(lb)), lb[:, 0].long() + 5] = 1.0 # cls
820 | x = torch.cat((x, v), 0)
821 |
822 | # If none remain process next image
823 | if not x.shape[0]:
824 | continue
825 |
826 | # Compute conf
827 | x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
828 |
829 | # Box (center x, center y, width, height) to (x1, y1, x2, y2)
830 | box = xywh2xyxy(x[:, :4])
831 |
832 | # Detections matrix nx6 (xyxy, conf, cls)
833 | if multi_label:
834 | i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
835 | x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
836 | else: # best class only
837 | conf, j = x[:, 5:].max(1, keepdim=True)
838 | x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
839 |
840 | # Filter by class
841 | if classes is not None:
842 | x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
843 |
844 | # Apply finite constraint
845 | # if not torch.isfinite(x).all():
846 | # x = x[torch.isfinite(x).all(1)]
847 |
848 | # Check shape
849 | n = x.shape[0] # number of boxes
850 | if not n: # no boxes
851 | continue
852 | elif n > max_nms: # excess boxes
853 | x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
854 |
855 | # Batched NMS
856 | c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
857 | boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
858 | i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
859 | if i.shape[0] > max_det: # limit detections
860 | i = i[:max_det]
861 | if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
862 | # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
863 | iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
864 | weights = iou * scores[None] # box weights
865 | x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
866 | if redundant:
867 | i = i[iou.sum(1) > 1] # require redundancy
868 |
869 | output[xi] = x[i]
870 | if (time.time() - t) > time_limit:
871 | LOGGER.warning(f'WARNING: NMS time limit {time_limit:.3f}s exceeded')
872 | break # time limit exceeded
873 |
874 | return output
875 |
876 |
877 | def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer()
878 | # Strip optimizer from 'f' to finalize training, optionally save as 's'
879 | x = torch.load(f, map_location=torch.device('cpu'))
880 | if x.get('ema'):
881 | x['model'] = x['ema'] # replace model with ema
882 | for k in 'optimizer', 'best_fitness', 'wandb_id', 'ema', 'updates': # keys
883 | x[k] = None
884 | x['epoch'] = -1
885 | x['model'].half() # to FP16
886 | for p in x['model'].parameters():
887 | p.requires_grad = False
888 | torch.save(x, s or f)
889 | mb = os.path.getsize(s or f) / 1E6 # filesize
890 | LOGGER.info(f"Optimizer stripped from {f},{f' saved as {s},' if s else ''} {mb:.1f}MB")
891 |
892 |
893 | def print_mutation(results, hyp, save_dir, bucket, prefix=colorstr('evolve: ')):
894 | evolve_csv = save_dir / 'evolve.csv'
895 | evolve_yaml = save_dir / 'hyp_evolve.yaml'
896 | keys = ('metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', 'val/box_loss',
897 | 'val/obj_loss', 'val/cls_loss') + tuple(hyp.keys()) # [results + hyps]
898 | keys = tuple(x.strip() for x in keys)
899 | vals = results + tuple(hyp.values())
900 | n = len(keys)
901 |
902 | # Download (optional)
903 | if bucket:
904 | url = f'gs://{bucket}/evolve.csv'
905 | if gsutil_getsize(url) > (evolve_csv.stat().st_size if evolve_csv.exists() else 0):
906 | os.system(f'gsutil cp {url} {save_dir}') # download evolve.csv if larger than local
907 |
908 | # Log to evolve.csv
909 | s = '' if evolve_csv.exists() else (('%20s,' * n % keys).rstrip(',') + '\n') # add header
910 | with open(evolve_csv, 'a') as f:
911 | f.write(s + ('%20.5g,' * n % vals).rstrip(',') + '\n')
912 |
913 | # Save yaml
914 | with open(evolve_yaml, 'w') as f:
915 | data = pd.read_csv(evolve_csv)
916 | data = data.rename(columns=lambda x: x.strip()) # strip keys
917 | i = np.argmax(fitness(data.values[:, :4])) #
918 | generations = len(data)
919 | f.write('# YOLOv5 Hyperparameter Evolution Results\n' + f'# Best generation: {i}\n' +
920 | f'# Last generation: {generations - 1}\n' + '# ' + ', '.join(f'{x.strip():>20s}' for x in keys[:7]) +
921 | '\n' + '# ' + ', '.join(f'{x:>20.5g}' for x in data.values[i, :7]) + '\n\n')
922 | yaml.safe_dump(data.loc[i][7:].to_dict(), f, sort_keys=False)
923 |
924 | # Print to screen
925 | LOGGER.info(prefix + f'{generations} generations finished, current result:\n' + prefix +
926 | ', '.join(f'{x.strip():>20s}' for x in keys) + '\n' + prefix + ', '.join(f'{x:20.5g}'
927 | for x in vals) + '\n\n')
928 |
929 | if bucket:
930 | os.system(f'gsutil cp {evolve_csv} {evolve_yaml} gs://{bucket}') # upload
931 |
932 |
933 | def apply_classifier(x, model, img, im0):
934 | # Apply a second stage classifier to YOLO outputs
935 | # Example model = torchvision.models.__dict__['efficientnet_b0'](pretrained=True).to(device).eval()
936 | im0 = [im0] if isinstance(im0, np.ndarray) else im0
937 | for i, d in enumerate(x): # per image
938 | if d is not None and len(d):
939 | d = d.clone()
940 |
941 | # Reshape and pad cutouts
942 | b = xyxy2xywh(d[:, :4]) # boxes
943 | b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square
944 | b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad
945 | d[:, :4] = xywh2xyxy(b).long()
946 |
947 | # Rescale boxes from img_size to im0 size
948 | scale_coords(img.shape[2:], d[:, :4], im0[i].shape)
949 |
950 | # Classes
951 | pred_cls1 = d[:, 5].long()
952 | ims = []
953 | for a in d:
954 | cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])]
955 | im = cv2.resize(cutout, (224, 224)) # BGR
956 |
957 | im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
958 | im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32
959 | im /= 255 # 0 - 255 to 0.0 - 1.0
960 | ims.append(im)
961 |
962 | pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction
963 | x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections
964 |
965 | return x
966 |
967 |
968 | def increment_path(path, exist_ok=False, sep='', mkdir=False):
969 | # Increment file or directory path, i.e. runs/exp --> runs/exp{sep}2, runs/exp{sep}3, ... etc.
970 | path = Path(path) # os-agnostic
971 | if path.exists() and not exist_ok:
972 | path, suffix = (path.with_suffix(''), path.suffix) if path.is_file() else (path, '')
973 |
974 | # Method 1
975 | for n in range(2, 9999):
976 | p = f'{path}{sep}{n}{suffix}' # increment path
977 | if not os.path.exists(p): #
978 | break
979 | path = Path(p)
980 |
981 | # Method 2 (deprecated)
982 | # dirs = glob.glob(f"{path}{sep}*") # similar paths
983 | # matches = [re.search(rf"{path.stem}{sep}(\d+)", d) for d in dirs]
984 | # i = [int(m.groups()[0]) for m in matches if m] # indices
985 | # n = max(i) + 1 if i else 2 # increment number
986 | # path = Path(f"{path}{sep}{n}{suffix}") # increment path
987 |
988 | if mkdir:
989 | path.mkdir(parents=True, exist_ok=True) # make directory
990 |
991 | return path
992 |
993 |
994 | # OpenCV Chinese-friendly functions ------------------------------------------------------------------------------------
995 | imshow_ = cv2.imshow # copy to avoid recursion errors
996 |
997 |
998 | def imread(path, flags=cv2.IMREAD_COLOR):
999 | return cv2.imdecode(np.fromfile(path, np.uint8), flags)
1000 |
1001 |
1002 | def imwrite(path, im):
1003 | try:
1004 | cv2.imencode(Path(path).suffix, im)[1].tofile(path)
1005 | return True
1006 | except Exception:
1007 | return False
1008 |
1009 |
1010 | def imshow(path, im):
1011 | imshow_(path.encode('unicode_escape').decode(), im)
1012 |
1013 |
1014 | cv2.imread, cv2.imwrite, cv2.imshow = imread, imwrite, imshow # redefine
1015 |
1016 | # Variables ------------------------------------------------------------------------------------------------------------
1017 | NCOLS = 0 if is_docker() else shutil.get_terminal_size().columns # terminal window size for tqdm
1018 |
--------------------------------------------------------------------------------