├── .gitignore
├── README.md
├── app
│   ├── PyTorch_YOLOv3
│   │   ├── config
│   │   │   ├── coco.data
│   │   │   ├── create_custom_model.sh
│   │   │   ├── custom.data
│   │   │   ├── yolov3-custom.cfg
│   │   │   ├── yolov3-tiny.cfg
│   │   │   └── yolov3.cfg
│   │   ├── data
│   │   │   └── custom
│   │   │       ├── classes.names
│   │   │       ├── generate.py
│   │   │       ├── train.txt
│   │   │       └── valid.txt
│   │   ├── detect.py
│   │   ├── models.py
│   │   ├── test.py
│   │   ├── train.py
│   │   └── utils
│   │       ├── __init__.py
│   │       ├── augmentations.py
│   │       ├── datasets.py
│   │       ├── logger.py
│   │       ├── parse_config.py
│   │       └── utils.py
│   ├── VertebraSegmentation
│   │   ├── connected_component.py
│   │   ├── coordinate.py
│   │   ├── filp_and_rotate.py
│   │   ├── generate.py
│   │   ├── net
│   │   │   ├── data
│   │   │   │   ├── __init__.py
│   │   │   │   ├── __pycache__
│   │   │   │   │   ├── __init__.cpython-36.pyc
│   │   │   │   │   ├── __init__.cpython-37.pyc
│   │   │   │   │   ├── dataset.cpython-36.pyc
│   │   │   │   │   └── dataset.cpython-37.pyc
│   │   │   │   └── dataset.py
│   │   │   ├── model
│   │   │   │   ├── __init__.py
│   │   │   │   ├── __pycache__
│   │   │   │   │   ├── __init__.cpython-36.pyc
│   │   │   │   │   ├── __init__.cpython-37.pyc
│   │   │   │   │   ├── components.cpython-36.pyc
│   │   │   │   │   ├── components.cpython-37.pyc
│   │   │   │   │   ├── resunet.cpython-36.pyc
│   │   │   │   │   ├── resunet.cpython-37.pyc
│   │   │   │   │   ├── resunet_parts.cpython-36.pyc
│   │   │   │   │   ├── resunet_parts.cpython-37.pyc
│   │   │   │   │   ├── unet.cpython-36.pyc
│   │   │   │   │   ├── unet.cpython-37.pyc
│   │   │   │   │   ├── unet_parts.cpython-36.pyc
│   │   │   │   │   └── unet_parts.cpython-37.pyc
│   │   │   │   ├── components.py
│   │   │   │   ├── resunet.py
│   │   │   │   ├── resunet_parts.py
│   │   │   │   ├── unet.py
│   │   │   │   └── unet_parts.py
│   │   │   ├── predict.py
│   │   │   └── train.py
│   │   └── utils.py
│   ├── app.py
│   ├── cross.txt
│   └── main.py
├── images
│   ├── Cobb.jpg
│   ├── demo.jpg
│   ├── flowchart.jpg
│   ├── logo.png
│   └── screenshot.png
└── requirements.txt
/.gitignore:
--------------------------------------------------------------------------------
1 | app/result
2 | app/source
3 | app/valid_data
4 | app/output
5 | app/coordinate
6 | app/GT_coordinate
7 | app/Report.docx
8 | app/Vertebra.pptx
9 | app/__pycache__
10 | app/VertebraSegmentation/coordinate
11 | app/VertebraSegmentation/detect_data
12 | app/VertebraSegmentation/extend_dataset
13 | app/VertebraSegmentation/extend_detect_data
14 | app/VertebraSegmentation/label_data
15 | app/VertebraSegmentation/label_extend_dataset
16 | app/VertebraSegmentation/labels
17 | app/VertebraSegmentation/object_detect_label
18 | app/VertebraSegmentation/original_data
19 | app/VertebraSegmentation/original_split_data
20 | app/VertebraSegmentation/save_model
21 | app/VertebraSegmentation/splitting_dataset
22 | app/VertebraSegmentation/temp
23 | app/VertebraSegmentation/test
24 | app/VertebraSegmentation/valid_data
25 | app/VertebraSegmentation/xml
26 | app/VertebraSegmentation/net/save
27 |
28 | app/PyTorch_YOLOv3/checkpoints
29 | app/PyTorch_YOLOv3/coordinate
30 | app/PyTorch_YOLOv3/data/custom/images
31 | app/PyTorch_YOLOv3/data/custom/labels
32 | app/PyTorch_YOLOv3/output
33 | app/PyTorch_YOLOv3/pre_img
34 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
25 | [![Contributors][contributors-shield]][contributors-url]
26 | [![Forks][forks-shield]][forks-url]
27 | [![Stargazers][stars-shield]][stars-url]
28 | [![Issues][issues-shield]][issues-url]
29 | [![MIT License][license-shield]][license-url]
30 | [![LinkedIn][linkedin-shield]][linkedin-url]
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 | Cobb angle can be measured from spinal X-ray images.
42 |
43 |
44 | Automatically locate and segment the vertebrae in anterior-posterior (AP) view spinal X-ray images (grey level).
45 |
46 | Explore the docs »
47 |
48 |
49 | View Demo
50 | ·
51 | Report Bug
52 | ·
53 | Request Feature
54 |
55 |
56 |
57 |
58 |
59 |
60 | ## Table of Contents
61 |
62 | * [About the Project](#about-the-project)
63 | * [Built With](#built-with)
64 | * [Getting Started](#getting-started)
65 | * [Prerequisites](#prerequisites)
66 | * [Installation](#installation)
67 | * [Usage](#usage)
68 | * [FlowChart](#flowchart)
69 | * [Contributing](#contributing)
70 | * [License](#license)
71 | * [Contact](#contact)
72 | * [References](#References)
73 |
74 |
75 |
76 |
77 | ## About The Project
78 |
79 |
80 | [![Product Name Screen Shot][demo]](https://example.com)
81 |
82 |
85 |
86 |
87 | ### Built With
88 |
89 | * Python
90 | * PyTorch
91 | * PyQt5
92 |
93 |
94 |
95 |
96 | ## Getting Started
97 |
98 | To get a local copy up and running follow these simple steps.
99 |
100 | ### Prerequisites
101 |
102 | * Python
103 | ```sh
104 | numpy
105 | torch>=1.4
106 | torchvision
107 | matplotlib
108 | tensorflow
109 | tensorboard
110 | terminaltables
111 | pillow
112 | tqdm
113 | ```
114 |
115 | ### Installation
116 |
117 | 1. Clone the repo
118 | ```sh
119 | git clone https://github.com/z0978916348/Localization_and_Segmentation.git
120 | ```
121 | 2. Install python packages
122 | ```sh
123 | pip install -r requirements.txt
124 | ```
125 | 3. Prepare VertebraSegmentation data
126 | ```sh
127 | └ train
128 | └ 1.png
129 | └ 2.png
130 | └ ...
131 | └ test
132 | └ 5.png
133 | └ 6.png
134 | └ ...
135 | └ valid
136 | └ 8.png
137 | └ 9.png
138 | └ ...
139 | ```
140 | 4. Prepare PyTorch_YOLOv3 data
141 |
142 | ### Classes
143 | Add class names to `data/custom/classes.names`. This file should have one row per class name.
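In this project the file contains a single class:
```sh
bone
```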
144 |
145 | ### Image Folder
146 | Move the images of your dataset to `data/custom/images/`.
147 |
148 | "train and valid respectively"
149 |
150 | ### Annotation Folder
151 | Move your annotations to `data/custom/labels/`. The dataloader expects the annotation file for the image `data/custom/images/train/0021.png` to be at `data/custom/labels/train/0021.txt`. Each row in the annotation file should define one bounding box, using the syntax `label_idx x_center y_center width height`. The coordinates should be scaled to [0, 1], and `label_idx` should be zero-indexed and correspond to the row number of the class name in `data/custom/classes.names`.
152 |
153 | "train and valid respectively"
154 |
155 | ### Define Train and Validation Sets
156 | In `data/custom/train.txt` and `data/custom/valid.txt`, add paths to images that will be used as train and validation data respectively.
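These split files simply list one image path per line; for example, the beginning of this repository's `data/custom/train.txt`:
```sh
data/custom/images/train/0021.png
data/custom/images/train/0022.png
data/custom/images/train/0023.png
```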
157 |
158 | After training, follow the folder layout expected by app.py and put your dataset into the corresponding folders.
159 | The layout is straightforward, so it is not described step by step here.
160 |
161 | 5. Execute
162 | ```sh
163 | python app.py
164 | ```
165 |
166 |
167 |
168 | ## Usage
169 |
170 | We use deep learning models to precisely detect and segment the spine, and the result can then be used to measure the Cobb angle.
171 | ![Cobb][Cobb]
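As a rough illustration of the final measurement step (a minimal sketch, not the code used in this repository), the Cobb angle can be approximated from the per-vertebra tilt angles as the difference between the most positively and most negatively tilted endplates:

```python
import numpy as np

def cobb_angle(tilts_deg):
    """Approximate Cobb angle from per-vertebra endplate tilt angles (degrees).

    `tilts_deg` is assumed to hold one tilt per detected vertebra,
    ordered from the top to the bottom of the spine.
    """
    tilts = np.asarray(tilts_deg, dtype=float)
    # Cobb angle: angle between the most opposite-tilted endplates of the curve
    return float(tilts.max() - tilts.min())

# Example: endplates tilting from -12 degrees at the top to +18 degrees mid-curve
print(cobb_angle([-12.0, -7.5, -1.0, 6.0, 13.5, 18.0, 10.0]))  # ~30.0 degrees
```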
172 |
173 |
174 |
175 |
176 |
177 |
178 |
179 |
180 | ## Flow Chart
181 |
182 | ![flowchart][flowchart]
183 |
184 |
185 |
186 | ## Contributing
187 |
188 | Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
189 |
190 | 1. Fork the Project
191 | 2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
192 | 3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
193 | 4. Push to the Branch (`git push origin feature/AmazingFeature`)
194 | 5. Open a Pull Request
195 |
196 |
197 |
198 |
199 | ## License
200 |
201 | Distributed under the MIT License. See `LICENSE` for more information.
202 |
203 |
204 |
205 |
206 | ## Contact
207 |
208 |
209 |
210 | Project Link: [https://github.com/z0978916348/Localization_and_Segmentation](https://github.com/z0978916348/Localization_and_Segmentation)
211 |
212 |
213 |
214 |
215 | ## References
216 |
217 | * Ronneberger, O., Fischer, P., & Brox, T. (2015, October). U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 234-241). Springer, Cham.
218 | * Zhang, Z., Liu, Q., & Wang, Y. (2018). Road extraction by deep residual U-Net. IEEE Geoscience and Remote Sensing Letters. http://arxiv.org/abs/1711.10684
219 | * Horng, M. H., Kuok, C. P., Fu, M. J., Lin, C. J., & Sun, Y. N. (2019). Cobb angle measurement of spine from X-ray images using convolutional neural network. Computational and Mathematical Methods in Medicine, 2019.
220 | * Al Arif, S. M. R., Knapp, K., & Slabaugh, G. (2017). Region-aware deep localization framework for cervical vertebrae in X-ray images. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (pp. 74-82). Springer, Cham.
221 |
222 |
223 |
224 |
225 |
226 |
227 |
228 |
229 | [contributors-shield]: https://img.shields.io/github/contributors/z0978916348/repo.svg?style=flat-square
230 | [contributors-url]: https://github.com/z0978916348/repo/graphs/contributors
231 | [forks-shield]: https://img.shields.io/github/forks/z0978916348/repo.svg?style=flat-square
232 | [forks-url]: https://github.com/z0978916348/repo/network/members
233 | [stars-shield]: https://img.shields.io/github/stars/z0978916348/repo.svg?style=flat-square
234 | [stars-url]: https://github.com/z0978916348/repo/stargazers
235 | [issues-shield]: https://img.shields.io/github/issues/z0978916348/repo.svg?style=flat-square
236 | [issues-url]: https://github.com/z0978916348/repo/issues
237 | [license-shield]: https://img.shields.io/github/license/z0978916348/repo.svg?style=flat-square
238 | [license-url]: https://github.com/z0978916348/repo/blob/master/LICENSE.txt
239 | [linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=flat-square&logo=linkedin&colorB=555
240 | [linkedin-url]: https://linkedin.com/in/z0978916348
241 | [product-screenshot]: images/screenshot.png
242 | [demo]: images/demo.jpg
243 | [Cobb]: images/Cobb.jpg
244 | [FlowChart]: images/flowchart.jpg
245 |
--------------------------------------------------------------------------------
/app/PyTorch_YOLOv3/config/coco.data:
--------------------------------------------------------------------------------
1 | classes= 80
2 | train=data/coco/trainvalno5k.txt
3 | valid=data/coco/5k.txt
4 | names=data/coco.names
5 | backup=backup/
6 | eval=coco
7 |
--------------------------------------------------------------------------------
/app/PyTorch_YOLOv3/config/create_custom_model.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | NUM_CLASSES=$1
4 |
5 | echo "
6 | [net]
7 | # Testing
8 | #batch=1
9 | #subdivisions=1
10 | # Training
11 | batch=16
12 | subdivisions=1
13 | width=416
14 | height=416
15 | channels=3
16 | momentum=0.9
17 | decay=0.0005
18 | angle=0
19 | saturation = 1.5
20 | exposure = 1.5
21 | hue=.1
22 |
23 | learning_rate=0.001
24 | burn_in=1000
25 | max_batches = 500200
26 | policy=steps
27 | steps=400000,450000
28 | scales=.1,.1
29 |
30 | [convolutional]
31 | batch_normalize=1
32 | filters=32
33 | size=3
34 | stride=1
35 | pad=1
36 | activation=leaky
37 |
38 | # Downsample
39 |
40 | [convolutional]
41 | batch_normalize=1
42 | filters=64
43 | size=3
44 | stride=2
45 | pad=1
46 | activation=leaky
47 |
48 | [convolutional]
49 | batch_normalize=1
50 | filters=32
51 | size=1
52 | stride=1
53 | pad=1
54 | activation=leaky
55 |
56 | [convolutional]
57 | batch_normalize=1
58 | filters=64
59 | size=3
60 | stride=1
61 | pad=1
62 | activation=leaky
63 |
64 | [shortcut]
65 | from=-3
66 | activation=linear
67 |
68 | # Downsample
69 |
70 | [convolutional]
71 | batch_normalize=1
72 | filters=128
73 | size=3
74 | stride=2
75 | pad=1
76 | activation=leaky
77 |
78 | [convolutional]
79 | batch_normalize=1
80 | filters=64
81 | size=1
82 | stride=1
83 | pad=1
84 | activation=leaky
85 |
86 | [convolutional]
87 | batch_normalize=1
88 | filters=128
89 | size=3
90 | stride=1
91 | pad=1
92 | activation=leaky
93 |
94 | [shortcut]
95 | from=-3
96 | activation=linear
97 |
98 | [convolutional]
99 | batch_normalize=1
100 | filters=64
101 | size=1
102 | stride=1
103 | pad=1
104 | activation=leaky
105 |
106 | [convolutional]
107 | batch_normalize=1
108 | filters=128
109 | size=3
110 | stride=1
111 | pad=1
112 | activation=leaky
113 |
114 | [shortcut]
115 | from=-3
116 | activation=linear
117 |
118 | # Downsample
119 |
120 | [convolutional]
121 | batch_normalize=1
122 | filters=256
123 | size=3
124 | stride=2
125 | pad=1
126 | activation=leaky
127 |
128 | [convolutional]
129 | batch_normalize=1
130 | filters=128
131 | size=1
132 | stride=1
133 | pad=1
134 | activation=leaky
135 |
136 | [convolutional]
137 | batch_normalize=1
138 | filters=256
139 | size=3
140 | stride=1
141 | pad=1
142 | activation=leaky
143 |
144 | [shortcut]
145 | from=-3
146 | activation=linear
147 |
148 | [convolutional]
149 | batch_normalize=1
150 | filters=128
151 | size=1
152 | stride=1
153 | pad=1
154 | activation=leaky
155 |
156 | [convolutional]
157 | batch_normalize=1
158 | filters=256
159 | size=3
160 | stride=1
161 | pad=1
162 | activation=leaky
163 |
164 | [shortcut]
165 | from=-3
166 | activation=linear
167 |
168 | [convolutional]
169 | batch_normalize=1
170 | filters=128
171 | size=1
172 | stride=1
173 | pad=1
174 | activation=leaky
175 |
176 | [convolutional]
177 | batch_normalize=1
178 | filters=256
179 | size=3
180 | stride=1
181 | pad=1
182 | activation=leaky
183 |
184 | [shortcut]
185 | from=-3
186 | activation=linear
187 |
188 | [convolutional]
189 | batch_normalize=1
190 | filters=128
191 | size=1
192 | stride=1
193 | pad=1
194 | activation=leaky
195 |
196 | [convolutional]
197 | batch_normalize=1
198 | filters=256
199 | size=3
200 | stride=1
201 | pad=1
202 | activation=leaky
203 |
204 | [shortcut]
205 | from=-3
206 | activation=linear
207 |
208 |
209 | [convolutional]
210 | batch_normalize=1
211 | filters=128
212 | size=1
213 | stride=1
214 | pad=1
215 | activation=leaky
216 |
217 | [convolutional]
218 | batch_normalize=1
219 | filters=256
220 | size=3
221 | stride=1
222 | pad=1
223 | activation=leaky
224 |
225 | [shortcut]
226 | from=-3
227 | activation=linear
228 |
229 | [convolutional]
230 | batch_normalize=1
231 | filters=128
232 | size=1
233 | stride=1
234 | pad=1
235 | activation=leaky
236 |
237 | [convolutional]
238 | batch_normalize=1
239 | filters=256
240 | size=3
241 | stride=1
242 | pad=1
243 | activation=leaky
244 |
245 | [shortcut]
246 | from=-3
247 | activation=linear
248 |
249 | [convolutional]
250 | batch_normalize=1
251 | filters=128
252 | size=1
253 | stride=1
254 | pad=1
255 | activation=leaky
256 |
257 | [convolutional]
258 | batch_normalize=1
259 | filters=256
260 | size=3
261 | stride=1
262 | pad=1
263 | activation=leaky
264 |
265 | [shortcut]
266 | from=-3
267 | activation=linear
268 |
269 | [convolutional]
270 | batch_normalize=1
271 | filters=128
272 | size=1
273 | stride=1
274 | pad=1
275 | activation=leaky
276 |
277 | [convolutional]
278 | batch_normalize=1
279 | filters=256
280 | size=3
281 | stride=1
282 | pad=1
283 | activation=leaky
284 |
285 | [shortcut]
286 | from=-3
287 | activation=linear
288 |
289 | # Downsample
290 |
291 | [convolutional]
292 | batch_normalize=1
293 | filters=512
294 | size=3
295 | stride=2
296 | pad=1
297 | activation=leaky
298 |
299 | [convolutional]
300 | batch_normalize=1
301 | filters=256
302 | size=1
303 | stride=1
304 | pad=1
305 | activation=leaky
306 |
307 | [convolutional]
308 | batch_normalize=1
309 | filters=512
310 | size=3
311 | stride=1
312 | pad=1
313 | activation=leaky
314 |
315 | [shortcut]
316 | from=-3
317 | activation=linear
318 |
319 |
320 | [convolutional]
321 | batch_normalize=1
322 | filters=256
323 | size=1
324 | stride=1
325 | pad=1
326 | activation=leaky
327 |
328 | [convolutional]
329 | batch_normalize=1
330 | filters=512
331 | size=3
332 | stride=1
333 | pad=1
334 | activation=leaky
335 |
336 | [shortcut]
337 | from=-3
338 | activation=linear
339 |
340 |
341 | [convolutional]
342 | batch_normalize=1
343 | filters=256
344 | size=1
345 | stride=1
346 | pad=1
347 | activation=leaky
348 |
349 | [convolutional]
350 | batch_normalize=1
351 | filters=512
352 | size=3
353 | stride=1
354 | pad=1
355 | activation=leaky
356 |
357 | [shortcut]
358 | from=-3
359 | activation=linear
360 |
361 |
362 | [convolutional]
363 | batch_normalize=1
364 | filters=256
365 | size=1
366 | stride=1
367 | pad=1
368 | activation=leaky
369 |
370 | [convolutional]
371 | batch_normalize=1
372 | filters=512
373 | size=3
374 | stride=1
375 | pad=1
376 | activation=leaky
377 |
378 | [shortcut]
379 | from=-3
380 | activation=linear
381 |
382 | [convolutional]
383 | batch_normalize=1
384 | filters=256
385 | size=1
386 | stride=1
387 | pad=1
388 | activation=leaky
389 |
390 | [convolutional]
391 | batch_normalize=1
392 | filters=512
393 | size=3
394 | stride=1
395 | pad=1
396 | activation=leaky
397 |
398 | [shortcut]
399 | from=-3
400 | activation=linear
401 |
402 |
403 | [convolutional]
404 | batch_normalize=1
405 | filters=256
406 | size=1
407 | stride=1
408 | pad=1
409 | activation=leaky
410 |
411 | [convolutional]
412 | batch_normalize=1
413 | filters=512
414 | size=3
415 | stride=1
416 | pad=1
417 | activation=leaky
418 |
419 | [shortcut]
420 | from=-3
421 | activation=linear
422 |
423 |
424 | [convolutional]
425 | batch_normalize=1
426 | filters=256
427 | size=1
428 | stride=1
429 | pad=1
430 | activation=leaky
431 |
432 | [convolutional]
433 | batch_normalize=1
434 | filters=512
435 | size=3
436 | stride=1
437 | pad=1
438 | activation=leaky
439 |
440 | [shortcut]
441 | from=-3
442 | activation=linear
443 |
444 | [convolutional]
445 | batch_normalize=1
446 | filters=256
447 | size=1
448 | stride=1
449 | pad=1
450 | activation=leaky
451 |
452 | [convolutional]
453 | batch_normalize=1
454 | filters=512
455 | size=3
456 | stride=1
457 | pad=1
458 | activation=leaky
459 |
460 | [shortcut]
461 | from=-3
462 | activation=linear
463 |
464 | # Downsample
465 |
466 | [convolutional]
467 | batch_normalize=1
468 | filters=1024
469 | size=3
470 | stride=2
471 | pad=1
472 | activation=leaky
473 |
474 | [convolutional]
475 | batch_normalize=1
476 | filters=512
477 | size=1
478 | stride=1
479 | pad=1
480 | activation=leaky
481 |
482 | [convolutional]
483 | batch_normalize=1
484 | filters=1024
485 | size=3
486 | stride=1
487 | pad=1
488 | activation=leaky
489 |
490 | [shortcut]
491 | from=-3
492 | activation=linear
493 |
494 | [convolutional]
495 | batch_normalize=1
496 | filters=512
497 | size=1
498 | stride=1
499 | pad=1
500 | activation=leaky
501 |
502 | [convolutional]
503 | batch_normalize=1
504 | filters=1024
505 | size=3
506 | stride=1
507 | pad=1
508 | activation=leaky
509 |
510 | [shortcut]
511 | from=-3
512 | activation=linear
513 |
514 | [convolutional]
515 | batch_normalize=1
516 | filters=512
517 | size=1
518 | stride=1
519 | pad=1
520 | activation=leaky
521 |
522 | [convolutional]
523 | batch_normalize=1
524 | filters=1024
525 | size=3
526 | stride=1
527 | pad=1
528 | activation=leaky
529 |
530 | [shortcut]
531 | from=-3
532 | activation=linear
533 |
534 | [convolutional]
535 | batch_normalize=1
536 | filters=512
537 | size=1
538 | stride=1
539 | pad=1
540 | activation=leaky
541 |
542 | [convolutional]
543 | batch_normalize=1
544 | filters=1024
545 | size=3
546 | stride=1
547 | pad=1
548 | activation=leaky
549 |
550 | [shortcut]
551 | from=-3
552 | activation=linear
553 |
554 | ######################
555 |
556 | [convolutional]
557 | batch_normalize=1
558 | filters=512
559 | size=1
560 | stride=1
561 | pad=1
562 | activation=leaky
563 |
564 | [convolutional]
565 | batch_normalize=1
566 | size=3
567 | stride=1
568 | pad=1
569 | filters=1024
570 | activation=leaky
571 |
572 | [convolutional]
573 | batch_normalize=1
574 | filters=512
575 | size=1
576 | stride=1
577 | pad=1
578 | activation=leaky
579 |
580 | [convolutional]
581 | batch_normalize=1
582 | size=3
583 | stride=1
584 | pad=1
585 | filters=1024
586 | activation=leaky
587 |
588 | [convolutional]
589 | batch_normalize=1
590 | filters=512
591 | size=1
592 | stride=1
593 | pad=1
594 | activation=leaky
595 |
596 | [convolutional]
597 | batch_normalize=1
598 | size=3
599 | stride=1
600 | pad=1
601 | filters=1024
602 | activation=leaky
603 |
604 | [convolutional]
605 | size=1
606 | stride=1
607 | pad=1
608 | filters=$(expr 3 \* $(expr $NUM_CLASSES \+ 5))
609 | activation=linear
610 |
611 |
612 | [yolo]
613 | mask = 6,7,8
614 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
615 | classes=$NUM_CLASSES
616 | num=9
617 | jitter=.3
618 | ignore_thresh = .7
619 | truth_thresh = 1
620 | random=1
621 |
622 |
623 | [route]
624 | layers = -4
625 |
626 | [convolutional]
627 | batch_normalize=1
628 | filters=256
629 | size=1
630 | stride=1
631 | pad=1
632 | activation=leaky
633 |
634 | [upsample]
635 | stride=2
636 |
637 | [route]
638 | layers = -1, 61
639 |
640 |
641 |
642 | [convolutional]
643 | batch_normalize=1
644 | filters=256
645 | size=1
646 | stride=1
647 | pad=1
648 | activation=leaky
649 |
650 | [convolutional]
651 | batch_normalize=1
652 | size=3
653 | stride=1
654 | pad=1
655 | filters=512
656 | activation=leaky
657 |
658 | [convolutional]
659 | batch_normalize=1
660 | filters=256
661 | size=1
662 | stride=1
663 | pad=1
664 | activation=leaky
665 |
666 | [convolutional]
667 | batch_normalize=1
668 | size=3
669 | stride=1
670 | pad=1
671 | filters=512
672 | activation=leaky
673 |
674 | [convolutional]
675 | batch_normalize=1
676 | filters=256
677 | size=1
678 | stride=1
679 | pad=1
680 | activation=leaky
681 |
682 | [convolutional]
683 | batch_normalize=1
684 | size=3
685 | stride=1
686 | pad=1
687 | filters=512
688 | activation=leaky
689 |
690 | [convolutional]
691 | size=1
692 | stride=1
693 | pad=1
694 | filters=$(expr 3 \* $(expr $NUM_CLASSES \+ 5))
695 | activation=linear
696 |
697 |
698 | [yolo]
699 | mask = 3,4,5
700 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
701 | classes=$NUM_CLASSES
702 | num=9
703 | jitter=.3
704 | ignore_thresh = .7
705 | truth_thresh = 1
706 | random=1
707 |
708 |
709 |
710 | [route]
711 | layers = -4
712 |
713 | [convolutional]
714 | batch_normalize=1
715 | filters=128
716 | size=1
717 | stride=1
718 | pad=1
719 | activation=leaky
720 |
721 | [upsample]
722 | stride=2
723 |
724 | [route]
725 | layers = -1, 36
726 |
727 |
728 |
729 | [convolutional]
730 | batch_normalize=1
731 | filters=128
732 | size=1
733 | stride=1
734 | pad=1
735 | activation=leaky
736 |
737 | [convolutional]
738 | batch_normalize=1
739 | size=3
740 | stride=1
741 | pad=1
742 | filters=256
743 | activation=leaky
744 |
745 | [convolutional]
746 | batch_normalize=1
747 | filters=128
748 | size=1
749 | stride=1
750 | pad=1
751 | activation=leaky
752 |
753 | [convolutional]
754 | batch_normalize=1
755 | size=3
756 | stride=1
757 | pad=1
758 | filters=256
759 | activation=leaky
760 |
761 | [convolutional]
762 | batch_normalize=1
763 | filters=128
764 | size=1
765 | stride=1
766 | pad=1
767 | activation=leaky
768 |
769 | [convolutional]
770 | batch_normalize=1
771 | size=3
772 | stride=1
773 | pad=1
774 | filters=256
775 | activation=leaky
776 |
777 | [convolutional]
778 | size=1
779 | stride=1
780 | pad=1
781 | filters=$(expr 3 \* $(expr $NUM_CLASSES \+ 5))
782 | activation=linear
783 |
784 |
785 | [yolo]
786 | mask = 0,1,2
787 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
788 | classes=$NUM_CLASSES
789 | num=9
790 | jitter=.3
791 | ignore_thresh = .7
792 | truth_thresh = 1
793 | random=1
794 | " >> yolov3-custom.cfg
795 |
--------------------------------------------------------------------------------
/app/PyTorch_YOLOv3/config/custom.data:
--------------------------------------------------------------------------------
1 | classes= 1
2 | train=data/custom/train.txt
3 | valid=data/custom/valid.txt
4 | names=data/custom/classes.names
5 |
--------------------------------------------------------------------------------
/app/PyTorch_YOLOv3/config/yolov3-custom.cfg:
--------------------------------------------------------------------------------
1 |
2 | [net]
3 | # Testing
4 | #batch=1
5 | #subdivisions=1
6 | # Training
7 | batch=16
8 | subdivisions=1
9 | width=416
10 | height=416
11 | # width=608
12 | # height=608
13 | channels=1
14 | momentum=0.9
15 | decay=0.0005
16 | angle=0
17 | saturation = 1.5
18 | exposure = 1.5
19 | hue=.1
20 |
21 | learning_rate=0.001
22 | burn_in=1000
23 | max_batches = 500200
24 | policy=steps
25 | steps=400000,450000
26 | scales=.1,.1
27 |
28 | [convolutional]
29 | batch_normalize=1
30 | filters=32
31 | size=3
32 | stride=1
33 | pad=1
34 | activation=leaky
35 |
36 | # Downsample
37 |
38 | [convolutional]
39 | batch_normalize=1
40 | filters=64
41 | size=3
42 | stride=2
43 | pad=1
44 | activation=leaky
45 |
46 | [convolutional]
47 | batch_normalize=1
48 | filters=32
49 | size=1
50 | stride=1
51 | pad=1
52 | activation=leaky
53 |
54 | [convolutional]
55 | batch_normalize=1
56 | filters=64
57 | size=3
58 | stride=1
59 | pad=1
60 | activation=leaky
61 |
62 | [shortcut]
63 | from=-3
64 | activation=linear
65 |
66 | # Downsample
67 |
68 | [convolutional]
69 | batch_normalize=1
70 | filters=128
71 | size=3
72 | stride=2
73 | pad=1
74 | activation=leaky
75 |
76 | [convolutional]
77 | batch_normalize=1
78 | filters=64
79 | size=1
80 | stride=1
81 | pad=1
82 | activation=leaky
83 |
84 | [convolutional]
85 | batch_normalize=1
86 | filters=128
87 | size=3
88 | stride=1
89 | pad=1
90 | activation=leaky
91 |
92 | [shortcut]
93 | from=-3
94 | activation=linear
95 |
96 | [convolutional]
97 | batch_normalize=1
98 | filters=64
99 | size=1
100 | stride=1
101 | pad=1
102 | activation=leaky
103 |
104 | [convolutional]
105 | batch_normalize=1
106 | filters=128
107 | size=3
108 | stride=1
109 | pad=1
110 | activation=leaky
111 |
112 | [shortcut]
113 | from=-3
114 | activation=linear
115 |
116 | # Downsample
117 |
118 | [convolutional]
119 | batch_normalize=1
120 | filters=256
121 | size=3
122 | stride=2
123 | pad=1
124 | activation=leaky
125 |
126 | [convolutional]
127 | batch_normalize=1
128 | filters=128
129 | size=1
130 | stride=1
131 | pad=1
132 | activation=leaky
133 |
134 | [convolutional]
135 | batch_normalize=1
136 | filters=256
137 | size=3
138 | stride=1
139 | pad=1
140 | activation=leaky
141 |
142 | [shortcut]
143 | from=-3
144 | activation=linear
145 |
146 | [convolutional]
147 | batch_normalize=1
148 | filters=128
149 | size=1
150 | stride=1
151 | pad=1
152 | activation=leaky
153 |
154 | [convolutional]
155 | batch_normalize=1
156 | filters=256
157 | size=3
158 | stride=1
159 | pad=1
160 | activation=leaky
161 |
162 | [shortcut]
163 | from=-3
164 | activation=linear
165 |
166 | [convolutional]
167 | batch_normalize=1
168 | filters=128
169 | size=1
170 | stride=1
171 | pad=1
172 | activation=leaky
173 |
174 | [convolutional]
175 | batch_normalize=1
176 | filters=256
177 | size=3
178 | stride=1
179 | pad=1
180 | activation=leaky
181 |
182 | [shortcut]
183 | from=-3
184 | activation=linear
185 |
186 | [convolutional]
187 | batch_normalize=1
188 | filters=128
189 | size=1
190 | stride=1
191 | pad=1
192 | activation=leaky
193 |
194 | [convolutional]
195 | batch_normalize=1
196 | filters=256
197 | size=3
198 | stride=1
199 | pad=1
200 | activation=leaky
201 |
202 | [shortcut]
203 | from=-3
204 | activation=linear
205 |
206 |
207 | [convolutional]
208 | batch_normalize=1
209 | filters=128
210 | size=1
211 | stride=1
212 | pad=1
213 | activation=leaky
214 |
215 | [convolutional]
216 | batch_normalize=1
217 | filters=256
218 | size=3
219 | stride=1
220 | pad=1
221 | activation=leaky
222 |
223 | [shortcut]
224 | from=-3
225 | activation=linear
226 |
227 | [convolutional]
228 | batch_normalize=1
229 | filters=128
230 | size=1
231 | stride=1
232 | pad=1
233 | activation=leaky
234 |
235 | [convolutional]
236 | batch_normalize=1
237 | filters=256
238 | size=3
239 | stride=1
240 | pad=1
241 | activation=leaky
242 |
243 | [shortcut]
244 | from=-3
245 | activation=linear
246 |
247 | [convolutional]
248 | batch_normalize=1
249 | filters=128
250 | size=1
251 | stride=1
252 | pad=1
253 | activation=leaky
254 |
255 | [convolutional]
256 | batch_normalize=1
257 | filters=256
258 | size=3
259 | stride=1
260 | pad=1
261 | activation=leaky
262 |
263 | [shortcut]
264 | from=-3
265 | activation=linear
266 |
267 | [convolutional]
268 | batch_normalize=1
269 | filters=128
270 | size=1
271 | stride=1
272 | pad=1
273 | activation=leaky
274 |
275 | [convolutional]
276 | batch_normalize=1
277 | filters=256
278 | size=3
279 | stride=1
280 | pad=1
281 | activation=leaky
282 |
283 | [shortcut]
284 | from=-3
285 | activation=linear
286 |
287 | # Downsample
288 |
289 | [convolutional]
290 | batch_normalize=1
291 | filters=512
292 | size=3
293 | stride=2
294 | pad=1
295 | activation=leaky
296 |
297 | [convolutional]
298 | batch_normalize=1
299 | filters=256
300 | size=1
301 | stride=1
302 | pad=1
303 | activation=leaky
304 |
305 | [convolutional]
306 | batch_normalize=1
307 | filters=512
308 | size=3
309 | stride=1
310 | pad=1
311 | activation=leaky
312 |
313 | [shortcut]
314 | from=-3
315 | activation=linear
316 |
317 |
318 | [convolutional]
319 | batch_normalize=1
320 | filters=256
321 | size=1
322 | stride=1
323 | pad=1
324 | activation=leaky
325 |
326 | [convolutional]
327 | batch_normalize=1
328 | filters=512
329 | size=3
330 | stride=1
331 | pad=1
332 | activation=leaky
333 |
334 | [shortcut]
335 | from=-3
336 | activation=linear
337 |
338 |
339 | [convolutional]
340 | batch_normalize=1
341 | filters=256
342 | size=1
343 | stride=1
344 | pad=1
345 | activation=leaky
346 |
347 | [convolutional]
348 | batch_normalize=1
349 | filters=512
350 | size=3
351 | stride=1
352 | pad=1
353 | activation=leaky
354 |
355 | [shortcut]
356 | from=-3
357 | activation=linear
358 |
359 |
360 | [convolutional]
361 | batch_normalize=1
362 | filters=256
363 | size=1
364 | stride=1
365 | pad=1
366 | activation=leaky
367 |
368 | [convolutional]
369 | batch_normalize=1
370 | filters=512
371 | size=3
372 | stride=1
373 | pad=1
374 | activation=leaky
375 |
376 | [shortcut]
377 | from=-3
378 | activation=linear
379 |
380 | [convolutional]
381 | batch_normalize=1
382 | filters=256
383 | size=1
384 | stride=1
385 | pad=1
386 | activation=leaky
387 |
388 | [convolutional]
389 | batch_normalize=1
390 | filters=512
391 | size=3
392 | stride=1
393 | pad=1
394 | activation=leaky
395 |
396 | [shortcut]
397 | from=-3
398 | activation=linear
399 |
400 |
401 | [convolutional]
402 | batch_normalize=1
403 | filters=256
404 | size=1
405 | stride=1
406 | pad=1
407 | activation=leaky
408 |
409 | [convolutional]
410 | batch_normalize=1
411 | filters=512
412 | size=3
413 | stride=1
414 | pad=1
415 | activation=leaky
416 |
417 | [shortcut]
418 | from=-3
419 | activation=linear
420 |
421 |
422 | [convolutional]
423 | batch_normalize=1
424 | filters=256
425 | size=1
426 | stride=1
427 | pad=1
428 | activation=leaky
429 |
430 | [convolutional]
431 | batch_normalize=1
432 | filters=512
433 | size=3
434 | stride=1
435 | pad=1
436 | activation=leaky
437 |
438 | [shortcut]
439 | from=-3
440 | activation=linear
441 |
442 | [convolutional]
443 | batch_normalize=1
444 | filters=256
445 | size=1
446 | stride=1
447 | pad=1
448 | activation=leaky
449 |
450 | [convolutional]
451 | batch_normalize=1
452 | filters=512
453 | size=3
454 | stride=1
455 | pad=1
456 | activation=leaky
457 |
458 | [shortcut]
459 | from=-3
460 | activation=linear
461 |
462 | # Downsample
463 |
464 | [convolutional]
465 | batch_normalize=1
466 | filters=1024
467 | size=3
468 | stride=2
469 | pad=1
470 | activation=leaky
471 |
472 | [convolutional]
473 | batch_normalize=1
474 | filters=512
475 | size=1
476 | stride=1
477 | pad=1
478 | activation=leaky
479 |
480 | [convolutional]
481 | batch_normalize=1
482 | filters=1024
483 | size=3
484 | stride=1
485 | pad=1
486 | activation=leaky
487 |
488 | [shortcut]
489 | from=-3
490 | activation=linear
491 |
492 | [convolutional]
493 | batch_normalize=1
494 | filters=512
495 | size=1
496 | stride=1
497 | pad=1
498 | activation=leaky
499 |
500 | [convolutional]
501 | batch_normalize=1
502 | filters=1024
503 | size=3
504 | stride=1
505 | pad=1
506 | activation=leaky
507 |
508 | [shortcut]
509 | from=-3
510 | activation=linear
511 |
512 | [convolutional]
513 | batch_normalize=1
514 | filters=512
515 | size=1
516 | stride=1
517 | pad=1
518 | activation=leaky
519 |
520 | [convolutional]
521 | batch_normalize=1
522 | filters=1024
523 | size=3
524 | stride=1
525 | pad=1
526 | activation=leaky
527 |
528 | [shortcut]
529 | from=-3
530 | activation=linear
531 |
532 | [convolutional]
533 | batch_normalize=1
534 | filters=512
535 | size=1
536 | stride=1
537 | pad=1
538 | activation=leaky
539 |
540 | [convolutional]
541 | batch_normalize=1
542 | filters=1024
543 | size=3
544 | stride=1
545 | pad=1
546 | activation=leaky
547 |
548 | [shortcut]
549 | from=-3
550 | activation=linear
551 |
552 | ######################
553 |
554 | [convolutional]
555 | batch_normalize=1
556 | filters=512
557 | size=1
558 | stride=1
559 | pad=1
560 | activation=leaky
561 |
562 | [convolutional]
563 | batch_normalize=1
564 | size=3
565 | stride=1
566 | pad=1
567 | filters=1024
568 | activation=leaky
569 |
570 | [convolutional]
571 | batch_normalize=1
572 | filters=512
573 | size=1
574 | stride=1
575 | pad=1
576 | activation=leaky
577 |
578 | [convolutional]
579 | batch_normalize=1
580 | size=3
581 | stride=1
582 | pad=1
583 | filters=1024
584 | activation=leaky
585 |
586 | [convolutional]
587 | batch_normalize=1
588 | filters=512
589 | size=1
590 | stride=1
591 | pad=1
592 | activation=leaky
593 |
594 | [convolutional]
595 | batch_normalize=1
596 | size=3
597 | stride=1
598 | pad=1
599 | filters=1024
600 | activation=leaky
601 |
602 | [convolutional]
603 | size=1
604 | stride=1
605 | pad=1
606 | filters=18
607 | activation=linear
608 |
609 |
610 | [yolo]
611 | mask = 6,7,8
612 | # anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
613 | anchors = 40, 24, 43,26, 46,27, 51,30, 65,38, 79,46, 88,49, 93,51, 97,52
614 | # anchors = 40,24, 46,27, 51,30, 60,30, 65,38, 75,46, 85,43, 90,51, 97,52
615 | classes=1
616 | num=9
617 | jitter=.3
618 | ignore_thresh = .7
619 | truth_thresh = 1
620 | random=1
621 |
622 |
623 | [route]
624 | layers = -4
625 |
626 | [convolutional]
627 | batch_normalize=1
628 | filters=256
629 | size=1
630 | stride=1
631 | pad=1
632 | activation=leaky
633 |
634 | [upsample]
635 | stride=2
636 |
637 | [route]
638 | layers = -1, 61
639 |
640 |
641 |
642 | [convolutional]
643 | batch_normalize=1
644 | filters=256
645 | size=1
646 | stride=1
647 | pad=1
648 | activation=leaky
649 |
650 | [convolutional]
651 | batch_normalize=1
652 | size=3
653 | stride=1
654 | pad=1
655 | filters=512
656 | activation=leaky
657 |
658 | [convolutional]
659 | batch_normalize=1
660 | filters=256
661 | size=1
662 | stride=1
663 | pad=1
664 | activation=leaky
665 |
666 | [convolutional]
667 | batch_normalize=1
668 | size=3
669 | stride=1
670 | pad=1
671 | filters=512
672 | activation=leaky
673 |
674 | [convolutional]
675 | batch_normalize=1
676 | filters=256
677 | size=1
678 | stride=1
679 | pad=1
680 | activation=leaky
681 |
682 | [convolutional]
683 | batch_normalize=1
684 | size=3
685 | stride=1
686 | pad=1
687 | filters=512
688 | activation=leaky
689 |
690 | [convolutional]
691 | size=1
692 | stride=1
693 | pad=1
694 | filters=18
695 | activation=linear
696 |
697 |
698 | [yolo]
699 | mask = 3,4,5
700 | # anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
701 | anchors = 40, 24, 43,26, 46,27, 51,30, 65,38, 79,46, 88,49, 93,51, 97,52
702 | # anchors = 40,24, 46,27, 51,30, 60,30, 65,38, 75,46, 85,43, 90,51, 97,52
703 | classes=1
704 | num=9
705 | jitter=.3
706 | ignore_thresh = .7
707 | truth_thresh = 1
708 | random=1
709 |
710 |
711 |
712 | [route]
713 | layers = -4
714 |
715 | [convolutional]
716 | batch_normalize=1
717 | filters=128
718 | size=1
719 | stride=1
720 | pad=1
721 | activation=leaky
722 |
723 | [upsample]
724 | stride=2
725 |
726 | [route]
727 | layers = -1, 36
728 |
729 |
730 |
731 | [convolutional]
732 | batch_normalize=1
733 | filters=128
734 | size=1
735 | stride=1
736 | pad=1
737 | activation=leaky
738 |
739 | [convolutional]
740 | batch_normalize=1
741 | size=3
742 | stride=1
743 | pad=1
744 | filters=256
745 | activation=leaky
746 |
747 | [convolutional]
748 | batch_normalize=1
749 | filters=128
750 | size=1
751 | stride=1
752 | pad=1
753 | activation=leaky
754 |
755 | [convolutional]
756 | batch_normalize=1
757 | size=3
758 | stride=1
759 | pad=1
760 | filters=256
761 | activation=leaky
762 |
763 | [convolutional]
764 | batch_normalize=1
765 | filters=128
766 | size=1
767 | stride=1
768 | pad=1
769 | activation=leaky
770 |
771 | [convolutional]
772 | batch_normalize=1
773 | size=3
774 | stride=1
775 | pad=1
776 | filters=256
777 | activation=leaky
778 |
779 | [convolutional]
780 | size=1
781 | stride=1
782 | pad=1
783 | filters=18
784 | activation=linear
785 |
786 |
787 | [yolo]
788 | mask = 0,1,2
789 | # anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
790 | anchors = 40, 24, 43,26, 46,27, 51,30, 65,38, 79,46, 88,49, 93,51, 97,52
791 | # anchors = 40,24, 46,27, 51,30, 60,30, 65,38, 75,46, 85,43, 90,51, 97,52
792 | classes=1
793 | num=9
794 | jitter=.3
795 | ignore_thresh = .7
796 | truth_thresh = 1
797 | random=1
798 |
799 |
--------------------------------------------------------------------------------
/app/PyTorch_YOLOv3/config/yolov3-tiny.cfg:
--------------------------------------------------------------------------------
1 | [net]
2 | # Testing
3 | batch=1
4 | subdivisions=1
5 | # Training
6 | # batch=64
7 | # subdivisions=2
8 | width=416
9 | height=416
10 | channels=3
11 | momentum=0.9
12 | decay=0.0005
13 | angle=0
14 | saturation = 1.5
15 | exposure = 1.5
16 | hue=.1
17 |
18 | learning_rate=0.001
19 | burn_in=1000
20 | max_batches = 500200
21 | policy=steps
22 | steps=400000,450000
23 | scales=.1,.1
24 |
25 | # 0
26 | [convolutional]
27 | batch_normalize=1
28 | filters=16
29 | size=3
30 | stride=1
31 | pad=1
32 | activation=leaky
33 |
34 | # 1
35 | [maxpool]
36 | size=2
37 | stride=2
38 |
39 | # 2
40 | [convolutional]
41 | batch_normalize=1
42 | filters=32
43 | size=3
44 | stride=1
45 | pad=1
46 | activation=leaky
47 |
48 | # 3
49 | [maxpool]
50 | size=2
51 | stride=2
52 |
53 | # 4
54 | [convolutional]
55 | batch_normalize=1
56 | filters=64
57 | size=3
58 | stride=1
59 | pad=1
60 | activation=leaky
61 |
62 | # 5
63 | [maxpool]
64 | size=2
65 | stride=2
66 |
67 | # 6
68 | [convolutional]
69 | batch_normalize=1
70 | filters=128
71 | size=3
72 | stride=1
73 | pad=1
74 | activation=leaky
75 |
76 | # 7
77 | [maxpool]
78 | size=2
79 | stride=2
80 |
81 | # 8
82 | [convolutional]
83 | batch_normalize=1
84 | filters=256
85 | size=3
86 | stride=1
87 | pad=1
88 | activation=leaky
89 |
90 | # 9
91 | [maxpool]
92 | size=2
93 | stride=2
94 |
95 | # 10
96 | [convolutional]
97 | batch_normalize=1
98 | filters=512
99 | size=3
100 | stride=1
101 | pad=1
102 | activation=leaky
103 |
104 | # 11
105 | [maxpool]
106 | size=2
107 | stride=1
108 |
109 | # 12
110 | [convolutional]
111 | batch_normalize=1
112 | filters=1024
113 | size=3
114 | stride=1
115 | pad=1
116 | activation=leaky
117 |
118 | ###########
119 |
120 | # 13
121 | [convolutional]
122 | batch_normalize=1
123 | filters=256
124 | size=1
125 | stride=1
126 | pad=1
127 | activation=leaky
128 |
129 | # 14
130 | [convolutional]
131 | batch_normalize=1
132 | filters=512
133 | size=3
134 | stride=1
135 | pad=1
136 | activation=leaky
137 |
138 | # 15
139 | [convolutional]
140 | size=1
141 | stride=1
142 | pad=1
143 | filters=255
144 | activation=linear
145 |
146 |
147 |
148 | # 16
149 | [yolo]
150 | mask = 3,4,5
151 | anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
152 | classes=80
153 | num=6
154 | jitter=.3
155 | ignore_thresh = .7
156 | truth_thresh = 1
157 | random=1
158 |
159 | # 17
160 | [route]
161 | layers = -4
162 |
163 | # 18
164 | [convolutional]
165 | batch_normalize=1
166 | filters=128
167 | size=1
168 | stride=1
169 | pad=1
170 | activation=leaky
171 |
172 | # 19
173 | [upsample]
174 | stride=2
175 |
176 | # 20
177 | [route]
178 | layers = -1, 8
179 |
180 | # 21
181 | [convolutional]
182 | batch_normalize=1
183 | filters=256
184 | size=3
185 | stride=1
186 | pad=1
187 | activation=leaky
188 |
189 | # 22
190 | [convolutional]
191 | size=1
192 | stride=1
193 | pad=1
194 | filters=255
195 | activation=linear
196 |
197 | # 23
198 | [yolo]
199 | mask = 1,2,3
200 | anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
201 | classes=80
202 | num=6
203 | jitter=.3
204 | ignore_thresh = .7
205 | truth_thresh = 1
206 | random=1
207 |
--------------------------------------------------------------------------------
/app/PyTorch_YOLOv3/config/yolov3.cfg:
--------------------------------------------------------------------------------
1 | [net]
2 | # Testing
3 | #batch=1
4 | #subdivisions=1
5 | # Training
6 | batch=16
7 | subdivisions=1
8 | width=416
9 | height=416
10 | channels=3
11 | momentum=0.9
12 | decay=0.0005
13 | angle=0
14 | saturation = 1.5
15 | exposure = 1.5
16 | hue=.1
17 |
18 | learning_rate=0.001
19 | burn_in=1000
20 | max_batches = 500200
21 | policy=steps
22 | steps=400000,450000
23 | scales=.1,.1
24 |
25 | [convolutional]
26 | batch_normalize=1
27 | filters=32
28 | size=3
29 | stride=1
30 | pad=1
31 | activation=leaky
32 |
33 | # Downsample
34 |
35 | [convolutional]
36 | batch_normalize=1
37 | filters=64
38 | size=3
39 | stride=2
40 | pad=1
41 | activation=leaky
42 |
43 | [convolutional]
44 | batch_normalize=1
45 | filters=32
46 | size=1
47 | stride=1
48 | pad=1
49 | activation=leaky
50 |
51 | [convolutional]
52 | batch_normalize=1
53 | filters=64
54 | size=3
55 | stride=1
56 | pad=1
57 | activation=leaky
58 |
59 | [shortcut]
60 | from=-3
61 | activation=linear
62 |
63 | # Downsample
64 |
65 | [convolutional]
66 | batch_normalize=1
67 | filters=128
68 | size=3
69 | stride=2
70 | pad=1
71 | activation=leaky
72 |
73 | [convolutional]
74 | batch_normalize=1
75 | filters=64
76 | size=1
77 | stride=1
78 | pad=1
79 | activation=leaky
80 |
81 | [convolutional]
82 | batch_normalize=1
83 | filters=128
84 | size=3
85 | stride=1
86 | pad=1
87 | activation=leaky
88 |
89 | [shortcut]
90 | from=-3
91 | activation=linear
92 |
93 | [convolutional]
94 | batch_normalize=1
95 | filters=64
96 | size=1
97 | stride=1
98 | pad=1
99 | activation=leaky
100 |
101 | [convolutional]
102 | batch_normalize=1
103 | filters=128
104 | size=3
105 | stride=1
106 | pad=1
107 | activation=leaky
108 |
109 | [shortcut]
110 | from=-3
111 | activation=linear
112 |
113 | # Downsample
114 |
115 | [convolutional]
116 | batch_normalize=1
117 | filters=256
118 | size=3
119 | stride=2
120 | pad=1
121 | activation=leaky
122 |
123 | [convolutional]
124 | batch_normalize=1
125 | filters=128
126 | size=1
127 | stride=1
128 | pad=1
129 | activation=leaky
130 |
131 | [convolutional]
132 | batch_normalize=1
133 | filters=256
134 | size=3
135 | stride=1
136 | pad=1
137 | activation=leaky
138 |
139 | [shortcut]
140 | from=-3
141 | activation=linear
142 |
143 | [convolutional]
144 | batch_normalize=1
145 | filters=128
146 | size=1
147 | stride=1
148 | pad=1
149 | activation=leaky
150 |
151 | [convolutional]
152 | batch_normalize=1
153 | filters=256
154 | size=3
155 | stride=1
156 | pad=1
157 | activation=leaky
158 |
159 | [shortcut]
160 | from=-3
161 | activation=linear
162 |
163 | [convolutional]
164 | batch_normalize=1
165 | filters=128
166 | size=1
167 | stride=1
168 | pad=1
169 | activation=leaky
170 |
171 | [convolutional]
172 | batch_normalize=1
173 | filters=256
174 | size=3
175 | stride=1
176 | pad=1
177 | activation=leaky
178 |
179 | [shortcut]
180 | from=-3
181 | activation=linear
182 |
183 | [convolutional]
184 | batch_normalize=1
185 | filters=128
186 | size=1
187 | stride=1
188 | pad=1
189 | activation=leaky
190 |
191 | [convolutional]
192 | batch_normalize=1
193 | filters=256
194 | size=3
195 | stride=1
196 | pad=1
197 | activation=leaky
198 |
199 | [shortcut]
200 | from=-3
201 | activation=linear
202 |
203 |
204 | [convolutional]
205 | batch_normalize=1
206 | filters=128
207 | size=1
208 | stride=1
209 | pad=1
210 | activation=leaky
211 |
212 | [convolutional]
213 | batch_normalize=1
214 | filters=256
215 | size=3
216 | stride=1
217 | pad=1
218 | activation=leaky
219 |
220 | [shortcut]
221 | from=-3
222 | activation=linear
223 |
224 | [convolutional]
225 | batch_normalize=1
226 | filters=128
227 | size=1
228 | stride=1
229 | pad=1
230 | activation=leaky
231 |
232 | [convolutional]
233 | batch_normalize=1
234 | filters=256
235 | size=3
236 | stride=1
237 | pad=1
238 | activation=leaky
239 |
240 | [shortcut]
241 | from=-3
242 | activation=linear
243 |
244 | [convolutional]
245 | batch_normalize=1
246 | filters=128
247 | size=1
248 | stride=1
249 | pad=1
250 | activation=leaky
251 |
252 | [convolutional]
253 | batch_normalize=1
254 | filters=256
255 | size=3
256 | stride=1
257 | pad=1
258 | activation=leaky
259 |
260 | [shortcut]
261 | from=-3
262 | activation=linear
263 |
264 | [convolutional]
265 | batch_normalize=1
266 | filters=128
267 | size=1
268 | stride=1
269 | pad=1
270 | activation=leaky
271 |
272 | [convolutional]
273 | batch_normalize=1
274 | filters=256
275 | size=3
276 | stride=1
277 | pad=1
278 | activation=leaky
279 |
280 | [shortcut]
281 | from=-3
282 | activation=linear
283 |
284 | # Downsample
285 |
286 | [convolutional]
287 | batch_normalize=1
288 | filters=512
289 | size=3
290 | stride=2
291 | pad=1
292 | activation=leaky
293 |
294 | [convolutional]
295 | batch_normalize=1
296 | filters=256
297 | size=1
298 | stride=1
299 | pad=1
300 | activation=leaky
301 |
302 | [convolutional]
303 | batch_normalize=1
304 | filters=512
305 | size=3
306 | stride=1
307 | pad=1
308 | activation=leaky
309 |
310 | [shortcut]
311 | from=-3
312 | activation=linear
313 |
314 |
315 | [convolutional]
316 | batch_normalize=1
317 | filters=256
318 | size=1
319 | stride=1
320 | pad=1
321 | activation=leaky
322 |
323 | [convolutional]
324 | batch_normalize=1
325 | filters=512
326 | size=3
327 | stride=1
328 | pad=1
329 | activation=leaky
330 |
331 | [shortcut]
332 | from=-3
333 | activation=linear
334 |
335 |
336 | [convolutional]
337 | batch_normalize=1
338 | filters=256
339 | size=1
340 | stride=1
341 | pad=1
342 | activation=leaky
343 |
344 | [convolutional]
345 | batch_normalize=1
346 | filters=512
347 | size=3
348 | stride=1
349 | pad=1
350 | activation=leaky
351 |
352 | [shortcut]
353 | from=-3
354 | activation=linear
355 |
356 |
357 | [convolutional]
358 | batch_normalize=1
359 | filters=256
360 | size=1
361 | stride=1
362 | pad=1
363 | activation=leaky
364 |
365 | [convolutional]
366 | batch_normalize=1
367 | filters=512
368 | size=3
369 | stride=1
370 | pad=1
371 | activation=leaky
372 |
373 | [shortcut]
374 | from=-3
375 | activation=linear
376 |
377 | [convolutional]
378 | batch_normalize=1
379 | filters=256
380 | size=1
381 | stride=1
382 | pad=1
383 | activation=leaky
384 |
385 | [convolutional]
386 | batch_normalize=1
387 | filters=512
388 | size=3
389 | stride=1
390 | pad=1
391 | activation=leaky
392 |
393 | [shortcut]
394 | from=-3
395 | activation=linear
396 |
397 |
398 | [convolutional]
399 | batch_normalize=1
400 | filters=256
401 | size=1
402 | stride=1
403 | pad=1
404 | activation=leaky
405 |
406 | [convolutional]
407 | batch_normalize=1
408 | filters=512
409 | size=3
410 | stride=1
411 | pad=1
412 | activation=leaky
413 |
414 | [shortcut]
415 | from=-3
416 | activation=linear
417 |
418 |
419 | [convolutional]
420 | batch_normalize=1
421 | filters=256
422 | size=1
423 | stride=1
424 | pad=1
425 | activation=leaky
426 |
427 | [convolutional]
428 | batch_normalize=1
429 | filters=512
430 | size=3
431 | stride=1
432 | pad=1
433 | activation=leaky
434 |
435 | [shortcut]
436 | from=-3
437 | activation=linear
438 |
439 | [convolutional]
440 | batch_normalize=1
441 | filters=256
442 | size=1
443 | stride=1
444 | pad=1
445 | activation=leaky
446 |
447 | [convolutional]
448 | batch_normalize=1
449 | filters=512
450 | size=3
451 | stride=1
452 | pad=1
453 | activation=leaky
454 |
455 | [shortcut]
456 | from=-3
457 | activation=linear
458 |
459 | # Downsample
460 |
461 | [convolutional]
462 | batch_normalize=1
463 | filters=1024
464 | size=3
465 | stride=2
466 | pad=1
467 | activation=leaky
468 |
469 | [convolutional]
470 | batch_normalize=1
471 | filters=512
472 | size=1
473 | stride=1
474 | pad=1
475 | activation=leaky
476 |
477 | [convolutional]
478 | batch_normalize=1
479 | filters=1024
480 | size=3
481 | stride=1
482 | pad=1
483 | activation=leaky
484 |
485 | [shortcut]
486 | from=-3
487 | activation=linear
488 |
489 | [convolutional]
490 | batch_normalize=1
491 | filters=512
492 | size=1
493 | stride=1
494 | pad=1
495 | activation=leaky
496 |
497 | [convolutional]
498 | batch_normalize=1
499 | filters=1024
500 | size=3
501 | stride=1
502 | pad=1
503 | activation=leaky
504 |
505 | [shortcut]
506 | from=-3
507 | activation=linear
508 |
509 | [convolutional]
510 | batch_normalize=1
511 | filters=512
512 | size=1
513 | stride=1
514 | pad=1
515 | activation=leaky
516 |
517 | [convolutional]
518 | batch_normalize=1
519 | filters=1024
520 | size=3
521 | stride=1
522 | pad=1
523 | activation=leaky
524 |
525 | [shortcut]
526 | from=-3
527 | activation=linear
528 |
529 | [convolutional]
530 | batch_normalize=1
531 | filters=512
532 | size=1
533 | stride=1
534 | pad=1
535 | activation=leaky
536 |
537 | [convolutional]
538 | batch_normalize=1
539 | filters=1024
540 | size=3
541 | stride=1
542 | pad=1
543 | activation=leaky
544 |
545 | [shortcut]
546 | from=-3
547 | activation=linear
548 |
549 | ######################
550 |
551 | [convolutional]
552 | batch_normalize=1
553 | filters=512
554 | size=1
555 | stride=1
556 | pad=1
557 | activation=leaky
558 |
559 | [convolutional]
560 | batch_normalize=1
561 | size=3
562 | stride=1
563 | pad=1
564 | filters=1024
565 | activation=leaky
566 |
567 | [convolutional]
568 | batch_normalize=1
569 | filters=512
570 | size=1
571 | stride=1
572 | pad=1
573 | activation=leaky
574 |
575 | [convolutional]
576 | batch_normalize=1
577 | size=3
578 | stride=1
579 | pad=1
580 | filters=1024
581 | activation=leaky
582 |
583 | [convolutional]
584 | batch_normalize=1
585 | filters=512
586 | size=1
587 | stride=1
588 | pad=1
589 | activation=leaky
590 |
591 | [convolutional]
592 | batch_normalize=1
593 | size=3
594 | stride=1
595 | pad=1
596 | filters=1024
597 | activation=leaky
598 |
599 | [convolutional]
600 | size=1
601 | stride=1
602 | pad=1
603 | filters=255
604 | activation=linear
605 |
606 |
607 | [yolo]
608 | mask = 6,7,8
609 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
610 | classes=80
611 | num=9
612 | jitter=.3
613 | ignore_thresh = .7
614 | truth_thresh = 1
615 | random=1
616 |
617 |
618 | [route]
619 | layers = -4
620 |
621 | [convolutional]
622 | batch_normalize=1
623 | filters=256
624 | size=1
625 | stride=1
626 | pad=1
627 | activation=leaky
628 |
629 | [upsample]
630 | stride=2
631 |
632 | [route]
633 | layers = -1, 61
634 |
635 |
636 |
637 | [convolutional]
638 | batch_normalize=1
639 | filters=256
640 | size=1
641 | stride=1
642 | pad=1
643 | activation=leaky
644 |
645 | [convolutional]
646 | batch_normalize=1
647 | size=3
648 | stride=1
649 | pad=1
650 | filters=512
651 | activation=leaky
652 |
653 | [convolutional]
654 | batch_normalize=1
655 | filters=256
656 | size=1
657 | stride=1
658 | pad=1
659 | activation=leaky
660 |
661 | [convolutional]
662 | batch_normalize=1
663 | size=3
664 | stride=1
665 | pad=1
666 | filters=512
667 | activation=leaky
668 |
669 | [convolutional]
670 | batch_normalize=1
671 | filters=256
672 | size=1
673 | stride=1
674 | pad=1
675 | activation=leaky
676 |
677 | [convolutional]
678 | batch_normalize=1
679 | size=3
680 | stride=1
681 | pad=1
682 | filters=512
683 | activation=leaky
684 |
685 | [convolutional]
686 | size=1
687 | stride=1
688 | pad=1
689 | filters=255
690 | activation=linear
691 |
692 |
693 | [yolo]
694 | mask = 3,4,5
695 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
696 | classes=80
697 | num=9
698 | jitter=.3
699 | ignore_thresh = .7
700 | truth_thresh = 1
701 | random=1
702 |
703 |
704 |
705 | [route]
706 | layers = -4
707 |
708 | [convolutional]
709 | batch_normalize=1
710 | filters=128
711 | size=1
712 | stride=1
713 | pad=1
714 | activation=leaky
715 |
716 | [upsample]
717 | stride=2
718 |
719 | [route]
720 | layers = -1, 36
721 |
722 |
723 |
724 | [convolutional]
725 | batch_normalize=1
726 | filters=128
727 | size=1
728 | stride=1
729 | pad=1
730 | activation=leaky
731 |
732 | [convolutional]
733 | batch_normalize=1
734 | size=3
735 | stride=1
736 | pad=1
737 | filters=256
738 | activation=leaky
739 |
740 | [convolutional]
741 | batch_normalize=1
742 | filters=128
743 | size=1
744 | stride=1
745 | pad=1
746 | activation=leaky
747 |
748 | [convolutional]
749 | batch_normalize=1
750 | size=3
751 | stride=1
752 | pad=1
753 | filters=256
754 | activation=leaky
755 |
756 | [convolutional]
757 | batch_normalize=1
758 | filters=128
759 | size=1
760 | stride=1
761 | pad=1
762 | activation=leaky
763 |
764 | [convolutional]
765 | batch_normalize=1
766 | size=3
767 | stride=1
768 | pad=1
769 | filters=256
770 | activation=leaky
771 |
772 | [convolutional]
773 | size=1
774 | stride=1
775 | pad=1
776 | filters=255
777 | activation=linear
778 |
779 |
780 | [yolo]
781 | mask = 0,1,2
782 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
783 | classes=80
784 | num=9
785 | jitter=.3
786 | ignore_thresh = .7
787 | truth_thresh = 1
788 | random=1
789 |
--------------------------------------------------------------------------------
/app/PyTorch_YOLOv3/data/custom/classes.names:
--------------------------------------------------------------------------------
1 | bone
2 |
--------------------------------------------------------------------------------
/app/PyTorch_YOLOv3/data/custom/generate.py:
--------------------------------------------------------------------------------
1 | import xml.etree.ElementTree as XET
2 | import os
3 | from os import mkdir, listdir
4 | from os.path import splitext, exists, join
5 | import shutil
6 |
7 | source_dir = [join("./images")]
8 | target_dir = [join("./labels")]
9 | custom_path = "data/custom/images"
10 |
11 | sub_dir = ["train", "valid"]
12 | out_dir = ["images", "labels"]
13 |
14 | img_width = 500
15 | img_height = 1200
16 |
17 | def delete(path):
18 | try:
19 | shutil.rmtree(path)
20 | except OSError as e:
21 | print(e)
22 | else:
23 | print("The directory is deleted successfully")
24 |
25 | # label_idx x_center y_center width height
26 |
27 |
28 | delete("./labels/train/")
29 | delete("./labels/valid/")
30 |
31 | for dir in out_dir:
32 | if not exists(dir):
33 | mkdir(dir)
34 |
35 |
36 | for dir in out_dir:
37 | for sub in sub_dir:
38 | if not exists(join(dir, sub)):
39 | mkdir(join(dir, sub))
40 |
41 | for dir in sub_dir:
42 | flag = True
43 | for file in sorted(listdir(f"./images/{dir}")):
44 | label_idx = []
45 | label_xmin = []
46 | label_ymin = []
47 | label_xmax = []
48 | label_ymax = []
49 |
50 | name = splitext(file)[0]
51 |
52 | path = join(custom_path, dir, f"{name}.png")
53 |
54 | if flag:
55 | f = open(f"{dir}.txt", 'w')
56 | f.write(f"{path}\n")
57 | flag = False
58 | else:
59 | f = open(f"{dir}.txt", 'a')
60 | f.write(f"{path}\n")
61 |
62 | tree = XET.parse(f"xml/{name}.xml")
63 | root = tree.getroot()
64 |
65 | for i in range(6, len(root)):
66 |
67 | x_center = (float(root[i][4][0].text) + ( float(root[i][4][2].text)-float(root[i][4][0].text) )/2 )
68 | y_center = (float(root[i][4][1].text) + ( float(root[i][4][3].text)-float(root[i][4][1].text) )/2 )
69 | width = float(root[i][4][2].text)-float(root[i][4][0].text)
70 | height = float(root[i][4][3].text)-float(root[i][4][1].text)
71 |
72 | x_center /= img_width
73 | y_center /= img_height
74 | width /= img_width
75 | height /= img_height
76 | if i == 6:
77 | f = open(f"labels/{dir}/{name}.txt", 'w')
78 | f.write("{:d} {:.6f} {:.6f} {:.6f} {:.6f}\n".format(0, x_center, y_center, width, height) )
79 | else:
80 | f = open(f"labels/{dir}/{name}.txt", 'a')
81 | f.write("{:d} {:.6f} {:.6f} {:.6f} {:.6f}\n".format(0, x_center, y_center, width, height) )
82 |
--------------------------------------------------------------------------------
/app/PyTorch_YOLOv3/data/custom/train.txt:
--------------------------------------------------------------------------------
1 | data/custom/images/train/0021.png
2 | data/custom/images/train/0022.png
3 | data/custom/images/train/0023.png
4 | data/custom/images/train/0024.png
5 | data/custom/images/train/0025.png
6 | data/custom/images/train/0026.png
7 | data/custom/images/train/0027.png
8 | data/custom/images/train/0028.png
9 | data/custom/images/train/0029.png
10 | data/custom/images/train/0030.png
11 | data/custom/images/train/0031.png
12 | data/custom/images/train/0032.png
13 | data/custom/images/train/0033.png
14 | data/custom/images/train/0034.png
15 | data/custom/images/train/0035.png
16 | data/custom/images/train/0036.png
17 | data/custom/images/train/0037.png
18 | data/custom/images/train/0038.png
19 | data/custom/images/train/0039.png
20 | data/custom/images/train/0040.png
21 | data/custom/images/train/0041.png
22 | data/custom/images/train/0042.png
23 | data/custom/images/train/0043.png
24 | data/custom/images/train/0044.png
25 | data/custom/images/train/0045.png
26 | data/custom/images/train/0046.png
27 | data/custom/images/train/0047.png
28 | data/custom/images/train/0048.png
29 | data/custom/images/train/0049.png
30 | data/custom/images/train/0050.png
31 | data/custom/images/train/0051.png
32 | data/custom/images/train/0052.png
33 | data/custom/images/train/0053.png
34 | data/custom/images/train/0054.png
35 | data/custom/images/train/0055.png
36 | data/custom/images/train/0056.png
37 | data/custom/images/train/0057.png
38 | data/custom/images/train/0058.png
39 | data/custom/images/train/0059.png
40 | data/custom/images/train/0060.png
41 |
--------------------------------------------------------------------------------
/app/PyTorch_YOLOv3/data/custom/valid.txt:
--------------------------------------------------------------------------------
1 | data/custom/images/valid/0001.png
2 | data/custom/images/valid/0002.png
3 | data/custom/images/valid/0003.png
4 | data/custom/images/valid/0004.png
5 | data/custom/images/valid/0005.png
6 | data/custom/images/valid/0006.png
7 | data/custom/images/valid/0007.png
8 | data/custom/images/valid/0008.png
9 | data/custom/images/valid/0009.png
10 | data/custom/images/valid/0010.png
11 | data/custom/images/valid/0011.png
12 | data/custom/images/valid/0012.png
13 | data/custom/images/valid/0013.png
14 | data/custom/images/valid/0014.png
15 | data/custom/images/valid/0015.png
16 | data/custom/images/valid/0016.png
17 | data/custom/images/valid/0017.png
18 | data/custom/images/valid/0018.png
19 | data/custom/images/valid/0019.png
20 | data/custom/images/valid/0020.png
21 |
--------------------------------------------------------------------------------
/app/PyTorch_YOLOv3/detect.py:
--------------------------------------------------------------------------------
1 | from __future__ import division
2 |
3 | from models import *
4 | from utils.utils import *
5 | from utils.datasets import *
6 |
7 | import os
8 | import sys
9 | import time
10 | import datetime
11 | import argparse
12 |
13 | from os.path import splitext, exists, join
14 | from PIL import Image
15 |
16 | import torch
17 | from torch.utils.data import DataLoader
18 | from torchvision import datasets
19 | from torch.autograd import Variable
20 |
21 | import matplotlib.pyplot as plt
22 | import matplotlib.patches as patches
23 | from matplotlib.ticker import NullLocator
24 |
25 | def preprocess(img):
26 | img = rgb2gray(img)
27 | bound = img.shape[0] // 3
28 | up = exposure.equalize_adapthist(img[:bound, :])
29 | down = exposure.equalize_adapthist(img[bound:, :])
30 | enhance = np.append(up, down, axis=0)
31 | edge = sobel(gaussian(enhance, 2))
32 | enhance = enhance + edge * 3
33 | return np.where(enhance > 1, 1, enhance)
34 |
35 | def clahe_hist(img):
36 | clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
37 | cl1 = clahe.apply(img)
38 | return cl1
39 |
40 | if __name__ == "__main__":
41 | # parser = argparse.ArgumentParser()
42 | # parser.add_argument("--image_folder", type=str, default="data/samples", help="path to dataset")
43 | # parser.add_argument("--model_def", type=str, default="config/yolov3.cfg", help="path to model definition file")
44 | # parser.add_argument("--weights_path", type=str, default="weights/yolov3.weights", help="path to weights file")
45 | # parser.add_argument("--class_path", type=str, default="data/coco.names", help="path to class label file")
46 | # parser.add_argument("--conf_thres", type=float, default=0.8, help="object confidence threshold")
47 |     # parser.add_argument("--nms_thres", type=float, default=0.4, help="iou threshold for non-maximum suppression")
48 | # parser.add_argument("--batch_size", type=int, default=1, help="size of the batches")
49 | # parser.add_argument("--n_cpu", type=int, default=0, help="number of cpu threads to use during batch generation")
50 | # parser.add_argument("--img_size", type=int, default=416, help="size of each image dimension")
51 | # parser.add_argument("--checkpoint_model", type=str, help="path to checkpoint model")
52 | parser = argparse.ArgumentParser()
53 | parser.add_argument("--image_folder", type=str, default="data/custom/images/valid/", help="path to dataset")
54 | # parser.add_argument("--image_folder", type=str, default="pre_img/", help="path to dataset")
55 | parser.add_argument("--model_def", type=str, default="config/yolov3-custom.cfg", help="path to model definition file")
56 | parser.add_argument("--weights_path", type=str, default="checkpoints/yolov3_ckpt_1000.pth", help="path to weights file")
57 | parser.add_argument("--class_path", type=str, default="data/custom/classes.names", help="path to class label file")
58 | parser.add_argument("--conf_thres", type=float, default=0.8, help="object confidence threshold") # 0.8
59 |     parser.add_argument("--nms_thres", type=float, default=0.3, help="iou threshold for non-maximum suppression") # 0.25
60 | parser.add_argument("--batch_size", type=int, default=1, help="size of the batches")
61 | parser.add_argument("--n_cpu", type=int, default=0, help="number of cpu threads to use during batch generation")
62 | parser.add_argument("--img_size", type=int, default=416, help="size of each image dimension")
63 | parser.add_argument("--checkpoint_model", type=str, default="checkpoints/yolov3_ckpt_best_f01.pth", help="path to checkpoint model")
64 | opt = parser.parse_args()
65 | print(opt)
66 |
67 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
68 |
69 | os.makedirs("output", exist_ok=True)
70 | os.makedirs("pre_img", exist_ok=True)
71 | os.makedirs("coordinate", exist_ok=True)
72 |
73 | fname_list = []
74 |
75 | for file in os.listdir(opt.image_folder):
76 |
77 | # img_name = "data/custom/images/valid/" + file
78 | # img = cv2.imread(img_name, 0)
79 | # img = clahe_hist(img)
80 | # img = preprocess(img/255) * 255
81 | # cv2.imwrite(f"pre_img/{file}", img)
82 | file_name = splitext(file)[0]
83 | fname_list.append(f"{file_name}.txt")
84 |
85 | fname_list = sorted(fname_list)
86 |
87 | # Set up model
88 | model = Darknet(opt.model_def, img_size=opt.img_size).to(device)
89 |
90 |
91 | # Load checkpoint weights
92 | model.load_state_dict(torch.load(opt.checkpoint_model))
93 |
94 | # if opt.weights_path.endswith(".weights"):
95 | # # Load darknet weights
96 | # model.load_darknet_weights(opt.checkpoint_model)
97 | # else:
98 | # # Load checkpoint weights
99 | # model.load_state_dict(torch.load(opt.checkpoint_model))
100 |
101 | model.eval() # Set in evaluation mode
102 |
103 | dataloader = DataLoader(
104 | ImageFolder(opt.image_folder, img_size=opt.img_size),
105 | batch_size=opt.batch_size,
106 | shuffle=False,
107 | num_workers=opt.n_cpu,
108 | )
109 |
110 | classes = load_classes(opt.class_path) # Extracts class labels from file
111 |
112 | Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
113 |
114 | imgs = [] # Stores image paths
115 | img_detections = [] # Stores detections for each image index
116 |
117 | print("\nPerforming object detection:")
118 | prev_time = time.time()
119 | for batch_i, (img_paths, input_imgs) in enumerate(dataloader):
120 | # Configure input
121 |
122 | input_imgs = Variable(input_imgs.type(Tensor))
123 |
124 | # Get detections
125 | with torch.no_grad():
126 | detections = model(input_imgs)
127 | detections = non_max_suppression(detections, opt.conf_thres, opt.nms_thres)
128 |
129 | # Log progress
130 | current_time = time.time()
131 | inference_time = datetime.timedelta(seconds=current_time - prev_time)
132 | prev_time = current_time
133 | print("\t+ Batch %d, Inference Time: %s" % (batch_i, inference_time))
134 |
135 | # Save image and detections
136 | imgs.extend(img_paths)
137 | img_detections.extend(detections)
138 |
139 | # Bounding-box colors
140 | # cmap = plt.get_cmap("tab20b")
141 | plt.set_cmap('gray')
142 | # colors = [cmap(i) for i in np.linspace(0, 1, 20)]
143 |
144 | rewrite = True
145 |
146 | print("\nSaving images:")
147 | # Iterate through images and save plot of detections
148 | for img_i, (path, detections) in enumerate(zip(imgs, img_detections)):
149 |
150 | print("(%d) Image: '%s'" % (img_i, path))
151 |
152 | # Create plot
153 |
154 | img = np.array(Image.open(path).convert('L'))
155 | plt.figure()
156 | fig, ax = plt.subplots(1)
157 | ax.imshow(img)
158 |
159 | # Draw bounding boxes and labels of detections
160 | if detections is not None:
161 | # Rescale boxes to original image
162 | detections = rescale_boxes(detections, opt.img_size, img.shape[:2])
163 | # unique_labels = detections[:, -1].cpu().unique()
164 | # n_cls_preds = len(unique_labels)
165 | # bbox_colors = random.sample(colors, n_cls_preds)
166 | rewrite = True
167 | for x1, y1, x2, y2, conf, cls_conf, cls_pred in detections:
168 |
169 | print("\t+ Label: %s, Conf: %.5f" % (classes[int(cls_pred)], cls_conf.item()))
170 |
171 | # y1 = y1 - 25
172 | # y2 = y2 + 25
173 | box_w = x2 - x1
174 | box_h = y2 - y1
175 | x1, y1, x2, y2 = math.floor(x1), math.floor(y1), math.ceil(x2), math.ceil(y2)
176 | box_w, box_h = x2-x1, y2-y1
177 | if rewrite:
178 | f1 = open(f"../VertebraSegmentation/coordinate/{fname_list[img_i]}", 'w')
179 | f2 = open(f"coordinate/{fname_list[img_i]}", 'w')
180 | # f.write(f"{x1} {y1} {x2} {y2} {box_w} {box_h}\n")
181 | f1.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(x1, y1, x2, y2, box_w, box_h) )
182 | f2.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(x1, y1, x2, y2, box_w, box_h) )
183 | rewrite = False
184 | else:
185 | f1 = open(f"../VertebraSegmentation/coordinate/{fname_list[img_i]}", 'a')
186 | f2 = open(f"coordinate/{fname_list[img_i]}", 'a')
187 | # f.write(f"{x1} {y1} {x2} {y2} {box_w} {box_h}\n")
188 | f1.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(x1, y1, x2, y2, box_w, box_h) )
189 | f2.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(x1, y1, x2, y2, box_w, box_h) )
190 | # color = bbox_colors[int(np.where(unique_labels == int(cls_pred))[0])]
191 | # Create a Rectangle patch
192 | bbox = patches.Rectangle((x1, y1), box_w, box_h, linewidth=0.5, edgecolor='red', facecolor="none")
193 | # Add the bbox to the plot
194 | ax.add_patch(bbox)
195 | # Add label
196 | # plt.text(
197 | # x1,
198 | # y1,
199 | # s=classes[int(cls_pred)],
200 | # color="white",
201 | # verticalalignment="top",
202 | # bbox={"color": color, "pad": 0},
203 | # )
204 |
205 | # Save generated image with detections
206 | plt.axis("off")
207 | plt.gca().xaxis.set_major_locator(NullLocator())
208 | plt.gca().yaxis.set_major_locator(NullLocator())
209 | # plt.set_cmap('gray')
210 | filename = path.split("/")[-1].split(".")[0]
211 | plt.savefig(f"output/{filename}.png", bbox_inches="tight", pad_inches=0.0, facecolor="none")
212 | plt.close()
213 |
--------------------------------------------------------------------------------
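detect.py saves, for every input image, a text file with one detected box per line in the form `x1 y1 x2 y2 width height` (pixel coordinates in the original image), both under its own `coordinate/` folder and under `../VertebraSegmentation/coordinate/`. A small sketch of how those files can be read back; the path in the usage comment is illustrative.

```python
# Sketch: reading one of the per-image box files written by detect.py.
# Each line is "x1 y1 x2 y2 w h" as integers; the path below is illustrative.
def load_boxes(txt_path):
    boxes = []
    with open(txt_path) as f:
        for line in f:
            x1, y1, x2, y2, w, h = map(int, line.split())
            boxes.append((x1, y1, x2, y2, w, h))
    return boxes

# boxes = load_boxes("coordinate/0001.txt")
```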
/app/PyTorch_YOLOv3/models.py:
--------------------------------------------------------------------------------
1 | from __future__ import division
2 |
3 | import torch
4 | import torch.nn as nn
5 | import torch.nn.functional as F
6 | from torch.autograd import Variable
7 | import numpy as np
8 |
9 | ## not for main; for train
10 | from utils.parse_config import *
11 | from utils.utils import build_targets, to_cpu, non_max_suppression
12 |
13 | # # for main
14 | # from .utils.parse_config import *
15 | # from .utils.utils import build_targets, to_cpu, non_max_suppression
16 |
17 | import matplotlib.pyplot as plt
18 | import matplotlib.patches as patches
19 |
20 |
21 | def create_modules(module_defs):
22 | """
23 | Constructs module list of layer blocks from module configuration in module_defs
24 | """
25 | hyperparams = module_defs.pop(0)
26 | output_filters = [int(hyperparams["channels"])]
27 | module_list = nn.ModuleList()
28 | for module_i, module_def in enumerate(module_defs):
29 | modules = nn.Sequential()
30 |
31 | if module_def["type"] == "convolutional":
32 | bn = int(module_def["batch_normalize"])
33 | filters = int(module_def["filters"])
34 | kernel_size = int(module_def["size"])
35 | pad = (kernel_size - 1) // 2
36 | modules.add_module(
37 | f"conv_{module_i}",
38 | nn.Conv2d(
39 | in_channels=output_filters[-1],
40 | out_channels=filters,
41 | kernel_size=kernel_size,
42 | stride=int(module_def["stride"]),
43 | padding=pad,
44 | bias=not bn,
45 | ),
46 | )
47 | if bn:
48 | modules.add_module(f"batch_norm_{module_i}", nn.BatchNorm2d(filters, momentum=0.9, eps=1e-5))
49 | if module_def["activation"] == "leaky":
50 | modules.add_module(f"leaky_{module_i}", nn.LeakyReLU(0.1))
51 |
52 | elif module_def["type"] == "maxpool":
53 | kernel_size = int(module_def["size"])
54 | stride = int(module_def["stride"])
55 | if kernel_size == 2 and stride == 1:
56 | modules.add_module(f"_debug_padding_{module_i}", nn.ZeroPad2d((0, 1, 0, 1)))
57 | maxpool = nn.MaxPool2d(kernel_size=kernel_size, stride=stride, padding=int((kernel_size - 1) // 2))
58 | modules.add_module(f"maxpool_{module_i}", maxpool)
59 |
60 | elif module_def["type"] == "upsample":
61 | upsample = Upsample(scale_factor=int(module_def["stride"]), mode="nearest")
62 | modules.add_module(f"upsample_{module_i}", upsample)
63 |
64 | elif module_def["type"] == "route":
65 | layers = [int(x) for x in module_def["layers"].split(",")]
66 | filters = sum([output_filters[1:][i] for i in layers])
67 | modules.add_module(f"route_{module_i}", EmptyLayer())
68 |
69 | elif module_def["type"] == "shortcut":
70 | filters = output_filters[1:][int(module_def["from"])]
71 | modules.add_module(f"shortcut_{module_i}", EmptyLayer())
72 |
73 | elif module_def["type"] == "yolo":
74 | anchor_idxs = [int(x) for x in module_def["mask"].split(",")]
75 | # Extract anchors
76 | anchors = [int(x) for x in module_def["anchors"].split(",")]
77 | anchors = [(anchors[i], anchors[i + 1]) for i in range(0, len(anchors), 2)]
78 | anchors = [anchors[i] for i in anchor_idxs]
79 | num_classes = int(module_def["classes"])
80 | img_size = int(hyperparams["height"])
81 | # Define detection layer
82 | yolo_layer = YOLOLayer(anchors, num_classes, img_size)
83 | modules.add_module(f"yolo_{module_i}", yolo_layer)
84 | # Register module list and number of output filters
85 | module_list.append(modules)
86 | output_filters.append(filters)
87 |
88 | return hyperparams, module_list
89 |
90 |
91 | class Upsample(nn.Module):
92 | """ nn.Upsample is deprecated """
93 |
94 | def __init__(self, scale_factor, mode="nearest"):
95 | super(Upsample, self).__init__()
96 | self.scale_factor = scale_factor
97 | self.mode = mode
98 |
99 | def forward(self, x):
100 | x = F.interpolate(x, scale_factor=self.scale_factor, mode=self.mode)
101 | return x
102 |
103 |
104 | class EmptyLayer(nn.Module):
105 | """Placeholder for 'route' and 'shortcut' layers"""
106 |
107 | def __init__(self):
108 | super(EmptyLayer, self).__init__()
109 |
110 |
111 | class YOLOLayer(nn.Module):
112 | """Detection layer"""
113 |
114 | def __init__(self, anchors, num_classes, img_dim=416):
115 | super(YOLOLayer, self).__init__()
116 | self.anchors = anchors
117 | self.num_anchors = len(anchors)
118 | self.num_classes = num_classes
119 | self.ignore_thres = 0.5
120 | self.mse_loss = nn.MSELoss()
121 | self.bce_loss = nn.BCELoss()
122 | self.obj_scale = 1
123 | self.noobj_scale = 100
124 | self.metrics = {}
125 | self.img_dim = img_dim
126 | self.grid_size = 0 # grid size
127 |
128 | def compute_grid_offsets(self, grid_size, cuda=True):
129 | self.grid_size = grid_size
130 | g = self.grid_size
131 | FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
132 | self.stride = self.img_dim / self.grid_size
133 | # Calculate offsets for each grid
134 | self.grid_x = torch.arange(g).repeat(g, 1).view([1, 1, g, g]).type(FloatTensor)
135 | self.grid_y = torch.arange(g).repeat(g, 1).t().view([1, 1, g, g]).type(FloatTensor)
136 | self.scaled_anchors = FloatTensor([(a_w / self.stride, a_h / self.stride) for a_w, a_h in self.anchors])
137 | self.anchor_w = self.scaled_anchors[:, 0:1].view((1, self.num_anchors, 1, 1))
138 | self.anchor_h = self.scaled_anchors[:, 1:2].view((1, self.num_anchors, 1, 1))
139 |
140 | def forward(self, x, targets=None, img_dim=None):
141 |
142 | # Tensors for cuda support
143 | FloatTensor = torch.cuda.FloatTensor if x.is_cuda else torch.FloatTensor
144 | LongTensor = torch.cuda.LongTensor if x.is_cuda else torch.LongTensor
145 | ByteTensor = torch.cuda.ByteTensor if x.is_cuda else torch.ByteTensor
146 |
147 | self.img_dim = img_dim
148 | num_samples = x.size(0)
149 | grid_size = x.size(2)
150 |
151 | prediction = (
152 | x.view(num_samples, self.num_anchors, self.num_classes + 5, grid_size, grid_size)
153 | .permute(0, 1, 3, 4, 2)
154 | .contiguous()
155 | )
156 |
157 | # Get outputs
158 | x = torch.sigmoid(prediction[..., 0]) # Center x
159 | y = torch.sigmoid(prediction[..., 1]) # Center y
160 | w = prediction[..., 2] # Width
161 | h = prediction[..., 3] # Height
162 | pred_conf = torch.sigmoid(prediction[..., 4]) # Conf
163 | pred_cls = torch.sigmoid(prediction[..., 5:]) # Cls pred.
164 |
165 | # If grid size does not match current we compute new offsets
166 | if grid_size != self.grid_size:
167 | self.compute_grid_offsets(grid_size, cuda=x.is_cuda)
168 |
169 | # Add offset and scale with anchors
170 | pred_boxes = FloatTensor(prediction[..., :4].shape)
171 | pred_boxes[..., 0] = x.data + self.grid_x
172 | pred_boxes[..., 1] = y.data + self.grid_y
173 | pred_boxes[..., 2] = torch.exp(w.data) * self.anchor_w
174 | pred_boxes[..., 3] = torch.exp(h.data) * self.anchor_h
175 |
176 | output = torch.cat(
177 | (
178 | pred_boxes.view(num_samples, -1, 4) * self.stride,
179 | pred_conf.view(num_samples, -1, 1),
180 | pred_cls.view(num_samples, -1, self.num_classes),
181 | ),
182 | -1,
183 | )
184 |
185 | if targets is None:
186 | return output, 0
187 | else:
188 | iou_scores, class_mask, obj_mask, noobj_mask, tx, ty, tw, th, tcls, tconf = build_targets(
189 | pred_boxes=pred_boxes,
190 | pred_cls=pred_cls,
191 | target=targets,
192 | anchors=self.scaled_anchors,
193 | ignore_thres=self.ignore_thres,
194 | )
195 |
196 | obj_mask=obj_mask.bool() # convert int8 to bool
197 | noobj_mask=noobj_mask.bool() #convert int8 to bool
198 |
199 | # Loss : Mask outputs to ignore non-existing objects (except with conf. loss)
200 | loss_x = self.mse_loss(x[obj_mask], tx[obj_mask])
201 | loss_y = self.mse_loss(y[obj_mask], ty[obj_mask])
202 | loss_w = self.mse_loss(w[obj_mask], tw[obj_mask])
203 | loss_h = self.mse_loss(h[obj_mask], th[obj_mask])
204 | loss_conf_obj = self.bce_loss(pred_conf[obj_mask], tconf[obj_mask])
205 | loss_conf_noobj = self.bce_loss(pred_conf[noobj_mask], tconf[noobj_mask])
206 | loss_conf = self.obj_scale * loss_conf_obj + self.noobj_scale * loss_conf_noobj
207 | loss_cls = self.bce_loss(pred_cls[obj_mask], tcls[obj_mask])
208 | total_loss = loss_x + loss_y + loss_w + loss_h + loss_conf + loss_cls
209 |
210 | # Metrics
211 | cls_acc = 100 * class_mask[obj_mask].mean()
212 | conf_obj = pred_conf[obj_mask].mean()
213 | conf_noobj = pred_conf[noobj_mask].mean()
214 | conf50 = (pred_conf > 0.5).float()
215 | iou50 = (iou_scores > 0.5).float()
216 | iou75 = (iou_scores > 0.75).float()
217 | detected_mask = conf50 * class_mask * tconf
218 | precision = torch.sum(iou50 * detected_mask) / (conf50.sum() + 1e-16)
219 | recall50 = torch.sum(iou50 * detected_mask) / (obj_mask.sum() + 1e-16)
220 | recall75 = torch.sum(iou75 * detected_mask) / (obj_mask.sum() + 1e-16)
221 |
222 | self.metrics = {
223 | "loss": to_cpu(total_loss).item(),
224 | "x": to_cpu(loss_x).item(),
225 | "y": to_cpu(loss_y).item(),
226 | "w": to_cpu(loss_w).item(),
227 | "h": to_cpu(loss_h).item(),
228 | "conf": to_cpu(loss_conf).item(),
229 | "cls": to_cpu(loss_cls).item(),
230 | "cls_acc": to_cpu(cls_acc).item(),
231 | "recall50": to_cpu(recall50).item(),
232 | "recall75": to_cpu(recall75).item(),
233 | "precision": to_cpu(precision).item(),
234 | "conf_obj": to_cpu(conf_obj).item(),
235 | "conf_noobj": to_cpu(conf_noobj).item(),
236 | "grid_size": grid_size,
237 | }
238 |
239 | return output, total_loss
240 |
241 |
242 | class Darknet(nn.Module):
243 | """YOLOv3 object detection model"""
244 |
245 | def __init__(self, config_path, img_size=416):
246 | super(Darknet, self).__init__()
247 | self.module_defs = parse_model_config(config_path)
248 | self.hyperparams, self.module_list = create_modules(self.module_defs)
249 | self.yolo_layers = [layer[0] for layer in self.module_list if hasattr(layer[0], "metrics")]
250 | self.img_size = img_size
251 | self.seen = 0
252 | self.header_info = np.array([0, 0, 0, self.seen, 0], dtype=np.int32)
253 |
254 | def forward(self, x, targets=None):
255 | img_dim = x.shape[2]
256 | loss = 0
257 | layer_outputs, yolo_outputs = [], []
258 | for i, (module_def, module) in enumerate(zip(self.module_defs, self.module_list)):
259 | if module_def["type"] in ["convolutional", "upsample", "maxpool"]:
260 | x = module(x)
261 | elif module_def["type"] == "route":
262 | x = torch.cat([layer_outputs[int(layer_i)] for layer_i in module_def["layers"].split(",")], 1)
263 | elif module_def["type"] == "shortcut":
264 | layer_i = int(module_def["from"])
265 | x = layer_outputs[-1] + layer_outputs[layer_i]
266 | elif module_def["type"] == "yolo":
267 | x, layer_loss = module[0](x, targets, img_dim)
268 | loss += layer_loss
269 | yolo_outputs.append(x)
270 | layer_outputs.append(x)
271 | yolo_outputs = to_cpu(torch.cat(yolo_outputs, 1))
272 | return yolo_outputs if targets is None else (loss, yolo_outputs)
273 |
274 | def load_darknet_weights(self, weights_path):
275 | """Parses and loads the weights stored in 'weights_path'"""
276 |
277 | # Open the weights file
278 | with open(weights_path, "rb") as f:
279 | header = np.fromfile(f, dtype=np.int32, count=5) # First five are header values
280 | self.header_info = header # Needed to write header when saving weights
281 | self.seen = header[3] # number of images seen during training
282 | weights = np.fromfile(f, dtype=np.float32) # The rest are weights
283 |
284 | # Establish cutoff for loading backbone weights
285 | cutoff = None
286 | if "darknet53.conv.74" in weights_path:
287 | cutoff = 75
288 |
289 | ptr = 0
290 | for i, (module_def, module) in enumerate(zip(self.module_defs, self.module_list)):
291 | if i == cutoff:
292 | break
293 | if module_def["type"] == "convolutional":
294 | conv_layer = module[0]
295 | if module_def["batch_normalize"]:
296 | # Load BN bias, weights, running mean and running variance
297 | bn_layer = module[1]
298 | num_b = bn_layer.bias.numel() # Number of biases
299 | # Bias
300 | bn_b = torch.from_numpy(weights[ptr : ptr + num_b]).view_as(bn_layer.bias)
301 | bn_layer.bias.data.copy_(bn_b)
302 | ptr += num_b
303 | # Weight
304 | bn_w = torch.from_numpy(weights[ptr : ptr + num_b]).view_as(bn_layer.weight)
305 | bn_layer.weight.data.copy_(bn_w)
306 | ptr += num_b
307 | # Running Mean
308 | bn_rm = torch.from_numpy(weights[ptr : ptr + num_b]).view_as(bn_layer.running_mean)
309 | bn_layer.running_mean.data.copy_(bn_rm)
310 | ptr += num_b
311 | # Running Var
312 | bn_rv = torch.from_numpy(weights[ptr : ptr + num_b]).view_as(bn_layer.running_var)
313 | bn_layer.running_var.data.copy_(bn_rv)
314 | ptr += num_b
315 | else:
316 | # Load conv. bias
317 | num_b = conv_layer.bias.numel()
318 | conv_b = torch.from_numpy(weights[ptr : ptr + num_b]).view_as(conv_layer.bias)
319 | conv_layer.bias.data.copy_(conv_b)
320 | ptr += num_b
321 | # Load conv. weights
322 | num_w = conv_layer.weight.numel()
323 | conv_w = torch.from_numpy(weights[ptr : ptr + num_w]).view_as(conv_layer.weight)
324 | conv_layer.weight.data.copy_(conv_w)
325 | ptr += num_w
326 |
327 | def save_darknet_weights(self, path, cutoff=-1):
328 | """
329 | @:param path - path of the new weights file
330 | @:param cutoff - save layers between 0 and cutoff (cutoff = -1 -> all are saved)
331 | """
332 | fp = open(path, "wb")
333 | self.header_info[3] = self.seen
334 | self.header_info.tofile(fp)
335 |
336 | # Iterate through layers
337 | for i, (module_def, module) in enumerate(zip(self.module_defs[:cutoff], self.module_list[:cutoff])):
338 | if module_def["type"] == "convolutional":
339 | conv_layer = module[0]
340 | # If batch norm, load bn first
341 | if module_def["batch_normalize"]:
342 | bn_layer = module[1]
343 | bn_layer.bias.data.cpu().numpy().tofile(fp)
344 | bn_layer.weight.data.cpu().numpy().tofile(fp)
345 | bn_layer.running_mean.data.cpu().numpy().tofile(fp)
346 | bn_layer.running_var.data.cpu().numpy().tofile(fp)
347 | # Load conv bias
348 | else:
349 | conv_layer.bias.data.cpu().numpy().tofile(fp)
350 | # Load conv weights
351 | conv_layer.weight.data.cpu().numpy().tofile(fp)
352 |
353 | fp.close()
354 |
--------------------------------------------------------------------------------
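A minimal inference sketch for the `Darknet` class defined above: build the model from the custom cfg, load a checkpoint, and filter the raw output with `non_max_suppression`. The checkpoint path is a placeholder, and the dummy input simply matches the `channels` entry of the cfg, so this is an illustration rather than the project's actual entry point (detect.py above plays that role).

```python
# Illustrative only: the checkpoint path is a placeholder.
import torch
from models import Darknet
from utils.utils import non_max_suppression

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Darknet("config/yolov3-custom.cfg", img_size=416).to(device)
model.load_state_dict(torch.load("checkpoints/yolov3_ckpt_best_f01.pth", map_location=device))
model.eval()

in_ch = int(model.hyperparams["channels"])     # taken from the [net] block of the cfg
dummy = torch.rand(1, in_ch, 416, 416, device=device)
with torch.no_grad():
    raw = model(dummy)                         # shape (1, num_boxes, 5 + num_classes)
    dets = non_max_suppression(raw, conf_thres=0.8, nms_thres=0.3)
# dets[0] is either None or a tensor of (x1, y1, x2, y2, conf, cls_conf, cls_pred) rows
```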
/app/PyTorch_YOLOv3/test.py:
--------------------------------------------------------------------------------
1 | from __future__ import division
2 |
3 | from models import *
4 | from utils.utils import *
5 | from utils.datasets import *
6 | from utils.parse_config import *
7 |
8 | import os
9 | import sys
10 | import time
11 | import datetime
12 | import argparse
13 | import tqdm
14 |
15 | import torch
16 | from torch.utils.data import DataLoader
17 | from torchvision import datasets
18 | from torchvision import transforms
19 | from torch.autograd import Variable
20 | import torch.optim as optim
21 |
22 |
23 | def evaluate(model, path, iou_thres, conf_thres, nms_thres, img_size, batch_size):
24 | model.eval()
25 | # print(path)
26 | # Get dataloader
27 | dataset = ListDataset(path, img_size=img_size, augment=False, multiscale=False)
28 |
29 | dataloader = torch.utils.data.DataLoader(
30 | dataset, batch_size=batch_size, shuffle=False, num_workers=1, collate_fn=dataset.collate_fn
31 | )
32 |
33 | Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
34 |
35 | labels = []
36 | sample_metrics = [] # List of tuples (TP, confs, pred)
37 |
38 | outputs = None
39 | for batch_i, (_, imgs, targets) in enumerate(tqdm.tqdm(dataloader, desc="Detecting objects")):
40 |
41 | # Extract labels
42 | labels += targets[:, 1].tolist()
43 |
44 | # Rescale target
45 | targets[:, 2:] = xywh2xyxy(targets[:, 2:])
46 | targets[:, 2:] *= img_size
47 |
48 | imgs = Variable(imgs.type(Tensor), requires_grad=False)
49 |
50 | with torch.no_grad():
51 | outputs = model(imgs)
52 | outputs = non_max_suppression(outputs, conf_thres=conf_thres, nms_thres=nms_thres)
53 |
54 | sample_metrics += get_batch_statistics(outputs, targets, iou_threshold=iou_thres)
55 |
56 | # Concatenate sample statistics
57 |
58 | true_positives, pred_scores, pred_labels = [np.concatenate(x, 0) for x in list(zip(*sample_metrics))]
59 | precision, recall, AP, f1, ap_class = ap_per_class(true_positives, pred_scores, pred_labels, labels)
60 |
61 | return precision, recall, AP, f1, ap_class
62 |
63 |
64 | if __name__ == "__main__":
65 | parser = argparse.ArgumentParser()
66 | parser.add_argument("--batch_size", type=int, default=8, help="size of each image batch")
67 | parser.add_argument("--model_def", type=str, default="config/yolov3.cfg", help="path to model definition file")
68 | parser.add_argument("--data_config", type=str, default="config/coco.data", help="path to data config file")
69 | parser.add_argument("--weights_path", type=str, default="weights/yolov3.weights", help="path to weights file")
70 | parser.add_argument("--class_path", type=str, default="data/coco.names", help="path to class label file")
71 | parser.add_argument("--iou_thres", type=float, default=0.5, help="iou threshold required to qualify as detected")
72 | parser.add_argument("--conf_thres", type=float, default=0.001, help="object confidence threshold")
73 |     parser.add_argument("--nms_thres", type=float, default=0.5, help="iou threshold for non-maximum suppression")
74 | parser.add_argument("--n_cpu", type=int, default=8, help="number of cpu threads to use during batch generation")
75 | parser.add_argument("--img_size", type=int, default=416, help="size of each image dimension")
76 | opt = parser.parse_args()
77 | print(opt)
78 |
79 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
80 |
81 | data_config = parse_data_config(opt.data_config)
82 |
83 | valid_path = data_config["valid"]
84 | class_names = load_classes(data_config["names"])
85 |
86 | # Initiate model
87 | model = Darknet(opt.model_def).to(device)
88 | if opt.weights_path.endswith(".weights"):
89 | # Load darknet weights
90 | model.load_darknet_weights(opt.weights_path)
91 | else:
92 | # Load checkpoint weights
93 | model.load_state_dict(torch.load(opt.weights_path))
94 |
95 | print("Compute mAP...")
96 |
97 | precision, recall, AP, f1, ap_class = evaluate(
98 | model,
99 | path=valid_path,
100 | iou_thres=opt.iou_thres,
101 | conf_thres=opt.conf_thres,
102 | nms_thres=opt.nms_thres,
103 | img_size=opt.img_size,
104 | batch_size=8,
105 | )
106 |
107 | print("Average Precisions:")
108 | for i, c in enumerate(ap_class):
109 | print(f"+ Class '{c}' ({class_names[c]}) - AP: {AP[i]}")
110 |
111 | print(f"mAP: {AP.mean()}")
112 |
--------------------------------------------------------------------------------
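For reference, `evaluate()` can also be called directly rather than through the command-line entry point, which is exactly what train.py does below. A hedged sketch; the cfg, checkpoint, and validation-list paths are assumptions based on the defaults used elsewhere in this repository.

```python
# Sketch: calling test.evaluate() directly (paths are assumptions).
import torch
from models import Darknet
from test import evaluate

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Darknet("config/yolov3-custom.cfg").to(device)
model.load_state_dict(torch.load("checkpoints/yolov3_ckpt_best_f01.pth", map_location=device))

precision, recall, AP, f1, ap_class = evaluate(
    model,
    path="data/custom/valid.txt",
    iou_thres=0.5,
    conf_thres=0.5,
    nms_thres=0.5,
    img_size=416,
    batch_size=8,
)
print(f"mAP: {AP.mean():.4f}")
```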
/app/PyTorch_YOLOv3/train.py:
--------------------------------------------------------------------------------
1 | from __future__ import division
2 |
3 | from models import *
4 | from utils.logger import *
5 | from utils.utils import *
6 | from utils.datasets import *
7 | from utils.parse_config import *
8 | from test import evaluate
9 |
10 | from terminaltables import AsciiTable
11 |
12 | import os
13 | import sys
14 | import time
15 | import datetime
16 | import argparse
17 |
18 | import torch
19 | from torch.utils.data import DataLoader
20 | from torchvision import datasets
21 | from torchvision import transforms
22 | from torch.autograd import Variable
23 | import torch.optim as optim
24 |
25 |
26 | if __name__ == "__main__":
27 | parser = argparse.ArgumentParser()
28 | # parser.add_argument("--epochs", type=int, default=3000, help="number of epochs")
29 | # parser.add_argument("--batch_size", type=int, default=8, help="size of each image batch")
30 | # parser.add_argument("--gradient_accumulations", type=int, default=2, help="number of gradient accums before step")
31 | # parser.add_argument("--model_def", type=str, default="config/yolov3.cfg", help="path to model definition file")
32 | # parser.add_argument("--data_config", type=str, default="config/coco.data", help="path to data config file")
33 | # parser.add_argument("--pretrained_weights", type=str, help="if specified starts from checkpoint model")
34 | # parser.add_argument("--n_cpu", type=int, default=8, help="number of cpu threads to use during batch generation")
35 | # parser.add_argument("--img_size", type=int, default=416, help="size of each image dimension")
36 | # parser.add_argument("--checkpoint_interval", type=int, default=500, help="interval between saving model weights")
37 |     # parser.add_argument("--evaluation_interval", type=int, default=1, help="interval between evaluations on the validation set")
38 | # parser.add_argument("--compute_map", default=False, help="if True computes mAP every tenth batch")
39 | # parser.add_argument("--multiscale_training", default=True, help="allow for multi-scale training")
40 | parser.add_argument("--epochs", type=int, default=3000, help="number of epochs")
41 | parser.add_argument("--batch_size", type=int, default=8, help="size of each image batch")
42 | parser.add_argument("--gradient_accumulations", type=int, default=2, help="number of gradient accums before step")
43 | parser.add_argument("--model_def", type=str, default="config/yolov3-custom.cfg", help="path to model definition file")
44 | parser.add_argument("--data_config", type=str, default="config/custom.data", help="path to data config file")
45 | parser.add_argument("--pretrained_weights", type=str, help="if specified starts from checkpoint model")
46 | parser.add_argument("--n_cpu", type=int, default=8, help="number of cpu threads to use during batch generation")
47 | parser.add_argument("--img_size", type=int, default=416, help="size of each image dimension")
48 | parser.add_argument("--checkpoint_interval", type=int, default=500, help="interval between saving model weights")
49 |     parser.add_argument("--evaluation_interval", type=int, default=1, help="interval between evaluations on the validation set")
50 | parser.add_argument("--compute_map", default=False, help="if True computes mAP every tenth batch")
51 | parser.add_argument("--multiscale_training", default=True, help="allow for multi-scale training")
52 | opt = parser.parse_args()
53 | print(opt)
54 |
55 | # logger = Logger("logs")
56 |
57 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
58 |
59 | os.makedirs("output", exist_ok=True)
60 | os.makedirs("checkpoints", exist_ok=True)
61 |
62 | # Get data configuration
63 | data_config = parse_data_config(opt.data_config)
64 | train_path = data_config["train"]
65 |
66 | valid_path = data_config["valid"]
67 |
68 | class_names = load_classes(data_config["names"])
69 |
70 | # Initiate model
71 | model = Darknet(opt.model_def).to(device)
72 | model.apply(weights_init_normal)
73 |
74 | # If specified we start from checkpoint
75 | if opt.pretrained_weights:
76 | if opt.pretrained_weights.endswith(".pth"):
77 | model.load_state_dict(torch.load(opt.pretrained_weights))
78 | else:
79 | model.load_darknet_weights(opt.pretrained_weights)
80 |
81 | # Get dataloader
82 | dataset = ListDataset(train_path, augment=True, multiscale=opt.multiscale_training)
83 | dataloader = torch.utils.data.DataLoader(
84 | dataset,
85 | batch_size=opt.batch_size,
86 | shuffle=True,
87 | num_workers=opt.n_cpu,
88 | pin_memory=True,
89 | collate_fn=dataset.collate_fn,
90 | )
91 |
92 | optimizer = torch.optim.Adam(model.parameters())
93 |
94 | metrics = [
95 | "grid_size",
96 | "loss",
97 | "x",
98 | "y",
99 | "w",
100 | "h",
101 | "conf",
102 | "cls",
103 | "cls_acc",
104 | "recall50",
105 | "recall75",
106 | "precision",
107 | "conf_obj",
108 | "conf_noobj",
109 | ]
110 |
111 | current_mAP = 0
112 | best_mAP = 0
113 | best_ep = 0
114 |
115 | for epoch in range(opt.epochs):
116 | model.train()
117 | start_time = time.time()
118 | for batch_i, (_, imgs, targets) in enumerate(dataloader):
119 | batches_done = len(dataloader) * epoch + batch_i
120 | # print(f"batches_done = {batches_done}, epoch = {epoch}, batch_i = {batch_i}")
121 | imgs = Variable(imgs.to(device))
122 | targets = Variable(targets.to(device), requires_grad=False)
123 |
124 | loss, outputs = model(imgs, targets)
125 |
126 | loss.backward()
127 |
128 | if batches_done % opt.gradient_accumulations:
129 | # Accumulates gradient before each step
130 | optimizer.step()
131 | optimizer.zero_grad()
132 |
133 |
134 | # ----------------
135 | # Log progress
136 | # ----------------
137 |
138 | log_str = "\n---- [Epoch %d/%d, Batch %d/%d] ----\n" % (epoch+1, opt.epochs, batch_i+1, len(dataloader))
139 |
140 | metric_table = [["Metrics", *[f"YOLO Layer {i}" for i in range(len(model.yolo_layers))]]]
141 |
142 |
143 |
144 | # Log metrics at each YOLO layer
145 | for i, metric in enumerate(metrics):
146 | formats = {m: "%.6f" for m in metrics}
147 | formats["grid_size"] = "%2d"
148 | formats["cls_acc"] = "%.2f%%"
149 | row_metrics = [formats[metric] % yolo.metrics.get(metric, 0) for yolo in model.yolo_layers]
150 | metric_table += [[metric, *row_metrics]]
151 |
152 | # Tensorboard logging
153 | # tensorboard_log = []
154 | # for j, yolo in enumerate(model.yolo_layers):
155 | # for name, metric in yolo.metrics.items():
156 | # if name != "grid_size":
157 | # tensorboard_log += [(f"{name}_{j+1}", metric)]
158 | # tensorboard_log += [("loss", loss.item())]
159 | # logger.list_of_scalars_summary(tensorboard_log, batches_done)
160 |
161 | log_str += AsciiTable(metric_table).table
162 | log_str += f"\nTotal loss {loss.item()}"
163 |
164 | # Determine approximate time left for epoch
165 | epoch_batches_left = len(dataloader) - (batch_i + 1)
166 | time_left = datetime.timedelta(seconds=epoch_batches_left * (time.time() - start_time) / (batch_i + 1))
167 | log_str += f"\n---- ETA {time_left}"
168 |
169 | print(log_str)
170 |
171 | model.seen += imgs.size(0)
172 |
173 | if (epoch+1) % opt.evaluation_interval == 0 and (epoch+1) > 50:
174 | print("\n---- Evaluating Model ----")
175 | # Evaluate the model on the validation set
176 | precision, recall, AP, f1, ap_class = evaluate(
177 | model,
178 | path=valid_path,
179 | iou_thres=0.5,
180 | conf_thres=0.5,
181 | nms_thres=0.5,
182 | img_size=opt.img_size,
183 | batch_size=8,
184 | )
185 | evaluation_metrics = [
186 | ("val_precision", precision.mean()),
187 | ("val_recall", recall.mean()),
188 | ("val_mAP", AP.mean()),
189 | ("val_f1", f1.mean()),
190 | ]
191 | # logger.list_of_scalars_summary(evaluation_metrics, epoch)
192 |
193 | # Print class APs and mAP
194 | ap_table = [["Index", "Class name", "AP"]]
195 | for i, c in enumerate(ap_class):
196 | ap_table += [[c, class_names[c], "%.5f" % AP[i]]]
197 | print(AsciiTable(ap_table).table)
198 | print(f"---- mAP {AP.mean()}")
199 |
200 | current_mAP = AP.mean()
201 |
202 | # if current_mAP > best_mAP:
203 | # torch.save(model.state_dict(), f"checkpoints/yolov3_ckpt_best_f03.pth")
204 | # torch.save(model.state_dict(), f"checkpoints/yolov3_ckpt_last_f03.pth")
205 | # best_mAP = current_mAP
206 | # best_ep = epoch + 1
207 | # else:
208 | # torch.save(model.state_dict(), f"checkpoints/yolov3_ckpt_last_f03.pth")
209 |
210 | print(f"Best epoch is {best_ep}, mAP = {best_mAP}")
211 |
212 | # if epoch % opt.checkpoint_interval == 0 and epoch != 0:
213 | # torch.save(model.state_dict(), f"checkpoints/yolov3_ckpt_%d.pth" % epoch)
214 |
--------------------------------------------------------------------------------
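The training loop above accumulates gradients across `--gradient_accumulations` batches before calling `optimizer.step()`. For comparison, a generic, minimal version of that pattern looks like the sketch below; it steps once every `accum` batches, which is the usual convention, and it is not a drop-in replacement for the loop above.

```python
# Generic gradient-accumulation skeleton for reference (not a drop-in
# replacement for train.py). model(imgs, targets) follows Darknet.forward,
# which returns (loss, outputs) when targets are given.
import torch

def train_one_epoch(model, dataloader, optimizer, device, accum=2):
    model.train()
    optimizer.zero_grad()
    for batch_i, (_, imgs, targets) in enumerate(dataloader):
        imgs, targets = imgs.to(device), targets.to(device)
        loss, _ = model(imgs, targets)
        loss.backward()                      # gradients add up across batches
        if (batch_i + 1) % accum == 0:       # step once every `accum` batches
            optimizer.step()
            optimizer.zero_grad()
```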
/app/PyTorch_YOLOv3/utils/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/PyTorch_YOLOv3/utils/__init__.py
--------------------------------------------------------------------------------
/app/PyTorch_YOLOv3/utils/augmentations.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn.functional as F
3 | import numpy as np
4 |
5 |
6 | def horisontal_flip(images, targets):
7 | images = torch.flip(images, [-1])
8 | targets[:, 2] = 1 - targets[:, 2]
9 | return images, targets
10 |
--------------------------------------------------------------------------------
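Because the targets store a normalized `x_center`, flipping an image left-right only requires mirroring that coordinate around the image midline, which is all `horisontal_flip` does to the boxes. A tiny check, using the target layout `(batch_idx, cls, x, y, w, h)` built in datasets.py below:

```python
# A box centred at 25% of the image width moves to 75% after the flip.
import torch

targets = torch.tensor([[0.0, 0.0, 0.25, 0.50, 0.20, 0.10]])  # (batch_idx, cls, x, y, w, h)
targets[:, 2] = 1 - targets[:, 2]
print(targets[0, 2].item())  # 0.75
```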
/app/PyTorch_YOLOv3/utils/datasets.py:
--------------------------------------------------------------------------------
1 | import glob
2 | import random
3 | import os
4 | import sys
5 | import numpy as np
6 | from PIL import Image
7 | import torch
8 | import torch.nn.functional as F
9 | import cv2
10 |
11 | # from utils.augmentations import horisontal_flip
12 | from .augmentations import horisontal_flip
13 |
14 |
15 | from torch.utils.data import Dataset
16 | import torchvision.transforms as transforms
17 |
18 | from skimage.filters import gaussian, sobel
19 | from skimage.color import rgb2gray
20 | from skimage import exposure, io
21 |
22 | from PIL import ImageFile
23 | ImageFile.LOAD_TRUNCATED_IMAGES = True
24 |
25 | import warnings
26 | warnings.filterwarnings('ignore')
27 |
28 | def pad_to_square(img, pad_value):
29 | c, h, w = img.shape
30 | dim_diff = np.abs(h - w)
31 | # (upper / left) padding and (lower / right) padding
32 | pad1, pad2 = dim_diff // 2, dim_diff - dim_diff // 2
33 | # Determine padding
34 | pad = (0, 0, pad1, pad2) if h <= w else (pad1, pad2, 0, 0)
35 | # Add padding
36 | img = F.pad(img, pad, "constant", value=pad_value)
37 |
38 | return img, pad
39 |
40 |
41 | def resize(image, size):
42 | image = F.interpolate(image.unsqueeze(0), size=size, mode="nearest").squeeze(0)
43 | return image
44 |
45 |
46 | def random_resize(images, min_size=288, max_size=448):
47 | new_size = random.sample(list(range(min_size, max_size + 1, 32)), 1)[0]
48 | images = F.interpolate(images, size=new_size, mode="nearest")
49 | return images
50 |
51 |
52 | class ImageFolder(Dataset):
53 | def __init__(self, folder_path, img_size=416):
54 | self.files = sorted(glob.glob("%s/*.*" % folder_path))
55 | self.img_size = img_size
56 |
57 | def __getitem__(self, index):
58 | img_path = self.files[index % len(self.files)]
59 | # Extract image as PyTorch tensor
60 | # img = transforms.ToTensor()(Image.open(img_path).convert('L'))
61 | img = cv2.imread(img_path, 0)
62 | img = self.clahe_hist(img)
63 | img = self.preprocess(img/255)
64 | img = transforms.ToTensor()(img).float()
65 | # Pad to square resolution
66 | img, _ = pad_to_square(img, 0)
67 | # Resize
68 | img = resize(img, self.img_size)
69 |
70 | return img_path, img
71 |
72 | def __len__(self):
73 | return len(self.files)
74 |
75 | @classmethod
76 | def preprocess(cls, img):
77 | img = rgb2gray(img)
78 | bound = img.shape[0] // 3
79 | up = exposure.equalize_adapthist(img[:bound, :])
80 | down = exposure.equalize_adapthist(img[bound:, :])
81 | enhance = np.append(up, down, axis=0)
82 | edge = sobel(gaussian(enhance, 2))
83 | enhance = enhance + edge * 3
84 | return np.where(enhance > 1, 1, enhance)
85 | @classmethod
86 | def clahe_hist(cls, img):
87 | clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
88 | cl1 = clahe.apply(img)
89 | return cl1
90 |
91 |
92 | class ListDataset(Dataset):
93 | def __init__(self, list_path, img_size=416, augment=True, multiscale=True, normalized_labels=True):
94 | with open(list_path, "r") as file:
95 | self.img_files = file.readlines()
96 |
97 | self.label_files = [
98 | path.replace("images", "labels").replace(".png", ".txt").replace(".jpg", ".txt")
99 | for path in self.img_files
100 | ]
101 | self.img_size = img_size
102 | self.max_objects = 100
103 | self.augment = augment
104 | self.multiscale = multiscale
105 | self.normalized_labels = normalized_labels
106 | self.min_size = self.img_size - 3 * 32
107 | self.max_size = self.img_size + 3 * 32
108 | self.batch_count = 0
109 |
110 | def __getitem__(self, index):
111 |
112 | # ---------
113 | # Image
114 | # ---------
115 |
116 | img_path = self.img_files[index % len(self.img_files)].rstrip()
117 |
118 | # Extract image as PyTorch tensor
119 | # img = transforms.ToTensor()(Image.open(img_path).convert('L'))
120 |
121 | img = cv2.imread(img_path, 0)
122 | img = self.clahe_hist(img)
123 | img = self.preprocess(img/255)
124 | img = transforms.ToTensor()(img).float()
125 |
126 | # Handle images with less than three channels
127 |
128 | # if len(img.shape) != 3:
129 | # img = img.unsqueeze(0)
130 | # img = img.expand((3, img.shape[1:]))
131 |
132 | # _, h, w = img.shape
133 | # h_factor, w_factor = (h, w) if self.normalized_labels else (1, 1)
134 | # # Pad to square resolution
135 | # img, pad = pad_to_square(img, 0)
136 | # _, padded_h, padded_w = img.shape
137 |
138 | _, h, w = img.shape
139 | h_factor, w_factor = (h, w) if self.normalized_labels else (1, 1)
140 | # Pad to square resolution
141 | img, pad = pad_to_square(img, 0)
142 | _, padded_h, padded_w = img.shape
143 |
144 | # ---------
145 | # Label
146 | # ---------
147 |
148 | label_path = self.label_files[index % len(self.img_files)].rstrip()
149 |
150 | targets = None
151 | if os.path.exists(label_path):
152 |
153 | boxes = torch.from_numpy(np.loadtxt(label_path).reshape(-1, 5))
154 |
155 | # Extract coordinates for unpadded + unscaled image
156 | x1 = w_factor * (boxes[:, 1] - boxes[:, 3] / 2)
157 | y1 = h_factor * (boxes[:, 2] - boxes[:, 4] / 2)
158 | x2 = w_factor * (boxes[:, 1] + boxes[:, 3] / 2)
159 | y2 = h_factor * (boxes[:, 2] + boxes[:, 4] / 2)
160 | # Adjust for added padding
161 | x1 += pad[0]
162 | y1 += pad[2]
163 | x2 += pad[1]
164 | y2 += pad[3]
165 | # Returns (x, y, w, h)
166 | boxes[:, 1] = ((x1 + x2) / 2) / padded_w
167 | boxes[:, 2] = ((y1 + y2) / 2) / padded_h
168 | boxes[:, 3] *= w_factor / padded_w
169 | boxes[:, 4] *= h_factor / padded_h
170 |
171 | # print(f"w = {torch.max(boxes[:, 3]), torch.min(boxes[:, 3])}")
172 | # print(f"h = {torch.max(boxes[:, 4]), torch.min(boxes[:, 4])}")
173 | targets = torch.zeros((len(boxes), 6))
174 |
175 | targets[:, 1:] = boxes
176 |
177 | # Apply augmentations
178 | if self.augment:
179 | if np.random.random() < 0.5:
180 | img, targets = horisontal_flip(img, targets)
181 |
182 | return img_path, img, targets
183 |
184 | def collate_fn(self, batch):
185 |
186 | paths, imgs, targets = list(zip(*batch))
187 |
188 | # Remove empty placeholder targets
189 | targets = [boxes for boxes in targets if boxes is not None]
190 |
191 | # Add sample index to targets
192 | for i, boxes in enumerate(targets):
193 | boxes[:, 0] = i
194 |
195 | targets = torch.cat(targets, 0)
196 |
197 | # Selects new image size every tenth batch
198 | if self.multiscale and self.batch_count % 10 == 0:
199 | self.img_size = random.choice(range(self.min_size, self.max_size + 1, 32))
200 | # Resize images to input shape
201 | # print(self.img_size)
202 | imgs = torch.stack([resize(img, self.img_size) for img in imgs])
203 | self.batch_count += 1
204 | return paths, imgs, targets
205 |
206 | def __len__(self):
207 | return len(self.img_files)
208 |
209 |
210 | @classmethod
211 | def preprocess(cls, img):
212 | img = rgb2gray(img)
213 | bound = img.shape[0] // 3
214 | up = exposure.equalize_adapthist(img[:bound, :])
215 | down = exposure.equalize_adapthist(img[bound:, :])
216 | enhance = np.append(up, down, axis=0)
217 | edge = sobel(gaussian(enhance, 2))
218 | enhance = enhance + edge * 3
219 | return np.where(enhance > 1, 1, enhance)
220 | @classmethod
221 | def clahe_hist(cls, img):
222 | clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
223 | cl1 = clahe.apply(img)
224 | return cl1
225 |
--------------------------------------------------------------------------------
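The spine X-rays are much taller than they are wide, so `pad_to_square` pads the width with zeros before the image is resized to the square network input. A worked example with a 1200x500 single-channel tensor (the image size assumed in generate.py above):

```python
# Worked example of pad_to_square + resize on a tall, narrow image tensor.
import torch
from utils.datasets import pad_to_square, resize

img = torch.zeros(1, 1200, 500)   # (channels, height, width)
padded, pad = pad_to_square(img, 0)
print(padded.shape)               # torch.Size([1, 1200, 1200])
print(pad)                        # (350, 350, 0, 0): 350 px of zero padding on each side of the width
square = resize(padded, 416)
print(square.shape)               # torch.Size([1, 416, 416])
```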
/app/PyTorch_YOLOv3/utils/logger.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 |
4 | class Logger(object):
5 | def __init__(self, log_dir):
6 | """Create a summary writer logging to log_dir."""
7 | self.writer = tf.summary.FileWriter(log_dir)
8 |
9 | def scalar_summary(self, tag, value, step):
10 | """Log a scalar variable."""
11 | summary = tf.Summary(value=[tf.Summary.Value(tag=tag, simple_value=value)])
12 | self.writer.add_summary(summary, step)
13 |
14 | def list_of_scalars_summary(self, tag_value_pairs, step):
15 | """Log scalar variables."""
16 | summary = tf.Summary(value=[tf.Summary.Value(tag=tag, simple_value=value) for tag, value in tag_value_pairs])
17 | self.writer.add_summary(summary, step)
18 |
--------------------------------------------------------------------------------
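Note that this logger uses the TensorFlow 1.x summary API (`tf.summary.FileWriter`, `tf.Summary`), which no longer exists in TensorFlow 2; the corresponding calls in train.py are commented out. If that logging were re-enabled under TensorFlow 2, an equivalent logger might look like this sketch:

```python
# Sketch: a TensorFlow 2 equivalent of the Logger above (assumption: TF >= 2.0).
import tensorflow as tf

class LoggerV2:
    def __init__(self, log_dir):
        self.writer = tf.summary.create_file_writer(log_dir)

    def scalar_summary(self, tag, value, step):
        with self.writer.as_default():
            tf.summary.scalar(tag, value, step=step)
        self.writer.flush()

    def list_of_scalars_summary(self, tag_value_pairs, step):
        with self.writer.as_default():
            for tag, value in tag_value_pairs:
                tf.summary.scalar(tag, value, step=step)
        self.writer.flush()
```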
/app/PyTorch_YOLOv3/utils/parse_config.py:
--------------------------------------------------------------------------------
1 | def parse_model_config(path):
2 | """Parses the yolo-v3 layer configuration file and returns module definitions"""
3 | file = open(path, 'r')
4 | lines = file.read().split('\n')
5 | lines = [x for x in lines if x and not x.startswith('#')]
6 | lines = [x.rstrip().lstrip() for x in lines] # get rid of fringe whitespaces
7 | module_defs = []
8 | for line in lines:
9 | if line.startswith('['): # This marks the start of a new block
10 | module_defs.append({})
11 | module_defs[-1]['type'] = line[1:-1].rstrip()
12 | if module_defs[-1]['type'] == 'convolutional':
13 | module_defs[-1]['batch_normalize'] = 0
14 | else:
15 | key, value = line.split("=")
16 | value = value.strip()
17 | module_defs[-1][key.rstrip()] = value.strip()
18 |
19 | return module_defs
20 |
21 | def parse_data_config(path):
22 | """Parses the data configuration file"""
23 | options = dict()
24 | options['gpus'] = '0,1,2,3'
25 | options['num_workers'] = '10'
26 | with open(path, 'r') as fp:
27 | lines = fp.readlines()
28 | for line in lines:
29 | line = line.strip()
30 | if line == '' or line.startswith('#'):
31 | continue
32 | key, value = line.split('=')
33 | options[key.strip()] = value.strip()
34 | return options
35 |
--------------------------------------------------------------------------------
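`parse_model_config` returns a list of dictionaries, one per `[block]` in the cfg, with all values kept as strings (and `batch_normalize` defaulting to 0 for convolutional blocks). A small illustration on a made-up two-block cfg written to a temporary file:

```python
# Illustration only: the cfg snippet below is made up, not a file from this repo.
import tempfile
from utils.parse_config import parse_model_config

cfg = """
[net]
channels=1
height=416

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
activation=leaky
"""

with tempfile.NamedTemporaryFile("w", suffix=".cfg", delete=False) as tmp:
    tmp.write(cfg)

print(parse_model_config(tmp.name))
# [{'type': 'net', 'channels': '1', 'height': '416'},
#  {'type': 'convolutional', 'batch_normalize': '1', 'filters': '32',
#   'size': '3', 'stride': '1', 'activation': 'leaky'}]
```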
/app/PyTorch_YOLOv3/utils/utils.py:
--------------------------------------------------------------------------------
1 | from __future__ import division
2 | import math
3 | import time
4 | import tqdm
5 | import torch
6 | import torch.nn as nn
7 | import torch.nn.functional as F
8 | from torch.autograd import Variable
9 | import numpy as np
10 | import matplotlib.pyplot as plt
11 | import matplotlib.patches as patches
12 |
13 |
14 | def to_cpu(tensor):
15 | return tensor.detach().cpu()
16 |
17 |
18 | def load_classes(path):
19 | """
20 | Loads class labels at 'path'
21 | """
22 | fp = open(path, "r")
23 | names = fp.read().split("\n")[:-1]
24 | return names
25 |
26 |
27 | def weights_init_normal(m):
28 | classname = m.__class__.__name__
29 | if classname.find("Conv") != -1:
30 | torch.nn.init.normal_(m.weight.data, 0.0, 0.02)
31 | elif classname.find("BatchNorm2d") != -1:
32 | torch.nn.init.normal_(m.weight.data, 1.0, 0.02)
33 | torch.nn.init.constant_(m.bias.data, 0.0)
34 |
35 |
36 | def rescale_boxes(boxes, current_dim, original_shape):
37 | """ Rescales bounding boxes to the original shape """
38 | orig_h, orig_w = original_shape
39 | # The amount of padding that was added
40 | pad_x = max(orig_h - orig_w, 0) * (current_dim / max(original_shape))
41 | pad_y = max(orig_w - orig_h, 0) * (current_dim / max(original_shape))
42 | # Image height and width after padding is removed
43 | unpad_h = current_dim - pad_y
44 | unpad_w = current_dim - pad_x
45 | # Rescale bounding boxes to dimension of original image
46 | boxes[:, 0] = ((boxes[:, 0] - pad_x // 2) / unpad_w) * orig_w
47 | boxes[:, 1] = ((boxes[:, 1] - pad_y // 2) / unpad_h) * orig_h
48 | boxes[:, 2] = ((boxes[:, 2] - pad_x // 2) / unpad_w) * orig_w
49 | boxes[:, 3] = ((boxes[:, 3] - pad_y // 2) / unpad_h) * orig_h
50 | return boxes
51 |
52 |
53 | def xywh2xyxy(x):
54 | y = x.new(x.shape)
55 | y[..., 0] = x[..., 0] - x[..., 2] / 2
56 | y[..., 1] = x[..., 1] - x[..., 3] / 2
57 | y[..., 2] = x[..., 0] + x[..., 2] / 2
58 | y[..., 3] = x[..., 1] + x[..., 3] / 2
59 | return y
60 |
61 |
62 | def ap_per_class(tp, conf, pred_cls, target_cls):
63 | """ Compute the average precision, given the recall and precision curves.
64 | Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
65 | # Arguments
66 | tp: True positives (list).
67 | conf: Objectness value from 0-1 (list).
68 | pred_cls: Predicted object classes (list).
69 | target_cls: True object classes (list).
70 | # Returns
71 | The average precision as computed in py-faster-rcnn.
72 | """
73 |
74 | # Sort by objectness
75 | i = np.argsort(-conf)
76 | tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
77 |
78 | # Find unique classes
79 | unique_classes = np.unique(target_cls)
80 |
81 | # Create Precision-Recall curve and compute AP for each class
82 | ap, p, r = [], [], []
83 | for c in tqdm.tqdm(unique_classes, desc="Computing AP"):
84 | i = pred_cls == c
85 | n_gt = (target_cls == c).sum() # Number of ground truth objects
86 | n_p = i.sum() # Number of predicted objects
87 |
88 | if n_p == 0 and n_gt == 0:
89 | continue
90 | elif n_p == 0 or n_gt == 0:
91 | ap.append(0)
92 | r.append(0)
93 | p.append(0)
94 | else:
95 | # Accumulate FPs and TPs
96 | fpc = (1 - tp[i]).cumsum()
97 | tpc = (tp[i]).cumsum()
98 |
99 | # Recall
100 | recall_curve = tpc / (n_gt + 1e-16)
101 | r.append(recall_curve[-1])
102 |
103 | # Precision
104 | precision_curve = tpc / (tpc + fpc)
105 | p.append(precision_curve[-1])
106 |
107 | # AP from recall-precision curve
108 | ap.append(compute_ap(recall_curve, precision_curve))
109 |
110 | # Compute F1 score (harmonic mean of precision and recall)
111 | p, r, ap = np.array(p), np.array(r), np.array(ap)
112 | f1 = 2 * p * r / (p + r + 1e-16)
113 |
114 | return p, r, ap, f1, unique_classes.astype("int32")
115 |
116 |
117 | def compute_ap(recall, precision):
118 | """ Compute the average precision, given the recall and precision curves.
119 | Code originally from https://github.com/rbgirshick/py-faster-rcnn.
120 |
121 | # Arguments
122 | recall: The recall curve (list).
123 | precision: The precision curve (list).
124 | # Returns
125 | The average precision as computed in py-faster-rcnn.
126 | """
127 | # correct AP calculation
128 | # first append sentinel values at the end
129 | mrec = np.concatenate(([0.0], recall, [1.0]))
130 | mpre = np.concatenate(([0.0], precision, [0.0]))
131 |
132 | # compute the precision envelope
133 | for i in range(mpre.size - 1, 0, -1):
134 | mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
135 |
136 | # to calculate area under PR curve, look for points
137 | # where X axis (recall) changes value
138 | i = np.where(mrec[1:] != mrec[:-1])[0]
139 |
140 | # and sum (\Delta recall) * prec
141 | ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
142 | return ap
143 |
144 |
145 | def get_batch_statistics(outputs, targets, iou_threshold):
146 | """ Compute true positives, predicted scores and predicted labels per sample """
147 | batch_metrics = []
148 | for sample_i in range(len(outputs)):
149 |
150 | if outputs[sample_i] is None:
151 | continue
152 |
153 | output = outputs[sample_i]
154 | pred_boxes = output[:, :4]
155 | pred_scores = output[:, 4]
156 | pred_labels = output[:, -1]
157 |
158 | true_positives = np.zeros(pred_boxes.shape[0])
159 |
160 | annotations = targets[targets[:, 0] == sample_i][:, 1:]
161 | target_labels = annotations[:, 0] if len(annotations) else []
162 | if len(annotations):
163 | detected_boxes = []
164 | target_boxes = annotations[:, 1:]
165 |
166 | for pred_i, (pred_box, pred_label) in enumerate(zip(pred_boxes, pred_labels)):
167 |
168 | # If targets are found break
169 | if len(detected_boxes) == len(annotations):
170 | break
171 |
172 | # Ignore if label is not one of the target labels
173 | if pred_label not in target_labels:
174 | continue
175 |
176 | iou, box_index = bbox_iou(pred_box.unsqueeze(0), target_boxes).max(0)
177 | if iou >= iou_threshold and box_index not in detected_boxes:
178 | true_positives[pred_i] = 1
179 | detected_boxes += [box_index]
180 | batch_metrics.append([true_positives, pred_scores, pred_labels])
181 | return batch_metrics
182 |
183 |
184 | def bbox_wh_iou(wh1, wh2):
185 | wh2 = wh2.t()
186 | w1, h1 = wh1[0], wh1[1]
187 | w2, h2 = wh2[0], wh2[1]
188 | inter_area = torch.min(w1, w2) * torch.min(h1, h2)
189 | union_area = (w1 * h1 + 1e-16) + w2 * h2 - inter_area
190 | return inter_area / union_area
191 |
192 |
193 | def bbox_iou(box1, box2, x1y1x2y2=True):
194 | """
195 | Returns the IoU of two bounding boxes
196 | """
197 | if not x1y1x2y2:
198 | # Transform from center and width to exact coordinates
199 | b1_x1, b1_x2 = box1[:, 0] - box1[:, 2] / 2, box1[:, 0] + box1[:, 2] / 2
200 | b1_y1, b1_y2 = box1[:, 1] - box1[:, 3] / 2, box1[:, 1] + box1[:, 3] / 2
201 | b2_x1, b2_x2 = box2[:, 0] - box2[:, 2] / 2, box2[:, 0] + box2[:, 2] / 2
202 | b2_y1, b2_y2 = box2[:, 1] - box2[:, 3] / 2, box2[:, 1] + box2[:, 3] / 2
203 | else:
204 | # Get the coordinates of bounding boxes
205 | b1_x1, b1_y1, b1_x2, b1_y2 = box1[:, 0], box1[:, 1], box1[:, 2], box1[:, 3]
206 | b2_x1, b2_y1, b2_x2, b2_y2 = box2[:, 0], box2[:, 1], box2[:, 2], box2[:, 3]
207 |
208 |     # Get the coordinates of the intersection rectangle
209 | inter_rect_x1 = torch.max(b1_x1, b2_x1)
210 | inter_rect_y1 = torch.max(b1_y1, b2_y1)
211 | inter_rect_x2 = torch.min(b1_x2, b2_x2)
212 | inter_rect_y2 = torch.min(b1_y2, b2_y2)
213 | # Intersection area
214 | inter_area = torch.clamp(inter_rect_x2 - inter_rect_x1 + 1, min=0) * torch.clamp(
215 | inter_rect_y2 - inter_rect_y1 + 1, min=0
216 | )
217 | # Union Area
218 | b1_area = (b1_x2 - b1_x1 + 1) * (b1_y2 - b1_y1 + 1)
219 | b2_area = (b2_x2 - b2_x1 + 1) * (b2_y2 - b2_y1 + 1)
220 |
221 | iou = inter_area / (b1_area + b2_area - inter_area + 1e-16)
222 |
223 | return iou
224 |
225 |
226 | def non_max_suppression(prediction, conf_thres=0.5, nms_thres=0.4):
227 | """
228 | Removes detections with lower object confidence score than 'conf_thres' and performs
229 | Non-Maximum Suppression to further filter detections.
230 | Returns detections with shape:
231 | (x1, y1, x2, y2, object_conf, class_score, class_pred)
232 | """
233 |
234 | # From (center x, center y, width, height) to (x1, y1, x2, y2)
235 | prediction[..., :4] = xywh2xyxy(prediction[..., :4])
236 | output = [None for _ in range(len(prediction))]
237 | for image_i, image_pred in enumerate(prediction):
238 | # Filter out confidence scores below threshold
239 | image_pred = image_pred[image_pred[:, 4] >= conf_thres]
240 | # If none are remaining => process next image
241 | if not image_pred.size(0):
242 | continue
243 | # Object confidence times class confidence
244 | score = image_pred[:, 4] * image_pred[:, 5:].max(1)[0]
245 | # Sort by it
246 | image_pred = image_pred[(-score).argsort()]
247 | class_confs, class_preds = image_pred[:, 5:].max(1, keepdim=True)
248 | detections = torch.cat((image_pred[:, :5], class_confs.float(), class_preds.float()), 1)
249 | # Perform non-maximum suppression
250 | keep_boxes = []
251 | while detections.size(0):
252 | large_overlap = bbox_iou(detections[0, :4].unsqueeze(0), detections[:, :4]) > nms_thres
253 | label_match = detections[0, -1] == detections[:, -1]
254 | # Indices of boxes with lower confidence scores, large IOUs and matching labels
255 | invalid = large_overlap & label_match
256 | weights = detections[invalid, 4:5]
257 | # Merge overlapping bboxes by order of confidence
258 | detections[0, :4] = (weights * detections[invalid, :4]).sum(0) / weights.sum()
259 | keep_boxes += [detections[0]]
260 | detections = detections[~invalid]
261 | if keep_boxes:
262 | output[image_i] = torch.stack(keep_boxes)
263 |
264 | return output
265 |
266 |
267 | def build_targets(pred_boxes, pred_cls, target, anchors, ignore_thres):
268 |
269 | ByteTensor = torch.cuda.ByteTensor if pred_boxes.is_cuda else torch.ByteTensor
270 | FloatTensor = torch.cuda.FloatTensor if pred_boxes.is_cuda else torch.FloatTensor
271 |
272 | nB = pred_boxes.size(0)
273 | nA = pred_boxes.size(1)
274 | nC = pred_cls.size(-1)
275 | nG = pred_boxes.size(2)
276 |
277 | # Output tensors
278 | obj_mask = ByteTensor(nB, nA, nG, nG).fill_(0)
279 | noobj_mask = ByteTensor(nB, nA, nG, nG).fill_(1)
280 | class_mask = FloatTensor(nB, nA, nG, nG).fill_(0)
281 | iou_scores = FloatTensor(nB, nA, nG, nG).fill_(0)
282 | tx = FloatTensor(nB, nA, nG, nG).fill_(0)
283 | ty = FloatTensor(nB, nA, nG, nG).fill_(0)
284 | tw = FloatTensor(nB, nA, nG, nG).fill_(0)
285 | th = FloatTensor(nB, nA, nG, nG).fill_(0)
286 | tcls = FloatTensor(nB, nA, nG, nG, nC).fill_(0)
287 |
288 | # Convert to position relative to box
289 | target_boxes = target[:, 2:6] * nG
290 | gxy = target_boxes[:, :2]
291 | gwh = target_boxes[:, 2:]
292 | # Get anchors with best iou
293 | ious = torch.stack([bbox_wh_iou(anchor, gwh) for anchor in anchors])
294 | best_ious, best_n = ious.max(0)
295 | # Separate target values
296 | b, target_labels = target[:, :2].long().t()
297 | gx, gy = gxy.t()
298 | gw, gh = gwh.t()
299 | gi, gj = gxy.long().t()
300 | # Set masks
301 | obj_mask[b, best_n, gj, gi] = 1
302 | noobj_mask[b, best_n, gj, gi] = 0
303 |
304 | # Set noobj mask to zero where iou exceeds ignore threshold
305 | for i, anchor_ious in enumerate(ious.t()):
306 | noobj_mask[b[i], anchor_ious > ignore_thres, gj[i], gi[i]] = 0
307 |
308 | # Coordinates
309 | tx[b, best_n, gj, gi] = gx - gx.floor()
310 | ty[b, best_n, gj, gi] = gy - gy.floor()
311 | # Width and height
312 | tw[b, best_n, gj, gi] = torch.log(gw / anchors[best_n][:, 0] + 1e-16)
313 | th[b, best_n, gj, gi] = torch.log(gh / anchors[best_n][:, 1] + 1e-16)
314 | # One-hot encoding of label
315 | tcls[b, best_n, gj, gi, target_labels] = 1
316 | # Compute label correctness and iou at best anchor
317 | class_mask[b, best_n, gj, gi] = (pred_cls[b, best_n, gj, gi].argmax(-1) == target_labels).float()
318 | iou_scores[b, best_n, gj, gi] = bbox_iou(pred_boxes[b, best_n, gj, gi], target_boxes, x1y1x2y2=False)
319 |
320 | tconf = obj_mask.float()
321 | return iou_scores, class_mask, obj_mask, noobj_mask, tx, ty, tw, th, tcls, tconf
322 |
--------------------------------------------------------------------------------
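Note on the detection utilities above: `bbox_iou` works on corner coordinates with a +1 pixel convention, and it backs both `get_batch_statistics` (matching predictions to ground-truth boxes) and `non_max_suppression` (merging overlapping same-class detections). A minimal sanity check, a sketch only, assuming it is imported from `utils/utils.py` with the `PyTorch_YOLOv3` folder as the working directory:

```python
import torch

from utils.utils import bbox_iou  # assumes PyTorch_YOLOv3 is the working directory

# Two 10x10 boxes: one identical to the query, one shifted by 5 pixels in x and y.
query = torch.tensor([[0.0, 0.0, 9.0, 9.0]])
boxes = torch.tensor([[0.0, 0.0, 9.0, 9.0],
                      [5.0, 5.0, 14.0, 14.0]])

print(bbox_iou(query, boxes))  # tensor([1.0000, 0.1429]) -> overlap 25 / union 175
```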
/app/VertebraSegmentation/connected_component.py:
--------------------------------------------------------------------------------
1 |
2 | from torch.nn import Module, CrossEntropyLoss
3 | import torch.nn as nn
4 | import torch
5 | from torch.autograd import Variable
6 | from torch.utils.data import Dataset
7 | from torchvision.transforms import Compose, ToTensor
8 | from skimage.filters import gaussian, sobel
9 | from skimage.color import rgb2gray
10 | from skimage import exposure, io
11 | from os import listdir, mkdir, path
12 | from os.path import splitext, exists, join
13 | import numpy as np
14 | from skimage.transform import rescale, resize, downscale_local_mean
15 | import math
16 | import cv2
17 | import matplotlib.pyplot as plt
18 | import warnings
19 | 
20 | 
21 | 
22 | 
23 | from tqdm import tqdm
24 | warnings.filterwarnings('ignore')
25 |
26 |
27 | PATH_LABEL = [join("label_data", "f01"), join("label_data", "f02"), join("label_data", "f03")]
28 | SOURCE_DIR = "original_data"
29 | SOURCE_SUB_DIR = ["f01", "f02", "f03"]
30 | SUB_DIR = ["image", "label"]
31 | TARGET_DIR = "label_data"
32 |
33 | codebook = {0:(0, 0, 0), 1:(0, 8, 255), 2:(0, 93, 255), 3:(0, 178, 255), 4:(0, 255, 77), 5:(0, 255, 162),
34 | 6:(0, 255, 247), 7:(8, 255, 0), 8:(77, 0, 255), 9:(93, 255, 0), 10:(162, 0, 255), 11:(178, 255, 0),
35 | 12:(247, 0, 255), 13:(255, 0, 8), 14:(255, 0, 93), 15:(255, 0, 178), 16:(255, 76, 0),
36 | 17:(255, 162, 0), 18:(255, 247, 0)}
37 |
38 | def connected_component_label(path):
39 |
40 | # Getting the input image
41 | img = cv2.imread(path, 0)
42 | # Threshold: pixels with value <= 127 become 0, everything brighter becomes 255
43 | img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)[1]
44 | # Applying cv2.connectedComponents()
45 | num_labels, labels = cv2.connectedComponents(img)
46 |
47 |
48 | test = np.zeros((labels.shape[0], labels.shape[1], 3))
49 |
50 | for i in range(labels.shape[0]):
51 | for j in range(labels.shape[1]):
52 | test[i][j] = codebook[labels[i][j]]
53 | return cv2.cvtColor(test.astype('float32'), cv2.COLOR_BGR2RGB)
54 |
55 | # # Map component labels to hue val, 0-179 is the hue range in OpenCV
56 | # label_hue = np.uint8(179*labels/np.max(labels))
57 | # blank_ch = 255*np.ones_like(label_hue)
58 | # labeled_img = cv2.merge([label_hue, blank_ch, blank_ch])
59 |
60 | # # Converting cvt to BGR
61 | # labeled_img = cv2.cvtColor(labeled_img, cv2.COLOR_HSV2BGR)
62 |
63 | # # set bg label to black
64 | # labeled_img[label_hue==0] = 0
65 |
66 | # # Showing Original Image
67 | # img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
68 |
69 | # # plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
70 | # # plt.axis("off")
71 | # # plt.title("Orginal Image")
72 | # # plt.show()
73 |
74 | # #Showing Image after Component Labeling
75 | # # plt.imshow(cv2.cvtColor(labeled_img, cv2.COLOR_BGR2RGB))
76 | # # plt.axis('off')
77 | # # plt.title("Image after Component Labeling")
78 | # # plt.show()
79 |
80 | # return cv2.cvtColor(labeled_img, cv2.COLOR_BGR2RGB)
81 |
82 | def rgb2label(img, color_codes = None, one_hot_encode=False):
83 | if color_codes is None:
84 | color_codes = {val:i for i,val in enumerate(set( tuple(v) for m2d in img for v in m2d ))}
85 | n_labels = len(color_codes)
86 | result = np.ndarray(shape=img.shape[:2], dtype=int)
87 | result[:,:] = -1
88 |
89 | color_codes = sorted(color_codes)
90 | sort_color_codes = dict()
91 | for idx, rgb in enumerate(color_codes):
92 | result[(img==rgb).all(2)] = idx
93 | sort_color_codes[rgb] = idx
94 |
95 | # for rgb, idx in color_codes.items():
96 | # result[(img==rgb).all(2)] = idx
97 |
98 | if one_hot_encode:
99 | one_hot_labels = np.zeros((img.shape[0],img.shape[1],n_labels))
100 | # one-hot encoding
101 | for c in range(n_labels):
102 | one_hot_labels[: , : , c ] = (result == c ).astype(int)
103 | result = one_hot_labels
104 |
105 | # return result, sort_color_codes
106 | return result
107 |
108 | # def Labeling(img_path):
109 |
110 | # img = connected_component_label(img_path)
111 | # img_labels, color_codes = rgb2label(img, one_hot_encode=True)
112 |
113 | # return img_labels
114 |
115 |
116 | if __name__ == '__main__':
117 |
118 | for path in PATH_LABEL:
119 | if not exists(join(path, SUB_DIR[1])):
120 | mkdir(join(path, SUB_DIR[1]))
121 |
122 |
123 | for src_subdir in SOURCE_SUB_DIR:
124 | for file in tqdm(listdir(join(SOURCE_DIR, src_subdir, SUB_DIR[1])), desc=f"{SOURCE_DIR}/{src_subdir}/{SUB_DIR[1]}"):
125 | path = join(SOURCE_DIR, src_subdir, SUB_DIR[1], f"{splitext(file)[0]}.png")
126 | colorful_img = connected_component_label(path)
127 | cv2.imwrite(join(TARGET_DIR, src_subdir, SUB_DIR[1], f"{splitext(file)[0]}.png"), colorful_img)
128 |
129 | print("Generated labeling data!")
130 |
--------------------------------------------------------------------------------
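Note on `connected_component_label` above: the per-pixel double loop is correct but slow on 500x1200 masks, and the fixed `codebook` only covers 19 labels (background plus 18 vertebrae), so a mask with more connected components would raise a `KeyError`. A hedged sketch of an equivalent vectorized lookup, assuming `codebook` and a `labels` array from `cv2.connectedComponents` as in the function above:

```python
import numpy as np

# Build a (19, 3) colour lookup table once, then index it with the whole label image.
lut = np.array([codebook[i] for i in range(len(codebook))], dtype=np.float32)
colored = lut[labels]  # shape (H, W, 3); same values the nested loop produces
```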
/app/VertebraSegmentation/coordinate.py:
--------------------------------------------------------------------------------
1 | from os.path import splitext
2 | 
3 | 
4 | def get_information(filename, dir):
5 |     """Read <dir>/<name>.txt and return a list of [x1, y1, x2, y2, box_w, box_h] string fields."""
6 | 
7 |     name = splitext(filename)[0]
8 | 
9 |     coordinate = []
10 | 
11 |     # one box per line: x1 y1 x2 y2 box_w box_h
12 |     with open(f"{dir}/{name}.txt", 'r') as f:
13 |         for line in f:
14 |             coordinate.append(line.strip().split(' '))
15 | 
16 |     return coordinate
17 | 
--------------------------------------------------------------------------------
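Note on `get_information` above: it returns the raw text fields, so callers convert them with `int`/`float` themselves. A minimal usage sketch, with `get_information` in scope and a hypothetical image name:

```python
# Each line of <dir>/<name>.txt is expected to hold "x1 y1 x2 y2 box_w box_h".
boxes = get_information("image1.png", "object_detect_label")  # hypothetical file name
x1, y1, x2, y2, box_w, box_h = map(int, boxes[0])
```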
/app/VertebraSegmentation/filp_and_rotate.py:
--------------------------------------------------------------------------------
1 | from os import mkdir, listdir
2 | from os.path import splitext, exists, join
3 | from PIL import Image, ImageOps
4 | from tqdm import tqdm
5 | import torchvision.transforms.functional as F
6 | import cv2
7 | import numpy as np
8 | import os
9 | import math
10 | import shutil
11 | from coordinate import get_information  # plain import so this script can be run directly from this folder
12 |
13 | SOURCE_DIR = [join("original_data", "f01"), join("original_data", "f02")]
14 | LABELING_SOURCE_DIR = [join("label_data", "f02"), join("label_data", "f03")]
15 | OBJECT_SOURCE_DIR = [join("detect_data", "f02"), join("detect_data", "f03")]
16 |
17 | TARGET_DIR = "extend_dataset"
18 | LABELING_TARGET_DIR = "label_extend_dataset"
19 | OBJECT_DIR = "extend_detect_data"
20 |
21 | SUB_DIR = ["image", "label"]
22 |
23 | SPLIT_DIR = 'splitting_dataset'
24 |
25 | ORIGINAL_SPLIT = "original_split_data"
26 | ORIGINAL_SPLIT_DATA = [join("original_split_data", "f01"), join("original_split_data", "f02"), join("original_split_data", "f03")]
27 |
28 | ORIGINAL_SRC = "original_data"
29 |
30 | OBJECT_DETECTION_LABEL = "object_detect_label"
31 | DETECTION_DATA = "detect_data"
32 | F_SUBDIR = ["f01", "f02", "f03"]
33 | # ORIGINAL_SOURCE_DIR = [join("original_data", "f01"), join("original_data", "f02"), join("original_data", "f03")]
34 |
35 |
36 |
37 | ROTATION_ANGLE = [180]
38 |
39 |
40 | Max_H = 110
41 | Max_W = 190
42 | def delete_dir(path):
43 |
44 | try:
45 | shutil.rmtree(path)
46 | except OSError as e:
47 | print(e)
48 | else:
49 | print("The directory is deleted successfully")
50 |
51 | def contrast_img(img1, c, b):
52 | rows, cols, channels = img1.shape
53 |
54 | blank = np.zeros([rows, cols, channels], img1.dtype)
55 | dst = cv2.addWeighted(img1, c, blank, 1-c, b)
56 | # cv2.imshow('original_img', img1)
57 | # cv2.imshow("contrast_img", dst)
58 | return dst
59 |
60 | def create_valid_return_len(dir, save_path, source_path):
61 |
62 | os.makedirs(save_path, exist_ok=True)
63 | os.makedirs("test/valid", exist_ok=True)
64 |
65 | box_num = []
66 |
67 | for file in tqdm(sorted(listdir(join(ORIGINAL_SRC, source_path))), desc=f"{ORIGINAL_SRC}//{source_path}"): # .png
68 | img = cv2.imread(join(ORIGINAL_SRC, source_path, file), 0)
69 | boxes = get_information(file, dir)
70 |
71 | box_num.append(len(boxes))
72 |
73 | for id, box in enumerate(boxes):
74 | box = list(map(int, box))
75 | x1, y1, x2, y2 = box[0], box[1], box[2], box[3]
76 | width, height = box[4], box[5]
77 | detect_region = img[y1:y2+1, x1:x2+1]
78 | 
79 | 
80 | 
81 |
82 | cv2.imwrite(join(save_path, f"{splitext(file)[0]}_{id}.png"), detect_region)
83 |
84 | return box_num
85 |
86 |
87 | if __name__ == '__main__':
88 |
89 | # if not exists(TARGET_DIR):
90 | # mkdir(TARGET_DIR)
91 | # for sub_dir in SUB_DIR:
92 | # dir = join(TARGET_DIR, sub_dir)
93 | # if not exists(dir):
94 | # mkdir(dir)
95 |
96 |
97 | # for source in SOURCE_DIR:
98 | # for sub_dir in SUB_DIR:
99 | # for file in tqdm(listdir(join(source, sub_dir)), desc=f"{source}//{sub_dir}"):
100 | # img = Image.open(join(source, sub_dir, file))
101 | # img.save(join(TARGET_DIR, sub_dir, f"{splitext(file)[0]}_0.png"))
102 | # for angle in ROTATION_ANGLE:
103 | # img.rotate(angle, expand=True).save(join(TARGET_DIR, sub_dir, f"{splitext(file)[0]}_{angle}.png"))
104 | # img = ImageOps.mirror(img)
105 | # img.save(join(TARGET_DIR, sub_dir, f"{splitext(file)[0]}_f0.png"))
106 | # for angle in ROTATION_ANGLE:
107 | # img.rotate(angle, expand=True).save(join(TARGET_DIR, sub_dir, f"{splitext(file)[0]}_f{angle}.png"))
108 |
109 | #######################################################################################################################
110 | # # for labeling dataset
111 | # if not exists(LABELING_TARGET_DIR):
112 | # mkdir(LABELING_TARGET_DIR)
113 | # for sub_dir in SUB_DIR:
114 | # dir = join(LABELING_TARGET_DIR, sub_dir)
115 | # if not exists(dir):
116 | # mkdir(dir)
117 |
118 | # for source in LABELING_SOURCE_DIR:
119 | # for sub_dir in SUB_DIR:
120 | # for file in tqdm(listdir(join(source, sub_dir)), desc=f"{source}//{sub_dir}"):
121 | # img = Image.open(join(source, sub_dir, file))
122 | # img.save(join(LABELING_TARGET_DIR, sub_dir, f"{splitext(file)[0]}_0.png"))
123 | # for angle in ROTATION_ANGLE:
124 | # img.rotate(angle, expand=True).save(join(LABELING_TARGET_DIR, sub_dir, f"{splitext(file)[0]}_{angle}.png"))
125 | # img = ImageOps.mirror(img)
126 | # img.save(join(LABELING_TARGET_DIR, sub_dir, f"{splitext(file)[0]}_f0.png"))
127 | # for angle in ROTATION_ANGLE:
128 | # img.rotate(angle, expand=True).save(join(LABELING_TARGET_DIR, sub_dir, f"{splitext(file)[0]}_f{angle}.png"))
129 |
130 | ########################################################################################################################
131 | # for object detect splitting dataset
132 | os.makedirs(OBJECT_DETECTION_LABEL, exist_ok=True)
133 | os.makedirs(DETECTION_DATA, exist_ok=True)
134 |
135 |
136 | for sub in F_SUBDIR:
137 | # os.makedirs(join(OBJECT_DETECTION_LABEL, sub), exist_ok=True)
138 | os.makedirs(join(DETECTION_DATA, sub), exist_ok=True)
139 | for sub_dir in SUB_DIR:
140 | os.makedirs(join(DETECTION_DATA, sub, sub_dir), exist_ok=True)
141 |
142 | for source in F_SUBDIR: # f01, f02, f03
143 | for sub_dir in SUB_DIR: # image, label
144 | for file in tqdm(listdir(join(ORIGINAL_SRC, source, sub_dir)), desc=f"{ORIGINAL_SRC}//{source}//{sub_dir}"): # .png
145 | img = cv2.imread(join(ORIGINAL_SRC, source, sub_dir, file), 0)
146 | boxes = get_information(file, "object_detect_label")
147 |
148 | for id, box in enumerate(boxes):
149 | box = list(map(float, box))
150 | x1, y1, x2, y2 = math.floor(box[0]), math.floor(box[1]), math.ceil(box[2]), math.ceil(box[3])
151 | x1 = max(x1 - 50, 0)                  # widen the crop but keep it inside the image
152 | x2 = min(x2 + 50, img.shape[1] - 1)
153 | width, height = x2-x1, y2-y1
154 |
155 | ##############################################################################
156 | # if width < Max_W:
157 | # remain = Max_W - width
158 | # halfWidth, theOtherHalfWidth = remain//2, remain - remain//2
159 | # width = Max_W
160 |
161 | # if height < Max_H:
162 | # remain = Max_H - height
163 | # halfHeight, theOtherHalfHeight = remain//2, remain - remain//2
164 | # box_h = Max_H
165 |
166 | # x1 = x1 - halfWidth
167 | # x2 = x2 + theOtherHalfWidth
168 | # y1 = y1 - halfHeight
169 | # y2 = y2 + theOtherHalfHeight
170 |
171 | # x1 = 0 if x1 < 0 else x1
172 | # y1 = 0 if y1 < 0 else y1
173 | # x2 = 499 if x2 > 499 else x2
174 | # y2 = 1199 if y2 > 1199 else y2
175 | ################################################################################
176 |
177 | detect_region = img[y1:y2+1, x1:x2+1]
178 | 
179 |
180 |
181 | cv2.imwrite(join(DETECTION_DATA, source, sub_dir, f"{splitext(file)[0]}_{id}.png"), detect_region)
182 |
183 |
184 | # extend object detect dataset
185 |
186 |
187 |
188 | OBJ_EXTEND = "extend_detect_data"
189 |
190 | os.makedirs(OBJ_EXTEND, exist_ok=True)
191 |
192 | for sub_dir in SUB_DIR:
193 | dir = join(OBJ_EXTEND, sub_dir)
194 | delete_dir(dir)
195 | os.makedirs(dir, exist_ok=True)
196 |
197 | for source in OBJECT_SOURCE_DIR:
198 | for sub_dir in SUB_DIR:
199 | for file in tqdm(listdir(join(source, sub_dir)), desc=f"{source}//{sub_dir}"):
200 | img = Image.open(join(source, sub_dir, file))
201 | img.save(join(OBJECT_DIR, sub_dir, f"{splitext(file)[0]}_0.png"))
202 | for angle in ROTATION_ANGLE:
203 | img.rotate(angle, expand=True).save(join(OBJECT_DIR, sub_dir, f"{splitext(file)[0]}_{angle}.png"))
204 | img = ImageOps.mirror(img)
205 | img.save(join(OBJECT_DIR, sub_dir, f"{splitext(file)[0]}_f0.png"))
206 | for angle in ROTATION_ANGLE:
207 | img.rotate(angle, expand=True).save(join(OBJECT_DIR, sub_dir, f"{splitext(file)[0]}_f{angle}.png"))
208 |
209 |
210 |
211 |
212 | # VALID_SOURCE = "coordinate"
213 | # VALID_DATA = "valid_data"
214 | # VALID_DIR = "f01/image"
215 | # os.makedirs(VALID_DATA, exist_ok=True)
216 |
217 |
218 | # for file in tqdm(listdir(join(ORIGINAL_SRC, VALID_DIR)), desc=f"{VALID_DATA}"): # .png
219 | # img = cv2.imread(join(ORIGINAL_SRC, VALID_DIR, file), 0)
220 | # boxes = get_information(file, OBJECT_DETECTION_LABEL)
221 |
222 | # num = len(boxes)
223 |
224 | # for id, box in enumerate(boxes):
225 | # box = list(map(int, box))
226 | # x1, y1, x2, y2 = box[0], box[1], box[2], box[3]
227 | # width, height = box[4], box[5]
228 | # detect_region = np.zeros((height, width))
229 | # detect_region = img[y1:y2+1, x1:x2+1]
230 |
231 |
232 | # cv2.imwrite(join(VALID_DATA, f"{splitext(file)[0]}_{id}.png"), detect_region)
233 |
234 | ######################################################################################################################
235 |
236 | # exist_split_extend = True
237 |
238 | # if not exists(SPLIT_DIR):
239 | # mkdir(SPLIT_DIR)
240 | # for sub_dir in SUB_DIR:
241 | # dir = join(SPLIT_DIR, sub_dir)
242 | # if not exists(dir):
243 | # mkdir(dir)
244 | # exist_split_extend = False
245 |
246 | # if not exist_split_extend:
247 | # for sub_dir in SUB_DIR:
248 | # for file in tqdm(listdir(join(TARGET_DIR, sub_dir)), desc=f"{SPLIT_DIR}//{sub_dir}"):
249 |
250 | # img = cv2.imread(join(TARGET_DIR, sub_dir, file), 0)
251 | # Height, Width = img.shape
252 | # gap = Height // 3
253 | # truncated = 0
254 |
255 | # for _ in range(3):
256 |
257 | # split_img = img[truncated:truncated+gap, :]
258 | # truncated += gap
259 |
260 | # if _ == 0:
261 | # cv2.imwrite(join(SPLIT_DIR, sub_dir, f"{splitext(file)[0]}_top.png"), split_img)
262 | # elif _ == 1:
263 | # cv2.imwrite(join(SPLIT_DIR, sub_dir, f"{splitext(file)[0]}_mid.png"), split_img)
264 | # else:
265 | # cv2.imwrite(join(SPLIT_DIR, sub_dir, f"{splitext(file)[0]}_bot.png"), split_img)
266 |
267 | # exist_split_original = True
268 |
269 | # if not exists(ORIGINAL_SPLIT):
270 | # mkdir(ORIGINAL_SPLIT)
271 |
272 | # for sub_org in ORIGINAL_SPLIT_DATA:
273 | # if not exists(sub_org):
274 | # mkdir(sub_org)
275 | # for sub_dir in SUB_DIR:
276 | # dir = join(sub_org, sub_dir)
277 | # if not exists(dir):
278 | # mkdir(dir)
279 | # exist_split_original = False
280 |
281 | # if not exist_split_original:
282 | # for dir in ["f01", "f02", "f03"]:
283 | # for sub_dir in SUB_DIR:
284 | # for file in tqdm(listdir(join(ORIGINAL_SRC, dir, sub_dir)), desc=f"{ORIGINAL_SPLIT}//{sub_dir}"):
285 | # img = cv2.imread(join(ORIGINAL_SRC, dir, sub_dir, file), 0)
286 |
287 | # Height, Width = img.shape
288 | # gap = Height // 3
289 | # truncated = 0
290 |
291 | # for _ in range(3):
292 |
293 | # split_img = img[truncated:truncated+gap, :]
294 | # truncated += gap
295 |
296 | # if _ == 0:
297 | # cv2.imwrite(join(ORIGINAL_SPLIT, dir, sub_dir, f"{splitext(file)[0]}_top.png"), split_img)
298 | # elif _ == 1:
299 | # cv2.imwrite(join(ORIGINAL_SPLIT, dir, sub_dir, f"{splitext(file)[0]}_mid.png"), split_img)
300 | # else:
301 | # cv2.imwrite(join(ORIGINAL_SPLIT, dir, sub_dir, f"{splitext(file)[0]}_bot.png"), split_img)
302 |
303 |
--------------------------------------------------------------------------------
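Note on the augmentation block above: each crop is written out in four variants (suffixes `_0`, `_180`, `_f0`, `_f180`). A standalone sketch of the same PIL transform chain, with a hypothetical input path:

```python
from PIL import Image, ImageOps

img = Image.open("detect_data/f02/image/some_crop.png")  # hypothetical crop
variants = {
    "_0": img,                                            # original
    "_180": img.rotate(180, expand=True),                 # upside down
    "_f0": ImageOps.mirror(img),                          # horizontal mirror
    "_f180": ImageOps.mirror(img).rotate(180, expand=True),
}
```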
/app/VertebraSegmentation/generate.py:
--------------------------------------------------------------------------------
1 | import xml.etree.ElementTree as XET
2 | import os
3 | from os import mkdir, listdir
4 | from os.path import splitext, exists, join
5 |
6 |
7 | source_dir = [join("./images")]
8 | target_dir = [join("./labels")]
9 | custom_path = "data/custom/images"
10 |
11 | valid_dir = join("data", "f01", "image")
12 |
13 | sub_dir = ["train", "valid"]
14 | out_dir = ["images", "labels"]
15 |
16 | img_width = 500
17 | img_height = 1200
18 |
19 |
20 |
21 | # label_idx x_center y_center width height
22 |
23 | for dir in out_dir:
24 | if not exists(dir):
25 | mkdir(dir)
26 |
27 |
28 | for dir in out_dir:
29 | for sub in sub_dir:
30 | if not exists(join(dir, sub)):
31 | mkdir(join(dir, sub))
32 |
33 | for dir in ["xml"]:
34 | for file in sorted(listdir(f"{dir}/")):
35 |
36 | label_idx = []
37 | label_xmin = []
38 | label_ymin = []
39 | label_xmax = []
40 | label_ymax = []
41 |
42 | name = splitext(file)[0]
43 |
44 |
45 | tree = XET.parse(f"xml/{name}.xml")
46 | root = tree.getroot()
47 |
48 | # the first six children are assumed to be image metadata; vertebra <object> entries start at index 6
49 | with open(f"object_detect_label/{name}.txt", 'w') as f:
50 |     for i in range(6, len(root)):
51 | 
52 |         x1, x2 = int(root[i][4][0].text), int(root[i][4][2].text)
53 |         y1, y2 = int(root[i][4][1].text), int(root[i][4][3].text)
54 |         width = x2 - x1
55 |         height = y2 - y1
56 | 
57 |         f.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(x1, y1, x2, y2, width, height))
58 | 
59 | 
60 | 
61 |
--------------------------------------------------------------------------------
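Note on the XML parsing above: the index-based access `root[i][4][k]` assumes the usual Pascal-VOC layout, where each `<object>` carries a `<bndbox>` with `xmin/ymin/xmax/ymax`. A tag-based sketch of the same extraction under that assumption, with a hypothetical file name:

```python
import xml.etree.ElementTree as XET

root = XET.parse("xml/image1.xml").getroot()  # hypothetical annotation file
for obj in root.iter("object"):
    bb = obj.find("bndbox")
    x1, y1 = int(bb.find("xmin").text), int(bb.find("ymin").text)
    x2, y2 = int(bb.find("xmax").text), int(bb.find("ymax").text)
    print(x1, y1, x2, y2, x2 - x1, y2 - y1)
```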
/app/VertebraSegmentation/net/data/__init__.py:
--------------------------------------------------------------------------------
1 | from .dataset import VertebraDataset
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/data/__pycache__/__init__.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/data/__pycache__/__init__.cpython-36.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/data/__pycache__/__init__.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/data/__pycache__/__init__.cpython-37.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/data/__pycache__/dataset.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/data/__pycache__/dataset.cpython-36.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/data/__pycache__/dataset.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/data/__pycache__/dataset.cpython-37.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/data/dataset.py:
--------------------------------------------------------------------------------
1 | from torch.utils.data import Dataset
2 | from torchvision.transforms import Compose, ToTensor
3 | from skimage.filters import gaussian, sobel
4 | from skimage.color import rgb2gray
5 | from skimage import exposure, io
6 | from os import listdir, path
7 | import torch
8 | import numpy as np
9 | import cv2
10 | from os.path import splitext, exists, join
11 |
12 | codebook = {(0, 0, 0): 0, (0, 8, 255): 1, (0, 93, 255): 2, (0, 178, 255): 3, (0, 255, 77): 4, (0, 255, 162): 5,
13 | (0, 255, 247): 6, (8, 255, 0): 7, (77, 0, 255): 8, (93, 255, 0): 9, (162, 0, 255): 10, (178, 255, 0): 11,
14 | (247, 0, 255): 12, (255, 0, 8): 13, (255, 0, 93): 14, (255, 0, 178): 15, (255, 76, 0): 16,
15 | (255, 162, 0): 17, (255, 247, 0): 18}
16 |
17 | class VertebraDataset(Dataset):
18 | def __init__(self, dataset_path, train=False, image_folder="image", mask_folder="label"):
19 | self.train = train
20 | self.dataset_path = dataset_path
21 | if self.train:
22 | self.image_folder = image_folder
23 | self.images = sorted(listdir(path.join(dataset_path, image_folder)))
24 | self.mask_folder = mask_folder
25 | self.masks = sorted(listdir(path.join(dataset_path, mask_folder)))
26 | else:
27 | self.images = sorted(listdir(path.join(dataset_path)), key=lambda x: int((splitext(x)[0])[5:]))
28 | self.transform = Compose([ToTensor()])
29 |
30 | def __getitem__(self, idx):
31 | if self.train:
32 | img_path = path.join(self.dataset_path, self.image_folder, self.images[idx])
33 | # img = io.imread(img_path)
34 | img = cv2.imread(img_path, 0)
35 | img = self.clahe_hist(img)
36 | img = img / 255
37 | # img = self.preprocess(img)
38 |
39 | out_img = np.zeros((1,) + img.shape, dtype=np.float64)  # np.float was removed from recent NumPy releases
40 | out_img[:, ] = img
41 | mask_path = path.join(self.dataset_path, self.mask_folder, self.masks[idx])
42 |
43 | ######################### original #########################
44 |
45 | mask = np.array(rgb2gray(io.imread(mask_path))) / 255
46 |
47 | ######################### original #########################
48 |
49 | ######################### color #########################
50 |
51 | # mask = cv2.imread(mask_path, 1)
52 | # mask = self.rgb2label(mask, color_codes=codebook, one_hot_encode=True)
53 |
54 | ######################### color #########################
55 |
56 | return torch.as_tensor(out_img, dtype=torch.float), torch.as_tensor(mask, dtype=torch.float)
57 | else:
58 | img_path = path.join(self.dataset_path, self.images[idx])
59 | # img = io.imread(img_path)
60 | img = cv2.imread(img_path, 0)
61 | img = self.clahe_hist(img)
62 | img = img / 255
63 | # img = self.preprocess(img)
64 | out_img = np.zeros((1,) + img.shape, dtype=np.float64)
65 | out_img[:, ] = img
66 | return torch.as_tensor(out_img, dtype=torch.float), self.images[idx]
67 |
68 | def __len__(self):
69 | return len(self.images)
70 |
71 | @classmethod
72 | def preprocess(cls, img):
73 | img = rgb2gray(img)
74 | bound = img.shape[0] // 3
75 | up = exposure.equalize_adapthist(img[:bound, :])
76 | down = exposure.equalize_adapthist(img[bound:, :])
77 | enhance = np.append(up, down, axis=0)
78 | edge = sobel(gaussian(enhance, 2))
79 | enhance = enhance + edge * 3
80 | return np.where(enhance > 1, 1, enhance)
81 | @classmethod
82 | def clahe_hist(cls, img):
83 | clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
84 | cl1 = clahe.apply(img)
85 | return cl1
86 | @classmethod
87 | def rgb2label(cls, img, color_codes = None, one_hot_encode=False):
88 | if color_codes is None:
89 | color_codes = {val:i for i,val in enumerate(set( tuple(v) for m2d in img for v in m2d ))}
90 | n_labels = len(color_codes)
91 | result = np.ndarray(shape=img.shape[:2], dtype=int)
92 | result[:,:] = -1
93 |
94 | color_codes = sorted(color_codes)
95 | sort_color_codes = dict()
96 | for idx, rgb in enumerate(color_codes):
97 | result[(img==rgb).all(2)] = idx
98 | sort_color_codes[rgb] = idx
99 |
100 | # for rgb, idx in color_codes.items():
101 | # result[(img==rgb).all(2)] = idx
102 |
103 | if one_hot_encode:
104 | # one_hot_labels = np.zeros((img.shape[0],img.shape[1],n_labels))
105 | one_hot_labels = np.zeros((n_labels, img.shape[0],img.shape[1]))
106 | # one-hot encoding
107 | for c in range(n_labels):
108 | # one_hot_labels[ :, : , c ] = (result == c ).astype(int)
109 | one_hot_labels[ c , : , : ] = (result == c ).astype(int)
110 | result = one_hot_labels
111 |
112 | # return result, sort_color_codes
113 | return result
114 |
115 | if __name__ == '__main__':
116 |
117 | dataset = VertebraDataset("..//..//extend_dataset", train=True)
118 |
119 | a, b = dataset[0]
120 | print(a.shape)
121 | print(b.shape)
122 |
--------------------------------------------------------------------------------
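Note on `VertebraDataset` above: in training mode it reads paired `image/` and `label/` subfolders; in inference mode it reads a flat folder and sorts files by the numeric part of their names (the sort key strips the first five characters, i.e. names like `imageNN.png`). A minimal usage sketch; the folder name is an assumption based on the other scripts in this repo:

```python
from torch.utils.data import DataLoader

from data.dataset import VertebraDataset  # run from VertebraSegmentation/net

train_set = VertebraDataset("../extend_detect_data", train=True)
loader = DataLoader(train_set, batch_size=1, shuffle=True)

img, mask = next(iter(loader))
print(img.shape, mask.shape)  # (1, 1, H, W) CLAHE-enhanced image and (1, H, W) mask in [0, 1]
```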
/app/VertebraSegmentation/net/model/__init__.py:
--------------------------------------------------------------------------------
1 | from .unet import Unet
2 | from .resunet import ResUnet
3 |
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/__pycache__/__init__.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/model/__pycache__/__init__.cpython-36.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/__pycache__/__init__.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/model/__pycache__/__init__.cpython-37.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/__pycache__/components.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/model/__pycache__/components.cpython-36.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/__pycache__/components.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/model/__pycache__/components.cpython-37.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/__pycache__/resunet.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/model/__pycache__/resunet.cpython-36.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/__pycache__/resunet.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/model/__pycache__/resunet.cpython-37.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/__pycache__/resunet_parts.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/model/__pycache__/resunet_parts.cpython-36.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/__pycache__/resunet_parts.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/model/__pycache__/resunet_parts.cpython-37.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/__pycache__/unet.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/model/__pycache__/unet.cpython-36.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/__pycache__/unet.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/model/__pycache__/unet.cpython-37.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/__pycache__/unet_parts.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/model/__pycache__/unet_parts.cpython-36.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/__pycache__/unet_parts.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/app/VertebraSegmentation/net/model/__pycache__/unet_parts.cpython-37.pyc
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/components.py:
--------------------------------------------------------------------------------
1 | from torch.nn import Sequential, Conv2d, ConvTranspose2d, ReLU, BatchNorm2d, Module, functional
2 | import torch.nn.functional as F
3 | import torch
4 |
5 | # y > x
6 | def padding(x, y):
7 | if x.shape == y.shape:
8 | return x
9 | else:
10 | s2 = y.shape[2] - x.shape[2]
11 | s3 = y.shape[3] - x.shape[3]
12 | return functional.pad(x, (0, s3, 0, s2))
13 |
14 |
15 | # x > y
16 | def cropping(x, y):
17 | if x.shape == y.shape:
18 | return x
19 | else:
20 | # diffY = x.size()[2] - y.size()[2]
21 | # diffX = x.size()[3] - y.size()[3]
22 |
23 | # y = F.pad(y, [diffX // 2, diffX - diffX // 2,
24 | # diffY // 2, diffY - diffY // 2])
25 | # return torch.cat([x, y], dim=1)
26 | return x[:, :, :y.shape[2], :y.shape[3]]
27 |
28 |
29 | def Double_Conv2d(in_channels, out_channels, padding=0):
30 | return Sequential(
31 | Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3, padding=padding),
32 | ReLU(inplace=True),
33 |
34 | Conv2d(in_channels=out_channels, out_channels=out_channels, kernel_size=3, padding=padding),
35 | ReLU(inplace=True),
36 | )
37 |
38 |
39 | def DeConv2D(in_channels, out_channels):
40 | return Sequential(
41 | ConvTranspose2d(in_channels, out_channels, kernel_size=2, stride=2),
42 | ReLU(inplace=True),
43 | )
44 |
45 |
46 | def Residual_Unit(in_channels, out_channels, stride=1, padding=0):
47 | return Sequential(
48 | BatchNorm2d(in_channels),
49 | ReLU(inplace=True),
50 | Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3, stride=stride, padding=padding),
51 | )
52 |
53 |
54 | class ResidualBlock(Module):
55 | def __init__(self, in_channels, out_channels, f_stride=1, padding=0):
56 | super().__init__()
57 | self.ru1 = Residual_Unit(in_channels=in_channels, out_channels=out_channels, stride=f_stride, padding=padding)
58 | self.ru2 = Residual_Unit(in_channels=out_channels, out_channels=out_channels, padding=padding)
59 |
60 | def forward(self, x):
61 | x = self.ru1(x)
62 | residual = x
63 | x = self.ru2(x)
64 | x += residual
65 | return x
66 |
67 |
--------------------------------------------------------------------------------
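Note on `cropping` above: it trims `x`'s spatial dimensions down to `y`'s so that a transposed-convolution output can be concatenated with its skip connection. A quick shape check (a sketch, not repository code), run from `VertebraSegmentation/net`:

```python
import torch

from model.components import cropping

x = torch.zeros(1, 64, 101, 101)  # slightly larger decoder feature map
y = torch.zeros(1, 64, 100, 100)  # encoder skip connection
print(cropping(x, y).shape)       # torch.Size([1, 64, 100, 100])
```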
/app/VertebraSegmentation/net/model/resunet.py:
--------------------------------------------------------------------------------
1 | from torch.nn import Module, Conv2d, ConvTranspose2d
2 | from torch import cat
3 | from .components import Residual_Unit, ResidualBlock, cropping
4 |
5 |
6 | class ResUnet(Module):
7 | def __init__(self, in_channels, out_channels):
8 | super().__init__()
9 | self.conv1 = Conv2d(in_channels=in_channels, out_channels=64, kernel_size=3, padding=1)
10 | self.resunit1l = Residual_Unit(64, 64, padding=1)
11 | self.resblock2l = ResidualBlock(64, 128, f_stride=2, padding=1)
12 | self.resblock3l = ResidualBlock(128, 256, f_stride=2, padding=1)
13 | self.resbridge = ResidualBlock(256, 512, f_stride=2, padding=1)
14 |
15 | self.up3 = ConvTranspose2d(512, 256, kernel_size=3, stride=2)
16 | self.up2 = ConvTranspose2d(256, 128, kernel_size=3, stride=2)
17 | self.up1 = ConvTranspose2d(128, 64, kernel_size=3, stride=2)
18 |
19 | self.resblock3r = ResidualBlock(512, 256, padding=1)
20 | self.resblock2r = ResidualBlock(256, 128, padding=1)
21 | self.resblock1r = ResidualBlock(128, 64, padding=1)
22 |
23 | self.final = Conv2d(64, out_channels, kernel_size=1)
24 |
25 | def forward(self, x):
26 | x = self.conv1(x)
27 | x = self.resunit1l(x)
28 | l1 = x
29 |
30 | x = self.resblock2l(x)
31 | l2 = x
32 |
33 | x = self.resblock3l(x)
34 | l3 = x
35 |
36 | x = self.resbridge(x)
37 |
38 | x = self.up3(x)
39 | x = cropping(x, l3)
40 | x = cat([l3, x], dim=1)
41 | x = self.resblock3r(x)
42 |
43 | x = self.up2(x)
44 | x = cropping(x, l2)
45 | x = cat([l2, x], dim=1)
46 | x = self.resblock2r(x)
47 |
48 | x = self.up1(x)
49 | x = cropping(x, l1)
50 | x = cat([l1, x], dim=1)
51 | x = self.resblock1r(x)
52 |
53 | x = self.final(x)
54 | return x
55 |
56 | """ Full assembly of the parts to form the complete network """
57 |
58 | import torch.nn.functional as F
59 |
60 | from .resunet_parts import *
61 |
62 |
63 | class ResidualUNet(nn.Module):
64 | def __init__(self, n_channels, n_classes, bilinear=True):
65 | super(ResidualUNet, self).__init__()
66 | self.n_channels = n_channels
67 | self.n_classes = n_classes
68 | self.bilinear = bilinear
69 |
70 | self.inc = ResidualDoubleConv(n_channels, 32)
71 | self.down1 = Down(32, 64)
72 | self.down2 = Down(64, 128)
73 | self.down3 = Down(128, 256)
74 | self.down4 = Down(256, 512)
75 | self.down5 = Down(512, 512)
76 | self.up1 = Up(1024, 512, bilinear)
77 | self.up2 = Up(768, 256, bilinear)
78 | self.up3 = Up(384, 128, bilinear)
79 | self.up4 = Up(192, 64, bilinear)
80 | self.up5 = Up(96, 32, bilinear)
81 | self.outc = OutConv(32, n_classes)
82 |
83 | def forward(self, x):
84 | x1 = self.inc(x)
85 | x2 = self.down1(x1)
86 | x3 = self.down2(x2)
87 | x4 = self.down3(x3)
88 | x5 = self.down4(x4)
89 | x6 = self.down5(x5)
90 | x = self.up1(x6, x5)
91 | x = self.up2(x, x4)
92 | x = self.up3(x, x3)
93 | x = self.up4(x, x2)
94 | x = self.up5(x, x1)
95 | logits = self.outc(x)
96 |
97 | return logits
98 |
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/resunet_parts.py:
--------------------------------------------------------------------------------
1 | """ Parts of the U-Net model """
2 |
3 | import torch
4 | import torch.nn as nn
5 | import torch.nn.functional as F
6 |
7 | class ResidualDoubleConv(nn.Module):
8 | """(convolution => [BN] => ReLU) * 2"""
9 |
10 | def __init__(self, in_channels, out_channels, stride=1):
11 | super().__init__()
12 | self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1)
13 | self.bn1 = nn.BatchNorm2d(out_channels)
14 | self.relu = nn.ReLU(inplace=True)
15 | self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
16 | self.bn2 = nn.BatchNorm2d(out_channels)
17 |
18 | self.shortcut = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride)
19 |
20 | def forward(self, x):
21 | identity = x
22 |
23 | out = self.conv1(x)
24 | out = self.bn1(out)
25 | out = self.relu(out)
26 |
27 | out = self.conv2(out)
28 | out = self.bn2(out)
29 |
30 | out += self.shortcut(identity)
31 | out = self.relu(out)
32 |
33 | return out
34 |
35 |
36 | class Down(nn.Module):
37 | """Downscaling with maxpool then double conv"""
38 |
39 | def __init__(self, in_channels, out_channels):
40 | super().__init__()
41 | self.maxpool_conv = nn.Sequential(
42 | nn.MaxPool2d(2),
43 | ResidualDoubleConv(in_channels, out_channels)
44 | )
45 |
46 | def forward(self, x):
47 | return self.maxpool_conv(x)
48 |
49 |
50 | class Up(nn.Module):
51 | """Upscaling then double conv"""
52 |
53 | def __init__(self, in_channels, out_channels, bilinear=True):
54 | super().__init__()
55 |
56 | # if bilinear, use the normal convolutions to reduce the number of channels
57 | if bilinear:
58 | self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
59 | else:
60 | self.up = nn.ConvTranspose2d(in_channels // 2, in_channels // 2, kernel_size=2, stride=2)
61 |
62 | self.conv = ResidualDoubleConv(in_channels, out_channels)
63 |
64 | def forward(self, x1, x2):
65 | x1 = self.up(x1)
66 | # input is CHW
67 | diffY = x2.size()[2] - x1.size()[2]
68 | diffX = x2.size()[3] - x1.size()[3]
69 |
70 | x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
71 | diffY // 2, diffY - diffY // 2])
72 |
73 | x = torch.cat([x2, x1], dim=1)
74 | return self.conv(x)
75 |
76 |
77 | class OutConv(nn.Module):
78 | def __init__(self, in_channels, out_channels):
79 | super(OutConv, self).__init__()
80 | self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
81 |
82 | def forward(self, x):
83 | return self.conv(x)
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/unet.py:
--------------------------------------------------------------------------------
1 | from torch.nn import Module, MaxPool2d, Conv2d
2 | import torch
3 | from .components import Double_Conv2d, DeConv2D, cropping
4 |
5 |
6 | class Unet(Module):
7 | def __init__(self, in_channels, out_channels):
8 | super().__init__()
9 | self.double1l = Double_Conv2d(in_channels, 64, padding=0)
10 |
11 | self.double2l = Double_Conv2d(64, 128, padding=0)
12 | self.double3l = Double_Conv2d(128, 256, padding=0)
13 | self.double4l = Double_Conv2d(256, 512, padding=0)
14 | self.doubleb = Double_Conv2d(512, 1024, padding=0)
15 |
16 | self.maxpooling = MaxPool2d(kernel_size=2, stride=2)
17 |
18 | self.up1 = DeConv2D(1024, 512)
19 | self.up2 = DeConv2D(512, 256)
20 | self.up3 = DeConv2D(256, 128)
21 | self.up4 = DeConv2D(128, 64)
22 |
23 | self.double1r = Double_Conv2d(1024, 512, padding=0)
24 | self.double2r = Double_Conv2d(512, 256, padding=0)
25 | self.double3r = Double_Conv2d(256, 128, padding=0)
26 | self.double4r = Double_Conv2d(128, 64, padding=0)
27 |
28 | self.final = Conv2d(64, out_channels, kernel_size=1)
29 |
30 | def forward(self, x):
31 | l1 = self.double1l(x)
32 | x = self.maxpooling(l1)
33 |
34 | l2 = self.double2l(x)
35 | x = self.maxpooling(l2)
36 |
37 | l3 = self.double3l(x)
38 | x = self.maxpooling(l3)
39 |
40 | l4 = self.double4l(x)
41 | x = self.maxpooling(l4)
42 |
43 | x = self.doubleb(x)
44 |
45 | x = self.up1(x)
46 | l4 = cropping(l4, x)
47 | x = torch.cat([l4, x], dim=1)
48 | # x = cropping(l4, x)
49 | x = self.double1r(x)
50 |
51 | x = self.up2(x)
52 | l3 = cropping(l3, x)
53 | x = torch.cat([l3, x], dim=1)
54 | # x = cropping(l3, x)
55 | x = self.double2r(x)
56 |
57 | x = self.up3(x)
58 | l2 = cropping(l2, x)
59 | x = torch.cat([l2, x], dim=1)
60 | # x = cropping(l2, x)
61 | x = self.double3r(x)
62 |
63 | x = self.up4(x)
64 | l1 = cropping(l1, x)
65 | x = torch.cat([l1, x], dim=1)
66 | # x = cropping(l1, x)
67 | x = self.double4r(x)
68 |
69 | x = self.final(x)
70 | return x
71 |
72 |
73 | """ Full assembly of the parts to form the complete network """
74 |
75 | import torch.nn.functional as F
76 |
77 | from .unet_parts import *
78 |
79 |
80 | class UNet(nn.Module):
81 | def __init__(self, n_channels, n_classes, bilinear=True):
82 | super(UNet, self).__init__()
83 | self.n_channels = n_channels
84 | self.n_classes = n_classes
85 | self.bilinear = bilinear
86 |
87 | self.inc = DoubleConv(n_channels, 64)
88 | self.down1 = Down(64, 128)
89 | self.down2 = Down(128, 256)
90 | self.down3 = Down(256, 512)
91 | self.down4 = Down(512, 512)
92 | self.up1 = Up(1024, 256, bilinear)
93 | self.up2 = Up(512, 128, bilinear)
94 | self.up3 = Up(256, 64, bilinear)
95 | self.up4 = Up(128, 64, bilinear)
96 | self.outc = OutConv(64, n_classes)
97 |
98 | def forward(self, x):
99 | x1 = self.inc(x)
100 | x2 = self.down1(x1)
101 | x3 = self.down2(x2)
102 | x4 = self.down3(x3)
103 | x5 = self.down4(x4)
104 | x = self.up1(x5, x4)
105 | x = self.up2(x, x3)
106 | x = self.up3(x, x2)
107 | x = self.up4(x, x1)
108 | logits = self.outc(x)
109 | return logits
--------------------------------------------------------------------------------
/app/VertebraSegmentation/net/model/unet_parts.py:
--------------------------------------------------------------------------------
1 | """ Parts of the U-Net model """
2 |
3 | import torch
4 | import torch.nn as nn
5 | import torch.nn.functional as F
6 |
7 |
8 | class DoubleConv(nn.Module):
9 | """(convolution => [BN] => ReLU) * 2"""
10 |
11 | def __init__(self, in_channels, out_channels):
12 | super().__init__()
13 | self.double_conv = nn.Sequential(
14 | nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
15 | nn.BatchNorm2d(out_channels),
16 | nn.ReLU(inplace=True),
17 | nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
18 | nn.BatchNorm2d(out_channels),
19 | nn.ReLU(inplace=True)
20 | )
21 |
22 | def forward(self, x):
23 | return self.double_conv(x)
24 |
25 |
26 | class Down(nn.Module):
27 | """Downscaling with maxpool then double conv"""
28 |
29 | def __init__(self, in_channels, out_channels):
30 | super().__init__()
31 | self.maxpool_conv = nn.Sequential(
32 | nn.MaxPool2d(2),
33 | DoubleConv(in_channels, out_channels)
34 | )
35 |
36 | def forward(self, x):
37 | return self.maxpool_conv(x)
38 |
39 |
40 | class Up(nn.Module):
41 | """Upscaling then double conv"""
42 |
43 | def __init__(self, in_channels, out_channels, bilinear=True):
44 | super().__init__()
45 |
46 | # if bilinear, use the normal convolutions to reduce the number of channels
47 | if bilinear:
48 | self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
49 | else:
50 | self.up = nn.ConvTranspose2d(in_channels // 2, in_channels // 2, kernel_size=2, stride=2)
51 |
52 | self.conv = DoubleConv(in_channels, out_channels)
53 |
54 | def forward(self, x1, x2):
55 | x1 = self.up(x1)
56 | # input is CHW
57 | diffY = x2.size()[2] - x1.size()[2]
58 | diffX = x2.size()[3] - x1.size()[3]
59 |
60 | x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
61 | diffY // 2, diffY - diffY // 2])
62 | x = torch.cat([x2, x1], dim=1)
63 | return self.conv(x)
64 |
65 |
66 | class OutConv(nn.Module):
67 | def __init__(self, in_channels, out_channels):
68 | super(OutConv, self).__init__()
69 | self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
70 |
71 | def forward(self, x):
72 | return self.conv(x)
--------------------------------------------------------------------------------
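Note on `Up` above: with `bilinear=True` the upsampling does not reduce channels, so `in_channels` must equal the sum of the channels of the upsampled map and the skip connection (hence sums like 1024 = 512 + 512 or 768 = 512 + 256 in `UNet`/`ResidualUNet`). A quick shape check, run from `VertebraSegmentation/net`:

```python
import torch

from model.unet_parts import Up

up = Up(in_channels=1024, out_channels=256, bilinear=True)
x1 = torch.zeros(1, 512, 25, 25)  # deeper feature map, upsampled inside Up
x2 = torch.zeros(1, 512, 50, 50)  # skip connection
print(up(x1, x2).shape)           # torch.Size([1, 256, 50, 50])
```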
/app/VertebraSegmentation/net/predict.py:
--------------------------------------------------------------------------------
1 | from torch.utils.data import DataLoader
2 | from tqdm import tqdm
3 | from skimage import io
4 | from data import VertebraDataset
5 | from model.unet import Unet
6 | import torch
7 | import torch.nn as nn
8 | import numpy as np
9 | import warnings
10 | from model.unet import UNet
11 | from model.resunet import ResidualUNet
12 | import cv2
13 | import math
14 |
15 |
16 |
17 | warnings.filterwarnings('ignore')
18 |
19 |
20 |
21 | codebook = {0:(0, 0, 0), 1:(0, 8, 255), 2:(0, 93, 255), 3:(0, 178, 255), 4:(0, 255, 77), 5:(0, 255, 162),
22 | 6:(0, 255, 247), 7:(8, 255, 0), 8:(77, 0, 255), 9:(93, 255, 0), 10:(162, 0, 255), 11:(178, 255, 0),
23 | 12:(247, 0, 255), 13:(255, 0, 8), 14:(255, 0, 93), 15:(255, 0, 178), 16:(255, 76, 0),
24 | 17:(255, 162, 0), 18:(255, 247, 0)}
25 |
26 |
27 | def normalize_data(output, threshold=0.6):
28 |
29 | return np.where(output > threshold, 1, 0)
30 |
31 |
32 | ############################# color ###################################
33 | # def predict(model, loader, save_path="..//test//predict_color"):
34 | ############################# color ###################################
35 |
36 | ############################# original ###################################
37 | # def predict(model, loader, save_path="..//test//predict"):
38 | ############################# original ###################################
39 |
40 | ############################# detect ###################################
41 | def predict(model, loader, save_path="..//test//detect"):
42 | # def predict(model, loader, save_path="..//test//valid"):
43 | ############################# detect ###################################
44 | model.eval()
45 | with torch.no_grad():
46 | for _, (img, filename) in tqdm(enumerate(loader), total=len(loader), desc="Predict"):
47 | img = img.to(device)
48 | output = model(img)
49 |
50 | # output = (torch.softmax(output, dim=1)[:, 1])
51 |
52 | ############################# original ###################################
53 |
54 | output = torch.sigmoid(output)[0, :]
55 | output = (normalize_data(output.cpu().numpy())*255).astype(np.uint8)
56 |
57 | ############################# original ###################################
58 |
59 | ############################# color ###################################
60 |
61 | # # (19, 1200, 500)
62 |
63 | # # output = torch.sigmoid(output)
64 | # # output = output[0][0]
65 | # # output = (normalize_data(output.unsqueeze(0).cpu().numpy())*255).astype(np.uint8)
66 |
67 | # output = output.permute(0, 2, 3, 1) # (1200, 500, 19)
68 | # output = torch.sigmoid(output)
69 | # output = torch.argmax(output, dim=3)
70 | # output = image_decode(output[0]).cpu().numpy().astype(np.uint8)
71 |
72 | ############################# color ###################################
73 |
74 |
75 | for dim in range(output.shape[0]):
76 | io.imsave(f"{save_path}//p{filename[dim]}", output[dim])
77 |
78 |
79 | # def predict(model, loader, save_path="..//test//valid"):
80 | # model.eval()
81 | # with torch.no_grad():
82 | # for _, (img, filename) in tqdm(enumerate(loader), total=len(loader), desc="Predict"):
83 | # img = img.to(device)
84 | # output = model(img)
85 |
86 | # output = torch.sigmoid(output)[0, :]
87 | # output = (normalize_data(output.cpu().numpy())*255).astype(np.uint8)
88 |
89 | # for dim in range(output.shape[0]):
90 | # io.imsave(f"{save_path}//p{filename[dim]}", output[dim])
91 |
92 |
93 | # def predict_split(model, loader, save_path="..//test//predict_split"):
94 | # model.eval()
95 | # with torch.no_grad():
96 | # for _, (img, filename) in tqdm(enumerate(loader), total=len(loader), desc="Predict"):
97 | # img = img.to(device)
98 | # output = model(img)
99 | # # output = (torch.softmax(output, dim=1)[:, 1])
100 | # output = torch.sigmoid(output)[0, :]
101 | # output = (normalize_data(output.cpu().numpy())*255).astype(np.uint8)
102 |
103 | # for dim in range(output.shape[0]):
104 | # io.imsave(f"{save_path}//p{filename[dim]}", output[dim])
105 |
106 |
107 | # def predict_one(img):
108 | # img = VertebraDataset.preprocess(img)
109 | # format_img = np.zeros([1, 1, img.shape[0], img.shape[1]])
110 | # format_img[0, 0] = img
111 | # format_img = torch.tensor(format_img, dtype=torch.float)
112 | # device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
113 | # model = Unet(in_channels=1, out_channels=2)
114 | # model.load_state_dict(torch.load(model_path)["state_dict"])
115 | # model = model.to(device)
116 | # model.eval()
117 | # with torch.no_grad():
118 | # format_img = format_img.to(device)
119 | # output = model(format_img)
120 | # output = (torch.softmax(output, dim=1)[:, 1]) * 255
121 | # output = output.cpu().numpy().astype(np.uint8)
122 | # return output[0]
123 |
124 |
125 | if __name__ == '__main__':
126 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
127 |
128 |
129 |
130 | ########################### color ###############################
131 |
132 | # dataset = VertebraDataset("..//original_data//f01//image")
133 | # model = ResidualUNet(n_channels=1, n_classes=19)
134 | # # checkpoint = torch.load("save//best_color.pt")
135 | # checkpoint = torch.load("save//last_color.pt")
136 |
137 | ########################### color ###############################
138 |
139 | ########################### original ###############################
140 |
141 | # dataset = VertebraDataset("..//original_data//f01//image")
142 | # model = ResidualUNet(n_channels=1, n_classes=1)
143 | # # model = UNet(n_channels=1, n_classes=2)
144 | # checkpoint = torch.load("save//best.pt")
145 | # # checkpoint = torch.load("save//last.pt")
146 |
147 | ########################### original ###############################
148 |
149 | ########################### detect ###############################
150 |
151 | # dataset = VertebraDataset("..//detect_data//f01//image")
152 | dataset = VertebraDataset("..//valid_data//")
153 | model = ResidualUNet(n_channels=1, n_classes=1)
154 | # model = UNet(n_channels=1, n_classes=2)
155 | checkpoint = torch.load("save//best_detect.pt")
156 | # checkpoint = torch.load("save//last_detect.pt")
157 |
158 | ########################### detect ###############################
159 |
160 |
161 |
162 |
163 | # torch.save({"state_dict": model.state_dict(), "loss": loss_mean, "batchsize": batch_size, "Epoch": ep + 1}, path.join(save_dir, "last.pt"))
164 | model.load_state_dict(checkpoint["state_dict"])
165 | loader = DataLoader(dataset, batch_size=1)
166 | model = model.to(device)
167 |
168 | predict(model, loader)
169 | print("Done.")
170 |
--------------------------------------------------------------------------------
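Note on `normalize_data` above: the sigmoid output is binarized at a fixed threshold (0.6 here, 0.7 in `net/train.py`) before the masks are written out. A tiny illustration of the rule it applies:

```python
import numpy as np

probs = np.array([0.20, 0.59, 0.61, 0.95])
print(np.where(probs > 0.6, 1, 0))  # [0 0 1 1] -- the same rule normalize_data uses
```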
/app/VertebraSegmentation/net/train.py:
--------------------------------------------------------------------------------
1 | from torch.nn import Module, CrossEntropyLoss, BCEWithLogitsLoss
2 | from torch.optim import Adam
3 | from torch.utils.data import DataLoader
4 | import torch
5 | import torch.nn as nn
6 | from tqdm import tqdm
7 | from os import path
8 | from model.unet import Unet
9 | from model.unet import UNet # new
10 | from model.resunet import ResidualUNet
11 | from data.dataset import VertebraDataset
12 | import matplotlib.pyplot as plt
13 | import time
14 | import warnings
15 | import numpy as np
16 | import cv2
17 | import torch.nn.functional as F
18 | warnings.filterwarnings('ignore')
19 |
20 | def dice_coef(target, truth, smooth=1.0):
21 | target = target.contiguous().view(-1)
22 | truth = truth.contiguous().view(-1)
23 | union = target.sum() + truth.sum()
24 | intersection = torch.sum(target * truth)
25 | dice = (2 * intersection + smooth) / (union + smooth)
26 | return dice
27 |
28 |
29 |
30 | def Multiclass_dice_coef(input, target, weights=None):
31 | num_labels = target.shape[1] # one-hot encoding length
32 |
33 | if weights is None:
34 | weights = torch.ones(num_labels) # uniform weights for all classes
35 |
36 | # weights[0] = 0
37 |
38 | totalLoss = 0
39 |
40 | for idx in range(num_labels):
41 | diceLoss = dice_coef(input[:, idx], target[:, idx])
42 | if weights is not None:
43 | diceLoss *= weights[idx]
44 | totalLoss += diceLoss
45 |
46 | return totalLoss/num_labels
47 |
48 | def normalize_data(output, threshold=0.7):
49 | output = output.cpu().detach().numpy()
50 | output = np.where(output > threshold, 1.0, 0)
51 | output = torch.from_numpy(output).to("cuda")
52 |
53 | return output
54 |
55 | # def save_fig(epoch, loss, trainscore, testscore, save_dir=path.join("save")):
56 | # plt.plot(epoch, loss, label="Loss")
57 | # plt.title("Loss")
58 | # plt.xlabel("Epoch")
59 | # plt.ylabel("Loss")
60 | # plt.legend()
61 | # plt.savefig(path.join(save_dir, "loss.png"))
62 | # plt.clf()
63 |
64 | # plt.plot(epoch, trainscore, label="Train")
65 | # plt.plot(epoch, testscore, label="Test")
66 | # plt.title("Score")
67 | # plt.xlabel("Epoch")
68 | # plt.ylabel("Score")
69 | # plt.legend()
70 | # plt.savefig(path.join(save_dir, "score.png"))
71 | # plt.clf()
72 |
73 |
74 | def eval(model, loader, device):
75 | scores = list()
76 | model.eval()
77 | with torch.no_grad():
78 | for _, (img, mask) in tqdm(enumerate(loader), total=len(loader), desc="Evaluate"):
79 | img = img.to(device)
80 | mask = mask.to(device)
81 | output = model(img)
82 |
83 | # output = torch.softmax(output, dim=1)
84 |
85 | ######################### original #########################
86 |
87 | output = torch.sigmoid(output)
88 | output = normalize_data(output)  # the output must be binarized to 0/1 by thresholding
89 | score = dice_coef(output[:, :], mask[:, :output.shape[2], :output.shape[3]])
90 |
91 | ######################### original #########################
92 |
93 | ######################### color #########################
94 |
95 | # output = torch.sigmoid(output)
96 | # output = normalize_data(output, threshold=0.5)
97 |
98 | # # output = torch.softmax(output, dim=1)
99 | # # output = torch.argmax(output, dim=1)
100 |
101 | # score = Multiclass_dice_coef(output, mask)
102 | # # score = dice_coef(output[0][0], mask[0][0])
103 |
104 | ######################### color #########################
105 |
106 | scores.append(score)
107 | return torch.mean(torch.stack(scores, dim=0))
108 |
109 |
110 | def run_one_epoch(model, loader, device, criterion, optimizer):
111 | total_loss = 0
112 | model.train()
113 | for _, (img, mask) in tqdm(enumerate(loader), total=len(loader), desc="Train"):
114 | img = img.to(device)
115 | mask = mask.to(device)
116 | optimizer.zero_grad()
117 | output = model(img)
118 |
119 |
120 | ######################### color #########################
121 |
122 | # # output = F.sigmoid(output)
123 |
124 | # # output = normalize_data(output, threshold=0.7)
125 |
126 | # # loss = criterion(output[0], mask[0])
127 | # loss = criterion(output, mask)
128 |
129 | ######################### color #########################
130 |
131 | ######################### original #########################
132 |
133 | loss = criterion(output[0,0,:,:], mask[0, :, :])
134 |
135 | ######################### original #########################
136 |
137 | loss.backward()
138 | optimizer.step()
139 | total_loss += loss.item()  # .item() detaches the scalar so the graph is not retained
140 | return total_loss / len(loader)
141 |
142 | class DiceLoss(nn.Module):
143 | def __init__(self, weight=None, size_average=True):
144 | super(DiceLoss, self).__init__()
145 |
146 | def forward(self, inputs, targets, smooth=1):
147 |
148 | # comment out if your model contains a sigmoid or equivalent activation layer
149 | inputs = torch.sigmoid(inputs)  # F.sigmoid is deprecated
150 |
151 | #flatten label and prediction tensors
152 | inputs = inputs.view(-1)
153 | targets = targets.view(-1)
154 |
155 | intersection = (inputs * targets).sum()
156 | dice = (2.*intersection + smooth)/(inputs.sum() + targets.sum() + smooth)
157 |
158 | return 1 - dice
159 |
160 | class MulticlassDiceLoss(nn.Module):
161 | """
162 | Requires a one-hot encoded target and applies DiceLoss to each class iteratively.
163 | Expects input.shape[:2] and target.shape[:2] to be (N, C), where N is the
164 | batch size and C is the number of classes.
165 | """
166 | def __init__(self):
167 | super(MulticlassDiceLoss, self).__init__()
168 |
169 | def forward(self, input, target, weights=None):
170 |
171 | num_labels = target.shape[1] # one-hot encoding length
172 |
173 | if weights is None:
174 | weights = torch.ones(num_labels) #uniform weights for all classes
175 |
176 |
177 | dice = DiceLoss()
178 | totalLoss = 0
179 |
180 | for idx in range(num_labels):
181 | diceLoss = dice(input[:, idx], target[:, idx])
182 | if weights is not None:
183 | diceLoss *= weights[idx]
184 | totalLoss += diceLoss
185 |
186 | return totalLoss/num_labels
187 |
188 |
189 | def train(model, traindataset, testdataset, device, epochs, criterion, optimizer, batch_size=1, save_dir=path.join("save")):
190 | fig_epoch = list()
191 | fig_loss = list()
192 | fig_train_score = list()
193 | fig_test_score = list()
194 |
195 | highest_epoch = highest_score = 0
196 |
197 | trainloader = DataLoader(traindataset, batch_size=batch_size, shuffle=True, num_workers=10, pin_memory=True)
198 | testloader = DataLoader(testdataset, batch_size=batch_size, shuffle=True, num_workers=10, pin_memory=True)
199 |
200 | model = model.to(device)
201 |
202 | for ep in range(epochs):
203 | timer = time.perf_counter()  # time.clock() was removed in Python 3.8
204 |
205 | learning_rate = 0
206 |
207 | for param_group in optimizer.param_groups:
208 | learning_rate = param_group['lr']
209 |
210 | adjust_learning_rate(learning_rate, optimizer, ep)
211 |
212 | print(f"[ Epoch {ep + 1}/{epochs} ]")
213 | loss_mean = run_one_epoch(model, trainloader, device, criterion, optimizer)
214 | train_score = eval(model, trainloader, device)
215 | test_score = eval(model, testloader, device)
216 |
217 | # fig_epoch.append(ep + 1)
218 | # fig_loss.append(loss_mean)
219 | # fig_train_score.append(train_score)
220 | # fig_test_score.append(test_score)
221 | # save_fig(fig_epoch, fig_loss, fig_train_score, fig_test_score, save_dir=save_dir)
222 |
223 | if test_score > highest_score:
224 | highest_score = test_score
225 | highest_epoch = ep + 1
226 | ######################### original #########################
227 | # torch.save({"state_dict": model.state_dict(), "loss": loss_mean, "batchsize": batch_size, "Epoch": ep + 1}, path.join(save_dir, "best.pt"))
228 | ######################### original #########################
229 |
230 | ######################### color #########################
231 | # torch.save({"state_dict": model.state_dict(), "loss": loss_mean, "batchsize": batch_size, "Epoch": ep + 1}, path.join(save_dir, "best_color.pt"))
232 | ######################### color #########################
233 |
234 | ######################### detect #########################
235 | torch.save({"state_dict": model.state_dict(), "loss": loss_mean, "batchsize": batch_size, "Epoch": ep + 1}, path.join(save_dir, "best_test_f01.pt"))
236 | ######################### detect #########################
237 |
238 | ######################### original #########################
239 | # torch.save({"state_dict": model.state_dict(), "loss": loss_mean, "batchsize": batch_size, "Epoch": ep + 1}, path.join(save_dir, "last.pt"))
240 | ######################### original #########################
241 |
242 | ######################### color #########################
243 | # torch.save({"state_dict": model.state_dict(), "loss": loss_mean, "batchsize": batch_size, "Epoch": ep + 1}, path.join(save_dir, "last_color.pt"))
244 | ######################### color #########################
245 |
246 | ######################### detect #########################
247 | torch.save({"state_dict": model.state_dict(), "loss": loss_mean, "batchsize": batch_size, "Epoch": ep + 1}, path.join(save_dir, "last_test_f01.pt"))
248 | ######################### detect #########################
249 | print(f"""
250 | Best Score {highest_score} @ Epoch {highest_epoch}
251 | Learning Rate: {learning_rate}
252 | Loss: {loss_mean}
253 | Train Dice: {train_score}
254 | Test Dice: {test_score}
255 | Time passed: {round(time.perf_counter() - timer)} seconds.
256 | """)
257 |
258 | def adjust_learning_rate(LEARNING_RATE, optimizer, epoch):
259 | lr = LEARNING_RATE * (0.8 ** (epoch // 70))
260 | for param_group in optimizer.param_groups:
261 | param_group['lr'] = lr
262 |
263 | def clahe_hist(img):
264 | clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
265 | cl1 = clahe.apply(img)
266 | return cl1
267 |
268 | if __name__ == '__main__':
269 | EPOCH = 120
270 | BATCHSIZE = 1
271 |
272 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
273 | # device = "cpu"
274 |
275 | ######################### color #########################
276 |
277 | # traindataset = VertebraDataset("..//label_extend_dataset", train=True)
278 | # testdataset = VertebraDataset("..//label_data//f01", train=True)
279 | # model = ResidualUNet(n_channels=1, n_classes=19)
280 | # criterion = BCEWithLogitsLoss()
281 | # # criterion = MulticlassDiceLoss()
282 |
283 | ######################### color #########################
284 |
285 | ######################### original #########################
286 |
287 | # traindataset = VertebraDataset("..//extend_dataset", train=True)
288 | # testdataset = VertebraDataset("..//original_data//f01", train=True)
289 | # model = ResidualUNet(n_channels=1, n_classes=1)
290 | # criterion = BCEWithLogitsLoss()
291 |
292 | ######################### original #########################
293 |
294 | ######################### split #########################
295 |
296 | # traindataset = VertebraDataset("..//splitting_dataset", train=True)
297 | # testdataset = VertebraDataset("..//original_split_data//f01", train=True)
298 |
299 | ######################### split #########################
300 |
301 | ######################### detect #########################
302 |
303 | traindataset = VertebraDataset("..//train", train=True)
304 | testdataset = VertebraDataset("..//test", train=True)
305 | model = ResidualUNet(n_channels=1, n_classes=1)
306 | criterion = BCEWithLogitsLoss()
307 |
308 | ######################### detect #########################
309 |
310 | # model = ResidualUNet(n_channels=1, n_classes=1)
311 | # model = UNet(n_channels=1, n_classes=2)
312 | # model = Unet(in_channels=1, out_channels=2)
313 | # model = ResUnet(in_channels=1, out_channels=2)
314 |
315 | # criterion = CrossEntropyLoss()
316 | # criterion = BCEWithLogitsLoss()
317 | # criterion = DiceLoss()
318 |
319 | optimizer = Adam(model.parameters(), lr=1e-4)
320 |
321 | train(model, traindataset, testdataset, device, EPOCH, criterion, optimizer, batch_size=BATCHSIZE)
322 | print("Done.")
323 |
--------------------------------------------------------------------------------
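As a quick sanity check on the metric above: `dice_coef()` in train.py computes the smoothed Dice coefficient, dice = (2·|A∩B| + s) / (|A| + |B| + s). A minimal, self-contained NumPy sketch for illustration only, not wired into the repository:

    import numpy as np

    def dice(pred, truth, smooth=1.0):
        # Same formula as dice_coef() in train.py, on flattened binary masks.
        pred, truth = pred.reshape(-1).astype(float), truth.reshape(-1).astype(float)
        intersection = (pred * truth).sum()
        return (2 * intersection + smooth) / (pred.sum() + truth.sum() + smooth)

    pred  = np.array([[1, 1, 0], [0, 1, 0]])   # binarized prediction
    truth = np.array([[1, 0, 0], [0, 1, 1]])   # ground-truth mask
    print(dice(pred, truth))                   # (2*2 + 1) / (3 + 3 + 1) ≈ 0.714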
/app/VertebraSegmentation/utils.py:
--------------------------------------------------------------------------------
1 | from skimage.measure import label, regionprops
2 | from skimage.morphology import binary_dilation, binary_erosion, square
3 | from skimage.filters import sobel
4 | from skimage.color import gray2rgb
5 | import numpy as np
6 | from tqdm import tqdm
7 |
8 | # Remove noise
9 | def clean_noice(img, threshold=128, middle_threshold=80):
10 | img = img > threshold
11 |
12 | # Compute the column projection histogram
13 | val_count = list()
14 | for i in range(img.shape[1]):
15 | val_count.append(np.sum(img[:, i] != 0))
16 | matrix = np.zeros_like(img)
17 | matrix = np.transpose(matrix)
18 | for i in range(matrix.shape[0]):
19 | matrix[i, :val_count[i]] = 1
20 | matrix = np.transpose(matrix)
21 |
22 | # Find the midpoint and discard columns outside the target region
23 | matrix = label(matrix, connectivity=2)
24 | max_area = 0
25 | for group in regionprops(matrix):
26 | if group.label != 0 and group.area > max_area:
27 | max_area = group.area
28 | col_min = group.bbox[1]
29 | col_max = group.bbox[3]
30 | img[:, :col_min] = 0
31 | img[:, col_max + 1:] = 0
32 |
33 | # Find the midline and remove noise far from it
34 | middle_line = (col_min + col_max) // 2
35 | matrix = label(img, connectivity=2)
36 | for group in regionprops(matrix):
37 | if group.label != 0 and abs(group.centroid[1] - middle_line) > middle_threshold:
38 | matrix = np.where(matrix == group.label, 0, matrix)
39 | img = matrix > 0
40 |
41 | return img * 255
42 |
43 |
44 | def horizontal_cut(img, ratio=0.7, width=10, min_area=1500):
45 | pixcl_count = list()
46 | for i in range(img.shape[0]):
47 | pixcl_count.append(np.sum(img[i, :] > 0))
48 |
49 | edges = list()
50 | reset = True
51 | climb = True
52 | local_hightest = 0
53 | local_lowest = 0
54 | for idx in range(len(pixcl_count)):
55 | if reset:
56 | local_hightest = 0
57 | local_lowest = 0
58 | reset = False
59 | climb = True
60 |
61 | if climb:
62 | local_hightest = idx if pixcl_count[idx] > pixcl_count[local_hightest] else local_hightest
63 | if pixcl_count[idx] < pixcl_count[local_hightest] * ratio:
64 | local_lowest = idx
65 | climb = False
66 | else:
67 | local_lowest = idx if pixcl_count[idx] < pixcl_count[local_lowest] else local_lowest
68 | if pixcl_count[idx] * ratio > pixcl_count[local_lowest]:
69 | reset = True
70 | edges.append(local_lowest)
71 |
72 | img = closing(img)
73 |
74 | for i in edges:
75 | img[i: i + width, :] = 0
76 |
77 | labels = label(img, connectivity=1)
78 | for group in regionprops(labels):
79 | if group.area < min_area:
80 | labels = np.where(labels == group.label, 0, labels)
81 | return (labels > 0) * 255
82 |
83 |
84 | def opening(img, square_width=5):
85 | return binary_dilation(binary_erosion(img, square(square_width)), square(square_width))
86 |
87 |
88 | def closing(img, square_width=15):
89 | return binary_erosion(binary_dilation(img, square(square_width)), square(square_width))
90 |
91 |
92 | def dice_coef(target, truth, t_val=255):
93 | target_val = target == t_val
94 | truth_val = truth == t_val
95 | target_obj = np.sum(target_val)
96 | truth_obj = np.sum(truth_val)
97 | intersection = np.sum(np.logical_and(target_val, truth_val))
98 | dice = (2 * intersection) / (target_obj + truth_obj)
99 | return dice
100 |
101 |
102 | def dice_coef_each_region(target, truth):
103 | target_lab = label(target, connectivity=1)
104 | truth_lab = label(truth, connectivity=1)
105 | target_vals = sorted(list(np.unique(target_lab)))[1:]
106 | truth_vals = sorted(list(np.unique(truth_lab)))[1:]
107 |
108 |
109 | scores = list()
110 | for _, truth_num in enumerate(tqdm(truth_vals, total=len(truth_vals))):
111 | score = 0
112 | lab = 0
113 | for target_num in target_vals:
114 | dice = dice_coef(target_lab == target_num, truth_lab == truth_num, t_val=1)
115 | if dice > score:
116 | score = dice
117 | lab = target_num
118 | scores.append((truth_num, lab, score))
119 | if lab != 0 and score != 0:
120 | target_vals.remove(lab)
121 |
122 | # (truth_num, target_num, score)
123 | return scores, np.mean([i[2] for i in scores])
124 |
125 |
126 | def draw_edge(origin_img, predict_map):
127 | img = gray2rgb(origin_img)
128 | edge = sobel(predict_map)
129 | row, col = np.where(edge > 0)
130 | for i in range(len(row)):
131 | img[row[i], col[i], 0] = 0
132 | img[row[i], col[i], 1] = 0
133 | img[row[i], col[i], 2] = 255
134 | return img
135 |
136 |
137 | if __name__ == '__main__':
138 | from skimage import io
139 | from skimage.color import rgb2gray
140 | import matplotlib.pyplot as plt
141 |
142 | img_label = rgb2gray(io.imread("original_data/f01/image/0003.png"))
143 | img_target = rgb2gray(io.imread("test//predict//p0003.png"))
144 |
145 | print(img_label.shape)
146 | print(img_target.shape)
147 |
148 | # img_label = img_label[:, :-4]
149 |
150 | img_target = clean_noice(img_target)
151 | img_target = horizontal_cut(img_target)
152 | plt.subplot(1, 2, 1)
153 | plt.imshow(img_target, cmap="gray")
154 | plt.subplot(1, 2, 2)
155 | plt.imshow(img_label, cmap="gray")
156 | plt.show()
157 |
158 | mapping, dice = dice_coef_each_region(img_target, img_label)
159 | print(mapping)
160 | print(dice)
161 |
--------------------------------------------------------------------------------
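The first step of `clean_noice()` above builds a per-column projection of the binary prediction and keeps only the columns spanned by the largest connected region. A minimal toy sketch of that projection, for illustration only and independent of the repository code:

    import numpy as np

    img = np.zeros((6, 8), dtype=bool)
    img[1:5, 3:5] = True                 # a spine-like blob near the centre
    img[2, 0] = True                     # an isolated noise pixel off to the side

    col_counts = (img != 0).sum(axis=0)  # foreground count per column
    print(col_counts)                    # [1 0 0 4 4 0 0 0]
    # clean_noice() labels this histogram, keeps the widest run (columns 3-4 here),
    # and zeroes everything outside it, which drops the stray pixel in column 0.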
/app/cross.txt:
--------------------------------------------------------------------------------
1 | f01: 0.897
2 | 1. 0.90
3 | 2. 0.92
4 | 3. 0.86
5 | 4. 0.93
6 | 5. 0.91
7 | 6. 0.85
8 | 7. 0.87
9 | 8. 0.92
10 | 9. 0.90
11 | 10. 0.89
12 | 11. 0.90
13 | 12. 0.91
14 | 13. 0.90
15 | 14. 0.90
16 | 15. 0.90
17 | 16. 0.90
18 | 17. 0.91
19 | 18. 0.89
20 | 19. 0.90
21 | 20. 0.88
22 |
23 | f02: 0.916
24 |
25 | 21. 0.93
26 | 22. 0.91
27 | 23. 0.93
28 | 24. 0.90
29 | 25. 0.91
30 | 26. 0.92
31 | 27. 0.92
32 | 28. 0.87
33 | 29. 0.88
34 | 30. 0.92
35 | 31. 0.91
36 | 32. 0.92
37 | 33. 0.92
38 | 34. 0.93
39 | 35. 0.94
40 | 36. 0.92
41 | 37. 0.96
42 | 38. 0.93
43 | 39. 0.88
44 | 40. 0.92
45 |
46 | f03: 0.9065
47 |
48 | 41. 0.92
49 | 42. 0.85
50 | 43. 0.93
51 | 44. 0.90
52 | 45. 0.84
53 | 46. 0.92
54 | 47. 0.91
55 | 48. 0.92
56 | 49. 0.86
57 | 50. 0.93
58 | 51. 0.92
59 | 52. 0.91
60 | 53. 0.92
61 | 54. 0.90
62 | 55. 0.92
63 | 56. 0.92
64 | 57. 0.89
65 | 58. 0.93
66 | 59. 0.94
67 | 60. 0.90
--------------------------------------------------------------------------------
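The per-fold headers in cross.txt are the means of the 20 per-image Dice scores listed under them; a one-off check for f01, with the values copied from above:

    f01 = [0.90, 0.92, 0.86, 0.93, 0.91, 0.85, 0.87, 0.92, 0.90, 0.89,
           0.90, 0.91, 0.90, 0.90, 0.90, 0.90, 0.91, 0.89, 0.90, 0.88]
    print(round(sum(f01) / len(f01), 4))   # 0.897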
/app/main.py:
--------------------------------------------------------------------------------
1 | from __future__ import division
2 |
3 |
4 | from PyTorch_YOLOv3.utils.utils import *
5 | from PyTorch_YOLOv3.utils.datasets import *
6 | from PyTorch_YOLOv3.models import *
7 | from PyTorch_YOLOv3.utils.parse_config import *
8 |
9 | from VertebraSegmentation.coordinate import *
10 | from VertebraSegmentation.filp_and_rotate import *
11 | from VertebraSegmentation.net.data import VertebraDataset
12 | from VertebraSegmentation.net.model.unet import Unet
13 | from VertebraSegmentation.net.model.resunet import ResidualUNet
14 |
15 | import cv2
16 | import math
17 | import os
18 | import sys
19 | import time
20 | import datetime
21 | import argparse
22 | import shutil
23 |
24 | from skimage.measure import label, regionprops
25 | from skimage.morphology import binary_dilation, binary_erosion, square
26 | from skimage.filters import sobel, gaussian
27 | from skimage.color import gray2rgb, rgb2gray
28 | from skimage import morphology, exposure
29 |
30 | from os.path import splitext, exists, join
31 | from PIL import Image
32 |
33 | import torch
34 | import torch.nn as nn
35 | import numpy as np
36 | from torch.utils.data import DataLoader
37 | from torchvision import datasets, transforms
38 | from torch.autograd import Variable
39 | from tqdm import tqdm
40 | from skimage import io
41 | import torch.nn.functional as F
42 | import matplotlib.pyplot as plt
43 | import matplotlib.patches as patches
44 | from matplotlib.ticker import NullLocator
45 |
46 | import os
47 |
48 | Max_H = 110
49 | Max_W = 190
50 |
51 | def preprocess(img):
52 | img = rgb2gray(img)
53 | bound = img.shape[0] // 3
54 | up = exposure.equalize_adapthist(img[:bound, :])
55 | down = exposure.equalize_adapthist(img[bound:, :])
56 | enhance = np.append(up, down, axis=0)
57 | edge = sobel(gaussian(enhance, 2))
58 | enhance = enhance + edge * 3
59 | return np.where(enhance > 1, 1, enhance)
60 |
61 | def clahe_hist(img):
62 | clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
63 | cl1 = clahe.apply(img)
64 | return cl1
65 |
66 | def resize(image, size):
67 | image = F.interpolate(image.unsqueeze(0), size=size, mode="nearest").squeeze(0)
68 | return image
69 |
70 | def delete_gap_dir(dir):
71 | if os.path.isdir(dir):
72 | for d in os.listdir(dir):
73 | delete_gap_dir(os.path.join(dir, d))
74 | if not os.listdir(dir):
75 | os.rmdir(dir)
76 | delete_gap_dir(os.getcwd())
77 |
78 | def detect_one(img_path, modelSelcet="PyTorch_YOLOv3/checkpoints/yolov3_ckpt_best_f01.pth"):
79 | parser = argparse.ArgumentParser()
80 | parser.add_argument("--image_path", type=str, default=f"{img_path}", help="path to dataset")
81 | parser.add_argument("--model_def", type=str, default="PyTorch_YOLOv3/config/yolov3-custom.cfg", help="path to model definition file")
82 | parser.add_argument("--class_path", type=str, default="PyTorch_YOLOv3/data/custom/classes.names", help="path to class label file")
83 | parser.add_argument("--conf_thres", type=float, default=0.8, help="object confidence threshold") # 0.8
84 | parser.add_argument("--nms_thres", type=float, default=0.3, help="iou thresshold for non-maximum suppression") # 0.25
85 | parser.add_argument("--n_cpu", type=int, default=0, help="number of cpu threads to use during batch generation")
86 | parser.add_argument("--img_size", type=int, default=416, help="size of each image dimension")
87 | parser.add_argument("--checkpoint_model", type=str, default=f"{modelSelcet}", help="path to checkpoint model")
88 | opt = parser.parse_args()
89 |
90 | print(opt.checkpoint_model)
91 |
92 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
93 | os.makedirs("./output", exist_ok=True)
94 | os.makedirs("./coordinate", exist_ok=True)
95 |
96 | # Set up model
97 | model = Darknet(opt.model_def, img_size=opt.img_size).to(device)
98 | # Load checkpoint weights
99 | model.load_state_dict(torch.load(opt.checkpoint_model))
100 | classes = load_classes(opt.class_path) # Extracts class labels from file
101 |
102 | Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
103 |
104 | print(opt.image_path + ".png")
105 |
106 | img = cv2.imread(opt.image_path + ".png", 0)
107 |
108 | img = clahe_hist(img)
109 | img = preprocess(img/255)
110 | img = transforms.ToTensor()(img).float()
111 | img, _ = pad_to_square(img, 0)
112 | img = resize(img, opt.img_size)
113 |
114 | print("\nPerforming object detection:")
115 |
116 | input_imgs = Variable(img.type(Tensor)).unsqueeze(0)
117 |
118 | detections = None
119 | with torch.no_grad():
120 | detections = model(input_imgs)
121 | detections = non_max_suppression(detections, opt.conf_thres, opt.nms_thres)
122 |
123 | plt.set_cmap('gray')
124 | rewrite = True
125 |
126 | img = np.array(Image.open(img_path + ".png").convert('L'))
127 | plt.figure()
128 | fig, ax = plt.subplots(1)
129 | ax.imshow(img)
130 | # print(img.shape)
131 | filename = img_path[-4:]
132 |
133 | if detections is not None:
134 | # Rescale boxes to original image
135 | detections = rescale_boxes(detections[0], opt.img_size, img.shape[:2])
136 |
137 | for x1, y1, x2, y2, conf, cls_conf, cls_pred in detections:
138 | box_w = x2 - x1
139 | box_h = y2 - y1
140 | x1, y1, x2, y2 = math.floor(x1), math.floor(y1), math.ceil(x2), math.ceil(y2)
141 | box_w, box_h = x2-x1, y2-y1
142 |
143 | if rewrite:
144 | f1 = open(f"./coordinate/{filename}.txt", 'w')
145 | f1.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(x1, y1, x2, y2, box_w, box_h) )
146 | rewrite = False
147 | else:
148 | f1 = open(f"./coordinate/{filename}.txt", 'a')
149 | f1.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(x1, y1, x2, y2, box_w, box_h) )
150 |
151 | bbox = patches.Rectangle((x1, y1), box_w, box_h, linewidth=0.5, edgecolor='red', facecolor="none")
152 | ax.add_patch(bbox)
153 |
154 | plt.axis("off")
155 | plt.gca().xaxis.set_major_locator(NullLocator())
156 | plt.gca().yaxis.set_major_locator(NullLocator())
157 | plt.savefig(f"./output/{filename}.png", bbox_inches="tight", pad_inches=0.0, facecolor="none")
158 | plt.close()
159 | print("\nImage has saved")
160 |
161 | f1.close()
162 | path1 = join("./coordinate", filename)
163 | path2 = join("./GT_coordinate", filename)
164 |
165 | Sort_coordinate(f"{path1}.txt", flag=True)
166 | Sort_coordinate(f"{path2}.txt", flag=False)
167 | def detect():
168 | parser = argparse.ArgumentParser()
169 | parser.add_argument("--image_folder", type=str, default="PyTorch_YOLOv3/data/custom/images/valid/", help="path to dataset")
170 | parser.add_argument("--model_def", type=str, default="PyTorch_YOLOv3/config/yolov3-custom.cfg", help="path to model definition file")
171 | parser.add_argument("--class_path", type=str, default="PyTorch_YOLOv3/data/custom/classes.names", help="path to class label file")
172 | parser.add_argument("--conf_thres", type=float, default=0.8, help="object confidence threshold") # 0.8
173 | parser.add_argument("--nms_thres", type=float, default=0.3, help="iou thresshold for non-maximum suppression") # 0.25
174 | parser.add_argument("--batch_size", type=int, default=1, help="size of the batches")
175 | parser.add_argument("--n_cpu", type=int, default=0, help="number of cpu threads to use during batch generation")
176 | parser.add_argument("--img_size", type=int, default=416, help="size of each image dimension")
177 | parser.add_argument("--checkpoint_model", type=str, default="PyTorch_YOLOv3/checkpoints/yolov3_ckpt_best_f01.pth", help="path to checkpoint model")
178 | opt = parser.parse_args()
179 |
180 |
181 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
182 | os.makedirs("PyTorch_YOLOv3/output", exist_ok=True)
183 | os.makedirs("PyTorch_YOLOv3/pre_img", exist_ok=True)
184 | os.makedirs("PyTorch_YOLOv3/coordinate", exist_ok=True)
185 |
186 | fname_list = []
187 | for file in os.listdir(opt.image_folder):
188 |
189 | file_name = splitext(file)[0]
190 | fname_list.append(f"{file_name}.txt")
191 |
192 | fname_list = sorted(fname_list)
193 |
194 | # Set up model
195 | model = Darknet(opt.model_def, img_size=opt.img_size).to(device)
196 | # Load checkpoint weights
197 | model.load_state_dict(torch.load(opt.checkpoint_model))
198 |
199 | model.eval() # Set in evaluation mode
200 |
201 | dataloader = DataLoader(
202 | ImageFolder(opt.image_folder, img_size=opt.img_size),
203 | batch_size=opt.batch_size,
204 | shuffle=False,
205 | num_workers=opt.n_cpu,
206 | )
207 |
208 | classes = load_classes(opt.class_path) # Extracts class labels from file
209 |
210 | Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
211 |
212 | imgs = [] # Stores image paths
213 | img_detections = [] # Stores detections for each image index
214 |
215 | print("\nPerforming object detection:")
216 | prev_time = time.time()
217 |
218 | for batch_i, (img_paths, input_imgs) in tqdm(enumerate(dataloader), total=len(dataloader), desc="Batch Inference Time"):
219 | # Configure input
220 | input_imgs = Variable(input_imgs.type(Tensor))
221 |
222 | # Get detections
223 | with torch.no_grad():
224 | detections = model(input_imgs)
225 | detections = non_max_suppression(detections, opt.conf_thres, opt.nms_thres)
226 |
227 | # Log progress
228 | current_time = time.time()
229 | inference_time = datetime.timedelta(seconds=current_time - prev_time)
230 | prev_time = current_time
231 | # print("\t+ Batch %d, Inference Time: %s" % (batch_i, inference_time))
232 |
233 | # Save image and detections
234 | imgs.extend(img_paths)
235 | img_detections.extend(detections)
236 |
237 |
238 |
239 | plt.set_cmap('gray')
240 |
241 | rewrite = True
242 | print("\nSaving images:")
243 |
244 |
245 | for img_i, (path, detections) in enumerate(zip(imgs, img_detections)):
246 |
247 | print("(%d) Image: '%s'" % (img_i, path))
248 |
249 | # Create plot
250 |
251 | img = np.array(Image.open(path).convert('L'))
252 | plt.figure()
253 | fig, ax = plt.subplots(1)
254 | ax.imshow(img)
255 |
256 |
257 | # Draw bounding boxes and labels of detections
258 | if detections is not None:
259 | # Rescale boxes to original image
260 |
261 | detections = rescale_boxes(detections, opt.img_size, img.shape[:2])
262 |
263 | rewrite = True
264 | for x1, y1, x2, y2, conf, cls_conf, cls_pred in detections:
265 |
266 | # print("\t+ Label: %s, Conf: %.5f" % (classes[int(cls_pred)], cls_conf.item()))
267 | # x1 = x1 - 10
268 | y1 = y1 - 5
269 | y2 = y2 + 5
270 | x1 = x1 - 50
271 | x2 = x2 + 50
272 | box_w = x2 - x1
273 | box_h = y2 - y1
274 | x1, y1, x2, y2 = math.floor(x1), math.floor(y1), math.ceil(x2), math.ceil(y2)
275 | box_w, box_h = x2-x1, y2-y1
276 |
277 |
278 | if rewrite:
279 | f1 = open(f"VertebraSegmentation/coordinate/{fname_list[img_i]}", 'w')
280 | f2 = open(f"PyTorch_YOLOv3/coordinate/{fname_list[img_i]}", 'w')
281 | f1.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(x1, y1, x2, y2, box_w, box_h) )
282 | f2.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(x1, y1, x2, y2, box_w, box_h) )
283 | rewrite = False
284 | else:
285 | f1 = open(f"VertebraSegmentation/coordinate/{fname_list[img_i]}", 'a')
286 | f2 = open(f"PyTorch_YOLOv3/coordinate/{fname_list[img_i]}", 'a')
287 | f1.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(x1, y1, x2, y2, box_w, box_h) )
288 | f2.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(x1, y1, x2, y2, box_w, box_h) )
289 | # color = bbox_colors[int(np.where(unique_labels == int(cls_pred))[0])]
290 | # Create a Rectangle patch
291 | bbox = patches.Rectangle((x1, y1), box_w, box_h, linewidth=0.5, edgecolor='red', facecolor="none")
292 | # Add the bbox to the plot
293 | ax.add_patch(bbox)
294 | # Save generated image with detections
295 | plt.axis("off")
296 | plt.gca().xaxis.set_major_locator(NullLocator())
297 | plt.gca().yaxis.set_major_locator(NullLocator())
298 | # plt.set_cmap('gray')
299 | filename = path.split("/")[-1].split(".")[0]
300 | plt.savefig(f"PyTorch_YOLOv3/output/{filename}.png", bbox_inches="tight", pad_inches=0.0, facecolor="none")
301 | plt.close()
302 |
303 | # print(f"Max_Height={Max_H}", f"Max_Width={Max_W}")
304 |
305 | def create_valid_return_len(coordinate_path, save_path, source_path):
306 |
307 | os.makedirs(save_path, exist_ok=True)
308 | os.makedirs("VertebraSegmentation/test/valid", exist_ok=True)
309 |
310 | box_num = []
311 |
312 | for file in tqdm(sorted(os.listdir(source_path)), desc=f"{save_path}"):
313 | img = cv2.imread(join(source_path, file), 0)
314 | boxes = get_information(file, coordinate_path)
315 |
316 | box_num.append(len(boxes))
317 |
318 | for id, box in enumerate(boxes):
319 | box = list(map(int, box))
320 | x1, y1, x2, y2 = box[0], box[1], box[2], box[3]
321 | width, height = box[4], box[5]
322 | detect_region = np.zeros((height, width))
323 | detect_region = img[y1:y2+1, x1:x2+1]
324 |
325 | cv2.imwrite(join("VertebraSegmentation", "valid_data", f"{splitext(file)[0]}_{id}.png"), detect_region)
326 |
327 | return box_num
328 |
329 | def normalize_data(output, threshold=0.6):
330 |
331 | return np.where(output > threshold, 1, 0)
332 |
333 |
334 | def predict(model, loader, numOfEachImg, save_path="VertebraSegmentation//test//valid"):
335 |
336 | model.eval()
337 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
338 |
339 | result = np.zeros((1, 1200, 500), dtype=int)
340 |
341 | count = 0
342 | with torch.no_grad():
343 | for _, (img, filename) in tqdm(enumerate(loader), total=len(loader), desc="Predict"):
344 |
345 | index = int((splitext(filename[0])[0])[:4]) - 1 # 0, 1, 2, 3, ... , 19
346 | id = int((splitext(filename[0])[0])[5:])
347 | count += 1
348 |
349 | img = img.to(device)
350 | output = model(img)
351 | output = torch.sigmoid(output)[0, :]
352 | output = (normalize_data(output.cpu().numpy())*255).astype(np.uint8)
353 |
354 | boxes = get_information(f"{(splitext(filename[0])[0])[:4]}.png", "VertebraSegmentation/coordinate")
355 |
356 | box = boxes[id]
357 | x1, y1, x2, y2 = int(box[0]), int(box[1]), int(box[2]), int(box[3])
358 |
359 | result[:, y1:y2+1, x1:x2+1] = output
360 |
361 | if count == numOfEachImg[index]:
362 | result = result.astype(np.uint8)
363 | for dim in range(result.shape[0]):
364 | io.imsave(f"{save_path}//p{(splitext(filename[0])[0])[:4]}.png", result[dim])
365 |
366 | result = np.zeros((1, 1200, 500), dtype=int)
367 | count = 0
368 |
369 | def dice_coef(target, truth, empty=False, smooth=1.0):
370 | if not empty:
371 | target = target.flatten()
372 | truth = truth.flatten()
373 | union = np.sum(target) + np.sum(truth)
374 | intersection = np.sum(target * truth)
375 | dice = (2 * intersection + smooth) / (union + smooth)
376 | return dice
377 | else:
378 | print("aaaa")
379 | return 0
380 |
381 |
382 | def create_valid_return_len_one(coordinate_path, save_path, source_path, target):
383 |
384 | os.makedirs(save_path, exist_ok=True)
385 | os.makedirs(source_path, exist_ok=True)
386 |
387 | img = cv2.imread(join(source_path, target), 0)
388 | boxes = get_information(target, coordinate_path)
389 |
390 | for id, box in enumerate(boxes):
391 | box = list(map(int, box))
392 | x1, y1, x2, y2 = box[0], box[1], box[2], box[3]
393 | width, height = box[4], box[5]
394 | detect_region = np.zeros((height, width))
395 | detect_region = img[y1:y2+1, x1:x2+1]
396 |
397 | cv2.imwrite(join(save_path, f"{splitext(target)[0]}_{id}.png"), detect_region)
398 |
399 | return len(boxes)
400 |
401 | def predict_one(model, loader, numOfImg, img, save_path=".//result"):
402 |
403 | os.makedirs(save_path, exist_ok=True)
404 | model.eval()
405 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
406 | result = np.zeros((1, 1200, 500), dtype=int)
407 |
408 | GT_path = join("./source/label", img)
409 | label_img = cv2.imread(GT_path, 0) # 2-dimension
410 | Dice_list = [None for _ in range(20)]
411 | bone_num = 0
412 |
413 | GT_id = 0
414 | id = 0
415 | with torch.no_grad():
416 | for _, (img, filename) in tqdm(enumerate(loader), total=len(loader), desc="Predict"):
417 | id = int((splitext(filename[0])[0])[5:])
418 | img = img.to(device) # img is 4-dimension
419 | output = model(img)
420 | output = torch.sigmoid(output)[0, :]
421 | output = (normalize_data(output.cpu().numpy())*255).astype(np.uint8)
422 | boxes = get_information(f"{(splitext(filename[0])[0])[:4]}.png", "./coordinate")
423 |
424 | output = (output==255).astype(bool)
425 | output = morphology.remove_small_objects(output, min_size=2000, connectivity=2)
426 | output = output.astype(int)
427 | output = np.where(output==1, 255, 0)
428 |
429 | box = boxes[id]
430 |
431 | # print(id)
432 |
433 | x1, y1, x2, y2 = int(box[0]), int(box[1]), int(box[2]), int(box[3])
434 | result[:, y1:y2+1, x1:x2+1] = output
435 |
436 | GT_boxes = get_information(f"{(splitext(filename[0])[0])[:4]}.png", "./GT_coordinate")
437 |
438 | flag = True
439 |
440 | while flag:
441 | GT_box = GT_boxes[GT_id]
442 | GT_x1, GT_y1, GT_x2, GT_y2 = int(GT_box[0]), int(GT_box[1]), int(GT_box[2]), int(GT_box[3])
443 |
444 | result_label = label_img[GT_y1: GT_y2+1, GT_x1: GT_x2+1]
445 | Dice = 0
446 | if (abs(y1 - GT_y1) > 35 and len(boxes) != len(GT_boxes)):
447 | Dice = dice_coef(None, None, empty=True)
448 | else:
449 | # intersection_x1 = min(x1, GT_x1)
450 | # intersection_y1 = min(y1, GT_y1)
451 | # intersection_x2 = max(x2, GT_x2)
452 | # intersection_y2 = max(y2, GT_y2)
453 | # intersection_h = intersection_y2-intersection_y1+1
454 | # intersection_w = intersection_x2-intersection_x1+1
455 |
456 | intersection_result = np.zeros((1200, 500), dtype=int)
457 | intersection_result_label = np.zeros((1200, 500), dtype=int)
458 |
459 | intersection_result[y1:y2+1, x1:x2+1] = result[:, y1:y2+1, x1:x2+1]
460 | intersection_result_label[GT_y1:GT_y2+1, GT_x1:GT_x2+1] = result_label
461 |
462 | Dice = dice_coef(intersection_result/255, intersection_result_label/255)
463 | Dice = round(float(Dice), 2)
464 | id += 1
465 | flag = False
466 |
467 | Dice_list[GT_id] = Dice
468 | GT_id += 1
469 |
470 |
471 |
472 | # result_label = np.zeros((y2-y1, x2-x1), dtype=int)
473 | # result_label = label_img[y1:y2+1, x1:x2+1]
474 |
475 | # Dice = dice_coef(result[0, y1:y2+1, x1:x2+1]/255, result_label/255)
476 | # Dice = round(float(Dice), 2)
477 | # Dice_list[id] = Dice
478 |
479 | bone_num += 1
480 |
481 | if _+1 == numOfImg:
482 | result = result.astype(np.uint8)
483 | for dim in range(result.shape[0]):
484 | io.imsave(f"{save_path}//p{(splitext(filename[0])[0])[:4]}.png", result[dim])
485 |
486 |
487 | return Dice_list, bone_num
488 |
489 | def Segmentation_one(target):
490 |
491 | # create splitting dataset
492 | numOfImg = create_valid_return_len_one("./coordinate",
493 | "./valid_data",
494 | "./source/image",
495 | target) # target must be xxxx.png
496 | # recombine each separated image
497 |
498 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
499 | dataset = VertebraDataset(".//valid_data//")
500 | model = ResidualUNet(n_channels=1, n_classes=1)
501 | checkpoint = torch.load("VertebraSegmentation/net/save/best_detect_f03.pt")
502 | model.load_state_dict(checkpoint["state_dict"])
503 | loader = DataLoader(dataset, batch_size=1)
504 | model = model.to(device)
505 | return predict_one(model, loader, numOfImg, target) # return Dice_list and bone_num
506 | print("Done.")
507 |
508 | def Segmentation():
509 |
510 | # create splitting dataset
511 | numOfEachImg = create_valid_return_len("VertebraSegmentation/coordinate",
512 | "VertebraSegmentation/valid_data",
513 | "VertebraSegmentation/original_data/f01/image")
514 |
515 | # recombine each separated image
516 |
517 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
518 | dataset = VertebraDataset("VertebraSegmentation//valid_data//")
519 | model = ResidualUNet(n_channels=1, n_classes=1)
520 | checkpoint = torch.load("VertebraSegmentation//net//save//best_test_f01.pt")
521 | # checkpoint = torch.load("save//last_detect.pt")
522 |
523 | model.load_state_dict(checkpoint["state_dict"])
524 | loader = DataLoader(dataset, batch_size=1)
525 | model = model.to(device)
526 |
527 | predict(model, loader, numOfEachImg)
528 | print("Done.")
529 |
530 |
531 | def delete_valid_data(path):
532 |
533 | try:
534 | shutil.rmtree(path)
535 | except OSError as e:
536 | print(e)
537 | else:
538 | print("The directory is deleted successfully")
539 |
540 | def Sort_coordinate(path, flag):
541 | # path = join("./coordinate", filename)
542 |
543 | f = open(path, 'r')
544 | lines = f.readlines()
545 | f.close()
546 | list_of_list = []
547 |
548 | for i, line in enumerate(lines):
549 | lines[i] = line.replace("\n", "")
550 | L = list(map(lambda x: int(x), lines[i].split(' ')))
551 | list_of_list.append(L)
552 |
553 | list_of_list.sort(key=lambda x: x[1])
554 |
555 | f1 = open(path, 'w')
556 | f2 = open(path, 'a')
557 |
558 | for i, line in enumerate(list_of_list):
559 |
560 | if flag:
561 | if i == 0:
562 | f1.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(line[0], line[1], line[2], line[3], line[4], line[5]) )
563 | f1.close()
564 | else:
565 | f2.write("{:d} {:d} {:d} {:d} {:d} {:d}\n".format(line[0], line[1], line[2], line[3], line[4], line[5]) )
566 | else:
567 | if i == 0:
568 | f1.write("{:d} {:d} {:d} {:d}\n".format(line[0], line[1], line[2], line[3]) )
569 | f1.close()
570 | else:
571 | f2.write("{:d} {:d} {:d} {:d}\n".format(line[0], line[1], line[2], line[3]) )
572 | f2.close()
573 |
574 | return len(list_of_list)
575 |
576 | def main():
577 | delete_valid_data(r"VertebraSegmentation/valid_data")
578 | # create the coordinate file for each box
579 | detect()
580 | Segmentation()
581 |
582 | if __name__ == "__main__":
583 |
584 | main()
585 |
--------------------------------------------------------------------------------
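The coordinate files that `detect()` and `detect_one()` write contain one box per line as `x1 y1 x2 y2 w h` (the GT files keep only the first four fields), sorted top-to-bottom by `Sort_coordinate()`. A minimal reader sketch; the repository's own `get_information()` helper lives in another module, so this is an illustration rather than its actual implementation:

    def read_boxes(txt_path):
        # One whitespace-separated box per line: x1 y1 x2 y2 [w h]
        with open(txt_path) as f:
            return [tuple(int(v) for v in line.split()) for line in f if line.strip()]

    # Hypothetical usage, assuming detect() has already produced the file:
    # boxes = read_boxes("VertebraSegmentation/coordinate/0003.txt")
    # boxes[0] -> (x1, y1, x2, y2, w, h) of the top-most detected vertebra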
/images/Cobb.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/images/Cobb.jpg
--------------------------------------------------------------------------------
/images/demo.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/images/demo.jpg
--------------------------------------------------------------------------------
/images/flowchart.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/images/flowchart.jpg
--------------------------------------------------------------------------------
/images/logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/images/logo.png
--------------------------------------------------------------------------------
/images/screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/z0978916348/Localization_and_Segmentation/1a80d9730dad7ede8c3b92793e85a979915a2fad/images/screenshot.png
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy
2 | torch>=1.0
3 | torchvision
4 | matplotlib
5 | tensorflow
6 | tensorboard
7 | terminaltables
8 | pillow
9 | tqdm
10 |
--------------------------------------------------------------------------------