├── LICENSE
├── README.md
├── examples
│   ├── .gitkeep
│   ├── sample1.jpg
│   ├── sample2.jpg
│   ├── sample3.jpg
│   └── sample4.jpg
├── requirements.txt
└── yolov5
    ├── .dockerignore
    ├── Dockerfile
    ├── LICENSE
    ├── README.md
    ├── data
    │   ├── coco.yaml
    │   ├── coco128.yaml
    │   ├── hyp.finetune.yaml
    │   ├── hyp.scratch.yaml
    │   ├── road.yaml
    │   ├── scripts
    │   │   ├── get_coco.sh
    │   │   └── get_voc.sh
    │   └── voc.yaml
    ├── datasets
    │   └── road2020
    │       ├── damage_classes.txt
    │       ├── move_test_iamges.py
    │       ├── train.txt
    │       └── val.txt
    ├── detect.py
    ├── hubconf.py
    ├── inference
    │   └── images
    │       ├── bus.jpg
    │       └── zidane.jpg
    ├── models
    │   ├── __init__.py
    │   ├── common.py
    │   ├── experimental.py
    │   ├── export.py
    │   ├── hub
    │   │   ├── yolov3-spp.yaml
    │   │   ├── yolov5-fpn.yaml
    │   │   └── yolov5-panet.yaml
    │   ├── yolo.py
    │   ├── yolov5l.yaml
    │   ├── yolov5m.yaml
    │   ├── yolov5s.yaml
    │   ├── yolov5x.yaml
    │   └── yolov5x_road.yaml
    ├── requirements.txt
    ├── scripts
    │   ├── __init__.py
    │   ├── dataset_setup_for_yolov5.sh
    │   ├── download_IMSC_grddc2020_weights.sh
    │   ├── download_road2020.sh
    │   ├── prepare_test.sh
    │   ├── strip_optimizer.py
    │   └── xml2yolo.py
    ├── sotabench.py
    ├── test.py
    ├── train.py
    ├── tutorial.ipynb
    ├── utils
    │   ├── __init__.py
    │   ├── activations.py
    │   ├── datasets.py
    │   ├── evolve.sh
    │   ├── general.py
    │   ├── google_app_engine
    │   │   ├── Dockerfile
    │   │   ├── additional_requirements.txt
    │   │   └── app.yaml
    │   ├── google_utils.py
    │   └── torch_utils.py
    └── weights
        ├── IMSC
        │   └── download.sh
        └── download_weights.sh

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# rddc2020
road damage detection challenge 2020


# Road Damage Detection Challenge 2020 IMSC submission

This repository contains source code and trained models for the [Road Damage Detection and Classification Challenge](https://rdd2020.sekilab.global/overview/) that was held as part of the 2020 IEEE Big Data conference.

The best model achieved a mean F1-score of 0.674878682854973 on the test1 dataset and 0.666213894130645 on the test2 dataset of the competition.

Sample predictions:

![](examples/sample1.jpg) ![](examples/sample2.jpg) ![](examples/sample3.jpg) ![](examples/sample4.jpg)


## Table of contents

- [Prerequisites](#prerequisites)
- [Quick start](#quick-start)
- [RDDC Dataset Setup for YOLOv5](#rddc-dataset-setup-for-yolov5)
- [IMSC YOLOv5 Model zoo](#imsc-yolov5-model-zoo)
- [Detection / Submission](#detection--submission)
- [Performance on RDDC test datasets](#performance-on-rddc-test-datasets)
- [Training](#training)

## Prerequisites

You need to install:
- [Python3 >= 3.6](https://www.python.org/downloads/)
- The required Python dependencies, listed in `requirements.txt`

```Shell
# Python >= 3.6 is needed
pip3 install -r requirements.txt
```


## Quick start
1. Clone the rddc2020 repo into `$RDD`:

```Shell
git clone https://github.com/USC-InfoLab/rddc2020.git
```

2. Install the Python packages:

```Shell
pip3 install -r requirements.txt
```


## [RDDC](https://github.com/sekilab/RoadDamageDetector#dataset-for-global-road-damage-detection-challenge-2020) Dataset Setup for YOLOv5

**NOTE: The entire process of downloading and preparing the GRDDC 2020 dataset (steps 1-4 explained in this section) can be done by executing `yolov5/scripts/dataset_setup_for_yolov5.sh`:**

```Shell
bash yolov5/scripts/dataset_setup_for_yolov5.sh
```

OR

1. Go to the `yolov5` directory:
```Shell
cd yolov5
```

2. Execute `download_road2020.sh` to download the train and test datasets:
```Shell
bash scripts/download_road2020.sh
```

3. **Detection:** structure the test datasets for inference using YOLOv5:
```Shell
bash scripts/prepare_test.sh
```

4. **Training:** generate the label files for YOLOv5 using [scripts/xml2yolo.py](https://github.com/USC-InfoLab/rddc2020/tree/master/yolov5/scripts/xml2yolo.py):
```Shell
python3 scripts/xml2yolo.py
```
- Use `python3 scripts/xml2yolo.py --help` for command-line option details (a sketch of the underlying conversion follows this list)

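Step 4 converts the Pascal VOC-style XML annotations shipped with the dataset into YOLO label files (one `class x_center y_center width height` line per box, with coordinates normalized to [0, 1]). Below is a minimal sketch of that transform for readers who want to see the math; the class list, directory layout, and output location are assumptions here, and the authoritative behavior is defined by `scripts/xml2yolo.py` and `damage_classes.txt`.

```python
import glob
import xml.etree.ElementTree as ET

# Assumed class list; the authoritative mapping lives in damage_classes.txt.
CLASSES = ["D00", "D10", "D20", "D40"]

def voc_to_yolo(img_w, img_h, xmin, ymin, xmax, ymax):
    """Convert pixel corner coordinates to normalized YOLO x/y/w/h."""
    x_c = (xmin + xmax) / 2.0 / img_w
    y_c = (ymin + ymax) / 2.0 / img_h
    return x_c, y_c, (xmax - xmin) / img_w, (ymax - ymin) / img_h

# Assumed layout; adjust the glob to wherever the XMLs actually live.
for xml_path in glob.glob("datasets/road2020/train/*/annotations/xmls/*.xml"):
    root = ET.parse(xml_path).getroot()
    size = root.find("size")
    img_w, img_h = int(size.find("width").text), int(size.find("height").text)
    lines = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in CLASSES:  # ignore classes outside the competition set
            continue
        bb = obj.find("bndbox")
        coords = [float(bb.find(k).text) for k in ("xmin", "ymin", "xmax", "ymax")]
        yolo_box = voc_to_yolo(img_w, img_h, *coords)
        lines.append(f"{CLASSES.index(name)} " + " ".join(f"{v:.6f}" for v in yolo_box))
    # Writes the label next to the XML; the real script targets YOLO's labels/ layout.
    with open(xml_path.replace(".xml", ".txt"), "w") as out:
        out.write("\n".join(lines) + "\n")
```
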
## IMSC YOLOv5 Model zoo

1. Go to the `yolov5` directory:
```Shell
cd yolov5
```

2. Download the YOLOv5 model zoo:
```Shell
bash scripts/download_IMSC_grddc2020_weights.sh
```

## Detection / Submission
1. Download the weights as described in [IMSC YOLOv5 Model zoo](#imsc-yolov5-model-zoo)

2. Go to the `yolov5` directory:
```Shell
cd yolov5
```
3. Execute one of the following commands to generate `results.csv` (competition format) and predicted images under `inference/output/` (notes on the flags follow these commands):
```Shell
# inference using the best ensemble model for the test1 dataset
python3 detect.py --weights weights/IMSC/last_95_448_32_aug2.pt weights/IMSC/last_95_640_16.pt weights/IMSC/last_120_640_32_aug2.pt --img 640 --source datasets/road2020/test1/test_images/ --conf-thres 0.22 --iou-thres 0.9999 --agnostic-nms --augment
```

```Shell
# inference using the best ensemble model for the test2 dataset
python3 detect.py --weights weights/IMSC/last_95_448_32_aug2.pt weights/IMSC/last_95_640_16.pt weights/IMSC/last_120_640_32_aug2.pt weights/IMSC/last_100_100_640_16.pt --img 640 --source datasets/road2020/test2/test_images/ --conf-thres 0.22 --iou-thres 0.9999 --agnostic-nms --augment
```

```Shell
# inference using the best non-ensemble model for the test1 dataset
python3 detect.py --weights weights/IMSC/last_95.pt --img 640 --source datasets/road2020/test1/test_images/ --conf-thres 0.20 --iou-thres 0.9999 --agnostic-nms --augment
```

```Shell
# inference using the best non-ensemble model for the test2 dataset
python3 detect.py --weights weights/IMSC/last_95.pt --img 640 --source datasets/road2020/test2/test_images/ --conf-thres 0.20 --iou-thres 0.9999 --agnostic-nms --augment
```

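A note on the flags: passing several `--weights` files makes `detect.py` run the checkpoints as an ensemble, and `--iou-thres 0.9999` sets the NMS overlap threshold so high that only near-identical duplicate boxes are suppressed. To sanity-check `results.csv` before submitting, a small parser helps. The sketch below assumes the GRDDC submission layout (one row per image: the image name, a comma, then space-separated `label xmin ymin xmax ymax` groups); verify against your actual file before relying on it.

```python
import csv
from collections import Counter

label_counts = Counter()   # detections per damage label
images_with_preds = 0

# Assumed layout: "<image_name>,<label xmin ymin xmax ymax ...>" per row.
with open("results.csv", newline="") as f:
    for image_name, pred_string in csv.reader(f):
        fields = pred_string.split()
        assert len(fields) % 5 == 0, f"malformed prediction row for {image_name}"
        if fields:
            images_with_preds += 1
        for i in range(0, len(fields), 5):
            label_counts[fields[i]] += 1  # fields[i+1:i+5] is the xyxy box

print(f"{images_with_preds} images carry at least one prediction")
print(dict(label_counts))
```
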
## Performance on RDDC test datasets

| YOLOv5x_448_32_aug2 | YOLOv5x_640_16_95 | YOLOv5x_640_16_100 | YOLOv5x_640_32 | YOLOv5x_640_16_aug2 | YOLOv5x_640_32_aug2 | test1 F1-score | test2 F1-score |
|------- |------------------- |------------------- |------------------- |------------------- |------------------- |------------------- |------------------- |
| | :heavy_check_mark: | | | | | 0.66697383879131 | 0.651389430313506 |
| :heavy_check_mark: | :heavy_check_mark: | | | | :heavy_check_mark: | **0.674878682854973** | 0.665632401648316 |
| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | 0.674198239966431 | **0.666213894130645** |


## Training
1. Download the pre-trained weights from the YOLOv5 repo:
```Shell
bash weights/download_weights.sh
```

2. Run the following command:
```Shell
python3 train.py --data data/road.yaml --cfg models/yolov5x.yaml --weights weights/yolov5x.pt --batch-size 64
```
Visit the official [yolov5](https://github.com/ultralytics/yolov5) repository for more training- and inference-time arguments.

--------------------------------------------------------------------------------
/examples/.gitkeep:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/examples/sample1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/USC-InfoLab/rddc2020/72cda97851fb6a48b5b9a55048ba38c890396d23/examples/sample1.jpg
--------------------------------------------------------------------------------
/examples/sample2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/USC-InfoLab/rddc2020/72cda97851fb6a48b5b9a55048ba38c890396d23/examples/sample2.jpg
--------------------------------------------------------------------------------
/examples/sample3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/USC-InfoLab/rddc2020/72cda97851fb6a48b5b9a55048ba38c890396d23/examples/sample3.jpg
--------------------------------------------------------------------------------
/examples/sample4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/USC-InfoLab/rddc2020/72cda97851fb6a48b5b9a55048ba38c890396d23/examples/sample4.jpg
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
-r yolov5/requirements.txt
gdown
--------------------------------------------------------------------------------
/yolov5/.dockerignore:
--------------------------------------------------------------------------------
1 | # Repo-specific DockerIgnore -------------------------------------------------------------------------------------------
2 | .git
3 | .cache
4 | .idea
5 | runs
6 | output
7 | coco
8 | storage.googleapis.com
9 | 
10 | data/samples/*
11 | **/results*.txt
12 | *.jpg
13 | 
14 | # Neural Network weights -----------------------------------------------------------------------------------------------
15 | **/*.weights
16 | **/*.pt
17 | **/*.pth
18 | **/*.onnx
19 | **/*.mlmodel
20 | **/*.torchscript
21 | 
22 | 
23 | # Below Copied From .gitignore -----------------------------------------------------------------------------------------
24 | # Below Copied From .gitignore -----------------------------------------------------------------------------------------
25 | 
26 | 
27 | # GitHub Python GitIgnore ----------------------------------------------------------------------------------------------
28 | # Byte-compiled / optimized / DLL files
29 | __pycache__/
30 | *.py[cod]
31 | *$py.class
32 | 
33 | # C extensions
34 | *.so
35 | 
36 | # Distribution / packaging
37 | .Python
38 | env/
39 | build/
40 | develop-eggs/
41 | dist/
42 | downloads/
43 | eggs/
44 | .eggs/
45 | lib/
46 | lib64/
47 | parts/
48 | sdist/
49 | var/
50 | wheels/
51 | *.egg-info/
52 | .installed.cfg
53 | *.egg
54 | 
55 | # PyInstaller
56 | # Usually these files are written by a python script from a template
57 | # before PyInstaller 
builds the exe, so as to inject date/other infos into it. 58 | *.manifest 59 | *.spec 60 | 61 | # Installer logs 62 | pip-log.txt 63 | pip-delete-this-directory.txt 64 | 65 | # Unit test / coverage reports 66 | htmlcov/ 67 | .tox/ 68 | .coverage 69 | .coverage.* 70 | .cache 71 | nosetests.xml 72 | coverage.xml 73 | *.cover 74 | .hypothesis/ 75 | 76 | # Translations 77 | *.mo 78 | *.pot 79 | 80 | # Django stuff: 81 | *.log 82 | local_settings.py 83 | 84 | # Flask stuff: 85 | instance/ 86 | .webassets-cache 87 | 88 | # Scrapy stuff: 89 | .scrapy 90 | 91 | # Sphinx documentation 92 | docs/_build/ 93 | 94 | # PyBuilder 95 | target/ 96 | 97 | # Jupyter Notebook 98 | .ipynb_checkpoints 99 | 100 | # pyenv 101 | .python-version 102 | 103 | # celery beat schedule file 104 | celerybeat-schedule 105 | 106 | # SageMath parsed files 107 | *.sage.py 108 | 109 | # dotenv 110 | .env 111 | 112 | # virtualenv 113 | .venv 114 | venv*/ 115 | ENV/ 116 | 117 | # Spyder project settings 118 | .spyderproject 119 | .spyproject 120 | 121 | # Rope project settings 122 | .ropeproject 123 | 124 | # mkdocs documentation 125 | /site 126 | 127 | # mypy 128 | .mypy_cache/ 129 | 130 | 131 | # https://github.com/github/gitignore/blob/master/Global/macOS.gitignore ----------------------------------------------- 132 | 133 | # General 134 | .DS_Store 135 | .AppleDouble 136 | .LSOverride 137 | 138 | # Icon must end with two \r 139 | Icon 140 | Icon? 141 | 142 | # Thumbnails 143 | ._* 144 | 145 | # Files that might appear in the root of a volume 146 | .DocumentRevisions-V100 147 | .fseventsd 148 | .Spotlight-V100 149 | .TemporaryItems 150 | .Trashes 151 | .VolumeIcon.icns 152 | .com.apple.timemachine.donotpresent 153 | 154 | # Directories potentially created on remote AFP share 155 | .AppleDB 156 | .AppleDesktop 157 | Network Trash Folder 158 | Temporary Items 159 | .apdisk 160 | 161 | 162 | # https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore 163 | # Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm 164 | # Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839 165 | 166 | # User-specific stuff: 167 | .idea/* 168 | .idea/**/workspace.xml 169 | .idea/**/tasks.xml 170 | .idea/dictionaries 171 | .html # Bokeh Plots 172 | .pg # TensorFlow Frozen Graphs 173 | .avi # videos 174 | 175 | # Sensitive or high-churn files: 176 | .idea/**/dataSources/ 177 | .idea/**/dataSources.ids 178 | .idea/**/dataSources.local.xml 179 | .idea/**/sqlDataSources.xml 180 | .idea/**/dynamic.xml 181 | .idea/**/uiDesigner.xml 182 | 183 | # Gradle: 184 | .idea/**/gradle.xml 185 | .idea/**/libraries 186 | 187 | # CMake 188 | cmake-build-debug/ 189 | cmake-build-release/ 190 | 191 | # Mongo Explorer plugin: 192 | .idea/**/mongoSettings.xml 193 | 194 | ## File-based project format: 195 | *.iws 196 | 197 | ## Plugin-specific files: 198 | 199 | # IntelliJ 200 | out/ 201 | 202 | # mpeltonen/sbt-idea plugin 203 | .idea_modules/ 204 | 205 | # JIRA plugin 206 | atlassian-ide-plugin.xml 207 | 208 | # Cursive Clojure plugin 209 | .idea/replstate.xml 210 | 211 | # Crashlytics plugin (for Android Studio and IntelliJ) 212 | com_crashlytics_export_strings.xml 213 | crashlytics.properties 214 | crashlytics-build.properties 215 | fabric.properties 216 | -------------------------------------------------------------------------------- /yolov5/Dockerfile: -------------------------------------------------------------------------------- 1 | # Start FROM Nvidia PyTorch image 
https://ngc.nvidia.com/catalog/containers/nvidia:pytorch 2 | FROM nvcr.io/nvidia/pytorch:20.08-py3 3 | 4 | # Install dependencies 5 | RUN pip install --upgrade pip 6 | # COPY requirements.txt . 7 | # RUN pip install -r requirements.txt 8 | RUN pip install gsutil 9 | 10 | # Create working directory 11 | RUN mkdir -p /usr/src/app 12 | WORKDIR /usr/src/app 13 | 14 | # Copy contents 15 | COPY . /usr/src/app 16 | 17 | # Copy weights 18 | #RUN python3 -c "from models import *; \ 19 | #attempt_download('weights/yolov5s.pt'); \ 20 | #attempt_download('weights/yolov5m.pt'); \ 21 | #attempt_download('weights/yolov5l.pt')" 22 | 23 | 24 | # --------------------------------------------------- Extras Below --------------------------------------------------- 25 | 26 | # Build and Push 27 | # t=ultralytics/yolov5:latest && sudo docker build -t $t . && sudo docker push $t 28 | # for v in {300..303}; do t=ultralytics/coco:v$v && sudo docker build -t $t . && sudo docker push $t; done 29 | 30 | # Pull and Run 31 | # t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host $t 32 | 33 | # Pull and Run with local directory access 34 | # t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all -v "$(pwd)"/coco:/usr/src/coco $t 35 | 36 | # Kill all 37 | # sudo docker kill $(sudo docker ps -q) 38 | 39 | # Kill all image-based 40 | # sudo docker kill $(sudo docker ps -a -q --filter ancestor=ultralytics/yolov5:latest) 41 | 42 | # Bash into running container 43 | # sudo docker container exec -it ba65811811ab bash 44 | 45 | # Bash into stopped container 46 | # sudo docker commit 092b16b25c5b usr/resume && sudo docker run -it --gpus all --ipc=host -v "$(pwd)"/coco:/usr/src/coco --entrypoint=sh usr/resume 47 | 48 | # Send weights to GCP 49 | # python -c "from utils.general import *; strip_optimizer('runs/exp0_*/weights/best.pt', 'tmp.pt')" && gsutil cp tmp.pt gs://*.pt 50 | 51 | # Clean up 52 | # docker system prune -a --volumes 53 | -------------------------------------------------------------------------------- /yolov5/LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 3, 29 June 2007 3 | 4 | Copyright (C) 2007 Free Software Foundation, Inc. 5 | Everyone is permitted to copy and distribute verbatim copies 6 | of this license document, but changing it is not allowed. 7 | 8 | Preamble 9 | 10 | The GNU General Public License is a free, copyleft license for 11 | software and other kinds of works. 12 | 13 | The licenses for most software and other practical works are designed 14 | to take away your freedom to share and change the works. By contrast, 15 | the GNU General Public License is intended to guarantee your freedom to 16 | share and change all versions of a program--to make sure it remains free 17 | software for all its users. We, the Free Software Foundation, use the 18 | GNU General Public License for most of our software; it applies also to 19 | any other work released this way by its authors. You can apply it to 20 | your programs, too. 21 | 22 | When we speak of free software, we are referring to freedom, not 23 | price. Our General Public Licenses are designed to make sure that you 24 | have the freedom to distribute copies of free software (and charge for 25 | them if you wish), that you receive source code or can get it if you 26 | want it, that you can change the software or use pieces of it in new 27 | free programs, and that you know you can do these things. 
28 | 29 | To protect your rights, we need to prevent others from denying you 30 | these rights or asking you to surrender the rights. Therefore, you have 31 | certain responsibilities if you distribute copies of the software, or if 32 | you modify it: responsibilities to respect the freedom of others. 33 | 34 | For example, if you distribute copies of such a program, whether 35 | gratis or for a fee, you must pass on to the recipients the same 36 | freedoms that you received. You must make sure that they, too, receive 37 | or can get the source code. And you must show them these terms so they 38 | know their rights. 39 | 40 | Developers that use the GNU GPL protect your rights with two steps: 41 | (1) assert copyright on the software, and (2) offer you this License 42 | giving you legal permission to copy, distribute and/or modify it. 43 | 44 | For the developers' and authors' protection, the GPL clearly explains 45 | that there is no warranty for this free software. For both users' and 46 | authors' sake, the GPL requires that modified versions be marked as 47 | changed, so that their problems will not be attributed erroneously to 48 | authors of previous versions. 49 | 50 | Some devices are designed to deny users access to install or run 51 | modified versions of the software inside them, although the manufacturer 52 | can do so. This is fundamentally incompatible with the aim of 53 | protecting users' freedom to change the software. The systematic 54 | pattern of such abuse occurs in the area of products for individuals to 55 | use, which is precisely where it is most unacceptable. Therefore, we 56 | have designed this version of the GPL to prohibit the practice for those 57 | products. If such problems arise substantially in other domains, we 58 | stand ready to extend this provision to those domains in future versions 59 | of the GPL, as needed to protect the freedom of users. 60 | 61 | Finally, every program is threatened constantly by software patents. 62 | States should not allow patents to restrict development and use of 63 | software on general-purpose computers, but in those that do, we wish to 64 | avoid the special danger that patents applied to a free program could 65 | make it effectively proprietary. To prevent this, the GPL assures that 66 | patents cannot be used to render the program non-free. 67 | 68 | The precise terms and conditions for copying, distribution and 69 | modification follow. 70 | 71 | TERMS AND CONDITIONS 72 | 73 | 0. Definitions. 74 | 75 | "This License" refers to version 3 of the GNU General Public License. 76 | 77 | "Copyright" also means copyright-like laws that apply to other kinds of 78 | works, such as semiconductor masks. 79 | 80 | "The Program" refers to any copyrightable work licensed under this 81 | License. Each licensee is addressed as "you". "Licensees" and 82 | "recipients" may be individuals or organizations. 83 | 84 | To "modify" a work means to copy from or adapt all or part of the work 85 | in a fashion requiring copyright permission, other than the making of an 86 | exact copy. The resulting work is called a "modified version" of the 87 | earlier work or a work "based on" the earlier work. 88 | 89 | A "covered work" means either the unmodified Program or a work based 90 | on the Program. 
91 | 92 | To "propagate" a work means to do anything with it that, without 93 | permission, would make you directly or secondarily liable for 94 | infringement under applicable copyright law, except executing it on a 95 | computer or modifying a private copy. Propagation includes copying, 96 | distribution (with or without modification), making available to the 97 | public, and in some countries other activities as well. 98 | 99 | To "convey" a work means any kind of propagation that enables other 100 | parties to make or receive copies. Mere interaction with a user through 101 | a computer network, with no transfer of a copy, is not conveying. 102 | 103 | An interactive user interface displays "Appropriate Legal Notices" 104 | to the extent that it includes a convenient and prominently visible 105 | feature that (1) displays an appropriate copyright notice, and (2) 106 | tells the user that there is no warranty for the work (except to the 107 | extent that warranties are provided), that licensees may convey the 108 | work under this License, and how to view a copy of this License. If 109 | the interface presents a list of user commands or options, such as a 110 | menu, a prominent item in the list meets this criterion. 111 | 112 | 1. Source Code. 113 | 114 | The "source code" for a work means the preferred form of the work 115 | for making modifications to it. "Object code" means any non-source 116 | form of a work. 117 | 118 | A "Standard Interface" means an interface that either is an official 119 | standard defined by a recognized standards body, or, in the case of 120 | interfaces specified for a particular programming language, one that 121 | is widely used among developers working in that language. 122 | 123 | The "System Libraries" of an executable work include anything, other 124 | than the work as a whole, that (a) is included in the normal form of 125 | packaging a Major Component, but which is not part of that Major 126 | Component, and (b) serves only to enable use of the work with that 127 | Major Component, or to implement a Standard Interface for which an 128 | implementation is available to the public in source code form. A 129 | "Major Component", in this context, means a major essential component 130 | (kernel, window system, and so on) of the specific operating system 131 | (if any) on which the executable work runs, or a compiler used to 132 | produce the work, or an object code interpreter used to run it. 133 | 134 | The "Corresponding Source" for a work in object code form means all 135 | the source code needed to generate, install, and (for an executable 136 | work) run the object code and to modify the work, including scripts to 137 | control those activities. However, it does not include the work's 138 | System Libraries, or general-purpose tools or generally available free 139 | programs which are used unmodified in performing those activities but 140 | which are not part of the work. For example, Corresponding Source 141 | includes interface definition files associated with source files for 142 | the work, and the source code for shared libraries and dynamically 143 | linked subprograms that the work is specifically designed to require, 144 | such as by intimate data communication or control flow between those 145 | subprograms and other parts of the work. 146 | 147 | The Corresponding Source need not include anything that users 148 | can regenerate automatically from other parts of the Corresponding 149 | Source. 
150 | 151 | The Corresponding Source for a work in source code form is that 152 | same work. 153 | 154 | 2. Basic Permissions. 155 | 156 | All rights granted under this License are granted for the term of 157 | copyright on the Program, and are irrevocable provided the stated 158 | conditions are met. This License explicitly affirms your unlimited 159 | permission to run the unmodified Program. The output from running a 160 | covered work is covered by this License only if the output, given its 161 | content, constitutes a covered work. This License acknowledges your 162 | rights of fair use or other equivalent, as provided by copyright law. 163 | 164 | You may make, run and propagate covered works that you do not 165 | convey, without conditions so long as your license otherwise remains 166 | in force. You may convey covered works to others for the sole purpose 167 | of having them make modifications exclusively for you, or provide you 168 | with facilities for running those works, provided that you comply with 169 | the terms of this License in conveying all material for which you do 170 | not control copyright. Those thus making or running the covered works 171 | for you must do so exclusively on your behalf, under your direction 172 | and control, on terms that prohibit them from making any copies of 173 | your copyrighted material outside their relationship with you. 174 | 175 | Conveying under any other circumstances is permitted solely under 176 | the conditions stated below. Sublicensing is not allowed; section 10 177 | makes it unnecessary. 178 | 179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 180 | 181 | No covered work shall be deemed part of an effective technological 182 | measure under any applicable law fulfilling obligations under article 183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or 184 | similar laws prohibiting or restricting circumvention of such 185 | measures. 186 | 187 | When you convey a covered work, you waive any legal power to forbid 188 | circumvention of technological measures to the extent such circumvention 189 | is effected by exercising rights under this License with respect to 190 | the covered work, and you disclaim any intention to limit operation or 191 | modification of the work as a means of enforcing, against the work's 192 | users, your or third parties' legal rights to forbid circumvention of 193 | technological measures. 194 | 195 | 4. Conveying Verbatim Copies. 196 | 197 | You may convey verbatim copies of the Program's source code as you 198 | receive it, in any medium, provided that you conspicuously and 199 | appropriately publish on each copy an appropriate copyright notice; 200 | keep intact all notices stating that this License and any 201 | non-permissive terms added in accord with section 7 apply to the code; 202 | keep intact all notices of the absence of any warranty; and give all 203 | recipients a copy of this License along with the Program. 204 | 205 | You may charge any price or no price for each copy that you convey, 206 | and you may offer support or warranty protection for a fee. 207 | 208 | 5. Conveying Modified Source Versions. 209 | 210 | You may convey a work based on the Program, or the modifications to 211 | produce it from the Program, in the form of source code under the 212 | terms of section 4, provided that you also meet all of these conditions: 213 | 214 | a) The work must carry prominent notices stating that you modified 215 | it, and giving a relevant date. 
216 | 217 | b) The work must carry prominent notices stating that it is 218 | released under this License and any conditions added under section 219 | 7. This requirement modifies the requirement in section 4 to 220 | "keep intact all notices". 221 | 222 | c) You must license the entire work, as a whole, under this 223 | License to anyone who comes into possession of a copy. This 224 | License will therefore apply, along with any applicable section 7 225 | additional terms, to the whole of the work, and all its parts, 226 | regardless of how they are packaged. This License gives no 227 | permission to license the work in any other way, but it does not 228 | invalidate such permission if you have separately received it. 229 | 230 | d) If the work has interactive user interfaces, each must display 231 | Appropriate Legal Notices; however, if the Program has interactive 232 | interfaces that do not display Appropriate Legal Notices, your 233 | work need not make them do so. 234 | 235 | A compilation of a covered work with other separate and independent 236 | works, which are not by their nature extensions of the covered work, 237 | and which are not combined with it such as to form a larger program, 238 | in or on a volume of a storage or distribution medium, is called an 239 | "aggregate" if the compilation and its resulting copyright are not 240 | used to limit the access or legal rights of the compilation's users 241 | beyond what the individual works permit. Inclusion of a covered work 242 | in an aggregate does not cause this License to apply to the other 243 | parts of the aggregate. 244 | 245 | 6. Conveying Non-Source Forms. 246 | 247 | You may convey a covered work in object code form under the terms 248 | of sections 4 and 5, provided that you also convey the 249 | machine-readable Corresponding Source under the terms of this License, 250 | in one of these ways: 251 | 252 | a) Convey the object code in, or embodied in, a physical product 253 | (including a physical distribution medium), accompanied by the 254 | Corresponding Source fixed on a durable physical medium 255 | customarily used for software interchange. 256 | 257 | b) Convey the object code in, or embodied in, a physical product 258 | (including a physical distribution medium), accompanied by a 259 | written offer, valid for at least three years and valid for as 260 | long as you offer spare parts or customer support for that product 261 | model, to give anyone who possesses the object code either (1) a 262 | copy of the Corresponding Source for all the software in the 263 | product that is covered by this License, on a durable physical 264 | medium customarily used for software interchange, for a price no 265 | more than your reasonable cost of physically performing this 266 | conveying of source, or (2) access to copy the 267 | Corresponding Source from a network server at no charge. 268 | 269 | c) Convey individual copies of the object code with a copy of the 270 | written offer to provide the Corresponding Source. This 271 | alternative is allowed only occasionally and noncommercially, and 272 | only if you received the object code with such an offer, in accord 273 | with subsection 6b. 274 | 275 | d) Convey the object code by offering access from a designated 276 | place (gratis or for a charge), and offer equivalent access to the 277 | Corresponding Source in the same way through the same place at no 278 | further charge. 
You need not require recipients to copy the 279 | Corresponding Source along with the object code. If the place to 280 | copy the object code is a network server, the Corresponding Source 281 | may be on a different server (operated by you or a third party) 282 | that supports equivalent copying facilities, provided you maintain 283 | clear directions next to the object code saying where to find the 284 | Corresponding Source. Regardless of what server hosts the 285 | Corresponding Source, you remain obligated to ensure that it is 286 | available for as long as needed to satisfy these requirements. 287 | 288 | e) Convey the object code using peer-to-peer transmission, provided 289 | you inform other peers where the object code and Corresponding 290 | Source of the work are being offered to the general public at no 291 | charge under subsection 6d. 292 | 293 | A separable portion of the object code, whose source code is excluded 294 | from the Corresponding Source as a System Library, need not be 295 | included in conveying the object code work. 296 | 297 | A "User Product" is either (1) a "consumer product", which means any 298 | tangible personal property which is normally used for personal, family, 299 | or household purposes, or (2) anything designed or sold for incorporation 300 | into a dwelling. In determining whether a product is a consumer product, 301 | doubtful cases shall be resolved in favor of coverage. For a particular 302 | product received by a particular user, "normally used" refers to a 303 | typical or common use of that class of product, regardless of the status 304 | of the particular user or of the way in which the particular user 305 | actually uses, or expects or is expected to use, the product. A product 306 | is a consumer product regardless of whether the product has substantial 307 | commercial, industrial or non-consumer uses, unless such uses represent 308 | the only significant mode of use of the product. 309 | 310 | "Installation Information" for a User Product means any methods, 311 | procedures, authorization keys, or other information required to install 312 | and execute modified versions of a covered work in that User Product from 313 | a modified version of its Corresponding Source. The information must 314 | suffice to ensure that the continued functioning of the modified object 315 | code is in no case prevented or interfered with solely because 316 | modification has been made. 317 | 318 | If you convey an object code work under this section in, or with, or 319 | specifically for use in, a User Product, and the conveying occurs as 320 | part of a transaction in which the right of possession and use of the 321 | User Product is transferred to the recipient in perpetuity or for a 322 | fixed term (regardless of how the transaction is characterized), the 323 | Corresponding Source conveyed under this section must be accompanied 324 | by the Installation Information. But this requirement does not apply 325 | if neither you nor any third party retains the ability to install 326 | modified object code on the User Product (for example, the work has 327 | been installed in ROM). 328 | 329 | The requirement to provide Installation Information does not include a 330 | requirement to continue to provide support service, warranty, or updates 331 | for a work that has been modified or installed by the recipient, or for 332 | the User Product in which it has been modified or installed. 
Access to a 333 | network may be denied when the modification itself materially and 334 | adversely affects the operation of the network or violates the rules and 335 | protocols for communication across the network. 336 | 337 | Corresponding Source conveyed, and Installation Information provided, 338 | in accord with this section must be in a format that is publicly 339 | documented (and with an implementation available to the public in 340 | source code form), and must require no special password or key for 341 | unpacking, reading or copying. 342 | 343 | 7. Additional Terms. 344 | 345 | "Additional permissions" are terms that supplement the terms of this 346 | License by making exceptions from one or more of its conditions. 347 | Additional permissions that are applicable to the entire Program shall 348 | be treated as though they were included in this License, to the extent 349 | that they are valid under applicable law. If additional permissions 350 | apply only to part of the Program, that part may be used separately 351 | under those permissions, but the entire Program remains governed by 352 | this License without regard to the additional permissions. 353 | 354 | When you convey a copy of a covered work, you may at your option 355 | remove any additional permissions from that copy, or from any part of 356 | it. (Additional permissions may be written to require their own 357 | removal in certain cases when you modify the work.) You may place 358 | additional permissions on material, added by you to a covered work, 359 | for which you have or can give appropriate copyright permission. 360 | 361 | Notwithstanding any other provision of this License, for material you 362 | add to a covered work, you may (if authorized by the copyright holders of 363 | that material) supplement the terms of this License with terms: 364 | 365 | a) Disclaiming warranty or limiting liability differently from the 366 | terms of sections 15 and 16 of this License; or 367 | 368 | b) Requiring preservation of specified reasonable legal notices or 369 | author attributions in that material or in the Appropriate Legal 370 | Notices displayed by works containing it; or 371 | 372 | c) Prohibiting misrepresentation of the origin of that material, or 373 | requiring that modified versions of such material be marked in 374 | reasonable ways as different from the original version; or 375 | 376 | d) Limiting the use for publicity purposes of names of licensors or 377 | authors of the material; or 378 | 379 | e) Declining to grant rights under trademark law for use of some 380 | trade names, trademarks, or service marks; or 381 | 382 | f) Requiring indemnification of licensors and authors of that 383 | material by anyone who conveys the material (or modified versions of 384 | it) with contractual assumptions of liability to the recipient, for 385 | any liability that these contractual assumptions directly impose on 386 | those licensors and authors. 387 | 388 | All other non-permissive additional terms are considered "further 389 | restrictions" within the meaning of section 10. If the Program as you 390 | received it, or any part of it, contains a notice stating that it is 391 | governed by this License along with a term that is a further 392 | restriction, you may remove that term. 
If a license document contains 393 | a further restriction but permits relicensing or conveying under this 394 | License, you may add to a covered work material governed by the terms 395 | of that license document, provided that the further restriction does 396 | not survive such relicensing or conveying. 397 | 398 | If you add terms to a covered work in accord with this section, you 399 | must place, in the relevant source files, a statement of the 400 | additional terms that apply to those files, or a notice indicating 401 | where to find the applicable terms. 402 | 403 | Additional terms, permissive or non-permissive, may be stated in the 404 | form of a separately written license, or stated as exceptions; 405 | the above requirements apply either way. 406 | 407 | 8. Termination. 408 | 409 | You may not propagate or modify a covered work except as expressly 410 | provided under this License. Any attempt otherwise to propagate or 411 | modify it is void, and will automatically terminate your rights under 412 | this License (including any patent licenses granted under the third 413 | paragraph of section 11). 414 | 415 | However, if you cease all violation of this License, then your 416 | license from a particular copyright holder is reinstated (a) 417 | provisionally, unless and until the copyright holder explicitly and 418 | finally terminates your license, and (b) permanently, if the copyright 419 | holder fails to notify you of the violation by some reasonable means 420 | prior to 60 days after the cessation. 421 | 422 | Moreover, your license from a particular copyright holder is 423 | reinstated permanently if the copyright holder notifies you of the 424 | violation by some reasonable means, this is the first time you have 425 | received notice of violation of this License (for any work) from that 426 | copyright holder, and you cure the violation prior to 30 days after 427 | your receipt of the notice. 428 | 429 | Termination of your rights under this section does not terminate the 430 | licenses of parties who have received copies or rights from you under 431 | this License. If your rights have been terminated and not permanently 432 | reinstated, you do not qualify to receive new licenses for the same 433 | material under section 10. 434 | 435 | 9. Acceptance Not Required for Having Copies. 436 | 437 | You are not required to accept this License in order to receive or 438 | run a copy of the Program. Ancillary propagation of a covered work 439 | occurring solely as a consequence of using peer-to-peer transmission 440 | to receive a copy likewise does not require acceptance. However, 441 | nothing other than this License grants you permission to propagate or 442 | modify any covered work. These actions infringe copyright if you do 443 | not accept this License. Therefore, by modifying or propagating a 444 | covered work, you indicate your acceptance of this License to do so. 445 | 446 | 10. Automatic Licensing of Downstream Recipients. 447 | 448 | Each time you convey a covered work, the recipient automatically 449 | receives a license from the original licensors, to run, modify and 450 | propagate that work, subject to this License. You are not responsible 451 | for enforcing compliance by third parties with this License. 452 | 453 | An "entity transaction" is a transaction transferring control of an 454 | organization, or substantially all assets of one, or subdividing an 455 | organization, or merging organizations. 
If propagation of a covered 456 | work results from an entity transaction, each party to that 457 | transaction who receives a copy of the work also receives whatever 458 | licenses to the work the party's predecessor in interest had or could 459 | give under the previous paragraph, plus a right to possession of the 460 | Corresponding Source of the work from the predecessor in interest, if 461 | the predecessor has it or can get it with reasonable efforts. 462 | 463 | You may not impose any further restrictions on the exercise of the 464 | rights granted or affirmed under this License. For example, you may 465 | not impose a license fee, royalty, or other charge for exercise of 466 | rights granted under this License, and you may not initiate litigation 467 | (including a cross-claim or counterclaim in a lawsuit) alleging that 468 | any patent claim is infringed by making, using, selling, offering for 469 | sale, or importing the Program or any portion of it. 470 | 471 | 11. Patents. 472 | 473 | A "contributor" is a copyright holder who authorizes use under this 474 | License of the Program or a work on which the Program is based. The 475 | work thus licensed is called the contributor's "contributor version". 476 | 477 | A contributor's "essential patent claims" are all patent claims 478 | owned or controlled by the contributor, whether already acquired or 479 | hereafter acquired, that would be infringed by some manner, permitted 480 | by this License, of making, using, or selling its contributor version, 481 | but do not include claims that would be infringed only as a 482 | consequence of further modification of the contributor version. For 483 | purposes of this definition, "control" includes the right to grant 484 | patent sublicenses in a manner consistent with the requirements of 485 | this License. 486 | 487 | Each contributor grants you a non-exclusive, worldwide, royalty-free 488 | patent license under the contributor's essential patent claims, to 489 | make, use, sell, offer for sale, import and otherwise run, modify and 490 | propagate the contents of its contributor version. 491 | 492 | In the following three paragraphs, a "patent license" is any express 493 | agreement or commitment, however denominated, not to enforce a patent 494 | (such as an express permission to practice a patent or covenant not to 495 | sue for patent infringement). To "grant" such a patent license to a 496 | party means to make such an agreement or commitment not to enforce a 497 | patent against the party. 498 | 499 | If you convey a covered work, knowingly relying on a patent license, 500 | and the Corresponding Source of the work is not available for anyone 501 | to copy, free of charge and under the terms of this License, through a 502 | publicly available network server or other readily accessible means, 503 | then you must either (1) cause the Corresponding Source to be so 504 | available, or (2) arrange to deprive yourself of the benefit of the 505 | patent license for this particular work, or (3) arrange, in a manner 506 | consistent with the requirements of this License, to extend the patent 507 | license to downstream recipients. "Knowingly relying" means you have 508 | actual knowledge that, but for the patent license, your conveying the 509 | covered work in a country, or your recipient's use of the covered work 510 | in a country, would infringe one or more identifiable patents in that 511 | country that you have reason to believe are valid. 
512 | 513 | If, pursuant to or in connection with a single transaction or 514 | arrangement, you convey, or propagate by procuring conveyance of, a 515 | covered work, and grant a patent license to some of the parties 516 | receiving the covered work authorizing them to use, propagate, modify 517 | or convey a specific copy of the covered work, then the patent license 518 | you grant is automatically extended to all recipients of the covered 519 | work and works based on it. 520 | 521 | A patent license is "discriminatory" if it does not include within 522 | the scope of its coverage, prohibits the exercise of, or is 523 | conditioned on the non-exercise of one or more of the rights that are 524 | specifically granted under this License. You may not convey a covered 525 | work if you are a party to an arrangement with a third party that is 526 | in the business of distributing software, under which you make payment 527 | to the third party based on the extent of your activity of conveying 528 | the work, and under which the third party grants, to any of the 529 | parties who would receive the covered work from you, a discriminatory 530 | patent license (a) in connection with copies of the covered work 531 | conveyed by you (or copies made from those copies), or (b) primarily 532 | for and in connection with specific products or compilations that 533 | contain the covered work, unless you entered into that arrangement, 534 | or that patent license was granted, prior to 28 March 2007. 535 | 536 | Nothing in this License shall be construed as excluding or limiting 537 | any implied license or other defenses to infringement that may 538 | otherwise be available to you under applicable patent law. 539 | 540 | 12. No Surrender of Others' Freedom. 541 | 542 | If conditions are imposed on you (whether by court order, agreement or 543 | otherwise) that contradict the conditions of this License, they do not 544 | excuse you from the conditions of this License. If you cannot convey a 545 | covered work so as to satisfy simultaneously your obligations under this 546 | License and any other pertinent obligations, then as a consequence you may 547 | not convey it at all. For example, if you agree to terms that obligate you 548 | to collect a royalty for further conveying from those to whom you convey 549 | the Program, the only way you could satisfy both those terms and this 550 | License would be to refrain entirely from conveying the Program. 551 | 552 | 13. Use with the GNU Affero General Public License. 553 | 554 | Notwithstanding any other provision of this License, you have 555 | permission to link or combine any covered work with a work licensed 556 | under version 3 of the GNU Affero General Public License into a single 557 | combined work, and to convey the resulting work. The terms of this 558 | License will continue to apply to the part which is the covered work, 559 | but the special requirements of the GNU Affero General Public License, 560 | section 13, concerning interaction through a network will apply to the 561 | combination as such. 562 | 563 | 14. Revised Versions of this License. 564 | 565 | The Free Software Foundation may publish revised and/or new versions of 566 | the GNU General Public License from time to time. Such new versions will 567 | be similar in spirit to the present version, but may differ in detail to 568 | address new problems or concerns. 569 | 570 | Each version is given a distinguishing version number. 
If the 571 | Program specifies that a certain numbered version of the GNU General 572 | Public License "or any later version" applies to it, you have the 573 | option of following the terms and conditions either of that numbered 574 | version or of any later version published by the Free Software 575 | Foundation. If the Program does not specify a version number of the 576 | GNU General Public License, you may choose any version ever published 577 | by the Free Software Foundation. 578 | 579 | If the Program specifies that a proxy can decide which future 580 | versions of the GNU General Public License can be used, that proxy's 581 | public statement of acceptance of a version permanently authorizes you 582 | to choose that version for the Program. 583 | 584 | Later license versions may give you additional or different 585 | permissions. However, no additional obligations are imposed on any 586 | author or copyright holder as a result of your choosing to follow a 587 | later version. 588 | 589 | 15. Disclaimer of Warranty. 590 | 591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY 592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT 593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY 594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, 595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM 597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF 598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 599 | 600 | 16. Limitation of Liability. 601 | 602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS 604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY 605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE 606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF 607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD 608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), 609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF 610 | SUCH DAMAGES. 611 | 612 | 17. Interpretation of Sections 15 and 16. 613 | 614 | If the disclaimer of warranty and limitation of liability provided 615 | above cannot be given local legal effect according to their terms, 616 | reviewing courts shall apply local law that most closely approximates 617 | an absolute waiver of all civil liability in connection with the 618 | Program, unless a warranty or assumption of liability accompanies a 619 | copy of the Program in return for a fee. 620 | 621 | END OF TERMS AND CONDITIONS 622 | 623 | How to Apply These Terms to Your New Programs 624 | 625 | If you develop a new program, and you want it to be of the greatest 626 | possible use to the public, the best way to achieve this is to make it 627 | free software which everyone can redistribute and change under these terms. 628 | 629 | To do so, attach the following notices to the program. It is safest 630 | to attach them to the start of each source file to most effectively 631 | state the exclusion of warranty; and each file should have at least 632 | the "copyright" line and a pointer to where the full notice is found. 
633 | 634 | 635 | Copyright (C) 636 | 637 | This program is free software: you can redistribute it and/or modify 638 | it under the terms of the GNU General Public License as published by 639 | the Free Software Foundation, either version 3 of the License, or 640 | (at your option) any later version. 641 | 642 | This program is distributed in the hope that it will be useful, 643 | but WITHOUT ANY WARRANTY; without even the implied warranty of 644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 645 | GNU General Public License for more details. 646 | 647 | You should have received a copy of the GNU General Public License 648 | along with this program. If not, see . 649 | 650 | Also add information on how to contact you by electronic and paper mail. 651 | 652 | If the program does terminal interaction, make it output a short 653 | notice like this when it starts in an interactive mode: 654 | 655 | Copyright (C) 656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 657 | This is free software, and you are welcome to redistribute it 658 | under certain conditions; type `show c' for details. 659 | 660 | The hypothetical commands `show w' and `show c' should show the appropriate 661 | parts of the General Public License. Of course, your program's commands 662 | might be different; for a GUI interface, you would use an "about box". 663 | 664 | You should also get your employer (if you work as a programmer) or school, 665 | if any, to sign a "copyright disclaimer" for the program, if necessary. 666 | For more information on this, and how to apply and follow the GNU GPL, see 667 | . 668 | 669 | The GNU General Public License does not permit incorporating your program 670 | into proprietary programs. If your program is a subroutine library, you 671 | may consider it more useful to permit linking proprietary applications with 672 | the library. If this is what you want to do, use the GNU Lesser General 673 | Public License instead of this License. But first, please read 674 | . -------------------------------------------------------------------------------- /yolov5/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 |   4 | 5 | ![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg) 6 | 7 | This repository represents Ultralytics open-source research into future object detection methods, and incorporates our lessons learned and best practices evolved over training thousands of models on custom client datasets with our previous YOLO repository https://github.com/ultralytics/yolov3. **All code and models are under active development, and are subject to modification or deletion without notice.** Use at your own risk. 8 | 9 | ** GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS. EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8. 10 | 11 | - **August 13, 2020**: [v3.0 release](https://github.com/ultralytics/yolov5/releases/tag/v3.0): nn.Hardswish() activations, data autodownload, native AMP. 12 | - **July 23, 2020**: [v2.0 release](https://github.com/ultralytics/yolov5/releases/tag/v2.0): improved model definition, training and mAP. 
- **June 22, 2020**: [PANet](https://arxiv.org/abs/1803.01534) updates: new heads, reduced parameters, improved speed and mAP [364fcfd](https://github.com/ultralytics/yolov5/commit/364fcfd7dba53f46edd4f04c037a039c0a287972).
- **June 19, 2020**: [FP16](https://pytorch.org/docs/stable/nn.html#torch.nn.Module.half) as new default for smaller checkpoints and faster inference [d4c6674](https://github.com/ultralytics/yolov5/commit/d4c6674c98e19df4c40e33a777610a18d1961145).
- **June 9, 2020**: [CSP](https://github.com/WongKinYiu/CrossStagePartialNetworks) updates: improved speed, size, and accuracy (credit to @WongKinYiu for CSP).
- **May 27, 2020**: Public release. YOLOv5 models are SOTA among all known YOLO implementations.
- **April 1, 2020**: Start development of future compound-scaled [YOLOv3](https://github.com/ultralytics/yolov3)/[YOLOv4](https://github.com/AlexeyAB/darknet)-based PyTorch models.


## Pretrained Checkpoints

| Model | AP<sup>val</sup> | AP<sup>test</sup> | AP<sub>50</sub> | Speed<sub>GPU</sub> | FPS<sub>GPU</sub> || params | FLOPS |
|---------- |------ |------ |------ | -------- | ------| ------ |------ | :------: |
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | 37.0 | 37.0 | 56.2 | **2.4ms** | **416** || 7.5M | 13.2B
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | 44.3 | 44.3 | 63.2 | 3.4ms | 294 || 21.8M | 39.4B
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | 47.7 | 47.7 | 66.5 | 4.4ms | 227 || 47.8M | 88.1B
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | **49.2** | **49.2** | **67.7** | 6.9ms | 145 || 89.0M | 166.4B
| | | | | | || |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/tag/v3.0) + TTA | **50.8** | **50.8** | **68.9** | 25.5ms | 39 || 89.0M | 354.3B
| | | | | | || |
| [YOLOv3-SPP](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | 45.6 | 45.5 | 65.2 | 4.5ms | 222 || 63.0M | 118.0B

** AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results in the table denote val2017 accuracy.
** All AP numbers are for single-model single-scale without ensemble or test-time augmentation. **Reproduce** by `python test.py --data coco.yaml --img 640 --conf 0.001`
** Speed<sub>GPU</sub> measures end-to-end time per image averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) instance with one V100 GPU, and includes image preprocessing, PyTorch FP16 image inference at --batch-size 32 --img-size 640, postprocessing and NMS. Average NMS time included in this chart is 1-2ms/img. **Reproduce** by `python test.py --data coco.yaml --img 640 --conf 0.1`
** All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
** Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) runs at 3 image sizes. **Reproduce** by `python test.py --data coco.yaml --img 832 --augment`

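The Speed and FPS columns are two readings of the same measurement: FPS is roughly 1000 divided by the per-image latency in milliseconds. A quick arithmetic check against the table (values copied from the rows above):

```python
# FPS is simply 1000 / per-image GPU latency (ms); values from the table above.
latencies_ms = {"YOLOv5s": 2.4, "YOLOv5m": 3.4, "YOLOv5l": 4.4, "YOLOv5x": 6.9}

for name, ms in latencies_ms.items():
    print(f"{name}: {ms:.1f} ms/img -> {1000 / ms:.0f} FPS")
# YOLOv5s: 2.4 ms/img -> 417 FPS   (the table reports 416)
```
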
To install run: 42 | ```bash 43 | $ pip install -r requirements.txt 44 | ``` 45 | 46 | 47 | ## Tutorials 48 | 49 | * [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) 50 | * [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475) 51 | * [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36) 52 | * [ONNX and TorchScript Export](https://github.com/ultralytics/yolov5/issues/251) 53 | * [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303) 54 | * [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318) 55 | * [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304) 56 | * [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607) 57 | * [TensorRT Deployment](https://github.com/wang-xinyu/tensorrtx) 58 | 59 | 60 | ## Environments 61 | 62 | YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled): 63 | 64 | - **Google Colab Notebook** with free GPU: Open In Colab 65 | - **Kaggle Notebook** with free GPU: [https://www.kaggle.com/ultralytics/yolov5](https://www.kaggle.com/ultralytics/yolov5) 66 | - **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart) 67 | - **Docker Image** https://hub.docker.com/r/ultralytics/yolov5. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) ![Docker Pulls](https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker) 68 | 69 | 70 | ## Inference 71 | 72 | Inference can be run on most common media formats. Model [checkpoints](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) are downloaded automatically if available. Results are saved to `./inference/output`. 73 | ```bash 74 | $ python detect.py --source 0 # webcam 75 | file.jpg # image 76 | file.mp4 # video 77 | path/ # directory 78 | path/*.jpg # glob 79 | rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa # rtsp stream 80 | rtmp://192.168.1.105/live/test # rtmp stream 81 | http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8 # http stream 82 | ``` 83 | 84 | To run inference on examples in the `./inference/images` folder: 85 | 86 | ```bash 87 | $ python detect.py --source ./inference/images/ --weights yolov5s.pt --conf 0.4 88 | 89 | Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.4, device='', fourcc='mp4v', half=False, img_size=640, iou_thres=0.5, output='inference/output', save_txt=False, source='./inference/images/', view_img=False, weights='yolov5s.pt') 90 | Using CUDA device0 _CudaDeviceProperties(name='Tesla P100-PCIE-16GB', total_memory=16280MB) 91 | 92 | Downloading https://drive.google.com/uc?export=download&id=1R5T6rIyy3lLwgFXNms8whc-387H0tMQO as yolov5s.pt... Done (2.6s) 93 | 94 | image 1/2 inference/images/bus.jpg: 640x512 3 persons, 1 buss, Done. (0.009s) 95 | image 2/2 inference/images/zidane.jpg: 384x640 2 persons, 2 ties, Done. (0.009s) 96 | Results saved to /content/yolov5/inference/output 97 | ``` 98 | 99 | 100 | 101 | 102 | ## Training 103 | 104 | Download [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) and run command below. Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). 
Use the largest `--batch-size` your GPU allows (batch sizes shown for 16 GB devices). 105 | ```bash 106 | $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64 107 | yolov5m 40 108 | yolov5l 24 109 | yolov5x 16 110 | ``` 111 | 112 | 113 | 114 | ## Citation 115 | 116 | [![DOI](https://zenodo.org/badge/264818686.svg)](https://zenodo.org/badge/latestdoi/264818686) 117 | 118 | 119 | ## About Us 120 | 121 | Ultralytics is a U.S.-based particle physics and AI startup with over 6 years of expertise supporting government, academic and business clients. We offer a wide range of vision AI services, spanning from simple expert advice up to delivery of fully customized, end-to-end production solutions, including: 122 | - **Cloud-based AI** systems operating on **hundreds of HD video streams in realtime.** 123 | - **Edge AI** integrated into custom iOS and Android apps for realtime **30 FPS video inference.** 124 | - **Custom data training**, hyperparameter evolution, and model export to any destination. 125 | 126 | For business inquiries and professional support requests please visit us at https://www.ultralytics.com. 127 | 128 | 129 | ## Contact 130 | 131 | **Issues should be raised directly in the repository.** For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com. 132 | -------------------------------------------------------------------------------- /yolov5/data/coco.yaml: -------------------------------------------------------------------------------- 1 | # COCO 2017 dataset http://cocodataset.org 2 | # Train command: python train.py --data coco.yaml 3 | # Default dataset location is next to /yolov5: 4 | # /parent_folder 5 | # /coco 6 | # /yolov5 7 | 8 | 9 | # download command/URL (optional) 10 | download: bash data/scripts/get_coco.sh 11 | 12 | # train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/] 13 | train: ../coco/train2017.txt # 118287 images 14 | val: ../coco/val2017.txt # 5000 images 15 | test: ../coco/test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794 16 | 17 | # number of classes 18 | nc: 80 19 | 20 | # class names 21 | names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 22 | 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 23 | 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 24 | 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 25 | 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 26 | 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 27 | 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 28 | 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 29 | 'hair drier', 'toothbrush'] 30 | 31 | # Print classes 32 | # with open('data/coco.yaml') as f: 33 | # d = yaml.load(f, Loader=yaml.FullLoader) # dict 34 | # for i, x in enumerate(d['names']): 35 | # print(i, x) 36 | --------------------------------------------------------------------------------
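The commented "Print classes" block at the end of coco.yaml above sketches how to enumerate the class names, but it omits the import it needs. A runnable version, as a minimal sketch assuming it is executed from the `yolov5` directory so that `data/coco.yaml` resolves:

```python
import yaml  # PyYAML, already listed in requirements.txt

# Load the dataset config and print an index for every class name
with open('data/coco.yaml') as f:
    d = yaml.load(f, Loader=yaml.FullLoader)  # dict with download/train/val/test/nc/names keys

for i, x in enumerate(d['names']):
    print(i, x)  # e.g. "0 person"
```

The same pattern works for any dataset config in this directory, e.g. data/road.yaml with its four damage classes.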
/yolov5/data/coco128.yaml: -------------------------------------------------------------------------------- 1 | # COCO 2017 dataset http://cocodataset.org - first 128 training images 2 | # Train command: python train.py --data coco128.yaml 3 | # Default dataset location is next to /yolov5: 4 | # /parent_folder 5 | # /coco128 6 | # /yolov5 7 | 8 | 9 | # download command/URL (optional) 10 | download: https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip 11 | 12 | # train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/] 13 | train: ../coco128/images/train2017/ # 128 images 14 | val: ../coco128/images/train2017/ # 128 images 15 | 16 | # number of classes 17 | nc: 80 18 | 19 | # class names 20 | names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 21 | 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 22 | 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 23 | 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 24 | 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 25 | 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 26 | 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 27 | 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 28 | 'hair drier', 'toothbrush'] 29 | -------------------------------------------------------------------------------- /yolov5/data/hyp.finetune.yaml: -------------------------------------------------------------------------------- 1 | # Hyperparameters for VOC finetuning 2 | # python train.py --batch 64 --weights yolov5m.pt --data voc.yaml --img 512 --epochs 50 3 | # See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials 4 | 5 | 6 | # Hyperparameter Evolution Results 7 | # Generations: 306 8 | # P R mAP.5 mAP.5:.95 box obj cls 9 | # Metrics: 0.6 0.936 0.896 0.684 0.0115 0.00805 0.00146 10 | 11 | lr0: 0.0032 12 | lrf: 0.12 13 | momentum: 0.843 14 | weight_decay: 0.00036 15 | warmup_epochs: 2.0 16 | warmup_momentum: 0.5 17 | warmup_bias_lr: 0.05 18 | giou: 0.0296 19 | cls: 0.243 20 | cls_pw: 0.631 21 | obj: 0.301 22 | obj_pw: 0.911 23 | iou_t: 0.2 24 | anchor_t: 2.91 25 | # anchors: 3.63 26 | fl_gamma: 0.0 27 | hsv_h: 0.0138 28 | hsv_s: 0.664 29 | hsv_v: 0.464 30 | degrees: 0.373 31 | translate: 0.245 32 | scale: 0.898 33 | shear: 0.602 34 | perspective: 0.0 35 | flipud: 0.00856 36 | fliplr: 0.5 37 | mosaic: 1.0 38 | mixup: 0.243 39 | -------------------------------------------------------------------------------- /yolov5/data/hyp.scratch.yaml: -------------------------------------------------------------------------------- 1 | # Hyperparameters for COCO training from scratch 2 | # python train.py --batch 40 --cfg yolov5m.yaml --weights '' --data coco.yaml --img 640 --epochs 300 3 | # See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials 4 | 5 | 6 | lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3) 7 | lrf: 0.2 # final OneCycleLR learning rate (lr0 * lrf) 8 | momentum: 0.937 # SGD momentum/Adam beta1 9 | weight_decay: 0.0005 # optimizer weight decay 5e-4 10 | warmup_epochs: 3.0 # warmup epochs (fractions ok) 11 |
warmup_momentum: 0.8 # warmup initial momentum 12 | warmup_bias_lr: 0.1 # warmup initial bias lr 13 | giou: 0.05 # box loss gain 14 | cls: 0.5 # cls loss gain 15 | cls_pw: 1.0 # cls BCELoss positive_weight 16 | obj: 1.0 # obj loss gain (scale with pixels) 17 | obj_pw: 1.0 # obj BCELoss positive_weight 18 | iou_t: 0.20 # IoU training threshold 19 | anchor_t: 4.0 # anchor-multiple threshold 20 | # anchors: 0 # anchors per output grid (0 to ignore) 21 | fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5) 22 | hsv_h: 0.015 # image HSV-Hue augmentation (fraction) 23 | hsv_s: 0.7 # image HSV-Saturation augmentation (fraction) 24 | hsv_v: 0.4 # image HSV-Value augmentation (fraction) 25 | degrees: 0.0 # image rotation (+/- deg) 26 | translate: 0.1 # image translation (+/- fraction) 27 | scale: 0.5 # image scale (+/- gain) 28 | shear: 0.0 # image shear (+/- deg) 29 | perspective: 0.0 # image perspective (+/- fraction), range 0-0.001 30 | flipud: 0.0 # image flip up-down (probability) 31 | fliplr: 0.5 # image flip left-right (probability) 32 | mosaic: 1.0 # image mosaic (probability) 33 | mixup: 0.0 # image mixup (probability) 34 | -------------------------------------------------------------------------------- /yolov5/data/road.yaml: -------------------------------------------------------------------------------- 1 | train: datasets/road2020/train.txt 2 | val: datasets/road2020/val.txt 3 | nc: 4 4 | names: ['D00','D10','D20','D40'] 5 | -------------------------------------------------------------------------------- /yolov5/data/scripts/get_coco.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # COCO 2017 dataset http://cocodataset.org 3 | # Download command: bash data/scripts/get_coco.sh 4 | # Train command: python train.py --data coco.yaml 5 | # Default dataset location is next to /yolov5: 6 | # /parent_folder 7 | # /coco 8 | # /yolov5 9 | 10 | # Download/unzip labels 11 | echo 'Downloading COCO 2017 labels ...' 12 | d='../' # unzip directory 13 | f='coco2017labels.zip' && curl -L https://github.com/ultralytics/yolov5/releases/download/v1.0/$f -o $f 14 | unzip -q $f -d $d && rm $f 15 | 16 | # Download/unzip images 17 | echo 'Downloading COCO 2017 images ...' 18 | d='../coco/images' # unzip directory 19 | f='train2017.zip' && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f -d $d && rm $f # 19G, 118k images 20 | f='val2017.zip' && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f -d $d && rm $f # 1G, 5k images 21 | # f='test2017.zip' && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f -d $d && rm $f # 7G, 41k images 22 | -------------------------------------------------------------------------------- /yolov5/data/scripts/get_voc.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # PASCAL VOC dataset http://host.robots.ox.ac.uk/pascal/VOC/ 3 | # Download command: bash data/scripts/get_voc.sh 4 | # Train command: python train.py --data voc.yaml 5 | # Default dataset location is next to /yolov5: 6 | # /parent_folder 7 | # /VOC 8 | # /yolov5 9 | 10 | start=$(date +%s) 11 | 12 | # handle optional download dir 13 | if [ -z "$1" ]; then 14 | # navigate to ~/tmp 15 | echo "navigating to ../tmp/ ..." 16 | mkdir -p ../tmp 17 | cd ../tmp/ 18 | else 19 | # check if is valid directory 20 | if [ ! -d $1 ]; then 21 | echo $1 "is not a valid directory" 22 | exit 0 23 | fi 24 | echo "navigating to" $1 "..." 
25 | cd $1 26 | fi 27 | 28 | echo "Downloading VOC2007 trainval ..." 29 | # Download data 30 | curl -LO http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar 31 | echo "Downloading VOC2007 test data ..." 32 | curl -LO http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar 33 | echo "Done downloading." 34 | 35 | # Extract data 36 | echo "Extracting trainval ..." 37 | tar -xf VOCtrainval_06-Nov-2007.tar 38 | echo "Extracting test ..." 39 | tar -xf VOCtest_06-Nov-2007.tar 40 | echo "removing tars ..." 41 | rm VOCtrainval_06-Nov-2007.tar 42 | rm VOCtest_06-Nov-2007.tar 43 | 44 | end=$(date +%s) 45 | runtime=$((end - start)) 46 | 47 | echo "Completed in" $runtime "seconds" 48 | 49 | start=$(date +%s) 50 | 51 | # handle optional download dir 52 | if [ -z "$1" ]; then 53 | # navigate to ~/tmp 54 | echo "navigating to ../tmp/ ..." 55 | mkdir -p ../tmp 56 | cd ../tmp/ 57 | else 58 | # check if is valid directory 59 | if [ ! -d $1 ]; then 60 | echo $1 "is not a valid directory" 61 | exit 0 62 | fi 63 | echo "navigating to" $1 "..." 64 | cd $1 65 | fi 66 | 67 | echo "Downloading VOC2012 trainval ..." 68 | # Download data 69 | curl -LO http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar 70 | echo "Done downloading." 71 | 72 | # Extract data 73 | echo "Extracting trainval ..." 74 | tar -xf VOCtrainval_11-May-2012.tar 75 | echo "removing tar ..." 76 | rm VOCtrainval_11-May-2012.tar 77 | 78 | end=$(date +%s) 79 | runtime=$((end - start)) 80 | 81 | echo "Completed in" $runtime "seconds" 82 | 83 | cd ../tmp 84 | echo "Splitting dataset..." 85 | python3 - "$@" <<END [... the embedded Python heredoc that generates the VOC image lists and labels was lost in extraction ...] 144 | cat 2007_train.txt 2007_val.txt 2012_train.txt 2012_val.txt >train.txt 145 | cat 2007_train.txt 2007_val.txt 2007_test.txt 2012_train.txt 2012_val.txt >train.all.txt 146 | 147 | python3 - "$@" <<END [... the second heredoc and the remainder of the script were lost in extraction ...] -------------------------------------------------------------------------------- /yolov5/detect.py: -------------------------------------------------------------------------------- [... lines 1-91 lost in extraction ...] 92 | if webcam: # batch_size >= 1 93 | p, s, im0 = path[i], '%g: ' % i, im0s[i].copy() 94 | else: 95 | p, s, im0 = path, '', im0s 96 | 97 | save_path = str(Path(out) / Path(p).name) 98 | txt_path = str(Path(out) / Path(p).stem) + ('_%g' % dataset.frame if dataset.mode == 'video' else '') 99 | s += '%gx%g ' % img.shape[2:] # print string 100 | gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh 101 | if det is not None and len(det): 102 | # Rescale boxes from img_size to im0 size 103 | det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round() 104 | 105 | # Print results 106 | for c in det[:, -1].unique(): 107 | n = (det[:, -1] == c).sum() # detections per class 108 | s += '%g %ss, ' % (n, names[int(c)]) # add to string 109 | 110 | # Write results 111 | for *xyxy, conf, cls in reversed(det): 112 | if save_txt: # Write to file 113 | xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh 114 | with open(txt_path + '.txt', 'a') as f: 115 | f.write(('%g ' * 5 + '\n') % (cls, *xywh)) # label format 116 | 117 | if save_img or view_img: # Add bbox to image 118 | label = '%s %.2f' % (names[int(cls)], conf) 119 | plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3) 120 | 121 | if save_csv: 122 | csv_f.write("{} {} {} {} {} ".format(str(int(cls.detach().cpu().numpy())+1), str(int(xyxy[0].detach().cpu().numpy())), str(int(xyxy[1].detach().cpu().numpy())), str(int(xyxy[2].detach().cpu().numpy())), str(int(xyxy[3].detach().cpu().numpy())))) 123 | csv_f.write("\n") 124 | 125 | # Print time (inference + NMS) 126 | print('%sDone.
(%.3fs)' % (s, t2 - t1)) 127 | 128 | # Stream results 129 | if view_img: 130 | cv2.imshow(p, im0) 131 | if cv2.waitKey(1) == ord('q'): # q to quit 132 | raise StopIteration 133 | 134 | # Save results (image with detections) 135 | if save_img: 136 | if dataset.mode == 'images': 137 | cv2.imwrite(save_path, im0) 138 | else: 139 | if vid_path != save_path: # new video 140 | vid_path = save_path 141 | if isinstance(vid_writer, cv2.VideoWriter): 142 | vid_writer.release() # release previous video writer 143 | 144 | fourcc = 'mp4v' # output video codec 145 | fps = vid_cap.get(cv2.CAP_PROP_FPS) 146 | w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) 147 | h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) 148 | vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*fourcc), fps, (w, h)) 149 | vid_writer.write(im0) 150 | 151 | if save_txt or save_img: 152 | print('Results saved to %s' % Path(out)) 153 | if platform.system() == 'Darwin' and not opt.update: # MacOS 154 | os.system('open ' + save_path) 155 | 156 | print('Done. (%.3fs)' % (time.time() - t0)) 157 | 158 | 159 | if __name__ == '__main__': 160 | parser = argparse.ArgumentParser() 161 | parser.add_argument('--weights', nargs='+', type=str, default='yolov5s.pt', help='model.pt path(s)') 162 | parser.add_argument('--source', type=str, default='inference/images', help='source') # file/folder, 0 for webcam 163 | parser.add_argument('--output', type=str, default='inference/output', help='output folder') # output folder 164 | parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)') 165 | parser.add_argument('--conf-thres', type=float, default=0.4, help='object confidence threshold') 166 | parser.add_argument('--iou-thres', type=float, default=0.5, help='IOU threshold for NMS') 167 | parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') 168 | parser.add_argument('--view-img', action='store_true', help='display results') 169 | parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') 170 | parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3') 171 | parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') 172 | parser.add_argument('--augment', action='store_true', help='augmented inference') 173 | parser.add_argument('--update', action='store_true', help='update all models') 174 | opt = parser.parse_args() 175 | print(opt) 176 | 177 | with torch.no_grad(): 178 | if opt.update: # update all models (to fix SourceChangeWarning) 179 | for opt.weights in ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt']: 180 | detect() 181 | strip_optimizer(opt.weights) 182 | else: 183 | detect() 184 | csv_f.close() 185 | -------------------------------------------------------------------------------- /yolov5/hubconf.py: -------------------------------------------------------------------------------- 1 | """File for accessing YOLOv5 via PyTorch Hub https://pytorch.org/hub/ 2 | 3 | Usage: 4 | import torch 5 | model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, channels=3, classes=80) 6 | """ 7 | 8 | dependencies = ['torch', 'yaml'] 9 | import os 10 | 11 | import torch 12 | 13 | from models.common import NMS 14 | from models.yolo import Model 15 | from utils.google_utils import attempt_download 16 | 17 | 18 | def create(name, pretrained, channels, classes): 19 | """Creates a specified YOLOv5 model 20 | 21 | Arguments: 22 | name (str): name of model, i.e. 
'yolov5s' 23 | pretrained (bool): load pretrained weights into the model 24 | channels (int): number of input channels 25 | classes (int): number of model classes 26 | 27 | Returns: 28 | pytorch model 29 | """ 30 | config = os.path.join(os.path.dirname(__file__), 'models', '%s.yaml' % name) # model.yaml path 31 | try: 32 | model = Model(config, channels, classes) 33 | if pretrained: 34 | ckpt = '%s.pt' % name # checkpoint filename 35 | attempt_download(ckpt) # download if not found locally 36 | state_dict = torch.load(ckpt, map_location=torch.device('cpu'))['model'].float().state_dict() # to FP32 37 | state_dict = {k: v for k, v in state_dict.items() if model.state_dict()[k].shape == v.shape} # filter 38 | model.load_state_dict(state_dict, strict=False) # load 39 | 40 | model.add_nms() # add NMS module 41 | model.eval() 42 | return model 43 | 44 | except Exception as e: 45 | help_url = 'https://github.com/ultralytics/yolov5/issues/36' 46 | s = 'Cache may be out of date, deleting cache and retrying may solve this. See %s for help.' % help_url 47 | raise Exception(s) from e 48 | 49 | 50 | def yolov5s(pretrained=False, channels=3, classes=80): 51 | """YOLOv5-small model from https://github.com/ultralytics/yolov5 52 | 53 | Arguments: 54 | pretrained (bool): load pretrained weights into the model, default=False 55 | channels (int): number of input channels, default=3 56 | classes (int): number of model classes, default=80 57 | 58 | Returns: 59 | pytorch model 60 | """ 61 | return create('yolov5s', pretrained, channels, classes) 62 | 63 | 64 | def yolov5m(pretrained=False, channels=3, classes=80): 65 | """YOLOv5-medium model from https://github.com/ultralytics/yolov5 66 | 67 | Arguments: 68 | pretrained (bool): load pretrained weights into the model, default=False 69 | channels (int): number of input channels, default=3 70 | classes (int): number of model classes, default=80 71 | 72 | Returns: 73 | pytorch model 74 | """ 75 | return create('yolov5m', pretrained, channels, classes) 76 | 77 | 78 | def yolov5l(pretrained=False, channels=3, classes=80): 79 | """YOLOv5-large model from https://github.com/ultralytics/yolov5 80 | 81 | Arguments: 82 | pretrained (bool): load pretrained weights into the model, default=False 83 | channels (int): number of input channels, default=3 84 | classes (int): number of model classes, default=80 85 | 86 | Returns: 87 | pytorch model 88 | """ 89 | return create('yolov5l', pretrained, channels, classes) 90 | 91 | 92 | def yolov5x(pretrained=False, channels=3, classes=80): 93 | """YOLOv5-xlarge model from https://github.com/ultralytics/yolov5 94 | 95 | Arguments: 96 | pretrained (bool): load pretrained weights into the model, default=False 97 | channels (int): number of input channels, default=3 98 | classes (int): number of model classes, default=80 99 | 100 | Returns: 101 | pytorch model 102 | """ 103 | return create('yolov5x', pretrained, channels, classes) 104 | --------------------------------------------------------------------------------
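hubconf.py above is the PyTorch Hub entry point, and its module docstring shows the intended call. A minimal end-to-end sketch of that usage (the first call downloads weights, so network access is assumed, and the zero tensor is only a stand-in for a real letterboxed, normalized image batch):

```python
import torch

# Load YOLOv5s through the create() path in hubconf.py above;
# create() appends the NMS module and puts the model in eval mode.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, channels=3, classes=80)

img = torch.zeros((1, 3, 640, 640))  # placeholder input batch
with torch.no_grad():
    # the appended NMS module returns one (n, 6) tensor (xyxy, conf, cls)
    # per image, or None when nothing is detected
    detections = model(img)

print(detections)
```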
/yolov5/inference/images/bus.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/USC-InfoLab/rddc2020/72cda97851fb6a48b5b9a55048ba38c890396d23/yolov5/inference/images/bus.jpg -------------------------------------------------------------------------------- /yolov5/inference/images/zidane.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/USC-InfoLab/rddc2020/72cda97851fb6a48b5b9a55048ba38c890396d23/yolov5/inference/images/zidane.jpg -------------------------------------------------------------------------------- /yolov5/models/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/USC-InfoLab/rddc2020/72cda97851fb6a48b5b9a55048ba38c890396d23/yolov5/models/__init__.py -------------------------------------------------------------------------------- /yolov5/models/common.py: -------------------------------------------------------------------------------- 1 | # This file contains modules common to various models 2 | import math 3 | 4 | import torch 5 | import torch.nn as nn 6 | from utils.general import non_max_suppression 7 | 8 | 9 | def autopad(k, p=None): # kernel, padding 10 | # Pad to 'same' 11 | if p is None: 12 | p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad 13 | return p 14 | 15 | 16 | def DWConv(c1, c2, k=1, s=1, act=True): 17 | # Depthwise convolution 18 | return Conv(c1, c2, k, s, g=math.gcd(c1, c2), act=act) 19 | 20 | 21 | class Conv(nn.Module): 22 | # Standard convolution 23 | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups 24 | super(Conv, self).__init__() 25 | self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False) 26 | self.bn = nn.BatchNorm2d(c2) 27 | self.act = nn.Hardswish() if act else nn.Identity() 28 | 29 | def forward(self, x): 30 | return self.act(self.bn(self.conv(x))) 31 | 32 | def fuseforward(self, x): 33 | return self.act(self.conv(x)) 34 | 35 | 36 | class Bottleneck(nn.Module): 37 | # Standard bottleneck 38 | def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion 39 | super(Bottleneck, self).__init__() 40 | c_ = int(c2 * e) # hidden channels 41 | self.cv1 = Conv(c1, c_, 1, 1) 42 | self.cv2 = Conv(c_, c2, 3, 1, g=g) 43 | self.add = shortcut and c1 == c2 44 | 45 | def forward(self, x): 46 | return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) 47 | 48 | 49 | class BottleneckCSP(nn.Module): 50 | # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks 51 | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion 52 | super(BottleneckCSP, self).__init__() 53 | c_ = int(c2 * e) # hidden channels 54 | self.cv1 = Conv(c1, c_, 1, 1) 55 | self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False) 56 | self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False) 57 | self.cv4 = Conv(2 * c_, c2, 1, 1) 58 | self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3) 59 | self.act = nn.LeakyReLU(0.1, inplace=True) 60 | self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) 61 | 62 | def forward(self, x): 63 | y1 = self.cv3(self.m(self.cv1(x))) 64 | y2 = self.cv2(x) 65 | return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1)))) 66 | 67 | 68 | class SPP(nn.Module): 69 | # Spatial pyramid pooling layer used in YOLOv3-SPP 70 | def __init__(self, c1, c2, k=(5, 9, 13)): 71 | super(SPP, self).__init__() 72 | c_ = c1 // 2 # hidden channels 73 | self.cv1 = Conv(c1, c_, 1, 1) 74 | self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1) 75 | self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) 76 | 77 | def forward(self, x): 78 | x = self.cv1(x) 79 | return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1)) 80 | 81 | 82 | class Focus(nn.Module): 83
| # Focus wh information into c-space 84 | def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups 85 | super(Focus, self).__init__() 86 | self.conv = Conv(c1 * 4, c2, k, s, p, g, act) 87 | 88 | def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) 89 | return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)) 90 | 91 | 92 | class Concat(nn.Module): 93 | # Concatenate a list of tensors along dimension 94 | def __init__(self, dimension=1): 95 | super(Concat, self).__init__() 96 | self.d = dimension 97 | 98 | def forward(self, x): 99 | return torch.cat(x, self.d) 100 | 101 | 102 | class NMS(nn.Module): 103 | # Non-Maximum Suppression (NMS) module 104 | conf = 0.3 # confidence threshold 105 | iou = 0.6 # IoU threshold 106 | classes = None # (optional list) filter by class 107 | 108 | def __init__(self, dimension=1): 109 | super(NMS, self).__init__() 110 | 111 | def forward(self, x): 112 | return non_max_suppression(x[0], conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) 113 | 114 | 115 | class Flatten(nn.Module): 116 | # Use after nn.AdaptiveAvgPool2d(1) to remove last 2 dimensions 117 | @staticmethod 118 | def forward(x): 119 | return x.view(x.size(0), -1) 120 | 121 | 122 | class Classify(nn.Module): 123 | # Classification head, i.e. x(b,c1,20,20) to x(b,c2) 124 | def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups 125 | super(Classify, self).__init__() 126 | self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1) 127 | self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False) # to x(b,c2,1,1) 128 | self.flat = Flatten() 129 | 130 | def forward(self, x): 131 | z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list 132 | return self.flat(self.conv(z)) # flatten to x(b,c2) 133 | -------------------------------------------------------------------------------- /yolov5/models/experimental.py: -------------------------------------------------------------------------------- 1 | # This file contains experimental modules 2 | 3 | import numpy as np 4 | import torch 5 | import torch.nn as nn 6 | 7 | from models.common import Conv, DWConv 8 | from utils.google_utils import attempt_download 9 | 10 | 11 | class CrossConv(nn.Module): 12 | # Cross Convolution Downsample 13 | def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): 14 | # ch_in, ch_out, kernel, stride, groups, expansion, shortcut 15 | super(CrossConv, self).__init__() 16 | c_ = int(c2 * e) # hidden channels 17 | self.cv1 = Conv(c1, c_, (1, k), (1, s)) 18 | self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) 19 | self.add = shortcut and c1 == c2 20 | 21 | def forward(self, x): 22 | return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) 23 | 24 | 25 | class C3(nn.Module): 26 | # Cross Convolution CSP 27 | def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion 28 | super(C3, self).__init__() 29 | c_ = int(c2 * e) # hidden channels 30 | self.cv1 = Conv(c1, c_, 1, 1) 31 | self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False) 32 | self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False) 33 | self.cv4 = Conv(2 * c_, c2, 1, 1) 34 | self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3) 35 | self.act = nn.LeakyReLU(0.1, inplace=True) 36 | self.m = nn.Sequential(*[CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)]) 37 | 38 | def forward(self, x): 39 | y1 = 
self.cv3(self.m(self.cv1(x))) 40 | y2 = self.cv2(x) 41 | return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1)))) 42 | 43 | 44 | class Sum(nn.Module): 45 | # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070 46 | def __init__(self, n, weight=False): # n: number of inputs 47 | super(Sum, self).__init__() 48 | self.weight = weight # apply weights boolean 49 | self.iter = range(n - 1) # iter object 50 | if weight: 51 | self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights 52 | 53 | def forward(self, x): 54 | y = x[0] # no weight 55 | if self.weight: 56 | w = torch.sigmoid(self.w) * 2 57 | for i in self.iter: 58 | y = y + x[i + 1] * w[i] 59 | else: 60 | for i in self.iter: 61 | y = y + x[i + 1] 62 | return y 63 | 64 | 65 | class GhostConv(nn.Module): 66 | # Ghost Convolution https://github.com/huawei-noah/ghostnet 67 | def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups 68 | super(GhostConv, self).__init__() 69 | c_ = c2 // 2 # hidden channels 70 | self.cv1 = Conv(c1, c_, k, s, g, act) 71 | self.cv2 = Conv(c_, c_, 5, 1, c_, act) 72 | 73 | def forward(self, x): 74 | y = self.cv1(x) 75 | return torch.cat([y, self.cv2(y)], 1) 76 | 77 | 78 | class GhostBottleneck(nn.Module): 79 | # Ghost Bottleneck https://github.com/huawei-noah/ghostnet 80 | def __init__(self, c1, c2, k, s): 81 | super(GhostBottleneck, self).__init__() 82 | c_ = c2 // 2 83 | self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw 84 | DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw 85 | GhostConv(c_, c2, 1, 1, act=False)) # pw-linear 86 | self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), 87 | Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity() 88 | 89 | def forward(self, x): 90 | return self.conv(x) + self.shortcut(x) 91 | 92 | 93 | class MixConv2d(nn.Module): 94 | # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595 95 | def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): 96 | super(MixConv2d, self).__init__() 97 | groups = len(k) 98 | if equal_ch: # equal c_ per group 99 | i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices 100 | c_ = [(i == g).sum() for g in range(groups)] # intermediate channels 101 | else: # equal weight.numel() per group 102 | b = [c2] + [0] * groups 103 | a = np.eye(groups + 1, groups, k=-1) 104 | a -= np.roll(a, 1, axis=1) 105 | a *= np.array(k) ** 2 106 | a[0] = 1 107 | c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b 108 | 109 | self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)]) 110 | self.bn = nn.BatchNorm2d(c2) 111 | self.act = nn.LeakyReLU(0.1, inplace=True) 112 | 113 | def forward(self, x): 114 | return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1))) 115 | 116 | 117 | class Ensemble(nn.ModuleList): 118 | # Ensemble of models 119 | def __init__(self): 120 | super(Ensemble, self).__init__() 121 | 122 | def forward(self, x, augment=False): 123 | y = [] 124 | for module in self: 125 | y.append(module(x, augment)[0]) 126 | # y = torch.stack(y).max(0)[0] # max ensemble 127 | # y = torch.cat(y, 1) # nms ensemble 128 | y = torch.stack(y).mean(0) # mean ensemble 129 | return y, None # inference, train output 130 | 131 | 132 | def attempt_load(weights, map_location=None): 133 | # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a 134 | model = Ensemble() 135 | for w in weights if isinstance(weights, list) else 
[weights]: 136 | attempt_download(w) 137 | model.append(torch.load(w, map_location=map_location)['model'].float().fuse().eval()) # load FP32 model 138 | 139 | if len(model) == 1: 140 | return model[-1] # return model 141 | else: 142 | print('Ensemble created with %s\n' % weights) 143 | for k in ['names', 'stride']: 144 | setattr(model, k, getattr(model[-1], k)) 145 | return model # return ensemble 146 | -------------------------------------------------------------------------------- /yolov5/models/export.py: -------------------------------------------------------------------------------- 1 | """Exports a YOLOv5 *.pt model to ONNX and TorchScript formats 2 | 3 | Usage: 4 | $ export PYTHONPATH="$PWD" && python models/export.py --weights ./weights/yolov5s.pt --img 640 --batch 1 5 | """ 6 | 7 | import argparse 8 | 9 | import torch 10 | import torch.nn as nn 11 | 12 | import models 13 | from models.experimental import attempt_load 14 | from utils.activations import Hardswish 15 | from utils.general import set_logging 16 | 17 | if __name__ == '__main__': 18 | parser = argparse.ArgumentParser() 19 | parser.add_argument('--weights', type=str, default='./yolov5s.pt', help='weights path') # from yolov5/models/ 20 | parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image size') # height, width 21 | parser.add_argument('--batch-size', type=int, default=1, help='batch size') 22 | opt = parser.parse_args() 23 | opt.img_size *= 2 if len(opt.img_size) == 1 else 1 # expand 24 | print(opt) 25 | set_logging() 26 | 27 | # Input 28 | img = torch.zeros((opt.batch_size, 3, *opt.img_size)) # image size(1,3,320,192) iDetection 29 | 30 | # Load PyTorch model 31 | model = attempt_load(opt.weights, map_location=torch.device('cpu')) # load FP32 model 32 | 33 | # Update model 34 | for k, m in model.named_modules(): 35 | m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatability 36 | if isinstance(m, models.common.Conv) and isinstance(m.act, nn.Hardswish): 37 | m.act = Hardswish() # assign activation 38 | # if isinstance(m, models.yolo.Detect): 39 | # m.forward = m.forward_export # assign forward (optional) 40 | model.model[-1].export = True # set Detect() layer export=True 41 | y = model(img) # dry run 42 | 43 | # TorchScript export 44 | try: 45 | print('\nStarting TorchScript export with torch %s...' % torch.__version__) 46 | f = opt.weights.replace('.pt', '.torchscript.pt') # filename 47 | ts = torch.jit.trace(model, img) 48 | ts.save(f) 49 | print('TorchScript export success, saved as %s' % f) 50 | except Exception as e: 51 | print('TorchScript export failure: %s' % e) 52 | 53 | # ONNX export 54 | try: 55 | import onnx 56 | 57 | print('\nStarting ONNX export with onnx %s...' % onnx.__version__) 58 | f = opt.weights.replace('.pt', '.onnx') # filename 59 | torch.onnx.export(model, img, f, verbose=False, opset_version=12, input_names=['images'], 60 | output_names=['classes', 'boxes'] if y is None else ['output']) 61 | 62 | # Checks 63 | onnx_model = onnx.load(f) # load onnx model 64 | onnx.checker.check_model(onnx_model) # check onnx model 65 | # print(onnx.helper.printable_graph(onnx_model.graph)) # print a human readable model 66 | print('ONNX export success, saved as %s' % f) 67 | except Exception as e: 68 | print('ONNX export failure: %s' % e) 69 | 70 | # CoreML export 71 | try: 72 | import coremltools as ct 73 | 74 | print('\nStarting CoreML export with coremltools %s...' 
% ct.__version__) 75 | # convert model from torchscript and apply pixel scaling as per detect.py 76 | model = ct.convert(ts, inputs=[ct.ImageType(name='images', shape=img.shape, scale=1 / 255.0, bias=[0, 0, 0])]) 77 | f = opt.weights.replace('.pt', '.mlmodel') # filename 78 | model.save(f) 79 | print('CoreML export success, saved as %s' % f) 80 | except Exception as e: 81 | print('CoreML export failure: %s' % e) 82 | 83 | # Finish 84 | print('\nExport complete. Visualize with https://github.com/lutzroeder/netron.') 85 | -------------------------------------------------------------------------------- /yolov5/models/hub/yolov3-spp.yaml: -------------------------------------------------------------------------------- 1 | # parameters 2 | nc: 80 # number of classes 3 | depth_multiple: 1.0 # model depth multiple 4 | width_multiple: 1.0 # layer channel multiple 5 | 6 | # anchors 7 | anchors: 8 | - [10,13, 16,30, 33,23] # P3/8 9 | - [30,61, 62,45, 59,119] # P4/16 10 | - [116,90, 156,198, 373,326] # P5/32 11 | 12 | # darknet53 backbone 13 | backbone: 14 | # [from, number, module, args] 15 | [[-1, 1, Conv, [32, 3, 1]], # 0 16 | [-1, 1, Conv, [64, 3, 2]], # 1-P1/2 17 | [-1, 1, Bottleneck, [64]], 18 | [-1, 1, Conv, [128, 3, 2]], # 3-P2/4 19 | [-1, 2, Bottleneck, [128]], 20 | [-1, 1, Conv, [256, 3, 2]], # 5-P3/8 21 | [-1, 8, Bottleneck, [256]], 22 | [-1, 1, Conv, [512, 3, 2]], # 7-P4/16 23 | [-1, 8, Bottleneck, [512]], 24 | [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32 25 | [-1, 4, Bottleneck, [1024]], # 10 26 | ] 27 | 28 | # YOLOv3-SPP head 29 | head: 30 | [[-1, 1, Bottleneck, [1024, False]], 31 | [-1, 1, SPP, [512, [5, 9, 13]]], 32 | [-1, 1, Conv, [1024, 3, 1]], 33 | [-1, 1, Conv, [512, 1, 1]], 34 | [-1, 1, Conv, [1024, 3, 1]], # 15 (P5/32-large) 35 | 36 | [-2, 1, Conv, [256, 1, 1]], 37 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 38 | [[-1, 8], 1, Concat, [1]], # cat backbone P4 39 | [-1, 1, Bottleneck, [512, False]], 40 | [-1, 1, Bottleneck, [512, False]], 41 | [-1, 1, Conv, [256, 1, 1]], 42 | [-1, 1, Conv, [512, 3, 1]], # 22 (P4/16-medium) 43 | 44 | [-2, 1, Conv, [128, 1, 1]], 45 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 46 | [[-1, 6], 1, Concat, [1]], # cat backbone P3 47 | [-1, 1, Bottleneck, [256, False]], 48 | [-1, 2, Bottleneck, [256, False]], # 27 (P3/8-small) 49 | 50 | [[27, 22, 15], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5) 51 | ] 52 | -------------------------------------------------------------------------------- /yolov5/models/hub/yolov5-fpn.yaml: -------------------------------------------------------------------------------- 1 | # parameters 2 | nc: 80 # number of classes 3 | depth_multiple: 1.0 # model depth multiple 4 | width_multiple: 1.0 # layer channel multiple 5 | 6 | # anchors 7 | anchors: 8 | - [10,13, 16,30, 33,23] # P3/8 9 | - [30,61, 62,45, 59,119] # P4/16 10 | - [116,90, 156,198, 373,326] # P5/32 11 | 12 | # YOLOv5 backbone 13 | backbone: 14 | # [from, number, module, args] 15 | [[-1, 1, Focus, [64, 3]], # 0-P1/2 16 | [-1, 1, Conv, [128, 3, 2]], # 1-P2/4 17 | [-1, 3, Bottleneck, [128]], 18 | [-1, 1, Conv, [256, 3, 2]], # 3-P3/8 19 | [-1, 9, BottleneckCSP, [256]], 20 | [-1, 1, Conv, [512, 3, 2]], # 5-P4/16 21 | [-1, 9, BottleneckCSP, [512]], 22 | [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32 23 | [-1, 1, SPP, [1024, [5, 9, 13]]], 24 | [-1, 6, BottleneckCSP, [1024]], # 9 25 | ] 26 | 27 | # YOLOv5 FPN head 28 | head: 29 | [[-1, 3, BottleneckCSP, [1024, False]], # 10 (P5/32-large) 30 | 31 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 32 | [[-1, 6], 1, Concat, [1]], # cat 
backbone P4 33 | [-1, 1, Conv, [512, 1, 1]], 34 | [-1, 3, BottleneckCSP, [512, False]], # 14 (P4/16-medium) 35 | 36 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 37 | [[-1, 4], 1, Concat, [1]], # cat backbone P3 38 | [-1, 1, Conv, [256, 1, 1]], 39 | [-1, 3, BottleneckCSP, [256, False]], # 18 (P3/8-small) 40 | 41 | [[18, 14, 10], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5) 42 | ] 43 | -------------------------------------------------------------------------------- /yolov5/models/hub/yolov5-panet.yaml: -------------------------------------------------------------------------------- 1 | # parameters 2 | nc: 80 # number of classes 3 | depth_multiple: 1.0 # model depth multiple 4 | width_multiple: 1.0 # layer channel multiple 5 | 6 | # anchors 7 | anchors: 8 | - [116,90, 156,198, 373,326] # P5/32 9 | - [30,61, 62,45, 59,119] # P4/16 10 | - [10,13, 16,30, 33,23] # P3/8 11 | 12 | # YOLOv5 backbone 13 | backbone: 14 | # [from, number, module, args] 15 | [[-1, 1, Focus, [64, 3]], # 0-P1/2 16 | [-1, 1, Conv, [128, 3, 2]], # 1-P2/4 17 | [-1, 3, BottleneckCSP, [128]], 18 | [-1, 1, Conv, [256, 3, 2]], # 3-P3/8 19 | [-1, 9, BottleneckCSP, [256]], 20 | [-1, 1, Conv, [512, 3, 2]], # 5-P4/16 21 | [-1, 9, BottleneckCSP, [512]], 22 | [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32 23 | [-1, 1, SPP, [1024, [5, 9, 13]]], 24 | [-1, 3, BottleneckCSP, [1024, False]], # 9 25 | ] 26 | 27 | # YOLOv5 PANet head 28 | head: 29 | [[-1, 1, Conv, [512, 1, 1]], 30 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 31 | [[-1, 6], 1, Concat, [1]], # cat backbone P4 32 | [-1, 3, BottleneckCSP, [512, False]], # 13 33 | 34 | [-1, 1, Conv, [256, 1, 1]], 35 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 36 | [[-1, 4], 1, Concat, [1]], # cat backbone P3 37 | [-1, 3, BottleneckCSP, [256, False]], # 17 (P3/8-small) 38 | 39 | [-1, 1, Conv, [256, 3, 2]], 40 | [[-1, 14], 1, Concat, [1]], # cat head P4 41 | [-1, 3, BottleneckCSP, [512, False]], # 20 (P4/16-medium) 42 | 43 | [-1, 1, Conv, [512, 3, 2]], 44 | [[-1, 10], 1, Concat, [1]], # cat head P5 45 | [-1, 3, BottleneckCSP, [1024, False]], # 23 (P5/32-large) 46 | 47 | [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P5, P4, P3) 48 | ] 49 | -------------------------------------------------------------------------------- /yolov5/models/yolo.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import logging 3 | import math 4 | from copy import deepcopy 5 | from pathlib import Path 6 | 7 | import torch 8 | import torch.nn as nn 9 | 10 | from models.common import Conv, Bottleneck, SPP, DWConv, Focus, BottleneckCSP, Concat, NMS 11 | from models.experimental import MixConv2d, CrossConv, C3 12 | from utils.general import check_anchor_order, make_divisible, check_file, set_logging 13 | from utils.torch_utils import ( 14 | time_synchronized, fuse_conv_and_bn, model_info, scale_img, initialize_weights, select_device) 15 | 16 | logger = logging.getLogger(__name__) 17 | 18 | 19 | class Detect(nn.Module): 20 | stride = None # strides computed during build 21 | export = False # onnx export 22 | 23 | def __init__(self, nc=80, anchors=(), ch=()): # detection layer 24 | super(Detect, self).__init__() 25 | self.nc = nc # number of classes 26 | self.no = nc + 5 # number of outputs per anchor 27 | self.nl = len(anchors) # number of detection layers 28 | self.na = len(anchors[0]) // 2 # number of anchors 29 | self.grid = [torch.zeros(1)] * self.nl # init grid 30 | a = torch.tensor(anchors).float().view(self.nl, -1, 2) 31 | self.register_buffer('anchors', a) # 
shape(nl,na,2) 32 | self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) 33 | self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv 34 | 35 | def forward(self, x): 36 | # x = x.copy() # for profiling 37 | z = [] # inference output 38 | self.training |= self.export 39 | for i in range(self.nl): 40 | x[i] = self.m[i](x[i]) # conv 41 | bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) 42 | x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() 43 | 44 | if not self.training: # inference 45 | if self.grid[i].shape[2:4] != x[i].shape[2:4]: 46 | self.grid[i] = self._make_grid(nx, ny).to(x[i].device) 47 | 48 | y = x[i].sigmoid() 49 | y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i] # xy 50 | y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh 51 | z.append(y.view(bs, -1, self.no)) 52 | 53 | return x if self.training else (torch.cat(z, 1), x) 54 | 55 | @staticmethod 56 | def _make_grid(nx=20, ny=20): 57 | yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) 58 | return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() 59 | 60 | 61 | class Model(nn.Module): 62 | def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None): # model, input channels, number of classes 63 | super(Model, self).__init__() 64 | if isinstance(cfg, dict): 65 | self.yaml = cfg # model dict 66 | else: # is *.yaml 67 | import yaml # for torch hub 68 | self.yaml_file = Path(cfg).name 69 | with open(cfg) as f: 70 | self.yaml = yaml.load(f, Loader=yaml.FullLoader) # model dict 71 | 72 | # Define model 73 | if nc and nc != self.yaml['nc']: 74 | print('Overriding model.yaml nc=%g with nc=%g' % (self.yaml['nc'], nc)) 75 | self.yaml['nc'] = nc # override yaml value 76 | self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist, ch_out 77 | # print([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))]) 78 | 79 | # Build strides, anchors 80 | m = self.model[-1] # Detect() 81 | if isinstance(m, Detect): 82 | s = 128 # 2x min stride 83 | m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward 84 | m.anchors /= m.stride.view(-1, 1, 1) 85 | check_anchor_order(m) 86 | self.stride = m.stride 87 | self._initialize_biases() # only run once 88 | # print('Strides: %s' % m.stride.tolist()) 89 | 90 | # Init weights, biases 91 | initialize_weights(self) 92 | self.info() 93 | print('') 94 | 95 | def forward(self, x, augment=False, profile=False): 96 | if augment: 97 | img_size = x.shape[-2:] # height, width 98 | s = [1, 0.83, 0.67] # scales 99 | f = [None, 3, None] # flips (2-ud, 3-lr) 100 | y = [] # outputs 101 | for si, fi in zip(s, f): 102 | xi = scale_img(x.flip(fi) if fi else x, si) 103 | yi = self.forward_once(xi)[0] # forward 104 | # cv2.imwrite('img%g.jpg' % s, 255 * xi[0].numpy().transpose((1, 2, 0))[:, :, ::-1]) # save 105 | yi[..., :4] /= si # de-scale 106 | if fi == 2: 107 | yi[..., 1] = img_size[0] - yi[..., 1] # de-flip ud 108 | elif fi == 3: 109 | yi[..., 0] = img_size[1] - yi[..., 0] # de-flip lr 110 | y.append(yi) 111 | return torch.cat(y, 1), None # augmented inference, train 112 | else: 113 | return self.forward_once(x, profile) # single-scale inference, train 114 | 115 | def forward_once(self, x, profile=False): 116 | y, dt = [], [] # outputs 117 | for m in self.model: 118 | if m.f != -1: # if not from previous layer 119 | x = y[m.f] if isinstance(m.f, int) else [x if j == -1 
else y[j] for j in m.f] # from earlier layers 120 | 121 | if profile: 122 | try: 123 | import thop 124 | o = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # FLOPS 125 | except: 126 | o = 0 127 | t = time_synchronized() 128 | for _ in range(10): 129 | _ = m(x) 130 | dt.append((time_synchronized() - t) * 100) 131 | print('%10.1f%10.0f%10.1fms %-40s' % (o, m.np, dt[-1], m.type)) 132 | 133 | x = m(x) # run 134 | y.append(x if m.i in self.save else None) # save output 135 | 136 | if profile: 137 | print('%.1fms total' % sum(dt)) 138 | return x 139 | 140 | def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency 141 | # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. 142 | m = self.model[-1] # Detect() module 143 | for mi, s in zip(m.m, m.stride): # from 144 | b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) 145 | b[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) 146 | b[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls 147 | mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) 148 | 149 | def _print_biases(self): 150 | m = self.model[-1] # Detect() module 151 | for mi in m.m: # from 152 | b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) 153 | print(('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean())) 154 | 155 | # def _print_weights(self): 156 | # for m in self.model.modules(): 157 | # if type(m) is Bottleneck: 158 | # print('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights 159 | 160 | def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers 161 | print('Fusing layers... ') 162 | for m in self.model.modules(): 163 | if type(m) is Conv and hasattr(m, 'bn'): # check the instance, not the class, or fusing never runs 164 | m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility 165 | m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv 166 | delattr(m, 'bn') # remove batchnorm 167 | m.forward = m.fuseforward # update forward 168 | self.info() 169 | return self 170 | 171 | def add_nms(self): # append a NMS module to the end of the model 172 | if type(self.model[-1]) is not NMS: # if missing NMS 173 | print('Adding NMS module...
') 174 | m = NMS() # module 175 | m.f = -1 # from 176 | m.i = self.model[-1].i + 1 # index 177 | self.model.add_module(name='%s' % m.i, module=m) # add 178 | return self 179 | 180 | def info(self, verbose=False): # print model information 181 | model_info(self, verbose) 182 | 183 | 184 | def parse_model(d, ch): # model_dict, input_channels(3) 185 | logger.info('\n%3s%18s%3s%10s %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments')) 186 | anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] 187 | na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors 188 | no = na * (nc + 5) # number of outputs = anchors * (classes + 5) 189 | 190 | layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out 191 | for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args 192 | m = eval(m) if isinstance(m, str) else m # eval strings 193 | for j, a in enumerate(args): 194 | try: 195 | args[j] = eval(a) if isinstance(a, str) else a # eval strings 196 | except: 197 | pass 198 | 199 | n = max(round(n * gd), 1) if n > 1 else n # depth gain 200 | if m in [Conv, Bottleneck, SPP, DWConv, MixConv2d, Focus, CrossConv, BottleneckCSP, C3]: 201 | c1, c2 = ch[f], args[0] 202 | 203 | # Normal 204 | # if i > 0 and args[0] != no: # channel expansion factor 205 | # ex = 1.75 # exponential (default 2.0) 206 | # e = math.log(c2 / ch[1]) / math.log(2) 207 | # c2 = int(ch[1] * ex ** e) 208 | # if m != Focus: 209 | 210 | c2 = make_divisible(c2 * gw, 8) if c2 != no else c2 211 | 212 | # Experimental 213 | # if i > 0 and args[0] != no: # channel expansion factor 214 | # ex = 1 + gw # exponential (default 2.0) 215 | # ch1 = 32 # ch[1] 216 | # e = math.log(c2 / ch1) / math.log(2) # level 1-n 217 | # c2 = int(ch1 * ex ** e) 218 | # if m != Focus: 219 | # c2 = make_divisible(c2, 8) if c2 != no else c2 220 | 221 | args = [c1, c2, *args[1:]] 222 | if m in [BottleneckCSP, C3]: 223 | args.insert(2, n) 224 | n = 1 225 | elif m is nn.BatchNorm2d: 226 | args = [ch[f]] 227 | elif m is Concat: 228 | c2 = sum([ch[-1 if x == -1 else x + 1] for x in f]) 229 | elif m is Detect: 230 | args.append([ch[x + 1] for x in f]) 231 | if isinstance(args[1], int): # number of anchors 232 | args[1] = [list(range(args[1] * 2))] * len(f) 233 | else: 234 | c2 = ch[f] 235 | 236 | m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args) # module 237 | t = str(m)[8:-2].replace('__main__.', '') # module type 238 | np = sum([x.numel() for x in m_.parameters()]) # number params 239 | m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params 240 | logger.info('%3s%18s%3s%10.0f %-40s%-30s' % (i, f, n, np, t, args)) # print 241 | save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist 242 | layers.append(m_) 243 | ch.append(c2) 244 | return nn.Sequential(*layers), sorted(save) 245 | 246 | 247 | if __name__ == '__main__': 248 | parser = argparse.ArgumentParser() 249 | parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') 250 | parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') 251 | opt = parser.parse_args() 252 | opt.cfg = check_file(opt.cfg) # check file 253 | set_logging() 254 | device = select_device(opt.device) 255 | 256 | # Create model 257 | model = Model(opt.cfg).to(device) 258 | model.train() 259 | 260 | # Profile 261 | # img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device) 262 | # y = model(img, profile=True) 263 | 264 | # ONNX export 265 | # model.model[-1].export = True 266 | # torch.onnx.export(model, img, opt.cfg.replace('.yaml', '.onnx'), verbose=True, opset_version=11) 267 | 268 | # Tensorboard 269 | # from torch.utils.tensorboard import SummaryWriter 270 | # tb_writer = SummaryWriter() 271 | # print("Run 'tensorboard --logdir=models/runs' to view tensorboard at http://localhost:6006/") 272 | # tb_writer.add_graph(model.model, img) # add model to tensorboard 273 | # tb_writer.add_image('test', img[0], dataformats='CWH') # add model to tensorboard 274 | -------------------------------------------------------------------------------- /yolov5/models/yolov5l.yaml: -------------------------------------------------------------------------------- 1 | # parameters 2 | nc: 80 # number of classes 3 | depth_multiple: 1.0 # model depth multiple 4 | width_multiple: 1.0 # layer channel multiple 5 | 6 | # anchors 7 | anchors: 8 | - [10,13, 16,30, 33,23] # P3/8 9 | - [30,61, 62,45, 59,119] # P4/16 10 | - [116,90, 156,198, 373,326] # P5/32 11 | 12 | # YOLOv5 backbone 13 | backbone: 14 | # [from, number, module, args] 15 | [[-1, 1, Focus, [64, 3]], # 0-P1/2 16 | [-1, 1, Conv, [128, 3, 2]], # 1-P2/4 17 | [-1, 3, BottleneckCSP, [128]], 18 | [-1, 1, Conv, [256, 3, 2]], # 3-P3/8 19 | [-1, 9, BottleneckCSP, [256]], 20 | [-1, 1, Conv, [512, 3, 2]], # 5-P4/16 21 | [-1, 9, BottleneckCSP, [512]], 22 | [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32 23 | [-1, 1, SPP, [1024, [5, 9, 13]]], 24 | [-1, 3, BottleneckCSP, [1024, False]], # 9 25 | ] 26 | 27 | # YOLOv5 head 28 | head: 29 | [[-1, 1, Conv, [512, 1, 1]], 30 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 31 | [[-1, 6], 1, Concat, [1]], # cat backbone P4 32 | [-1, 3, BottleneckCSP, [512, False]], # 13 33 | 34 | [-1, 1, Conv, [256, 1, 1]], 35 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 36 | [[-1, 4], 1, Concat, [1]], # cat backbone P3 37 | [-1, 3, BottleneckCSP, [256, False]], # 17 (P3/8-small) 38 | 39 | [-1, 1, Conv, [256, 3, 2]], 40 | [[-1, 14], 1, Concat, [1]], # cat head P4 41 | [-1, 3, BottleneckCSP, [512, False]], # 20 (P4/16-medium) 42 | 43 | [-1, 1, Conv, [512, 3, 2]], 44 | [[-1, 10], 1, Concat, [1]], # cat head P5 45 | [-1, 3, BottleneckCSP, [1024, False]], # 23 (P5/32-large) 46 | 47 | [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5) 48 | ] 49 | -------------------------------------------------------------------------------- /yolov5/models/yolov5m.yaml: -------------------------------------------------------------------------------- 1 | # parameters 2 | nc: 80 # number of classes 3 | depth_multiple: 0.67 # model depth multiple 4 | width_multiple: 0.75 # layer channel multiple 5 | 6 | # anchors 7 | anchors: 8 | - [10,13, 16,30, 33,23] # P3/8 9 | - [30,61, 62,45, 59,119] # P4/16 10 | - [116,90, 156,198, 373,326] # P5/32 11 | 12 | # YOLOv5 backbone 13 | backbone: 14 | # [from, number, module, args] 15 | [[-1, 1, Focus, [64, 3]], # 0-P1/2 16 | [-1, 1, Conv, [128, 3, 2]], # 1-P2/4 17 | [-1, 3, BottleneckCSP, [128]], 18 | [-1, 1, Conv, [256, 3, 2]], # 3-P3/8 19 | [-1, 9, BottleneckCSP, [256]], 20 | [-1, 1, Conv, [512, 3, 2]], # 5-P4/16 21 | [-1, 9, BottleneckCSP, 
[512]], 22 | [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32 23 | [-1, 1, SPP, [1024, [5, 9, 13]]], 24 | [-1, 3, BottleneckCSP, [1024, False]], # 9 25 | ] 26 | 27 | # YOLOv5 head 28 | head: 29 | [[-1, 1, Conv, [512, 1, 1]], 30 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 31 | [[-1, 6], 1, Concat, [1]], # cat backbone P4 32 | [-1, 3, BottleneckCSP, [512, False]], # 13 33 | 34 | [-1, 1, Conv, [256, 1, 1]], 35 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 36 | [[-1, 4], 1, Concat, [1]], # cat backbone P3 37 | [-1, 3, BottleneckCSP, [256, False]], # 17 (P3/8-small) 38 | 39 | [-1, 1, Conv, [256, 3, 2]], 40 | [[-1, 14], 1, Concat, [1]], # cat head P4 41 | [-1, 3, BottleneckCSP, [512, False]], # 20 (P4/16-medium) 42 | 43 | [-1, 1, Conv, [512, 3, 2]], 44 | [[-1, 10], 1, Concat, [1]], # cat head P5 45 | [-1, 3, BottleneckCSP, [1024, False]], # 23 (P5/32-large) 46 | 47 | [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5) 48 | ] 49 | -------------------------------------------------------------------------------- /yolov5/models/yolov5s.yaml: -------------------------------------------------------------------------------- 1 | # parameters 2 | nc: 80 # number of classes 3 | depth_multiple: 0.33 # model depth multiple 4 | width_multiple: 0.50 # layer channel multiple 5 | 6 | # anchors 7 | anchors: 8 | - [10,13, 16,30, 33,23] # P3/8 9 | - [30,61, 62,45, 59,119] # P4/16 10 | - [116,90, 156,198, 373,326] # P5/32 11 | 12 | # YOLOv5 backbone 13 | backbone: 14 | # [from, number, module, args] 15 | [[-1, 1, Focus, [64, 3]], # 0-P1/2 16 | [-1, 1, Conv, [128, 3, 2]], # 1-P2/4 17 | [-1, 3, BottleneckCSP, [128]], 18 | [-1, 1, Conv, [256, 3, 2]], # 3-P3/8 19 | [-1, 9, BottleneckCSP, [256]], 20 | [-1, 1, Conv, [512, 3, 2]], # 5-P4/16 21 | [-1, 9, BottleneckCSP, [512]], 22 | [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32 23 | [-1, 1, SPP, [1024, [5, 9, 13]]], 24 | [-1, 3, BottleneckCSP, [1024, False]], # 9 25 | ] 26 | 27 | # YOLOv5 head 28 | head: 29 | [[-1, 1, Conv, [512, 1, 1]], 30 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 31 | [[-1, 6], 1, Concat, [1]], # cat backbone P4 32 | [-1, 3, BottleneckCSP, [512, False]], # 13 33 | 34 | [-1, 1, Conv, [256, 1, 1]], 35 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 36 | [[-1, 4], 1, Concat, [1]], # cat backbone P3 37 | [-1, 3, BottleneckCSP, [256, False]], # 17 (P3/8-small) 38 | 39 | [-1, 1, Conv, [256, 3, 2]], 40 | [[-1, 14], 1, Concat, [1]], # cat head P4 41 | [-1, 3, BottleneckCSP, [512, False]], # 20 (P4/16-medium) 42 | 43 | [-1, 1, Conv, [512, 3, 2]], 44 | [[-1, 10], 1, Concat, [1]], # cat head P5 45 | [-1, 3, BottleneckCSP, [1024, False]], # 23 (P5/32-large) 46 | 47 | [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5) 48 | ] 49 | -------------------------------------------------------------------------------- /yolov5/models/yolov5x.yaml: -------------------------------------------------------------------------------- 1 | # parameters 2 | nc: 80 # number of classes 3 | depth_multiple: 1.33 # model depth multiple 4 | width_multiple: 1.25 # layer channel multiple 5 | 6 | # anchors 7 | anchors: 8 | - [10,13, 16,30, 33,23] # P3/8 9 | - [30,61, 62,45, 59,119] # P4/16 10 | - [116,90, 156,198, 373,326] # P5/32 11 | 12 | # YOLOv5 backbone 13 | backbone: 14 | # [from, number, module, args] 15 | [[-1, 1, Focus, [64, 3]], # 0-P1/2 16 | [-1, 1, Conv, [128, 3, 2]], # 1-P2/4 17 | [-1, 3, BottleneckCSP, [128]], 18 | [-1, 1, Conv, [256, 3, 2]], # 3-P3/8 19 | [-1, 9, BottleneckCSP, [256]], 20 | [-1, 1, Conv, [512, 3, 2]], # 5-P4/16 21 | [-1, 9, BottleneckCSP, [512]], 22 | 
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32 23 | [-1, 1, SPP, [1024, [5, 9, 13]]], 24 | [-1, 3, BottleneckCSP, [1024, False]], # 9 25 | ] 26 | 27 | # YOLOv5 head 28 | head: 29 | [[-1, 1, Conv, [512, 1, 1]], 30 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 31 | [[-1, 6], 1, Concat, [1]], # cat backbone P4 32 | [-1, 3, BottleneckCSP, [512, False]], # 13 33 | 34 | [-1, 1, Conv, [256, 1, 1]], 35 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 36 | [[-1, 4], 1, Concat, [1]], # cat backbone P3 37 | [-1, 3, BottleneckCSP, [256, False]], # 17 (P3/8-small) 38 | 39 | [-1, 1, Conv, [256, 3, 2]], 40 | [[-1, 14], 1, Concat, [1]], # cat head P4 41 | [-1, 3, BottleneckCSP, [512, False]], # 20 (P4/16-medium) 42 | 43 | [-1, 1, Conv, [512, 3, 2]], 44 | [[-1, 10], 1, Concat, [1]], # cat head P5 45 | [-1, 3, BottleneckCSP, [1024, False]], # 23 (P5/32-large) 46 | 47 | [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5) 48 | ] 49 | -------------------------------------------------------------------------------- /yolov5/models/yolov5x_road.yaml: -------------------------------------------------------------------------------- 1 | # parameters 2 | nc: 4 # number of classes 3 | depth_multiple: 1.33 # model depth multiple 4 | width_multiple: 1.25 # layer channel multiple 5 | 6 | # anchors 7 | anchors: 8 | - [10,13, 16,30, 33,23] # P3/8 9 | - [30,61, 62,45, 59,119] # P4/16 10 | - [116,90, 156,198, 373,326] # P5/32 11 | 12 | # YOLOv5 backbone 13 | backbone: 14 | # [from, number, module, args] 15 | [[-1, 1, Focus, [64, 3]], # 0-P1/2 16 | [-1, 1, Conv, [128, 3, 2]], # 1-P2/4 17 | [-1, 3, BottleneckCSP, [128]], 18 | [-1, 1, Conv, [256, 3, 2]], # 3-P3/8 19 | [-1, 9, BottleneckCSP, [256]], 20 | [-1, 1, Conv, [512, 3, 2]], # 5-P4/16 21 | [-1, 9, BottleneckCSP, [512]], 22 | [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32 23 | [-1, 1, SPP, [1024, [5, 9, 13]]], 24 | [-1, 3, BottleneckCSP, [1024, False]], # 9 25 | ] 26 | 27 | # YOLOv5 head 28 | head: 29 | [[-1, 1, Conv, [512, 1, 1]], 30 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 31 | [[-1, 6], 1, Concat, [1]], # cat backbone P4 32 | [-1, 3, BottleneckCSP, [512, False]], # 13 33 | 34 | [-1, 1, Conv, [256, 1, 1]], 35 | [-1, 1, nn.Upsample, [None, 2, 'nearest']], 36 | [[-1, 4], 1, Concat, [1]], # cat backbone P3 37 | [-1, 3, BottleneckCSP, [256, False]], # 17 (P3/8-small) 38 | 39 | [-1, 1, Conv, [256, 3, 2]], 40 | [[-1, 14], 1, Concat, [1]], # cat head P4 41 | [-1, 3, BottleneckCSP, [512, False]], # 20 (P4/16-medium) 42 | 43 | [-1, 1, Conv, [512, 3, 2]], 44 | [[-1, 10], 1, Concat, [1]], # cat head P5 45 | [-1, 3, BottleneckCSP, [1024, False]], # 23 (P5/32-large) 46 | 47 | [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5) 48 | ] 49 | -------------------------------------------------------------------------------- /yolov5/requirements.txt: -------------------------------------------------------------------------------- 1 | # pip install -r requirements.txt 2 | 3 | # base ---------------------------------------- 4 | Cython 5 | matplotlib>=3.2.2 6 | numpy>=1.18.5 7 | opencv-python>=4.1.2 8 | pillow 9 | PyYAML>=5.3 10 | scipy>=1.4.1 11 | tensorboard>=2.2 12 | torch>=1.6.0 13 | torchvision>=0.7.0 14 | tqdm>=4.41.0 15 | 16 | # coco ---------------------------------------- 17 | # pycocotools>=2.0 18 | 19 | # export -------------------------------------- 20 | # packaging # for coremltools 21 | # coremltools==4.0b3 22 | # onnx>=1.7.0 23 | # scikit-learn==0.19.2 # for coreml quantization 24 | 25 | # extras -------------------------------------- 26 | # thop # FLOPS computation 27 | 
# seaborn # plotting
28 | 
--------------------------------------------------------------------------------
/yolov5/scripts/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/USC-InfoLab/rddc2020/72cda97851fb6a48b5b9a55048ba38c890396d23/yolov5/scripts/__init__.py
--------------------------------------------------------------------------------
/yolov5/scripts/dataset_setup_for_yolov5.sh:
--------------------------------------------------------------------------------
1 | cd yolov5
2 | bash scripts/download_road2020.sh
3 | bash scripts/prepare_test.sh
4 | python3 scripts/xml2yolo.py
5 | 
--------------------------------------------------------------------------------
/yolov5/scripts/download_IMSC_grddc2020_weights.sh:
--------------------------------------------------------------------------------
1 | gdown https://drive.google.com/uc?id=1F_0MHIBuO1wgVwePk6UAuFudKmCf_7Fs -O weights/IMSC/last_95_448_32_aug2.pt
2 | gdown https://drive.google.com/uc?id=1Fw6_ku3Z8aTdy4vwjZatHTkaNyeT7ZoZ -O weights/IMSC/last_95_640_16.pt
3 | gdown https://drive.google.com/uc?id=1Xu2KDBkD09E7ItOkKrodM_XzOQu-6Mhl -O weights/IMSC/last_95.pt
4 | gdown https://drive.google.com/uc?id=1ky9aZ1ygiy2qXlY_zcpj_4QI1ccfQTcE -O weights/IMSC/last_100_100_640_16.pt
5 | gdown https://drive.google.com/uc?id=1Wd1KA8j-q6qRQzy6ytLEav89xsmiqLFB -O weights/IMSC/last_120_640_32_aug2.pt
6 | 
--------------------------------------------------------------------------------
/yolov5/scripts/download_road2020.sh:
--------------------------------------------------------------------------------
1 | cd datasets/road2020/
2 | echo "downloading train dataset..."
3 | wget https://mycityreport.s3-ap-northeast-1.amazonaws.com/02_RoadDamageDataset/public_data/IEEE_bigdata_RDD2020/train.tar.gz 2>/dev/null || curl -L https://mycityreport.s3-ap-northeast-1.amazonaws.com/02_RoadDamageDataset/public_data/IEEE_bigdata_RDD2020/train.tar.gz -o train.tar.gz
4 | echo "downloading test1 dataset..."
5 | wget https://mycityreport.s3-ap-northeast-1.amazonaws.com/02_RoadDamageDataset/public_data/IEEE_bigdata_RDD2020/test1.tar.gz 2>/dev/null || curl -L https://mycityreport.s3-ap-northeast-1.amazonaws.com/02_RoadDamageDataset/public_data/IEEE_bigdata_RDD2020/test1.tar.gz -o test1.tar.gz
6 | echo "downloading test2 dataset..."
7 | wget https://mycityreport.s3-ap-northeast-1.amazonaws.com/02_RoadDamageDataset/public_data/IEEE_bigdata_RDD2020/test2.tar.gz 2>/dev/null || curl -L https://mycityreport.s3-ap-northeast-1.amazonaws.com/02_RoadDamageDataset/public_data/IEEE_bigdata_RDD2020/test2.tar.gz -o test2.tar.gz
8 | tar -xvf train.tar.gz
9 | tar -xvf test1.tar.gz
10 | tar -xvf test2.tar.gz
11 | rm train.tar.gz test1.tar.gz test2.tar.gz
12 | cd -
13 | 
--------------------------------------------------------------------------------
/yolov5/scripts/prepare_test.sh:
--------------------------------------------------------------------------------
1 | cd datasets/road2020
2 | echo "move test images to a flat directory structure required by yolo..."
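# Note: move_test_iamges.py itself is not included in this dump; based on the
# echo above and the detect.py --source paths used elsewhere in this repo, its
# assumed effect is to gather the per-country test images into one flat folder,
# e.g. test1/<Country>/images/*.jpg -> test1/test_images/*.jpg.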
3 | python3 move_test_iamges.py
4 | cd -
5 | 
--------------------------------------------------------------------------------
/yolov5/scripts/strip_optimizer.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import torch
3 | sys.path.insert(0, './')
4 | from utils.general import strip_optimizer
5 | 
6 | 
7 | strip_optimizer(sys.argv[1])
8 | 
--------------------------------------------------------------------------------
/yolov5/scripts/xml2yolo.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import math
3 | import os
4 | import sys
5 | import xml.etree.ElementTree as ET
6 | from PIL import Image
7 | from collections import defaultdict
8 | from random import shuffle
9 | 
10 | 
11 | # image file types present in the dataset
12 | imageType = ["jpg","png","jpeg","JPEG","JPG","PNG"]
13 | # dictionary storing the set of image paths for each class
14 | imageListDict = defaultdict(set)
15 | 
16 | def convert(size, box):
17 |     dw = 1./size[0]
18 |     dh = 1./size[1]
19 |     x = (box[0] + box[1])/2.0
20 |     y = (box[2] + box[3])/2.0
21 |     w = box[1] - box[0]
22 |     h = box[3] - box[2]
23 |     x = x*dw
24 |     w = w*dw
25 |     y = y*dh
26 |     h = h*dh
27 |     return [x,y,w,h]
28 | 
29 | # convert minX,minY,maxX,maxY to the normalized xywh values required by YOLO
30 | def getYoloNumbers(imagePath, minX,minY,maxX, maxY):
31 |     image = Image.open(imagePath)
32 |     w = int(image.size[0])
33 |     h = int(image.size[1])
34 |     b = (minX,maxX, minY, maxY)
35 |     bb = convert((w,h), b)
36 |     image.close()
37 |     return bb
38 | 
39 | def getFileList3(filePath):
40 |     xmlFiles = []
41 |     with open(filePath,"r") as f:
42 |         xmlFiles = f.readlines()
43 |     for i in range(len(xmlFiles)):
44 |         temp = xmlFiles[i].strip().rsplit('.',1)[0]
45 |         xmlFiles[i] = os.path.abspath(temp.replace("images","annotations/xmls")+".xml")
46 |         labels_path = os.path.dirname(xmlFiles[i]).replace("annotations/xmls","labels")
47 |         if not os.path.exists(labels_path):
48 |             os.mkdir(labels_path)
49 |         assert(os.path.exists(xmlFiles[i]))
50 | 
51 | 
52 | 
53 |     return xmlFiles
54 | 
55 | 
56 | def main():
57 | 
58 |     parser = argparse.ArgumentParser(description='run phase2.')
59 |     parser.add_argument('--class_file', type=str, help='path to the file listing the detection classes; a sample file is at "datasets/road2020/damage_classes.txt"', default='datasets/road2020/damage_classes.txt')
60 |     parser.add_argument('--input_file', type=str, help='path to the file listing the image/XML files (absolute paths); a sample file is at "datasets/road2020/train.txt"', default='datasets/road2020/train.txt')
61 |     args = parser.parse_args()
62 | 
63 |     # assign each dataset class a numeric id
64 |     outputCtoId = {}
65 | 
66 |     f = open(args.class_file,"r")
67 |     lines = f.readlines()
68 |     f.close()
69 |     num_classes = 1
70 |     for i in range(len(lines)):
71 |         outputCtoId[lines[i].strip()] = i
72 | 
73 |     # read the list of XML annotation files
74 |     xmlFiles = getFileList3(args.input_file)
75 | 
76 |     print("total files:", len(xmlFiles))
77 | 
78 |     # loop over each annotation file
79 |     for file in xmlFiles:
80 |         filePath = file
81 |         #print(filePath)
82 |         tree = ET.parse(filePath)
83 |         root = tree.getroot()
84 | 
85 |         i = 0
86 |         imageFile = filePath[:-4].replace("annotations/xmls","images")+"."+imageType[i]
87 |         while not os.path.isfile(imageFile) and i < len(imageType) - 1:  # try every extension in imageType
88 |             i += 1
89 |             imageFile = filePath[:-4].replace("annotations/xmls","images")+"."+imageType[i]
90 | 
91 |         if not os.path.isfile(imageFile):
92 |             print("File not found:", imageFile)
93 |             continue
94 | 
95 |         txtFile = imageFile.replace("images","labels")
96 |         txtFile = txtFile[:-4]+".txt"
97 |         yoloOutput = open(txtFile,"w")
98 | 
99 |         # loop over each object tag in the annotation
100 |         for objects in root.findall('object'):
101 |             surfaceType = objects.find('name').text.replace(" ","")
102 | 
103 | 
104 |             if surfaceType=="D00" or surfaceType=="D10" or surfaceType=="D20" or surfaceType=="D40":
105 |                 bndbox = objects.find('bndbox')
106 |                 [minX,minY,maxX,maxY] = [int(child.text) for child in bndbox]
107 |                 [x,y,w,h] = getYoloNumbers(imageFile,minX,minY,maxX, maxY)
108 |                 yoloOutput.write(str(outputCtoId[surfaceType])+" "+str(x)+" "+str(y)+" "+str(w)+" "+str(h)+"\n")
109 |                 imageListDict[outputCtoId[surfaceType]].add(imageFile)
110 | 
111 | 
112 |         yoloOutput.close()
113 | 
114 |     for cl in imageListDict:
115 |         print(lines[cl].strip(), ":", len(imageListDict[cl]))
116 | 
117 | 
118 | 
119 | if __name__ == "__main__":
120 |     main()
121 | 
--------------------------------------------------------------------------------
/yolov5/sotabench.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import glob
3 | import json
4 | import os
5 | import shutil
6 | from pathlib import Path
7 | 
8 | import numpy as np
9 | import torch
10 | import yaml
11 | from tqdm import tqdm
12 | 
13 | from models.experimental import attempt_load
14 | from utils.datasets import create_dataloader
15 | from utils.general import (
16 |     coco80_to_coco91_class, check_dataset, check_file, check_img_size, compute_loss, non_max_suppression, scale_coords,
17 |     xyxy2xywh, clip_coords, plot_images, xywh2xyxy, box_iou, output_to_target, ap_per_class, set_logging)
18 | from utils.torch_utils import select_device, time_synchronized
19 | 
20 | 
21 | from sotabencheval.object_detection import COCOEvaluator
22 | from sotabencheval.utils import is_server
23 | 
24 | DATA_ROOT = './.data/vision/coco' if is_server() else '../coco'  # sotabench data dir
25 | 
26 | 
27 | def test(data,
28 |          weights=None,
29 |          batch_size=16,
30 |          imgsz=640,
31 |          conf_thres=0.001,
32 |          iou_thres=0.6,  # for NMS
33 |          save_json=False,
34 |          single_cls=False,
35 |          augment=False,
36 |          verbose=False,
37 |          model=None,
38 |          dataloader=None,
39 |          save_dir='',
40 |          merge=False,
41 |          save_txt=False):
42 |     # Initialize/load model and set device
43 |     training = model is not None
44 |     if training:  # called by train.py
45 |         device = next(model.parameters()).device  # get model device
46 | 
47 |     else:  # called directly
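        # Note on the two entry modes of test(): when called from train.py a
        # live model is passed in and its device is reused (branch above);
        # in this standalone branch the script sets up logging and a device
        # itself, then loads the checkpoint below with attempt_load().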
48 | set_logging() 49 | device = select_device(opt.device, batch_size=batch_size) 50 | merge, save_txt = opt.merge, opt.save_txt # use Merge NMS, save *.txt labels 51 | if save_txt: 52 | out = Path('inference/output') 53 | if os.path.exists(out): 54 | shutil.rmtree(out) # delete output folder 55 | os.makedirs(out) # make new output folder 56 | 57 | # Remove previous 58 | for f in glob.glob(str(Path(save_dir) / 'test_batch*.jpg')): 59 | os.remove(f) 60 | 61 | # Load model 62 | model = attempt_load(weights, map_location=device) # load FP32 model 63 | imgsz = check_img_size(imgsz, s=model.stride.max()) # check img_size 64 | 65 | # Multi-GPU disabled, incompatible with .half() https://github.com/ultralytics/yolov5/issues/99 66 | # if device.type != 'cpu' and torch.cuda.device_count() > 1: 67 | # model = nn.DataParallel(model) 68 | 69 | # Half 70 | half = device.type != 'cpu' # half precision only supported on CUDA 71 | if half: 72 | model.half() 73 | 74 | # Configure 75 | model.eval() 76 | with open(data) as f: 77 | data = yaml.load(f, Loader=yaml.FullLoader) # model dict 78 | check_dataset(data) # check 79 | nc = 1 if single_cls else int(data['nc']) # number of classes 80 | iouv = torch.linspace(0.5, 0.95, 10).to(device) # iou vector for mAP@0.5:0.95 81 | niou = iouv.numel() 82 | 83 | # Dataloader 84 | if not training: 85 | img = torch.zeros((1, 3, imgsz, imgsz), device=device) # init img 86 | _ = model(img.half() if half else img) if device.type != 'cpu' else None # run once 87 | path = data['test'] if opt.task == 'test' else data['val'] # path to val/test images 88 | dataloader = create_dataloader(path, imgsz, batch_size, model.stride.max(), opt, 89 | hyp=None, augment=False, cache=True, pad=0.5, rect=True)[0] 90 | 91 | seen = 0 92 | names = model.names if hasattr(model, 'names') else model.module.names 93 | coco91class = coco80_to_coco91_class() 94 | s = ('%20s' + '%12s' * 6) % ('Class', 'Images', 'Targets', 'P', 'R', 'mAP@.5', 'mAP@.5:.95') 95 | p, r, f1, mp, mr, map50, map, t0, t1 = 0., 0., 0., 0., 0., 0., 0., 0., 0. 
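    # Summary of the sotabench flow below, restricted to the sotabencheval
    # calls already used in this file: COCO-format detection dicts are
    # accumulated into jdict batch by batch, then submitted once at the end:
    #     evaluator.add(jdict)
    #     evaluator.save()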
96 | loss = torch.zeros(3, device=device) 97 | jdict, stats, ap, ap_class = [], [], [], [] 98 | evaluator = COCOEvaluator(root=DATA_ROOT, model_name=opt.weights.replace('.pt', '')) 99 | for batch_i, (img, targets, paths, shapes) in enumerate(tqdm(dataloader, desc=s)): 100 | img = img.to(device, non_blocking=True) 101 | img = img.half() if half else img.float() # uint8 to fp16/32 102 | img /= 255.0 # 0 - 255 to 0.0 - 1.0 103 | targets = targets.to(device) 104 | nb, _, height, width = img.shape # batch size, channels, height, width 105 | whwh = torch.Tensor([width, height, width, height]).to(device) 106 | 107 | # Disable gradients 108 | with torch.no_grad(): 109 | # Run model 110 | t = time_synchronized() 111 | inf_out, train_out = model(img, augment=augment) # inference and training outputs 112 | t0 += time_synchronized() - t 113 | 114 | # Compute loss 115 | if training: # if model has loss hyperparameters 116 | loss += compute_loss([x.float() for x in train_out], targets, model)[1][:3] # GIoU, obj, cls 117 | 118 | # Run NMS 119 | t = time_synchronized() 120 | output = non_max_suppression(inf_out, conf_thres=conf_thres, iou_thres=iou_thres, merge=merge) 121 | t1 += time_synchronized() - t 122 | 123 | # Statistics per image 124 | for si, pred in enumerate(output): 125 | labels = targets[targets[:, 0] == si, 1:] 126 | nl = len(labels) 127 | tcls = labels[:, 0].tolist() if nl else [] # target class 128 | seen += 1 129 | 130 | if pred is None: 131 | if nl: 132 | stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls)) 133 | continue 134 | 135 | # Append to text file 136 | if save_txt: 137 | gn = torch.tensor(shapes[si][0])[[1, 0, 1, 0]] # normalization gain whwh 138 | x = pred.clone() 139 | x[:, :4] = scale_coords(img[si].shape[1:], x[:, :4], shapes[si][0], shapes[si][1]) # to original 140 | for *xyxy, conf, cls in x: 141 | xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh 142 | with open(str(out / Path(paths[si]).stem) + '.txt', 'a') as f: 143 | f.write(('%g ' * 5 + '\n') % (cls, *xywh)) # label format 144 | 145 | # Clip boxes to image bounds 146 | clip_coords(pred, (height, width)) 147 | 148 | # Append to pycocotools JSON dictionary 149 | if save_json: 150 | # [{"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236}, ... 
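                # Worked example of the conversion below: an xyxy box
                # (x1=100, y1=50, x2=200, y2=150) maps to center-format xywh
                # (150, 100, 100, 100) via xyxy2xywh; subtracting w/2 and h/2
                # from x,y then gives the COCO top-left form [100.0, 50.0, 100.0, 100.0].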
151 | image_id = Path(paths[si]).stem 152 | box = pred[:, :4].clone() # xyxy 153 | scale_coords(img[si].shape[1:], box, shapes[si][0], shapes[si][1]) # to original shape 154 | box = xyxy2xywh(box) # xywh 155 | box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner 156 | for p, b in zip(pred.tolist(), box.tolist()): 157 | result = {'image_id': int(image_id) if image_id.isnumeric() else image_id, 158 | 'category_id': coco91class[int(p[5])], 159 | 'bbox': [round(x, 3) for x in b], 160 | 'score': round(p[4], 5)} 161 | jdict.append(result) 162 | 163 | #evaluator.add([result]) 164 | #if evaluator.cache_exists: 165 | # break 166 | 167 | # # Assign all predictions as incorrect 168 | # correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool, device=device) 169 | # if nl: 170 | # detected = [] # target indices 171 | # tcls_tensor = labels[:, 0] 172 | # 173 | # # target boxes 174 | # tbox = xywh2xyxy(labels[:, 1:5]) * whwh 175 | # 176 | # # Per target class 177 | # for cls in torch.unique(tcls_tensor): 178 | # ti = (cls == tcls_tensor).nonzero(as_tuple=False).view(-1) # prediction indices 179 | # pi = (cls == pred[:, 5]).nonzero(as_tuple=False).view(-1) # target indices 180 | # 181 | # # Search for detections 182 | # if pi.shape[0]: 183 | # # Prediction to target ious 184 | # ious, i = box_iou(pred[pi, :4], tbox[ti]).max(1) # best ious, indices 185 | # 186 | # # Append detections 187 | # detected_set = set() 188 | # for j in (ious > iouv[0]).nonzero(as_tuple=False): 189 | # d = ti[i[j]] # detected target 190 | # if d.item() not in detected_set: 191 | # detected_set.add(d.item()) 192 | # detected.append(d) 193 | # correct[pi[j]] = ious[j] > iouv # iou_thres is 1xn 194 | # if len(detected) == nl: # all targets already located in image 195 | # break 196 | # 197 | # # Append statistics (correct, conf, pcls, tcls) 198 | # stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls)) 199 | 200 | # # Plot images 201 | # if batch_i < 1: 202 | # f = Path(save_dir) / ('test_batch%g_gt.jpg' % batch_i) # filename 203 | # plot_images(img, targets, paths, str(f), names) # ground truth 204 | # f = Path(save_dir) / ('test_batch%g_pred.jpg' % batch_i) 205 | # plot_images(img, output_to_target(output, width, height), paths, str(f), names) # predictions 206 | 207 | evaluator.add(jdict) 208 | evaluator.save() 209 | 210 | # # Compute statistics 211 | # stats = [np.concatenate(x, 0) for x in zip(*stats)] # to numpy 212 | # if len(stats) and stats[0].any(): 213 | # p, r, ap, f1, ap_class = ap_per_class(*stats) 214 | # p, r, ap50, ap = p[:, 0], r[:, 0], ap[:, 0], ap.mean(1) # [P, R, AP@0.5, AP@0.5:0.95] 215 | # mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean() 216 | # nt = np.bincount(stats[3].astype(np.int64), minlength=nc) # number of targets per class 217 | # else: 218 | # nt = torch.zeros(1) 219 | # 220 | # # Print results 221 | # pf = '%20s' + '%12.3g' * 6 # print format 222 | # print(pf % ('all', seen, nt.sum(), mp, mr, map50, map)) 223 | # 224 | # # Print results per class 225 | # if verbose and nc > 1 and len(stats): 226 | # for i, c in enumerate(ap_class): 227 | # print(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i])) 228 | # 229 | # # Print speeds 230 | # t = tuple(x / seen * 1E3 for x in (t0, t1, t0 + t1)) + (imgsz, imgsz, batch_size) # tuple 231 | # if not training: 232 | # print('Speed: %.1f/%.1f/%.1f ms inference/NMS/total per %gx%g image at batch-size %g' % t) 233 | # 234 | # # Save JSON 235 | # if save_json and len(jdict): 236 | # f = 
'detections_val2017_%s_results.json' % \ 237 | # (weights.split(os.sep)[-1].replace('.pt', '') if isinstance(weights, str) else '') # filename 238 | # print('\nCOCO mAP with pycocotools... saving %s...' % f) 239 | # with open(f, 'w') as file: 240 | # json.dump(jdict, file) 241 | # 242 | # try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb 243 | # from pycocotools.coco import COCO 244 | # from pycocotools.cocoeval import COCOeval 245 | # 246 | # imgIds = [int(Path(x).stem) for x in dataloader.dataset.img_files] 247 | # cocoGt = COCO(glob.glob('../coco/annotations/instances_val*.json')[0]) # initialize COCO ground truth api 248 | # cocoDt = cocoGt.loadRes(f) # initialize COCO pred api 249 | # cocoEval = COCOeval(cocoGt, cocoDt, 'bbox') 250 | # cocoEval.params.imgIds = imgIds # image IDs to evaluate 251 | # cocoEval.evaluate() 252 | # cocoEval.accumulate() 253 | # cocoEval.summarize() 254 | # map, map50 = cocoEval.stats[:2] # update results (mAP@0.5:0.95, mAP@0.5) 255 | # except Exception as e: 256 | # print('ERROR: pycocotools unable to run: %s' % e) 257 | # 258 | # # Return results 259 | # model.float() # for training 260 | # maps = np.zeros(nc) + map 261 | # for i, c in enumerate(ap_class): 262 | # maps[c] = ap[i] 263 | # return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t 264 | 265 | 266 | if __name__ == '__main__': 267 | parser = argparse.ArgumentParser(prog='test.py') 268 | parser.add_argument('--weights', nargs='+', type=str, default='yolov5s.pt', help='model.pt path(s)') 269 | parser.add_argument('--data', type=str, default='data/coco.yaml', help='*.data path') 270 | parser.add_argument('--batch-size', type=int, default=32, help='size of each image batch') 271 | parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)') 272 | parser.add_argument('--conf-thres', type=float, default=0.001, help='object confidence threshold') 273 | parser.add_argument('--iou-thres', type=float, default=0.65, help='IOU threshold for NMS') 274 | parser.add_argument('--save-json', action='store_true', help='save a cocoapi-compatible JSON results file') 275 | parser.add_argument('--task', default='val', help="'val', 'test', 'study'") 276 | parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') 277 | parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset') 278 | parser.add_argument('--augment', action='store_true', help='augmented inference') 279 | parser.add_argument('--merge', action='store_true', help='use Merge NMS') 280 | parser.add_argument('--verbose', action='store_true', help='report mAP by class') 281 | parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') 282 | opt = parser.parse_args() 283 | opt.save_json |= opt.data.endswith('coco.yaml') 284 | opt.data = check_file(opt.data) # check file 285 | print(opt) 286 | 287 | if opt.task in ['val', 'test']: # run normally 288 | test(opt.data, 289 | opt.weights, 290 | opt.batch_size, 291 | opt.img_size, 292 | opt.conf_thres, 293 | opt.iou_thres, 294 | opt.save_json, 295 | opt.single_cls, 296 | opt.augment, 297 | opt.verbose) 298 | 299 | elif opt.task == 'study': # run over a range of settings and save/plot 300 | for weights in ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt']: 301 | f = 'study_%s_%s.txt' % (Path(opt.data).stem, Path(weights).stem) # filename to save to 302 | x = list(range(320, 800, 64)) # x axis 303 | y = [] # y axis 304 | for i in x: # img-size 305 | print('\nRunning %s point %s...' % (f, i)) 306 | r, _, t = test(opt.data, weights, opt.batch_size, i, opt.conf_thres, opt.iou_thres, opt.save_json) 307 | y.append(r + t) # results and times 308 | np.savetxt(f, y, fmt='%10.4g') # save 309 | os.system('zip -r study.zip study_*.txt') 310 | # utils.general.plot_study_txt(f, x) # plot -------------------------------------------------------------------------------- /yolov5/test.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import glob 3 | import json 4 | import os 5 | import shutil 6 | from pathlib import Path 7 | 8 | import numpy as np 9 | import torch 10 | import yaml 11 | from tqdm import tqdm 12 | 13 | from models.experimental import attempt_load 14 | from utils.datasets import create_dataloader 15 | from utils.general import ( 16 | coco80_to_coco91_class, check_dataset, check_file, check_img_size, compute_loss, non_max_suppression, scale_coords, 17 | xyxy2xywh, clip_coords, plot_images, xywh2xyxy, box_iou, output_to_target, ap_per_class, set_logging) 18 | from utils.torch_utils import select_device, time_synchronized 19 | 20 | 21 | def test(data, 22 | weights=None, 23 | batch_size=16, 24 | imgsz=640, 25 | conf_thres=0.001, 26 | iou_thres=0.6, # for NMS 27 | save_json=False, 28 | single_cls=False, 29 | augment=False, 30 | verbose=False, 31 | model=None, 32 | dataloader=None, 33 | save_dir='', 34 | merge=False, 35 | save_txt=False): 36 | # Initialize/load model and set device 37 | training = model is not None 38 | if training: # called by train.py 39 | device = next(model.parameters()).device # get model device 40 | 41 | else: # called directly 42 | set_logging() 43 | device = select_device(opt.device, batch_size=batch_size) 44 | merge, save_txt = opt.merge, opt.save_txt # use Merge NMS, save *.txt labels 45 | if save_txt: 46 | out = Path('inference/output') 47 | if os.path.exists(out): 48 | shutil.rmtree(out) # delete output folder 49 | os.makedirs(out) # make new output folder 50 | 51 | # Remove previous 52 | for f in glob.glob(str(Path(save_dir) / 'test_batch*.jpg')): 53 | os.remove(f) 54 | 55 | # Load model 56 | model = attempt_load(weights, map_location=device) # load FP32 model 57 | imgsz = check_img_size(imgsz, s=model.stride.max()) # check img_size 
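        # Note on check_img_size (summary of the utils.general helper): the
        # inference size must be a multiple of the model's maximum stride,
        # 32 for these P5/32 models, so 640 is kept as-is while e.g. 650
        # would be rounded up to 672 = ceil(650 / 32) * 32.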
58 | 59 | # Multi-GPU disabled, incompatible with .half() https://github.com/ultralytics/yolov5/issues/99 60 | # if device.type != 'cpu' and torch.cuda.device_count() > 1: 61 | # model = nn.DataParallel(model) 62 | 63 | # Half 64 | half = device.type != 'cpu' # half precision only supported on CUDA 65 | if half: 66 | model.half() 67 | 68 | # Configure 69 | model.eval() 70 | with open(data) as f: 71 | data = yaml.load(f, Loader=yaml.FullLoader) # model dict 72 | check_dataset(data) # check 73 | nc = 1 if single_cls else int(data['nc']) # number of classes 74 | iouv = torch.linspace(0.5, 0.95, 10).to(device) # iou vector for mAP@0.5:0.95 75 | niou = iouv.numel() 76 | 77 | # Dataloader 78 | if not training: 79 | img = torch.zeros((1, 3, imgsz, imgsz), device=device) # init img 80 | _ = model(img.half() if half else img) if device.type != 'cpu' else None # run once 81 | path = data['test'] if opt.task == 'test' else data['val'] # path to val/test images 82 | dataloader = create_dataloader(path, imgsz, batch_size, model.stride.max(), opt, 83 | hyp=None, augment=False, cache=False, pad=0.5, rect=True)[0] 84 | 85 | seen = 0 86 | names = model.names if hasattr(model, 'names') else model.module.names 87 | coco91class = coco80_to_coco91_class() 88 | s = ('%20s' + '%12s' * 6) % ('Class', 'Images', 'Targets', 'P', 'R', 'mAP@.5', 'mAP@.5:.95') 89 | p, r, f1, mp, mr, map50, map, t0, t1 = 0., 0., 0., 0., 0., 0., 0., 0., 0. 90 | loss = torch.zeros(3, device=device) 91 | jdict, stats, ap, ap_class = [], [], [], [] 92 | for batch_i, (img, targets, paths, shapes) in enumerate(tqdm(dataloader, desc=s)): 93 | img = img.to(device, non_blocking=True) 94 | img = img.half() if half else img.float() # uint8 to fp16/32 95 | img /= 255.0 # 0 - 255 to 0.0 - 1.0 96 | targets = targets.to(device) 97 | nb, _, height, width = img.shape # batch size, channels, height, width 98 | whwh = torch.Tensor([width, height, width, height]).to(device) 99 | 100 | # Disable gradients 101 | with torch.no_grad(): 102 | # Run model 103 | t = time_synchronized() 104 | inf_out, train_out = model(img, augment=augment) # inference and training outputs 105 | t0 += time_synchronized() - t 106 | 107 | # Compute loss 108 | if training: # if model has loss hyperparameters 109 | loss += compute_loss([x.float() for x in train_out], targets, model)[1][:3] # GIoU, obj, cls 110 | 111 | # Run NMS 112 | t = time_synchronized() 113 | output = non_max_suppression(inf_out, conf_thres=conf_thres, iou_thres=iou_thres, merge=merge) 114 | t1 += time_synchronized() - t 115 | 116 | # Statistics per image 117 | for si, pred in enumerate(output): 118 | labels = targets[targets[:, 0] == si, 1:] 119 | nl = len(labels) 120 | tcls = labels[:, 0].tolist() if nl else [] # target class 121 | seen += 1 122 | 123 | if pred is None: 124 | if nl: 125 | stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls)) 126 | continue 127 | 128 | # Append to text file 129 | if save_txt: 130 | gn = torch.tensor(shapes[si][0])[[1, 0, 1, 0]] # normalization gain whwh 131 | x = pred.clone() 132 | x[:, :4] = scale_coords(img[si].shape[1:], x[:, :4], shapes[si][0], shapes[si][1]) # to original 133 | for *xyxy, conf, cls in x: 134 | xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh 135 | with open(str(out / Path(paths[si]).stem) + '.txt', 'a') as f: 136 | f.write(('%g ' * 5 + '\n') % (cls, *xywh)) # label format 137 | 138 | # Clip boxes to image bounds 139 | clip_coords(pred, (height, width)) 140 | 141 | # Append to 
pycocotools JSON dictionary
142 |             if save_json:
143 |                 # [{"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236}, ...
144 |                 image_id = Path(paths[si]).stem
145 |                 box = pred[:, :4].clone()  # xyxy
146 |                 scale_coords(img[si].shape[1:], box, shapes[si][0], shapes[si][1])  # to original shape
147 |                 box = xyxy2xywh(box)  # xywh
148 |                 box[:, :2] -= box[:, 2:] / 2  # xy center to top-left corner
149 |                 for p, b in zip(pred.tolist(), box.tolist()):
150 |                     jdict.append({'image_id': int(image_id) if image_id.isnumeric() else image_id,
151 |                                   'category_id': coco91class[int(p[5])],
152 |                                   'bbox': [round(x, 3) for x in b],
153 |                                   'score': round(p[4], 5)})
154 | 
155 |             # Assign all predictions as incorrect
156 |             correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool, device=device)
157 |             if nl:
158 |                 detected = []  # target indices
159 |                 tcls_tensor = labels[:, 0]
160 | 
161 |                 # target boxes
162 |                 tbox = xywh2xyxy(labels[:, 1:5]) * whwh
163 | 
164 |                 # Per target class
165 |                 for cls in torch.unique(tcls_tensor):
166 |                     ti = (cls == tcls_tensor).nonzero(as_tuple=False).view(-1)  # target indices
167 |                     pi = (cls == pred[:, 5]).nonzero(as_tuple=False).view(-1)  # prediction indices
168 | 
169 |                     # Search for detections
170 |                     if pi.shape[0]:
171 |                         # Prediction to target ious
172 |                         ious, i = box_iou(pred[pi, :4], tbox[ti]).max(1)  # best ious, indices
173 | 
174 |                         # Append detections
175 |                         detected_set = set()
176 |                         for j in (ious > iouv[0]).nonzero(as_tuple=False):
177 |                             d = ti[i[j]]  # detected target
178 |                             if d.item() not in detected_set:
179 |                                 detected_set.add(d.item())
180 |                                 detected.append(d)
181 |                                 correct[pi[j]] = ious[j] > iouv  # iou_thres is 1xn
182 |                                 if len(detected) == nl:  # all targets already located in image
183 |                                     break
184 | 
185 |             # Append statistics (correct, conf, pcls, tcls)
186 |             stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls))
187 | 
188 |         # Plot images
189 |         if batch_i < 1:
190 |             f = Path(save_dir) / ('test_batch%g_gt.jpg' % batch_i)  # filename
191 |             plot_images(img, targets, paths, str(f), names)  # ground truth
192 |             f = Path(save_dir) / ('test_batch%g_pred.jpg' % batch_i)
193 |             plot_images(img, output_to_target(output, width, height), paths, str(f), names)  # predictions
194 | 
195 |     # Compute statistics
196 |     stats = [np.concatenate(x, 0) for x in zip(*stats)]  # to numpy
197 |     if len(stats) and stats[0].any():
198 |         p, r, ap, f1, ap_class = ap_per_class(*stats)
199 |         p, r, ap50, ap = p[:, 0], r[:, 0], ap[:, 0], ap.mean(1)  # [P, R, AP@0.5, AP@0.5:0.95]
200 |         mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean()
201 |         nt = np.bincount(stats[3].astype(np.int64), minlength=nc)  # number of targets per class
202 |     else:
203 |         nt = torch.zeros(1)
204 | 
205 |     # Print results
206 |     pf = '%20s' + '%12.3g' * 6  # print format
207 |     print(pf % ('all', seen, nt.sum(), mp, mr, map50, map))
208 | 
209 |     # Print results per class
210 |     if verbose and nc > 1 and len(stats):
211 |         for i, c in enumerate(ap_class):
212 |             print(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i]))
213 | 
214 |     # Print speeds
215 |     t = tuple(x / seen * 1E3 for x in (t0, t1, t0 + t1)) + (imgsz, imgsz, batch_size)  # tuple
216 |     if not training:
217 |         print('Speed: %.1f/%.1f/%.1f ms inference/NMS/total per %gx%g image at batch-size %g' % t)
218 | 
219 |     # Save JSON
220 |     if save_json and len(jdict):
221 |         f = 'detections_val2017_%s_results.json' % \
222 |             (weights.split(os.sep)[-1].replace('.pt', '') if isinstance(weights, str) else '')  # filename
223 |         print('\nCOCO mAP with 
pycocotools... saving %s...' % f) 224 | with open(f, 'w') as file: 225 | json.dump(jdict, file) 226 | 227 | try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb 228 | from pycocotools.coco import COCO 229 | from pycocotools.cocoeval import COCOeval 230 | 231 | imgIds = [int(Path(x).stem) for x in dataloader.dataset.img_files] 232 | cocoGt = COCO(glob.glob('../coco/annotations/instances_val*.json')[0]) # initialize COCO ground truth api 233 | cocoDt = cocoGt.loadRes(f) # initialize COCO pred api 234 | cocoEval = COCOeval(cocoGt, cocoDt, 'bbox') 235 | cocoEval.params.imgIds = imgIds # image IDs to evaluate 236 | cocoEval.evaluate() 237 | cocoEval.accumulate() 238 | cocoEval.summarize() 239 | map, map50 = cocoEval.stats[:2] # update results (mAP@0.5:0.95, mAP@0.5) 240 | except Exception as e: 241 | print('ERROR: pycocotools unable to run: %s' % e) 242 | 243 | # Return results 244 | model.float() # for training 245 | maps = np.zeros(nc) + map 246 | for i, c in enumerate(ap_class): 247 | maps[c] = ap[i] 248 | return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t 249 | 250 | 251 | if __name__ == '__main__': 252 | parser = argparse.ArgumentParser(prog='test.py') 253 | parser.add_argument('--weights', nargs='+', type=str, default='yolov5s.pt', help='model.pt path(s)') 254 | parser.add_argument('--data', type=str, default='data/coco128.yaml', help='*.data path') 255 | parser.add_argument('--batch-size', type=int, default=32, help='size of each image batch') 256 | parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)') 257 | parser.add_argument('--conf-thres', type=float, default=0.001, help='object confidence threshold') 258 | parser.add_argument('--iou-thres', type=float, default=0.65, help='IOU threshold for NMS') 259 | parser.add_argument('--save-json', action='store_true', help='save a cocoapi-compatible JSON results file') 260 | parser.add_argument('--task', default='val', help="'val', 'test', 'study'") 261 | parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') 262 | parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset') 263 | parser.add_argument('--augment', action='store_true', help='augmented inference') 264 | parser.add_argument('--merge', action='store_true', help='use Merge NMS') 265 | parser.add_argument('--verbose', action='store_true', help='report mAP by class') 266 | parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') 267 | opt = parser.parse_args() 268 | opt.save_json |= opt.data.endswith('coco.yaml') 269 | opt.data = check_file(opt.data) # check file 270 | print(opt) 271 | 272 | if opt.task in ['val', 'test']: # run normally 273 | test(opt.data, 274 | opt.weights, 275 | opt.batch_size, 276 | opt.img_size, 277 | opt.conf_thres, 278 | opt.iou_thres, 279 | opt.save_json, 280 | opt.single_cls, 281 | opt.augment, 282 | opt.verbose) 283 | 284 | elif opt.task == 'study': # run over a range of settings and save/plot 285 | for weights in ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt']: 286 | f = 'study_%s_%s.txt' % (Path(opt.data).stem, Path(weights).stem) # filename to save to 287 | x = list(range(320, 800, 64)) # x axis 288 | y = [] # y axis 289 | for i in x: # img-size 290 | print('\nRunning %s point %s...' 
% (f, i)) 291 | r, _, t = test(opt.data, weights, opt.batch_size, i, opt.conf_thres, opt.iou_thres, opt.save_json) 292 | y.append(r + t) # results and times 293 | np.savetxt(f, y, fmt='%10.4g') # save 294 | os.system('zip -r study.zip study_*.txt') 295 | # utils.general.plot_study_txt(f, x) # plot 296 | -------------------------------------------------------------------------------- /yolov5/train.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import glob 3 | import logging 4 | import math 5 | import os 6 | import random 7 | import shutil 8 | import time 9 | from pathlib import Path 10 | 11 | import numpy as np 12 | import torch.distributed as dist 13 | import torch.nn.functional as F 14 | import torch.optim as optim 15 | import torch.optim.lr_scheduler as lr_scheduler 16 | import torch.utils.data 17 | import yaml 18 | from torch.cuda import amp 19 | from torch.nn.parallel import DistributedDataParallel as DDP 20 | from torch.utils.tensorboard import SummaryWriter 21 | from tqdm import tqdm 22 | 23 | import test # import test.py to get mAP after each epoch 24 | from models.yolo import Model 25 | from utils.datasets import create_dataloader 26 | from utils.general import ( 27 | torch_distributed_zero_first, labels_to_class_weights, plot_labels, check_anchors, labels_to_image_weights, 28 | compute_loss, plot_images, fitness, strip_optimizer, plot_results, get_latest_run, check_dataset, check_file, 29 | check_git_status, check_img_size, increment_dir, print_mutation, plot_evolution, set_logging) 30 | from utils.google_utils import attempt_download 31 | from utils.torch_utils import init_seeds, ModelEMA, select_device, intersect_dicts 32 | 33 | logger = logging.getLogger(__name__) 34 | 35 | 36 | def train(hyp, opt, device, tb_writer=None): 37 | logger.info(f'Hyperparameters {hyp}') 38 | log_dir = Path(tb_writer.log_dir) if tb_writer else Path(opt.logdir) / 'evolve' # logging directory 39 | wdir = log_dir / 'weights' # weights directory 40 | os.makedirs(wdir, exist_ok=True) 41 | last = wdir / 'last.pt' 42 | best = wdir / 'best.pt' 43 | results_file = str(log_dir / 'results.txt') 44 | epochs, batch_size, total_batch_size, weights, rank = \ 45 | opt.epochs, opt.batch_size, opt.total_batch_size, opt.weights, opt.global_rank 46 | 47 | # Save run settings 48 | with open(log_dir / 'hyp.yaml', 'w') as f: 49 | yaml.dump(hyp, f, sort_keys=False) 50 | with open(log_dir / 'opt.yaml', 'w') as f: 51 | yaml.dump(vars(opt), f, sort_keys=False) 52 | 53 | # Configure 54 | cuda = device.type != 'cpu' 55 | init_seeds(2 + rank) 56 | with open(opt.data) as f: 57 | data_dict = yaml.load(f, Loader=yaml.FullLoader) # data dict 58 | with torch_distributed_zero_first(rank): 59 | check_dataset(data_dict) # check 60 | train_path = data_dict['train'] 61 | test_path = data_dict['val'] 62 | nc, names = (1, ['item']) if opt.single_cls else (int(data_dict['nc']), data_dict['names']) # number classes, names 63 | assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, opt.data) # check 64 | 65 | # Model 66 | pretrained = weights.endswith('.pt') 67 | if pretrained: 68 | with torch_distributed_zero_first(rank): 69 | attempt_download(weights) # download if not found locally 70 | ckpt = torch.load(weights, map_location=device) # load checkpoint 71 | if hyp.get('anchors'): 72 | ckpt['model'].yaml['anchors'] = round(hyp['anchors']) # force autoanchor 73 | model = Model(opt.cfg or ckpt['model'].yaml, ch=3, nc=nc).to(device) # create 74 | exclude = 
['anchor'] if opt.cfg or hyp.get('anchors') else [] # exclude keys 75 | state_dict = ckpt['model'].float().state_dict() # to FP32 76 | state_dict = intersect_dicts(state_dict, model.state_dict(), exclude=exclude) # intersect 77 | model.load_state_dict(state_dict, strict=False) # load 78 | logger.info('Transferred %g/%g items from %s' % (len(state_dict), len(model.state_dict()), weights)) # report 79 | else: 80 | model = Model(opt.cfg, ch=3, nc=nc).to(device) # create 81 | 82 | # Freeze 83 | freeze = ['', ] # parameter names to freeze (full or partial) 84 | if any(freeze): 85 | for k, v in model.named_parameters(): 86 | if any(x in k for x in freeze): 87 | print('freezing %s' % k) 88 | v.requires_grad = False 89 | 90 | # Optimizer 91 | nbs = 64 # nominal batch size 92 | accumulate = max(round(nbs / total_batch_size), 1) # accumulate loss before optimizing 93 | hyp['weight_decay'] *= total_batch_size * accumulate / nbs # scale weight_decay 94 | 95 | pg0, pg1, pg2 = [], [], [] # optimizer parameter groups 96 | for k, v in model.named_parameters(): 97 | v.requires_grad = True 98 | if '.bias' in k: 99 | pg2.append(v) # biases 100 | elif '.weight' in k and '.bn' not in k: 101 | pg1.append(v) # apply weight decay 102 | else: 103 | pg0.append(v) # all else 104 | 105 | if opt.adam: 106 | optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999)) # adjust beta1 to momentum 107 | else: 108 | optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True) 109 | 110 | optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']}) # add pg1 with weight_decay 111 | optimizer.add_param_group({'params': pg2}) # add pg2 (biases) 112 | logger.info('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0))) 113 | del pg0, pg1, pg2 114 | 115 | # Scheduler https://arxiv.org/pdf/1812.01187.pdf 116 | # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR 117 | lf = lambda x: ((1 + math.cos(x * math.pi / epochs)) / 2) * (1 - hyp['lrf']) + hyp['lrf'] # cosine 118 | scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf) 119 | # plot_lr_scheduler(optimizer, scheduler, epochs) 120 | 121 | # Resume 122 | start_epoch, best_fitness = 0, 0.0 123 | if pretrained: 124 | # Optimizer 125 | if ckpt['optimizer'] is not None: 126 | optimizer.load_state_dict(ckpt['optimizer']) 127 | best_fitness = ckpt['best_fitness'] 128 | 129 | # Results 130 | if ckpt.get('training_results') is not None: 131 | with open(results_file, 'w') as file: 132 | file.write(ckpt['training_results']) # write results.txt 133 | 134 | # Epochs 135 | start_epoch = ckpt['epoch'] + 1 136 | if opt.resume: 137 | assert start_epoch > 0, '%s training to %g epochs is finished, nothing to resume.' % (weights, epochs) 138 | shutil.copytree(wdir, wdir.parent / f'weights_backup_epoch{start_epoch - 1}') # save previous weights 139 | if epochs < start_epoch: 140 | logger.info('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' 
% 141 | (weights, ckpt['epoch'], epochs)) 142 | epochs += ckpt['epoch'] # finetune additional epochs 143 | 144 | del ckpt, state_dict 145 | 146 | # Image sizes 147 | gs = int(max(model.stride)) # grid size (max stride) 148 | imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size] # verify imgsz are gs-multiples 149 | 150 | # DP mode 151 | if cuda and rank == -1 and torch.cuda.device_count() > 1: 152 | model = torch.nn.DataParallel(model) 153 | 154 | # SyncBatchNorm 155 | if opt.sync_bn and cuda and rank != -1: 156 | model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device) 157 | logger.info('Using SyncBatchNorm()') 158 | 159 | # Exponential moving average 160 | ema = ModelEMA(model) if rank in [-1, 0] else None 161 | 162 | # DDP mode 163 | if cuda and rank != -1: 164 | model = DDP(model, device_ids=[opt.local_rank], output_device=opt.local_rank) 165 | 166 | # Trainloader 167 | dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt, 168 | hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect, 169 | rank=rank, world_size=opt.world_size, workers=opt.workers) 170 | mlc = np.concatenate(dataset.labels, 0)[:, 0].max() # max label class 171 | nb = len(dataloader) # number of batches 172 | assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1) 173 | 174 | # Process 0 175 | if rank in [-1, 0]: 176 | ema.updates = start_epoch * nb // accumulate # set EMA updates 177 | testloader = create_dataloader(test_path, imgsz_test, total_batch_size, gs, opt, 178 | hyp=hyp, augment=False, cache=opt.cache_images and not opt.notest, rect=True, 179 | rank=-1, world_size=opt.world_size, workers=opt.workers)[0] # testloader 180 | 181 | if not opt.resume: 182 | labels = np.concatenate(dataset.labels, 0) 183 | c = torch.tensor(labels[:, 0]) # classes 184 | # cf = torch.bincount(c.long(), minlength=nc) + 1. # frequency 185 | # model._initialize_biases(cf.to(device)) 186 | plot_labels(labels, save_dir=log_dir) 187 | if tb_writer: 188 | # tb_writer.add_hparams(hyp, {}) # causes duplicate https://github.com/ultralytics/yolov5/pull/384 189 | tb_writer.add_histogram('classes', c, 0) 190 | 191 | # Anchors 192 | if not opt.noautoanchor: 193 | check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz) 194 | 195 | # Model parameters 196 | hyp['cls'] *= nc / 80. # scale coco-tuned hyp['cls'] to current dataset 197 | model.nc = nc # attach number of classes to model 198 | model.hyp = hyp # attach hyperparameters to model 199 | model.gr = 1.0 # giou loss ratio (obj_loss = 1.0 or giou) 200 | model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) # attach class weights 201 | model.names = names 202 | 203 | # Start training 204 | t0 = time.time() 205 | nw = max(round(hyp['warmup_epochs'] * nb), 1e3) # number of warmup iterations, max(3 epochs, 1k iterations) 206 | # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training 207 | maps = np.zeros(nc) # mAP per class 208 | results = (0, 0, 0, 0, 0, 0, 0) # 'P', 'R', 'mAP', 'F1', 'val GIoU', 'val Objectness', 'val Classification' 209 | scheduler.last_epoch = start_epoch - 1 # do not move 210 | scaler = amp.GradScaler(enabled=cuda) 211 | logger.info('Image sizes %g train, %g test\nUsing %g dataloader workers\nLogging results to %s\n' 212 | 'Starting training for %g epochs...' 
% (imgsz, imgsz_test, dataloader.num_workers, log_dir, epochs)) 213 | for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------ 214 | model.train() 215 | 216 | # Update image weights (optional) 217 | if opt.image_weights: 218 | # Generate indices 219 | if rank in [-1, 0]: 220 | cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 # class weights 221 | iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights 222 | dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx 223 | # Broadcast if DDP 224 | if rank != -1: 225 | indices = (torch.tensor(dataset.indices) if rank == 0 else torch.zeros(dataset.n)).int() 226 | dist.broadcast(indices, 0) 227 | if rank != 0: 228 | dataset.indices = indices.cpu().numpy() 229 | 230 | # Update mosaic border 231 | # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs) 232 | # dataset.mosaic_border = [b - imgsz, -b] # height, width borders 233 | 234 | mloss = torch.zeros(4, device=device) # mean losses 235 | if rank != -1: 236 | dataloader.sampler.set_epoch(epoch) 237 | pbar = enumerate(dataloader) 238 | logger.info(('\n' + '%10s' * 8) % ('Epoch', 'gpu_mem', 'GIoU', 'obj', 'cls', 'total', 'targets', 'img_size')) 239 | if rank in [-1, 0]: 240 | pbar = tqdm(pbar, total=nb) # progress bar 241 | optimizer.zero_grad() 242 | for i, (imgs, targets, paths, _) in pbar: # batch ------------------------------------------------------------- 243 | ni = i + nb * epoch # number integrated batches (since train start) 244 | imgs = imgs.to(device, non_blocking=True).float() / 255.0 # uint8 to float32, 0-255 to 0.0-1.0 245 | 246 | # Warmup 247 | if ni <= nw: 248 | xi = [0, nw] # x interp 249 | # model.gr = np.interp(ni, xi, [0.0, 1.0]) # giou loss ratio (obj_loss = 1.0 or giou) 250 | accumulate = max(1, np.interp(ni, xi, [1, nbs / total_batch_size]).round()) 251 | for j, x in enumerate(optimizer.param_groups): 252 | # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0 253 | x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)]) 254 | if 'momentum' in x: 255 | x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']]) 256 | 257 | # Multi-scale 258 | if opt.multi_scale: 259 | sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs # size 260 | sf = sz / max(imgs.shape[2:]) # scale factor 261 | if sf != 1: 262 | ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple) 263 | imgs = F.interpolate(imgs, size=ns, mode='bilinear', align_corners=False) 264 | 265 | # Forward 266 | with amp.autocast(enabled=cuda): 267 | pred = model(imgs) # forward 268 | loss, loss_items = compute_loss(pred, targets.to(device), model) # loss scaled by batch_size 269 | if rank != -1: 270 | loss *= opt.world_size # gradient averaged between devices in DDP mode 271 | 272 | # Backward 273 | scaler.scale(loss).backward() 274 | 275 | # Optimize 276 | if ni % accumulate == 0: 277 | scaler.step(optimizer) # optimizer.step 278 | scaler.update() 279 | optimizer.zero_grad() 280 | if ema: 281 | ema.update(model) 282 | 283 | # Print 284 | if rank in [-1, 0]: 285 | mloss = (mloss * i + loss_items) / (i + 1) # update mean losses 286 | mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0) # (GB) 287 | s = ('%10s' * 2 + '%10.4g' * 6) % ( 288 | '%g/%g' % (epoch, epochs - 1), mem, *mloss, targets.shape[0], imgs.shape[-1]) 
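                # s is one 8-column progress row under the header logged above
                # ('Epoch', 'gpu_mem', 'GIoU', 'obj', 'cls', 'total', 'targets',
                # 'img_size'), e.g. with illustrative values:
                #      3/299     7.68G   0.04510   0.06510    0.0142    0.1244       142       640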
289 | pbar.set_description(s) 290 | 291 | # Plot 292 | if ni < 3: 293 | f = str(log_dir / ('train_batch%g.jpg' % ni)) # filename 294 | result = plot_images(images=imgs, targets=targets, paths=paths, fname=f) 295 | if tb_writer and result is not None: 296 | tb_writer.add_image(f, result, dataformats='HWC', global_step=epoch) 297 | # tb_writer.add_graph(model, imgs) # add model to tensorboard 298 | 299 | # end batch ------------------------------------------------------------------------------------------------ 300 | 301 | # Scheduler 302 | lr = [x['lr'] for x in optimizer.param_groups] # for tensorboard 303 | scheduler.step() 304 | 305 | # DDP process 0 or single-GPU 306 | if rank in [-1, 0]: 307 | # mAP 308 | if ema: 309 | ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride']) 310 | final_epoch = epoch + 1 == epochs 311 | if not opt.notest or final_epoch: # Calculate mAP 312 | if final_epoch: # replot predictions 313 | [os.remove(x) for x in glob.glob(str(log_dir / 'test_batch*_pred.jpg')) if os.path.exists(x)] 314 | results, maps, times = test.test(opt.data, 315 | batch_size=total_batch_size, 316 | imgsz=imgsz_test, 317 | model=ema.ema, 318 | single_cls=opt.single_cls, 319 | dataloader=testloader, 320 | save_dir=log_dir) 321 | 322 | # Write 323 | with open(results_file, 'a') as f: 324 | f.write(s + '%10.4g' * 7 % results + '\n') # P, R, mAP, F1, test_losses=(GIoU, obj, cls) 325 | if len(opt.name) and opt.bucket: 326 | os.system('gsutil cp %s gs://%s/results/results%s.txt' % (results_file, opt.bucket, opt.name)) 327 | 328 | # Tensorboard 329 | if tb_writer: 330 | tags = ['train/giou_loss', 'train/obj_loss', 'train/cls_loss', # train loss 331 | 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', 332 | 'val/giou_loss', 'val/obj_loss', 'val/cls_loss', # val loss 333 | 'x/lr0', 'x/lr1', 'x/lr2'] # params 334 | for x, tag in zip(list(mloss[:-1]) + list(results) + lr, tags): 335 | tb_writer.add_scalar(tag, x, epoch) 336 | 337 | # Update best mAP 338 | fi = fitness(np.array(results).reshape(1, -1)) # fitness_i = weighted combination of [P, R, mAP, F1] 339 | if fi > best_fitness: 340 | best_fitness = fi 341 | 342 | # Save model 343 | save = (not opt.nosave) or (final_epoch and not opt.evolve) 344 | if save: 345 | with open(results_file, 'r') as f: # create checkpoint 346 | ckpt = {'epoch': epoch, 347 | 'best_fitness': best_fitness, 348 | 'training_results': f.read(), 349 | 'model': ema.ema, 350 | 'optimizer': None if final_epoch else optimizer.state_dict()} 351 | 352 | # Save last, best and delete 353 | torch.save(ckpt, last) 354 | if epoch % 5 == 0: 355 | wt_name = os.path.join(wdir, 'last_{}.pt'.format(epoch)) 356 | print("saving..", wt_name) 357 | torch.save(ckpt, wt_name) 358 | if best_fitness == fi: 359 | torch.save(ckpt, best) 360 | del ckpt 361 | # end epoch ---------------------------------------------------------------------------------------------------- 362 | # end training 363 | 364 | if rank in [-1, 0]: 365 | # Strip optimizers 366 | n = opt.name if opt.name.isnumeric() else '' 367 | fresults, flast, fbest = log_dir / f'results{n}.txt', wdir / f'last{n}.pt', wdir / f'best{n}.pt' 368 | for f1, f2 in zip([wdir / 'last.pt', wdir / 'best.pt', results_file], [flast, fbest, fresults]): 369 | if os.path.exists(f1): 370 | os.rename(f1, f2) # rename 371 | if str(f2).endswith('.pt'): # is *.pt 372 | strip_optimizer(f2) # strip optimizer 373 | os.system('gsutil cp %s gs://%s/weights' % (f2, opt.bucket)) if opt.bucket else None # upload 374 | 
# Finish 375 | if not opt.evolve: 376 | plot_results(save_dir=log_dir) # save as results.png 377 | logger.info('%g epochs completed in %.3f hours.\n' % (epoch - start_epoch + 1, (time.time() - t0) / 3600)) 378 | 379 | dist.destroy_process_group() if rank not in [-1, 0] else None 380 | torch.cuda.empty_cache() 381 | return results 382 | 383 | 384 | if __name__ == '__main__': 385 | parser = argparse.ArgumentParser() 386 | parser.add_argument('--weights', type=str, default='yolov5s.pt', help='initial weights path') 387 | parser.add_argument('--cfg', type=str, default='', help='model.yaml path') 388 | parser.add_argument('--data', type=str, default='data/coco128.yaml', help='data.yaml path') 389 | parser.add_argument('--hyp', type=str, default='data/hyp.scratch.yaml', help='hyperparameters path') 390 | parser.add_argument('--epochs', type=int, default=300) 391 | parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs') 392 | parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes') 393 | parser.add_argument('--rect', action='store_true', help='rectangular training') 394 | parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training') 395 | parser.add_argument('--nosave', action='store_true', help='only save final checkpoint') 396 | parser.add_argument('--notest', action='store_true', help='only test final epoch') 397 | parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check') 398 | parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters') 399 | parser.add_argument('--bucket', type=str, default='', help='gsutil bucket') 400 | parser.add_argument('--cache-images', action='store_true', help='cache images for faster training') 401 | parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training') 402 | parser.add_argument('--name', default='', help='renames results.txt to results_name.txt if supplied') 403 | parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') 404 | parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%') 405 | parser.add_argument('--single-cls', action='store_true', help='train as single-class dataset') 406 | parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer') 407 | parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode') 408 | parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify') 409 | parser.add_argument('--logdir', type=str, default='runs/', help='logging directory') 410 | parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers') 411 | opt = parser.parse_args() 412 | 413 | # Set DDP variables 414 | opt.total_batch_size = opt.batch_size 415 | opt.world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1 416 | opt.global_rank = int(os.environ['RANK']) if 'RANK' in os.environ else -1 417 | set_logging(opt.global_rank) 418 | if opt.global_rank in [-1, 0]: 419 | check_git_status() 420 | 421 | # Resume 422 | if opt.resume: # resume an interrupted run 423 | ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run() # specified or most recent path 424 | log_dir = Path(ckpt).parent.parent # runs/exp0 425 | assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist' 426 | with open(log_dir / 'opt.yaml') as f: 427 | opt = argparse.Namespace(**yaml.load(f, Loader=yaml.FullLoader)) # replace 428 | opt.cfg, opt.weights, opt.resume = '', ckpt, True 429 | logger.info('Resuming training from %s' % ckpt) 430 | 431 | else: 432 | # opt.hyp = opt.hyp or ('hyp.finetune.yaml' if opt.weights else 'hyp.scratch.yaml') 433 | opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp) # check files 434 | assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified' 435 | opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size))) # extend to 2 sizes (train, test) 436 | log_dir = increment_dir(Path(opt.logdir) / 'exp', opt.name) # runs/exp1 437 | 438 | device = select_device(opt.device, batch_size=opt.batch_size) 439 | 440 | # DDP mode 441 | if opt.local_rank != -1: 442 | assert torch.cuda.device_count() > opt.local_rank 443 | torch.cuda.set_device(opt.local_rank) 444 | device = torch.device('cuda', opt.local_rank) 445 | dist.init_process_group(backend='nccl', init_method='env://') # distributed backend 446 | assert opt.batch_size % opt.world_size == 0, '--batch-size must be multiple of CUDA device count' 447 | opt.batch_size = opt.total_batch_size // opt.world_size 448 | 449 | logger.info(opt) 450 | with open(opt.hyp) as f: 451 | hyp = yaml.load(f, Loader=yaml.FullLoader) # load hyps 452 | 453 | # Train 454 | if not opt.evolve: 455 | tb_writer = None 456 | if opt.global_rank in [-1, 0]: 457 | logger.info('Start Tensorboard with "tensorboard --logdir %s", view at http://localhost:6006/' % opt.logdir) 458 | tb_writer = SummaryWriter(log_dir=log_dir) # runs/exp0 459 | 460 | train(hyp, opt, device, tb_writer) 461 | 462 | # Evolve hyperparameters (optional) 463 | else: 464 | # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit) 465 | meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3) 466 | 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf) 467 | 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1 468 | 'weight_decay': (1, 0.0, 0.001), # optimizer 
weight decay 469 | 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok) 470 | 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum 471 | 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr 472 | 'giou': (1, 0.02, 0.2), # GIoU loss gain 473 | 'cls': (1, 0.2, 4.0), # cls loss gain 474 | 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight 475 | 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels) 476 | 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight 477 | 'iou_t': (0, 0.1, 0.7), # IoU training threshold 478 | 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold 479 | 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore) 480 | 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5) 481 | 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction) 482 | 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction) 483 | 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction) 484 | 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg) 485 | 'translate': (1, 0.0, 0.9), # image translation (+/- fraction) 486 | 'scale': (1, 0.0, 0.9), # image scale (+/- gain) 487 | 'shear': (1, 0.0, 10.0), # image shear (+/- deg) 488 | 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001 489 | 'flipud': (1, 0.0, 1.0), # image flip up-down (probability) 490 | 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability) 491 | 'mosaic': (1, 0.0, 1.0), # image mosaic (probability) 492 | 'mixup': (1, 0.0, 1.0)} # image mixup (probability) 493 | 494 | assert opt.local_rank == -1, 'DDP mode not implemented for --evolve' 495 | opt.notest, opt.nosave = True, True # only test/save final epoch 496 | # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices 497 | yaml_file = Path('runs/evolve/hyp_evolved.yaml') # save best result here 498 | if opt.bucket: 499 | os.system('gsutil cp gs://%s/evolve.txt .' 
% opt.bucket) # download evolve.txt if exists 500 | 501 | for _ in range(300): # generations to evolve 502 | if os.path.exists('evolve.txt'): # if evolve.txt exists: select best hyps and mutate 503 | # Select parent(s) 504 | parent = 'single' # parent selection method: 'single' or 'weighted' 505 | x = np.loadtxt('evolve.txt', ndmin=2) 506 | n = min(5, len(x)) # number of previous results to consider 507 | x = x[np.argsort(-fitness(x))][:n] # top n mutations 508 | w = fitness(x) - fitness(x).min() # weights 509 | if parent == 'single' or len(x) == 1: 510 | # x = x[random.randint(0, n - 1)] # random selection 511 | x = x[random.choices(range(n), weights=w)[0]] # weighted selection 512 | elif parent == 'weighted': 513 | x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination 514 | 515 | # Mutate 516 | mp, s = 0.8, 0.2 # mutation probability, sigma 517 | npr = np.random 518 | npr.seed(int(time.time())) 519 | g = np.array([x[0] for x in meta.values()]) # gains 0-1 520 | ng = len(meta) 521 | v = np.ones(ng) 522 | while all(v == 1): # mutate until a change occurs (prevent duplicates) 523 | v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0) 524 | for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300) 525 | hyp[k] = float(x[i + 7] * v[i]) # mutate 526 | 527 | # Constrain to limits 528 | for k, v in meta.items(): 529 | hyp[k] = max(hyp[k], v[1]) # lower limit 530 | hyp[k] = min(hyp[k], v[2]) # upper limit 531 | hyp[k] = round(hyp[k], 5) # significant digits 532 | 533 | # Train mutation 534 | results = train(hyp.copy(), opt, device) 535 | 536 | # Write mutation results 537 | print_mutation(hyp.copy(), results, yaml_file, opt.bucket) 538 | 539 | # Plot results 540 | plot_evolution(yaml_file) 541 | print('Hyperparameter evolution complete. Best results saved as: %s\nCommand to train a new model with these ' 542 | 'hyperparameters: $ python train.py --hyp %s' % (yaml_file, yaml_file)) 543 | -------------------------------------------------------------------------------- /yolov5/utils/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/USC-InfoLab/rddc2020/72cda97851fb6a48b5b9a55048ba38c890396d23/yolov5/utils/__init__.py -------------------------------------------------------------------------------- /yolov5/utils/activations.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.functional as F 4 | 5 | 6 | # Swish https://arxiv.org/pdf/1905.02244.pdf --------------------------------------------------------------------------- 7 | class Swish(nn.Module): # 8 | @staticmethod 9 | def forward(x): 10 | return x * torch.sigmoid(x) 11 | 12 | 13 | class Hardswish(nn.Module): # export-friendly version of nn.Hardswish() 14 | @staticmethod 15 | def forward(x): 16 | # return x * F.hardsigmoid(x) # for torchscript and CoreML 17 | return x * F.hardtanh(x + 3, 0., 6.) / 6. 
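# [editor's sanity check, hedged -- not part of the original file] the export-friendly expression above
# matches the commented-out hardsigmoid form, since hardsigmoid(x) == hardtanh(x + 3, 0, 6) / 6:
#
#     x = torch.linspace(-5., 5., 101)
#     assert torch.allclose(x * F.hardtanh(x + 3, 0., 6.) / 6., x * F.hardsigmoid(x))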
# for torchscript, CoreML and ONNX 18 | 19 | 20 | class MemoryEfficientSwish(nn.Module): 21 | class F(torch.autograd.Function): 22 | @staticmethod 23 | def forward(ctx, x): 24 | ctx.save_for_backward(x) 25 | return x * torch.sigmoid(x) 26 | 27 | @staticmethod 28 | def backward(ctx, grad_output): 29 | x = ctx.saved_tensors[0] 30 | sx = torch.sigmoid(x) 31 | return grad_output * (sx * (1 + x * (1 - sx))) 32 | 33 | def forward(self, x): 34 | return self.F.apply(x) 35 | 36 | 37 | # Mish https://github.com/digantamisra98/Mish -------------------------------------------------------------------------- 38 | class Mish(nn.Module): 39 | @staticmethod 40 | def forward(x): 41 | return x * F.softplus(x).tanh() 42 | 43 | 44 | class MemoryEfficientMish(nn.Module): 45 | class F(torch.autograd.Function): 46 | @staticmethod 47 | def forward(ctx, x): 48 | ctx.save_for_backward(x) 49 | return x.mul(torch.tanh(F.softplus(x))) # x * tanh(ln(1 + exp(x))) 50 | 51 | @staticmethod 52 | def backward(ctx, grad_output): 53 | x = ctx.saved_tensors[0] 54 | sx = torch.sigmoid(x) 55 | fx = F.softplus(x).tanh() 56 | return grad_output * (fx + x * sx * (1 - fx * fx)) 57 | 58 | def forward(self, x): 59 | return self.F.apply(x) 60 | 61 | 62 | # FReLU https://arxiv.org/abs/2007.11824 ------------------------------------------------------------------------------- 63 | class FReLU(nn.Module): 64 | def __init__(self, c1, k=3): # ch_in, kernel 65 | super().__init__() 66 | self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1) 67 | self.bn = nn.BatchNorm2d(c1) 68 | 69 | def forward(self, x): 70 | return torch.max(x, self.bn(self.conv(x))) 71 | -------------------------------------------------------------------------------- /yolov5/utils/evolve.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Hyperparameter evolution commands (avoids CUDA memory leakage issues) 3 | # Replaces train.py python generations 'for' loop with a bash 'for' loop 4 | 5 | # Start on 4-GPU machine 6 | #for i in 0 1 2 3; do 7 | # t=ultralytics/yolov5:evolve && sudo docker pull $t && sudo docker run -d --ipc=host --gpus all -v "$(pwd)"/VOC:/usr/src/VOC $t bash utils/evolve.sh $i 8 | # sleep 60 # avoid simultaneous evolve.txt read/write 9 | #done 10 | 11 | # Hyperparameter evolution commands 12 | while true; do 13 | # python train.py --batch 64 --weights yolov5m.pt --data voc.yaml --img 512 --epochs 50 --evolve --bucket ult/evolve/voc --device $1 14 | python train.py --batch 40 --weights yolov5m.pt --data coco.yaml --img 640 --epochs 30 --evolve --bucket ult/evolve/coco --device $1 15 | done 16 | -------------------------------------------------------------------------------- /yolov5/utils/google_app_engine/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM gcr.io/google-appengine/python 2 | 3 | # Create a virtualenv for dependencies. This isolates these packages from 4 | # system-level packages. 5 | # Use -p python3 or -p python3.7 to select python version. Default is version 2. 6 | RUN virtualenv /env -p python3 7 | 8 | # Setting these environment variables are the same as running 9 | # source /env/bin/activate. 10 | ENV VIRTUAL_ENV /env 11 | ENV PATH /env/bin:$PATH 12 | 13 | RUN apt-get update && apt-get install -y python-opencv 14 | 15 | # Copy the application's requirements.txt and run pip to install all 16 | # dependencies into the virtualenv. 
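# [editor's note, hedged] a typical build/deploy flow for this image -- standard gcloud commands,
# not taken from this repo; PROJECT_ID is a placeholder:
#   gcloud builds submit --tag gcr.io/PROJECT_ID/yolov5app
#   gcloud app deploy app.yaml --image-url gcr.io/PROJECT_ID/yolov5app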
17 | ADD requirements.txt /app/requirements.txt 18 | RUN pip install -r /app/requirements.txt 19 | 20 | # Add the application source code. 21 | ADD . /app 22 | 23 | # Run a WSGI server to serve the application. gunicorn must be declared as 24 | # a dependency in requirements.txt. 25 | CMD gunicorn -b :$PORT main:app 26 | -------------------------------------------------------------------------------- /yolov5/utils/google_app_engine/additional_requirements.txt: -------------------------------------------------------------------------------- 1 | # add these requirements in your app on top of the existing ones 2 | pip==18.1 3 | Flask==1.0.2 4 | gunicorn==19.9.0 5 | -------------------------------------------------------------------------------- /yolov5/utils/google_app_engine/app.yaml: -------------------------------------------------------------------------------- 1 | runtime: custom 2 | env: flex 3 | 4 | service: yolov5app 5 | 6 | liveness_check: 7 | initial_delay_sec: 600 8 | 9 | manual_scaling: 10 | instances: 1 11 | resources: 12 | cpu: 1 13 | memory_gb: 4 14 | disk_size_gb: 20 -------------------------------------------------------------------------------- /yolov5/utils/google_utils.py: -------------------------------------------------------------------------------- 1 | # This file contains google utils: https://cloud.google.com/storage/docs/reference/libraries 2 | # pip install --upgrade google-cloud-storage 3 | # from google.cloud import storage 4 | 5 | import os 6 | import platform 7 | import subprocess 8 | import time 9 | from pathlib import Path 10 | 11 | import torch 12 | 13 | 14 | def gsutil_getsize(url=''): 15 | # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du 16 | s = subprocess.check_output('gsutil du %s' % url, shell=True).decode('utf-8') 17 | return eval(s.split(' ')[0]) if len(s) else 0 # bytes 18 | 19 | 20 | def attempt_download(weights): 21 | # Attempt to download pretrained weights if not found locally 22 | weights = weights.strip().replace("'", '') 23 | file = Path(weights).name 24 | 25 | msg = weights + ' missing, try downloading from https://github.com/ultralytics/yolov5/releases/' 26 | models = ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt'] # available models 27 | 28 | if file in models and not os.path.isfile(weights): 29 | # Google Drive 30 | # d = {'yolov5s.pt': '1R5T6rIyy3lLwgFXNms8whc-387H0tMQO', 31 | # 'yolov5m.pt': '1vobuEExpWQVpXExsJ2w-Mbf3HJjWkQJr', 32 | # 'yolov5l.pt': '1hrlqD1Wdei7UT4OgT785BEk1JwnSvNEV', 33 | # 'yolov5x.pt': '1mM8aZJlWTxOg7BZJvNUMrTnA2AbeCVzS'} 34 | # r = gdrive_download(id=d[file], name=weights) if file in d else 1 35 | # if r == 0 and os.path.exists(weights) and os.path.getsize(weights) > 1E6: # check 36 | # return 37 | 38 | try: # GitHub 39 | url = 'https://github.com/ultralytics/yolov5/releases/download/v3.0/' + file 40 | print('Downloading %s to %s...' % (url, weights)) 41 | torch.hub.download_url_to_file(url, weights) 42 | assert os.path.exists(weights) and os.path.getsize(weights) > 1E6 # check 43 | except Exception as e: # GCP 44 | print('Download error: %s' % e) 45 | url = 'https://storage.googleapis.com/ultralytics/yolov5/ckpt/' + file 46 | print('Downloading %s to %s...' 
% (url, weights)) 47 | r = os.system('curl -L %s -o %s' % (url, weights)) # torch.hub.download_url_to_file(url, weights) 48 | finally: 49 | if not (os.path.exists(weights) and os.path.getsize(weights) > 1E6): # check 50 | os.remove(weights) if os.path.exists(weights) else None # remove partial downloads 51 | print('ERROR: Download failure: %s' % msg) 52 | print('') 53 | return 54 | 55 | 56 | def gdrive_download(id='1n_oKgR81BJtqk75b00eAjdv03qVCQn2f', name='coco128.zip'): 57 | # Downloads a file from Google Drive. from utils.google_utils import *; gdrive_download() 58 | t = time.time() 59 | 60 | print('Downloading https://drive.google.com/uc?export=download&id=%s as %s... ' % (id, name), end='') 61 | os.remove(name) if os.path.exists(name) else None # remove existing 62 | os.remove('cookie') if os.path.exists('cookie') else None 63 | 64 | # Attempt file download 65 | out = "NUL" if platform.system() == "Windows" else "/dev/null" 66 | os.system('curl -c ./cookie -s -L "drive.google.com/uc?export=download&id=%s" > %s ' % (id, out)) 67 | if os.path.exists('cookie'): # large file 68 | s = 'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm=%s&id=%s" -o %s' % (get_token(), id, name) 69 | else: # small file 70 | s = 'curl -s -L -o %s "drive.google.com/uc?export=download&id=%s"' % (name, id) 71 | r = os.system(s) # execute, capture return 72 | os.remove('cookie') if os.path.exists('cookie') else None 73 | 74 | # Error check 75 | if r != 0: 76 | os.remove(name) if os.path.exists(name) else None # remove partial 77 | print('Download error ') # raise Exception('Download error') 78 | return r 79 | 80 | # Unzip if archive 81 | if name.endswith('.zip'): 82 | print('unzipping... ', end='') 83 | os.system('unzip -q %s' % name) # unzip 84 | os.remove(name) # remove zip to free space 85 | 86 | print('Done (%.1fs)' % (time.time() - t)) 87 | return r 88 | 89 | 90 | def get_token(cookie="./cookie"): 91 | with open(cookie) as f: 92 | for line in f: 93 | if "download" in line: 94 | return line.split()[-1] 95 | return "" 96 | 97 | # def upload_blob(bucket_name, source_file_name, destination_blob_name): 98 | # # Uploads a file to a bucket 99 | # # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python 100 | # 101 | # storage_client = storage.Client() 102 | # bucket = storage_client.get_bucket(bucket_name) 103 | # blob = bucket.blob(destination_blob_name) 104 | # 105 | # blob.upload_from_filename(source_file_name) 106 | # 107 | # print('File {} uploaded to {}.'.format( 108 | # source_file_name, 109 | # destination_blob_name)) 110 | # 111 | # 112 | # def download_blob(bucket_name, source_blob_name, destination_file_name): 113 | # # Uploads a blob from a bucket 114 | # storage_client = storage.Client() 115 | # bucket = storage_client.get_bucket(bucket_name) 116 | # blob = bucket.blob(source_blob_name) 117 | # 118 | # blob.download_to_filename(destination_file_name) 119 | # 120 | # print('Blob {} downloaded to {}.'.format( 121 | # source_blob_name, 122 | # destination_file_name)) 123 | -------------------------------------------------------------------------------- /yolov5/utils/torch_utils.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import math 3 | import os 4 | import time 5 | from copy import deepcopy 6 | 7 | import torch 8 | import torch.backends.cudnn as cudnn 9 | import torch.nn as nn 10 | import torch.nn.functional as F 11 | import torchvision.models as models 12 | 13 | logger = logging.getLogger(__name__) 
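# [editor's usage note, hedged -- not part of the original file] init_seeds() below trades speed for
# reproducibility: seed 0 forces deterministic cuDNN kernels, any other seed keeps autotuning on, e.g.
#
#     from utils.torch_utils import init_seeds  # assumes the yolov5/ directory is the working directory
#     init_seeds(0)  # deterministic, slower: repeatable runs
#     init_seeds(2)  # faster: cudnn.benchmark picks the fastest conv algorithms, results may vary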
14 | 15 | 16 | def init_seeds(seed=0): 17 | torch.manual_seed(seed) 18 | 19 | # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html 20 | if seed == 0: # slower, more reproducible 21 | cudnn.deterministic = True 22 | cudnn.benchmark = False 23 | else: # faster, less reproducible 24 | cudnn.deterministic = False 25 | cudnn.benchmark = True 26 | 27 | 28 | def select_device(device='', batch_size=None): 29 | # device = 'cpu' or '0' or '0,1,2,3' 30 | cpu_request = device.lower() == 'cpu' 31 | if device and not cpu_request: # if device requested other than 'cpu' 32 | os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable 33 | assert torch.cuda.is_available(), 'CUDA unavailable, invalid device %s requested' % device # check availablity 34 | 35 | cuda = False if cpu_request else torch.cuda.is_available() 36 | if cuda: 37 | c = 1024 ** 2 # bytes to MB 38 | ng = torch.cuda.device_count() 39 | if ng > 1 and batch_size: # check that batch_size is compatible with device_count 40 | assert batch_size % ng == 0, 'batch-size %g not multiple of GPU count %g' % (batch_size, ng) 41 | x = [torch.cuda.get_device_properties(i) for i in range(ng)] 42 | s = 'Using CUDA ' 43 | for i in range(0, ng): 44 | if i == 1: 45 | s = ' ' * len(s) 46 | logger.info("%sdevice%g _CudaDeviceProperties(name='%s', total_memory=%dMB)" % 47 | (s, i, x[i].name, x[i].total_memory / c)) 48 | else: 49 | logger.info('Using CPU') 50 | 51 | logger.info('') # skip a line 52 | return torch.device('cuda:0' if cuda else 'cpu') 53 | 54 | 55 | def time_synchronized(): 56 | torch.cuda.synchronize() if torch.cuda.is_available() else None 57 | return time.time() 58 | 59 | 60 | def is_parallel(model): 61 | return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel) 62 | 63 | 64 | def intersect_dicts(da, db, exclude=()): 65 | # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values 66 | return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape} 67 | 68 | 69 | def initialize_weights(model): 70 | for m in model.modules(): 71 | t = type(m) 72 | if t is nn.Conv2d: 73 | pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') 74 | elif t is nn.BatchNorm2d: 75 | m.eps = 1e-3 76 | m.momentum = 0.03 77 | elif t in [nn.LeakyReLU, nn.ReLU, nn.ReLU6]: 78 | m.inplace = True 79 | 80 | 81 | def find_modules(model, mclass=nn.Conv2d): 82 | # Finds layer indices matching module class 'mclass' 83 | return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)] 84 | 85 | 86 | def sparsity(model): 87 | # Return global model sparsity 88 | a, b = 0., 0. 89 | for p in model.parameters(): 90 | a += p.numel() 91 | b += (p == 0).sum() 92 | return b / a 93 | 94 | 95 | def prune(model, amount=0.3): 96 | # Prune model to requested global sparsity 97 | import torch.nn.utils.prune as prune 98 | print('Pruning model... 
', end='') 99 | for name, m in model.named_modules(): 100 | if isinstance(m, nn.Conv2d): 101 | prune.l1_unstructured(m, name='weight', amount=amount) # prune 102 | prune.remove(m, 'weight') # make permanent 103 | print(' %.3g global sparsity' % sparsity(model)) 104 | 105 | 106 | def fuse_conv_and_bn(conv, bn): 107 | # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/ 108 | 109 | # init 110 | fusedconv = nn.Conv2d(conv.in_channels, 111 | conv.out_channels, 112 | kernel_size=conv.kernel_size, 113 | stride=conv.stride, 114 | padding=conv.padding, 115 | groups=conv.groups, 116 | bias=True).requires_grad_(False).to(conv.weight.device) 117 | 118 | # prepare filters 119 | w_conv = conv.weight.clone().view(conv.out_channels, -1) 120 | w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var))) 121 | fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.size())) 122 | 123 | # prepare spatial bias 124 | b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias 125 | b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps)) 126 | fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn) 127 | 128 | return fusedconv 129 | 130 | 131 | def model_info(model, verbose=False): 132 | # Plots a line-by-line description of a PyTorch model 133 | n_p = sum(x.numel() for x in model.parameters()) # number parameters 134 | n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients 135 | if verbose: 136 | print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma')) 137 | for i, (name, p) in enumerate(model.named_parameters()): 138 | name = name.replace('module_list.', '') 139 | print('%5g %40s %9s %12g %20s %10.3g %10.3g' % 140 | (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std())) 141 | 142 | try: # FLOPS 143 | from thop import profile 144 | flops = profile(deepcopy(model), inputs=(torch.zeros(1, 3, 64, 64),), verbose=False)[0] / 1E9 * 2 145 | fs = ', %.1f GFLOPS' % (flops * 100) # 640x640 FLOPS 146 | except: 147 | fs = '' 148 | 149 | logger.info( 150 | 'Model Summary: %g layers, %g parameters, %g gradients%s' % (len(list(model.parameters())), n_p, n_g, fs)) 151 | 152 | 153 | def load_classifier(name='resnet101', n=2): 154 | # Loads a pretrained model reshaped to n-class output 155 | model = models.__dict__[name](pretrained=True) 156 | 157 | # Display model properties 158 | input_size = [3, 224, 224] 159 | input_space = 'RGB' 160 | input_range = [0, 1] 161 | mean = [0.485, 0.456, 0.406] 162 | std = [0.229, 0.224, 0.225] 163 | for x in ['input_size', 'input_space', 'input_range', 'mean', 'std']: 164 | print(x + ' =', eval(x)) 165 | 166 | # Reshape output to n classes 167 | filters = model.fc.weight.shape[1] 168 | model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True) 169 | model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True) 170 | model.fc.out_features = n 171 | return model 172 | 173 | 174 | def scale_img(img, ratio=1.0, same_shape=False): # img(16,3,256,416), r=ratio 175 | # scales img(bs,3,y,x) by ratio 176 | if ratio == 1.0: 177 | return img 178 | else: 179 | h, w = img.shape[2:] 180 | s = (int(h * ratio), int(w * ratio)) # new size 181 | img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize 182 | if not same_shape: # pad/crop img 183 | gs = 32 # (pixels) grid size 184 | h, w = [math.ceil(x * 
ratio / gs) * gs for x in (h, w)] 185 | return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean 186 | 187 | 188 | def copy_attr(a, b, include=(), exclude=()): 189 | # Copy attributes from b to a, options to only include [...] and to exclude [...] 190 | for k, v in b.__dict__.items(): 191 | if (len(include) and k not in include) or k.startswith('_') or k in exclude: 192 | continue 193 | else: 194 | setattr(a, k, v) 195 | 196 | 197 | class ModelEMA: 198 | """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models 199 | Keep a moving average of everything in the model state_dict (parameters and buffers). 200 | This is intended to allow functionality like 201 | https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage 202 | A smoothed version of the weights is necessary for some training schemes to perform well. 203 | This class is sensitive where it is initialized in the sequence of model init, 204 | GPU assignment and distributed training wrappers. 205 | """ 206 | 207 | def __init__(self, model, decay=0.9999, updates=0): 208 | # Create EMA 209 | self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA 210 | # if next(model.parameters()).device.type != 'cpu': 211 | # self.ema.half() # FP16 EMA 212 | self.updates = updates # number of EMA updates 213 | self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs) 214 | for p in self.ema.parameters(): 215 | p.requires_grad_(False) 216 | 217 | def update(self, model): 218 | # Update EMA parameters 219 | with torch.no_grad(): 220 | self.updates += 1 221 | d = self.decay(self.updates) 222 | 223 | msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict 224 | for k, v in self.ema.state_dict().items(): 225 | if v.dtype.is_floating_point: 226 | v *= d 227 | v += (1. - d) * msd[k].detach() 228 | 229 | def update_attr(self, model, include=(), exclude=('process_group', 'reducer')): 230 | # Update EMA attributes 231 | copy_attr(self.ema, model, include, exclude) 232 | -------------------------------------------------------------------------------- /yolov5/weights/IMSC/download.sh: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/USC-InfoLab/rddc2020/72cda97851fb6a48b5b9a55048ba38c890396d23/yolov5/weights/IMSC/download.sh -------------------------------------------------------------------------------- /yolov5/weights/download_weights.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Download common models 3 | 4 | python -c " 5 | from utils.google_utils import *; 6 | attempt_download('weights/yolov5s.pt'); 7 | attempt_download('weights/yolov5m.pt'); 8 | attempt_download('weights/yolov5l.pt'); 9 | attempt_download('weights/yolov5x.pt') 10 | " 11 | --------------------------------------------------------------------------------
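The `ModelEMA` class in `yolov5/utils/torch_utils.py` above is what `train.py` consumes: it calls `ema.update_attr(...)` before testing and evaluates/saves `ema.ema`. Below is a minimal, hedged sketch of how such an EMA wrapper is wired into a loop; the toy model and loop are illustrative stand-ins, not the repo's actual training code:

```python
import torch
import torch.nn as nn

from utils.torch_utils import ModelEMA  # assumes the yolov5/ directory is the working directory

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())  # stand-in for the detector
ema = ModelEMA(model)  # keeps an eval-mode, gradient-free smoothed copy of the weights
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):  # stand-in training loop
    x = torch.randn(4, 3, 64, 64)
    loss = model(x).mean()  # stand-in loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema.update(model)  # blend current weights into the EMA after every optimizer step

smoothed = ema.ema  # evaluate / checkpoint this copy, as train.py does via test.test(model=ema.ema)
```

The exponential ramp in `self.decay` means early updates lean heavily on the current weights, so the EMA is useful almost immediately instead of spending thousands of steps dominated by the randomly initialized copy.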