├── .github └── ISSUE_TEMPLATE │ ├── bug_report.md │ ├── custom.md │ └── feature_request.md ├── .gitignore ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── config └── yolov3.cfg ├── data ├── cars-in-singapore---1204012.jpg └── idd.names ├── desktop.ini ├── docs ├── 17110013-FYP-Phase02-Report.pdf ├── _config.yml └── index.md ├── itms-yolov3.py ├── util ├── __pycache__ │ ├── __init__.cpython-36.pyc │ ├── darknet.cpython-36.pyc │ ├── datasets.cpython-36.pyc │ ├── image_processor.cpython-36.pyc │ ├── image_processor.cpython-37.pyc │ ├── model.cpython-36.pyc │ ├── model.cpython-37.pyc │ ├── moduler.cpython-36.pyc │ ├── moduler.cpython-37.pyc │ ├── parser.cpython-36.pyc │ ├── parser.cpython-37.pyc │ ├── signal_lights.cpython-36.pyc │ ├── signal_lights.cpython-37.pyc │ ├── signal_switching.cpython-36.pyc │ ├── signal_switching.cpython-37.pyc │ ├── utill.cpython-36.pyc │ ├── utils.cpython-36.pyc │ ├── utils.cpython-37.pyc │ └── uutils.cpython-36.pyc ├── boot.py ├── dynamic_signal_switching.py ├── image_processor.py ├── itms-yolo-m4-01.py ├── itms-yolo.py ├── model.py ├── moduler.py ├── parser.py └── utils.py ├── vehicles-on-lanes ├── 130228082226-india-wealth-sports-cars-640x360.jpg ├── Road621.jpg ├── auto-majors-welcome-retail-sales-numbers-siam-to-stay-with-wholesale-numbers.jpg └── large_w9yk3mhHXYLAZxbD5k5tIvOT14QFKb2xL4w-mnYSfBA.jpg └── weights └── cars-in-singapore---1204012.jpg /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: Create a report to help us improve 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the bug** 11 | A clear and concise description of what the bug is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. Scroll down to '....' 18 | 4. See error 19 | 20 | **Expected behavior** 21 | A clear and concise description of what you expected to happen. 22 | 23 | **Screenshots** 24 | If applicable, add screenshots to help explain your problem. 25 | 26 | **Desktop (please complete the following information):** 27 | - OS: [e.g. iOS] 28 | - Browser [e.g. chrome, safari] 29 | - Version [e.g. 22] 30 | 31 | **Smartphone (please complete the following information):** 32 | - Device: [e.g. iPhone6] 33 | - OS: [e.g. iOS8.1] 34 | - Browser [e.g. stock browser, safari] 35 | - Version [e.g. 22] 36 | 37 | **Additional context** 38 | Add any other context about the problem here. 39 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/custom.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Custom issue template 3 | about: Describe this issue template's purpose here. 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | 11 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature request 3 | about: Suggest an idea for this project 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Is your feature request related to a problem? Please describe.** 11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 12 | 13 | **Describe the solution you'd like** 14 | A clear and concise description of what you want to happen. 
15 | 16 | **Describe alternatives you've considered** 17 | A clear and concise description of any alternative solutions or features you've considered. 18 | 19 | **Additional context** 20 | Add any other context or screenshots about the feature request here. 21 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | 2 | weights/yolov3.weights 3 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as 6 | contributors and maintainers pledge to making participation in our project and 7 | our community a harassment-free experience for everyone, regardless of age, body 8 | size, disability, ethnicity, sex characteristics, gender identity and expression, 9 | level of experience, education, socio-economic status, nationality, personal 10 | appearance, race, religion, or sexual identity and orientation. 11 | 12 | ## Our Standards 13 | 14 | Examples of behavior that contributes to creating a positive environment 15 | include: 16 | 17 | * Using welcoming and inclusive language 18 | * Being respectful of differing viewpoints and experiences 19 | * Gracefully accepting constructive criticism 20 | * Focusing on what is best for the community 21 | * Showing empathy towards other community members 22 | 23 | Examples of unacceptable behavior by participants include: 24 | 25 | * The use of sexualized language or imagery and unwelcome sexual attention or 26 | advances 27 | * Trolling, insulting/derogatory comments, and personal or political attacks 28 | * Public or private harassment 29 | * Publishing others' private information, such as a physical or electronic 30 | address, without explicit permission 31 | * Other conduct which could reasonably be considered inappropriate in a 32 | professional setting 33 | 34 | ## Our Responsibilities 35 | 36 | Project maintainers are responsible for clarifying the standards of acceptable 37 | behavior and are expected to take appropriate and fair corrective action in 38 | response to any instances of unacceptable behavior. 39 | 40 | Project maintainers have the right and responsibility to remove, edit, or 41 | reject comments, commits, code, wiki edits, issues, and other contributions 42 | that are not aligned to this Code of Conduct, or to ban temporarily or 43 | permanently any contributor for other behaviors that they deem inappropriate, 44 | threatening, offensive, or harmful. 45 | 46 | ## Scope 47 | 48 | This Code of Conduct applies both within project spaces and in public spaces 49 | when an individual is representing the project or its community. Examples of 50 | representing a project or community include using an official project e-mail 51 | address, posting via an official social media account, or acting as an appointed 52 | representative at an online or offline event. Representation of a project may be 53 | further defined and clarified by project maintainers. 54 | 55 | ## Enforcement 56 | 57 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 58 | reported by contacting the project team at 17110013@hicet.ac.in. 
All 59 | complaints will be reviewed and investigated and will result in a response that 60 | is deemed necessary and appropriate to the circumstances. The project team is 61 | obligated to maintain confidentiality with regard to the reporter of an incident. 62 | Further details of specific enforcement policies may be posted separately. 63 | 64 | Project maintainers who do not follow or enforce the Code of Conduct in good 65 | faith may face temporary or permanent repercussions as determined by other 66 | members of the project's leadership. 67 | 68 | ## Attribution 69 | 70 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, 71 | available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html 72 | 73 | [homepage]: https://www.contributor-covenant.org 74 | 75 | For answers to common questions about this code of conduct, see 76 | https://www.contributor-covenant.org/faq 77 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing 2 | 3 | When contributing to this repository, please first discuss the change you wish to make via issue, 4 | email, or any other method with the owners of this repository before making a change. 5 | 6 | Please note we have a code of conduct, please follow it in all your interactions with the project. 7 | 8 | ## Pull Request Process 9 | 10 | 1. Ensure any install or build dependencies are removed before the end of the layer when doing a 11 | build. 12 | 2. Update the README.md with details of changes to the interface, this includes new environment 13 | variables, exposed ports, useful file locations and container parameters. 14 | 3. Increase the version numbers in any examples files and the README.md to the new version that this 15 | Pull Request would represent. The versioning scheme we use is [SemVer](http://semver.org/). 16 | 4. You may merge the Pull Request in once you have the sign-off of two other developers, or if you 17 | do not have permission to do that, you may request the second reviewer to merge it for you. 18 | 19 | ## Code of Conduct 20 | 21 | ### Our Pledge 22 | 23 | In the interest of fostering an open and welcoming environment, we as 24 | contributors and maintainers pledge to making participation in our project and 25 | our community a harassment-free experience for everyone, regardless of age, body 26 | size, disability, ethnicity, gender identity and expression, level of experience, 27 | nationality, personal appearance, race, religion, or sexual identity and 28 | orientation. 
29 | 30 | ### Our Standards 31 | 32 | Examples of behavior that contributes to creating a positive environment 33 | include: 34 | 35 | * Using welcoming and inclusive language 36 | * Being respectful of differing viewpoints and experiences 37 | * Gracefully accepting constructive criticism 38 | * Focusing on what is best for the community 39 | * Showing empathy towards other community members 40 | 41 | Examples of unacceptable behavior by participants include: 42 | 43 | * The use of sexualized language or imagery and unwelcome sexual attention or 44 | advances 45 | * Trolling, insulting/derogatory comments, and personal or political attacks 46 | * Public or private harassment 47 | * Publishing others' private information, such as a physical or electronic 48 | address, without explicit permission 49 | * Other conduct which could reasonably be considered inappropriate in a 50 | professional setting 51 | 52 | ### Our Responsibilities 53 | 54 | Project maintainers are responsible for clarifying the standards of acceptable 55 | behavior and are expected to take appropriate and fair corrective action in 56 | response to any instances of unacceptable behavior. 57 | 58 | Project maintainers have the right and responsibility to remove, edit, or 59 | reject comments, commits, code, wiki edits, issues, and other contributions 60 | that are not aligned to this Code of Conduct, or to ban temporarily or 61 | permanently any contributor for other behaviors that they deem inappropriate, 62 | threatening, offensive, or harmful. 63 | 64 | ### Scope 65 | 66 | This Code of Conduct applies both within project spaces and in public spaces 67 | when an individual is representing the project or its community. Examples of 68 | representing a project or community include using an official project e-mail 69 | address, posting via an official social media account, or acting as an appointed 70 | representative at an online or offline event. Representation of a project may be 71 | further defined and clarified by project maintainers. 72 | 73 | ### Enforcement 74 | 75 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 76 | reported by contacting the project team at [INSERT EMAIL ADDRESS]. All 77 | complaints will be reviewed and investigated and will result in a response that 78 | is deemed necessary and appropriate to the circumstances. The project team is 79 | obligated to maintain confidentiality with regard to the reporter of an incident. 80 | Further details of specific enforcement policies may be posted separately. 81 | 82 | Project maintainers who do not follow or enforce the Code of Conduct in good 83 | faith may face temporary or permanent repercussions as determined by other 84 | members of the project's leadership. 85 | 86 | ### Attribution 87 | 88 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, 89 | available at [http://contributor-covenant.org/version/1/4][version] 90 | 91 | [homepage]: http://contributor-covenant.org 92 | [version]: http://contributor-covenant.org/version/1/4/ 93 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Creative Commons Legal Code 2 | 3 | CC0 1.0 Universal 4 | 5 | CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE 6 | LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN 7 | ATTORNEY-CLIENT RELATIONSHIP. 
CREATIVE COMMONS PROVIDES THIS 8 | INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES 9 | REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS 10 | PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM 11 | THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED 12 | HEREUNDER. 13 | 14 | Statement of Purpose 15 | 16 | The laws of most jurisdictions throughout the world automatically confer 17 | exclusive Copyright and Related Rights (defined below) upon the creator 18 | and subsequent owner(s) (each and all, an "owner") of an original work of 19 | authorship and/or a database (each, a "Work"). 20 | 21 | Certain owners wish to permanently relinquish those rights to a Work for 22 | the purpose of contributing to a commons of creative, cultural and 23 | scientific works ("Commons") that the public can reliably and without fear 24 | of later claims of infringement build upon, modify, incorporate in other 25 | works, reuse and redistribute as freely as possible in any form whatsoever 26 | and for any purposes, including without limitation commercial purposes. 27 | These owners may contribute to the Commons to promote the ideal of a free 28 | culture and the further production of creative, cultural and scientific 29 | works, or to gain reputation or greater distribution for their Work in 30 | part through the use and efforts of others. 31 | 32 | For these and/or other purposes and motivations, and without any 33 | expectation of additional consideration or compensation, the person 34 | associating CC0 with a Work (the "Affirmer"), to the extent that he or she 35 | is an owner of Copyright and Related Rights in the Work, voluntarily 36 | elects to apply CC0 to the Work and publicly distribute the Work under its 37 | terms, with knowledge of his or her Copyright and Related Rights in the 38 | Work and the meaning and intended legal effect of CC0 on those rights. 39 | 40 | 1. Copyright and Related Rights. A Work made available under CC0 may be 41 | protected by copyright and related or neighboring rights ("Copyright and 42 | Related Rights"). Copyright and Related Rights include, but are not 43 | limited to, the following: 44 | 45 | i. the right to reproduce, adapt, distribute, perform, display, 46 | communicate, and translate a Work; 47 | ii. moral rights retained by the original author(s) and/or performer(s); 48 | iii. publicity and privacy rights pertaining to a person's image or 49 | likeness depicted in a Work; 50 | iv. rights protecting against unfair competition in regards to a Work, 51 | subject to the limitations in paragraph 4(a), below; 52 | v. rights protecting the extraction, dissemination, use and reuse of data 53 | in a Work; 54 | vi. database rights (such as those arising under Directive 96/9/EC of the 55 | European Parliament and of the Council of 11 March 1996 on the legal 56 | protection of databases, and under any national implementation 57 | thereof, including any amended or successor version of such 58 | directive); and 59 | vii. other similar, equivalent or corresponding rights throughout the 60 | world based on applicable law or treaty, and any national 61 | implementations thereof. 62 | 63 | 2. Waiver. 
To the greatest extent permitted by, but not in contravention 64 | of, applicable law, Affirmer hereby overtly, fully, permanently, 65 | irrevocably and unconditionally waives, abandons, and surrenders all of 66 | Affirmer's Copyright and Related Rights and associated claims and causes 67 | of action, whether now known or unknown (including existing as well as 68 | future claims and causes of action), in the Work (i) in all territories 69 | worldwide, (ii) for the maximum duration provided by applicable law or 70 | treaty (including future time extensions), (iii) in any current or future 71 | medium and for any number of copies, and (iv) for any purpose whatsoever, 72 | including without limitation commercial, advertising or promotional 73 | purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each 74 | member of the public at large and to the detriment of Affirmer's heirs and 75 | successors, fully intending that such Waiver shall not be subject to 76 | revocation, rescission, cancellation, termination, or any other legal or 77 | equitable action to disrupt the quiet enjoyment of the Work by the public 78 | as contemplated by Affirmer's express Statement of Purpose. 79 | 80 | 3. Public License Fallback. Should any part of the Waiver for any reason 81 | be judged legally invalid or ineffective under applicable law, then the 82 | Waiver shall be preserved to the maximum extent permitted taking into 83 | account Affirmer's express Statement of Purpose. In addition, to the 84 | extent the Waiver is so judged Affirmer hereby grants to each affected 85 | person a royalty-free, non transferable, non sublicensable, non exclusive, 86 | irrevocable and unconditional license to exercise Affirmer's Copyright and 87 | Related Rights in the Work (i) in all territories worldwide, (ii) for the 88 | maximum duration provided by applicable law or treaty (including future 89 | time extensions), (iii) in any current or future medium and for any number 90 | of copies, and (iv) for any purpose whatsoever, including without 91 | limitation commercial, advertising or promotional purposes (the 92 | "License"). The License shall be deemed effective as of the date CC0 was 93 | applied by Affirmer to the Work. Should any part of the License for any 94 | reason be judged legally invalid or ineffective under applicable law, such 95 | partial invalidity or ineffectiveness shall not invalidate the remainder 96 | of the License, and in such case Affirmer hereby affirms that he or she 97 | will not (i) exercise any of his or her remaining Copyright and Related 98 | Rights in the Work or (ii) assert any associated claims and causes of 99 | action with respect to the Work, in either case contrary to Affirmer's 100 | express Statement of Purpose. 101 | 102 | 4. Limitations and Disclaimers. 103 | 104 | a. No trademark or patent rights held by Affirmer are waived, abandoned, 105 | surrendered, licensed or otherwise affected by this document. 106 | b. Affirmer offers the Work as-is and makes no representations or 107 | warranties of any kind concerning the Work, express, implied, 108 | statutory or otherwise, including without limitation warranties of 109 | title, merchantability, fitness for a particular purpose, non 110 | infringement, or the absence of latent or other defects, accuracy, or 111 | the present or absence of errors, whether or not discoverable, all to 112 | the greatest extent permissible under applicable law. 113 | c. 
Affirmer disclaims responsibility for clearing rights of other persons 114 | that may apply to the Work or any use thereof, including without 115 | limitation any person's Copyright and Related Rights in the Work. 116 | Further, Affirmer disclaims responsibility for obtaining any necessary 117 | consents, permissions or other rights required for any use of the 118 | Work. 119 | d. Affirmer understands and acknowledges that Creative Commons is not a 120 | party to this document and has no duty or obligation with respect to 121 | this CC0 or use of the Work. 122 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # **FINAL YEAR PROJECT :** 2 | 3 | 4 | # **Intelligent Traffic Management System using Machine Learning Model** 5 | 6 | ## Team Members : Ashwin.G, Anandhakumar.P, Avinash.V, Dinakaran.K.P 7 | 8 | --- 9 | 10 | ## **Main Modules** : 11 | 12 | **1. Machine Learning Model Development** ✅ 13 | >1. Installing underlying Framework 14 | >2. Defining the Dataset 15 | >3. Configuring the Dataset according to YOLO model 16 | >4. Converting the chosen dataset to YOLO Format 17 | 18 | **2. Training - YOLO Machine Learning Model with IDD Dataset** ✅ 19 | >1. Write custom training config 20 | >2. Start training the model after defining the class files 21 | >3. Calculating mAP for our model with IDD Dataset 22 | >4. Creating weights for our model 23 | 24 | **3. YOLO Machine Learning Model - Deployment** ⏳ 25 | >1. Non-Max Suppression 26 | >2. Vehicle Detection 27 | >3. Counting number of vehicles present. 28 | 29 | **4. Dynamic Signal Switching** 🚦 30 | >1. Average Signal Open/Close Time 31 | >2. Lane Open/Close Function 32 | >3. Dynamic to Static at abnormal conditions. 
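**Example usage** — a sketch based only on the defaults defined in `arg_parse()` inside `itms-yolov3.py`; it assumes the pretrained `yolov3.weights` file (excluded from the repo by `.gitignore`) has been downloaded into `weights/`:

```
python itms-yolov3.py --images vehicles-on-lanes --cfg config/yolov3.cfg --weights weights/yolov3.weights --confidence_score 0.3 --nms_thresh 0.3 --reso 416
```

The script prints a per-lane vehicle count, picks the lane with the densest traffic, and opens it via `switch_signal()` for the number of seconds returned by `avg_signal_oc_time()`.

Module 3's Non-Max Suppression step is handled by `non_max_suppression()` in `util/utils.py`; the snippet below is only a minimal, generic sketch of the idea (greedy IoU-based filtering of overlapping detections), not the project's implementation:

```python
# Generic greedy NMS sketch: boxes are (x1, y1, x2, y2, score) tuples and
# iou_thresh plays the role of the script's --nms_thresh argument.
def iou(a, b):
    """Intersection-over-union of two corner-format boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, iou_thresh=0.3):
    """Keep the highest-scoring box, drop boxes that overlap it too much, repeat."""
    keep = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, kept) <= iou_thresh for kept in keep):
            keep.append(box)
    return keep
```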
33 | --- 34 | -------------------------------------------------------------------------------- /config/yolov3.cfg: -------------------------------------------------------------------------------- 1 | [net] 2 | # Testing 3 | #batch=1 4 | #subdivisions=1 5 | # Training 6 | batch=16 7 | subdivisions=1 8 | width=416 9 | height=416 10 | channels=3 11 | momentum=0.9 12 | decay=0.0005 13 | angle=0 14 | saturation = 1.5 15 | exposure = 1.5 16 | hue=.1 17 | 18 | learning_rate=0.001 19 | burn_in=1000 20 | max_batches = 500200 21 | policy=steps 22 | steps=400000,450000 23 | scales=.1,.1 24 | 25 | [convolutional] 26 | batch_normalize=1 27 | filters=32 28 | size=3 29 | stride=1 30 | pad=1 31 | activation=leaky 32 | 33 | # Downsample 34 | 35 | [convolutional] 36 | batch_normalize=1 37 | filters=64 38 | size=3 39 | stride=2 40 | pad=1 41 | activation=leaky 42 | 43 | [convolutional] 44 | batch_normalize=1 45 | filters=32 46 | size=1 47 | stride=1 48 | pad=1 49 | activation=leaky 50 | 51 | [convolutional] 52 | batch_normalize=1 53 | filters=64 54 | size=3 55 | stride=1 56 | pad=1 57 | activation=leaky 58 | 59 | [shortcut] 60 | from=-3 61 | activation=linear 62 | 63 | # Downsample 64 | 65 | [convolutional] 66 | batch_normalize=1 67 | filters=128 68 | size=3 69 | stride=2 70 | pad=1 71 | activation=leaky 72 | 73 | [convolutional] 74 | batch_normalize=1 75 | filters=64 76 | size=1 77 | stride=1 78 | pad=1 79 | activation=leaky 80 | 81 | [convolutional] 82 | batch_normalize=1 83 | filters=128 84 | size=3 85 | stride=1 86 | pad=1 87 | activation=leaky 88 | 89 | [shortcut] 90 | from=-3 91 | activation=linear 92 | 93 | [convolutional] 94 | batch_normalize=1 95 | filters=64 96 | size=1 97 | stride=1 98 | pad=1 99 | activation=leaky 100 | 101 | [convolutional] 102 | batch_normalize=1 103 | filters=128 104 | size=3 105 | stride=1 106 | pad=1 107 | activation=leaky 108 | 109 | [shortcut] 110 | from=-3 111 | activation=linear 112 | 113 | # Downsample 114 | 115 | [convolutional] 116 | batch_normalize=1 117 | filters=256 118 | size=3 119 | stride=2 120 | pad=1 121 | activation=leaky 122 | 123 | [convolutional] 124 | batch_normalize=1 125 | filters=128 126 | size=1 127 | stride=1 128 | pad=1 129 | activation=leaky 130 | 131 | [convolutional] 132 | batch_normalize=1 133 | filters=256 134 | size=3 135 | stride=1 136 | pad=1 137 | activation=leaky 138 | 139 | [shortcut] 140 | from=-3 141 | activation=linear 142 | 143 | [convolutional] 144 | batch_normalize=1 145 | filters=128 146 | size=1 147 | stride=1 148 | pad=1 149 | activation=leaky 150 | 151 | [convolutional] 152 | batch_normalize=1 153 | filters=256 154 | size=3 155 | stride=1 156 | pad=1 157 | activation=leaky 158 | 159 | [shortcut] 160 | from=-3 161 | activation=linear 162 | 163 | [convolutional] 164 | batch_normalize=1 165 | filters=128 166 | size=1 167 | stride=1 168 | pad=1 169 | activation=leaky 170 | 171 | [convolutional] 172 | batch_normalize=1 173 | filters=256 174 | size=3 175 | stride=1 176 | pad=1 177 | activation=leaky 178 | 179 | [shortcut] 180 | from=-3 181 | activation=linear 182 | 183 | [convolutional] 184 | batch_normalize=1 185 | filters=128 186 | size=1 187 | stride=1 188 | pad=1 189 | activation=leaky 190 | 191 | [convolutional] 192 | batch_normalize=1 193 | filters=256 194 | size=3 195 | stride=1 196 | pad=1 197 | activation=leaky 198 | 199 | [shortcut] 200 | from=-3 201 | activation=linear 202 | 203 | 204 | [convolutional] 205 | batch_normalize=1 206 | filters=128 207 | size=1 208 | stride=1 209 | pad=1 210 | activation=leaky 211 | 212 | 
[convolutional] 213 | batch_normalize=1 214 | filters=256 215 | size=3 216 | stride=1 217 | pad=1 218 | activation=leaky 219 | 220 | [shortcut] 221 | from=-3 222 | activation=linear 223 | 224 | [convolutional] 225 | batch_normalize=1 226 | filters=128 227 | size=1 228 | stride=1 229 | pad=1 230 | activation=leaky 231 | 232 | [convolutional] 233 | batch_normalize=1 234 | filters=256 235 | size=3 236 | stride=1 237 | pad=1 238 | activation=leaky 239 | 240 | [shortcut] 241 | from=-3 242 | activation=linear 243 | 244 | [convolutional] 245 | batch_normalize=1 246 | filters=128 247 | size=1 248 | stride=1 249 | pad=1 250 | activation=leaky 251 | 252 | [convolutional] 253 | batch_normalize=1 254 | filters=256 255 | size=3 256 | stride=1 257 | pad=1 258 | activation=leaky 259 | 260 | [shortcut] 261 | from=-3 262 | activation=linear 263 | 264 | [convolutional] 265 | batch_normalize=1 266 | filters=128 267 | size=1 268 | stride=1 269 | pad=1 270 | activation=leaky 271 | 272 | [convolutional] 273 | batch_normalize=1 274 | filters=256 275 | size=3 276 | stride=1 277 | pad=1 278 | activation=leaky 279 | 280 | [shortcut] 281 | from=-3 282 | activation=linear 283 | 284 | # Downsample 285 | 286 | [convolutional] 287 | batch_normalize=1 288 | filters=512 289 | size=3 290 | stride=2 291 | pad=1 292 | activation=leaky 293 | 294 | [convolutional] 295 | batch_normalize=1 296 | filters=256 297 | size=1 298 | stride=1 299 | pad=1 300 | activation=leaky 301 | 302 | [convolutional] 303 | batch_normalize=1 304 | filters=512 305 | size=3 306 | stride=1 307 | pad=1 308 | activation=leaky 309 | 310 | [shortcut] 311 | from=-3 312 | activation=linear 313 | 314 | 315 | [convolutional] 316 | batch_normalize=1 317 | filters=256 318 | size=1 319 | stride=1 320 | pad=1 321 | activation=leaky 322 | 323 | [convolutional] 324 | batch_normalize=1 325 | filters=512 326 | size=3 327 | stride=1 328 | pad=1 329 | activation=leaky 330 | 331 | [shortcut] 332 | from=-3 333 | activation=linear 334 | 335 | 336 | [convolutional] 337 | batch_normalize=1 338 | filters=256 339 | size=1 340 | stride=1 341 | pad=1 342 | activation=leaky 343 | 344 | [convolutional] 345 | batch_normalize=1 346 | filters=512 347 | size=3 348 | stride=1 349 | pad=1 350 | activation=leaky 351 | 352 | [shortcut] 353 | from=-3 354 | activation=linear 355 | 356 | 357 | [convolutional] 358 | batch_normalize=1 359 | filters=256 360 | size=1 361 | stride=1 362 | pad=1 363 | activation=leaky 364 | 365 | [convolutional] 366 | batch_normalize=1 367 | filters=512 368 | size=3 369 | stride=1 370 | pad=1 371 | activation=leaky 372 | 373 | [shortcut] 374 | from=-3 375 | activation=linear 376 | 377 | [convolutional] 378 | batch_normalize=1 379 | filters=256 380 | size=1 381 | stride=1 382 | pad=1 383 | activation=leaky 384 | 385 | [convolutional] 386 | batch_normalize=1 387 | filters=512 388 | size=3 389 | stride=1 390 | pad=1 391 | activation=leaky 392 | 393 | [shortcut] 394 | from=-3 395 | activation=linear 396 | 397 | 398 | [convolutional] 399 | batch_normalize=1 400 | filters=256 401 | size=1 402 | stride=1 403 | pad=1 404 | activation=leaky 405 | 406 | [convolutional] 407 | batch_normalize=1 408 | filters=512 409 | size=3 410 | stride=1 411 | pad=1 412 | activation=leaky 413 | 414 | [shortcut] 415 | from=-3 416 | activation=linear 417 | 418 | 419 | [convolutional] 420 | batch_normalize=1 421 | filters=256 422 | size=1 423 | stride=1 424 | pad=1 425 | activation=leaky 426 | 427 | [convolutional] 428 | batch_normalize=1 429 | filters=512 430 | size=3 431 | stride=1 432 | 
pad=1 433 | activation=leaky 434 | 435 | [shortcut] 436 | from=-3 437 | activation=linear 438 | 439 | [convolutional] 440 | batch_normalize=1 441 | filters=256 442 | size=1 443 | stride=1 444 | pad=1 445 | activation=leaky 446 | 447 | [convolutional] 448 | batch_normalize=1 449 | filters=512 450 | size=3 451 | stride=1 452 | pad=1 453 | activation=leaky 454 | 455 | [shortcut] 456 | from=-3 457 | activation=linear 458 | 459 | # Downsample 460 | 461 | [convolutional] 462 | batch_normalize=1 463 | filters=1024 464 | size=3 465 | stride=2 466 | pad=1 467 | activation=leaky 468 | 469 | [convolutional] 470 | batch_normalize=1 471 | filters=512 472 | size=1 473 | stride=1 474 | pad=1 475 | activation=leaky 476 | 477 | [convolutional] 478 | batch_normalize=1 479 | filters=1024 480 | size=3 481 | stride=1 482 | pad=1 483 | activation=leaky 484 | 485 | [shortcut] 486 | from=-3 487 | activation=linear 488 | 489 | [convolutional] 490 | batch_normalize=1 491 | filters=512 492 | size=1 493 | stride=1 494 | pad=1 495 | activation=leaky 496 | 497 | [convolutional] 498 | batch_normalize=1 499 | filters=1024 500 | size=3 501 | stride=1 502 | pad=1 503 | activation=leaky 504 | 505 | [shortcut] 506 | from=-3 507 | activation=linear 508 | 509 | [convolutional] 510 | batch_normalize=1 511 | filters=512 512 | size=1 513 | stride=1 514 | pad=1 515 | activation=leaky 516 | 517 | [convolutional] 518 | batch_normalize=1 519 | filters=1024 520 | size=3 521 | stride=1 522 | pad=1 523 | activation=leaky 524 | 525 | [shortcut] 526 | from=-3 527 | activation=linear 528 | 529 | [convolutional] 530 | batch_normalize=1 531 | filters=512 532 | size=1 533 | stride=1 534 | pad=1 535 | activation=leaky 536 | 537 | [convolutional] 538 | batch_normalize=1 539 | filters=1024 540 | size=3 541 | stride=1 542 | pad=1 543 | activation=leaky 544 | 545 | [shortcut] 546 | from=-3 547 | activation=linear 548 | 549 | ###################### 550 | 551 | [convolutional] 552 | batch_normalize=1 553 | filters=512 554 | size=1 555 | stride=1 556 | pad=1 557 | activation=leaky 558 | 559 | [convolutional] 560 | batch_normalize=1 561 | size=3 562 | stride=1 563 | pad=1 564 | filters=1024 565 | activation=leaky 566 | 567 | [convolutional] 568 | batch_normalize=1 569 | filters=512 570 | size=1 571 | stride=1 572 | pad=1 573 | activation=leaky 574 | 575 | [convolutional] 576 | batch_normalize=1 577 | size=3 578 | stride=1 579 | pad=1 580 | filters=1024 581 | activation=leaky 582 | 583 | [convolutional] 584 | batch_normalize=1 585 | filters=512 586 | size=1 587 | stride=1 588 | pad=1 589 | activation=leaky 590 | 591 | [convolutional] 592 | batch_normalize=1 593 | size=3 594 | stride=1 595 | pad=1 596 | filters=1024 597 | activation=leaky 598 | 599 | [convolutional] 600 | size=1 601 | stride=1 602 | pad=1 603 | filters=255 604 | activation=linear 605 | 606 | 607 | [yolo] 608 | mask = 6,7,8 609 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326 610 | classes=80 611 | num=9 612 | jitter=.3 613 | ignore_thresh = .7 614 | truth_thresh = 1 615 | random=1 616 | 617 | 618 | [route] 619 | layers = -4 620 | 621 | [convolutional] 622 | batch_normalize=1 623 | filters=256 624 | size=1 625 | stride=1 626 | pad=1 627 | activation=leaky 628 | 629 | [upsample] 630 | stride=2 631 | 632 | [route] 633 | layers = -1, 61 634 | 635 | 636 | 637 | [convolutional] 638 | batch_normalize=1 639 | filters=256 640 | size=1 641 | stride=1 642 | pad=1 643 | activation=leaky 644 | 645 | [convolutional] 646 | batch_normalize=1 647 | size=3 648 | stride=1 
649 | pad=1 650 | filters=512 651 | activation=leaky 652 | 653 | [convolutional] 654 | batch_normalize=1 655 | filters=256 656 | size=1 657 | stride=1 658 | pad=1 659 | activation=leaky 660 | 661 | [convolutional] 662 | batch_normalize=1 663 | size=3 664 | stride=1 665 | pad=1 666 | filters=512 667 | activation=leaky 668 | 669 | [convolutional] 670 | batch_normalize=1 671 | filters=256 672 | size=1 673 | stride=1 674 | pad=1 675 | activation=leaky 676 | 677 | [convolutional] 678 | batch_normalize=1 679 | size=3 680 | stride=1 681 | pad=1 682 | filters=512 683 | activation=leaky 684 | 685 | [convolutional] 686 | size=1 687 | stride=1 688 | pad=1 689 | filters=255 690 | activation=linear 691 | 692 | 693 | [yolo] 694 | mask = 3,4,5 695 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326 696 | classes=80 697 | num=9 698 | jitter=.3 699 | ignore_thresh = .7 700 | truth_thresh = 1 701 | random=1 702 | 703 | 704 | 705 | [route] 706 | layers = -4 707 | 708 | [convolutional] 709 | batch_normalize=1 710 | filters=128 711 | size=1 712 | stride=1 713 | pad=1 714 | activation=leaky 715 | 716 | [upsample] 717 | stride=2 718 | 719 | [route] 720 | layers = -1, 36 721 | 722 | 723 | 724 | [convolutional] 725 | batch_normalize=1 726 | filters=128 727 | size=1 728 | stride=1 729 | pad=1 730 | activation=leaky 731 | 732 | [convolutional] 733 | batch_normalize=1 734 | size=3 735 | stride=1 736 | pad=1 737 | filters=256 738 | activation=leaky 739 | 740 | [convolutional] 741 | batch_normalize=1 742 | filters=128 743 | size=1 744 | stride=1 745 | pad=1 746 | activation=leaky 747 | 748 | [convolutional] 749 | batch_normalize=1 750 | size=3 751 | stride=1 752 | pad=1 753 | filters=256 754 | activation=leaky 755 | 756 | [convolutional] 757 | batch_normalize=1 758 | filters=128 759 | size=1 760 | stride=1 761 | pad=1 762 | activation=leaky 763 | 764 | [convolutional] 765 | batch_normalize=1 766 | size=3 767 | stride=1 768 | pad=1 769 | filters=256 770 | activation=leaky 771 | 772 | [convolutional] 773 | size=1 774 | stride=1 775 | pad=1 776 | filters=255 777 | activation=linear 778 | 779 | 780 | [yolo] 781 | mask = 0,1,2 782 | anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326 783 | classes=80 784 | num=9 785 | jitter=.3 786 | ignore_thresh = .7 787 | truth_thresh = 1 788 | random=1 789 | -------------------------------------------------------------------------------- /data/cars-in-singapore---1204012.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/data/cars-in-singapore---1204012.jpg -------------------------------------------------------------------------------- /data/idd.names: -------------------------------------------------------------------------------- 1 | person 2 | bicycle 3 | car 4 | motorbike 5 | aeroplane 6 | bus 7 | train 8 | truck 9 | boat 10 | traffic light 11 | fire hydrant 12 | stop sign 13 | parking meter 14 | bench 15 | bird 16 | cat 17 | dog 18 | horse 19 | sheep 20 | cow 21 | elephant 22 | bear 23 | zebra 24 | giraffe 25 | backpack 26 | umbrella 27 | handbag 28 | tie 29 | suitcase 30 | frisbee 31 | skis 32 | snowboard 33 | sports ball 34 | kite 35 | baseball bat 36 | baseball glove 37 | skateboard 38 | surfboard 39 | tennis racket 40 | bottle 41 | wine glass 42 | cup 43 | fork 44 | knife 45 | spoon 46 | bowl 47 | banana 48 | apple 49 | sandwich 50 | orange 51 | 
broccoli 52 | carrot 53 | hot dog 54 | pizza 55 | donut 56 | cake 57 | chair 58 | sofa 59 | pottedplant 60 | bed 61 | diningtable 62 | toilet 63 | tvmonitor 64 | laptop 65 | mouse 66 | remote 67 | keyboard 68 | cell phone 69 | microwave 70 | oven 71 | toaster 72 | sink 73 | refrigerator 74 | book 75 | clock 76 | vase 77 | scissors 78 | teddy bear 79 | hair drier 80 | toothbrush 81 | -------------------------------------------------------------------------------- /desktop.ini: -------------------------------------------------------------------------------- 1 | [.ShellClassInfo] 2 | IconResource=C:\Users\Ashwin Gounder\3D Objects\traffic_lights__2_.ico,0 3 | [ViewState] 4 | Mode= 5 | Vid= 6 | FolderType=Generic 7 | -------------------------------------------------------------------------------- /docs/17110013-FYP-Phase02-Report.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/docs/17110013-FYP-Phase02-Report.pdf -------------------------------------------------------------------------------- /docs/_config.yml: -------------------------------------------------------------------------------- 1 | theme: jekyll-theme-cayman -------------------------------------------------------------------------------- /docs/index.md: -------------------------------------------------------------------------------- 1 | # Welcome to ITMS Homepage 2 | 3 | You can use the [editor on GitHub](https://github.com/FYP-ITMS/Intelligent-Traffic-Management-System-Using-ML-YOLO/edit/master/docs/index.md) to maintain and preview the content for your website in Markdown files. 4 | 5 | Whenever you commit to this repository, GitHub Pages will run [Jekyll](https://jekyllrb.com/) to rebuild the pages in your site, from the content in your Markdown files. 6 | 7 | ### Markdown 8 | 9 | Markdown is a lightweight and easy-to-use syntax for styling your writing. It includes conventions for 10 | 11 | ```markdown 12 | Syntax highlighted code block 13 | 14 | # Header 1 15 | ## Header 2 16 | ### Header 3 17 | 18 | - Bulleted 19 | - List 20 | 21 | 1. Numbered 22 | 2. List 23 | 24 | **Bold** and _Italic_ and `Code` text 25 | 26 | [Link](url) and ![Image](src) 27 | ``` 28 | 29 | For more details see [GitHub Flavored Markdown](https://guides.github.com/features/mastering-markdown/). 30 | 31 | ### Jekyll Themes 32 | 33 | Your Pages site will use the layout and styles from the Jekyll theme you have selected in your [repository settings](https://github.com/FYP-ITMS/Intelligent-Traffic-Management-System-Using-ML-YOLO/settings). The name of this theme is saved in the Jekyll `_config.yml` configuration file. 34 | 35 | ### Support or Contact 36 | 37 | Having trouble with Pages? Check out our [documentation](https://docs.github.com/categories/github-pages-basics/) or [contact support](https://support.github.com/contact) and we’ll help you sort it out. 
38 | -------------------------------------------------------------------------------- /itms-yolov3.py: -------------------------------------------------------------------------------- 1 | '''*** Import Section ***''' 2 | from __future__ import division # to allow compatibility of code between Python 2.x and 3.x with minimal overhead 3 | from collections import Counter # Counter class for counting hashable objects (used for per-class vehicle tallies) 4 | import argparse # to define arguments to the program in a user-friendly way 5 | import os # provides functions to interact with the local file system 6 | import os.path as osp # provides a range of methods to manipulate files and directories 7 | import pickle as pkl # to implement binary protocols for serializing and de-serializing object structures 8 | import pandas as pd # popular data-analysis library for machine learning 9 | import time # for time-related Python functions 10 | import sys # provides access to variables used or maintained by the interpreter 11 | import torch # machine learning library for tensor and neural-network computations 12 | from torch.autograd import Variable # autograd wrapper for tensors, used for automatic differentiation 13 | import cv2 # OpenCV library to carry out computer-vision tasks 14 | import emoji # to render emoji (e.g. the traffic-light icon) in console output 15 | import warnings # to manage warnings that are displayed during execution 16 | warnings.filterwarnings( 17 | 'ignore') # to ignore warning messages during execution 18 | print('\033[1m' + '\033[91m' + "Kickstarting YOLO...\n") 19 | from util.parser import load_classes # imports the load_classes function from util/parser.py 20 | from util.model import Darknet # to load weights into our model for vehicle detection 21 | from util.image_processor import preparing_image # to resize input images into YOLO format before passing them to the model 22 | from util.utils import non_max_suppression # to perform non-max suppression on the detected bounding-box objects, i.e. cars 23 | from util.dynamic_signal_switching import switch_signal 24 | from util.dynamic_signal_switching import avg_signal_oc_time 25 | 26 | 27 | #*** Parsing Arguments to YOLO Model *** 28 | def arg_parse(): 29 | parser = argparse.ArgumentParser( 30 | description= 31 | 'YOLO Vehicle Detection Model for Intelligent Traffic Management System') 32 | parser.add_argument("--images", 33 | dest='images', 34 | help="Image / directory containing images to perform vehicle detection upon", 35 | default="vehicles-on-lanes", 36 | type=str) 37 | parser.add_argument("--bs", 38 | dest="bs", 39 | help="Batch size", 40 | default=1) 41 | parser.add_argument("--confidence_score", 42 | dest="confidence", 43 | help="Confidence score to filter vehicle predictions", 44 | default=0.3) 45 | parser.add_argument("--nms_thresh", 46 | dest="nms_thresh", 47 | help="NMS Threshold", 48 | default=0.3) 49 | parser.add_argument("--cfg", 50 | dest='cfgfile', 51 | help="Config file", 52 | default="config/yolov3.cfg", 53 | type=str) 54 | parser.add_argument("--weights", 55 | dest='weightsfile', 56 | help="Weights file", 57 | default="weights/yolov3.weights", 58 | type=str) 59 | parser.add_argument( 60 | "--reso", 61 | dest='reso', 62 | help= 63 | "Input resolution of the network. Increase to increase accuracy. 
Decrease to increase speed", 64 | default="416", 65 | type=str) 66 | return parser.parse_args() 67 | 68 | 69 | args = arg_parse() 70 | images = args.images 71 | batch_size = int(args.bs) 72 | confidence = float(args.confidence) 73 | nms_thesh = float(args.nms_thresh) 74 | start = 0 75 | CUDA = torch.cuda.is_available() 76 | 77 | #***Loading Dataset Class File*** 78 | classes = load_classes("data/idd.names") 79 | 80 | #***Setting up the neural network*** 81 | model = Darknet(args.cfgfile) 82 | print('\033[0m' + "Input Data Passed Into YOLO Model..." + u'\N{check mark}') 83 | model.load_weights(args.weightsfile) 84 | print('\033[0m' + "YOLO Neural Network Successfully Loaded..." + 85 | u'\N{check mark}') 86 | print('\033[0m') 87 | model.hyperparams["height"] = args.reso 88 | inp_dim = int(model.hyperparams["height"]) 89 | assert inp_dim % 32 == 0 90 | assert inp_dim > 32 91 | num_classes = model.num_classes 92 | print('\033[1m' + '\033[92m' + 93 | "Performing Vehicle Detection with YOLO Neural Network..." + '\033[0m' + 94 | u'\N{check mark}') 95 | #Putting YOLO Model into GPU: 96 | if CUDA: 97 | model.cuda() 98 | model.eval() 99 | read_dir = time.time() 100 | 101 | #***Vehicle Detection Phase*** 102 | try: 103 | imlist = [ 104 | osp.join(osp.realpath('.'), images, img) for img in os.listdir(images) 105 | ] 106 | except NotADirectoryError: 107 | imlist = [] 108 | imlist.append(osp.join(osp.realpath('.'), images)) 109 | except FileNotFoundError: 110 | print("No Input with the name {}".format(images)) 111 | print("Model failed to load your input. ") 112 | exit() 113 | 114 | load_batch = time.time() 115 | loaded_ims = [cv2.imread(x) for x in imlist] 116 | 117 | im_batches = list( 118 | map(preparing_image, loaded_ims, [inp_dim for x in range(len(imlist))])) 119 | im_dim_list = [(x.shape[1], x.shape[0]) for x in loaded_ims] 120 | im_dim_list = torch.FloatTensor(im_dim_list).repeat(1, 2) 121 | 122 | leftover = 0 123 | 124 | if (len(im_dim_list) % batch_size): 125 | leftover = 1 126 | 127 | if batch_size != 1: 128 | num_batches = len(imlist) // batch_size + leftover 129 | im_batches = [ 130 | torch.cat( 131 | (im_batches[i * batch_size:min((i + 1) * 132 | batch_size, len(im_batches))])) 133 | for i in range(num_batches) 134 | ] 135 | 136 | write = 0 137 | 138 | if CUDA: 139 | im_dim_list = im_dim_list.cuda() 140 | start_outputs_loop = time.time() 141 | 142 | lane_count_list = [] 143 | input_image_count = 0 144 | denser_lane = 0 145 | lane_with_higher_count = 0 146 | 147 | print() 148 | print( 149 | '\033[1m' + 150 | "------------------------------------------------------------------------------------------------------------------------------------------------------------" 151 | ) 152 | print('\033[1m' + "SUMMARY") 153 | print( 154 | '\033[1m' + 155 | "------------------------------------------------------------------------------------------------------------------------------------------------------------" 156 | ) 157 | print('\033[1m' + 158 | "{:25s}: ".format("\nDetected (" + str(len(imlist)) + " inputs)")) 159 | print('\033[0m') 160 | #Loading the image, if present : 161 | for i, batch in enumerate(im_batches): 162 | #load the image 163 | vehicle_count = 0 164 | start = time.time() 165 | if CUDA: 166 | batch = batch.cuda() 167 | with torch.no_grad(): 168 | prediction = model(Variable(batch)) 169 | 170 | prediction = non_max_suppression(prediction, 171 | confidence, 172 | num_classes, 173 | nms_conf=nms_thesh) 174 | 175 | end = time.time() 176 | 177 | if type(prediction) == int: 178 | for im_num, 
image in enumerate( 179 | imlist[i * batch_size:min((i + 1) * batch_size, len(imlist))]): 180 | im_id = i * batch_size + im_num 181 | print("{0:20s} predicted in {1:6.3f} seconds".format( 182 | image.split("/")[-1], (end - start) / batch_size)) 183 | print("{0:20s} {1:s}".format("Objects detected:", "")) 184 | print("----------------------------------------------------------") 185 | continue 186 | 187 | prediction[:, 188 | 0] += i * batch_size # transform the attribute from index in batch to index in imlist 189 | 190 | if not write: # If we haven't initialised output 191 | output = prediction 192 | write = 1 193 | else: 194 | output = torch.cat((output, prediction)) 195 | 196 | for im_num, image in enumerate( 197 | imlist[i * batch_size:min((i + 1) * batch_size, len(imlist))]): 198 | vehicle_count = 0 199 | input_image_count += 1 200 | #denser_lane = 201 | im_id = i * batch_size + im_num 202 | objs = [classes[int(x[-1])] for x in output if int(x[0]) == im_id] 203 | vc = Counter(objs) 204 | for obj in objs: # use a name other than `i` so the outer batch-loop variable is not shadowed 205 | if obj == "car" or obj == "motorbike" or obj == "truck" or obj == "bicycle" or obj == "autorickshaw": 206 | vehicle_count += 1 207 | 208 | print('\033[1m' + "Lane : {} - {} : {:5s} {}".format( 209 | input_image_count, "Number of Vehicles detected", "", 210 | vehicle_count)) 211 | 212 | if vehicle_count > 0: 213 | lane_count_list.append(vehicle_count) 214 | 215 | if vehicle_count > lane_with_higher_count: 216 | lane_with_higher_count = vehicle_count 217 | denser_lane = input_image_count 218 | 219 | '''print( 220 | '\033[0m' + 221 | " File Name: {0:20s}.".format(image.split("/")[-1]))''' 222 | print('\033[0m' +" {:15} {}".format("Vehicle Type", "Count")) 223 | for key, value in sorted(vc.items()): 224 | if key == "car" or key == "motorbike" or key == "truck" or key == "bicycle": 225 | print('\033[0m' + " {:15s} {}".format(key, value)) 226 | 227 | if CUDA: 228 | torch.cuda.synchronize() 229 | 230 | if vehicle_count == 0: 231 | print( 232 | '\033[1m' + 233 | "There are no vehicles present from the input that was passed into our YOLO Model." 
234 | ) 235 | 236 | print( 237 | '\033[1m' + 238 | "------------------------------------------------------------------------------------------------------------------------------------------------------------" 239 | ) 240 | print( 241 | emoji.emojize(':vertical_traffic_light:') + '\033[1m' + '\033[94m' + 242 | " Lane with denser traffic is : Lane " + str(denser_lane) + '\033[30m' + 243 | "\n") 244 | 245 | switching_time = avg_signal_oc_time(lane_count_list) 246 | 247 | switch_signal(denser_lane, switching_time) 248 | 249 | print( 250 | '\033[1m' + 251 | "------------------------------------------------------------------------------------------------------------------------------------------------------------" 252 | ) 253 | try: 254 | output 255 | except NameError: 256 | print("No detections were made | No Objects were found from the input") 257 | exit() 258 | 259 | torch.cuda.empty_cache() -------------------------------------------------------------------------------- /util/__pycache__/__init__.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/__init__.cpython-36.pyc -------------------------------------------------------------------------------- /util/__pycache__/darknet.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/darknet.cpython-36.pyc -------------------------------------------------------------------------------- /util/__pycache__/datasets.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/datasets.cpython-36.pyc -------------------------------------------------------------------------------- /util/__pycache__/image_processor.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/image_processor.cpython-36.pyc -------------------------------------------------------------------------------- /util/__pycache__/image_processor.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/image_processor.cpython-37.pyc -------------------------------------------------------------------------------- /util/__pycache__/model.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/model.cpython-36.pyc -------------------------------------------------------------------------------- /util/__pycache__/model.cpython-37.pyc: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/model.cpython-37.pyc -------------------------------------------------------------------------------- /util/__pycache__/moduler.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/moduler.cpython-36.pyc -------------------------------------------------------------------------------- /util/__pycache__/moduler.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/moduler.cpython-37.pyc -------------------------------------------------------------------------------- /util/__pycache__/parser.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/parser.cpython-36.pyc -------------------------------------------------------------------------------- /util/__pycache__/parser.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/parser.cpython-37.pyc -------------------------------------------------------------------------------- /util/__pycache__/signal_lights.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/signal_lights.cpython-36.pyc -------------------------------------------------------------------------------- /util/__pycache__/signal_lights.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/signal_lights.cpython-37.pyc -------------------------------------------------------------------------------- /util/__pycache__/signal_switching.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/signal_switching.cpython-36.pyc -------------------------------------------------------------------------------- /util/__pycache__/signal_switching.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/signal_switching.cpython-37.pyc -------------------------------------------------------------------------------- /util/__pycache__/utill.cpython-36.pyc: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/utill.cpython-36.pyc -------------------------------------------------------------------------------- /util/__pycache__/utils.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/utils.cpython-36.pyc -------------------------------------------------------------------------------- /util/__pycache__/utils.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/utils.cpython-37.pyc -------------------------------------------------------------------------------- /util/__pycache__/uutils.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/__pycache__/uutils.cpython-36.pyc -------------------------------------------------------------------------------- /util/boot.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/util/boot.py -------------------------------------------------------------------------------- /util/dynamic_signal_switching.py: -------------------------------------------------------------------------------- 1 | import time 2 | import emoji 3 | 4 | 5 | def switch_signal(denser_lane, seconds): 6 | print("Dynamic Signal Switching Phase" + '\033[0m') 7 | time.sleep(1) 8 | print('\033[1m' + '\n\033[99m' + 9 | "OPENING LANE-{}: ".format(str(denser_lane)) + '\033[0m') 10 | print( 11 | "----------------------------------------------------------------------------------" 12 | ) 13 | if denser_lane == 1: 14 | print( 15 | "Lane 1 Lane 2 Lane 3 Lane 4" 16 | ) 17 | time.sleep(1) 18 | print(" " + emoji.emojize(":white_circle:") + " " + 19 | emoji.emojize(":red_circle:") + " " + 20 | emoji.emojize(":red_circle:") + " " + 21 | emoji.emojize(":red_circle:") + "\n " + 22 | emoji.emojize(":white_circle:") + " " + 23 | emoji.emojize(":white_circle:") + " " + 24 | emoji.emojize(":white_circle:") + " " + 25 | emoji.emojize(":white_circle:") + "\n " + 26 | emoji.emojize(":green_circle:") + " " + 27 | emoji.emojize(":white_circle:") + " " + 28 | emoji.emojize(":white_circle:") + " " + 29 | emoji.emojize(":white_circle:") + "\n") 30 | print( 31 | "----------------------------------------------------------------------------------" 32 | ) 33 | print('\033[1m' + '\n\033[99m' + 34 | "LANE-{} OPENED !".format(str(denser_lane)) + '\033[0m') 35 | print("\n Calculating Signal Open-Close Timing...") 36 | print('\033[0m' + '\n\033[99m' + 37 | "LANE-{} will CLOSE after {} seconds ".format( 38 | str(denser_lane), str(seconds)) + '\033[0m', 39 | end="") 40 | while seconds: 41 | mins, secs = divmod(seconds, 60) 42 | print('\033[99m' + ".", end="") 43 | time.sleep(1) 44 | seconds -= 1 45 | print( 46 | "----------------------------------------------------------------------------------" 47 | ) 48 | 
print('\033[1m' + '\n\033[99m' + 49 | "CLOSING LANE-{}: ".format(str(denser_lane)) + '\033[0m') 50 | print( 51 | "----------------------------------------------------------------------------------" 52 | ) 53 | time.sleep(1) 54 | print() 55 | print( 56 | "Lane 1 Lane 2 Lane 3 Lane 4" 57 | ) 58 | print(" " + emoji.emojize(":red_circle:") + " " + 59 | emoji.emojize(":red_circle:") + " " + 60 | emoji.emojize(":red_circle:") + " " + 61 | emoji.emojize(":red_circle:") + "\n " + 62 | emoji.emojize(":white_circle:") + " " + 63 | emoji.emojize(":white_circle:") + " " + 64 | emoji.emojize(":white_circle:") + " " + 65 | emoji.emojize(":white_circle:") + "\n " + 66 | emoji.emojize(":white_circle:") + " " + 67 | emoji.emojize(":white_circle:") + " " + 68 | emoji.emojize(":white_circle:") + " " + 69 | emoji.emojize(":white_circle:") + "\n") 70 | elif denser_lane == 2: 71 | print( 72 | "Lane 1 Lane 2 Lane 3 Lane 4" 73 | ) 74 | time.sleep(1) 75 | print(" " + emoji.emojize(":red_circle:") + " " + 76 | emoji.emojize(":white_circle:") + " " + 77 | emoji.emojize(":red_circle:") + " " + 78 | emoji.emojize(":red_circle:") + "\n " + 79 | emoji.emojize(":white_circle:") + " " + 80 | emoji.emojize(":white_circle:") + " " + 81 | emoji.emojize(":white_circle:") + " " + 82 | emoji.emojize(":white_circle:") + "\n " + 83 | emoji.emojize(":white_circle:") + " " + 84 | emoji.emojize(":green_circle:") + " " + 85 | emoji.emojize(":white_circle:") + " " + 86 | emoji.emojize(":white_circle:") + "\n") 87 | print('\033[1m' + '\n\033[99m' + 88 | "LANE-{} OPENED !".format(str(denser_lane)) + '\033[0m') 89 | print("\n Calculating Signal Open-Close Timing...") 90 | print('\033[0m' + '\n\033[99m' + 91 | "LANE-{} will CLOSE after {} seconds ".format( 92 | str(denser_lane), str(seconds)) + '\033[0m', 93 | end="") 94 | while seconds: 95 | mins, secs = divmod(seconds, 60) 96 | print('\033[99m' + ".", end="") 97 | time.sleep(1) 98 | seconds -= 1 99 | print() 100 | print('\033[1m' + '\n\033[99m' + 101 | "CLOSING LANE-{}: ".format(str(denser_lane)) + '\033[0m') 102 | print( 103 | "----------------------------------------------------------------------------------" 104 | ) 105 | time.sleep(1) 106 | print() 107 | print( 108 | "Lane 1 Lane 2 Lane 3 Lane 4" 109 | ) 110 | print(" " + emoji.emojize(":red_circle:") + " " + 111 | emoji.emojize(":red_circle:") + " " + 112 | emoji.emojize(":red_circle:") + " " + 113 | emoji.emojize(":red_circle:") + "\n " + 114 | emoji.emojize(":white_circle:") + " " + 115 | emoji.emojize(":white_circle:") + " " + 116 | emoji.emojize(":white_circle:") + " " + 117 | emoji.emojize(":white_circle:") + "\n " + 118 | emoji.emojize(":white_circle:") + " " + 119 | emoji.emojize(":white_circle:") + " " + 120 | emoji.emojize(":white_circle:") + " " + 121 | emoji.emojize(":white_circle:") + "\n") 122 | elif denser_lane == 3: 123 | print( 124 | "Lane 1 Lane 2 Lane 3 Lane 4" 125 | ) 126 | time.sleep(1) 127 | print(" " + emoji.emojize(":red_circle:") + " " + 128 | emoji.emojize(":red_circle:") + " " + 129 | emoji.emojize(":white_circle:") + " " + 130 | emoji.emojize(":red_circle:") + "\n " + 131 | emoji.emojize(":white_circle:") + " " + 132 | emoji.emojize(":white_circle:") + " " + 133 | emoji.emojize(":white_circle:") + " " + 134 | emoji.emojize(":white_circle:") + "\n " + 135 | emoji.emojize(":white_circle:") + " " + 136 | emoji.emojize(":white_circle:") + " " + 137 | emoji.emojize(":green_circle:") + " " + 138 | emoji.emojize(":white_circle:") + "\n") 139 | print('\033[1m' + '\n\033[99m' + 140 | "LANE-{} OPENED 
!".format(str(denser_lane)) + '\033[0m') 141 | print("\n Calculating Signal Open-Close Timing...") 142 | print('\033[0m' + '\n\033[99m' + 143 | "LANE-{} will CLOSE after {} seconds ".format( 144 | str(denser_lane), str(seconds)) + '\033[0m', 145 | end="") 146 | while seconds: 147 | mins, secs = divmod(seconds, 60) 148 | print('\033[99m' + ".", end="") 149 | time.sleep(1) 150 | seconds -= 1 151 | print() 152 | print('\033[1m' + '\n\033[99m' + 153 | "CLOSING LANE-{}: ".format(str(denser_lane)) + '\033[0m') 154 | print( 155 | "----------------------------------------------------------------------------------" 156 | ) 157 | time.sleep(1) 158 | print() 159 | print( 160 | "Lane 1 Lane 2 Lane 3 Lane 4" 161 | ) 162 | print(" " + emoji.emojize(":red_circle:") + " " + 163 | emoji.emojize(":red_circle:") + " " + 164 | emoji.emojize(":red_circle:") + " " + 165 | emoji.emojize(":red_circle:") + "\n " + 166 | emoji.emojize(":white_circle:") + " " + 167 | emoji.emojize(":white_circle:") + " " + 168 | emoji.emojize(":white_circle:") + " " + 169 | emoji.emojize(":white_circle:") + "\n " + 170 | emoji.emojize(":white_circle:") + " " + 171 | emoji.emojize(":white_circle:") + " " + 172 | emoji.emojize(":white_circle:") + " " + 173 | emoji.emojize(":white_circle:") + "\n") 174 | elif denser_lane == 4: 175 | print( 176 | "Lane 1 Lane 2 Lane 3 Lane 4" 177 | ) 178 | time.sleep(1) 179 | print(" " + emoji.emojize(":red_circle:") + " " + 180 | emoji.emojize(":red_circle:") + " " + 181 | emoji.emojize(":red_circle:") + " " + 182 | emoji.emojize(":white_circle:") + "\n " + 183 | emoji.emojize(":white_circle:") + " " + 184 | emoji.emojize(":white_circle:") + " " + 185 | emoji.emojize(":white_circle:") + " " + 186 | emoji.emojize(":white_circle:") + "\n " + 187 | emoji.emojize(":white_circle:") + " " + 188 | emoji.emojize(":white_circle:") + " " + 189 | emoji.emojize(":white_circle:") + " " + 190 | emoji.emojize(":green_circle:") + "\n") 191 | print('\033[1m' + '\n\033[99m' + 192 | "LANE-{} OPENED !".format(str(denser_lane)) + '\033[0m') 193 | print("\n Calculating Signal Open-Close Timing...") 194 | print('\033[0m' + '\n\033[99m' + 195 | "LANE-{} will CLOSE after {} seconds ".format( 196 | str(denser_lane), str(seconds)) + '\033[0m', 197 | end="") 198 | while seconds: 199 | mins, secs = divmod(seconds, 60) 200 | print('\033[99m' + ".", end="") 201 | time.sleep(1) 202 | seconds -= 1 203 | print() 204 | print('\033[1m' + '\n\033[99m' + 205 | "CLOSING LANE-{}: ".format(str(denser_lane)) + '\033[0m') 206 | print( 207 | "----------------------------------------------------------------------------------" 208 | ) 209 | time.sleep(1) 210 | print() 211 | print( 212 | "Lane 1 Lane 2 Lane 3 Lane 4" 213 | ) 214 | print(" " + emoji.emojize(":red_circle:") + " " + 215 | emoji.emojize(":red_circle:") + " " + 216 | emoji.emojize(":red_circle:") + " " + 217 | emoji.emojize(":red_circle:") + "\n " + 218 | emoji.emojize(":white_circle:") + " " + 219 | emoji.emojize(":white_circle:") + " " + 220 | emoji.emojize(":white_circle:") + " " + 221 | emoji.emojize(":white_circle:") + "\n " + 222 | emoji.emojize(":white_circle:") + " " + 223 | emoji.emojize(":white_circle:") + " " + 224 | emoji.emojize(":white_circle:") + " " + 225 | emoji.emojize(":white_circle:") + "\n") 226 | 227 | print('\033[0m' + '\n\033[99m' + 228 | "LANE-{} is now CLOSED ".format(str(denser_lane) + '\033[0m')) 229 | 230 | 231 | def avg_signal_oc_time(lane_count_list): 232 | average_count = sum(lane_count_list) / len(lane_count_list) 233 | if average_count > 50: 234 | if 
int(max(lane_count_list)) > 75: 235 | return 75 236 | else: 237 | return int(max(lane_count_list)) + 20 238 | elif average_count > 60: 239 | return 135 240 | elif average_count > 45: 241 | return 105 242 | elif average_count > 20: 243 | return 55 244 | elif average_count < 20: 245 | return 20 246 | -------------------------------------------------------------------------------- /util/image_processor.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import cv2 3 | import torch 4 | 5 | def letterbox_image(image, input_dimension): 6 | """ 7 | Function: 8 | Resize image with unchanged aspect ratio using padding 9 | 10 | Arguments: 11 | image -- image input passed. 12 | input_dimension -- dimensions for resizing the image. 13 | 14 | Return: 15 | image_as_tensor -- resized image 16 | """ 17 | image_width, image_height = image.shape[1], image.shape[0] 18 | width, height = input_dimension 19 | new_width = int(image_width * min(width/image_width, height/image_height)) 20 | new_height = int(image_height * min(width/image_width, height/image_height)) 21 | resized_image = cv2.resize(image, (new_width,new_height), interpolation = cv2.INTER_CUBIC) 22 | 23 | image_as_tensor = np.full((input_dimension[1], input_dimension[0], 3), 128) 24 | image_as_tensor[(height-new_height)//2:(height-new_height)//2 + new_height,(width-new_width)//2:(width-new_width)//2 + new_width, :] = resized_image 25 | 26 | return image_as_tensor 27 | 28 | def preparing_image(image, input_dimension): 29 | """ 30 | Function: 31 | Prepare image for inputting to the neural network. 32 | 33 | Arguments: 34 | age input passed. 35 | input_dimension -- dimensions for resizing the image. 36 | 37 | Return: 38 | image -- image after preparing 39 | """ 40 | image = (letterbox_image(image, (input_dimension, input_dimension))) 41 | image = image[:,:,::-1].transpose((2,0,1)).copy() 42 | image = torch.from_numpy(image).float().div(255.0).unsqueeze(0) 43 | 44 | return image 45 | -------------------------------------------------------------------------------- /util/itms-yolo-m4-01.py: -------------------------------------------------------------------------------- 1 | '''*** Import Section ***''' 2 | from __future__ import division # to allow compatibility of code between Python 2.x and 3.x with minimal overhead 3 | from collections import Counter # library and method for counting hashable objects 4 | import argparse # to define arguments to the program in a user-friendly way 5 | import os # provides functions to interact with local file system 6 | import os.path as osp # provides range of methods to manipulate files and directories 7 | import pickle as pkl # to implement binary protocols for serializing and de-serializing object structure 8 | import pandas as pd # popular data-analysis library for machine learning. 
9 | import time # for time-related python functions 10 | import sys # provides access for variables used or maintained by intrepreter 11 | import torch # machine learning library for tensor and neural-network computations 12 | from torch.autograd import Variable # Auto Differentaion package for managing scalar based values 13 | import cv2 # OpenCV Library to carry out Computer Vision tasks 14 | import emoji 15 | import warnings # to manage warnings that are displayed during execution 16 | warnings.filterwarnings( 17 | 'ignore') # to ignore warning messages while code execution 18 | print('\033[1m' + '\033[91m' + "Kickstarting YOLO...\n") 19 | from util.parser import load_classes # navigates to load_classess function in util.parser.py 20 | from util.model import Darknet # to load weights into our model for vehicle detection 21 | from util.image_processor import preparing_image # to pass input image into model,after resizing it into yolo format 22 | from util.utils import non_max_suppression # to do non-max-suppression in the detected bounding box objects i.e cars 23 | from util.signal_switching import countdown 24 | from util.signal_lights import switch_signal 25 | 26 | 27 | #*** Parsing Arguments to YOLO Model *** 28 | def arg_parse(): 29 | parser = argparse.ArgumentParser( 30 | description= 31 | 'YOLO Vehicle Detection Model for Intelligent Traffic Management System' 32 | ) 33 | parser.add_argument( 34 | "--images", 35 | dest='images', 36 | help="Image / Directory containing images to vehicle detection upon", 37 | default="/content/Model/test-images", 38 | type=str) 39 | '''parser.add_argument("--outputs",dest='outputs',help="Image / Directory to store detections",default="/content/output/",type=str)''' 40 | parser.add_argument("--bs", dest="bs", help="Batch size", default=1) 41 | parser.add_argument("--confidence_score", 42 | dest="confidence", 43 | help="Confidence Score to filter Vehicle Prediction", 44 | default=0.3) 45 | parser.add_argument("--nms_thresh", 46 | dest="nms_thresh", 47 | help="NMS Threshhold", 48 | default=0.3) 49 | parser.add_argument("--cfg", 50 | dest='cfgfile', 51 | help="Config file", 52 | default="config/yolov3.cfg", 53 | type=str) 54 | parser.add_argument("--weights", 55 | dest='weightsfile', 56 | help="weightsfile", 57 | default="weights/yolov3.weights", 58 | type=str) 59 | parser.add_argument( 60 | "--reso", 61 | dest='reso', 62 | help= 63 | "Input resolution of the network. Increase to increase accuracy. Decrease to increase speed", 64 | default="416", 65 | type=str) 66 | return parser.parse_args() 67 | 68 | 69 | args = arg_parse() 70 | images = args.images 71 | batch_size = int(args.bs) 72 | confidence = float(args.confidence) 73 | nms_thesh = float(args.nms_thresh) 74 | start = 0 75 | CUDA = torch.cuda.is_available() 76 | 77 | #***Loading Dataset Class File*** 78 | classes = load_classes("data/idd.names") 79 | 80 | #***Setting up the neural network*** 81 | model = Darknet(args.cfgfile) 82 | print('\033[0m' + "Input Data Passed Into YOLO Model..." + u'\N{check mark}') 83 | model.load_weights(args.weightsfile) 84 | print('\033[0m' + "YOLO Neural Network Successfully Loaded..." + 85 | u'\N{check mark}') 86 | print('\033[0m') 87 | model.hyperparams["height"] = args.reso 88 | inp_dim = int(model.hyperparams["height"]) 89 | assert inp_dim % 32 == 0 90 | assert inp_dim > 32 91 | num_classes = model.num_classes 92 | print('\033[1m' + '\033[92m' + 93 | "Performing Vehicle Detection with YOLO Neural Network..." 
+ '\033[0m' + 94 | u'\N{check mark}') 95 | #Putting YOLO Model into GPU: 96 | if CUDA: 97 | model.cuda() 98 | model.eval() 99 | read_dir = time.time() 100 | 101 | #***Vehicle Detection Phase*** 102 | try: 103 | imlist = [ 104 | osp.join(osp.realpath('.'), images, img) for img in os.listdir(images) 105 | ] 106 | except NotADirectoryError: 107 | imlist = [] 108 | imlist.append(osp.join(osp.realpath('.'), images)) 109 | except FileNotFoundError: 110 | print("No Input with the name {}".format(images)) 111 | print("Model failed to load your input. ") 112 | exit() 113 | 114 | load_batch = time.time() 115 | loaded_ims = [cv2.imread(x) for x in imlist] 116 | 117 | im_batches = list( 118 | map(preparing_image, loaded_ims, [inp_dim for x in range(len(imlist))])) 119 | im_dim_list = [(x.shape[1], x.shape[0]) for x in loaded_ims] 120 | im_dim_list = torch.FloatTensor(im_dim_list).repeat(1, 2) 121 | 122 | leftover = 0 123 | 124 | if (len(im_dim_list) % batch_size): 125 | leftover = 1 126 | 127 | if batch_size != 1: 128 | num_batches = len(imlist) // batch_size + leftover 129 | im_batches = [ 130 | torch.cat( 131 | (im_batches[i * batch_size:min((i + 1) * 132 | batch_size, len(im_batches))])) 133 | for i in range(num_batches) 134 | ] 135 | 136 | write = 0 137 | 138 | if CUDA: 139 | im_dim_list = im_dim_list.cuda() 140 | start_outputs_loop = time.time() 141 | 142 | input_image_count = 0 143 | denser_lane = 0 144 | lane_with_higher_count = 0 145 | print() 146 | print( 147 | '\033[1m' + 148 | "------------------------------------------------------------------------------------------------------------------------------------------------------------" 149 | ) 150 | print('\033[1m' + "SUMMARY") 151 | print( 152 | '\033[1m' + 153 | "------------------------------------------------------------------------------------------------------------------------------------------------------------" 154 | ) 155 | print('\033[1m' + 156 | "{:25s}: ".format("\nDetected (" + str(len(imlist)) + " inputs)")) 157 | print('\033[0m') 158 | #Loading the image, if present : 159 | for i, batch in enumerate(im_batches): 160 | #load the image 161 | vehicle_count = 0 162 | start = time.time() 163 | if CUDA: 164 | batch = batch.cuda() 165 | with torch.no_grad(): 166 | prediction = model(Variable(batch)) 167 | 168 | prediction = non_max_suppression(prediction, 169 | confidence, 170 | num_classes, 171 | nms_conf=nms_thesh) 172 | 173 | end = time.time() 174 | 175 | if type(prediction) == int: 176 | for im_num, image in enumerate( 177 | imlist[i * batch_size:min((i + 1) * batch_size, len(imlist))]): 178 | im_id = i * batch_size + im_num 179 | print("{0:20s} predicted in {1:6.3f} seconds".format( 180 | image.split("/")[-1], (end - start) / batch_size)) 181 | print("{0:20s} {1:s}".format("Objects detected:", "")) 182 | print("----------------------------------------------------------") 183 | continue 184 | 185 | prediction[:, 186 | 0] += i * batch_size # transform the atribute from index in batch to index in imlist 187 | 188 | if not write: # If we have't initialised output 189 | output = prediction 190 | write = 1 191 | else: 192 | output = torch.cat((output, prediction)) 193 | 194 | for im_num, image in enumerate( 195 | imlist[i * batch_size:min((i + 1) * batch_size, len(imlist))]): 196 | vehicle_count = 0 197 | input_image_count += 1 198 | #denser_lane = 199 | im_id = i * batch_size + im_num 200 | objs = [classes[int(x[-1])] for x in output if int(x[0]) == im_id] 201 | vc = Counter(objs) 202 | for i in objs: 203 | if i == "car" or i == 
"motorbike" or i == "truck" or i == "bicycle" or i == "autorickshaw": 204 | vehicle_count += 1 205 | 206 | print('\033[1m' + "Lane : {} - {} : {:5s} {}".format( 207 | input_image_count, "Number of Vehicles detected", "", 208 | vehicle_count)) 209 | if vehicle_count > lane_with_higher_count: 210 | lane_with_higher_count = vehicle_count 211 | denser_lane = input_image_count 212 | print( 213 | '\033[0m' + 214 | " File Name: {0:20s}.".format(image.split("/")[-1])) 215 | print('\033[0m' + 216 | " {:15} {}".format("Vehicle Type", "Count")) 217 | for key, value in sorted(vc.items()): 218 | if key == "car" or key == "motorbike" or key == "truck" or key == "bicycle": 219 | print('\033[0m' + " {:15s} {}".format(key, value)) 220 | 221 | if CUDA: 222 | torch.cuda.synchronize() 223 | if vehicle_count == 0: 224 | print( 225 | '\033[1m' + 226 | "There are no vehicles present from the input that was passed into our YOLO Model." 227 | ) 228 | 229 | print( 230 | '\033[1m' + 231 | "------------------------------------------------------------------------------------------------------------------------------------------------------------" 232 | ) 233 | print( 234 | emoji.emojize(':vertical_traffic_light:') + '\033[1m' + '\033[94m' + 235 | " Lane with denser traffic is : Lane " + str(denser_lane) +'\033[30m'+ "\n") 236 | 237 | switch_signal(denser_lane, 5) 238 | 239 | print( 240 | '\033[1m' + 241 | "------------------------------------------------------------------------------------------------------------------------------------------------------------" 242 | ) 243 | try: 244 | output 245 | except NameError: 246 | print("No detections were made | No Objects were found from the input") 247 | exit() 248 | 249 | torch.cuda.empty_cache() -------------------------------------------------------------------------------- /util/itms-yolo.py: -------------------------------------------------------------------------------- 1 | '''*** Import Section ***''' 2 | from __future__ import division # to allow compatibility of code between Python 2.x and 3.x with minimal overhead 3 | from collections import Counter # library and method for counting hashable objects 4 | import argparse # to define arguments to the program in a user-friendly way 5 | import os # provides functions to interact with local file system 6 | import os.path as osp # provides range of methods to manipulate files and directories 7 | import pickle as pkl # to implement binary protocols for serializing and de-serializing object structure 8 | import pandas as pd # popular data-analysis library for machine learning. 
9 | import time # for time-related python functions 10 | import sys # provides access for variables used or maintained by intrepreter 11 | import torch # machine learning library for tensor and neural-network computations 12 | from torch.autograd import Variable # Auto Differentaion package for managing scalar based values 13 | import cv2 # OpenCV Library to carry out Computer Vision tasks 14 | import emoji 15 | import warnings # to manage warnings that are displayed during execution 16 | warnings.filterwarnings( 17 | 'ignore') # to ignore warning messages while code execution 18 | print('\033[1m' + '\033[91m' + "Kickstarting YOLO...\n") 19 | from util.parser import load_classes # navigates to load_classess function in util.parser.py 20 | from util.model import Darknet # to load weights into our model for vehicle detection 21 | from util.image_processor import preparing_image # to pass input image into model,after resizing it into yolo format 22 | from util.utils import non_max_suppression # to do non-max-suppression in the detected bounding box objects i.e cars 23 | from util.dynamic_signal_switching import switch_signal 24 | from util.dynamic_signal_switching import avg_signal_op_time 25 | 26 | 27 | #*** Parsing Arguments to YOLO Model *** 28 | def arg_parse(): 29 | parser = argparse.ArgumentParser( 30 | description= 31 | 'YOLO Vehicle Detection Model for Intelligent Traffic Management System' 32 | ) 33 | parser.add_argument( 34 | "--images", 35 | dest='images', 36 | help="Image / Directory containing images to vehicle detection upon", 37 | default="vehicles-on-lanes", 38 | type=str) 39 | '''parser.add_argument("--outputs",dest='outputs',help="Image / Directory to store detections",default="/content/output/",type=str)''' 40 | parser.add_argument("--bs", dest="bs", help="Batch size", default=1) 41 | parser.add_argument("--confidence_score", 42 | dest="confidence", 43 | help="Confidence Score to filter Vehicle Prediction", 44 | default=0.3) 45 | parser.add_argument("--nms_thresh", 46 | dest="nms_thresh", 47 | help="NMS Threshhold", 48 | default=0.3) 49 | parser.add_argument("--cfg", 50 | dest='cfgfile', 51 | help="Config file", 52 | default="config/yolov3.cfg", 53 | type=str) 54 | parser.add_argument("--weights", 55 | dest='weightsfile', 56 | help="weightsfile", 57 | default="weights/yolov3.weights", 58 | type=str) 59 | parser.add_argument( 60 | "--reso", 61 | dest='reso', 62 | help= 63 | "Input resolution of the network. Increase to increase accuracy. Decrease to increase speed", 64 | default="416", 65 | type=str) 66 | return parser.parse_args() 67 | 68 | 69 | args = arg_parse() 70 | images = args.images 71 | batch_size = int(args.bs) 72 | confidence = float(args.confidence) 73 | nms_thesh = float(args.nms_thresh) 74 | start = 0 75 | CUDA = torch.cuda.is_available() 76 | 77 | #***Loading Dataset Class File*** 78 | classes = load_classes("data/idd.names") 79 | 80 | #***Setting up the neural network*** 81 | model = Darknet(args.cfgfile) 82 | print('\033[0m' + "Input Data Passed Into YOLO Model..." + u'\N{check mark}') 83 | model.load_weights(args.weightsfile) 84 | print('\033[0m' + "YOLO Neural Network Successfully Loaded..." + 85 | u'\N{check mark}') 86 | print('\033[0m') 87 | model.hyperparams["height"] = args.reso 88 | inp_dim = int(model.hyperparams["height"]) 89 | assert inp_dim % 32 == 0 90 | assert inp_dim > 32 91 | num_classes = model.num_classes 92 | print('\033[1m' + '\033[92m' + 93 | "Performing Vehicle Detection with YOLO Neural Network..." 
+ '\033[0m' + 94 | u'\N{check mark}') 95 | #Putting YOLO Model into GPU: 96 | if CUDA: 97 | model.cuda() 98 | model.eval() 99 | read_dir = time.time() 100 | 101 | #***Vehicle Detection Phase*** 102 | try: 103 | imlist = [ 104 | osp.join(osp.realpath('.'), images, img) for img in os.listdir(images) 105 | ] 106 | except NotADirectoryError: 107 | imlist = [] 108 | imlist.append(osp.join(osp.realpath('.'), images)) 109 | except FileNotFoundError: 110 | print("No Input with the name {}".format(images)) 111 | print("Model failed to load your input. ") 112 | exit() 113 | 114 | load_batch = time.time() 115 | loaded_ims = [cv2.imread(x) for x in imlist] 116 | 117 | im_batches = list( 118 | map(preparing_image, loaded_ims, [inp_dim for x in range(len(imlist))])) 119 | im_dim_list = [(x.shape[1], x.shape[0]) for x in loaded_ims] 120 | im_dim_list = torch.FloatTensor(im_dim_list).repeat(1, 2) 121 | 122 | leftover = 0 123 | 124 | if (len(im_dim_list) % batch_size): 125 | leftover = 1 126 | 127 | if batch_size != 1: 128 | num_batches = len(imlist) // batch_size + leftover 129 | im_batches = [ 130 | torch.cat( 131 | (im_batches[i * batch_size:min((i + 1) * 132 | batch_size, len(im_batches))])) 133 | for i in range(num_batches) 134 | ] 135 | 136 | write = 0 137 | 138 | if CUDA: 139 | im_dim_list = im_dim_list.cuda() 140 | start_outputs_loop = time.time() 141 | 142 | lane_count_list = [] 143 | input_image_count = 0 144 | denser_lane = 0 145 | lane_with_higher_count = 0 146 | 147 | print() 148 | print( 149 | '\033[1m' + 150 | "------------------------------------------------------------------------------------------------------------------------------------------------------------" 151 | ) 152 | print('\033[1m' + "SUMMARY") 153 | print( 154 | '\033[1m' + 155 | "------------------------------------------------------------------------------------------------------------------------------------------------------------" 156 | ) 157 | print('\033[1m' + 158 | "{:25s}: ".format("\nDetected (" + str(len(imlist)) + " inputs)")) 159 | print('\033[0m') 160 | #Loading the image, if present : 161 | for i, batch in enumerate(im_batches): 162 | #load the image 163 | vehicle_count = 0 164 | start = time.time() 165 | if CUDA: 166 | batch = batch.cuda() 167 | with torch.no_grad(): 168 | prediction = model(Variable(batch)) 169 | 170 | prediction = non_max_suppression(prediction, 171 | confidence, 172 | num_classes, 173 | nms_conf=nms_thesh) 174 | 175 | end = time.time() 176 | 177 | if type(prediction) == int: 178 | for im_num, image in enumerate( 179 | imlist[i * batch_size:min((i + 1) * batch_size, len(imlist))]): 180 | im_id = i * batch_size + im_num 181 | print("{0:20s} predicted in {1:6.3f} seconds".format( 182 | image.split("/")[-1], (end - start) / batch_size)) 183 | print("{0:20s} {1:s}".format("Objects detected:", "")) 184 | print("----------------------------------------------------------") 185 | continue 186 | 187 | prediction[:, 188 | 0] += i * batch_size # transform the atribute from index in batch to index in imlist 189 | 190 | if not write: # If we have't initialised output 191 | output = prediction 192 | write = 1 193 | else: 194 | output = torch.cat((output, prediction)) 195 | 196 | for im_num, image in enumerate( 197 | imlist[i * batch_size:min((i + 1) * batch_size, len(imlist))]): 198 | vehicle_count = 0 199 | input_image_count += 1 200 | #denser_lane = 201 | im_id = i * batch_size + im_num 202 | objs = [classes[int(x[-1])] for x in output if int(x[0]) == im_id] 203 | vc = Counter(objs) 204 | for i in objs: 
205 | if i == "car" or i == "motorbike" or i == "truck" or i == "bicycle" or i == "autorickshaw": 206 | vehicle_count += 1 207 | 208 | print('\033[1m' + "Lane : {} - {} : {:5s} {}".format( 209 | input_image_count, "Number of Vehicles detected", "", 210 | vehicle_count)) 211 | 212 | if vehicle_count > 0: 213 | lane_count_list.append(vehicle_count) 214 | 215 | if vehicle_count > lane_with_higher_count: 216 | lane_with_higher_count = vehicle_count 217 | denser_lane = input_image_count 218 | 219 | print( 220 | '\033[0m' + 221 | " File Name: {0:20s}.".format(image.split("/")[-1])) 222 | print('\033[0m' + 223 | " {:15} {}".format("Vehicle Type", "Count")) 224 | for key, value in sorted(vc.items()): 225 | if key == "car" or key == "motorbike" or key == "truck" or key == "bicycle": 226 | print('\033[0m' + " {:15s} {}".format(key, value)) 227 | 228 | if CUDA: 229 | torch.cuda.synchronize() 230 | 231 | if vehicle_count == 0: 232 | print( 233 | '\033[1m' + 234 | "There are no vehicles present from the input that was passed into our YOLO Model." 235 | ) 236 | 237 | print( 238 | '\033[1m' + 239 | "------------------------------------------------------------------------------------------------------------------------------------------------------------" 240 | ) 241 | print( 242 | emoji.emojize(':vertical_traffic_light:') + '\033[1m' + '\033[94m' + 243 | " Lane with denser traffic is : Lane " + str(denser_lane) + '\033[30m' + 244 | "\n") 245 | 246 | switching_time = 5 #avg_signal_op_time(lane_count_list) 247 | 248 | switch_signal(denser_lane, switching_time) 249 | 250 | print( 251 | '\033[1m' + 252 | "------------------------------------------------------------------------------------------------------------------------------------------------------------" 253 | ) 254 | try: 255 | output 256 | except NameError: 257 | print("No detections were made | No Objects were found from the input") 258 | exit() 259 | 260 | torch.cuda.empty_cache() -------------------------------------------------------------------------------- /util/model.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from collections import defaultdict 4 | 5 | import numpy as np 6 | import torch 7 | from torch import nn 8 | 9 | from .parser import parse_model_configuration 10 | from .moduler import modules_creator 11 | 12 | class Darknet(nn.Module): 13 | """YOLOv3 object detection model""" 14 | def __init__(self, config_path, img_size=416): 15 | """ 16 | Function: 17 | Constructor for Darknet class 18 | 19 | Arguments: 20 | conig_path -- path of model configuration file 21 | img_size -- size of input images 22 | """ 23 | super(Darknet, self).__init__() 24 | self.blocks = parse_model_configuration(config_path) 25 | self.hyperparams, self.module_list ,self.num_classes = modules_creator(self.blocks) 26 | self.img_size = img_size 27 | self.seen = 0 28 | self.header_info = np.array([0, 0, 0, self.seen, 0]) 29 | self.loss_names = ["x", "y", "w", "h", "conf", "cls", "recall", "precision"] 30 | 31 | def forward(self, x, targets=None): 32 | """ 33 | Function: 34 | Feedforward propagation for prediction 35 | 36 | Arguments: 37 | x -- input tensor 38 | targets -- tensor of true values of training process 39 | 40 | Returns: 41 | output -- tensor of outputs of the models 42 | """ 43 | is_training = targets is not None 44 | output = [] 45 | self.losses = defaultdict(float) 46 | layer_outputs = [] 47 | for i, (block, module) in enumerate(zip(self.blocks, self.module_list)): 48 | if 
block["type"] in ["convolutional", "upsample", "maxpool"]: 49 | x = module(x) 50 | elif block["type"] == "route": 51 | layer_i = [int(x) for x in block["layers"].split(",")] 52 | x = torch.cat([layer_outputs[i] for i in layer_i], 1) 53 | elif block["type"] == "shortcut": 54 | layer_i = int(block["from"]) 55 | x = layer_outputs[-1] + layer_outputs[layer_i] 56 | elif block["type"] == "yolo": 57 | # Train phase: get loss 58 | if is_training: 59 | x, *losses = module[0](x, targets) 60 | for name, loss in zip(self.loss_names, losses): 61 | self.losses[name] += loss 62 | # Test phase: Get detections 63 | else: 64 | x = module(x) 65 | output.append(x) 66 | layer_outputs.append(x) 67 | 68 | self.losses["recall"] /= 3 69 | self.losses["precision"] /= 3 70 | return sum(output) if is_training else torch.cat(output, 1) 71 | 72 | def load_weights(self, weights_path): 73 | """ 74 | Function: 75 | Parses and loads the weights stored in 'weights_path 76 | 77 | Arguments: 78 | weights_path -- path of weights file 79 | """ 80 | # Open the weights file 81 | fp = open(weights_path, "rb") 82 | header = np.fromfile(fp, dtype=np.int32, count=5) # First five are header values 83 | 84 | # Needed to write header when saving weights 85 | self.header_info = header 86 | 87 | self.seen = header[3] 88 | weights = np.fromfile(fp, dtype=np.float32) # The rest are weights 89 | fp.close() 90 | 91 | ptr = 0 92 | for i, (block, module) in enumerate(zip(self.blocks, self.module_list)): 93 | if block["type"] == "convolutional": 94 | conv_layer = module[0] 95 | try: 96 | block["batch_normalize"] 97 | except: 98 | block["batch_normalize"] = 0 99 | if block["batch_normalize"]: 100 | # Load BN bias, weights, running mean and running variance 101 | bn_layer = module[1] 102 | num_b = bn_layer.bias.numel() # Number of biases 103 | # Bias 104 | bn_b = torch.from_numpy(weights[ptr : ptr + num_b]).view_as(bn_layer.bias) 105 | bn_layer.bias.data.copy_(bn_b) 106 | ptr += num_b 107 | # Weight 108 | bn_w = torch.from_numpy(weights[ptr : ptr + num_b]).view_as(bn_layer.weight) 109 | bn_layer.weight.data.copy_(bn_w) 110 | ptr += num_b 111 | # Running Mean 112 | bn_rm = torch.from_numpy(weights[ptr : ptr + num_b]).view_as(bn_layer.running_mean) 113 | bn_layer.running_mean.data.copy_(bn_rm) 114 | ptr += num_b 115 | # Running Var 116 | bn_rv = torch.from_numpy(weights[ptr : ptr + num_b]).view_as(bn_layer.running_var) 117 | bn_layer.running_var.data.copy_(bn_rv) 118 | ptr += num_b 119 | else: 120 | # Load conv. bias 121 | num_b = conv_layer.bias.numel() 122 | conv_b = torch.from_numpy(weights[ptr : ptr + num_b]).view_as(conv_layer.bias) 123 | conv_layer.bias.data.copy_(conv_b) 124 | ptr += num_b 125 | # Load conv. 
weights 126 | num_w = conv_layer.weight.numel() 127 | conv_w = torch.from_numpy(weights[ptr : ptr + num_w]).view_as(conv_layer.weight) 128 | conv_layer.weight.data.copy_(conv_w) 129 | ptr += num_w 130 | 131 | def save_weights(self, path, cutoff=-1): 132 | """ 133 | Function: 134 | Save trained model's weights 135 | 136 | Arguments: 137 | path -- path of the new weights file 138 | cutoff -- save layers between 0 and cutoff (cutoff = -1 -> all are saved) 139 | """ 140 | fp = open(path, "wb") 141 | self.header_info[3] = self.seen 142 | self.header_info.tofile(fp) 143 | 144 | # Iterate through layers 145 | for i, (block, module) in enumerate(zip(self.blocks[:cutoff], self.module_list[:cutoff])): 146 | if block["type"] == "convolutional": 147 | conv_layer = module[0] 148 | # If batch norm, load bn first 149 | if block["batch_normalize"]: 150 | bn_layer = module[1] 151 | bn_layer.bias.data.cpu().numpy().tofile(fp) 152 | bn_layer.weight.data.cpu().numpy().tofile(fp) 153 | bn_layer.running_mean.data.cpu().numpy().tofile(fp) 154 | bn_layer.running_var.data.cpu().numpy().tofile(fp) 155 | # Load conv bias 156 | else: 157 | conv_layer.bias.data.cpu().numpy().tofile(fp) 158 | # Load conv weights 159 | conv_layer.weight.data.cpu().numpy().tofile(fp) 160 | fp.close() -------------------------------------------------------------------------------- /util/moduler.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from __future__ import division 4 | 5 | import torch 6 | import torch.nn as nn 7 | from torch.autograd import Variable 8 | 9 | from util.utils import build_targets 10 | 11 | 12 | class EmptyLayer(nn.Module): 13 | """Placeholder for 'route' and 'shortcut' layers""" 14 | 15 | def __init__(self): 16 | """ 17 | Function: 18 | Constructor for EmptyLayer class 19 | """ 20 | super(EmptyLayer, self).__init__() 21 | 22 | 23 | class DetectionLayer(nn.Module): 24 | """Detection layer""" 25 | 26 | def __init__(self, anchors, num_classes, img_dim): 27 | """ 28 | Function: 29 | Constructor for DetectionLayer class 30 | 31 | Arguments: 32 | anchors -- list of anchors boxes dimensions 33 | num_classes -- number of classes that model with classify 34 | img_dim -- dimension of input images 35 | """ 36 | super(DetectionLayer, self).__init__() 37 | self.anchors = anchors 38 | self.num_anchors = len(anchors) 39 | self.num_classes = num_classes 40 | self.bbox_attrs = 5 + num_classes 41 | self.image_dim = img_dim 42 | self.ignore_thres = 0.5 43 | self.lambda_coord = 1 44 | 45 | self.mse_loss = nn.MSELoss(size_average=True) # Coordinate loss 46 | self.bce_loss = nn.BCELoss(size_average=True) # Confidence loss 47 | self.ce_loss = nn.CrossEntropyLoss() # Class loss 48 | 49 | def forward(self, x, targets=None): 50 | """ 51 | Function: 52 | Feedforward propagation for prediction 53 | 54 | Arguments: 55 | x -- input tensor 56 | targets -- tensor of true values of training process 57 | 58 | Returns: 59 | output -- tensor of outputs of the models 60 | """ 61 | 62 | nA = self.num_anchors 63 | nB = x.size(0) 64 | nG = x.size(2) 65 | stride = self.image_dim / nG 66 | 67 | # Tensors for cuda support 68 | FloatTensor = torch.cuda.FloatTensor if x.is_cuda else torch.FloatTensor 69 | LongTensor = torch.cuda.LongTensor if x.is_cuda else torch.LongTensor 70 | ByteTensor = torch.cuda.ByteTensor if x.is_cuda else torch.ByteTensor 71 | 72 | prediction = x.view(nB, nA, self.bbox_attrs, nG, nG).permute(0, 1, 3, 4, 2).contiguous() 73 | 74 | # Get outputs 75 | x = 
torch.sigmoid(prediction[..., 0]) # Center x 76 | y = torch.sigmoid(prediction[..., 1]) # Center y 77 | w = prediction[..., 2] # Width 78 | h = prediction[..., 3] # Height 79 | pred_conf = torch.sigmoid(prediction[..., 4]) # Conf 80 | pred_cls = torch.sigmoid(prediction[..., 5:]) # Cls pred. 81 | 82 | # Calculate offsets for each grid 83 | grid_x = torch.arange(nG).repeat(nG, 1).view([1, 1, nG, nG]).type(FloatTensor) 84 | grid_y = torch.arange(nG).repeat(nG, 1).t().view([1, 1, nG, nG]).type(FloatTensor) 85 | scaled_anchors = FloatTensor([(a_w / stride, a_h / stride) for a_w, a_h in self.anchors]) 86 | anchor_w = scaled_anchors[:, 0:1].view((1, nA, 1, 1)) 87 | anchor_h = scaled_anchors[:, 1:2].view((1, nA, 1, 1)) 88 | 89 | # Add offset and scale with anchors 90 | pred_boxes = FloatTensor(prediction[..., :4].shape) 91 | pred_boxes[..., 0] = x.data + grid_x 92 | pred_boxes[..., 1] = y.data + grid_y 93 | pred_boxes[..., 2] = torch.exp(w.data) * anchor_w 94 | pred_boxes[..., 3] = torch.exp(h.data) * anchor_h 95 | 96 | # Training 97 | if targets is not None: 98 | 99 | if x.is_cuda: 100 | self.mse_loss = self.mse_loss.cuda() 101 | self.bce_loss = self.bce_loss.cuda() 102 | self.ce_loss = self.ce_loss.cuda() 103 | 104 | nGT, nCorrect, mask, conf_mask, tx, ty, tw, th, tconf, tcls = build_targets( 105 | pred_boxes=pred_boxes.cpu().data, 106 | pred_conf=pred_conf.cpu().data, 107 | pred_cls=pred_cls.cpu().data, 108 | target=targets.cpu().data, 109 | anchors=scaled_anchors.cpu().data, 110 | num_anchors=nA, 111 | num_classes=self.num_classes, 112 | grid_size=nG, 113 | ignore_thres=self.ignore_thres, 114 | img_dim=self.image_dim, 115 | ) 116 | 117 | nProposals = int((pred_conf > 0.5).sum().item()) 118 | recall = float(nCorrect / nGT) if nGT else 1 119 | precision = float(nCorrect / nProposals) 120 | 121 | # Handle masks 122 | mask = Variable(mask.type(ByteTensor)) 123 | conf_mask = Variable(conf_mask.type(ByteTensor)) 124 | 125 | # Handle target variables 126 | tx = Variable(tx.type(FloatTensor), requires_grad=False) 127 | ty = Variable(ty.type(FloatTensor), requires_grad=False) 128 | tw = Variable(tw.type(FloatTensor), requires_grad=False) 129 | th = Variable(th.type(FloatTensor), requires_grad=False) 130 | tconf = Variable(tconf.type(FloatTensor), requires_grad=False) 131 | tcls = Variable(tcls.type(LongTensor), requires_grad=False) 132 | 133 | # Get conf mask where gt and where there is no gt 134 | conf_mask_true = mask 135 | conf_mask_false = conf_mask - mask 136 | 137 | # Mask outputs to ignore non-existing objects 138 | loss_x = self.mse_loss(x[mask], tx[mask]) 139 | loss_y = self.mse_loss(y[mask], ty[mask]) 140 | loss_w = self.mse_loss(w[mask], tw[mask]) 141 | loss_h = self.mse_loss(h[mask], th[mask]) 142 | loss_conf = self.bce_loss(pred_conf[conf_mask_false], tconf[conf_mask_false]) + self.bce_loss( 143 | pred_conf[conf_mask_true], tconf[conf_mask_true] 144 | ) 145 | loss_cls = (1 / nB) * self.ce_loss(pred_cls[mask], torch.argmax(tcls[mask], 1)) 146 | loss = loss_x + loss_y + loss_w + loss_h + loss_conf + loss_cls 147 | 148 | return ( 149 | loss, 150 | loss_x.item(), 151 | loss_y.item(), 152 | loss_w.item(), 153 | loss_h.item(), 154 | loss_conf.item(), 155 | loss_cls.item(), 156 | recall, 157 | precision, 158 | ) 159 | 160 | else: 161 | # If not in training phase return predictions 162 | output = torch.cat( 163 | ( 164 | pred_boxes.view(nB, -1, 4) * stride, 165 | pred_conf.view(nB, -1, 1), 166 | pred_cls.view(nB, -1, self.num_classes), 167 | ), 168 | -1, 169 | ) 170 | return output 171 | 172 | 
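# Note on shapes: at inference each DetectionLayer returns a tensor of shape
# (nB, nA * nG * nG, 5 + num_classes), i.e. rows of (x, y, w, h, objectness,
# per-class scores) already rescaled to input-image pixels via `stride`.
# Darknet.forward concatenates the three detection scales, so a 416x416 input
# yields 3 * (13*13 + 26*26 + 52*52) = 10647 candidate boxes per image before
# non_max_suppression() in util/utils.py prunes them.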
173 | def modules_creator(blocks): 174 | """ 175 | Function: 176 | Constructs module list of layer blocks from module configuration in blocks 177 | 178 | Arguments: 179 | blocks -- dictionary of each block contains it's info 180 | 181 | Returns: 182 | hyperparams -- dictionary contains info about model 183 | modules_list -- list of pytorch modules 184 | """ 185 | hyperparams = blocks.pop(0) 186 | output_filters = [int(hyperparams["channels"])] 187 | modules_list = nn.ModuleList() 188 | for i, block in enumerate(blocks): 189 | modules = nn.Sequential() 190 | 191 | if block["type"] == "convolutional": 192 | try: 193 | bn = int(block["batch_normalize"]) 194 | except: 195 | bn = 0 196 | filters = int(block["filters"]) 197 | kernel_size = int(block["size"]) 198 | pad = (kernel_size - 1) // 2 if int(block["pad"]) else 0 199 | modules.add_module( 200 | "conv_%d" % i, 201 | nn.Conv2d( 202 | in_channels=output_filters[-1], 203 | out_channels=filters, 204 | kernel_size=kernel_size, 205 | stride=int(block["stride"]), 206 | padding=pad, 207 | bias=not bn, 208 | ), 209 | ) 210 | if bn: 211 | modules.add_module("batch_norm_%d" % i, nn.BatchNorm2d(filters)) 212 | if block["activation"] == "leaky": 213 | modules.add_module("leaky_%d" % i, nn.LeakyReLU(0.1)) 214 | 215 | elif block["type"] == "maxpool": 216 | kernel_size = int(block["size"]) 217 | stride = int(block["stride"]) 218 | if kernel_size == 2 and stride == 1: 219 | padding = nn.ZeroPad2d((0, 1, 0, 1)) 220 | modules.add_module("_debug_padding_%d" % i, padding) 221 | maxpool = nn.MaxPool2d( 222 | kernel_size=int(block["size"]), 223 | stride=int(block["stride"]), 224 | padding=int((kernel_size - 1) // 2), 225 | ) 226 | modules.add_module("maxpool_%d" % i, maxpool) 227 | 228 | elif block["type"] == "upsample": 229 | upsample = nn.Upsample(scale_factor=int(block["stride"]), mode="nearest") 230 | modules.add_module("upsample_%d" % i, upsample) 231 | 232 | elif block["type"] == "route": 233 | layers = [int(x) for x in block["layers"].split(",")] 234 | filters = sum([output_filters[layer_i] for layer_i in layers]) 235 | modules.add_module("route_%d" % i, EmptyLayer()) 236 | 237 | elif block["type"] == "shortcut": 238 | filters = output_filters[int(block["from"])] 239 | modules.add_module("shortcut_%d" % i, EmptyLayer()) 240 | 241 | elif block["type"] == "yolo": 242 | anchor_idxs = [int(x) for x in block["mask"].split(",")] 243 | # Extract anchors 244 | anchors = [int(x) for x in block["anchors"].split(",")] 245 | anchors = [(anchors[i], anchors[i + 1]) for i in range(0, len(anchors), 2)] 246 | anchors = [anchors[i] for i in anchor_idxs] 247 | num_classes = int(block["classes"]) 248 | img_height = int(hyperparams["height"]) 249 | # Define detection layer 250 | yolo_layer = DetectionLayer(anchors, num_classes, img_height) 251 | modules.add_module("yolo_%d" % i, yolo_layer) 252 | # Register module list and number of output filters 253 | modules_list.append(modules) 254 | output_filters.append(filters) 255 | 256 | return hyperparams, modules_list, num_classes 257 | 258 | -------------------------------------------------------------------------------- /util/parser.py: -------------------------------------------------------------------------------- 1 | def parse_model_configuration(path): 2 | """ 3 | Function: 4 | Takes a configuration file 5 | Arguments: 6 | path -- path of model configuration file 7 | 8 | Returns: 9 | blocks -- a list of blocks. Each blocks describes a block in the neural 10 | network to be built. 
Block is represented as a dictionary in the list 11 | """ 12 | with open(path) as cfg: 13 | lines = cfg.read() 14 | lines = lines.split("\n") 15 | lines = [line for line in lines if not line.startswith("#") and line] 16 | lines = [line.strip() for line in lines] 17 | blocks = [] 18 | for line in lines: 19 | if line.startswith("["): 20 | blocks.append({}) 21 | blocks[-1]["type"] = line[1:-1].strip() 22 | if blocks[-1]["type"] == "convolutional": 23 | blocks[-1]["batch_normalize"] = 0 24 | else: 25 | key, value = line.split("=") 26 | key, value = key.strip(), value.strip() 27 | blocks[-1][key] = value 28 | return blocks 29 | 30 | def parse_data_config(path): 31 | """ 32 | Function: 33 | Parses the data configuration file 34 | 35 | Arguments: 36 | path -- path of data configuration file 37 | 38 | Returns: 39 | options -- dictionary of data configurations options 40 | """ 41 | options = dict() 42 | options['gpus'] = '0,1,2,3' 43 | options['num_workers'] = '10' 44 | with open(path, 'r') as fp: 45 | lines = fp.readlines() 46 | for line in lines: 47 | line = line.strip() 48 | if line == '' or line.startswith('#'): 49 | continue 50 | key, value = line.split('=') 51 | options[key.strip()] = value.strip() 52 | return options 53 | 54 | def load_classes(path): 55 | """ 56 | Function: 57 | Loads class labels at 'path' 58 | Arguments: 59 | path -- path of class labels file 60 | Returns: 61 | name -- a list of class labels 62 | """ 63 | fp = open(path, "r") 64 | names = fp.read().split("\n")[:-1] 65 | return names 66 | -------------------------------------------------------------------------------- /util/utils.py: -------------------------------------------------------------------------------- 1 | import math 2 | import numpy as np 3 | import torch 4 | 5 | def bbox_iou(box1, box2, x1y1x2y2=True): 6 | """ 7 | Function: 8 | Calculate intersection over union between two bboxes 9 | 10 | Arguments: 11 | box1 -- first bbox 12 | box2 -- second bbox 13 | x1y1x2y2 -- bool value 14 | Return: 15 | iou -- the IoU of two bounding boxes 16 | """ 17 | if not x1y1x2y2: 18 | # Transform from center and width to exact coordinates 19 | b1_x1, b1_x2 = box1[:, 0] - box1[:, 2] / 2, box1[:, 0] + box1[:, 2] / 2 20 | b1_y1, b1_y2 = box1[:, 1] - box1[:, 3] / 2, box1[:, 1] + box1[:, 3] / 2 21 | b2_x1, b2_x2 = box2[:, 0] - box2[:, 2] / 2, box2[:, 0] + box2[:, 2] / 2 22 | b2_y1, b2_y2 = box2[:, 1] - box2[:, 3] / 2, box2[:, 1] + box2[:, 3] / 2 23 | else: 24 | # Get the coordinates of bounding boxes 25 | b1_x1, b1_y1, b1_x2, b1_y2 = box1[:, 0], box1[:, 1], box1[:, 2], box1[:, 3] 26 | b2_x1, b2_y1, b2_x2, b2_y2 = box2[:, 0], box2[:, 1], box2[:, 2], box2[:, 3] 27 | 28 | # get the corrdinates of the intersection rectangle 29 | inter_rect_x1 = torch.max(b1_x1, b2_x1) 30 | inter_rect_y1 = torch.max(b1_y1, b2_y1) 31 | inter_rect_x2 = torch.min(b1_x2, b2_x2) 32 | inter_rect_y2 = torch.min(b1_y2, b2_y2) 33 | # Intersection area 34 | inter_area = torch.clamp(inter_rect_x2 - inter_rect_x1 + 1, min=0) * torch.clamp( 35 | inter_rect_y2 - inter_rect_y1 + 1, min=0 36 | ) 37 | # Union Area 38 | b1_area = (b1_x2 - b1_x1 + 1) * (b1_y2 - b1_y1 + 1) 39 | b2_area = (b2_x2 - b2_x1 + 1) * (b2_y2 - b2_y1 + 1) 40 | 41 | iou = inter_area / (b1_area + b2_area - inter_area + 1e-16) 42 | 43 | return iou 44 | 45 | def unique(tensor): 46 | """ 47 | Function: 48 | Get the various classes detected in the image 49 | 50 | Arguments: 51 | tensor -- torch tensor 52 | 53 | Return: 54 | tensor_res -- torch tensor after preparing 55 | """ 56 | tensor_np = 
tensor.detach().cpu().numpy() 57 | unique_np = np.unique(tensor_np) 58 | unique_tensor = torch.from_numpy(unique_np) 59 | 60 | tensor_res = tensor.new(unique_tensor.shape) 61 | tensor_res.copy_(unique_tensor) 62 | return tensor_res 63 | 64 | 65 | 66 | 67 | def non_max_suppression(prediction, confidence, num_classes, nms_conf = 0.4): 68 | """ 69 | Function: 70 | Removes detections with lower object confidence score than 'conf_thres' and performs 71 | Non-Maximum Suppression to further filter detections. 72 | 73 | Arguments: 74 | prediction -- tensor of yolo model prediction 75 | confidence -- float value to remove all prediction has confidence value low than the confidence 76 | num_classes -- number of class 77 | nms_conf -- float value (non max suppression) to remove bbox it's iou larger than nms_conf 78 | 79 | Return: 80 | output -- tuple (x1, y1, x2, y2, object_conf, class_score, class_pred) 81 | """ 82 | conf_mask = (prediction[:,:,4] > confidence).float().unsqueeze(2) 83 | prediction = prediction*conf_mask 84 | 85 | box_corner = prediction.new(prediction.shape) 86 | box_corner[:,:,0] = (prediction[:,:,0] - prediction[:,:,2]/2) 87 | box_corner[:,:,1] = (prediction[:,:,1] - prediction[:,:,3]/2) 88 | box_corner[:,:,2] = (prediction[:,:,0] + prediction[:,:,2]/2) 89 | box_corner[:,:,3] = (prediction[:,:,1] + prediction[:,:,3]/2) 90 | prediction[:,:,:4] = box_corner[:,:,:4] 91 | 92 | batch_size = prediction.size(0) 93 | 94 | write = False 95 | 96 | 97 | 98 | for ind in range(batch_size): 99 | image_pred = prediction[ind] #image Tensor 100 | #confidence threshholding 101 | #NMS 102 | 103 | max_conf, max_conf_score = torch.max(image_pred[:,5:5+ num_classes], 1) 104 | max_conf = max_conf.float().unsqueeze(1) 105 | max_conf_score = max_conf_score.float().unsqueeze(1) 106 | seq = (image_pred[:,:5], max_conf, max_conf_score) 107 | image_pred = torch.cat(seq, 1) 108 | 109 | non_zero_ind = (torch.nonzero(image_pred[:,4])) 110 | try: 111 | image_pred_ = image_pred[non_zero_ind.squeeze(),:].view(-1,7) 112 | except: 113 | continue 114 | 115 | if image_pred_.shape[0] == 0: 116 | continue 117 | # 118 | 119 | #Get the various classes detected in the image 120 | img_classes = unique(image_pred_[:,-1]) # -1 index holds the class index 121 | 122 | 123 | for cls in img_classes: 124 | #perform NMS 125 | 126 | 127 | #get the detections with one particular class 128 | cls_mask = image_pred_*(image_pred_[:,-1] == cls).float().unsqueeze(1) 129 | class_mask_ind = torch.nonzero(cls_mask[:,-2]).squeeze() 130 | image_pred_class = image_pred_[class_mask_ind].view(-1,7) 131 | 132 | #sort the detections such that the entry with the maximum objectness 133 | #confidence is at the top 134 | conf_sort_index = torch.sort(image_pred_class[:,4], descending = True )[1] 135 | image_pred_class = image_pred_class[conf_sort_index] 136 | idx = image_pred_class.size(0) #Number of detections 137 | 138 | for i in range(idx): 139 | #Get the IOUs of all boxes that come after the one we are looking at 140 | #in the loop 141 | try: 142 | ious = bbox_iou(image_pred_class[i].unsqueeze(0), image_pred_class[i+1:]) 143 | except ValueError: 144 | break 145 | 146 | except IndexError: 147 | break 148 | 149 | #Zero out all the detections that have IoU > treshhold 150 | iou_mask = (ious < nms_conf).float().unsqueeze(1) 151 | image_pred_class[i+1:] *= iou_mask 152 | 153 | #Remove the non-zero entries 154 | non_zero_ind = torch.nonzero(image_pred_class[:,4]).squeeze() 155 | image_pred_class = image_pred_class[non_zero_ind].view(-1,7) 156 | 157 | 
batch_ind = image_pred_class.new(image_pred_class.size(0), 1).fill_(ind) #Repeat the batch_id for as many detections of the class cls in the image 158 | seq = batch_ind, image_pred_class 159 | 160 | if not write: 161 | output = torch.cat(seq,1) 162 | write = True 163 | else: 164 | out = torch.cat(seq,1) 165 | output = torch.cat((output,out)) 166 | 167 | try: 168 | return output 169 | except: 170 | return 0 171 | 172 | 173 | def build_targets(pred_boxes, pred_conf, pred_cls, target, anchors, num_anchors, 174 | num_classes, grid_size, ignore_thres, img_dim): 175 | """ 176 | Function: 177 | build the target values for training process 178 | Arguments: 179 | pred_boxes -- predicted bboxes 180 | pred_conf -- predicted confidence of bbox 181 | pred_cls -- predicted class of bbox 182 | target -- target value 183 | anchors -- list of anchors boxs' dimensions 184 | num_anchors -- number of anchor boxes 185 | num_classes -- number of classes 186 | grid_size -- grid size 187 | ignore_thres -- confidence thres 188 | img_dim -- input image dimension 189 | 190 | Return: 191 | nGT -- total number of predictions 192 | n_correct -- number of correct predictions 193 | mask -- mask 194 | conf_mask -- confidence mask 195 | tx -- xs of bboxes 196 | ty -- ys of bboxs 197 | tw -- width of bbox 198 | th -- height of bbox 199 | tconf -- confidence 200 | tcls -- class prediction 201 | """ 202 | 203 | 204 | batch_size = target.size(0) 205 | num_anchors = num_anchors 206 | num_classes = num_classes 207 | n_grid = grid_size 208 | 209 | mask = torch.zeros(batch_size, num_anchors, n_grid, n_grid) 210 | conf_mask = torch.ones(batch_size, num_anchors, n_grid, n_grid) 211 | 212 | tx = torch.zeros(batch_size, num_anchors, n_grid, n_grid) 213 | ty = torch.zeros(batch_size, num_anchors, n_grid, n_grid) 214 | tw = torch.zeros(batch_size, num_anchors, n_grid, n_grid) 215 | th = torch.zeros(batch_size, num_anchors, n_grid, n_grid) 216 | 217 | tconf = torch.ByteTensor(batch_size, num_anchors, n_grid, n_grid).fill_(0) 218 | tcls = torch.ByteTensor(batch_size, num_anchors, n_grid, n_grid, num_classes).fill_(0) 219 | 220 | nGT = 0 221 | n_correct = 0 222 | 223 | for b in range(batch_size): 224 | for t in range(target.shape[1]): 225 | if target[b, t].sum == 0: 226 | continue 227 | 228 | nGT += 1 229 | 230 | # Convert to position relative to box 231 | gx = target[b, t, 1] * n_grid 232 | gy = target[b, t, 2] * n_grid 233 | gw = target[b, t, 3] * n_grid 234 | gh = target[b, t, 4] * n_grid 235 | 236 | # Get grid box indices 237 | gi = int(gx) 238 | gj = int(gy) 239 | 240 | # Get shape of gt box 241 | gt_box = torch.FloatTensor(np.array([0, 0, gw, gh])).unsqueeze(0) 242 | 243 | # Get shape of anchor box 244 | anchor_shapes = torch.FloatTensor(np.concatenate((np.zeros((len(anchors), 2)), np.array(anchors)), 1)) 245 | 246 | # Calculate iou between gt and anchor shapes 247 | anch_ious = bbox_iou(gt_box, anchor_shapes) 248 | 249 | # Where the overlap is larger than threshold set mask to zero (ignore) 250 | conf_mask[b, anch_ious > ignore_thres, gj, gi] = 0 251 | 252 | # Find the best matching anchor box 253 | best_n = np.argmax(anch_ious) 254 | 255 | # Get ground truth box 256 | gt_box = torch.FloatTensor(np.array([gx, gy, gw, gh])).unsqueeze(0) 257 | 258 | # Get the best prediction 259 | pred_box = pred_boxes[b, best_n, gj, gi].unsqueeze(0) 260 | 261 | # Masks 262 | mask[b, best_n, gj, gi] = 1 263 | conf_mask[b, best_n, gj, gi] = 1 264 | 265 | # Coordinates 266 | tx[b, best_n, gj, gi] = gx - gi 267 | ty[b, best_n, gj, gi] = gy - gj 268 | 269 
| # Width and height 270 | tw[b, best_n, gj, gi] = math.log(gw / anchors[best_n][0] + 1e-16) 271 | th[b, best_n, gj, gi] = math.log(gh / anchors[best_n][1] + 1e-16) 272 | 273 | # One-hot encoding of label 274 | target_label = int(target[b, t, 0]) 275 | tcls[b, best_n, gj, gi, target_label] = 1 276 | tconf[b, best_n, gj, gi] = 1 277 | 278 | iou = bbox_iou(gt_box, pred_box, x1y1x2y2=False) 279 | pred_label = torch.argmax(pred_cls[b, best_n, gj, gi]) 280 | score = pred_conf[b, best_n, gj, gi] 281 | if iou > 0.5 and pred_label == target_label and score > 0.5: 282 | n_correct += 1 283 | 284 | return nGT, n_correct, mask, conf_mask, tx, ty, tw, th, tconf, tcls 285 | 286 | 287 | 288 | def weights_init_normal(m): 289 | """ 290 | Function: 291 | Initialize weights 292 | 293 | Arguments: 294 | m -- module 295 | """ 296 | classname = m.__class__.__name__ 297 | if classname.find("Conv") != -1: 298 | torch.nn.init.normal_(m.weight.data, 0.0, 0.02) 299 | elif classname.find("BatchNorm2d") != -1: 300 | torch.nn.init.normal_(m.weight.data, 1.0, 0.02) 301 | torch.nn.init.constant_(m.bias.data, 0.0) 302 | 303 | -------------------------------------------------------------------------------- /vehicles-on-lanes/130228082226-india-wealth-sports-cars-640x360.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/vehicles-on-lanes/130228082226-india-wealth-sports-cars-640x360.jpg -------------------------------------------------------------------------------- /vehicles-on-lanes/Road621.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/vehicles-on-lanes/Road621.jpg -------------------------------------------------------------------------------- /vehicles-on-lanes/auto-majors-welcome-retail-sales-numbers-siam-to-stay-with-wholesale-numbers.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/vehicles-on-lanes/auto-majors-welcome-retail-sales-numbers-siam-to-stay-with-wholesale-numbers.jpg -------------------------------------------------------------------------------- /vehicles-on-lanes/large_w9yk3mhHXYLAZxbD5k5tIvOT14QFKb2xL4w-mnYSfBA.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/vehicles-on-lanes/large_w9yk3mhHXYLAZxbD5k5tIvOT14QFKb2xL4w-mnYSfBA.jpg -------------------------------------------------------------------------------- /weights/cars-in-singapore---1204012.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/FYP-ITMS/Intelligent-Traffic-Management-System-using-Machine-Learning/29d5b2618f99079334b95b24ec2f503e022c1ca5/weights/cars-in-singapore---1204012.jpg --------------------------------------------------------------------------------
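Usage sketch. The snippet below condenses, for reference, the inference path that util/itms-yolo.py walks through, using only helpers and defaults that appear in this repository: data/idd.names for the class labels, config/yolov3.cfg for the network definition, a 416x416 letterboxed input, a 0.3 confidence score, a 0.3 NMS threshold, and one of the sample photos under vehicles-on-lanes. It assumes the YOLOv3 weights file has already been placed at the scripts' default path weights/yolov3.weights; everything else is taken from the modules listed above, so treat this as an illustrative sketch rather than a second entry point.

    import cv2
    import torch

    from util.parser import load_classes
    from util.model import Darknet
    from util.image_processor import preparing_image
    from util.utils import non_max_suppression

    # Load the IDD class labels and build the network from the repo's config file.
    classes = load_classes("data/idd.names")
    model = Darknet("config/yolov3.cfg")
    model.load_weights("weights/yolov3.weights")   # assumed to be downloaded beforehand
    model.eval()

    # Letterbox one sample lane image to the 416x416 resolution the scripts default to.
    frame = cv2.imread("vehicles-on-lanes/Road621.jpg")
    batch = preparing_image(frame, 416)            # 1 x 3 x 416 x 416 tensor scaled to [0, 1]

    with torch.no_grad():
        prediction = model(batch)

    # Same thresholds as the detection scripts: confidence 0.3, NMS 0.3.
    detections = non_max_suppression(prediction, 0.3, model.num_classes, nms_conf=0.3)

    if isinstance(detections, int):                # the helper returns 0 when nothing survives
        print("No vehicles detected on this lane.")
    else:
        # The last column of each detection row is the predicted class index.
        labels = [classes[int(d[-1])] for d in detections]
        vehicles = [l for l in labels
                    if l in ("car", "motorbike", "truck", "bicycle", "autorickshaw")]
        print("Vehicles detected:", len(vehicles))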