├── .gitattributes ├── LICENSE ├── README.md ├── dataset ├── test.txt ├── train.txt └── val.txt ├── dataset_cityscapes ├── generate_dataset_txt.py ├── test.txt ├── train.txt ├── train_coarse.txt ├── train_extra.txt ├── train_fine.txt ├── val.txt ├── val_coarse.txt └── val_fine.txt ├── main.py ├── main_msc.py ├── model.py ├── model_msc.py ├── network.py ├── plot_training_curve.py └── utils ├── __init__.py ├── __pycache__ ├── __init__.cpython-35.pyc ├── image_reader.cpython-35.pyc ├── label_utils.cpython-35.pyc └── write_to_log.cpython-35.pyc ├── image_reader.py ├── label_utils.py └── write_to_log.py /.gitattributes: -------------------------------------------------------------------------------- 1 | # Auto detect text files and perform LF normalization 2 | * text=auto -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 3, 29 June 2007 3 | 4 | Copyright (C) 2007 Free Software Foundation, Inc. 5 | Everyone is permitted to copy and distribute verbatim copies 6 | of this license document, but changing it is not allowed. 7 | 8 | Preamble 9 | 10 | The GNU General Public License is a free, copyleft license for 11 | software and other kinds of works. 12 | 13 | The licenses for most software and other practical works are designed 14 | to take away your freedom to share and change the works. By contrast, 15 | the GNU General Public License is intended to guarantee your freedom to 16 | share and change all versions of a program--to make sure it remains free 17 | software for all its users. We, the Free Software Foundation, use the 18 | GNU General Public License for most of our software; it applies also to 19 | any other work released this way by its authors. You can apply it to 20 | your programs, too. 21 | 22 | When we speak of free software, we are referring to freedom, not 23 | price. 
Our General Public Licenses are designed to make sure that you 24 | have the freedom to distribute copies of free software (and charge for 25 | them if you wish), that you receive source code or can get it if you 26 | want it, that you can change the software or use pieces of it in new 27 | free programs, and that you know you can do these things. 28 | 29 | To protect your rights, we need to prevent others from denying you 30 | these rights or asking you to surrender the rights. Therefore, you have 31 | certain responsibilities if you distribute copies of the software, or if 32 | you modify it: responsibilities to respect the freedom of others. 33 | 34 | For example, if you distribute copies of such a program, whether 35 | gratis or for a fee, you must pass on to the recipients the same 36 | freedoms that you received. You must make sure that they, too, receive 37 | or can get the source code. And you must show them these terms so they 38 | know their rights. 39 | 40 | Developers that use the GNU GPL protect your rights with two steps: 41 | (1) assert copyright on the software, and (2) offer you this License 42 | giving you legal permission to copy, distribute and/or modify it. 43 | 44 | For the developers' and authors' protection, the GPL clearly explains 45 | that there is no warranty for this free software. For both users' and 46 | authors' sake, the GPL requires that modified versions be marked as 47 | changed, so that their problems will not be attributed erroneously to 48 | authors of previous versions. 49 | 50 | Some devices are designed to deny users access to install or run 51 | modified versions of the software inside them, although the manufacturer 52 | can do so. This is fundamentally incompatible with the aim of 53 | protecting users' freedom to change the software. The systematic 54 | pattern of such abuse occurs in the area of products for individuals to 55 | use, which is precisely where it is most unacceptable. 
Therefore, we 56 | have designed this version of the GPL to prohibit the practice for those 57 | products. If such problems arise substantially in other domains, we 58 | stand ready to extend this provision to those domains in future versions 59 | of the GPL, as needed to protect the freedom of users. 60 | 61 | Finally, every program is threatened constantly by software patents. 62 | States should not allow patents to restrict development and use of 63 | software on general-purpose computers, but in those that do, we wish to 64 | avoid the special danger that patents applied to a free program could 65 | make it effectively proprietary. To prevent this, the GPL assures that 66 | patents cannot be used to render the program non-free. 67 | 68 | The precise terms and conditions for copying, distribution and 69 | modification follow. 70 | 71 | TERMS AND CONDITIONS 72 | 73 | 0. Definitions. 74 | 75 | "This License" refers to version 3 of the GNU General Public License. 76 | 77 | "Copyright" also means copyright-like laws that apply to other kinds of 78 | works, such as semiconductor masks. 79 | 80 | "The Program" refers to any copyrightable work licensed under this 81 | License. Each licensee is addressed as "you". "Licensees" and 82 | "recipients" may be individuals or organizations. 83 | 84 | To "modify" a work means to copy from or adapt all or part of the work 85 | in a fashion requiring copyright permission, other than the making of an 86 | exact copy. The resulting work is called a "modified version" of the 87 | earlier work or a work "based on" the earlier work. 88 | 89 | A "covered work" means either the unmodified Program or a work based 90 | on the Program. 91 | 92 | To "propagate" a work means to do anything with it that, without 93 | permission, would make you directly or secondarily liable for 94 | infringement under applicable copyright law, except executing it on a 95 | computer or modifying a private copy. 
Propagation includes copying, 96 | distribution (with or without modification), making available to the 97 | public, and in some countries other activities as well. 98 | 99 | To "convey" a work means any kind of propagation that enables other 100 | parties to make or receive copies. Mere interaction with a user through 101 | a computer network, with no transfer of a copy, is not conveying. 102 | 103 | An interactive user interface displays "Appropriate Legal Notices" 104 | to the extent that it includes a convenient and prominently visible 105 | feature that (1) displays an appropriate copyright notice, and (2) 106 | tells the user that there is no warranty for the work (except to the 107 | extent that warranties are provided), that licensees may convey the 108 | work under this License, and how to view a copy of this License. If 109 | the interface presents a list of user commands or options, such as a 110 | menu, a prominent item in the list meets this criterion. 111 | 112 | 1. Source Code. 113 | 114 | The "source code" for a work means the preferred form of the work 115 | for making modifications to it. "Object code" means any non-source 116 | form of a work. 117 | 118 | A "Standard Interface" means an interface that either is an official 119 | standard defined by a recognized standards body, or, in the case of 120 | interfaces specified for a particular programming language, one that 121 | is widely used among developers working in that language. 122 | 123 | The "System Libraries" of an executable work include anything, other 124 | than the work as a whole, that (a) is included in the normal form of 125 | packaging a Major Component, but which is not part of that Major 126 | Component, and (b) serves only to enable use of the work with that 127 | Major Component, or to implement a Standard Interface for which an 128 | implementation is available to the public in source code form. 
A 129 | "Major Component", in this context, means a major essential component 130 | (kernel, window system, and so on) of the specific operating system 131 | (if any) on which the executable work runs, or a compiler used to 132 | produce the work, or an object code interpreter used to run it. 133 | 134 | The "Corresponding Source" for a work in object code form means all 135 | the source code needed to generate, install, and (for an executable 136 | work) run the object code and to modify the work, including scripts to 137 | control those activities. However, it does not include the work's 138 | System Libraries, or general-purpose tools or generally available free 139 | programs which are used unmodified in performing those activities but 140 | which are not part of the work. For example, Corresponding Source 141 | includes interface definition files associated with source files for 142 | the work, and the source code for shared libraries and dynamically 143 | linked subprograms that the work is specifically designed to require, 144 | such as by intimate data communication or control flow between those 145 | subprograms and other parts of the work. 146 | 147 | The Corresponding Source need not include anything that users 148 | can regenerate automatically from other parts of the Corresponding 149 | Source. 150 | 151 | The Corresponding Source for a work in source code form is that 152 | same work. 153 | 154 | 2. Basic Permissions. 155 | 156 | All rights granted under this License are granted for the term of 157 | copyright on the Program, and are irrevocable provided the stated 158 | conditions are met. This License explicitly affirms your unlimited 159 | permission to run the unmodified Program. The output from running a 160 | covered work is covered by this License only if the output, given its 161 | content, constitutes a covered work. This License acknowledges your 162 | rights of fair use or other equivalent, as provided by copyright law. 
163 | 164 | You may make, run and propagate covered works that you do not 165 | convey, without conditions so long as your license otherwise remains 166 | in force. You may convey covered works to others for the sole purpose 167 | of having them make modifications exclusively for you, or provide you 168 | with facilities for running those works, provided that you comply with 169 | the terms of this License in conveying all material for which you do 170 | not control copyright. Those thus making or running the covered works 171 | for you must do so exclusively on your behalf, under your direction 172 | and control, on terms that prohibit them from making any copies of 173 | your copyrighted material outside their relationship with you. 174 | 175 | Conveying under any other circumstances is permitted solely under 176 | the conditions stated below. Sublicensing is not allowed; section 10 177 | makes it unnecessary. 178 | 179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 180 | 181 | No covered work shall be deemed part of an effective technological 182 | measure under any applicable law fulfilling obligations under article 183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or 184 | similar laws prohibiting or restricting circumvention of such 185 | measures. 186 | 187 | When you convey a covered work, you waive any legal power to forbid 188 | circumvention of technological measures to the extent such circumvention 189 | is effected by exercising rights under this License with respect to 190 | the covered work, and you disclaim any intention to limit operation or 191 | modification of the work as a means of enforcing, against the work's 192 | users, your or third parties' legal rights to forbid circumvention of 193 | technological measures. 194 | 195 | 4. Conveying Verbatim Copies. 
196 | 197 | You may convey verbatim copies of the Program's source code as you 198 | receive it, in any medium, provided that you conspicuously and 199 | appropriately publish on each copy an appropriate copyright notice; 200 | keep intact all notices stating that this License and any 201 | non-permissive terms added in accord with section 7 apply to the code; 202 | keep intact all notices of the absence of any warranty; and give all 203 | recipients a copy of this License along with the Program. 204 | 205 | You may charge any price or no price for each copy that you convey, 206 | and you may offer support or warranty protection for a fee. 207 | 208 | 5. Conveying Modified Source Versions. 209 | 210 | You may convey a work based on the Program, or the modifications to 211 | produce it from the Program, in the form of source code under the 212 | terms of section 4, provided that you also meet all of these conditions: 213 | 214 | a) The work must carry prominent notices stating that you modified 215 | it, and giving a relevant date. 216 | 217 | b) The work must carry prominent notices stating that it is 218 | released under this License and any conditions added under section 219 | 7. This requirement modifies the requirement in section 4 to 220 | "keep intact all notices". 221 | 222 | c) You must license the entire work, as a whole, under this 223 | License to anyone who comes into possession of a copy. This 224 | License will therefore apply, along with any applicable section 7 225 | additional terms, to the whole of the work, and all its parts, 226 | regardless of how they are packaged. This License gives no 227 | permission to license the work in any other way, but it does not 228 | invalidate such permission if you have separately received it. 
229 | 230 | d) If the work has interactive user interfaces, each must display 231 | Appropriate Legal Notices; however, if the Program has interactive 232 | interfaces that do not display Appropriate Legal Notices, your 233 | work need not make them do so. 234 | 235 | A compilation of a covered work with other separate and independent 236 | works, which are not by their nature extensions of the covered work, 237 | and which are not combined with it such as to form a larger program, 238 | in or on a volume of a storage or distribution medium, is called an 239 | "aggregate" if the compilation and its resulting copyright are not 240 | used to limit the access or legal rights of the compilation's users 241 | beyond what the individual works permit. Inclusion of a covered work 242 | in an aggregate does not cause this License to apply to the other 243 | parts of the aggregate. 244 | 245 | 6. Conveying Non-Source Forms. 246 | 247 | You may convey a covered work in object code form under the terms 248 | of sections 4 and 5, provided that you also convey the 249 | machine-readable Corresponding Source under the terms of this License, 250 | in one of these ways: 251 | 252 | a) Convey the object code in, or embodied in, a physical product 253 | (including a physical distribution medium), accompanied by the 254 | Corresponding Source fixed on a durable physical medium 255 | customarily used for software interchange. 
256 | 257 | b) Convey the object code in, or embodied in, a physical product 258 | (including a physical distribution medium), accompanied by a 259 | written offer, valid for at least three years and valid for as 260 | long as you offer spare parts or customer support for that product 261 | model, to give anyone who possesses the object code either (1) a 262 | copy of the Corresponding Source for all the software in the 263 | product that is covered by this License, on a durable physical 264 | medium customarily used for software interchange, for a price no 265 | more than your reasonable cost of physically performing this 266 | conveying of source, or (2) access to copy the 267 | Corresponding Source from a network server at no charge. 268 | 269 | c) Convey individual copies of the object code with a copy of the 270 | written offer to provide the Corresponding Source. This 271 | alternative is allowed only occasionally and noncommercially, and 272 | only if you received the object code with such an offer, in accord 273 | with subsection 6b. 274 | 275 | d) Convey the object code by offering access from a designated 276 | place (gratis or for a charge), and offer equivalent access to the 277 | Corresponding Source in the same way through the same place at no 278 | further charge. You need not require recipients to copy the 279 | Corresponding Source along with the object code. If the place to 280 | copy the object code is a network server, the Corresponding Source 281 | may be on a different server (operated by you or a third party) 282 | that supports equivalent copying facilities, provided you maintain 283 | clear directions next to the object code saying where to find the 284 | Corresponding Source. Regardless of what server hosts the 285 | Corresponding Source, you remain obligated to ensure that it is 286 | available for as long as needed to satisfy these requirements. 
287 | 288 | e) Convey the object code using peer-to-peer transmission, provided 289 | you inform other peers where the object code and Corresponding 290 | Source of the work are being offered to the general public at no 291 | charge under subsection 6d. 292 | 293 | A separable portion of the object code, whose source code is excluded 294 | from the Corresponding Source as a System Library, need not be 295 | included in conveying the object code work. 296 | 297 | A "User Product" is either (1) a "consumer product", which means any 298 | tangible personal property which is normally used for personal, family, 299 | or household purposes, or (2) anything designed or sold for incorporation 300 | into a dwelling. In determining whether a product is a consumer product, 301 | doubtful cases shall be resolved in favor of coverage. For a particular 302 | product received by a particular user, "normally used" refers to a 303 | typical or common use of that class of product, regardless of the status 304 | of the particular user or of the way in which the particular user 305 | actually uses, or expects or is expected to use, the product. A product 306 | is a consumer product regardless of whether the product has substantial 307 | commercial, industrial or non-consumer uses, unless such uses represent 308 | the only significant mode of use of the product. 309 | 310 | "Installation Information" for a User Product means any methods, 311 | procedures, authorization keys, or other information required to install 312 | and execute modified versions of a covered work in that User Product from 313 | a modified version of its Corresponding Source. The information must 314 | suffice to ensure that the continued functioning of the modified object 315 | code is in no case prevented or interfered with solely because 316 | modification has been made. 
317 | 318 | If you convey an object code work under this section in, or with, or 319 | specifically for use in, a User Product, and the conveying occurs as 320 | part of a transaction in which the right of possession and use of the 321 | User Product is transferred to the recipient in perpetuity or for a 322 | fixed term (regardless of how the transaction is characterized), the 323 | Corresponding Source conveyed under this section must be accompanied 324 | by the Installation Information. But this requirement does not apply 325 | if neither you nor any third party retains the ability to install 326 | modified object code on the User Product (for example, the work has 327 | been installed in ROM). 328 | 329 | The requirement to provide Installation Information does not include a 330 | requirement to continue to provide support service, warranty, or updates 331 | for a work that has been modified or installed by the recipient, or for 332 | the User Product in which it has been modified or installed. Access to a 333 | network may be denied when the modification itself materially and 334 | adversely affects the operation of the network or violates the rules and 335 | protocols for communication across the network. 336 | 337 | Corresponding Source conveyed, and Installation Information provided, 338 | in accord with this section must be in a format that is publicly 339 | documented (and with an implementation available to the public in 340 | source code form), and must require no special password or key for 341 | unpacking, reading or copying. 342 | 343 | 7. Additional Terms. 344 | 345 | "Additional permissions" are terms that supplement the terms of this 346 | License by making exceptions from one or more of its conditions. 347 | Additional permissions that are applicable to the entire Program shall 348 | be treated as though they were included in this License, to the extent 349 | that they are valid under applicable law. 
If additional permissions 350 | apply only to part of the Program, that part may be used separately 351 | under those permissions, but the entire Program remains governed by 352 | this License without regard to the additional permissions. 353 | 354 | When you convey a copy of a covered work, you may at your option 355 | remove any additional permissions from that copy, or from any part of 356 | it. (Additional permissions may be written to require their own 357 | removal in certain cases when you modify the work.) You may place 358 | additional permissions on material, added by you to a covered work, 359 | for which you have or can give appropriate copyright permission. 360 | 361 | Notwithstanding any other provision of this License, for material you 362 | add to a covered work, you may (if authorized by the copyright holders of 363 | that material) supplement the terms of this License with terms: 364 | 365 | a) Disclaiming warranty or limiting liability differently from the 366 | terms of sections 15 and 16 of this License; or 367 | 368 | b) Requiring preservation of specified reasonable legal notices or 369 | author attributions in that material or in the Appropriate Legal 370 | Notices displayed by works containing it; or 371 | 372 | c) Prohibiting misrepresentation of the origin of that material, or 373 | requiring that modified versions of such material be marked in 374 | reasonable ways as different from the original version; or 375 | 376 | d) Limiting the use for publicity purposes of names of licensors or 377 | authors of the material; or 378 | 379 | e) Declining to grant rights under trademark law for use of some 380 | trade names, trademarks, or service marks; or 381 | 382 | f) Requiring indemnification of licensors and authors of that 383 | material by anyone who conveys the material (or modified versions of 384 | it) with contractual assumptions of liability to the recipient, for 385 | any liability that these contractual assumptions directly impose on 
386 | those licensors and authors. 387 | 388 | All other non-permissive additional terms are considered "further 389 | restrictions" within the meaning of section 10. If the Program as you 390 | received it, or any part of it, contains a notice stating that it is 391 | governed by this License along with a term that is a further 392 | restriction, you may remove that term. If a license document contains 393 | a further restriction but permits relicensing or conveying under this 394 | License, you may add to a covered work material governed by the terms 395 | of that license document, provided that the further restriction does 396 | not survive such relicensing or conveying. 397 | 398 | If you add terms to a covered work in accord with this section, you 399 | must place, in the relevant source files, a statement of the 400 | additional terms that apply to those files, or a notice indicating 401 | where to find the applicable terms. 402 | 403 | Additional terms, permissive or non-permissive, may be stated in the 404 | form of a separately written license, or stated as exceptions; 405 | the above requirements apply either way. 406 | 407 | 8. Termination. 408 | 409 | You may not propagate or modify a covered work except as expressly 410 | provided under this License. Any attempt otherwise to propagate or 411 | modify it is void, and will automatically terminate your rights under 412 | this License (including any patent licenses granted under the third 413 | paragraph of section 11). 414 | 415 | However, if you cease all violation of this License, then your 416 | license from a particular copyright holder is reinstated (a) 417 | provisionally, unless and until the copyright holder explicitly and 418 | finally terminates your license, and (b) permanently, if the copyright 419 | holder fails to notify you of the violation by some reasonable means 420 | prior to 60 days after the cessation. 
421 | 422 | Moreover, your license from a particular copyright holder is 423 | reinstated permanently if the copyright holder notifies you of the 424 | violation by some reasonable means, this is the first time you have 425 | received notice of violation of this License (for any work) from that 426 | copyright holder, and you cure the violation prior to 30 days after 427 | your receipt of the notice. 428 | 429 | Termination of your rights under this section does not terminate the 430 | licenses of parties who have received copies or rights from you under 431 | this License. If your rights have been terminated and not permanently 432 | reinstated, you do not qualify to receive new licenses for the same 433 | material under section 10. 434 | 435 | 9. Acceptance Not Required for Having Copies. 436 | 437 | You are not required to accept this License in order to receive or 438 | run a copy of the Program. Ancillary propagation of a covered work 439 | occurring solely as a consequence of using peer-to-peer transmission 440 | to receive a copy likewise does not require acceptance. However, 441 | nothing other than this License grants you permission to propagate or 442 | modify any covered work. These actions infringe copyright if you do 443 | not accept this License. Therefore, by modifying or propagating a 444 | covered work, you indicate your acceptance of this License to do so. 445 | 446 | 10. Automatic Licensing of Downstream Recipients. 447 | 448 | Each time you convey a covered work, the recipient automatically 449 | receives a license from the original licensors, to run, modify and 450 | propagate that work, subject to this License. You are not responsible 451 | for enforcing compliance by third parties with this License. 452 | 453 | An "entity transaction" is a transaction transferring control of an 454 | organization, or substantially all assets of one, or subdividing an 455 | organization, or merging organizations. 
If propagation of a covered 456 | work results from an entity transaction, each party to that 457 | transaction who receives a copy of the work also receives whatever 458 | licenses to the work the party's predecessor in interest had or could 459 | give under the previous paragraph, plus a right to possession of the 460 | Corresponding Source of the work from the predecessor in interest, if 461 | the predecessor has it or can get it with reasonable efforts. 462 | 463 | You may not impose any further restrictions on the exercise of the 464 | rights granted or affirmed under this License. For example, you may 465 | not impose a license fee, royalty, or other charge for exercise of 466 | rights granted under this License, and you may not initiate litigation 467 | (including a cross-claim or counterclaim in a lawsuit) alleging that 468 | any patent claim is infringed by making, using, selling, offering for 469 | sale, or importing the Program or any portion of it. 470 | 471 | 11. Patents. 472 | 473 | A "contributor" is a copyright holder who authorizes use under this 474 | License of the Program or a work on which the Program is based. The 475 | work thus licensed is called the contributor's "contributor version". 476 | 477 | A contributor's "essential patent claims" are all patent claims 478 | owned or controlled by the contributor, whether already acquired or 479 | hereafter acquired, that would be infringed by some manner, permitted 480 | by this License, of making, using, or selling its contributor version, 481 | but do not include claims that would be infringed only as a 482 | consequence of further modification of the contributor version. For 483 | purposes of this definition, "control" includes the right to grant 484 | patent sublicenses in a manner consistent with the requirements of 485 | this License. 
486 | 487 | Each contributor grants you a non-exclusive, worldwide, royalty-free 488 | patent license under the contributor's essential patent claims, to 489 | make, use, sell, offer for sale, import and otherwise run, modify and 490 | propagate the contents of its contributor version. 491 | 492 | In the following three paragraphs, a "patent license" is any express 493 | agreement or commitment, however denominated, not to enforce a patent 494 | (such as an express permission to practice a patent or covenant not to 495 | sue for patent infringement). To "grant" such a patent license to a 496 | party means to make such an agreement or commitment not to enforce a 497 | patent against the party. 498 | 499 | If you convey a covered work, knowingly relying on a patent license, 500 | and the Corresponding Source of the work is not available for anyone 501 | to copy, free of charge and under the terms of this License, through a 502 | publicly available network server or other readily accessible means, 503 | then you must either (1) cause the Corresponding Source to be so 504 | available, or (2) arrange to deprive yourself of the benefit of the 505 | patent license for this particular work, or (3) arrange, in a manner 506 | consistent with the requirements of this License, to extend the patent 507 | license to downstream recipients. "Knowingly relying" means you have 508 | actual knowledge that, but for the patent license, your conveying the 509 | covered work in a country, or your recipient's use of the covered work 510 | in a country, would infringe one or more identifiable patents in that 511 | country that you have reason to believe are valid. 
512 | 513 | If, pursuant to or in connection with a single transaction or 514 | arrangement, you convey, or propagate by procuring conveyance of, a 515 | covered work, and grant a patent license to some of the parties 516 | receiving the covered work authorizing them to use, propagate, modify 517 | or convey a specific copy of the covered work, then the patent license 518 | you grant is automatically extended to all recipients of the covered 519 | work and works based on it. 520 | 521 | A patent license is "discriminatory" if it does not include within 522 | the scope of its coverage, prohibits the exercise of, or is 523 | conditioned on the non-exercise of one or more of the rights that are 524 | specifically granted under this License. You may not convey a covered 525 | work if you are a party to an arrangement with a third party that is 526 | in the business of distributing software, under which you make payment 527 | to the third party based on the extent of your activity of conveying 528 | the work, and under which the third party grants, to any of the 529 | parties who would receive the covered work from you, a discriminatory 530 | patent license (a) in connection with copies of the covered work 531 | conveyed by you (or copies made from those copies), or (b) primarily 532 | for and in connection with specific products or compilations that 533 | contain the covered work, unless you entered into that arrangement, 534 | or that patent license was granted, prior to 28 March 2007. 535 | 536 | Nothing in this License shall be construed as excluding or limiting 537 | any implied license or other defenses to infringement that may 538 | otherwise be available to you under applicable patent law. 539 | 540 | 12. No Surrender of Others' Freedom. 541 | 542 | If conditions are imposed on you (whether by court order, agreement or 543 | otherwise) that contradict the conditions of this License, they do not 544 | excuse you from the conditions of this License. 
If you cannot convey a 545 | covered work so as to satisfy simultaneously your obligations under this 546 | License and any other pertinent obligations, then as a consequence you may 547 | not convey it at all. For example, if you agree to terms that obligate you 548 | to collect a royalty for further conveying from those to whom you convey 549 | the Program, the only way you could satisfy both those terms and this 550 | License would be to refrain entirely from conveying the Program. 551 | 552 | 13. Use with the GNU Affero General Public License. 553 | 554 | Notwithstanding any other provision of this License, you have 555 | permission to link or combine any covered work with a work licensed 556 | under version 3 of the GNU Affero General Public License into a single 557 | combined work, and to convey the resulting work. The terms of this 558 | License will continue to apply to the part which is the covered work, 559 | but the special requirements of the GNU Affero General Public License, 560 | section 13, concerning interaction through a network will apply to the 561 | combination as such. 562 | 563 | 14. Revised Versions of this License. 564 | 565 | The Free Software Foundation may publish revised and/or new versions of 566 | the GNU General Public License from time to time. Such new versions will 567 | be similar in spirit to the present version, but may differ in detail to 568 | address new problems or concerns. 569 | 570 | Each version is given a distinguishing version number. If the 571 | Program specifies that a certain numbered version of the GNU General 572 | Public License "or any later version" applies to it, you have the 573 | option of following the terms and conditions either of that numbered 574 | version or of any later version published by the Free Software 575 | Foundation. If the Program does not specify a version number of the 576 | GNU General Public License, you may choose any version ever published 577 | by the Free Software Foundation. 
578 | 579 | If the Program specifies that a proxy can decide which future 580 | versions of the GNU General Public License can be used, that proxy's 581 | public statement of acceptance of a version permanently authorizes you 582 | to choose that version for the Program. 583 | 584 | Later license versions may give you additional or different 585 | permissions. However, no additional obligations are imposed on any 586 | author or copyright holder as a result of your choosing to follow a 587 | later version. 588 | 589 | 15. Disclaimer of Warranty. 590 | 591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY 592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT 593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY 594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, 595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM 597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF 598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 599 | 600 | 16. Limitation of Liability. 601 | 602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS 604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY 605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE 606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF 607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD 608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), 609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF 610 | SUCH DAMAGES. 611 | 612 | 17. Interpretation of Sections 15 and 16. 
613 | 614 | If the disclaimer of warranty and limitation of liability provided 615 | above cannot be given local legal effect according to their terms, 616 | reviewing courts shall apply local law that most closely approximates 617 | an absolute waiver of all civil liability in connection with the 618 | Program, unless a warranty or assumption of liability accompanies a 619 | copy of the Program in return for a fee. 620 | 621 | END OF TERMS AND CONDITIONS 622 | 623 | How to Apply These Terms to Your New Programs 624 | 625 | If you develop a new program, and you want it to be of the greatest 626 | possible use to the public, the best way to achieve this is to make it 627 | free software which everyone can redistribute and change under these terms. 628 | 629 | To do so, attach the following notices to the program. It is safest 630 | to attach them to the start of each source file to most effectively 631 | state the exclusion of warranty; and each file should have at least 632 | the "copyright" line and a pointer to where the full notice is found. 633 | 634 | <one line to give the program's name and a brief idea of what it does.> 635 | Copyright (C) <year> <name of author> 636 | 637 | This program is free software: you can redistribute it and/or modify 638 | it under the terms of the GNU General Public License as published by 639 | the Free Software Foundation, either version 3 of the License, or 640 | (at your option) any later version. 641 | 642 | This program is distributed in the hope that it will be useful, 643 | but WITHOUT ANY WARRANTY; without even the implied warranty of 644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 645 | GNU General Public License for more details. 646 | 647 | You should have received a copy of the GNU General Public License 648 | along with this program. If not, see <http://www.gnu.org/licenses/>. 649 | 650 | Also add information on how to contact you by electronic and paper mail.
651 | 652 | If the program does terminal interaction, make it output a short 653 | notice like this when it starts in an interactive mode: 654 | 655 | <program> Copyright (C) <year> <name of author> 656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 657 | This is free software, and you are welcome to redistribute it 658 | under certain conditions; type `show c' for details. 659 | 660 | The hypothetical commands `show w' and `show c' should show the appropriate 661 | parts of the General Public License. Of course, your program's commands 662 | might be different; for a GUI interface, you would use an "about box". 663 | 664 | You should also get your employer (if you work as a programmer) or school, 665 | if any, to sign a "copyright disclaimer" for the program, if necessary. 666 | For more information on this, and how to apply and follow the GNU GPL, see 667 | <http://www.gnu.org/licenses/>. 668 | 669 | The GNU General Public License does not permit incorporating your program 670 | into proprietary programs. If your program is a subroutine library, you 671 | may consider it more useful to permit linking proprietary applications with 672 | the library. If this is what you want to do, use the GNU Lesser General 673 | Public License instead of this License. But first, please read 674 | <http://www.gnu.org/philosophy/why-not-lgpl.html>. 675 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Deeplab v2 ResNet for Semantic Image Segmentation 2 | 3 | This is a (re-)implementation of [DeepLab v2 (ResNet-101)](http://liangchiehchen.com/projects/DeepLabv2_resnet.html) in TensorFlow for semantic image segmentation on the [PASCAL VOC 2012 dataset](http://host.robots.ox.ac.uk/pascal/VOC/). We refer to [DrSleep's implementation](https://github.com/DrSleep/tensorflow-deeplab-resnet) (many thanks!). We do not use Caffe-to-TensorFlow conversion packages such as kaffe, so you only need TensorFlow 1.3.0+ to run this code.
4 | 5 | The deeplab pre-trained ResNet-101 ckpt files (pre-trained on MSCOCO) are provided by DrSleep -- [here](https://drive.google.com/drive/folders/0B_rootXHuswsZ0E4Mjh1ZU5xZVU). Thanks again! 6 | 7 | Created by [Zhengyang Wang](http://people.tamu.edu/~zhengyang.wang/) and [Shuiwang Ji](http://people.tamu.edu/~sji/index.html) at Texas A&M University. 8 | 9 | ## Update 10 | **05/08/2018**: 11 | 12 | Our work based on this implementation has led to a paper accepted for long presentation at KDD2018. You may find the code of the work in this [branch](https://github.com/zhengyang-wang/Deeplab-v2--ResNet-101--Tensorflow/tree/smoothed_dilated_conv). 13 | 14 | If using this code, please cite our paper. 15 | ``` 16 | @inproceedings{wang2018smoothed, 17 | title={Smoothed Dilated Convolutions for Improved Dense Prediction}, 18 | author={Wang, Zhengyang and Ji, Shuiwang}, 19 | booktitle={Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining}, 20 | pages={2486--2495}, 21 | year={2018}, 22 | organization={ACM} 23 | } 24 | ``` 25 | 26 | **02/02/2018**: 27 | 28 | * A clarification: 29 | 30 | As reported, ResNet pre-trained models (NOT deeplab) from TensorFlow were trained using the channel order RGB instead of BGR (https://github.com/tensorflow/models/blob/master/research/slim/preprocessing/vgg_preprocessing.py). 31 | 32 | Thus, the correct way to apply them is to use the same RGB order. The original code is for pre-trained models from Caffe and uses BGR.
To correct this, when you use [res101](http://download.tensorflow.org/models/resnet_v1_101_2016_08_28.tar.gz) and [res50](http://download.tensorflow.org/models/resnet_v1_50_2016_08_28.tar.gz), you need to delete [line 116](https://github.com/zhengyang-wang/Deeplab-v2--ResNet-101--Tensorflow/blob/1b449b22a0729767b370c68a2848fda9caeed510/utils/image_reader.py#L116) and [line 117](https://github.com/zhengyang-wang/Deeplab-v2--ResNet-101--Tensorflow/blob/1b449b22a0729767b370c68a2848fda9caeed510/utils/image_reader.py#L117) in utils/image_reader.py to remove the RGB to BGR step when reading images. Then, modify [line 77](https://github.com/zhengyang-wang/Deeplab-v2--ResNet-101--Tensorflow/blob/1b449b22a0729767b370c68a2848fda9caeed510/utils/label_utils.py#L77) in utils/label_utils.py to remove the BGR to RGB step in the inverse process for image visualization. Finally, change the IMAGE_MEAN by swapping the first and third values in [line 26](https://github.com/zhengyang-wang/Deeplab-v2--ResNet-101--Tensorflow/blob/1b449b22a0729767b370c68a2848fda9caeed510/model.py#L26) and [line 26](https://github.com/zhengyang-wang/Deeplab-v2--ResNet-101--Tensorflow/blob/1b449b22a0729767b370c68a2848fda9caeed510/model_msc.py#L26) for non_msc and msc training, respectively. 33 | 34 | However, this change does not actually affect performance much, as the discussion in [issue 30](https://github.com/zhengyang-wang/Deeplab-v2--ResNet-101--Tensorflow/issues/30) shows. In this task, both the training patch size and the image set differ from ImageNet, so the IMAGE_MEAN is never exact; simply using IMAGE_MEAN=[127.5, 127.5, 127.5] would likely work as well. 35 | 36 | **12/13/2017**: 37 | 38 | * Now the test code will output the mIoU as well as the IoU for each class.
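As a reference for what the test output reports, per-class IoU and the mIoU can be computed from a pixel-level confusion matrix. The helper below is a minimal NumPy sketch with hypothetical names, not the repository's test code:

```python
import numpy as np

def per_class_iou(conf_matrix):
    """Per-class IoU from a confusion matrix.

    conf_matrix[i, j] counts pixels whose true class is i and whose
    predicted class is j, so IoU_c = TP_c / (TP_c + FP_c + FN_c).
    """
    tp = np.diag(conf_matrix).astype(np.float64)
    fp = conf_matrix.sum(axis=0) - tp  # predicted as c, true class differs
    fn = conf_matrix.sum(axis=1) - tp  # true class c, predicted differently
    denom = tp + fp + fn
    # Guard against classes absent from both prediction and ground truth.
    return np.where(denom > 0, tp / np.maximum(denom, 1), 0.0)

# Toy 2-class example.
conf = np.array([[50, 2],
                 [3, 45]])
iou = per_class_iou(conf)  # array([50/55, 45/50])
miou = iou.mean()
```

The mIoU is simply the unweighted mean of the per-class IoUs, which is why rare classes can pull it well below the pixel accuracy.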
39 | 40 | **12/12/2017**: 41 | 42 | * Add 'predict' function: you can now use '--option=predict' to save your outputs (both the true prediction, where each pixel is between 0 and 20, and the visual one, where each class has its own color). 43 | 44 | * Add multi-scale training, testing and predicting. Check main_msc.py and model_msc.py and use them just as main.py and model.py. 45 | 46 | * Add plot_training_curve.py, which uses log.txt to plot training curves. 47 | 48 | * Now this is a 'full' (re-)implementation of [DeepLab v2 (ResNet-101)](http://liangchiehchen.com/projects/DeepLabv2_resnet.html) in TensorFlow. Thank you for the support. You are welcome to report your settings and results as well as any bugs! 49 | 50 | **11/09/2017**: 51 | 52 | * The new version enables using original ImageNet pre-trained ResNet models (without pre-training on MSCOCO). You may change the arguments ('encoder_name' and 'pretrain_file') in main.py to use the corresponding pre-trained models. The original pre-trained ResNet-101 ckpt files are provided officially by TensorFlow -- [res101](http://download.tensorflow.org/models/resnet_v1_101_2016_08_28.tar.gz) and [res50](http://download.tensorflow.org/models/resnet_v1_50_2016_08_28.tar.gz). 53 | 54 | * To help those who want to use this model on the CityScapes dataset, I shared the corresponding txt files and the Python file that generates them. Note that you need to use the tools [here](https://github.com/mcordts/cityscapesScripts) to generate labels with trainID first. Hope it helps. Do not forget to change IMG_MEAN in model.py and other settings in main.py. 55 | 56 | * The 'is_training' argument is removed and 'self._batch_norm' changes. Basically, for a small batch size, it is better to freeze the statistics of the BN layers (running means and variances) at the values provided by the pre-trained model, by setting 'is_training=False'.
Note that is_training=False still updates the BN parameters gamma (scale) and beta (offset) if they are present in the var_list of the optimizer definition. Set 'trainable=False' in the BN functions to remove them from trainable_variables. 57 | 58 | * Add 'phase' argument in network.py for future development. 'phase=True' means training. It is mainly for controlling batch normalization (if any) in the non-pre-trained part. 59 | ``` 60 | Example: If you have a batch normalization layer in the decoder, you should use 61 | 62 | outputs = self._batch_norm(inputs, name='g_bn1', is_training=self.phase, activation_fn=tf.nn.relu, trainable=True) 63 | ``` 64 | * Some changes to make the code more readable and easy to modify for future research. 65 | 66 | * I plan to add 'predict' function to enable saving predicted results for offline evaluation, post-processing, etc. 67 | 68 | ## System requirement 69 | 70 | #### Programming language 71 | ``` 72 | Python 3.5 73 | ``` 74 | #### Python Packages 75 | ``` 76 | tensorflow-gpu 1.3.0 77 | ``` 78 | ## Configure the network 79 | 80 | All network hyperparameters are configured in main.py. 81 | 82 | #### Training 83 | ``` 84 | num_steps: how many iterations to train 85 | 86 | save_interval: how many steps to save the model 87 | 88 | random_seed: random seed for tensorflow 89 | 90 | weight_decay: l2 regularization parameter 91 | 92 | learning_rate: initial learning rate 93 | 94 | power: parameter for poly learning rate 95 | 96 | momentum: momentum 97 | 98 | encoder_name: name of pre-trained model, res101, res50 or deeplab 99 | 100 | pretrain_file: the initial pre-trained model file for transfer learning 101 | 102 | data_list: training data list file 103 | 104 | grad_update_every (msc only): accumulate the gradients for how many steps before updating weights. Note that in the msc case, this is actually the true training batch size.
105 | ``` 106 | #### Testing/Validation 107 | ``` 108 | valid_step: checkpoint number for testing/validation 109 | 110 | valid_num_steps: number of testing/validation samples 111 | 112 | valid_data_list: testing/validation data list file 113 | ``` 114 | #### Prediction 115 | ``` 116 | out_dir: directory for saving prediction outputs 117 | 118 | test_step: checkpoint number for prediction 119 | 120 | test_num_steps: number of prediction samples 121 | 122 | test_data_list: prediction data list filename 123 | 124 | visual: whether to save visualizable prediction outputs 125 | ``` 126 | #### Data 127 | ``` 128 | data_dir: data directory 129 | 130 | batch_size: training batch size 131 | 132 | input height: height of input image 133 | 134 | input width: width of input image 135 | 136 | num_classes: number of classes 137 | 138 | ignore_label: label pixel value that should be ignored 139 | 140 | random_scale: whether to perform random scaling data-augmentation 141 | 142 | random_mirror: whether to perform random left-right flipping data-augmentation 143 | ``` 144 | #### Log 145 | ``` 146 | modeldir: where to store saved models 147 | 148 | logfile: where to store the training log 149 | 150 | logdir: where to store the log for TensorBoard 151 | ``` 152 | ## Training and Testing 153 | 154 | #### Start training 155 | 156 | After configuring the network, we can start training. Run 157 | ``` 158 | python main.py 159 | ``` 160 | The training of Deeplab v2 ResNet will start. 161 | 162 | #### Training process visualization 163 | 164 | We employ TensorBoard for visualization. 165 | 166 | ``` 167 | tensorboard --logdir=log --port=6006 168 | ``` 169 | 170 | You may visualize the graph of the model and (training images + ground truth labels + predicted labels). 171 | 172 | To visualize the training loss curve, use plot_training_curve.py with the training log, or write your own script.
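The 'poly' learning rate policy controlled by the learning_rate, power and num_steps options follows the schedule from the DeepLab v2 paper. A minimal sketch (illustrative values, not the repository's code):

```python
def poly_learning_rate(base_lr, step, num_steps, power=0.9):
    """'Poly' decay: base_lr * (1 - step / num_steps) ** power.

    The rate starts at base_lr and decays smoothly to 0 at num_steps;
    power=0.9 is the value used in the DeepLab v2 paper.
    """
    return base_lr * (1.0 - float(step) / num_steps) ** power

# Hypothetical settings for illustration only.
lr_start = poly_learning_rate(2.5e-4, 0, 20000)    # 2.5e-4 at the first step
lr_end = poly_learning_rate(2.5e-4, 20000, 20000)  # 0.0 at the last step
```

With power below 1, the schedule decays slowly at first and drops more steeply near the end of training.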
173 | 174 | #### Testing and prediction 175 | 176 | Select a checkpoint to test/validate your model in terms of pixel accuracy and mean IoU. 177 | 178 | Fill the valid_step in main.py with the checkpoint you want to test. Change valid_num_steps and valid_data_list accordingly. Run 179 | 180 | ``` 181 | python main.py --option=test 182 | ``` 183 | 184 | The final output includes pixel accuracy and mean IoU. 185 | 186 | Run 187 | 188 | ``` 189 | python main.py --option=predict 190 | ``` 191 | The outputs will be saved in the 'output' folder. 192 | -------------------------------------------------------------------------------- /dataset/test.txt: -------------------------------------------------------------------------------- 1 | /JPEGImages/2008_000006.jpg 2 | /JPEGImages/2008_000011.jpg 3 | /JPEGImages/2008_000012.jpg 4 | /JPEGImages/2008_000018.jpg 5 | /JPEGImages/2008_000024.jpg 6 | /JPEGImages/2008_000030.jpg 7 | /JPEGImages/2008_000031.jpg 8 | /JPEGImages/2008_000046.jpg 9 | /JPEGImages/2008_000047.jpg 10 | /JPEGImages/2008_000048.jpg 11 | /JPEGImages/2008_000057.jpg 12 | /JPEGImages/2008_000058.jpg 13 | /JPEGImages/2008_000068.jpg 14 | /JPEGImages/2008_000072.jpg 15 | /JPEGImages/2008_000079.jpg 16 | /JPEGImages/2008_000081.jpg 17 | /JPEGImages/2008_000083.jpg 18 | /JPEGImages/2008_000088.jpg 19 | /JPEGImages/2008_000094.jpg 20 | /JPEGImages/2008_000101.jpg 21 | /JPEGImages/2008_000104.jpg 22 | /JPEGImages/2008_000106.jpg 23 | /JPEGImages/2008_000108.jpg 24 | /JPEGImages/2008_000110.jpg 25 | /JPEGImages/2008_000111.jpg 26 | /JPEGImages/2008_000126.jpg 27 | /JPEGImages/2008_000127.jpg 28 | /JPEGImages/2008_000129.jpg 29 | /JPEGImages/2008_000130.jpg 30 | /JPEGImages/2008_000135.jpg 31 | /JPEGImages/2008_000150.jpg 32 | /JPEGImages/2008_000152.jpg 33 | /JPEGImages/2008_000156.jpg 34 | /JPEGImages/2008_000159.jpg 35 | /JPEGImages/2008_000160.jpg 36 | /JPEGImages/2008_000161.jpg 37 | /JPEGImages/2008_000166.jpg 38 | /JPEGImages/2008_000167.jpg 39 | 
/JPEGImages/2008_000168.jpg 40 | /JPEGImages/2008_000169.jpg 41 | /JPEGImages/2008_000171.jpg 42 | /JPEGImages/2008_000175.jpg 43 | /JPEGImages/2008_000178.jpg 44 | /JPEGImages/2008_000186.jpg 45 | /JPEGImages/2008_000198.jpg 46 | /JPEGImages/2008_000206.jpg 47 | /JPEGImages/2008_000208.jpg 48 | /JPEGImages/2008_000209.jpg 49 | /JPEGImages/2008_000211.jpg 50 | /JPEGImages/2008_000220.jpg 51 | /JPEGImages/2008_000224.jpg 52 | /JPEGImages/2008_000230.jpg 53 | /JPEGImages/2008_000240.jpg 54 | /JPEGImages/2008_000248.jpg 55 | /JPEGImages/2008_000249.jpg 56 | /JPEGImages/2008_000250.jpg 57 | /JPEGImages/2008_000256.jpg 58 | /JPEGImages/2008_000279.jpg 59 | /JPEGImages/2008_000282.jpg 60 | /JPEGImages/2008_000285.jpg 61 | /JPEGImages/2008_000286.jpg 62 | /JPEGImages/2008_000296.jpg 63 | /JPEGImages/2008_000300.jpg 64 | /JPEGImages/2008_000322.jpg 65 | /JPEGImages/2008_000324.jpg 66 | /JPEGImages/2008_000337.jpg 67 | /JPEGImages/2008_000366.jpg 68 | /JPEGImages/2008_000369.jpg 69 | /JPEGImages/2008_000377.jpg 70 | /JPEGImages/2008_000384.jpg 71 | /JPEGImages/2008_000390.jpg 72 | /JPEGImages/2008_000404.jpg 73 | /JPEGImages/2008_000411.jpg 74 | /JPEGImages/2008_000434.jpg 75 | /JPEGImages/2008_000440.jpg 76 | /JPEGImages/2008_000460.jpg 77 | /JPEGImages/2008_000467.jpg 78 | /JPEGImages/2008_000478.jpg 79 | /JPEGImages/2008_000485.jpg 80 | /JPEGImages/2008_000487.jpg 81 | /JPEGImages/2008_000490.jpg 82 | /JPEGImages/2008_000503.jpg 83 | /JPEGImages/2008_000504.jpg 84 | /JPEGImages/2008_000507.jpg 85 | /JPEGImages/2008_000513.jpg 86 | /JPEGImages/2008_000523.jpg 87 | /JPEGImages/2008_000529.jpg 88 | /JPEGImages/2008_000556.jpg 89 | /JPEGImages/2008_000565.jpg 90 | /JPEGImages/2008_000580.jpg 91 | /JPEGImages/2008_000590.jpg 92 | /JPEGImages/2008_000596.jpg 93 | /JPEGImages/2008_000597.jpg 94 | /JPEGImages/2008_000600.jpg 95 | /JPEGImages/2008_000603.jpg 96 | /JPEGImages/2008_000604.jpg 97 | /JPEGImages/2008_000612.jpg 98 | /JPEGImages/2008_000617.jpg 99 | 
/JPEGImages/2008_000621.jpg 100 | /JPEGImages/2008_000627.jpg 101 | /JPEGImages/2008_000633.jpg 102 | /JPEGImages/2008_000643.jpg 103 | /JPEGImages/2008_000644.jpg 104 | /JPEGImages/2008_000649.jpg 105 | /JPEGImages/2008_000651.jpg 106 | /JPEGImages/2008_000664.jpg 107 | /JPEGImages/2008_000665.jpg 108 | /JPEGImages/2008_000680.jpg 109 | /JPEGImages/2008_000681.jpg 110 | /JPEGImages/2008_000684.jpg 111 | /JPEGImages/2008_000685.jpg 112 | /JPEGImages/2008_000688.jpg 113 | /JPEGImages/2008_000693.jpg 114 | /JPEGImages/2008_000698.jpg 115 | /JPEGImages/2008_000707.jpg 116 | /JPEGImages/2008_000709.jpg 117 | /JPEGImages/2008_000712.jpg 118 | /JPEGImages/2008_000747.jpg 119 | /JPEGImages/2008_000751.jpg 120 | /JPEGImages/2008_000754.jpg 121 | /JPEGImages/2008_000762.jpg 122 | /JPEGImages/2008_000767.jpg 123 | /JPEGImages/2008_000768.jpg 124 | /JPEGImages/2008_000773.jpg 125 | /JPEGImages/2008_000774.jpg 126 | /JPEGImages/2008_000779.jpg 127 | /JPEGImages/2008_000797.jpg 128 | /JPEGImages/2008_000813.jpg 129 | /JPEGImages/2008_000816.jpg 130 | /JPEGImages/2008_000846.jpg 131 | /JPEGImages/2008_000866.jpg 132 | /JPEGImages/2008_000871.jpg 133 | /JPEGImages/2008_000872.jpg 134 | /JPEGImages/2008_000891.jpg 135 | /JPEGImages/2008_000892.jpg 136 | /JPEGImages/2008_000894.jpg 137 | /JPEGImages/2008_000896.jpg 138 | /JPEGImages/2008_000898.jpg 139 | /JPEGImages/2008_000909.jpg 140 | /JPEGImages/2008_000913.jpg 141 | /JPEGImages/2008_000920.jpg 142 | /JPEGImages/2008_000933.jpg 143 | /JPEGImages/2008_000935.jpg 144 | /JPEGImages/2008_000937.jpg 145 | /JPEGImages/2008_000938.jpg 146 | /JPEGImages/2008_000954.jpg 147 | /JPEGImages/2008_000958.jpg 148 | /JPEGImages/2008_000963.jpg 149 | /JPEGImages/2008_000967.jpg 150 | /JPEGImages/2008_000974.jpg 151 | /JPEGImages/2008_000986.jpg 152 | /JPEGImages/2008_000994.jpg 153 | /JPEGImages/2008_000995.jpg 154 | /JPEGImages/2008_001008.jpg 155 | /JPEGImages/2008_001010.jpg 156 | /JPEGImages/2008_001014.jpg 157 | /JPEGImages/2008_001016.jpg 
158 | /JPEGImages/2008_001025.jpg 159 | /JPEGImages/2008_001029.jpg 160 | /JPEGImages/2008_001037.jpg 161 | /JPEGImages/2008_001059.jpg 162 | /JPEGImages/2008_001061.jpg 163 | /JPEGImages/2008_001072.jpg 164 | /JPEGImages/2008_001124.jpg 165 | /JPEGImages/2008_001126.jpg 166 | /JPEGImages/2008_001131.jpg 167 | /JPEGImages/2008_001138.jpg 168 | /JPEGImages/2008_001144.jpg 169 | /JPEGImages/2008_001151.jpg 170 | /JPEGImages/2008_001156.jpg 171 | /JPEGImages/2008_001179.jpg 172 | /JPEGImages/2008_001181.jpg 173 | /JPEGImages/2008_001184.jpg 174 | /JPEGImages/2008_001186.jpg 175 | /JPEGImages/2008_001197.jpg 176 | /JPEGImages/2008_001207.jpg 177 | /JPEGImages/2008_001212.jpg 178 | /JPEGImages/2008_001233.jpg 179 | /JPEGImages/2008_001234.jpg 180 | /JPEGImages/2008_001258.jpg 181 | /JPEGImages/2008_001268.jpg 182 | /JPEGImages/2008_001279.jpg 183 | /JPEGImages/2008_001281.jpg 184 | /JPEGImages/2008_001288.jpg 185 | /JPEGImages/2008_001291.jpg 186 | /JPEGImages/2008_001298.jpg 187 | /JPEGImages/2008_001309.jpg 188 | /JPEGImages/2008_001315.jpg 189 | /JPEGImages/2008_001316.jpg 190 | /JPEGImages/2008_001319.jpg 191 | /JPEGImages/2008_001327.jpg 192 | /JPEGImages/2008_001328.jpg 193 | /JPEGImages/2008_001332.jpg 194 | /JPEGImages/2008_001341.jpg 195 | /JPEGImages/2008_001347.jpg 196 | /JPEGImages/2008_001355.jpg 197 | /JPEGImages/2008_001378.jpg 198 | /JPEGImages/2008_001386.jpg 199 | /JPEGImages/2008_001400.jpg 200 | /JPEGImages/2008_001409.jpg 201 | /JPEGImages/2008_001411.jpg 202 | /JPEGImages/2008_001416.jpg 203 | /JPEGImages/2008_001418.jpg 204 | /JPEGImages/2008_001435.jpg 205 | /JPEGImages/2008_001459.jpg 206 | /JPEGImages/2008_001469.jpg 207 | /JPEGImages/2008_001474.jpg 208 | /JPEGImages/2008_001477.jpg 209 | /JPEGImages/2008_001483.jpg 210 | /JPEGImages/2008_001484.jpg 211 | /JPEGImages/2008_001485.jpg 212 | /JPEGImages/2008_001496.jpg 213 | /JPEGImages/2008_001507.jpg 214 | /JPEGImages/2008_001511.jpg 215 | /JPEGImages/2008_001519.jpg 216 | 
/JPEGImages/2008_001557.jpg 217 | /JPEGImages/2008_001567.jpg 218 | /JPEGImages/2008_001570.jpg 219 | /JPEGImages/2008_001571.jpg 220 | /JPEGImages/2008_001572.jpg 221 | /JPEGImages/2008_001579.jpg 222 | /JPEGImages/2008_001587.jpg 223 | /JPEGImages/2008_001608.jpg 224 | /JPEGImages/2008_001611.jpg 225 | /JPEGImages/2008_001614.jpg 226 | /JPEGImages/2008_001621.jpg 227 | /JPEGImages/2008_001639.jpg 228 | /JPEGImages/2008_001658.jpg 229 | /JPEGImages/2008_001678.jpg 230 | /JPEGImages/2008_001700.jpg 231 | /JPEGImages/2008_001713.jpg 232 | /JPEGImages/2008_001720.jpg 233 | /JPEGImages/2008_001755.jpg 234 | /JPEGImages/2008_001779.jpg 235 | /JPEGImages/2008_001785.jpg 236 | /JPEGImages/2008_001793.jpg 237 | /JPEGImages/2008_001794.jpg 238 | /JPEGImages/2008_001803.jpg 239 | /JPEGImages/2008_001818.jpg 240 | /JPEGImages/2008_001848.jpg 241 | /JPEGImages/2008_001855.jpg 242 | /JPEGImages/2008_001857.jpg 243 | /JPEGImages/2008_001861.jpg 244 | /JPEGImages/2008_001875.jpg 245 | /JPEGImages/2008_001878.jpg 246 | /JPEGImages/2008_001886.jpg 247 | /JPEGImages/2008_001897.jpg 248 | /JPEGImages/2008_001916.jpg 249 | /JPEGImages/2008_001925.jpg 250 | /JPEGImages/2008_001949.jpg 251 | /JPEGImages/2008_001953.jpg 252 | /JPEGImages/2008_001972.jpg 253 | /JPEGImages/2008_001999.jpg 254 | /JPEGImages/2008_002027.jpg 255 | /JPEGImages/2008_002040.jpg 256 | /JPEGImages/2008_002057.jpg 257 | /JPEGImages/2008_002070.jpg 258 | /JPEGImages/2008_002075.jpg 259 | /JPEGImages/2008_002095.jpg 260 | /JPEGImages/2008_002104.jpg 261 | /JPEGImages/2008_002105.jpg 262 | /JPEGImages/2008_002106.jpg 263 | /JPEGImages/2008_002136.jpg 264 | /JPEGImages/2008_002137.jpg 265 | /JPEGImages/2008_002147.jpg 266 | /JPEGImages/2008_002149.jpg 267 | /JPEGImages/2008_002163.jpg 268 | /JPEGImages/2008_002173.jpg 269 | /JPEGImages/2008_002174.jpg 270 | /JPEGImages/2008_002184.jpg 271 | /JPEGImages/2008_002186.jpg 272 | /JPEGImages/2008_002188.jpg 273 | /JPEGImages/2008_002190.jpg 274 | /JPEGImages/2008_002203.jpg 
275 | /JPEGImages/2008_002211.jpg 276 | /JPEGImages/2008_002217.jpg 277 | /JPEGImages/2008_002228.jpg 278 | /JPEGImages/2008_002233.jpg 279 | /JPEGImages/2008_002246.jpg 280 | /JPEGImages/2008_002257.jpg 281 | /JPEGImages/2008_002261.jpg 282 | /JPEGImages/2008_002285.jpg 283 | /JPEGImages/2008_002287.jpg 284 | /JPEGImages/2008_002295.jpg 285 | /JPEGImages/2008_002303.jpg 286 | /JPEGImages/2008_002306.jpg 287 | /JPEGImages/2008_002309.jpg 288 | /JPEGImages/2008_002310.jpg 289 | /JPEGImages/2008_002318.jpg 290 | /JPEGImages/2008_002320.jpg 291 | /JPEGImages/2008_002332.jpg 292 | /JPEGImages/2008_002337.jpg 293 | /JPEGImages/2008_002345.jpg 294 | /JPEGImages/2008_002348.jpg 295 | /JPEGImages/2008_002352.jpg 296 | /JPEGImages/2008_002360.jpg 297 | /JPEGImages/2008_002381.jpg 298 | /JPEGImages/2008_002387.jpg 299 | /JPEGImages/2008_002388.jpg 300 | /JPEGImages/2008_002393.jpg 301 | /JPEGImages/2008_002406.jpg 302 | /JPEGImages/2008_002440.jpg 303 | /JPEGImages/2008_002455.jpg 304 | /JPEGImages/2008_002460.jpg 305 | /JPEGImages/2008_002462.jpg 306 | /JPEGImages/2008_002480.jpg 307 | /JPEGImages/2008_002518.jpg 308 | /JPEGImages/2008_002525.jpg 309 | /JPEGImages/2008_002535.jpg 310 | /JPEGImages/2008_002544.jpg 311 | /JPEGImages/2008_002553.jpg 312 | /JPEGImages/2008_002569.jpg 313 | /JPEGImages/2008_002572.jpg 314 | /JPEGImages/2008_002587.jpg 315 | /JPEGImages/2008_002635.jpg 316 | /JPEGImages/2008_002655.jpg 317 | /JPEGImages/2008_002695.jpg 318 | /JPEGImages/2008_002702.jpg 319 | /JPEGImages/2008_002706.jpg 320 | /JPEGImages/2008_002707.jpg 321 | /JPEGImages/2008_002722.jpg 322 | /JPEGImages/2008_002745.jpg 323 | /JPEGImages/2008_002757.jpg 324 | /JPEGImages/2008_002779.jpg 325 | /JPEGImages/2008_002805.jpg 326 | /JPEGImages/2008_002871.jpg 327 | /JPEGImages/2008_002895.jpg 328 | /JPEGImages/2008_002905.jpg 329 | /JPEGImages/2008_002923.jpg 330 | /JPEGImages/2008_002927.jpg 331 | /JPEGImages/2008_002939.jpg 332 | /JPEGImages/2008_002941.jpg 333 | 
/JPEGImages/2008_002962.jpg 334 | /JPEGImages/2008_002975.jpg 335 | /JPEGImages/2008_003000.jpg 336 | /JPEGImages/2008_003031.jpg 337 | /JPEGImages/2008_003038.jpg 338 | /JPEGImages/2008_003042.jpg 339 | /JPEGImages/2008_003069.jpg 340 | /JPEGImages/2008_003070.jpg 341 | /JPEGImages/2008_003115.jpg 342 | /JPEGImages/2008_003116.jpg 343 | /JPEGImages/2008_003130.jpg 344 | /JPEGImages/2008_003137.jpg 345 | /JPEGImages/2008_003138.jpg 346 | /JPEGImages/2008_003139.jpg 347 | /JPEGImages/2008_003165.jpg 348 | /JPEGImages/2008_003171.jpg 349 | /JPEGImages/2008_003176.jpg 350 | /JPEGImages/2008_003192.jpg 351 | /JPEGImages/2008_003194.jpg 352 | /JPEGImages/2008_003195.jpg 353 | /JPEGImages/2008_003198.jpg 354 | /JPEGImages/2008_003227.jpg 355 | /JPEGImages/2008_003247.jpg 356 | /JPEGImages/2008_003262.jpg 357 | /JPEGImages/2008_003298.jpg 358 | /JPEGImages/2008_003299.jpg 359 | /JPEGImages/2008_003307.jpg 360 | /JPEGImages/2008_003337.jpg 361 | /JPEGImages/2008_003353.jpg 362 | /JPEGImages/2008_003355.jpg 363 | /JPEGImages/2008_003363.jpg 364 | /JPEGImages/2008_003383.jpg 365 | /JPEGImages/2008_003389.jpg 366 | /JPEGImages/2008_003392.jpg 367 | /JPEGImages/2008_003399.jpg 368 | /JPEGImages/2008_003436.jpg 369 | /JPEGImages/2008_003457.jpg 370 | /JPEGImages/2008_003465.jpg 371 | /JPEGImages/2008_003481.jpg 372 | /JPEGImages/2008_003539.jpg 373 | /JPEGImages/2008_003548.jpg 374 | /JPEGImages/2008_003550.jpg 375 | /JPEGImages/2008_003567.jpg 376 | /JPEGImages/2008_003568.jpg 377 | /JPEGImages/2008_003606.jpg 378 | /JPEGImages/2008_003615.jpg 379 | /JPEGImages/2008_003654.jpg 380 | /JPEGImages/2008_003670.jpg 381 | /JPEGImages/2008_003700.jpg 382 | /JPEGImages/2008_003705.jpg 383 | /JPEGImages/2008_003727.jpg 384 | /JPEGImages/2008_003731.jpg 385 | /JPEGImages/2008_003734.jpg 386 | /JPEGImages/2008_003760.jpg 387 | /JPEGImages/2008_003804.jpg 388 | /JPEGImages/2008_003807.jpg 389 | /JPEGImages/2008_003810.jpg 390 | /JPEGImages/2008_003822.jpg 391 | /JPEGImages/2008_003833.jpg 
392 | /JPEGImages/2008_003877.jpg 393 | /JPEGImages/2008_003879.jpg 394 | /JPEGImages/2008_003895.jpg 395 | /JPEGImages/2008_003901.jpg 396 | /JPEGImages/2008_003903.jpg 397 | /JPEGImages/2008_003911.jpg 398 | /JPEGImages/2008_003919.jpg 399 | /JPEGImages/2008_003927.jpg 400 | /JPEGImages/2008_003937.jpg 401 | /JPEGImages/2008_003946.jpg 402 | /JPEGImages/2008_003950.jpg 403 | /JPEGImages/2008_003955.jpg 404 | /JPEGImages/2008_003981.jpg 405 | /JPEGImages/2008_003991.jpg 406 | /JPEGImages/2008_004009.jpg 407 | /JPEGImages/2008_004039.jpg 408 | /JPEGImages/2008_004052.jpg 409 | /JPEGImages/2008_004063.jpg 410 | /JPEGImages/2008_004070.jpg 411 | /JPEGImages/2008_004078.jpg 412 | /JPEGImages/2008_004104.jpg 413 | /JPEGImages/2008_004139.jpg 414 | /JPEGImages/2008_004177.jpg 415 | /JPEGImages/2008_004181.jpg 416 | /JPEGImages/2008_004200.jpg 417 | /JPEGImages/2008_004219.jpg 418 | /JPEGImages/2008_004236.jpg 419 | /JPEGImages/2008_004250.jpg 420 | /JPEGImages/2008_004266.jpg 421 | /JPEGImages/2008_004299.jpg 422 | /JPEGImages/2008_004320.jpg 423 | /JPEGImages/2008_004334.jpg 424 | /JPEGImages/2008_004343.jpg 425 | /JPEGImages/2008_004349.jpg 426 | /JPEGImages/2008_004366.jpg 427 | /JPEGImages/2008_004386.jpg 428 | /JPEGImages/2008_004401.jpg 429 | /JPEGImages/2008_004423.jpg 430 | /JPEGImages/2008_004448.jpg 431 | /JPEGImages/2008_004481.jpg 432 | /JPEGImages/2008_004516.jpg 433 | /JPEGImages/2008_004536.jpg 434 | /JPEGImages/2008_004582.jpg 435 | /JPEGImages/2008_004609.jpg 436 | /JPEGImages/2008_004638.jpg 437 | /JPEGImages/2008_004642.jpg 438 | /JPEGImages/2008_004644.jpg 439 | /JPEGImages/2008_004669.jpg 440 | /JPEGImages/2008_004673.jpg 441 | /JPEGImages/2008_004691.jpg 442 | /JPEGImages/2008_004693.jpg 443 | /JPEGImages/2008_004709.jpg 444 | /JPEGImages/2008_004715.jpg 445 | /JPEGImages/2008_004757.jpg 446 | /JPEGImages/2008_004775.jpg 447 | /JPEGImages/2008_004782.jpg 448 | /JPEGImages/2008_004785.jpg 449 | /JPEGImages/2008_004798.jpg 450 | 
/JPEGImages/2008_004848.jpg 451 | /JPEGImages/2008_004861.jpg 452 | /JPEGImages/2008_004870.jpg 453 | /JPEGImages/2008_004877.jpg 454 | /JPEGImages/2008_004884.jpg 455 | /JPEGImages/2008_004891.jpg 456 | /JPEGImages/2008_004901.jpg 457 | /JPEGImages/2008_004919.jpg 458 | /JPEGImages/2008_005058.jpg 459 | /JPEGImages/2008_005069.jpg 460 | /JPEGImages/2008_005086.jpg 461 | /JPEGImages/2008_005087.jpg 462 | /JPEGImages/2008_005112.jpg 463 | /JPEGImages/2008_005113.jpg 464 | /JPEGImages/2008_005118.jpg 465 | /JPEGImages/2008_005128.jpg 466 | /JPEGImages/2008_005129.jpg 467 | /JPEGImages/2008_005153.jpg 468 | /JPEGImages/2008_005161.jpg 469 | /JPEGImages/2008_005162.jpg 470 | /JPEGImages/2008_005165.jpg 471 | /JPEGImages/2008_005187.jpg 472 | /JPEGImages/2008_005227.jpg 473 | /JPEGImages/2008_005308.jpg 474 | /JPEGImages/2008_005318.jpg 475 | /JPEGImages/2008_005320.jpg 476 | /JPEGImages/2008_005351.jpg 477 | /JPEGImages/2008_005372.jpg 478 | /JPEGImages/2008_005383.jpg 479 | /JPEGImages/2008_005391.jpg 480 | /JPEGImages/2008_005407.jpg 481 | /JPEGImages/2008_005420.jpg 482 | /JPEGImages/2008_005440.jpg 483 | /JPEGImages/2008_005487.jpg 484 | /JPEGImages/2008_005493.jpg 485 | /JPEGImages/2008_005520.jpg 486 | /JPEGImages/2008_005551.jpg 487 | /JPEGImages/2008_005556.jpg 488 | /JPEGImages/2008_005576.jpg 489 | /JPEGImages/2008_005578.jpg 490 | /JPEGImages/2008_005594.jpg 491 | /JPEGImages/2008_005619.jpg 492 | /JPEGImages/2008_005629.jpg 493 | /JPEGImages/2008_005644.jpg 494 | /JPEGImages/2008_005645.jpg 495 | /JPEGImages/2008_005651.jpg 496 | /JPEGImages/2008_005661.jpg 497 | /JPEGImages/2008_005662.jpg 498 | /JPEGImages/2008_005667.jpg 499 | /JPEGImages/2008_005694.jpg 500 | /JPEGImages/2008_005697.jpg 501 | /JPEGImages/2008_005709.jpg 502 | /JPEGImages/2008_005710.jpg 503 | /JPEGImages/2008_005733.jpg 504 | /JPEGImages/2008_005749.jpg 505 | /JPEGImages/2008_005753.jpg 506 | /JPEGImages/2008_005771.jpg 507 | /JPEGImages/2008_005781.jpg 508 | /JPEGImages/2008_005793.jpg 
509 | /JPEGImages/2008_005802.jpg 510 | /JPEGImages/2008_005833.jpg 511 | /JPEGImages/2008_005844.jpg 512 | /JPEGImages/2008_005908.jpg 513 | /JPEGImages/2008_005931.jpg 514 | /JPEGImages/2008_005952.jpg 515 | /JPEGImages/2008_006016.jpg 516 | /JPEGImages/2008_006030.jpg 517 | /JPEGImages/2008_006033.jpg 518 | /JPEGImages/2008_006054.jpg 519 | /JPEGImages/2008_006073.jpg 520 | /JPEGImages/2008_006091.jpg 521 | /JPEGImages/2008_006142.jpg 522 | /JPEGImages/2008_006150.jpg 523 | /JPEGImages/2008_006206.jpg 524 | /JPEGImages/2008_006217.jpg 525 | /JPEGImages/2008_006264.jpg 526 | /JPEGImages/2008_006283.jpg 527 | /JPEGImages/2008_006308.jpg 528 | /JPEGImages/2008_006313.jpg 529 | /JPEGImages/2008_006333.jpg 530 | /JPEGImages/2008_006343.jpg 531 | /JPEGImages/2008_006381.jpg 532 | /JPEGImages/2008_006391.jpg 533 | /JPEGImages/2008_006423.jpg 534 | /JPEGImages/2008_006428.jpg 535 | /JPEGImages/2008_006440.jpg 536 | /JPEGImages/2008_006444.jpg 537 | /JPEGImages/2008_006473.jpg 538 | /JPEGImages/2008_006505.jpg 539 | /JPEGImages/2008_006531.jpg 540 | /JPEGImages/2008_006560.jpg 541 | /JPEGImages/2008_006571.jpg 542 | /JPEGImages/2008_006582.jpg 543 | /JPEGImages/2008_006594.jpg 544 | /JPEGImages/2008_006601.jpg 545 | /JPEGImages/2008_006633.jpg 546 | /JPEGImages/2008_006653.jpg 547 | /JPEGImages/2008_006678.jpg 548 | /JPEGImages/2008_006755.jpg 549 | /JPEGImages/2008_006772.jpg 550 | /JPEGImages/2008_006788.jpg 551 | /JPEGImages/2008_006799.jpg 552 | /JPEGImages/2008_006809.jpg 553 | /JPEGImages/2008_006838.jpg 554 | /JPEGImages/2008_006845.jpg 555 | /JPEGImages/2008_006852.jpg 556 | /JPEGImages/2008_006894.jpg 557 | /JPEGImages/2008_006905.jpg 558 | /JPEGImages/2008_006947.jpg 559 | /JPEGImages/2008_006983.jpg 560 | /JPEGImages/2008_007049.jpg 561 | /JPEGImages/2008_007065.jpg 562 | /JPEGImages/2008_007068.jpg 563 | /JPEGImages/2008_007111.jpg 564 | /JPEGImages/2008_007148.jpg 565 | /JPEGImages/2008_007159.jpg 566 | /JPEGImages/2008_007193.jpg 567 | 
/JPEGImages/2008_007228.jpg 568 | /JPEGImages/2008_007235.jpg 569 | /JPEGImages/2008_007249.jpg 570 | /JPEGImages/2008_007255.jpg 571 | /JPEGImages/2008_007268.jpg 572 | /JPEGImages/2008_007275.jpg 573 | /JPEGImages/2008_007292.jpg 574 | /JPEGImages/2008_007299.jpg 575 | /JPEGImages/2008_007306.jpg 576 | /JPEGImages/2008_007316.jpg 577 | /JPEGImages/2008_007400.jpg 578 | /JPEGImages/2008_007401.jpg 579 | /JPEGImages/2008_007419.jpg 580 | /JPEGImages/2008_007437.jpg 581 | /JPEGImages/2008_007483.jpg 582 | /JPEGImages/2008_007487.jpg 583 | /JPEGImages/2008_007520.jpg 584 | /JPEGImages/2008_007551.jpg 585 | /JPEGImages/2008_007603.jpg 586 | /JPEGImages/2008_007616.jpg 587 | /JPEGImages/2008_007654.jpg 588 | /JPEGImages/2008_007663.jpg 589 | /JPEGImages/2008_007708.jpg 590 | /JPEGImages/2008_007795.jpg 591 | /JPEGImages/2008_007801.jpg 592 | /JPEGImages/2008_007859.jpg 593 | /JPEGImages/2008_007903.jpg 594 | /JPEGImages/2008_007920.jpg 595 | /JPEGImages/2008_007926.jpg 596 | /JPEGImages/2008_008014.jpg 597 | /JPEGImages/2008_008017.jpg 598 | /JPEGImages/2008_008060.jpg 599 | /JPEGImages/2008_008077.jpg 600 | /JPEGImages/2008_008107.jpg 601 | /JPEGImages/2008_008108.jpg 602 | /JPEGImages/2008_008119.jpg 603 | /JPEGImages/2008_008126.jpg 604 | /JPEGImages/2008_008133.jpg 605 | /JPEGImages/2008_008144.jpg 606 | /JPEGImages/2008_008216.jpg 607 | /JPEGImages/2008_008244.jpg 608 | /JPEGImages/2008_008248.jpg 609 | /JPEGImages/2008_008250.jpg 610 | /JPEGImages/2008_008260.jpg 611 | /JPEGImages/2008_008277.jpg 612 | /JPEGImages/2008_008280.jpg 613 | /JPEGImages/2008_008290.jpg 614 | /JPEGImages/2008_008304.jpg 615 | /JPEGImages/2008_008340.jpg 616 | /JPEGImages/2008_008371.jpg 617 | /JPEGImages/2008_008390.jpg 618 | /JPEGImages/2008_008397.jpg 619 | /JPEGImages/2008_008409.jpg 620 | /JPEGImages/2008_008412.jpg 621 | /JPEGImages/2008_008419.jpg 622 | /JPEGImages/2008_008454.jpg 623 | /JPEGImages/2008_008491.jpg 624 | /JPEGImages/2008_008498.jpg 625 | /JPEGImages/2008_008565.jpg 
626 | /JPEGImages/2008_008599.jpg 627 | /JPEGImages/2008_008603.jpg 628 | /JPEGImages/2008_008631.jpg 629 | /JPEGImages/2008_008634.jpg 630 | /JPEGImages/2008_008640.jpg 631 | /JPEGImages/2008_008646.jpg 632 | /JPEGImages/2008_008660.jpg 633 | /JPEGImages/2008_008663.jpg 634 | /JPEGImages/2008_008664.jpg 635 | /JPEGImages/2008_008709.jpg 636 | /JPEGImages/2008_008720.jpg 637 | /JPEGImages/2008_008747.jpg 638 | /JPEGImages/2008_008768.jpg 639 | /JPEGImages/2009_000004.jpg 640 | /JPEGImages/2009_000019.jpg 641 | /JPEGImages/2009_000024.jpg 642 | /JPEGImages/2009_000025.jpg 643 | /JPEGImages/2009_000053.jpg 644 | /JPEGImages/2009_000076.jpg 645 | /JPEGImages/2009_000107.jpg 646 | /JPEGImages/2009_000110.jpg 647 | /JPEGImages/2009_000115.jpg 648 | /JPEGImages/2009_000117.jpg 649 | /JPEGImages/2009_000175.jpg 650 | /JPEGImages/2009_000220.jpg 651 | /JPEGImages/2009_000259.jpg 652 | /JPEGImages/2009_000275.jpg 653 | /JPEGImages/2009_000314.jpg 654 | /JPEGImages/2009_000368.jpg 655 | /JPEGImages/2009_000373.jpg 656 | /JPEGImages/2009_000384.jpg 657 | /JPEGImages/2009_000388.jpg 658 | /JPEGImages/2009_000423.jpg 659 | /JPEGImages/2009_000433.jpg 660 | /JPEGImages/2009_000434.jpg 661 | /JPEGImages/2009_000458.jpg 662 | /JPEGImages/2009_000475.jpg 663 | /JPEGImages/2009_000481.jpg 664 | /JPEGImages/2009_000495.jpg 665 | /JPEGImages/2009_000514.jpg 666 | /JPEGImages/2009_000555.jpg 667 | /JPEGImages/2009_000556.jpg 668 | /JPEGImages/2009_000561.jpg 669 | /JPEGImages/2009_000571.jpg 670 | /JPEGImages/2009_000581.jpg 671 | /JPEGImages/2009_000605.jpg 672 | /JPEGImages/2009_000609.jpg 673 | /JPEGImages/2009_000644.jpg 674 | /JPEGImages/2009_000654.jpg 675 | /JPEGImages/2009_000671.jpg 676 | /JPEGImages/2009_000733.jpg 677 | /JPEGImages/2009_000740.jpg 678 | /JPEGImages/2009_000766.jpg 679 | /JPEGImages/2009_000775.jpg 680 | /JPEGImages/2009_000776.jpg 681 | /JPEGImages/2009_000795.jpg 682 | /JPEGImages/2009_000850.jpg 683 | /JPEGImages/2009_000881.jpg 684 | 
/JPEGImages/2009_000900.jpg 685 | /JPEGImages/2009_000914.jpg 686 | /JPEGImages/2009_000941.jpg 687 | /JPEGImages/2009_000977.jpg 688 | /JPEGImages/2009_000984.jpg 689 | /JPEGImages/2009_000986.jpg 690 | /JPEGImages/2009_001005.jpg 691 | /JPEGImages/2009_001015.jpg 692 | /JPEGImages/2009_001058.jpg 693 | /JPEGImages/2009_001072.jpg 694 | /JPEGImages/2009_001087.jpg 695 | /JPEGImages/2009_001092.jpg 696 | /JPEGImages/2009_001109.jpg 697 | /JPEGImages/2009_001114.jpg 698 | /JPEGImages/2009_001115.jpg 699 | /JPEGImages/2009_001141.jpg 700 | /JPEGImages/2009_001174.jpg 701 | /JPEGImages/2009_001175.jpg 702 | /JPEGImages/2009_001182.jpg 703 | /JPEGImages/2009_001222.jpg 704 | /JPEGImages/2009_001228.jpg 705 | /JPEGImages/2009_001246.jpg 706 | /JPEGImages/2009_001262.jpg 707 | /JPEGImages/2009_001274.jpg 708 | /JPEGImages/2009_001284.jpg 709 | /JPEGImages/2009_001297.jpg 710 | /JPEGImages/2009_001331.jpg 711 | /JPEGImages/2009_001336.jpg 712 | /JPEGImages/2009_001337.jpg 713 | /JPEGImages/2009_001379.jpg 714 | /JPEGImages/2009_001392.jpg 715 | /JPEGImages/2009_001451.jpg 716 | /JPEGImages/2009_001485.jpg 717 | /JPEGImages/2009_001488.jpg 718 | /JPEGImages/2009_001497.jpg 719 | /JPEGImages/2009_001504.jpg 720 | /JPEGImages/2009_001506.jpg 721 | /JPEGImages/2009_001573.jpg 722 | /JPEGImages/2009_001576.jpg 723 | /JPEGImages/2009_001603.jpg 724 | /JPEGImages/2009_001613.jpg 725 | /JPEGImages/2009_001652.jpg 726 | /JPEGImages/2009_001661.jpg 727 | /JPEGImages/2009_001668.jpg 728 | /JPEGImages/2009_001680.jpg 729 | /JPEGImages/2009_001688.jpg 730 | /JPEGImages/2009_001697.jpg 731 | /JPEGImages/2009_001729.jpg 732 | /JPEGImages/2009_001771.jpg 733 | /JPEGImages/2009_001785.jpg 734 | /JPEGImages/2009_001793.jpg 735 | /JPEGImages/2009_001814.jpg 736 | /JPEGImages/2009_001866.jpg 737 | /JPEGImages/2009_001872.jpg 738 | /JPEGImages/2009_001880.jpg 739 | /JPEGImages/2009_001883.jpg 740 | /JPEGImages/2009_001891.jpg 741 | /JPEGImages/2009_001913.jpg 742 | /JPEGImages/2009_001938.jpg 
743 | /JPEGImages/2009_001946.jpg 744 | /JPEGImages/2009_001953.jpg 745 | /JPEGImages/2009_001969.jpg 746 | /JPEGImages/2009_001978.jpg 747 | /JPEGImages/2009_001995.jpg 748 | /JPEGImages/2009_002007.jpg 749 | /JPEGImages/2009_002036.jpg 750 | /JPEGImages/2009_002041.jpg 751 | /JPEGImages/2009_002049.jpg 752 | /JPEGImages/2009_002051.jpg 753 | /JPEGImages/2009_002062.jpg 754 | /JPEGImages/2009_002063.jpg 755 | /JPEGImages/2009_002067.jpg 756 | /JPEGImages/2009_002085.jpg 757 | /JPEGImages/2009_002092.jpg 758 | /JPEGImages/2009_002114.jpg 759 | /JPEGImages/2009_002115.jpg 760 | /JPEGImages/2009_002142.jpg 761 | /JPEGImages/2009_002148.jpg 762 | /JPEGImages/2009_002157.jpg 763 | /JPEGImages/2009_002181.jpg 764 | /JPEGImages/2009_002220.jpg 765 | /JPEGImages/2009_002284.jpg 766 | /JPEGImages/2009_002287.jpg 767 | /JPEGImages/2009_002300.jpg 768 | /JPEGImages/2009_002310.jpg 769 | /JPEGImages/2009_002315.jpg 770 | /JPEGImages/2009_002334.jpg 771 | /JPEGImages/2009_002337.jpg 772 | /JPEGImages/2009_002354.jpg 773 | /JPEGImages/2009_002357.jpg 774 | /JPEGImages/2009_002411.jpg 775 | /JPEGImages/2009_002426.jpg 776 | /JPEGImages/2009_002458.jpg 777 | /JPEGImages/2009_002459.jpg 778 | /JPEGImages/2009_002461.jpg 779 | /JPEGImages/2009_002466.jpg 780 | /JPEGImages/2009_002481.jpg 781 | /JPEGImages/2009_002483.jpg 782 | /JPEGImages/2009_002503.jpg 783 | /JPEGImages/2009_002581.jpg 784 | /JPEGImages/2009_002583.jpg 785 | /JPEGImages/2009_002589.jpg 786 | /JPEGImages/2009_002600.jpg 787 | /JPEGImages/2009_002601.jpg 788 | /JPEGImages/2009_002602.jpg 789 | /JPEGImages/2009_002641.jpg 790 | /JPEGImages/2009_002646.jpg 791 | /JPEGImages/2009_002656.jpg 792 | /JPEGImages/2009_002666.jpg 793 | /JPEGImages/2009_002720.jpg 794 | /JPEGImages/2009_002767.jpg 795 | /JPEGImages/2009_002768.jpg 796 | /JPEGImages/2009_002794.jpg 797 | /JPEGImages/2009_002821.jpg 798 | /JPEGImages/2009_002825.jpg 799 | /JPEGImages/2009_002839.jpg 800 | /JPEGImages/2009_002840.jpg 801 | 
/JPEGImages/2009_002859.jpg 802 | /JPEGImages/2009_002860.jpg 803 | /JPEGImages/2009_002881.jpg 804 | /JPEGImages/2009_002889.jpg 805 | /JPEGImages/2009_002892.jpg 806 | /JPEGImages/2009_002895.jpg 807 | /JPEGImages/2009_002896.jpg 808 | /JPEGImages/2009_002900.jpg 809 | /JPEGImages/2009_002924.jpg 810 | /JPEGImages/2009_002966.jpg 811 | /JPEGImages/2009_002973.jpg 812 | /JPEGImages/2009_002981.jpg 813 | /JPEGImages/2009_003004.jpg 814 | /JPEGImages/2009_003021.jpg 815 | /JPEGImages/2009_003028.jpg 816 | /JPEGImages/2009_003037.jpg 817 | /JPEGImages/2009_003038.jpg 818 | /JPEGImages/2009_003055.jpg 819 | /JPEGImages/2009_003085.jpg 820 | /JPEGImages/2009_003100.jpg 821 | /JPEGImages/2009_003106.jpg 822 | /JPEGImages/2009_003117.jpg 823 | /JPEGImages/2009_003139.jpg 824 | /JPEGImages/2009_003170.jpg 825 | /JPEGImages/2009_003179.jpg 826 | /JPEGImages/2009_003184.jpg 827 | /JPEGImages/2009_003186.jpg 828 | /JPEGImages/2009_003190.jpg 829 | /JPEGImages/2009_003221.jpg 830 | /JPEGImages/2009_003236.jpg 831 | /JPEGImages/2009_003242.jpg 832 | /JPEGImages/2009_003244.jpg 833 | /JPEGImages/2009_003260.jpg 834 | /JPEGImages/2009_003264.jpg 835 | /JPEGImages/2009_003274.jpg 836 | /JPEGImages/2009_003283.jpg 837 | /JPEGImages/2009_003296.jpg 838 | /JPEGImages/2009_003332.jpg 839 | /JPEGImages/2009_003341.jpg 840 | /JPEGImages/2009_003354.jpg 841 | /JPEGImages/2009_003370.jpg 842 | /JPEGImages/2009_003371.jpg 843 | /JPEGImages/2009_003374.jpg 844 | /JPEGImages/2009_003391.jpg 845 | /JPEGImages/2009_003393.jpg 846 | /JPEGImages/2009_003404.jpg 847 | /JPEGImages/2009_003405.jpg 848 | /JPEGImages/2009_003414.jpg 849 | /JPEGImages/2009_003428.jpg 850 | /JPEGImages/2009_003470.jpg 851 | /JPEGImages/2009_003474.jpg 852 | /JPEGImages/2009_003532.jpg 853 | /JPEGImages/2009_003536.jpg 854 | /JPEGImages/2009_003578.jpg 855 | /JPEGImages/2009_003580.jpg 856 | /JPEGImages/2009_003620.jpg 857 | /JPEGImages/2009_003621.jpg 858 | /JPEGImages/2009_003680.jpg 859 | /JPEGImages/2009_003699.jpg 
860 | /JPEGImages/2009_003727.jpg 861 | /JPEGImages/2009_003737.jpg 862 | /JPEGImages/2009_003780.jpg 863 | /JPEGImages/2009_003811.jpg 864 | /JPEGImages/2009_003824.jpg 865 | /JPEGImages/2009_003831.jpg 866 | /JPEGImages/2009_003844.jpg 867 | /JPEGImages/2009_003850.jpg 868 | /JPEGImages/2009_003851.jpg 869 | /JPEGImages/2009_003864.jpg 870 | /JPEGImages/2009_003868.jpg 871 | /JPEGImages/2009_003869.jpg 872 | /JPEGImages/2009_003893.jpg 873 | /JPEGImages/2009_003909.jpg 874 | /JPEGImages/2009_003924.jpg 875 | /JPEGImages/2009_003925.jpg 876 | /JPEGImages/2009_003960.jpg 877 | /JPEGImages/2009_003979.jpg 878 | /JPEGImages/2009_003990.jpg 879 | /JPEGImages/2009_003997.jpg 880 | /JPEGImages/2009_004006.jpg 881 | /JPEGImages/2009_004010.jpg 882 | /JPEGImages/2009_004066.jpg 883 | /JPEGImages/2009_004077.jpg 884 | /JPEGImages/2009_004081.jpg 885 | /JPEGImages/2009_004097.jpg 886 | /JPEGImages/2009_004098.jpg 887 | /JPEGImages/2009_004136.jpg 888 | /JPEGImages/2009_004216.jpg 889 | /JPEGImages/2009_004220.jpg 890 | /JPEGImages/2009_004266.jpg 891 | /JPEGImages/2009_004269.jpg 892 | /JPEGImages/2009_004286.jpg 893 | /JPEGImages/2009_004296.jpg 894 | /JPEGImages/2009_004321.jpg 895 | /JPEGImages/2009_004342.jpg 896 | /JPEGImages/2009_004343.jpg 897 | /JPEGImages/2009_004344.jpg 898 | /JPEGImages/2009_004385.jpg 899 | /JPEGImages/2009_004408.jpg 900 | /JPEGImages/2009_004420.jpg 901 | /JPEGImages/2009_004441.jpg 902 | /JPEGImages/2009_004447.jpg 903 | /JPEGImages/2009_004461.jpg 904 | /JPEGImages/2009_004467.jpg 905 | /JPEGImages/2009_004485.jpg 906 | /JPEGImages/2009_004488.jpg 907 | /JPEGImages/2009_004516.jpg 908 | /JPEGImages/2009_004521.jpg 909 | /JPEGImages/2009_004544.jpg 910 | /JPEGImages/2009_004596.jpg 911 | /JPEGImages/2009_004613.jpg 912 | /JPEGImages/2009_004615.jpg 913 | /JPEGImages/2009_004618.jpg 914 | /JPEGImages/2009_004621.jpg 915 | /JPEGImages/2009_004646.jpg 916 | /JPEGImages/2009_004659.jpg 917 | /JPEGImages/2009_004663.jpg 918 | 
/JPEGImages/2009_004666.jpg 919 | /JPEGImages/2009_004691.jpg 920 | /JPEGImages/2009_004715.jpg 921 | /JPEGImages/2009_004726.jpg 922 | /JPEGImages/2009_004753.jpg 923 | /JPEGImages/2009_004776.jpg 924 | /JPEGImages/2009_004811.jpg 925 | /JPEGImages/2009_004814.jpg 926 | /JPEGImages/2009_004818.jpg 927 | /JPEGImages/2009_004835.jpg 928 | /JPEGImages/2009_004863.jpg 929 | /JPEGImages/2009_004894.jpg 930 | /JPEGImages/2009_004909.jpg 931 | /JPEGImages/2009_004928.jpg 932 | /JPEGImages/2009_004937.jpg 933 | /JPEGImages/2009_004954.jpg 934 | /JPEGImages/2009_004966.jpg 935 | /JPEGImages/2009_004970.jpg 936 | /JPEGImages/2009_004976.jpg 937 | /JPEGImages/2009_005004.jpg 938 | /JPEGImages/2009_005011.jpg 939 | /JPEGImages/2009_005053.jpg 940 | /JPEGImages/2009_005072.jpg 941 | /JPEGImages/2009_005115.jpg 942 | /JPEGImages/2009_005146.jpg 943 | /JPEGImages/2009_005151.jpg 944 | /JPEGImages/2009_005164.jpg 945 | /JPEGImages/2009_005179.jpg 946 | /JPEGImages/2009_005224.jpg 947 | /JPEGImages/2009_005243.jpg 948 | /JPEGImages/2009_005249.jpg 949 | /JPEGImages/2009_005252.jpg 950 | /JPEGImages/2009_005254.jpg 951 | /JPEGImages/2009_005258.jpg 952 | /JPEGImages/2009_005264.jpg 953 | /JPEGImages/2009_005266.jpg 954 | /JPEGImages/2009_005276.jpg 955 | /JPEGImages/2009_005290.jpg 956 | /JPEGImages/2009_005295.jpg 957 | /JPEGImages/2010_000004.jpg 958 | /JPEGImages/2010_000005.jpg 959 | /JPEGImages/2010_000006.jpg 960 | /JPEGImages/2010_000032.jpg 961 | /JPEGImages/2010_000062.jpg 962 | /JPEGImages/2010_000093.jpg 963 | /JPEGImages/2010_000094.jpg 964 | /JPEGImages/2010_000161.jpg 965 | /JPEGImages/2010_000176.jpg 966 | /JPEGImages/2010_000223.jpg 967 | /JPEGImages/2010_000226.jpg 968 | /JPEGImages/2010_000236.jpg 969 | /JPEGImages/2010_000239.jpg 970 | /JPEGImages/2010_000287.jpg 971 | /JPEGImages/2010_000300.jpg 972 | /JPEGImages/2010_000301.jpg 973 | /JPEGImages/2010_000328.jpg 974 | /JPEGImages/2010_000378.jpg 975 | /JPEGImages/2010_000405.jpg 976 | /JPEGImages/2010_000407.jpg 
977 | /JPEGImages/2010_000472.jpg 978 | /JPEGImages/2010_000479.jpg 979 | /JPEGImages/2010_000491.jpg 980 | /JPEGImages/2010_000533.jpg 981 | /JPEGImages/2010_000535.jpg 982 | /JPEGImages/2010_000542.jpg 983 | /JPEGImages/2010_000554.jpg 984 | /JPEGImages/2010_000580.jpg 985 | /JPEGImages/2010_000594.jpg 986 | /JPEGImages/2010_000596.jpg 987 | /JPEGImages/2010_000599.jpg 988 | /JPEGImages/2010_000606.jpg 989 | /JPEGImages/2010_000615.jpg 990 | /JPEGImages/2010_000654.jpg 991 | /JPEGImages/2010_000659.jpg 992 | /JPEGImages/2010_000693.jpg 993 | /JPEGImages/2010_000698.jpg 994 | /JPEGImages/2010_000730.jpg 995 | /JPEGImages/2010_000734.jpg 996 | /JPEGImages/2010_000741.jpg 997 | /JPEGImages/2010_000755.jpg 998 | /JPEGImages/2010_000768.jpg 999 | /JPEGImages/2010_000794.jpg 1000 | /JPEGImages/2010_000813.jpg 1001 | /JPEGImages/2010_000817.jpg 1002 | /JPEGImages/2010_000834.jpg 1003 | /JPEGImages/2010_000839.jpg 1004 | /JPEGImages/2010_000848.jpg 1005 | /JPEGImages/2010_000881.jpg 1006 | /JPEGImages/2010_000888.jpg 1007 | /JPEGImages/2010_000900.jpg 1008 | /JPEGImages/2010_000903.jpg 1009 | /JPEGImages/2010_000924.jpg 1010 | /JPEGImages/2010_000946.jpg 1011 | /JPEGImages/2010_000953.jpg 1012 | /JPEGImages/2010_000957.jpg 1013 | /JPEGImages/2010_000967.jpg 1014 | /JPEGImages/2010_000992.jpg 1015 | /JPEGImages/2010_000998.jpg 1016 | /JPEGImages/2010_001053.jpg 1017 | /JPEGImages/2010_001067.jpg 1018 | /JPEGImages/2010_001114.jpg 1019 | /JPEGImages/2010_001132.jpg 1020 | /JPEGImages/2010_001138.jpg 1021 | /JPEGImages/2010_001169.jpg 1022 | /JPEGImages/2010_001171.jpg 1023 | /JPEGImages/2010_001228.jpg 1024 | /JPEGImages/2010_001260.jpg 1025 | /JPEGImages/2010_001268.jpg 1026 | /JPEGImages/2010_001280.jpg 1027 | /JPEGImages/2010_001298.jpg 1028 | /JPEGImages/2010_001302.jpg 1029 | /JPEGImages/2010_001308.jpg 1030 | /JPEGImages/2010_001324.jpg 1031 | /JPEGImages/2010_001332.jpg 1032 | /JPEGImages/2010_001335.jpg 1033 | /JPEGImages/2010_001345.jpg 1034 | 
/JPEGImages/2010_001346.jpg 1035 | /JPEGImages/2010_001349.jpg 1036 | /JPEGImages/2010_001373.jpg 1037 | /JPEGImages/2010_001381.jpg 1038 | /JPEGImages/2010_001392.jpg 1039 | /JPEGImages/2010_001396.jpg 1040 | /JPEGImages/2010_001420.jpg 1041 | /JPEGImages/2010_001500.jpg 1042 | /JPEGImages/2010_001506.jpg 1043 | /JPEGImages/2010_001521.jpg 1044 | /JPEGImages/2010_001532.jpg 1045 | /JPEGImages/2010_001558.jpg 1046 | /JPEGImages/2010_001598.jpg 1047 | /JPEGImages/2010_001611.jpg 1048 | /JPEGImages/2010_001631.jpg 1049 | /JPEGImages/2010_001639.jpg 1050 | /JPEGImages/2010_001651.jpg 1051 | /JPEGImages/2010_001663.jpg 1052 | /JPEGImages/2010_001664.jpg 1053 | /JPEGImages/2010_001728.jpg 1054 | /JPEGImages/2010_001778.jpg 1055 | /JPEGImages/2010_001861.jpg 1056 | /JPEGImages/2010_001874.jpg 1057 | /JPEGImages/2010_001900.jpg 1058 | /JPEGImages/2010_001905.jpg 1059 | /JPEGImages/2010_001969.jpg 1060 | /JPEGImages/2010_002008.jpg 1061 | /JPEGImages/2010_002014.jpg 1062 | /JPEGImages/2010_002049.jpg 1063 | /JPEGImages/2010_002052.jpg 1064 | /JPEGImages/2010_002091.jpg 1065 | /JPEGImages/2010_002115.jpg 1066 | /JPEGImages/2010_002119.jpg 1067 | /JPEGImages/2010_002134.jpg 1068 | /JPEGImages/2010_002156.jpg 1069 | /JPEGImages/2010_002160.jpg 1070 | /JPEGImages/2010_002186.jpg 1071 | /JPEGImages/2010_002210.jpg 1072 | /JPEGImages/2010_002241.jpg 1073 | /JPEGImages/2010_002252.jpg 1074 | /JPEGImages/2010_002258.jpg 1075 | /JPEGImages/2010_002262.jpg 1076 | /JPEGImages/2010_002273.jpg 1077 | /JPEGImages/2010_002290.jpg 1078 | /JPEGImages/2010_002292.jpg 1079 | /JPEGImages/2010_002347.jpg 1080 | /JPEGImages/2010_002358.jpg 1081 | /JPEGImages/2010_002360.jpg 1082 | /JPEGImages/2010_002367.jpg 1083 | /JPEGImages/2010_002416.jpg 1084 | /JPEGImages/2010_002451.jpg 1085 | /JPEGImages/2010_002481.jpg 1086 | /JPEGImages/2010_002490.jpg 1087 | /JPEGImages/2010_002495.jpg 1088 | /JPEGImages/2010_002588.jpg 1089 | /JPEGImages/2010_002607.jpg 1090 | /JPEGImages/2010_002609.jpg 1091 | 
/JPEGImages/2010_002610.jpg 1092 | /JPEGImages/2010_002641.jpg 1093 | /JPEGImages/2010_002685.jpg 1094 | /JPEGImages/2010_002699.jpg 1095 | /JPEGImages/2010_002719.jpg 1096 | /JPEGImages/2010_002735.jpg 1097 | /JPEGImages/2010_002751.jpg 1098 | /JPEGImages/2010_002804.jpg 1099 | /JPEGImages/2010_002835.jpg 1100 | /JPEGImages/2010_002852.jpg 1101 | /JPEGImages/2010_002885.jpg 1102 | /JPEGImages/2010_002889.jpg 1103 | /JPEGImages/2010_002904.jpg 1104 | /JPEGImages/2010_002908.jpg 1105 | /JPEGImages/2010_002916.jpg 1106 | /JPEGImages/2010_002974.jpg 1107 | /JPEGImages/2010_002977.jpg 1108 | /JPEGImages/2010_003005.jpg 1109 | /JPEGImages/2010_003021.jpg 1110 | /JPEGImages/2010_003030.jpg 1111 | /JPEGImages/2010_003038.jpg 1112 | /JPEGImages/2010_003046.jpg 1113 | /JPEGImages/2010_003052.jpg 1114 | /JPEGImages/2010_003089.jpg 1115 | /JPEGImages/2010_003110.jpg 1116 | /JPEGImages/2010_003118.jpg 1117 | /JPEGImages/2010_003171.jpg 1118 | /JPEGImages/2010_003217.jpg 1119 | /JPEGImages/2010_003221.jpg 1120 | /JPEGImages/2010_003228.jpg 1121 | /JPEGImages/2010_003243.jpg 1122 | /JPEGImages/2010_003271.jpg 1123 | /JPEGImages/2010_003295.jpg 1124 | /JPEGImages/2010_003306.jpg 1125 | /JPEGImages/2010_003324.jpg 1126 | /JPEGImages/2010_003363.jpg 1127 | /JPEGImages/2010_003382.jpg 1128 | /JPEGImages/2010_003388.jpg 1129 | /JPEGImages/2010_003389.jpg 1130 | /JPEGImages/2010_003392.jpg 1131 | /JPEGImages/2010_003430.jpg 1132 | /JPEGImages/2010_003442.jpg 1133 | /JPEGImages/2010_003459.jpg 1134 | /JPEGImages/2010_003485.jpg 1135 | /JPEGImages/2010_003486.jpg 1136 | /JPEGImages/2010_003500.jpg 1137 | /JPEGImages/2010_003523.jpg 1138 | /JPEGImages/2010_003542.jpg 1139 | /JPEGImages/2010_003552.jpg 1140 | /JPEGImages/2010_003570.jpg 1141 | /JPEGImages/2010_003572.jpg 1142 | /JPEGImages/2010_003586.jpg 1143 | /JPEGImages/2010_003615.jpg 1144 | /JPEGImages/2010_003623.jpg 1145 | /JPEGImages/2010_003657.jpg 1146 | /JPEGImages/2010_003666.jpg 1147 | /JPEGImages/2010_003705.jpg 1148 | 
/JPEGImages/2010_003710.jpg 1149 | /JPEGImages/2010_003720.jpg 1150 | /JPEGImages/2010_003733.jpg 1151 | /JPEGImages/2010_003750.jpg 1152 | /JPEGImages/2010_003767.jpg 1153 | /JPEGImages/2010_003802.jpg 1154 | /JPEGImages/2010_003809.jpg 1155 | /JPEGImages/2010_003830.jpg 1156 | /JPEGImages/2010_003832.jpg 1157 | /JPEGImages/2010_003836.jpg 1158 | /JPEGImages/2010_003838.jpg 1159 | /JPEGImages/2010_003850.jpg 1160 | /JPEGImages/2010_003867.jpg 1161 | /JPEGImages/2010_003882.jpg 1162 | /JPEGImages/2010_003909.jpg 1163 | /JPEGImages/2010_003922.jpg 1164 | /JPEGImages/2010_003923.jpg 1165 | /JPEGImages/2010_003978.jpg 1166 | /JPEGImages/2010_003989.jpg 1167 | /JPEGImages/2010_003990.jpg 1168 | /JPEGImages/2010_004000.jpg 1169 | /JPEGImages/2010_004003.jpg 1170 | /JPEGImages/2010_004068.jpg 1171 | /JPEGImages/2010_004076.jpg 1172 | /JPEGImages/2010_004117.jpg 1173 | /JPEGImages/2010_004136.jpg 1174 | /JPEGImages/2010_004142.jpg 1175 | /JPEGImages/2010_004195.jpg 1176 | /JPEGImages/2010_004200.jpg 1177 | /JPEGImages/2010_004202.jpg 1178 | /JPEGImages/2010_004232.jpg 1179 | /JPEGImages/2010_004261.jpg 1180 | /JPEGImages/2010_004266.jpg 1181 | /JPEGImages/2010_004273.jpg 1182 | /JPEGImages/2010_004305.jpg 1183 | /JPEGImages/2010_004403.jpg 1184 | /JPEGImages/2010_004433.jpg 1185 | /JPEGImages/2010_004434.jpg 1186 | /JPEGImages/2010_004435.jpg 1187 | /JPEGImages/2010_004438.jpg 1188 | /JPEGImages/2010_004442.jpg 1189 | /JPEGImages/2010_004473.jpg 1190 | /JPEGImages/2010_004482.jpg 1191 | /JPEGImages/2010_004487.jpg 1192 | /JPEGImages/2010_004489.jpg 1193 | /JPEGImages/2010_004512.jpg 1194 | /JPEGImages/2010_004525.jpg 1195 | /JPEGImages/2010_004527.jpg 1196 | /JPEGImages/2010_004532.jpg 1197 | /JPEGImages/2010_004566.jpg 1198 | /JPEGImages/2010_004568.jpg 1199 | /JPEGImages/2010_004579.jpg 1200 | /JPEGImages/2010_004611.jpg 1201 | /JPEGImages/2010_004641.jpg 1202 | /JPEGImages/2010_004688.jpg 1203 | /JPEGImages/2010_004699.jpg 1204 | /JPEGImages/2010_004702.jpg 1205 | 
/JPEGImages/2010_004716.jpg 1206 | /JPEGImages/2010_004754.jpg 1207 | /JPEGImages/2010_004767.jpg 1208 | /JPEGImages/2010_004776.jpg 1209 | /JPEGImages/2010_004811.jpg 1210 | /JPEGImages/2010_004837.jpg 1211 | /JPEGImages/2010_004839.jpg 1212 | /JPEGImages/2010_004845.jpg 1213 | /JPEGImages/2010_004860.jpg 1214 | /JPEGImages/2010_004867.jpg 1215 | /JPEGImages/2010_004881.jpg 1216 | /JPEGImages/2010_004939.jpg 1217 | /JPEGImages/2010_005001.jpg 1218 | /JPEGImages/2010_005047.jpg 1219 | /JPEGImages/2010_005051.jpg 1220 | /JPEGImages/2010_005091.jpg 1221 | /JPEGImages/2010_005095.jpg 1222 | /JPEGImages/2010_005125.jpg 1223 | /JPEGImages/2010_005140.jpg 1224 | /JPEGImages/2010_005177.jpg 1225 | /JPEGImages/2010_005178.jpg 1226 | /JPEGImages/2010_005194.jpg 1227 | /JPEGImages/2010_005197.jpg 1228 | /JPEGImages/2010_005200.jpg 1229 | /JPEGImages/2010_005205.jpg 1230 | /JPEGImages/2010_005212.jpg 1231 | /JPEGImages/2010_005248.jpg 1232 | /JPEGImages/2010_005294.jpg 1233 | /JPEGImages/2010_005298.jpg 1234 | /JPEGImages/2010_005313.jpg 1235 | /JPEGImages/2010_005324.jpg 1236 | /JPEGImages/2010_005328.jpg 1237 | /JPEGImages/2010_005329.jpg 1238 | /JPEGImages/2010_005380.jpg 1239 | /JPEGImages/2010_005404.jpg 1240 | /JPEGImages/2010_005407.jpg 1241 | /JPEGImages/2010_005411.jpg 1242 | /JPEGImages/2010_005423.jpg 1243 | /JPEGImages/2010_005499.jpg 1244 | /JPEGImages/2010_005509.jpg 1245 | /JPEGImages/2010_005510.jpg 1246 | /JPEGImages/2010_005544.jpg 1247 | /JPEGImages/2010_005549.jpg 1248 | /JPEGImages/2010_005590.jpg 1249 | /JPEGImages/2010_005639.jpg 1250 | /JPEGImages/2010_005699.jpg 1251 | /JPEGImages/2010_005704.jpg 1252 | /JPEGImages/2010_005707.jpg 1253 | /JPEGImages/2010_005711.jpg 1254 | /JPEGImages/2010_005726.jpg 1255 | /JPEGImages/2010_005741.jpg 1256 | /JPEGImages/2010_005765.jpg 1257 | /JPEGImages/2010_005790.jpg 1258 | /JPEGImages/2010_005792.jpg 1259 | /JPEGImages/2010_005797.jpg 1260 | /JPEGImages/2010_005812.jpg 1261 | /JPEGImages/2010_005850.jpg 1262 | 
/JPEGImages/2010_005861.jpg 1263 | /JPEGImages/2010_005869.jpg 1264 | /JPEGImages/2010_005908.jpg 1265 | /JPEGImages/2010_005915.jpg 1266 | /JPEGImages/2010_005946.jpg 1267 | /JPEGImages/2010_005965.jpg 1268 | /JPEGImages/2010_006044.jpg 1269 | /JPEGImages/2010_006047.jpg 1270 | /JPEGImages/2010_006052.jpg 1271 | /JPEGImages/2010_006081.jpg 1272 | /JPEGImages/2011_000001.jpg 1273 | /JPEGImages/2011_000013.jpg 1274 | /JPEGImages/2011_000014.jpg 1275 | /JPEGImages/2011_000020.jpg 1276 | /JPEGImages/2011_000032.jpg 1277 | /JPEGImages/2011_000042.jpg 1278 | /JPEGImages/2011_000063.jpg 1279 | /JPEGImages/2011_000115.jpg 1280 | /JPEGImages/2011_000120.jpg 1281 | /JPEGImages/2011_000240.jpg 1282 | /JPEGImages/2011_000244.jpg 1283 | /JPEGImages/2011_000254.jpg 1284 | /JPEGImages/2011_000261.jpg 1285 | /JPEGImages/2011_000262.jpg 1286 | /JPEGImages/2011_000271.jpg 1287 | /JPEGImages/2011_000274.jpg 1288 | /JPEGImages/2011_000306.jpg 1289 | /JPEGImages/2011_000311.jpg 1290 | /JPEGImages/2011_000316.jpg 1291 | /JPEGImages/2011_000328.jpg 1292 | /JPEGImages/2011_000351.jpg 1293 | /JPEGImages/2011_000352.jpg 1294 | /JPEGImages/2011_000406.jpg 1295 | /JPEGImages/2011_000414.jpg 1296 | /JPEGImages/2011_000448.jpg 1297 | /JPEGImages/2011_000451.jpg 1298 | /JPEGImages/2011_000470.jpg 1299 | /JPEGImages/2011_000473.jpg 1300 | /JPEGImages/2011_000515.jpg 1301 | /JPEGImages/2011_000537.jpg 1302 | /JPEGImages/2011_000576.jpg 1303 | /JPEGImages/2011_000603.jpg 1304 | /JPEGImages/2011_000616.jpg 1305 | /JPEGImages/2011_000636.jpg 1306 | /JPEGImages/2011_000639.jpg 1307 | /JPEGImages/2011_000654.jpg 1308 | /JPEGImages/2011_000660.jpg 1309 | /JPEGImages/2011_000664.jpg 1310 | /JPEGImages/2011_000667.jpg 1311 | /JPEGImages/2011_000670.jpg 1312 | /JPEGImages/2011_000676.jpg 1313 | /JPEGImages/2011_000721.jpg 1314 | /JPEGImages/2011_000723.jpg 1315 | /JPEGImages/2011_000762.jpg 1316 | /JPEGImages/2011_000766.jpg 1317 | /JPEGImages/2011_000786.jpg 1318 | /JPEGImages/2011_000802.jpg 1319 | 
/JPEGImages/2011_000810.jpg 1320 | /JPEGImages/2011_000821.jpg 1321 | /JPEGImages/2011_000841.jpg 1322 | /JPEGImages/2011_000844.jpg 1323 | /JPEGImages/2011_000846.jpg 1324 | /JPEGImages/2011_000869.jpg 1325 | /JPEGImages/2011_000890.jpg 1326 | /JPEGImages/2011_000915.jpg 1327 | /JPEGImages/2011_000924.jpg 1328 | /JPEGImages/2011_000937.jpg 1329 | /JPEGImages/2011_000939.jpg 1330 | /JPEGImages/2011_000952.jpg 1331 | /JPEGImages/2011_000968.jpg 1332 | /JPEGImages/2011_000974.jpg 1333 | /JPEGImages/2011_001037.jpg 1334 | /JPEGImages/2011_001072.jpg 1335 | /JPEGImages/2011_001085.jpg 1336 | /JPEGImages/2011_001089.jpg 1337 | /JPEGImages/2011_001090.jpg 1338 | /JPEGImages/2011_001099.jpg 1339 | /JPEGImages/2011_001104.jpg 1340 | /JPEGImages/2011_001112.jpg 1341 | /JPEGImages/2011_001120.jpg 1342 | /JPEGImages/2011_001132.jpg 1343 | /JPEGImages/2011_001151.jpg 1344 | /JPEGImages/2011_001194.jpg 1345 | /JPEGImages/2011_001258.jpg 1346 | /JPEGImages/2011_001274.jpg 1347 | /JPEGImages/2011_001314.jpg 1348 | /JPEGImages/2011_001317.jpg 1349 | /JPEGImages/2011_001321.jpg 1350 | /JPEGImages/2011_001379.jpg 1351 | /JPEGImages/2011_001425.jpg 1352 | /JPEGImages/2011_001431.jpg 1353 | /JPEGImages/2011_001443.jpg 1354 | /JPEGImages/2011_001446.jpg 1355 | /JPEGImages/2011_001452.jpg 1356 | /JPEGImages/2011_001454.jpg 1357 | /JPEGImages/2011_001477.jpg 1358 | /JPEGImages/2011_001509.jpg 1359 | /JPEGImages/2011_001512.jpg 1360 | /JPEGImages/2011_001515.jpg 1361 | /JPEGImages/2011_001528.jpg 1362 | /JPEGImages/2011_001554.jpg 1363 | /JPEGImages/2011_001561.jpg 1364 | /JPEGImages/2011_001580.jpg 1365 | /JPEGImages/2011_001587.jpg 1366 | /JPEGImages/2011_001623.jpg 1367 | /JPEGImages/2011_001648.jpg 1368 | /JPEGImages/2011_001651.jpg 1369 | /JPEGImages/2011_001654.jpg 1370 | /JPEGImages/2011_001684.jpg 1371 | /JPEGImages/2011_001696.jpg 1372 | /JPEGImages/2011_001697.jpg 1373 | /JPEGImages/2011_001760.jpg 1374 | /JPEGImages/2011_001761.jpg 1375 | /JPEGImages/2011_001798.jpg 1376 | 
/JPEGImages/2011_001807.jpg 1377 | /JPEGImages/2011_001851.jpg 1378 | /JPEGImages/2011_001852.jpg 1379 | /JPEGImages/2011_001853.jpg 1380 | /JPEGImages/2011_001888.jpg 1381 | /JPEGImages/2011_001940.jpg 1382 | /JPEGImages/2011_002014.jpg 1383 | /JPEGImages/2011_002028.jpg 1384 | /JPEGImages/2011_002056.jpg 1385 | /JPEGImages/2011_002061.jpg 1386 | /JPEGImages/2011_002068.jpg 1387 | /JPEGImages/2011_002076.jpg 1388 | /JPEGImages/2011_002090.jpg 1389 | /JPEGImages/2011_002095.jpg 1390 | /JPEGImages/2011_002104.jpg 1391 | /JPEGImages/2011_002136.jpg 1392 | /JPEGImages/2011_002138.jpg 1393 | /JPEGImages/2011_002151.jpg 1394 | /JPEGImages/2011_002153.jpg 1395 | /JPEGImages/2011_002155.jpg 1396 | /JPEGImages/2011_002197.jpg 1397 | /JPEGImages/2011_002198.jpg 1398 | /JPEGImages/2011_002243.jpg 1399 | /JPEGImages/2011_002250.jpg 1400 | /JPEGImages/2011_002257.jpg 1401 | /JPEGImages/2011_002262.jpg 1402 | /JPEGImages/2011_002264.jpg 1403 | /JPEGImages/2011_002296.jpg 1404 | /JPEGImages/2011_002314.jpg 1405 | /JPEGImages/2011_002331.jpg 1406 | /JPEGImages/2011_002333.jpg 1407 | /JPEGImages/2011_002411.jpg 1408 | /JPEGImages/2011_002417.jpg 1409 | /JPEGImages/2011_002425.jpg 1410 | /JPEGImages/2011_002437.jpg 1411 | /JPEGImages/2011_002444.jpg 1412 | /JPEGImages/2011_002445.jpg 1413 | /JPEGImages/2011_002449.jpg 1414 | /JPEGImages/2011_002468.jpg 1415 | /JPEGImages/2011_002469.jpg 1416 | /JPEGImages/2011_002473.jpg 1417 | /JPEGImages/2011_002508.jpg 1418 | /JPEGImages/2011_002523.jpg 1419 | /JPEGImages/2011_002534.jpg 1420 | /JPEGImages/2011_002557.jpg 1421 | /JPEGImages/2011_002564.jpg 1422 | /JPEGImages/2011_002572.jpg 1423 | /JPEGImages/2011_002597.jpg 1424 | /JPEGImages/2011_002622.jpg 1425 | /JPEGImages/2011_002632.jpg 1426 | /JPEGImages/2011_002635.jpg 1427 | /JPEGImages/2011_002643.jpg 1428 | /JPEGImages/2011_002653.jpg 1429 | /JPEGImages/2011_002667.jpg 1430 | /JPEGImages/2011_002681.jpg 1431 | /JPEGImages/2011_002707.jpg 1432 | /JPEGImages/2011_002736.jpg 1433 | 
/JPEGImages/2011_002759.jpg 1434 | /JPEGImages/2011_002783.jpg 1435 | /JPEGImages/2011_002792.jpg 1436 | /JPEGImages/2011_002799.jpg 1437 | /JPEGImages/2011_002824.jpg 1438 | /JPEGImages/2011_002835.jpg 1439 | /JPEGImages/2011_002866.jpg 1440 | /JPEGImages/2011_002876.jpg 1441 | /JPEGImages/2011_002888.jpg 1442 | /JPEGImages/2011_002894.jpg 1443 | /JPEGImages/2011_002903.jpg 1444 | /JPEGImages/2011_002905.jpg 1445 | /JPEGImages/2011_002986.jpg 1446 | /JPEGImages/2011_003045.jpg 1447 | /JPEGImages/2011_003064.jpg 1448 | /JPEGImages/2011_003070.jpg 1449 | /JPEGImages/2011_003083.jpg 1450 | /JPEGImages/2011_003093.jpg 1451 | /JPEGImages/2011_003096.jpg 1452 | /JPEGImages/2011_003102.jpg 1453 | /JPEGImages/2011_003156.jpg 1454 | /JPEGImages/2011_003170.jpg 1455 | /JPEGImages/2011_003178.jpg 1456 | /JPEGImages/2011_003231.jpg -------------------------------------------------------------------------------- /dataset_cityscapes/generate_dataset_txt.py: -------------------------------------------------------------------------------- 1 | import os, glob, sys 2 | 3 | # Print an error message and quit 4 | def printError(message): 5 | print('ERROR: {}'.format(message)) 6 | sys.exit(-1) 7 | 8 | def main(): 9 | # Where to look for Cityscapes 10 | cityscapesPath = os.environ['CITYSCAPES_DATASET'] 11 | # how to search for all ground truth 12 | searchTrainFine = os.path.join(cityscapesPath, "gtFine", "train" , "*", "*_gt*_labelTrainIds.png") 13 | searchValFine = os.path.join(cityscapesPath, "gtFine", "val" , "*", "*_gt*_labelTrainIds.png") 14 | searchTrainCoarse = os.path.join(cityscapesPath, "gtCoarse", "train" , "*", "*_gt*_labelTrainIds.png") 15 | searchValCoarse = os.path.join(cityscapesPath, "gtCoarse", "val" , "*", "*_gt*_labelTrainIds.png") 16 | searchExTrainCoarse = os.path.join(cityscapesPath, "gtCoarse", "train_extra", "*", "*_gt*_labelTrainIds.png") 17 | searchTrainImg = os.path.join(cityscapesPath, "leftImg8bit", "train" , "*", "*_leftImg8bit.png") 18 | searchValImg = 
os.path.join(cityscapesPath, "leftImg8bit", "val" , "*", "*_leftImg8bit.png") 19 | searchExTrainImg = os.path.join(cityscapesPath, "leftImg8bit", "train_extra" , "*", "*_leftImg8bit.png") 20 | searchTestImg = os.path.join(cityscapesPath, "leftImg8bit", "test" , "*", "*_leftImg8bit.png") 21 | 22 | # search files 23 | filesTrainFine = glob.glob(searchTrainFine) 24 | filesTrainFine.sort() 25 | filesValFine = glob.glob(searchValFine) 26 | filesValFine.sort() 27 | filesTrainCoarse = glob.glob(searchTrainCoarse) 28 | filesTrainCoarse.sort() 29 | filesValCoarse = glob.glob(searchValCoarse) 30 | filesValCoarse.sort() 31 | filesExTrainCoarse = glob.glob(searchExTrainCoarse) 32 | filesExTrainCoarse.sort() 33 | filesTrainImg = glob.glob(searchTrainImg) 34 | filesTrainImg.sort() 35 | filesValImg = glob.glob(searchValImg) 36 | filesValImg.sort() 37 | filesExTrainImg = glob.glob(searchExTrainImg) 38 | filesExTrainImg.sort() 39 | filesTestImg = glob.glob(searchTestImg) 40 | filesTestImg.sort() 41 | 42 | # quit if we did not find anything 43 | if not filesTrainFine: 44 | printError("Did not find any gtFine/train files.") 45 | if not filesValFine: 46 | printError("Did not find any gtFine/val files.") 47 | if not filesTrainCoarse: 48 | printError("Did not find any gtCoarse/train files.") 49 | if not filesValCoarse: 50 | printError("Did not find any gtCoarse/val files.") 51 | if not filesExTrainCoarse: 52 | printError("Did not find any gtCoarse/train_extra files.") 53 | if not filesTrainImg: 54 | printError("Did not find any leftImg8bit/train files.") 55 | if not filesValImg: 56 | printError("Did not find any leftImg8bit/val files.") 57 | if not filesExTrainImg: 58 | printError("Did not find any leftImg8bit/train_extra files.") 59 | if not filesTestImg: 60 | printError("Did not find any leftImg8bit/test files.") 61 | 62 | # assertion 63 | assert len(filesTrainImg) == len(filesTrainFine), \ 64 | "Error %d (filesTrainImg) != %d (filesTrainFine)" % (len(filesTrainImg), 
len(filesTrainFine)) 65 | assert len(filesTrainImg) == len(filesTrainCoarse), \ 66 | "Error %d (filesTrainImg) != %d (filesTrainCoarse)" % (len(filesTrainImg), len(filesTrainCoarse)) 67 | assert len(filesValImg) == len(filesValFine), \ 68 | "Error %d (filesValImg) != %d (filesValFine)" % (len(filesValImg), len(filesValFine)) 69 | assert len(filesValImg) == len(filesValCoarse), \ 70 | "Error %d (filesValImg) != %d (filesValCoarse)" % (len(filesValImg), len(filesValCoarse)) 71 | assert len(filesExTrainImg) == len(filesExTrainCoarse), \ 72 | "Error %d (filesExTrainImg) != %d (filesExTrainCoarse)" % (len(filesExTrainImg), len(filesExTrainCoarse)) 73 | assert len(filesTestImg) == 1525, "Error %d (filesTestImg) != 1525" % len(filesTestImg) 74 | files = filesTrainFine+filesValFine+filesTrainCoarse+filesValCoarse+filesExTrainCoarse 75 | assert len(files) == 26948, "Error %d (gtFiles) != 26948" % len(files) 76 | 77 | # create txt 78 | dir_path = os.path.join(cityscapesPath, 'dataset') 79 | if not os.path.exists(dir_path): 80 | os.makedirs(dir_path) 81 | print("---create test.txt---") 82 | with open(os.path.join(dir_path, 'test.txt'), 'w') as f: 83 | for l in filesTestImg: 84 | f.write(l[len(cityscapesPath):] + '\n') 85 | print("---create train_fine.txt---") 86 | with open(os.path.join(dir_path, 'train_fine.txt'), 'w') as f: 87 | for l in zip(filesTrainImg, filesTrainFine): 88 | assert l[0][len('/tempspace2/zwang6/Cityscapes/leftImg8bit/'):-len('_leftImg8bit.png')] \ 89 | == l[1][len('/tempspace2/zwang6/Cityscapes/gtFine/'):-len('_gtFine_labelTrainIds.png')], \ 90 | "%s != %s" % (l[0][len('/tempspace2/zwang6/Cityscapes/leftImg8bit/'):-len('_leftImg8bit.png')], \ 91 | l[1][len('/tempspace2/zwang6/Cityscapes/gtFine/'):-len('_gtFine_labelTrainIds.png')]) 92 | f.write(l[0][len(cityscapesPath):] + ' ' + l[1][len(cityscapesPath):] + '\n') 93 | print("---create val_fine.txt---") 94 | with open(os.path.join(dir_path, 'val_fine.txt'), 'w') as f: 95 | for l in zip(filesValImg, 
filesValFine): 96 | assert l[0][len('/tempspace2/zwang6/Cityscapes/leftImg8bit/'):-len('_leftImg8bit.png')] \ 97 | == l[1][len('/tempspace2/zwang6/Cityscapes/gtFine/'):-len('_gtFine_labelTrainIds.png')], \ 98 | "%s != %s" % (l[0][len('/tempspace2/zwang6/Cityscapes/leftImg8bit/'):-len('_leftImg8bit.png')], \ 99 | l[1][len('/tempspace2/zwang6/Cityscapes/gtFine/'):-len('_gtFine_labelTrainIds.png')]) 100 | f.write(l[0][len(cityscapesPath):] + ' ' + l[1][len(cityscapesPath):] + '\n') 101 | print("---create train_coarse.txt---") 102 | with open(os.path.join(dir_path, 'train_coarse.txt'), 'w') as f: 103 | for l in zip(filesTrainImg, filesTrainCoarse): 104 | assert l[0][len('/tempspace2/zwang6/Cityscapes/leftImg8bit/'):-len('_leftImg8bit.png')] \ 105 | == l[1][len('/tempspace2/zwang6/Cityscapes/gtCoarse/'):-len('_gtCoarse_labelTrainIds.png')], \ 106 | "%s != %s" % (l[0][len('/tempspace2/zwang6/Cityscapes/leftImg8bit/'):-len('_leftImg8bit.png')], \ 107 | l[1][len('/tempspace2/zwang6/Cityscapes/gtCoarse/'):-len('_gtCoarse_labelTrainIds.png')]) 108 | f.write(l[0][len(cityscapesPath):] + ' ' + l[1][len(cityscapesPath):] + '\n') 109 | print("---create val_coarse.txt---") 110 | with open(os.path.join(dir_path, 'val_coarse.txt'), 'w') as f: 111 | for l in zip(filesValImg, filesValCoarse): 112 | assert l[0][len('/tempspace2/zwang6/Cityscapes/leftImg8bit/'):-len('_leftImg8bit.png')] \ 113 | == l[1][len('/tempspace2/zwang6/Cityscapes/gtCoarse/'):-len('_gtCoarse_labelTrainIds.png')], \ 114 | "%s != %s" % (l[0][len('/tempspace2/zwang6/Cityscapes/leftImg8bit/'):-len('_leftImg8bit.png')], \ 115 | l[1][len('/tempspace2/zwang6/Cityscapes/gtCoarse/'):-len('_gtCoarse_labelTrainIds.png')]) 116 | f.write(l[0][len(cityscapesPath):] + ' ' + l[1][len(cityscapesPath):] + '\n') 117 | print("---create train_extra.txt---") 118 | with open(os.path.join(dir_path, 'train_extra.txt'), 'w') as f: 119 | for l in zip(filesExTrainImg, filesExTrainCoarse): 120 | assert 
l[0][len('/tempspace2/zwang6/Cityscapes/leftImg8bit/'):-len('_leftImg8bit.png')] \ 121 | == l[1][len('/tempspace2/zwang6/Cityscapes/gtCoarse/'):-len('_gtCoarse_labelTrainIds.png')], \ 122 | "%s != %s" % (l[0][len('/tempspace2/zwang6/Cityscapes/leftImg8bit/'):-len('_leftImg8bit.png')], \ 123 | l[1][len('/tempspace2/zwang6/Cityscapes/gtCoarse/'):-len('_gtCoarse_labelTrainIds.png')]) 124 | f.write(l[0][len(cityscapesPath):] + ' ' + l[1][len(cityscapesPath):] + '\n') 125 | print("---create train.txt---") 126 | with open(os.path.join(dir_path, 'train.txt'), 'w') as f: 127 | for l in zip(filesTrainImg+filesExTrainImg, filesTrainFine+filesExTrainCoarse): 128 | # rough match: len('gtCoarse') > len('gtFine') 129 | assert l[0][len('/tempspace2/zwang6/Cityscapes/leftImg8bit/'):-len('_leftImg8bit.png')] \ 130 | == l[1][len('/tempspace2/zwang6/Cityscapes/gtCoarse/'):-len('_gtCoarse_labelTrainIds.png')] \ 131 | or l[0][len('/tempspace2/zwang6/Cityscapes/leftImg8bit/'):-len('_leftImg8bit.png')] \ 132 | == l[1][len('/tempspace2/zwang6/Cityscapes/gtFine/'):-len('_gtFine_labelTrainIds.png')], \ 133 | "%s != %s" % (l[0], l[1]) 134 | f.write(l[0][len(cityscapesPath):] + ' ' + l[1][len(cityscapesPath):] + '\n') 135 | 136 | # call the main 137 | if __name__ == "__main__": 138 | os.environ['CITYSCAPES_DATASET'] = '/tempspace2/zwang6/Cityscapes' 139 | main() -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import tensorflow as tf 4 | from model import Model 5 | 6 | 7 | 8 | """ 9 | This script defines hyperparameters. 
10 | """ 11 | 12 | 13 | 14 | def configure(): 15 | flags = tf.app.flags 16 | 17 | # training 18 | flags.DEFINE_integer('num_steps', 20000, 'maximum number of iterations') 19 | flags.DEFINE_integer('save_interval', 1000, 'number of iterations for saving and visualization') 20 | flags.DEFINE_integer('random_seed', 1234, 'random seed') 21 | flags.DEFINE_float('weight_decay', 0.0005, 'weight decay rate') 22 | flags.DEFINE_float('learning_rate', 2.5e-4, 'learning rate') 23 | flags.DEFINE_float('power', 0.9, 'hyperparameter for poly learning rate') 24 | flags.DEFINE_float('momentum', 0.9, 'momentum') 25 | flags.DEFINE_string('encoder_name', 'deeplab', 'name of pre-trained model, res101, res50 or deeplab') 26 | flags.DEFINE_string('pretrain_file', '../reference model/deeplab_resnet_init.ckpt', 'pre-trained model filename corresponding to encoder_name') 27 | flags.DEFINE_string('data_list', './dataset/train.txt', 'training data list filename') 28 | 29 | # validation 30 | flags.DEFINE_integer('valid_step', 20000, 'checkpoint number for validation') 31 | flags.DEFINE_integer('valid_num_steps', 1449, '= number of validation samples') 32 | flags.DEFINE_string('valid_data_list', './dataset/val.txt', 'validation data list filename') 33 | 34 | # prediction / saving outputs for testing or validation 35 | flags.DEFINE_string('out_dir', 'output', 'directory for saving outputs') 36 | flags.DEFINE_integer('test_step', 20000, 'checkpoint number for testing/validation') 37 | flags.DEFINE_integer('test_num_steps', 1449, '= number of testing/validation samples') 38 | flags.DEFINE_string('test_data_list', './dataset/val.txt', 'testing/validation data list filename') 39 | flags.DEFINE_boolean('visual', True, 'whether to save predictions for visualization') 40 | 41 | # data 42 | flags.DEFINE_string('data_dir', '/tempspace2/zwang6/VOC2012', 'data directory') 43 | flags.DEFINE_integer('batch_size', 10, 'training batch size') 44 | flags.DEFINE_integer('input_height', 321, 'input image height') 
45 | flags.DEFINE_integer('input_width', 321, 'input image width') 46 | flags.DEFINE_integer('num_classes', 21, 'number of classes') 47 | flags.DEFINE_integer('ignore_label', 255, 'label pixel value that should be ignored') 48 | flags.DEFINE_boolean('random_scale', True, 'whether to perform random scaling data-augmentation') 49 | flags.DEFINE_boolean('random_mirror', True, 'whether to perform random left-right flipping data-augmentation') 50 | 51 | # log 52 | flags.DEFINE_string('modeldir', 'model', 'model directory') 53 | flags.DEFINE_string('logfile', 'log.txt', 'training log filename') 54 | flags.DEFINE_string('logdir', 'log', 'training log directory') 55 | 56 | flags.FLAGS.__dict__['__parsed'] = False 57 | return flags.FLAGS 58 | 59 | def main(_): 60 | parser = argparse.ArgumentParser() 61 | parser.add_argument('--option', dest='option', type=str, default='train', 62 | help='actions: train, test, or predict') 63 | args = parser.parse_args() 64 | 65 | if args.option not in ['train', 'test', 'predict']: 66 | print('invalid option: ', args.option) 67 | print("Please input an option: train, test, or predict") 68 | else: 69 | # Set up tf session and initialize variables. 70 | # config = tf.ConfigProto() 71 | # config.gpu_options.allow_growth = True 72 | # sess = tf.Session(config=config) 73 | sess = tf.Session() 74 | # Run 75 | model = Model(sess, configure()) 76 | getattr(model, args.option)() 77 | 78 | 79 | if __name__ == '__main__': 80 | # Choose which gpu or cpu to use 81 | os.environ['CUDA_VISIBLE_DEVICES'] = '7' 82 | tf.app.run() 83 | -------------------------------------------------------------------------------- /main_msc.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import tensorflow as tf 4 | from model_msc import Model_msc 5 | 6 | 7 | 8 | """ 9 | This script defines hyperparameters.
10 | """ 11 | 12 | 13 | 14 | def configure(): 15 | flags = tf.app.flags 16 | 17 | # training 18 | flags.DEFINE_integer('num_steps', 20000, 'maximum number of iterations') 19 | flags.DEFINE_integer('save_interval', 1000, 'number of iterations for saving and visualization') 20 | flags.DEFINE_integer('random_seed', 1234, 'random seed') 21 | flags.DEFINE_float('weight_decay', 0.0005, 'weight decay rate') 22 | flags.DEFINE_float('learning_rate', 2.5e-4, 'learning rate') 23 | flags.DEFINE_float('power', 0.9, 'hyperparameter for poly learning rate') 24 | flags.DEFINE_float('momentum', 0.9, 'momentum') 25 | flags.DEFINE_string('encoder_name', 'deeplab', 'name of pre-trained model, res101, res50 or deeplab') 26 | flags.DEFINE_string('pretrain_file', '../reference model/deeplab_resnet_init.ckpt', 'pre-trained model filename corresponding to encoder_name') 27 | flags.DEFINE_string('data_list', './dataset/train.txt', 'training data list filename') 28 | flags.DEFINE_integer('grad_update_every', 10, 'gradient accumulation step') 29 | # Note: grad_update_every = true training batch size 30 | 31 | # validation 32 | flags.DEFINE_integer('valid_step', 20000, 'checkpoint number for validation') 33 | flags.DEFINE_integer('valid_num_steps', 1449, '= number of validation samples') 34 | flags.DEFINE_string('valid_data_list', './dataset/val.txt', 'validation data list filename') 35 | 36 | # prediction / saving outputs for testing or validation 37 | flags.DEFINE_string('out_dir', 'output', 'directory for saving outputs') 38 | flags.DEFINE_integer('test_step', 20000, 'checkpoint number for testing/validation') 39 | flags.DEFINE_integer('test_num_steps', 1449, '= number of testing/validation samples') 40 | flags.DEFINE_string('test_data_list', './dataset/val.txt', 'testing/validation data list filename') 41 | flags.DEFINE_boolean('visual', True, 'whether to save predictions for visualization') 42 | 43 | # data 44 | flags.DEFINE_string('data_dir', '/tempspace2/zwang6/VOC2012', 'data 
directory') 45 | flags.DEFINE_integer('batch_size', 1, 'training batch size') 46 | flags.DEFINE_integer('input_height', 321, 'input image height') 47 | flags.DEFINE_integer('input_width', 321, 'input image width') 48 | flags.DEFINE_integer('num_classes', 21, 'number of classes') 49 | flags.DEFINE_integer('ignore_label', 255, 'label pixel value that should be ignored') 50 | flags.DEFINE_boolean('random_scale', True, 'whether to perform random scaling data-augmentation') 51 | flags.DEFINE_boolean('random_mirror', True, 'whether to perform random left-right flipping data-augmentation') 52 | 53 | # log 54 | flags.DEFINE_string('modeldir', 'model', 'model directory') 55 | flags.DEFINE_string('logfile', 'log.txt', 'training log filename') 56 | flags.DEFINE_string('logdir', 'log', 'training log directory') 57 | 58 | flags.FLAGS.__dict__['__parsed'] = False 59 | return flags.FLAGS 60 | 61 | def main(_): 62 | parser = argparse.ArgumentParser() 63 | parser.add_argument('--option', dest='option', type=str, default='train', 64 | help='actions: train, test, or predict') 65 | args = parser.parse_args() 66 | 67 | if args.option not in ['train', 'test', 'predict']: 68 | print('invalid option: ', args.option) 69 | print("Please input an option: train, test, or predict") 70 | else: 71 | # Set up tf session and initialize variables.
72 | # config = tf.ConfigProto() 73 | # config.gpu_options.allow_growth = True 74 | # sess = tf.Session(config=config) 75 | sess = tf.Session() 76 | # Run 77 | model = Model_msc(sess, configure()) 78 | getattr(model, args.option)() 79 | 80 | 81 | if __name__ == '__main__': 82 | # Choose which gpu or cpu to use 83 | os.environ['CUDA_VISIBLE_DEVICES'] = '7' 84 | tf.app.run() 85 | -------------------------------------------------------------------------------- /model.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime 2 | import os 3 | import sys 4 | import time 5 | import numpy as np 6 | import tensorflow as tf 7 | from PIL import Image 8 | 9 | from network import * 10 | from utils import ImageReader, decode_labels, inv_preprocess, prepare_label, write_log, read_labeled_image_list 11 | 12 | 13 | 14 | """ 15 | This script trains or evaluates the model on augmented PASCAL VOC 2012 dataset. 16 | The training set contains 10581 training images. 17 | The validation set contains 1449 validation images. 18 | 19 | Training: 20 | 'poly' learning rate 21 | different learning rates for different layers 22 | """ 23 | 24 | 25 | 26 | IMG_MEAN = np.array((104.00698793,116.66876762,122.67891434), dtype=np.float32) 27 | 28 | class Model(object): 29 | 30 | def __init__(self, sess, conf): 31 | self.sess = sess 32 | self.conf = conf 33 | 34 | # train 35 | def train(self): 36 | self.train_setup() 37 | 38 | self.sess.run(tf.global_variables_initializer()) 39 | 40 | # Load the pre-trained model if provided 41 | if self.conf.pretrain_file is not None: 42 | self.load(self.loader, self.conf.pretrain_file) 43 | 44 | # Start queue threads. 45 | threads = tf.train.start_queue_runners(coord=self.coord, sess=self.sess) 46 | 47 | # Train! 
48 | for step in range(self.conf.num_steps+1): 49 | start_time = time.time() 50 | feed_dict = { self.curr_step : step } 51 | 52 | if step % self.conf.save_interval == 0: 53 | loss_value, images, labels, preds, summary, _ = self.sess.run( 54 | [self.reduced_loss, 55 | self.image_batch, 56 | self.label_batch, 57 | self.pred, 58 | self.total_summary, 59 | self.train_op], 60 | feed_dict=feed_dict) 61 | self.summary_writer.add_summary(summary, step) 62 | self.save(self.saver, step) 63 | else: 64 | loss_value, _ = self.sess.run([self.reduced_loss, self.train_op], 65 | feed_dict=feed_dict) 66 | 67 | duration = time.time() - start_time 68 | print('step {:d} \t loss = {:.3f}, ({:.3f} sec/step)'.format(step, loss_value, duration)) 69 | write_log('{:d}, {:.3f}'.format(step, loss_value), self.conf.logfile) 70 | 71 | # finish 72 | self.coord.request_stop() 73 | self.coord.join(threads) 74 | 75 | # evaluate 76 | def test(self): 77 | self.test_setup() 78 | 79 | self.sess.run(tf.global_variables_initializer()) 80 | self.sess.run(tf.local_variables_initializer()) 81 | 82 | # load checkpoint 83 | checkpointfile = self.conf.modeldir+ '/model.ckpt-' + str(self.conf.valid_step) 84 | self.load(self.loader, checkpointfile) 85 | 86 | # Start queue threads. 87 | threads = tf.train.start_queue_runners(coord=self.coord, sess=self.sess) 88 | 89 | # Test! 
90 | confusion_matrix = np.zeros((self.conf.num_classes, self.conf.num_classes), dtype=np.int) 91 | for step in range(self.conf.valid_num_steps): 92 | preds, _, _, c_matrix = self.sess.run([self.pred, self.accu_update_op, self.mIou_update_op, self.confusion_matrix]) 93 | confusion_matrix += c_matrix 94 | if step % 100 == 0: 95 | print('step {:d}'.format(step)) 96 | print('Pixel Accuracy: {:.3f}'.format(self.accu.eval(session=self.sess))) 97 | print('Mean IoU: {:.3f}'.format(self.mIoU.eval(session=self.sess))) 98 | self.compute_IoU_per_class(confusion_matrix) 99 | 100 | # finish 101 | self.coord.request_stop() 102 | self.coord.join(threads) 103 | 104 | # prediction 105 | def predict(self): 106 | self.predict_setup() 107 | 108 | self.sess.run(tf.global_variables_initializer()) 109 | self.sess.run(tf.local_variables_initializer()) 110 | 111 | # load checkpoint 112 | checkpointfile = self.conf.modeldir+ '/model.ckpt-' + str(self.conf.valid_step) 113 | self.load(self.loader, checkpointfile) 114 | 115 | # Start queue threads. 116 | threads = tf.train.start_queue_runners(coord=self.coord, sess=self.sess) 117 | 118 | # img_name_list 119 | image_list, _ = read_labeled_image_list('', self.conf.test_data_list) 120 | 121 | # Predict! 122 | for step in range(self.conf.test_num_steps): 123 | preds = self.sess.run(self.pred) 124 | 125 | img_name = image_list[step].split('/')[2].split('.')[0] 126 | # Save raw predictions, i.e. each pixel is an integer between [0,20]. 127 | im = Image.fromarray(preds[0,:,:,0], mode='L') 128 | filename = '/%s_mask.png' % (img_name) 129 | im.save(self.conf.out_dir + '/prediction' + filename) 130 | 131 | # Save predictions for visualization. 132 | # See utils/label_utils.py for color setting 133 | # Need to be modified based on datasets. 
if self.conf.visual: 135 | msk = decode_labels(preds, num_classes=self.conf.num_classes) 136 | im = Image.fromarray(msk[0], mode='RGB') 137 | filename = '/%s_mask_visual.png' % (img_name) 138 | im.save(self.conf.out_dir + '/visual_prediction' + filename) 139 | 140 | if step % 100 == 0: 141 | print('step {:d}'.format(step)) 142 | 143 | print('The output files have been saved to {}'.format(self.conf.out_dir)) 144 | 145 | # finish 146 | self.coord.request_stop() 147 | self.coord.join(threads) 148 | 149 | def train_setup(self): 150 | tf.set_random_seed(self.conf.random_seed) 151 | 152 | # Create queue coordinator. 153 | self.coord = tf.train.Coordinator() 154 | 155 | # Input size 156 | input_size = (self.conf.input_height, self.conf.input_width) 157 | 158 | # Load reader 159 | with tf.name_scope("create_inputs"): 160 | reader = ImageReader( 161 | self.conf.data_dir, 162 | self.conf.data_list, 163 | input_size, 164 | self.conf.random_scale, 165 | self.conf.random_mirror, 166 | self.conf.ignore_label, 167 | IMG_MEAN, 168 | self.coord) 169 | self.image_batch, self.label_batch = reader.dequeue(self.conf.batch_size) 170 | 171 | # Create network 172 | if self.conf.encoder_name not in ['res101', 'res50', 'deeplab']: 173 | print('encoder_name ERROR!') 174 | print("Please input: res101, res50, or deeplab") 175 | sys.exit(-1) 176 | elif self.conf.encoder_name == 'deeplab': 177 | net = Deeplab_v2(self.image_batch, self.conf.num_classes, True) 178 | # Variables that load from pre-trained model.
179 | restore_var = [v for v in tf.global_variables() if 'fc' not in v.name] 180 | # Trainable Variables 181 | all_trainable = tf.trainable_variables() 182 | # Fine-tune part 183 | encoder_trainable = [v for v in all_trainable if 'fc' not in v.name] # lr * 1.0 184 | # Decoder part 185 | decoder_trainable = [v for v in all_trainable if 'fc' in v.name] 186 | else: 187 | net = ResNet_segmentation(self.image_batch, self.conf.num_classes, True, self.conf.encoder_name) 188 | # Variables that load from pre-trained model. 189 | restore_var = [v for v in tf.global_variables() if 'resnet_v1' in v.name] 190 | # Trainable Variables 191 | all_trainable = tf.trainable_variables() 192 | # Fine-tune part 193 | encoder_trainable = [v for v in all_trainable if 'resnet_v1' in v.name] # lr * 1.0 194 | # Decoder part 195 | decoder_trainable = [v for v in all_trainable if 'decoder' in v.name] 196 | 197 | decoder_w_trainable = [v for v in decoder_trainable if 'weights' in v.name or 'gamma' in v.name] # lr * 10.0 198 | decoder_b_trainable = [v for v in decoder_trainable if 'biases' in v.name or 'beta' in v.name] # lr * 20.0 199 | # Check 200 | assert(len(all_trainable) == len(decoder_trainable) + len(encoder_trainable)) 201 | assert(len(decoder_trainable) == len(decoder_w_trainable) + len(decoder_b_trainable)) 202 | 203 | # Network raw output 204 | raw_output = net.outputs # [batch_size, h, w, 21] 205 | 206 | # Output size 207 | output_shape = tf.shape(raw_output) 208 | output_size = (output_shape[1], output_shape[2]) 209 | 210 | # Ground Truth: ignoring all labels greater than or equal to n_classes 211 | label_proc = prepare_label(self.label_batch, output_size, num_classes=self.conf.num_classes, one_hot=False) 212 | raw_gt = tf.reshape(label_proc, [-1,]) 213 | indices = tf.squeeze(tf.where(tf.less_equal(raw_gt, self.conf.num_classes - 1)), 1) 214 | gt = tf.cast(tf.gather(raw_gt, indices), tf.int32) 215 | raw_prediction = tf.reshape(raw_output, [-1, self.conf.num_classes]) 216 | prediction =
tf.gather(raw_prediction, indices) 217 | 218 | # Pixel-wise softmax_cross_entropy loss 219 | loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction, labels=gt) 220 | # L2 regularization 221 | l2_losses = [self.conf.weight_decay * tf.nn.l2_loss(v) for v in all_trainable if 'weights' in v.name] 222 | # Loss function 223 | self.reduced_loss = tf.reduce_mean(loss) + tf.add_n(l2_losses) 224 | 225 | # Define optimizers 226 | # 'poly' learning rate 227 | base_lr = tf.constant(self.conf.learning_rate) 228 | self.curr_step = tf.placeholder(dtype=tf.float32, shape=()) 229 | learning_rate = tf.scalar_mul(base_lr, tf.pow((1 - self.curr_step / self.conf.num_steps), self.conf.power)) 230 | # We use several optimizers here to handle the different lr_mult values, 231 | # a Caffe-style parameter that controls the actual learning rate for each 232 | # layer. 233 | opt_encoder = tf.train.MomentumOptimizer(learning_rate, self.conf.momentum) 234 | opt_decoder_w = tf.train.MomentumOptimizer(learning_rate * 10.0, self.conf.momentum) 235 | opt_decoder_b = tf.train.MomentumOptimizer(learning_rate * 20.0, self.conf.momentum) 236 | # To give each group its own learning rate, we do not use 'minimize' here. 237 | # Instead, we split the step into compute-gradients and apply-gradients.
238 | # Compute grads 239 | grads = tf.gradients(self.reduced_loss, encoder_trainable + decoder_w_trainable + decoder_b_trainable) 240 | grads_encoder = grads[:len(encoder_trainable)] 241 | grads_decoder_w = grads[len(encoder_trainable) : (len(encoder_trainable) + len(decoder_w_trainable))] 242 | grads_decoder_b = grads[(len(encoder_trainable) + len(decoder_w_trainable)):] 243 | # Update params 244 | train_op_conv = opt_encoder.apply_gradients(zip(grads_encoder, encoder_trainable)) 245 | train_op_fc_w = opt_decoder_w.apply_gradients(zip(grads_decoder_w, decoder_w_trainable)) 246 | train_op_fc_b = opt_decoder_b.apply_gradients(zip(grads_decoder_b, decoder_b_trainable)) 247 | # Finally, get the train_op! 248 | update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) # for collecting moving_mean and moving_variance 249 | with tf.control_dependencies(update_ops): 250 | self.train_op = tf.group(train_op_conv, train_op_fc_w, train_op_fc_b) 251 | 252 | # Saver for storing checkpoints of the model 253 | self.saver = tf.train.Saver(var_list=tf.global_variables(), max_to_keep=0) 254 | 255 | # Loader for loading the pre-trained model 256 | self.loader = tf.train.Saver(var_list=restore_var) 257 | 258 | # Training summary 259 | # Processed predictions: for visualisation. 260 | raw_output_up = tf.image.resize_bilinear(raw_output, input_size) 261 | raw_output_up = tf.argmax(raw_output_up, axis=3) 262 | self.pred = tf.expand_dims(raw_output_up, dim=3) 263 | # Image summary. 264 | images_summary = tf.py_func(inv_preprocess, [self.image_batch, 2, IMG_MEAN], tf.uint8) 265 | labels_summary = tf.py_func(decode_labels, [self.label_batch, 2, self.conf.num_classes], tf.uint8) 266 | preds_summary = tf.py_func(decode_labels, [self.pred, 2, self.conf.num_classes], tf.uint8) 267 | self.total_summary = tf.summary.image('images', 268 | tf.concat(axis=2, values=[images_summary, labels_summary, preds_summary]), 269 | max_outputs=2) # Concatenate row-wise. 
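A minimal plain-Python sketch (outside TensorFlow; function names are illustrative, not part of this repo) of the 'poly' learning-rate policy and the lr_mult-style multipliers that train_setup builds above:

```python
# Sketch of the 'poly' decay used above: lr = base_lr * (1 - step/num_steps)**power.
def poly_lr(base_lr, step, num_steps, power=0.9):
    return base_lr * (1.0 - float(step) / num_steps) ** power

# Effective per-group rates mirroring the three optimizers (lr_mult = 1, 10, 20).
def group_lrs(base_lr, step, num_steps, power=0.9):
    lr = poly_lr(base_lr, step, num_steps, power)
    return {'encoder': lr, 'decoder_w': lr * 10.0, 'decoder_b': lr * 20.0}
```

With the defaults above (base_lr = 2.5e-4, num_steps = 20000, power = 0.9), the rate starts at 2.5e-4 and decays smoothly to 0 at the final step.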
270 | if not os.path.exists(self.conf.logdir): 271 | os.makedirs(self.conf.logdir) 272 | self.summary_writer = tf.summary.FileWriter(self.conf.logdir, graph=tf.get_default_graph()) 273 | 274 | def test_setup(self): 275 | # Create queue coordinator. 276 | self.coord = tf.train.Coordinator() 277 | 278 | # Load reader 279 | with tf.name_scope("create_inputs"): 280 | reader = ImageReader( 281 | self.conf.data_dir, 282 | self.conf.valid_data_list, 283 | None, # the images have different sizes 284 | False, # no data-aug 285 | False, # no data-aug 286 | self.conf.ignore_label, 287 | IMG_MEAN, 288 | self.coord) 289 | image, label = reader.image, reader.label # [h, w, 3 or 1] 290 | # Add one batch dimension [1, h, w, 3 or 1] 291 | self.image_batch, self.label_batch = tf.expand_dims(image, dim=0), tf.expand_dims(label, dim=0) 292 | 293 | # Create network 294 | if self.conf.encoder_name not in ['res101', 'res50', 'deeplab']: 295 | print('encoder_name ERROR!') 296 | print("Please input: res101, res50, or deeplab") 297 | sys.exit(-1) 298 | elif self.conf.encoder_name == 'deeplab': 299 | net = Deeplab_v2(self.image_batch, self.conf.num_classes, False) 300 | else: 301 | net = ResNet_segmentation(self.image_batch, self.conf.num_classes, False, self.conf.encoder_name) 302 | 303 | # predictions 304 | raw_output = net.outputs 305 | raw_output = tf.image.resize_bilinear(raw_output, tf.shape(self.image_batch)[1:3,]) 306 | raw_output = tf.argmax(raw_output, axis=3) 307 | pred = tf.expand_dims(raw_output, dim=3) 308 | self.pred = tf.reshape(pred, [-1,]) 309 | # labels 310 | gt = tf.reshape(self.label_batch, [-1,]) 311 | # Ignoring all labels greater than or equal to n_classes. 
312 | temp = tf.less_equal(gt, self.conf.num_classes - 1) 313 | weights = tf.cast(temp, tf.int32) 314 | 315 | # fix for tf 1.3.0 316 | gt = tf.where(temp, gt, tf.cast(temp, tf.uint8)) 317 | 318 | # Pixel accuracy 319 | self.accu, self.accu_update_op = tf.contrib.metrics.streaming_accuracy( 320 | self.pred, gt, weights=weights) 321 | 322 | # mIoU 323 | self.mIoU, self.mIou_update_op = tf.contrib.metrics.streaming_mean_iou( 324 | self.pred, gt, num_classes=self.conf.num_classes, weights=weights) 325 | 326 | # confusion matrix 327 | self.confusion_matrix = tf.contrib.metrics.confusion_matrix( 328 | self.pred, gt, num_classes=self.conf.num_classes, weights=weights) 329 | 330 | # Loader for loading the checkpoint 331 | self.loader = tf.train.Saver(var_list=tf.global_variables()) 332 | 333 | def predict_setup(self): 334 | # Create queue coordinator. 335 | self.coord = tf.train.Coordinator() 336 | 337 | # Load reader 338 | with tf.name_scope("create_inputs"): 339 | reader = ImageReader( 340 | self.conf.data_dir, 341 | self.conf.test_data_list, 342 | None, # the images have different sizes 343 | False, # no data-aug 344 | False, # no data-aug 345 | self.conf.ignore_label, 346 | IMG_MEAN, 347 | self.coord) 348 | image, label = reader.image, reader.label # [h, w, 3 or 1] 349 | # Add one batch dimension [1, h, w, 3 or 1] 350 | image_batch, label_batch = tf.expand_dims(image, dim=0), tf.expand_dims(label, dim=0) 351 | 352 | # Create network 353 | if self.conf.encoder_name not in ['res101', 'res50', 'deeplab']: 354 | print('encoder_name ERROR!') 355 | print("Please input: res101, res50, or deeplab") 356 | sys.exit(-1) 357 | elif self.conf.encoder_name == 'deeplab': 358 | net = Deeplab_v2(image_batch, self.conf.num_classes, False) 359 | else: 360 | net = ResNet_segmentation(image_batch, self.conf.num_classes, False, self.conf.encoder_name) 361 | 362 | # Predictions. 
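The weights mask built in test_setup above excludes ignore-label pixels (label 255, and anything >= num_classes) from the streaming metrics. A hedged pure-Python sketch of the same pixel-accuracy logic on flat lists (not the TF code itself):

```python
# Pixels with label >= num_classes (e.g. the 255 ignore label) get weight 0
# and contribute neither to the numerator nor the denominator.
def masked_pixel_accuracy(preds, labels, num_classes):
    weights = [1 if label <= num_classes - 1 else 0 for label in labels]
    correct = sum(w for p, label, w in zip(preds, labels, weights)
                  if w and p == label)
    total = sum(weights)
    return correct / total if total else 0.0
```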
363 | raw_output = net.outputs 364 | raw_output = tf.image.resize_bilinear(raw_output, tf.shape(image_batch)[1:3,]) 365 | raw_output = tf.argmax(raw_output, axis=3) 366 | self.pred = tf.cast(tf.expand_dims(raw_output, dim=3), tf.uint8) 367 | 368 | # Create directory 369 | if not os.path.exists(self.conf.out_dir): 370 | os.makedirs(self.conf.out_dir) 371 | os.makedirs(self.conf.out_dir + '/prediction') 372 | if self.conf.visual: 373 | os.makedirs(self.conf.out_dir + '/visual_prediction') 374 | 375 | # Loader for loading the checkpoint 376 | self.loader = tf.train.Saver(var_list=tf.global_variables()) 377 | 378 | def save(self, saver, step): 379 | ''' 380 | Save weights. 381 | ''' 382 | model_name = 'model.ckpt' 383 | checkpoint_path = os.path.join(self.conf.modeldir, model_name) 384 | if not os.path.exists(self.conf.modeldir): 385 | os.makedirs(self.conf.modeldir) 386 | saver.save(self.sess, checkpoint_path, global_step=step) 387 | print('The checkpoint has been created.') 388 | 389 | def load(self, saver, filename): 390 | ''' 391 | Load trained weights. 
392 | ''' 393 | saver.restore(self.sess, filename) 394 | print("Restored model parameters from {}".format(filename)) 395 | 396 | def compute_IoU_per_class(self, confusion_matrix): 397 | mIoU = 0 398 | for i in range(self.conf.num_classes): 399 | # IoU = true_positive / (true_positive + false_positive + false_negative) 400 | TP = confusion_matrix[i,i] 401 | FP = np.sum(confusion_matrix[:, i]) - TP 402 | FN = np.sum(confusion_matrix[i]) - TP 403 | IoU = TP / (TP + FP + FN) 404 | print ('class %d: %.3f' % (i, IoU)) 405 | mIoU += IoU / self.conf.num_classes 406 | print ('mIoU: %.3f' % mIoU) -------------------------------------------------------------------------------- /model_msc.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime 2 | import os 3 | import sys 4 | import time 5 | import numpy as np 6 | import tensorflow as tf 7 | from PIL import Image 8 | 9 | from network import * 10 | from utils import ImageReader, decode_labels, inv_preprocess, prepare_label, write_log, read_labeled_image_list 11 | 12 | 13 | 14 | """ 15 | This script trains or evaluates the model on augmented PASCAL VOC 2012 dataset. 16 | The training set contains 10581 training images. 17 | The validation set contains 1449 validation images. 18 | 19 | Training: 20 | 'poly' learning rate 21 | different learning rates for different layers 22 | """ 23 | 24 | 25 | 26 | IMG_MEAN = np.array((104.00698793,116.66876762,122.67891434), dtype=np.float32) 27 | 28 | class Model_msc(object): 29 | 30 | def __init__(self, sess, conf): 31 | self.sess = sess 32 | self.conf = conf 33 | 34 | # train 35 | def train(self): 36 | self.train_setup() 37 | 38 | self.sess.run(tf.global_variables_initializer()) 39 | 40 | # Load the pre-trained model if provided 41 | if self.conf.pretrain_file is not None: 42 | self.load(self.loader, self.conf.pretrain_file) 43 | 44 | # Start queue threads. 
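The per-class IoU computation in compute_IoU_per_class above can be sketched on a plain nested-list confusion matrix (a hedged standalone version, which also guards the zero-denominator case the original does not):

```python
# IoU_i = TP / (TP + FP + FN), with TP on the diagonal, FP down the
# column, and FN along the row; mIoU is the unweighted class mean.
def iou_per_class(cm):
    n = len(cm)
    ious = []
    for i in range(n):
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(n)) - tp
        fn = sum(cm[i]) - tp
        denom = tp + fp + fn
        ious.append(tp / denom if denom else 0.0)
    return ious, sum(ious) / n
```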
45 | threads = tf.train.start_queue_runners(coord=self.coord, sess=self.sess) 46 | 47 | # Train! 48 | for step in range(self.conf.num_steps+1): 49 | start_time = time.time() 50 | feed_dict = { self.curr_step : step } 51 | loss_value = 0 52 | 53 | # Clear the accumulated gradients. 54 | self.sess.run(self.zero_op, feed_dict=feed_dict) 55 | 56 | # Accumulate gradients. 57 | for i in range(self.conf.grad_update_every): 58 | _, l_val = self.sess.run([self.accum_grads_op, self.reduced_loss], feed_dict=feed_dict) 59 | loss_value += l_val 60 | 61 | # Normalise the loss. 62 | loss_value /= self.conf.grad_update_every 63 | 64 | # Apply gradients. 65 | if step % self.conf.save_interval == 0: 66 | images, labels, summary, _ = self.sess.run( 67 | [self.image_batch, 68 | self.label_batch, 69 | self.total_summary, 70 | self.train_op], 71 | feed_dict=feed_dict) 72 | self.summary_writer.add_summary(summary, step) 73 | self.save(self.saver, step) 74 | else: 75 | self.sess.run(self.train_op, feed_dict=feed_dict) 76 | 77 | duration = time.time() - start_time 78 | print('step {:d} \t loss = {:.3f}, ({:.3f} sec/step)'.format(step, loss_value, duration)) 79 | write_log('{:d}, {:.3f}'.format(step, loss_value), self.conf.logfile) 80 | 81 | # finish 82 | self.coord.request_stop() 83 | self.coord.join(threads) 84 | 85 | # evaluate 86 | def test(self): 87 | self.test_setup() 88 | 89 | self.sess.run(tf.global_variables_initializer()) 90 | self.sess.run(tf.local_variables_initializer()) 91 | 92 | # load checkpoint 93 | checkpointfile = self.conf.modeldir+ '/model.ckpt-' + str(self.conf.valid_step) 94 | self.load(self.loader, checkpointfile) 95 | 96 | # Start queue threads. 97 | threads = tf.train.start_queue_runners(coord=self.coord, sess=self.sess) 98 | 99 | # Test! 
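The zero / accumulate / apply cycle in `train()` above (`zero_op`, `accum_grads_op`, `train_op`) can be sketched without TensorFlow. The helper below is an illustrative stand-in, not code from this repo; the toy quadratic loss and all values are made up:

```python
import numpy as np

def accumulate_and_step(params, grad_fn, micro_batches, lr, grad_update_every):
    """One optimisation step that averages gradients over several micro-batches,
    mirroring the zero_op / accum_grads_op / train_op cycle above."""
    accum = [np.zeros_like(p) for p in params]          # zero_op
    loss_value = 0.0
    for batch in micro_batches[:grad_update_every]:     # accum_grads_op
        loss, grads = grad_fn(params, batch)
        loss_value += loss
        for a, g in zip(accum, grads):
            a += g / grad_update_every                  # normalise while accumulating
    for p, a in zip(params, accum):                     # train_op (plain SGD here)
        p -= lr * a
    return loss_value / grad_update_every

# Toy quadratic loss: L = 0.5 * ||w - b||^2, so the gradient is w - b.
def quad_loss(params, batch):
    (w,) = params
    g = w - batch
    return 0.5 * float(np.sum(g * g)), [g]

w = np.array([4.0, 0.0])
batches = [np.array([2.0, 2.0]), np.array([2.0, 2.0])]
loss = accumulate_and_step([w], quad_loss, batches, lr=0.5, grad_update_every=2)
```

Dividing each gradient by `grad_update_every` before adding it makes the accumulated update equal the average over the micro-batches, which is why `train()` also divides the reported loss by the same factor.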
100 | confusion_matrix = np.zeros((self.conf.num_classes, self.conf.num_classes), dtype=np.int) 101 | for step in range(self.conf.valid_num_steps): 102 | preds, _, _, c_matrix = self.sess.run([self.pred, self.accu_update_op, self.mIou_update_op, self.confusion_matrix]) 103 | confusion_matrix += c_matrix 104 | if step % 100 == 0: 105 | print('step {:d}'.format(step)) 106 | print('Pixel Accuracy: {:.3f}'.format(self.accu.eval(session=self.sess))) 107 | print('Mean IoU: {:.3f}'.format(self.mIoU.eval(session=self.sess))) 108 | self.compute_IoU_per_class(confusion_matrix) 109 | 110 | # finish 111 | self.coord.request_stop() 112 | self.coord.join(threads) 113 | 114 | # prediction 115 | def predict(self): 116 | self.predict_setup() 117 | 118 | self.sess.run(tf.global_variables_initializer()) 119 | self.sess.run(tf.local_variables_initializer()) 120 | 121 | # load checkpoint 122 | checkpointfile = self.conf.modeldir+ '/model.ckpt-' + str(self.conf.valid_step) 123 | self.load(self.loader, checkpointfile) 124 | 125 | # Start queue threads. 126 | threads = tf.train.start_queue_runners(coord=self.coord, sess=self.sess) 127 | 128 | # img_name_list 129 | image_list, _ = read_labeled_image_list('', self.conf.test_data_list) 130 | 131 | # Predict! 132 | for step in range(self.conf.test_num_steps): 133 | preds = self.sess.run(self.pred) 134 | 135 | img_name = image_list[step].split('/')[2].split('.')[0] 136 | # Save raw predictions, i.e. each pixel is an integer between [0,20]. 137 | im = Image.fromarray(preds[0,:,:,0], mode='L') 138 | filename = '/%s_mask.png' % (img_name) 139 | im.save(self.conf.out_dir + '/prediction' + filename) 140 | 141 | # Save predictions for visualization. 142 | # See utils/label_utils.py for color setting 143 | # Need to be modified based on datasets. 
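`compute_IoU_per_class` later reads IoU = TP / (TP + FP + FN) off this accumulated confusion matrix (one axis is ground truth, the other predictions; swapping them only swaps FP and FN and leaves IoU unchanged). A vectorised NumPy check with a made-up 2-class matrix:

```python
import numpy as np

def iou_per_class(confusion_matrix):
    """IoU_i = TP_i / (TP_i + FP_i + FN_i), read off a square confusion matrix
    whose rows are taken as ground truth and columns as predictions."""
    tp = np.diag(confusion_matrix).astype(float)
    fp = confusion_matrix.sum(axis=0) - tp   # predicted class i, labelled otherwise
    fn = confusion_matrix.sum(axis=1) - tp   # labelled class i, predicted otherwise
    return tp / (tp + fp + fn)

cm = np.array([[5, 1],
               [2, 4]])
ious = iou_per_class(cm)   # class 0: 5/8, class 1: 4/7
miou = ious.mean()
```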
144 | if self.conf.visual: 145 | msk = decode_labels(preds, num_classes=self.conf.num_classes) 146 | im = Image.fromarray(msk[0], mode='RGB') 147 | filename = '/%s_mask_visual.png' % (img_name) 148 | im.save(self.conf.out_dir + '/visual_prediction' + filename) 149 | 150 | if step % 100 == 0: 151 | print('step {:d}'.format(step)) 152 | 153 | print('The output files have been saved to {}'.format(self.conf.out_dir)) 154 | 155 | # finish 156 | self.coord.request_stop() 157 | self.coord.join(threads) 158 | 159 | def train_setup(self): 160 | tf.set_random_seed(self.conf.random_seed) 161 | 162 | # Create queue coordinator. 163 | self.coord = tf.train.Coordinator() 164 | 165 | # Input size 166 | h, w = (self.conf.input_height, self.conf.input_width) 167 | input_size = (h, w) 168 | 169 | # Load reader 170 | with tf.name_scope("create_inputs"): 171 | reader = ImageReader( 172 | self.conf.data_dir, 173 | self.conf.data_list, 174 | input_size, 175 | self.conf.random_scale, 176 | self.conf.random_mirror, 177 | self.conf.ignore_label, 178 | IMG_MEAN, 179 | self.coord) 180 | self.image_batch, self.label_batch = reader.dequeue(self.conf.batch_size) 181 | image_batch_075 = tf.image.resize_images(self.image_batch, [int(h * 0.75), int(w * 0.75)]) 182 | image_batch_05 = tf.image.resize_images(self.image_batch, [int(h * 0.5), int(w * 0.5)]) 183 | 184 | # Create network 185 | if self.conf.encoder_name not in ['res101', 'res50', 'deeplab']: 186 | print('encoder_name ERROR!') 187 | print("Please input: res101, res50, or deeplab") 188 | sys.exit(-1) 189 | elif self.conf.encoder_name == 'deeplab': 190 | with tf.variable_scope('', reuse=False): 191 | net = Deeplab_v2(self.image_batch, self.conf.num_classes, True) 192 | with tf.variable_scope('', reuse=True): 193 | net075 = Deeplab_v2(image_batch_075, self.conf.num_classes, True) 194 | with tf.variable_scope('', reuse=True): 195 | net05 = Deeplab_v2(image_batch_05, self.conf.num_classes, True) 196 | # Variables that load from pre-trained
model. 197 | restore_var = [v for v in tf.global_variables() if 'fc' not in v.name] 198 | # Trainable Variables 199 | all_trainable = tf.trainable_variables() 200 | # Fine-tune part 201 | encoder_trainable = [v for v in all_trainable if 'fc' not in v.name] # lr * 1.0 202 | # Decoder part 203 | decoder_trainable = [v for v in all_trainable if 'fc' in v.name] 204 | else: 205 | with tf.variable_scope('', reuse=False): 206 | net = ResNet_segmentation(self.image_batch, self.conf.num_classes, True, self.conf.encoder_name) 207 | with tf.variable_scope('', reuse=True): 208 | net075 = ResNet_segmentation(image_batch_075, self.conf.num_classes, True, self.conf.encoder_name) 209 | with tf.variable_scope('', reuse=True): 210 | net05 = ResNet_segmentation(image_batch_05, self.conf.num_classes, True, self.conf.encoder_name) 211 | # Variables that load from pre-trained model. 212 | restore_var = [v for v in tf.global_variables() if 'resnet_v1' in v.name] 213 | # Trainable Variables 214 | all_trainable = tf.trainable_variables() 215 | # Fine-tune part 216 | encoder_trainable = [v for v in all_trainable if 'resnet_v1' in v.name] # lr * 1.0 217 | # Decoder part 218 | decoder_trainable = [v for v in all_trainable if 'decoder' in v.name] 219 | 220 | decoder_w_trainable = [v for v in decoder_trainable if 'weights' in v.name or 'gamma' in v.name] # lr * 10.0 221 | decoder_b_trainable = [v for v in decoder_trainable if 'biases' in v.name or 'beta' in v.name] # lr * 20.0 222 | # Check 223 | assert(len(all_trainable) == len(decoder_trainable) + len(encoder_trainable)) 224 | assert(len(decoder_trainable) == len(decoder_w_trainable) + len(decoder_b_trainable)) 225 | 226 | # Network raw output 227 | raw_output100 = net.outputs 228 | raw_output075 = net075.outputs 229 | raw_output05 = net05.outputs 230 | raw_output = tf.reduce_max(tf.stack([raw_output100, 231 | tf.image.resize_images(raw_output075, tf.shape(raw_output100)[1:3,]), 232 | tf.image.resize_images(raw_output05, 
tf.shape(raw_output100)[1:3,])]), axis=0) 233 | 234 | # Ground truth: ignore all labels greater than or equal to num_classes 235 | label_proc = prepare_label(self.label_batch, tf.stack(raw_output.get_shape()[1:3]), num_classes=self.conf.num_classes, one_hot=False) # [batch_size, h, w] 236 | label_proc075 = prepare_label(self.label_batch, tf.stack(raw_output075.get_shape()[1:3]), num_classes=self.conf.num_classes, one_hot=False) 237 | label_proc05 = prepare_label(self.label_batch, tf.stack(raw_output05.get_shape()[1:3]), num_classes=self.conf.num_classes, one_hot=False) 238 | 239 | raw_gt = tf.reshape(label_proc, [-1,]) 240 | raw_gt075 = tf.reshape(label_proc075, [-1,]) 241 | raw_gt05 = tf.reshape(label_proc05, [-1,]) 242 | 243 | indices = tf.squeeze(tf.where(tf.less_equal(raw_gt, self.conf.num_classes - 1)), 1) 244 | indices075 = tf.squeeze(tf.where(tf.less_equal(raw_gt075, self.conf.num_classes - 1)), 1) 245 | indices05 = tf.squeeze(tf.where(tf.less_equal(raw_gt05, self.conf.num_classes - 1)), 1) 246 | 247 | gt = tf.cast(tf.gather(raw_gt, indices), tf.int32) 248 | gt075 = tf.cast(tf.gather(raw_gt075, indices075), tf.int32) 249 | gt05 = tf.cast(tf.gather(raw_gt05, indices05), tf.int32) 250 | 251 | raw_prediction = tf.reshape(raw_output, [-1, self.conf.num_classes]) 252 | raw_prediction100 = tf.reshape(raw_output100, [-1, self.conf.num_classes]) 253 | raw_prediction075 = tf.reshape(raw_output075, [-1, self.conf.num_classes]) 254 | raw_prediction05 = tf.reshape(raw_output05, [-1, self.conf.num_classes]) 255 | 256 | prediction = tf.gather(raw_prediction, indices) 257 | prediction100 = tf.gather(raw_prediction100, indices) 258 | prediction075 = tf.gather(raw_prediction075, indices075) 259 | prediction05 = tf.gather(raw_prediction05, indices05) 260 | 261 | # Pixel-wise softmax_cross_entropy loss 262 | loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction, labels=gt) 263 | loss100 = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction100,
labels=gt) 264 | loss075 = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction075, labels=gt075) 265 | loss05 = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction05, labels=gt05) 266 | # L2 regularization 267 | l2_losses = [self.conf.weight_decay * tf.nn.l2_loss(v) for v in all_trainable if 'weights' in v.name] 268 | # Loss function 269 | self.reduced_loss = tf.reduce_mean(loss) + tf.reduce_mean(loss100) + tf.reduce_mean(loss075) + tf.reduce_mean(loss05) + tf.add_n(l2_losses) 270 | 271 | # Define optimizers 272 | # 'poly' learning rate 273 | base_lr = tf.constant(self.conf.learning_rate) 274 | self.curr_step = tf.placeholder(dtype=tf.float32, shape=()) 275 | learning_rate = tf.scalar_mul(base_lr, tf.pow((1 - self.curr_step / self.conf.num_steps), self.conf.power)) 276 | # We use several optimizers here to handle the different lr_mult values, 277 | # a Caffe-style multiplier that controls the actual learning rate for each 278 | # layer. 279 | opt_encoder = tf.train.MomentumOptimizer(learning_rate, self.conf.momentum) 280 | opt_decoder_w = tf.train.MomentumOptimizer(learning_rate * 10.0, self.conf.momentum) 281 | opt_decoder_b = tf.train.MomentumOptimizer(learning_rate * 20.0, self.conf.momentum) 282 | 283 | # Gradient accumulation 284 | # Define variables to accumulate gradients. 285 | accum_grads = [tf.Variable(tf.zeros_like(v.initialized_value()), 286 | trainable=False) for v in encoder_trainable + decoder_w_trainable + decoder_b_trainable] 287 | # Define an operation to clear the accumulated gradients for the next batch. 288 | self.zero_op = [v.assign(tf.zeros_like(v)) for v in accum_grads] 289 | # To make sure each layer is updated with its own learning rate, we do not use 'minimize' here. 290 | # Instead, we split the step into compute_grads + update_params. 291 | # Compute grads 292 | grads = tf.gradients(self.reduced_loss, encoder_trainable + decoder_w_trainable + decoder_b_trainable) 293 | # Accumulate and normalise the gradients.
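The 'poly' schedule constructed above reduces to one line; the helper below is an illustrative reimplementation outside the graph (the base rate shown is an arbitrary example value):

```python
def poly_lr(base_lr, step, num_steps, power):
    """'poly' decay: base_lr * (1 - step / num_steps) ** power."""
    return base_lr * (1.0 - step / num_steps) ** power

lr = poly_lr(2.5e-4, step=0, num_steps=20000, power=0.9)
# The three Momentum optimizers then run at lr, 10 * lr, and 20 * lr
# (the encoder, decoder-weight, and decoder-bias groups respectively).
```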
294 | self.accum_grads_op = [accum_grads[i].assign_add(grad / self.conf.grad_update_every) for i, grad in enumerate(grads)] 295 | 296 | # Split the accumulated gradients back into the three variable groups. 297 | grads_encoder = accum_grads[:len(encoder_trainable)] 298 | grads_decoder_w = accum_grads[len(encoder_trainable) : (len(encoder_trainable) + len(decoder_w_trainable))] 299 | grads_decoder_b = accum_grads[(len(encoder_trainable) + len(decoder_w_trainable)):] 300 | # Update params 301 | train_op_conv = opt_encoder.apply_gradients(zip(grads_encoder, encoder_trainable)) 302 | train_op_fc_w = opt_decoder_w.apply_gradients(zip(grads_decoder_w, decoder_w_trainable)) 303 | train_op_fc_b = opt_decoder_b.apply_gradients(zip(grads_decoder_b, decoder_b_trainable)) 304 | # Finally, get the train_op! 305 | update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) # for collecting moving_mean and moving_variance 306 | with tf.control_dependencies(update_ops): 307 | self.train_op = tf.group(train_op_conv, train_op_fc_w, train_op_fc_b) 308 | 309 | # Saver for storing checkpoints of the model 310 | self.saver = tf.train.Saver(var_list=tf.global_variables(), max_to_keep=0) 311 | 312 | # Loader for loading the pre-trained model 313 | self.loader = tf.train.Saver(var_list=restore_var) 314 | 315 | # Training summary 316 | # Processed predictions: for visualisation. 317 | raw_output_up = tf.image.resize_bilinear(raw_output, input_size) 318 | raw_output_up = tf.argmax(raw_output_up, axis=3) 319 | self.pred = tf.expand_dims(raw_output_up, dim=3) 320 | # Image summary.
321 | images_summary = tf.py_func(inv_preprocess, [self.image_batch, 1, IMG_MEAN], tf.uint8) 322 | labels_summary = tf.py_func(decode_labels, [self.label_batch, 1, self.conf.num_classes], tf.uint8) 323 | preds_summary = tf.py_func(decode_labels, [self.pred, 1, self.conf.num_classes], tf.uint8) 324 | self.total_summary = tf.summary.image('images', 325 | tf.concat(axis=2, values=[images_summary, labels_summary, preds_summary]), 326 | max_outputs=1) # Concatenate row-wise. 327 | if not os.path.exists(self.conf.logdir): 328 | os.makedirs(self.conf.logdir) 329 | self.summary_writer = tf.summary.FileWriter(self.conf.logdir, graph=tf.get_default_graph()) 330 | 331 | def test_setup(self): 332 | # Create queue coordinator. 333 | self.coord = tf.train.Coordinator() 334 | 335 | # Load reader 336 | with tf.name_scope("create_inputs"): 337 | reader = ImageReader( 338 | self.conf.data_dir, 339 | self.conf.valid_data_list, 340 | None, # the images have different sizes 341 | False, # no data-aug 342 | False, # no data-aug 343 | self.conf.ignore_label, 344 | IMG_MEAN, 345 | self.coord) 346 | image, label = reader.image, reader.label # [h, w, 3 or 1] 347 | # Add one batch dimension [1, h, w, 3 or 1] 348 | self.image_batch, self.label_batch = tf.expand_dims(image, dim=0), tf.expand_dims(label, dim=0) 349 | h_orig, w_orig = tf.to_float(tf.shape(self.image_batch)[1]), tf.to_float(tf.shape(self.image_batch)[2]) 350 | image_batch_075 = tf.image.resize_images(self.image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.75)), tf.to_int32(tf.multiply(w_orig, 0.75))])) 351 | image_batch_05 = tf.image.resize_images(self.image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.5)), tf.to_int32(tf.multiply(w_orig, 0.5))])) 352 | 353 | # Create network 354 | if self.conf.encoder_name not in ['res101', 'res50', 'deeplab']: 355 | print('encoder_name ERROR!') 356 | print("Please input: res101, res50, or deeplab") 357 | sys.exit(-1) 358 | elif self.conf.encoder_name == 'deeplab': 359 | with 
tf.variable_scope('', reuse=False): 360 | net = Deeplab_v2(self.image_batch, self.conf.num_classes, False) 361 | with tf.variable_scope('', reuse=True): 362 | net075 = Deeplab_v2(image_batch_075, self.conf.num_classes, False) 363 | with tf.variable_scope('', reuse=True): 364 | net05 = Deeplab_v2(image_batch_05, self.conf.num_classes, False) 365 | else: 366 | with tf.variable_scope('', reuse=False): 367 | net = ResNet_segmentation(self.image_batch, self.conf.num_classes, False, self.conf.encoder_name) 368 | with tf.variable_scope('', reuse=True): 369 | net075 = ResNet_segmentation(image_batch_075, self.conf.num_classes, False, self.conf.encoder_name) 370 | with tf.variable_scope('', reuse=True): 371 | net05 = ResNet_segmentation(image_batch_05, self.conf.num_classes, False, self.conf.encoder_name) 372 | 373 | # predictions 374 | # Network raw output 375 | raw_output100 = net.outputs 376 | raw_output075 = net075.outputs 377 | raw_output05 = net05.outputs 378 | raw_output = tf.reduce_max(tf.stack([raw_output100, 379 | tf.image.resize_images(raw_output075, tf.shape(raw_output100)[1:3,]), 380 | tf.image.resize_images(raw_output05, tf.shape(raw_output100)[1:3,])]), axis=0) 381 | raw_output = tf.image.resize_bilinear(raw_output, tf.shape(self.image_batch)[1:3,]) 382 | raw_output = tf.argmax(raw_output, axis=3) 383 | pred = tf.expand_dims(raw_output, dim=3) 384 | self.pred = tf.reshape(pred, [-1,]) 385 | # labels 386 | gt = tf.reshape(self.label_batch, [-1,]) 387 | # Ignoring all labels greater than or equal to n_classes. 
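The `weights` mask built right after the comment above drops void pixels (label >= num_classes, e.g. the 255 ignore label) from the metrics. A NumPy equivalent of that masking for plain pixel accuracy, with a made-up label map:

```python
import numpy as np

def masked_pixel_accuracy(pred, gt, num_classes):
    """Pixel accuracy that ignores labels >= num_classes (e.g. the 255 void label)."""
    valid = gt <= num_classes - 1        # same test as tf.less_equal above
    return float((pred[valid] == gt[valid]).mean())

pred = np.array([0, 1, 2, 2, 1])
gt   = np.array([0, 1, 1, 2, 255])      # last pixel is void
acc = masked_pixel_accuracy(pred, gt, num_classes=3)   # 3 of 4 valid pixels correct
```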
388 | temp = tf.less_equal(gt, self.conf.num_classes - 1) 389 | weights = tf.cast(temp, tf.int32) 390 | 391 | # fix for tf 1.3.0 392 | gt = tf.where(temp, gt, tf.cast(temp, tf.uint8)) 393 | 394 | # Pixel accuracy 395 | self.accu, self.accu_update_op = tf.contrib.metrics.streaming_accuracy( 396 | self.pred, gt, weights=weights) 397 | 398 | # mIoU 399 | self.mIoU, self.mIou_update_op = tf.contrib.metrics.streaming_mean_iou( 400 | self.pred, gt, num_classes=self.conf.num_classes, weights=weights) 401 | 402 | # confusion matrix 403 | self.confusion_matrix = tf.contrib.metrics.confusion_matrix( 404 | self.pred, gt, num_classes=self.conf.num_classes, weights=weights) 405 | 406 | # Loader for loading the checkpoint 407 | self.loader = tf.train.Saver(var_list=tf.global_variables()) 408 | 409 | def predict_setup(self): 410 | # Create queue coordinator. 411 | self.coord = tf.train.Coordinator() 412 | 413 | # Load reader 414 | with tf.name_scope("create_inputs"): 415 | reader = ImageReader( 416 | self.conf.data_dir, 417 | self.conf.test_data_list, 418 | None, # the images have different sizes 419 | False, # no data-aug 420 | False, # no data-aug 421 | self.conf.ignore_label, 422 | IMG_MEAN, 423 | self.coord) 424 | image, label = reader.image, reader.label # [h, w, 3 or 1] 425 | # Add one batch dimension [1, h, w, 3 or 1] 426 | image_batch, label_batch = tf.expand_dims(image, dim=0), tf.expand_dims(label, dim=0) 427 | h_orig, w_orig = tf.to_float(tf.shape(image_batch)[1]), tf.to_float(tf.shape(image_batch)[2]) 428 | image_batch_075 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.75)), tf.to_int32(tf.multiply(w_orig, 0.75))])) 429 | image_batch_05 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.5)), tf.to_int32(tf.multiply(w_orig, 0.5))])) 430 | 431 | 432 | # Create network 433 | if self.conf.encoder_name not in ['res101', 'res50', 'deeplab']: 434 | print('encoder_name ERROR!') 435 | print("Please input: 
res101, res50, or deeplab") 436 | sys.exit(-1) 437 | elif self.conf.encoder_name == 'deeplab': 438 | with tf.variable_scope('', reuse=False): 439 | net = Deeplab_v2(image_batch, self.conf.num_classes, False) 440 | with tf.variable_scope('', reuse=True): 441 | net075 = Deeplab_v2(image_batch_075, self.conf.num_classes, False) 442 | with tf.variable_scope('', reuse=True): 443 | net05 = Deeplab_v2(image_batch_05, self.conf.num_classes, False) 444 | else: 445 | with tf.variable_scope('', reuse=False): 446 | net = ResNet_segmentation(image_batch, self.conf.num_classes, False, self.conf.encoder_name) 447 | with tf.variable_scope('', reuse=True): 448 | net075 = ResNet_segmentation(image_batch_075, self.conf.num_classes, False, self.conf.encoder_name) 449 | with tf.variable_scope('', reuse=True): 450 | net05 = ResNet_segmentation(image_batch_05, self.conf.num_classes, False, self.conf.encoder_name) 451 | 452 | # predictions 453 | # Network raw output 454 | raw_output100 = net.outputs 455 | raw_output075 = net075.outputs 456 | raw_output05 = net05.outputs 457 | raw_output = tf.reduce_max(tf.stack([raw_output100, 458 | tf.image.resize_images(raw_output075, tf.shape(raw_output100)[1:3,]), 459 | tf.image.resize_images(raw_output05, tf.shape(raw_output100)[1:3,])]), axis=0) 460 | raw_output = tf.image.resize_bilinear(raw_output, tf.shape(image_batch)[1:3,]) 461 | raw_output = tf.argmax(raw_output, axis=3) 462 | self.pred = tf.cast(tf.expand_dims(raw_output, dim=3), tf.uint8) 463 | 464 | # Create directory 465 | if not os.path.exists(self.conf.out_dir): 466 | os.makedirs(self.conf.out_dir) 467 | os.makedirs(self.conf.out_dir + '/prediction') 468 | if self.conf.visual: 469 | os.makedirs(self.conf.out_dir + '/visual_prediction') 470 | 471 | # Loader for loading the checkpoint 472 | self.loader = tf.train.Saver(var_list=tf.global_variables()) 473 | 474 | def save(self, saver, step): 475 | ''' 476 | Save weights. 
477 | ''' 478 | model_name = 'model.ckpt' 479 | checkpoint_path = os.path.join(self.conf.modeldir, model_name) 480 | if not os.path.exists(self.conf.modeldir): 481 | os.makedirs(self.conf.modeldir) 482 | saver.save(self.sess, checkpoint_path, global_step=step) 483 | print('The checkpoint has been created.') 484 | 485 | def load(self, saver, filename): 486 | ''' 487 | Load trained weights. 488 | ''' 489 | saver.restore(self.sess, filename) 490 | print("Restored model parameters from {}".format(filename)) 491 | 492 | def compute_IoU_per_class(self, confusion_matrix): 493 | mIoU = 0 494 | for i in range(self.conf.num_classes): 495 | # IoU = true_positive / (true_positive + false_positive + false_negative) 496 | TP = confusion_matrix[i,i] 497 | FP = np.sum(confusion_matrix[:, i]) - TP 498 | FN = np.sum(confusion_matrix[i]) - TP 499 | IoU = TP / (TP + FP + FN) 500 | print ('class %d: %.3f' % (i, IoU)) 501 | mIoU += IoU / self.conf.num_classes 502 | print ('mIoU: %.3f' % mIoU) -------------------------------------------------------------------------------- /network.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | import numpy as np 3 | import six 4 | import sys 5 | 6 | 7 | """ 8 | This script defines the segmentation network. 9 | 10 | The encoding part is a pre-trained ResNet. This script supports several settings (you specify them in main.py): 11 | 12 | Deeplab v2 pre-trained model (pre-trained on MSCOCO) ('deeplab_resnet_init.ckpt') 13 | Deeplab v2 pre-trained model (pre-trained on MSCOCO + PASCAL_train+val) ('deeplab_resnet.ckpt') 14 | Original ResNet-101 ('resnet_v1_101.ckpt') 15 | Original ResNet-50 ('resnet_v1_50.ckpt') 16 | 17 | You may find the download links in README. 18 | 19 | To use the pre-trained models, the name of each layer is kept the same as in the corresponding .ckpt file.
20 | """ 21 | 22 | 23 | 24 | class Deeplab_v2(object): 25 | """ 26 | Deeplab v2 pre-trained model (pre-trained on MSCOCO) ('deeplab_resnet_init.ckpt') 27 | Deeplab v2 pre-trained model (pre-trained on MSCOCO + PASCAL_train+val) ('deeplab_resnet.ckpt') 28 | """ 29 | def __init__(self, inputs, num_classes, phase): 30 | self.inputs = inputs 31 | self.num_classes = num_classes 32 | self.channel_axis = 3 33 | self.phase = phase # train (True) or test (False), for BN layers in the decoder 34 | self.build_network() 35 | 36 | def build_network(self): 37 | self.encoding = self.build_encoder() 38 | self.outputs = self.build_decoder(self.encoding) 39 | 40 | def build_encoder(self): 41 | print("-----------build encoder: deeplab pre-trained-----------") 42 | outputs = self._start_block() 43 | print("after start block:", outputs.shape) 44 | outputs = self._bottleneck_resblock(outputs, 256, '2a', identity_connection=False) 45 | outputs = self._bottleneck_resblock(outputs, 256, '2b') 46 | outputs = self._bottleneck_resblock(outputs, 256, '2c') 47 | print("after block1:", outputs.shape) 48 | outputs = self._bottleneck_resblock(outputs, 512, '3a', half_size=True, identity_connection=False) 49 | for i in six.moves.range(1, 4): 50 | outputs = self._bottleneck_resblock(outputs, 512, '3b%d' % i) 51 | print("after block2:", outputs.shape) 52 | outputs = self._dilated_bottle_resblock(outputs, 1024, 2, '4a', identity_connection=False) 53 | for i in six.moves.range(1, 23): 54 | outputs = self._dilated_bottle_resblock(outputs, 1024, 2, '4b%d' % i) 55 | print("after block3:", outputs.shape) 56 | outputs = self._dilated_bottle_resblock(outputs, 2048, 4, '5a', identity_connection=False) 57 | outputs = self._dilated_bottle_resblock(outputs, 2048, 4, '5b') 58 | outputs = self._dilated_bottle_resblock(outputs, 2048, 4, '5c') 59 | print("after block4:", outputs.shape) 60 | return outputs 61 | 62 | def build_decoder(self, encoding): 63 | print("-----------build decoder-----------") 64 | outputs = 
self._ASPP(encoding, self.num_classes, [6, 12, 18, 24]) 65 | print("after aspp block:", outputs.shape) 66 | return outputs 67 | 68 | # blocks 69 | def _start_block(self): 70 | outputs = self._conv2d(self.inputs, 7, 64, 2, name='conv1') 71 | outputs = self._batch_norm(outputs, name='bn_conv1', is_training=False, activation_fn=tf.nn.relu) 72 | outputs = self._max_pool2d(outputs, 3, 2, name='pool1') 73 | return outputs 74 | 75 | def _bottleneck_resblock(self, x, num_o, name, half_size=False, identity_connection=True): 76 | first_s = 2 if half_size else 1 77 | assert num_o % 4 == 0, 'Bottleneck number of output ERROR!' 78 | # branch1 79 | if not identity_connection: 80 | o_b1 = self._conv2d(x, 1, num_o, first_s, name='res%s_branch1' % name) 81 | o_b1 = self._batch_norm(o_b1, name='bn%s_branch1' % name, is_training=False, activation_fn=None) 82 | else: 83 | o_b1 = x 84 | # branch2 85 | o_b2a = self._conv2d(x, 1, num_o // 4, first_s, name='res%s_branch2a' % name)  # integer division keeps the channel count an int under Python 3 86 | o_b2a = self._batch_norm(o_b2a, name='bn%s_branch2a' % name, is_training=False, activation_fn=tf.nn.relu) 87 | 88 | o_b2b = self._conv2d(o_b2a, 3, num_o // 4, 1, name='res%s_branch2b' % name) 89 | o_b2b = self._batch_norm(o_b2b, name='bn%s_branch2b' % name, is_training=False, activation_fn=tf.nn.relu) 90 | 91 | o_b2c = self._conv2d(o_b2b, 1, num_o, 1, name='res%s_branch2c' % name) 92 | o_b2c = self._batch_norm(o_b2c, name='bn%s_branch2c' % name, is_training=False, activation_fn=None) 93 | # add 94 | outputs = self._add([o_b1,o_b2c], name='res%s' % name) 95 | # relu 96 | outputs = self._relu(outputs, name='res%s_relu' % name) 97 | return outputs 98 | 99 | def _dilated_bottle_resblock(self, x, num_o, dilation_factor, name, identity_connection=True): 100 | assert num_o % 4 == 0, 'Bottleneck number of output ERROR!'
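Both `_dilated_bottle_resblock`, declared just above, and `_dilated_conv2d` rely on atrous convolution: the same number of kernel taps, but spaced `dilation_factor` apart, so the receptive field grows without extra parameters. A 1-D NumPy sketch of the sampling pattern (shapes and values are illustrative only):

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'VALID' 1-D dilated convolution: out[i] = sum_k w[k] * x[i + k * dilation]."""
    taps = len(w)
    span = (taps - 1) * dilation + 1                 # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[k] * x[i + k * dilation] for k in range(taps))
    return out

x = np.arange(8, dtype=float)          # [0, 1, ..., 7]
w = np.array([1.0, 1.0, 1.0])
y1 = dilated_conv1d(x, w, dilation=1)  # ordinary conv, window span 3
y2 = dilated_conv1d(x, w, dilation=2)  # same 3 taps, window span 5
```

With dilation 2 the three taps cover five input positions, which is the same effect `tf.nn.atrous_conv2d` achieves in 2-D for the block's 3x3 branch.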
101 | # branch1 102 | if not identity_connection: 103 | o_b1 = self._conv2d(x, 1, num_o, 1, name='res%s_branch1' % name) 104 | o_b1 = self._batch_norm(o_b1, name='bn%s_branch1' % name, is_training=False, activation_fn=None) 105 | else: 106 | o_b1 = x 107 | # branch2 108 | o_b2a = self._conv2d(x, 1, num_o // 4, 1, name='res%s_branch2a' % name) 109 | o_b2a = self._batch_norm(o_b2a, name='bn%s_branch2a' % name, is_training=False, activation_fn=tf.nn.relu) 110 | 111 | o_b2b = self._dilated_conv2d(o_b2a, 3, num_o // 4, dilation_factor, name='res%s_branch2b' % name) 112 | o_b2b = self._batch_norm(o_b2b, name='bn%s_branch2b' % name, is_training=False, activation_fn=tf.nn.relu) 113 | 114 | o_b2c = self._conv2d(o_b2b, 1, num_o, 1, name='res%s_branch2c' % name) 115 | o_b2c = self._batch_norm(o_b2c, name='bn%s_branch2c' % name, is_training=False, activation_fn=None) 116 | # add 117 | outputs = self._add([o_b1,o_b2c], name='res%s' % name) 118 | # relu 119 | outputs = self._relu(outputs, name='res%s_relu' % name) 120 | return outputs 121 | 122 | def _ASPP(self, x, num_o, dilations): 123 | o = [] 124 | for i, d in enumerate(dilations): 125 | o.append(self._dilated_conv2d(x, 3, num_o, d, name='fc1_voc12_c%d' % i, biased=True)) 126 | return self._add(o, name='fc1_voc12') 127 | 128 | # layers 129 | def _conv2d(self, x, kernel_size, num_o, stride, name, biased=False): 130 | """ 131 | Conv2d without BN or relu. 132 | """ 133 | num_x = x.shape[self.channel_axis].value 134 | with tf.variable_scope(name) as scope: 135 | w = tf.get_variable('weights', shape=[kernel_size, kernel_size, num_x, num_o]) 136 | s = [1, stride, stride, 1] 137 | o = tf.nn.conv2d(x, w, s, padding='SAME') 138 | if biased: 139 | b = tf.get_variable('biases', shape=[num_o]) 140 | o = tf.nn.bias_add(o, b) 141 | return o 142 | 143 | def _dilated_conv2d(self, x, kernel_size, num_o, dilation_factor, name, biased=False): 144 | """ 145 | Dilated conv2d without BN or relu.
146 | """ 147 | num_x = x.shape[self.channel_axis].value 148 | with tf.variable_scope(name) as scope: 149 | w = tf.get_variable('weights', shape=[kernel_size, kernel_size, num_x, num_o]) 150 | o = tf.nn.atrous_conv2d(x, w, dilation_factor, padding='SAME') 151 | if biased: 152 | b = tf.get_variable('biases', shape=[num_o]) 153 | o = tf.nn.bias_add(o, b) 154 | return o 155 | 156 | def _relu(self, x, name): 157 | return tf.nn.relu(x, name=name) 158 | 159 | def _add(self, x_l, name): 160 | return tf.add_n(x_l, name=name) 161 | 162 | def _max_pool2d(self, x, kernel_size, stride, name): 163 | k = [1, kernel_size, kernel_size, 1] 164 | s = [1, stride, stride, 1] 165 | return tf.nn.max_pool(x, k, s, padding='SAME', name=name) 166 | 167 | def _batch_norm(self, x, name, is_training, activation_fn, trainable=False): 168 | # For a small batch size, it is better to keep the statistics of the 169 | # BN layers (running means and variances) frozen and not to update the 170 | # values provided by the pre-trained model; this is done by setting is_training=False. 171 | # Note that even with is_training=False, the BN parameters gamma (scale) and beta (offset) 172 | # are still updated if they are present in the var_list of the optimiser definition. 173 | # Set trainable=False to remove them from trainable_variables.
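With the frozen statistics described in the comments above, batch norm at inference is just an affine rescale by the stored moving averages. A NumPy sketch (all parameter values are made up):

```python
import numpy as np

def frozen_batch_norm(x, moving_mean, moving_var, gamma, beta, eps=1e-5):
    """BN with is_training=False: use stored moving statistics instead of
    batch statistics, so the pre-trained values are never updated."""
    return gamma * (x - moving_mean) / np.sqrt(moving_var + eps) + beta

x = np.array([1.0, 3.0])
y = frozen_batch_norm(x, moving_mean=1.0, moving_var=4.0, gamma=2.0, beta=0.5)
```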
174 | with tf.variable_scope(name) as scope: 175 | o = tf.contrib.layers.batch_norm( 176 | x, 177 | scale=True, 178 | activation_fn=activation_fn, 179 | is_training=is_training, 180 | trainable=trainable, 181 | scope=scope) 182 | return o 183 | 184 | 185 | 186 | class ResNet_segmentation(object): 187 | """ 188 | Original ResNet-101 ('resnet_v1_101.ckpt') 189 | Original ResNet-50 ('resnet_v1_50.ckpt') 190 | """ 191 | def __init__(self, inputs, num_classes, phase, encoder_name): 192 | if encoder_name not in ['res101', 'res50']: 193 | print('encoder_name ERROR!') 194 | print("Please input: res101, res50") 195 | sys.exit(-1) 196 | self.encoder_name = encoder_name 197 | self.inputs = inputs 198 | self.num_classes = num_classes 199 | self.channel_axis = 3 200 | self.phase = phase # train (True) or test (False), for BN layers in the decoder 201 | self.build_network() 202 | 203 | def build_network(self): 204 | self.encoding = self.build_encoder() 205 | self.outputs = self.build_decoder(self.encoding) 206 | 207 | def build_encoder(self): 208 | print("-----------build encoder: %s-----------" % self.encoder_name) 209 | scope_name = 'resnet_v1_101' if self.encoder_name == 'res101' else 'resnet_v1_50' 210 | with tf.variable_scope(scope_name) as scope: 211 | outputs = self._start_block('conv1') 212 | print("after start block:", outputs.shape) 213 | with tf.variable_scope('block1') as scope: 214 | outputs = self._bottleneck_resblock(outputs, 256, 'unit_1', identity_connection=False) 215 | outputs = self._bottleneck_resblock(outputs, 256, 'unit_2') 216 | outputs = self._bottleneck_resblock(outputs, 256, 'unit_3') 217 | print("after block1:", outputs.shape) 218 | with tf.variable_scope('block2') as scope: 219 | outputs = self._bottleneck_resblock(outputs, 512, 'unit_1', half_size=True, identity_connection=False) 220 | for i in six.moves.range(2, 5): 221 | outputs = self._bottleneck_resblock(outputs, 512, 'unit_%d' % i) 222 | print("after block2:", outputs.shape) 223 | with 
tf.variable_scope('block3') as scope: 224 | outputs = self._dilated_bottle_resblock(outputs, 1024, 2, 'unit_1', identity_connection=False) 225 | num_layers_block3 = 23 if self.encoder_name == 'res101' else 6 226 | for i in six.moves.range(2, num_layers_block3+1): 227 | outputs = self._dilated_bottle_resblock(outputs, 1024, 2, 'unit_%d' % i) 228 | print("after block3:", outputs.shape) 229 | with tf.variable_scope('block4') as scope: 230 | outputs = self._dilated_bottle_resblock(outputs, 2048, 4, 'unit_1', identity_connection=False) 231 | outputs = self._dilated_bottle_resblock(outputs, 2048, 4, 'unit_2') 232 | outputs = self._dilated_bottle_resblock(outputs, 2048, 4, 'unit_3') 233 | print("after block4:", outputs.shape) 234 | return outputs 235 | 236 | def build_decoder(self, encoding): 237 | print("-----------build decoder-----------") 238 | with tf.variable_scope('decoder') as scope: 239 | outputs = self._ASPP(encoding, self.num_classes, [6, 12, 18, 24]) 240 | print("after aspp block:", outputs.shape) 241 | return outputs 242 | 243 | # blocks 244 | def _start_block(self, name): 245 | outputs = self._conv2d(self.inputs, 7, 64, 2, name=name) 246 | outputs = self._batch_norm(outputs, name=name, is_training=False, activation_fn=tf.nn.relu) 247 | outputs = self._max_pool2d(outputs, 3, 2, name='pool1') 248 | return outputs 249 | 250 | def _bottleneck_resblock(self, x, num_o, name, half_size=False, identity_connection=True): 251 | first_s = 2 if half_size else 1 252 | assert num_o % 4 == 0, 'Bottleneck number of output ERROR!' 
253 | # branch1 254 | if not identity_connection: 255 | o_b1 = self._conv2d(x, 1, num_o, first_s, name='%s/bottleneck_v1/shortcut' % name) 256 | o_b1 = self._batch_norm(o_b1, name='%s/bottleneck_v1/shortcut' % name, is_training=False, activation_fn=None) 257 | else: 258 | o_b1 = x 259 | # branch2 260 | o_b2a = self._conv2d(x, 1, num_o // 4, first_s, name='%s/bottleneck_v1/conv1' % name) 261 | o_b2a = self._batch_norm(o_b2a, name='%s/bottleneck_v1/conv1' % name, is_training=False, activation_fn=tf.nn.relu) 262 | 263 | o_b2b = self._conv2d(o_b2a, 3, num_o // 4, 1, name='%s/bottleneck_v1/conv2' % name) 264 | o_b2b = self._batch_norm(o_b2b, name='%s/bottleneck_v1/conv2' % name, is_training=False, activation_fn=tf.nn.relu) 265 | 266 | o_b2c = self._conv2d(o_b2b, 1, num_o, 1, name='%s/bottleneck_v1/conv3' % name) 267 | o_b2c = self._batch_norm(o_b2c, name='%s/bottleneck_v1/conv3' % name, is_training=False, activation_fn=None) 268 | # add 269 | outputs = self._add([o_b1,o_b2c], name='%s/bottleneck_v1/add' % name) 270 | # relu 271 | outputs = self._relu(outputs, name='%s/bottleneck_v1/relu' % name) 272 | return outputs 273 | 274 | def _dilated_bottle_resblock(self, x, num_o, dilation_factor, name, identity_connection=True): 275 | assert num_o % 4 == 0, 'Bottleneck number of output ERROR!'
276 | # branch1 277 | if not identity_connection: 278 | o_b1 = self._conv2d(x, 1, num_o, 1, name='%s/bottleneck_v1/shortcut' % name) 279 | o_b1 = self._batch_norm(o_b1, name='%s/bottleneck_v1/shortcut' % name, is_training=False, activation_fn=None) 280 | else: 281 | o_b1 = x 282 | # branch2 283 | o_b2a = self._conv2d(x, 1, num_o // 4, 1, name='%s/bottleneck_v1/conv1' % name) 284 | o_b2a = self._batch_norm(o_b2a, name='%s/bottleneck_v1/conv1' % name, is_training=False, activation_fn=tf.nn.relu) 285 | 286 | o_b2b = self._dilated_conv2d(o_b2a, 3, num_o // 4, dilation_factor, name='%s/bottleneck_v1/conv2' % name) 287 | o_b2b = self._batch_norm(o_b2b, name='%s/bottleneck_v1/conv2' % name, is_training=False, activation_fn=tf.nn.relu) 288 | 289 | o_b2c = self._conv2d(o_b2b, 1, num_o, 1, name='%s/bottleneck_v1/conv3' % name) 290 | o_b2c = self._batch_norm(o_b2c, name='%s/bottleneck_v1/conv3' % name, is_training=False, activation_fn=None) 291 | # add 292 | outputs = self._add([o_b1,o_b2c], name='%s/bottleneck_v1/add' % name) 293 | # relu 294 | outputs = self._relu(outputs, name='%s/bottleneck_v1/relu' % name) 295 | return outputs 296 | 297 | def _ASPP(self, x, num_o, dilations): 298 | o = [] 299 | for i, d in enumerate(dilations): 300 | o.append(self._dilated_conv2d(x, 3, num_o, d, name='aspp/conv%d' % (i+1), biased=True)) 301 | return self._add(o, name='aspp/add') 302 | 303 | # layers 304 | def _conv2d(self, x, kernel_size, num_o, stride, name, biased=False): 305 | """ 306 | Conv2d without BN or relu.
307 | """ 308 | num_x = x.shape[self.channel_axis].value 309 | with tf.variable_scope(name) as scope: 310 | w = tf.get_variable('weights', shape=[kernel_size, kernel_size, num_x, num_o]) 311 | s = [1, stride, stride, 1] 312 | o = tf.nn.conv2d(x, w, s, padding='SAME') 313 | if biased: 314 | b = tf.get_variable('biases', shape=[num_o]) 315 | o = tf.nn.bias_add(o, b) 316 | return o 317 | 318 | def _dilated_conv2d(self, x, kernel_size, num_o, dilation_factor, name, biased=False): 319 | """ 320 | Dilated conv2d without BN or relu. 321 | """ 322 | num_x = x.shape[self.channel_axis].value 323 | with tf.variable_scope(name) as scope: 324 | w = tf.get_variable('weights', shape=[kernel_size, kernel_size, num_x, num_o]) 325 | o = tf.nn.atrous_conv2d(x, w, dilation_factor, padding='SAME') 326 | if biased: 327 | b = tf.get_variable('biases', shape=[num_o]) 328 | o = tf.nn.bias_add(o, b) 329 | return o 330 | 331 | def _relu(self, x, name): 332 | return tf.nn.relu(x, name=name) 333 | 334 | def _add(self, x_l, name): 335 | return tf.add_n(x_l, name=name) 336 | 337 | def _max_pool2d(self, x, kernel_size, stride, name): 338 | k = [1, kernel_size, kernel_size, 1] 339 | s = [1, stride, stride, 1] 340 | return tf.nn.max_pool(x, k, s, padding='SAME', name=name) 341 | 342 | def _batch_norm(self, x, name, is_training, activation_fn, trainable=False): 343 | # For a small batch size, it is better to keep 344 | # the statistics of the BN layers (running means and variances) frozen, 345 | # and not to update the values provided by the pre-trained model, by setting is_training=False. 346 | # Note that is_training=False still updates the BN parameters gamma (scale) and beta (offset) 347 | # if they are present in the var_list of the optimiser definition. 348 | # Set trainable=False to remove them from trainable_variables.
349 | with tf.variable_scope(name+'/BatchNorm') as scope: 350 | o = tf.contrib.layers.batch_norm( 351 | x, 352 | scale=True, 353 | activation_fn=activation_fn, 354 | is_training=is_training, 355 | trainable=trainable, 356 | scope=scope) 357 | return o 358 | -------------------------------------------------------------------------------- /plot_training_curve.py: -------------------------------------------------------------------------------- 1 | import matplotlib.pyplot as plt 2 | import numpy as np 3 | 4 | LOG_FILE = './log.txt' 5 | 6 | def get_log(log): 7 | f = open(log, 'r') 8 | lines = f.readlines() 9 | f.close() 10 | 11 | loss = [] 12 | for line in lines: 13 | loss.append(float(line.strip('\n').split(' ')[1])) 14 | 15 | return loss 16 | 17 | def plot_iteration(log): 18 | loss = get_log(log) 19 | plt.plot(range(len(loss)), loss) 20 | plt.xlabel('Iteration') 21 | plt.ylabel('Loss') 22 | plt.title('Training Curve') 23 | plt.show() 24 | 25 | def plot_epoch(log, num_samples, batch_size): 26 | """Average the loss over each epoch. 27 | num_samples: number of samples in the training dataset 28 | batch_size: training batch size 29 | """ 30 | loss = get_log(log) 31 | epochs = len(loss) * batch_size // num_samples 32 | iters_per_epochs = num_samples // batch_size 33 | x = range(0, epochs+1) 34 | y = [loss[0]] 35 | for i in range(epochs): 36 | y.append(np.mean(np.array(loss[i*iters_per_epochs+1: (i+1)*iters_per_epochs+1]))) 37 | plt.plot(x, y) 38 | plt.xlabel('Epoch') 39 | plt.ylabel('Loss') 40 | plt.title('Training Curve') 41 | plt.show() 42 | 43 | if __name__ == '__main__': 44 | plot_epoch(LOG_FILE, 10582, 10) -------------------------------------------------------------------------------- /utils/__init__.py: -------------------------------------------------------------------------------- 1 | from .image_reader import ImageReader, read_labeled_image_list 2 | from .label_utils import decode_labels, inv_preprocess, prepare_label 3 | from .write_to_log import write_log
-------------------------------------------------------------------------------- /utils/image_reader.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | import numpy as np 4 | import tensorflow as tf 5 | 6 | def image_scaling(img, label): 7 | """ 8 | Randomly scales the images between 0.5 and 1.5 times the original size. 9 | 10 | Args: 11 | img: Training image to scale. 12 | label: Segmentation mask to scale.
13 | """ 14 | 15 | scale = tf.random_uniform([1], minval=0.5, maxval=1.5, dtype=tf.float32, seed=None) 16 | h_new = tf.to_int32(tf.multiply(tf.to_float(tf.shape(img)[0]), scale)) 17 | w_new = tf.to_int32(tf.multiply(tf.to_float(tf.shape(img)[1]), scale)) 18 | new_shape = tf.squeeze(tf.stack([h_new, w_new]), squeeze_dims=[1]) 19 | img = tf.image.resize_images(img, new_shape) 20 | label = tf.image.resize_nearest_neighbor(tf.expand_dims(label, 0), new_shape) 21 | label = tf.squeeze(label, squeeze_dims=[0]) 22 | 23 | return img, label 24 | 25 | def image_mirroring(img, label): 26 | """ 27 | Randomly mirrors the images. 28 | 29 | Args: 30 | img: Training image to mirror. 31 | label: Segmentation mask to mirror. 32 | """ 33 | 34 | distort_left_right_random = tf.random_uniform([1], 0, 1.0, dtype=tf.float32)[0] 35 | mirror = tf.less(tf.stack([1.0, distort_left_right_random, 1.0]), 0.5) 36 | mirror = tf.boolean_mask([0, 1, 2], mirror) 37 | img = tf.reverse(img, mirror) 38 | label = tf.reverse(label, mirror) 39 | return img, label 40 | 41 | def random_crop_and_pad_image_and_labels(image, label, crop_h, crop_w, ignore_label=255): 42 | """ 43 | Randomly crops and pads the input images. 44 | 45 | Args: 46 | image: Training image to crop/pad. 47 | label: Segmentation mask to crop/pad. 48 | crop_h: Height of cropped segment. 49 | crop_w: Width of cropped segment. 50 | ignore_label: Label to ignore during training. 51 | """ 52 | 53 | label = tf.cast(label, dtype=tf.float32) 54 | label = label - ignore_label # Needs to be subtracted and later added due to 0 padding.
55 | combined = tf.concat(axis=2, values=[image, label]) 56 | image_shape = tf.shape(image) 57 | combined_pad = tf.image.pad_to_bounding_box(combined, 0, 0, tf.maximum(crop_h, image_shape[0]), tf.maximum(crop_w, image_shape[1])) 58 | 59 | last_image_dim = tf.shape(image)[-1] 60 | # last_label_dim = tf.shape(label)[-1] 61 | combined_crop = tf.random_crop(combined_pad, [crop_h, crop_w, 4]) 62 | img_crop = combined_crop[:, :, :last_image_dim] 63 | label_crop = combined_crop[:, :, last_image_dim:] 64 | label_crop = label_crop + ignore_label 65 | label_crop = tf.cast(label_crop, dtype=tf.uint8) 66 | 67 | # Set static shape so that tensorflow knows shape at compile time. 68 | img_crop.set_shape((crop_h, crop_w, 3)) 69 | label_crop.set_shape((crop_h,crop_w, 1)) 70 | return img_crop, label_crop 71 | 72 | def read_labeled_image_list(data_dir, data_list): 73 | """Reads txt file containing paths to images and ground truth masks. 74 | 75 | Args: 76 | data_dir: path to the directory with images and masks. 77 | data_list: path to the file with lines of the form '/path/to/image /path/to/mask'. 78 | 79 | Returns: 80 | Two lists with all file names for images and masks, respectively. 81 | """ 82 | f = open(data_list, 'r') 83 | images = [] 84 | masks = [] 85 | for line in f: 86 | try: 87 | image, mask = line.strip("\n").split(' ') 88 | except ValueError: # Adhoc for test. 89 | image = mask = line.strip("\n") 90 | images.append(data_dir + image) 91 | masks.append(data_dir + mask) 92 | return images, masks 93 | 94 | def read_images_from_disk(input_queue, input_size, random_scale, random_mirror, ignore_label, img_mean): # optional pre-processing arguments 95 | """Read one image and its corresponding mask with optional pre-processing. 96 | 97 | Args: 98 | input_queue: tf queue with paths to the image and its mask. 99 | input_size: a tuple with (height, width) values. 100 | If not given, return images of original size. 
101 | random_scale: whether to randomly scale the images prior 102 | to random crop. 103 | random_mirror: whether to randomly mirror the images prior 104 | to random crop. 105 | ignore_label: index of label to ignore during the training. 106 | img_mean: vector of mean colour values. 107 | 108 | Returns: 109 | Two tensors: the decoded image and its mask. 110 | """ 111 | 112 | img_contents = tf.read_file(input_queue[0]) 113 | label_contents = tf.read_file(input_queue[1]) 114 | 115 | img = tf.image.decode_jpeg(img_contents, channels=3) 116 | img_r, img_g, img_b = tf.split(axis=2, num_or_size_splits=3, value=img) 117 | img = tf.cast(tf.concat(axis=2, values=[img_b, img_g, img_r]), dtype=tf.float32) 118 | # Extract mean. 119 | img -= img_mean 120 | 121 | label = tf.image.decode_png(label_contents, channels=1) 122 | 123 | if input_size is not None: 124 | h, w = input_size 125 | 126 | # Randomly scale the images and labels. 127 | if random_scale: 128 | img, label = image_scaling(img, label) 129 | 130 | # Randomly mirror the images and labels. 131 | if random_mirror: 132 | img, label = image_mirroring(img, label) 133 | 134 | # Randomly crops the images and labels. 135 | img, label = random_crop_and_pad_image_and_labels(img, label, h, w, ignore_label) 136 | 137 | return img, label 138 | 139 | class ImageReader(object): 140 | '''Generic ImageReader which reads images and corresponding segmentation 141 | masks from the disk, and enqueues them into a TensorFlow queue. 142 | ''' 143 | 144 | def __init__(self, data_dir, data_list, input_size, 145 | random_scale, random_mirror, ignore_label, img_mean, coord): 146 | '''Initialise an ImageReader. 147 | 148 | Args: 149 | data_dir: path to the directory with images and masks. 150 | data_list: path to the file with lines of the form '/path/to/image /path/to/mask'. 151 | input_size: a tuple with (height, width) values, to which all the images will be resized. 
152 | random_scale: whether to randomly scale the images prior to random crop. 153 | random_mirror: whether to randomly mirror the images prior to random crop. 154 | ignore_label: index of label to ignore during the training. 155 | img_mean: vector of mean colour values. 156 | coord: TensorFlow queue coordinator. 157 | ''' 158 | self.data_dir = data_dir 159 | self.data_list = data_list 160 | self.input_size = input_size 161 | self.coord = coord 162 | 163 | self.image_list, self.label_list = read_labeled_image_list(self.data_dir, self.data_list) 164 | self.images = tf.convert_to_tensor(self.image_list, dtype=tf.string) 165 | self.labels = tf.convert_to_tensor(self.label_list, dtype=tf.string) 166 | self.queue = tf.train.slice_input_producer([self.images, self.labels], 167 | shuffle=input_size is not None) # not shuffling if it is val 168 | self.image, self.label = read_images_from_disk(self.queue, self.input_size, random_scale, random_mirror, ignore_label, img_mean) 169 | 170 | def dequeue(self, num_elements): 171 | '''Pack images and labels into a batch. 172 | 173 | Args: 174 | num_elements: the batch size. 
175 | 176 | Returns: 177 | Two tensors of size (batch_size, h, w, {3, 1}) for images and masks.''' 178 | image_batch, label_batch = tf.train.batch([self.image, self.label], 179 | num_elements) 180 | return image_batch, label_batch 181 | -------------------------------------------------------------------------------- /utils/label_utils.py: -------------------------------------------------------------------------------- 1 | from PIL import Image 2 | import numpy as np 3 | import tensorflow as tf 4 | 5 | # colour map 6 | label_colours = [(0,0,0) 7 | # 0=background 8 | ,(128,0,0),(0,128,0),(128,128,0),(0,0,128),(128,0,128) 9 | # 1=aeroplane, 2=bicycle, 3=bird, 4=boat, 5=bottle 10 | ,(0,128,128),(128,128,128),(64,0,0),(192,0,0),(64,128,0) 11 | # 6=bus, 7=car, 8=cat, 9=chair, 10=cow 12 | ,(192,128,0),(64,0,128),(192,0,128),(64,128,128),(192,128,128) 13 | # 11=diningtable, 12=dog, 13=horse, 14=motorbike, 15=person 14 | ,(0,64,0),(128,64,0),(0,192,0),(128,192,0),(0,64,128)] 15 | # 16=potted plant, 17=sheep, 18=sofa, 19=train, 20=tv/monitor 16 | 17 | def decode_labels(mask, num_images=1, num_classes=21): 18 | """Decode batch of segmentation masks. 19 | 20 | Args: 21 | mask: result of inference after taking argmax. 22 | num_images: number of images to decode from the batch. 23 | num_classes: number of classes to predict (including background). 24 | 25 | Returns: 26 | A batch with num_images RGB images of the same size as the input. 27 | """ 28 | n, h, w, c = mask.shape 29 | assert(n >= num_images), 'Batch size %d should be greater than or equal to the number of images to save, %d.' % (n, num_images) 30 | outputs = np.zeros((num_images, h, w, 3), dtype=np.uint8) 31 | for i in range(num_images): 32 | img = Image.new('RGB', (len(mask[i, 0]), len(mask[i]))) # Size is given as a (width, height)-tuple.
33 | pixels = img.load() 34 | for j_, j in enumerate(mask[i, :, :, 0]): 35 | for k_, k in enumerate(j): 36 | if k < num_classes: 37 | pixels[k_,j_] = label_colours[k] 38 | outputs[i] = np.array(img) 39 | return outputs 40 | 41 | def prepare_label(input_batch, new_size, num_classes, one_hot=True): 42 | """Resize masks and perform one-hot encoding. 43 | 44 | Args: 45 | input_batch: input tensor of shape [batch_size H W 1]. 46 | new_size: a tensor with new height and width. 47 | num_classes: number of classes to predict (including background). 48 | one_hot: whether to perform one-hot encoding. 49 | 50 | Returns: 51 | Outputs a tensor of shape [batch_size h w num_classes] 52 | with last dimension comprised of 0's and 1's only. 53 | """ 54 | with tf.name_scope('label_encode'): 55 | input_batch = tf.image.resize_nearest_neighbor(input_batch, new_size) # as labels are integer numbers, need to use NN interp. 56 | input_batch = tf.squeeze(input_batch, squeeze_dims=[3]) # reducing the channel dimension. 57 | if one_hot: 58 | input_batch = tf.one_hot(input_batch, depth=num_classes) 59 | return input_batch 60 | 61 | def inv_preprocess(imgs, num_images, img_mean): 62 | """Inverse preprocessing of the batch of images. 63 | Add the mean vector and convert from BGR to RGB. 64 | 65 | Args: 66 | imgs: batch of input images. 67 | num_images: number of images to apply the inverse transformations on. 68 | img_mean: vector of mean colour values. 69 | 70 | Returns: 71 | A batch of num_images images with the same spatial dimensions as the input. 72 | """ 73 | n, h, w, c = imgs.shape 74 | assert(n >= num_images), 'Batch size %d should be greater than or equal to the number of images to save, %d.'
% (n, num_images) 75 | outputs = np.zeros((num_images, h, w, c), dtype=np.uint8) 76 | for i in range(num_images): 77 | outputs[i] = (imgs[i] + img_mean)[:, :, ::-1].astype(np.uint8) 78 | return outputs 79 | -------------------------------------------------------------------------------- /utils/write_to_log.py: -------------------------------------------------------------------------------- 1 | def write_log(message, filename): 2 | with open(filename, 'a') as f: 3 | f.write(message + "\n") 4 | --------------------------------------------------------------------------------
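A note on the ignore-label trick used in `random_crop_and_pad_image_and_labels` above: the mask is shifted by `-ignore_label` before zero-padding so that, once the offset is added back, every padded border pixel decodes to `ignore_label` and is skipped by the loss. The sketch below illustrates the same idea in plain NumPy, outside of TensorFlow; the helper name `pad_label_with_ignore` is hypothetical and not part of this repository.

```python
import numpy as np

IGNORE_LABEL = 255  # same ignore index the reader uses by default

def pad_label_with_ignore(label, crop_h, crop_w, ignore_label=IGNORE_LABEL):
    """Shift/pad/unshift trick: subtracting ignore_label before zero-padding
    makes every padded pixel equal ignore_label after the offset is added back,
    while the original pixels keep their class ids."""
    h, w = label.shape
    shifted = label.astype(np.int32) - ignore_label
    # tf.image.pad_to_bounding_box pads with zeros; emulate that here
    padded = np.zeros((max(crop_h, h), max(crop_w, w)), dtype=np.int32)
    padded[:h, :w] = shifted
    return (padded + ignore_label).astype(np.uint8)

label = np.array([[1, 2], [3, 4]], dtype=np.uint8)
out = pad_label_with_ignore(label, 3, 3)
# original pixels keep their class ids; the new border is marked ignore (255)
```

With the 2x2 mask above padded to 3x3, `out[0, 0]` stays `1` while the added row and column are all `255`, which is exactly why the TensorFlow version can pad images with zeros and masks with the ignore label in a single concatenated tensor.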