├── .gitignore
├── LICENSE
├── README.md
├── VOCdevkit
│   └── VOC2007
│       ├── ImageSets
│       │   └── Segmentation
│       │       └── README.md
│       ├── JPEGImages
│       │   └── README.md
│       └── SegmentationClass
│           └── README.md
├── datasets
│   ├── JPEGImages
│   │   └── 1.jpg
│   ├── SegmentationClass
│   │   └── 1.png
│   └── before
│       ├── 1.jpg
│       └── 1.json
├── deeplab.py
├── get_miou.py
├── img
│   └── street.jpg
├── json_to_dataset.py
├── logs
│   └── README.md
├── model_data
│   ├── README.md
│   └── deeplabv3_mobilenetv2.h5
├── nets
│   ├── Xception.py
│   ├── __init__.py
│   ├── deeplab.py
│   ├── deeplab_training.py
│   └── mobilenet.py
├── predict.py
├── requirements.txt
├── summary.py
├── train.py
├── utils
│   ├── __init__.py
│   ├── callbacks.py
│   ├── dataloader.py
│   ├── utils.py
│   └── utils_metrics.py
├── voc_annotation.py
└── 常见问题汇总.md

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # ignore map, miou, datasets
2 | map_out/
3 | miou_out/
4 | VOCdevkit/
5 | datasets/
6 | Medical_Datasets/
7 | lfw/
8 | logs/
9 | model_data/
10 | .temp_miou_out/
11 | 
12 | # Byte-compiled / optimized / DLL files
13 | __pycache__/
14 | *.py[cod]
15 | *$py.class
16 | 
17 | # C extensions
18 | *.so
19 | 
20 | # Distribution / packaging
21 | .Python
22 | build/
23 | develop-eggs/
24 | dist/
25 | downloads/
26 | eggs/
27 | .eggs/
28 | lib/
29 | lib64/
30 | parts/
31 | sdist/
32 | var/
33 | wheels/
34 | pip-wheel-metadata/
35 | share/python-wheels/
36 | *.egg-info/
37 | .installed.cfg
38 | *.egg
39 | MANIFEST
40 | 
41 | # PyInstaller
42 | # Usually these files are written by a python script from a template
43 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
44 | *.manifest
45 | *.spec
46 | 
47 | # Installer logs
48 | pip-log.txt
49 | pip-delete-this-directory.txt
50 | 
51 | # Unit test / coverage reports
52 | htmlcov/
53 | .tox/
54 | .nox/
55 | .coverage
56 | .coverage.*
57 | .cache
58 | nosetests.xml
59 | coverage.xml
60 | *.cover
61 | *.py,cover
62 | .hypothesis/
63 | .pytest_cache/
64 | 
65 | # Translations
66 | *.mo
67 | *.pot
68 | 
69 | # Django stuff:
70 | *.log
71 | local_settings.py
72 | db.sqlite3
73 | db.sqlite3-journal
74 | 
75 | # Flask stuff:
76 | instance/
77 | .webassets-cache
78 | 
79 | # Scrapy stuff:
80 | .scrapy
81 | 
82 | # Sphinx documentation
83 | docs/_build/
84 | 
85 | # PyBuilder
86 | target/
87 | 
88 | # Jupyter Notebook
89 | .ipynb_checkpoints
90 | 
91 | # IPython
92 | profile_default/
93 | ipython_config.py
94 | 
95 | # pyenv
96 | .python-version
97 | 
98 | # pipenv
99 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
100 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
101 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
102 | # install all needed dependencies.
103 | #Pipfile.lock
104 | 
105 | # PEP 582; used by e.g.
github.com/David-OConnor/pyflow
106 | __pypackages__/
107 | 
108 | # Celery stuff
109 | celerybeat-schedule
110 | celerybeat.pid
111 | 
112 | # SageMath parsed files
113 | *.sage.py
114 | 
115 | # Environments
116 | .env
117 | .venv
118 | env/
119 | venv/
120 | ENV/
121 | env.bak/
122 | venv.bak/
123 | 
124 | # Spyder project settings
125 | .spyderproject
126 | .spyproject
127 | 
128 | # Rope project settings
129 | .ropeproject
130 | 
131 | # mkdocs documentation
132 | /site
133 | 
134 | # mypy
135 | .mypy_cache/
136 | .dmypy.json
137 | dmypy.json
138 | 
139 | # Pyre type checker
140 | .pyre/
141 | 
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 | 
3 | Copyright (c) 2021 Bubbliiiing
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## DeepLabv3+: A Keras implementation of the Encoder-Decoder with Atrous Separable Convolution semantic segmentation model
2 | ---
3 | 
4 | ### Contents
5 | 1. [Top News](#top-news)
6 | 2. [Related Repositories](#related-repositories)
7 | 3. [Performance](#performance)
8 | 4. [Required Environment](#required-environment)
9 | 5. [Downloads](#downloads)
10 | 6. [Training Steps](#training-steps)
11 | 7. [Prediction Steps](#prediction-steps)
12 | 8. [Evaluation Steps (mIoU)](#evaluation-steps)
13 | 9. [Reference](#reference)
14 | 
15 | ## Top News
16 | **`2022-04`**: **Added support for multi-GPU training.**
17 | 
18 | **`2022-03`**: **Major update: added step and cosine learning-rate schedules, a choice of Adam or SGD optimizers, and automatic scaling of the learning rate with batch_size.**
19 | The original repository used in the BiliBili videos is at: https://github.com/bubbliiiing/deeplabv3-plus-keras/tree/bilibili
20 | 
21 | **`2020-08`**: **Created the repository: multiple backbones, mIoU evaluation, annotation-data processing, extensive comments, and more.**
22 | 
23 | ## Related Repositories
24 | | Model | Path |
25 | | :----- | :----- |
26 | | Unet | https://github.com/bubbliiiing/unet-keras |
27 | | PSPnet | https://github.com/bubbliiiing/pspnet-keras |
28 | | deeplabv3+ | https://github.com/bubbliiiing/deeplabv3-plus-keras |
29 | 
30 | ### Performance
31 | | Training dataset | Weight file | Test dataset | Input size | mIoU |
32 | | :-----: | :-----: | :------: | :------: | :------: |
33 | | VOC12+SBD | [deeplabv3_mobilenetv2.h5](https://github.com/bubbliiiing/deeplabv3-plus-keras/releases/download/v1.0/deeplabv3_mobilenetv2.h5) | VOC-Val12 | 512x512 | 72.50 |
34 | | VOC12+SBD | [deeplabv3_xception.h5](https://github.com/bubbliiiing/deeplabv3-plus-keras/releases/download/v1.0/deeplabv3_xception.h5) | VOC-Val12 | 512x512 | 87.10 |
35 | 
36 | ### Required Environment
37 | tensorflow==1.13.2
38 | keras==2.1.5
39 | 
40 | ### Downloads
41 | The deeplabv3_mobilenetv2.h5 and deeplabv3_xception.h5 weights needed for training can be downloaded from Baidu Netdisk.
42 | Link: https://pan.baidu.com/s/1_NzxXQj4drMXaPCVnmr23w Extraction code: a5jm
43 | 
44 | The extended VOC dataset is also available on Baidu Netdisk:
45 | Link: https://pan.baidu.com/s/1vkk3lMheUm6IjTXznlg7Ng Extraction code: 44mk
46 | 
47 | ### Training Steps
48 | #### a. Training on the VOC dataset
49 | 
1. Place the VOC dataset I provide into VOCdevkit (there is no need to run voc_annotation.py).
50 | 2. Set the parameters in train.py; the defaults already match the VOC dataset, so you only need to change backbone and model_path.
51 | 3. Run train.py to start training.
52 | 
53 | #### b. Training on your own dataset
54 | 1. This project trains on data in VOC format.
55 | 2. Before training, put the label files into the SegmentationClass folder under VOCdevkit/VOC2007.
56 | 3. Before training, put the image files into the JPEGImages folder under VOCdevkit/VOC2007.
57 | 4. Before training, run voc_annotation.py to generate the corresponding txt files.
58 | 5. In train.py, choose the backbone and the downsampling factor you want to use. The provided backbones are mobilenet and xception, and the downsampling factor can be 8 or 16. Note that the pretrained weights must match the chosen backbone.
59 | 6. Remember to set num_classes in train.py to the number of classes + 1.
60 | 7. Run train.py to start training.
61 | 
62 | ### Prediction Steps
63 | #### a. Using pretrained weights
64 | 1. After downloading and unzipping the repository, you can run predict.py directly to predict with the mobilenet backbone. To predict with the xception backbone instead, download deeplabv3_xception.h5 from Baidu Netdisk, place it in model_data, change backbone and model_path in deeplab.py, and then run predict.py and enter:
65 | ```python
66 | img/street.jpg
67 | ```
68 | to complete the prediction.
69 | 2. By changing the settings in predict.py you can also run FPS tests, predict an entire folder, and detect on video.
70 | 
71 | #### b. Using your own trained weights
72 | 1. Train following the training steps above.
73 | 2. In deeplab.py, change model_path, num_classes, and backbone in the section below so that they match your trained model; **model_path points to the weight file under the logs folder, num_classes is the number of classes to predict + 1, and backbone is the backbone feature-extraction network used.**
74 | ```python
75 | _defaults = {
76 |     #----------------------------------------#
77 |     #   model_path points to the weight file
78 |     #----------------------------------------#
79 |     "model_path"        : 'model_data/deeplabv3_mobilenetv2.h5',
80 |     #----------------------------------------#
81 |     #   number of classes to distinguish + 1
82 |     #----------------------------------------#
83 |     "num_classes"       : 21,
84 |     #----------------------------------------#
85 |     #   backbone network to use: mobilenet, xception
86 |     #----------------------------------------#
87 |     "backbone"          : "mobilenet",
88 |     #----------------------------------------#
89 |     #   input image size
90 |     #----------------------------------------#
91 |     "input_shape"       : [512, 512],
92 |     #----------------------------------------#
93 |     #   downsampling factor, usually 8 or 16
94 |     #   keep it the same as used in training
95 |     #----------------------------------------#
96 |     "downsample_factor" : 16,
97 |     #--------------------------------#
98 |     #   blend controls whether the prediction
99 |     #   is blended with the original image
100 | 
#--------------------------------#
101 |     "blend"             : True,
102 | }
103 | ```
104 | 3. Run predict.py and enter
105 | ```python
106 | img/street.jpg
107 | ```
108 | to complete the prediction.
109 | 4. By changing the settings in predict.py you can also run FPS tests, predict an entire folder, and detect on video.
110 | 
111 | ### Evaluation Steps
112 | 1. Set num_classes in get_miou.py to the number of predicted classes + 1.
113 | 2. Set name_classes in get_miou.py to the classes you want to distinguish.
114 | 3. Run get_miou.py to obtain the mIoU.
115 | 
116 | ### Reference
117 | https://github.com/ggyyzm/pytorch_segmentation
118 | https://github.com/bonlime/keras-deeplab-v3-plus
--------------------------------------------------------------------------------
/VOCdevkit/VOC2007/ImageSets/Segmentation/README.md:
--------------------------------------------------------------------------------
1 | This folder holds the txt files that list the image file names.
2 | 
3 | 
--------------------------------------------------------------------------------
/VOCdevkit/VOC2007/JPEGImages/README.md:
--------------------------------------------------------------------------------
1 | This folder holds the image files used for training.
2 | 
--------------------------------------------------------------------------------
/VOCdevkit/VOC2007/SegmentationClass/README.md:
--------------------------------------------------------------------------------
1 | This folder holds the label files (segmentation masks) used for training.
2 | 
--------------------------------------------------------------------------------
/datasets/JPEGImages/1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bubbliiiing/deeplabv3-plus-keras/ba440f790bbaac0f947c193eae4514b4eac83235/datasets/JPEGImages/1.jpg
--------------------------------------------------------------------------------
/datasets/SegmentationClass/1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bubbliiiing/deeplabv3-plus-keras/ba440f790bbaac0f947c193eae4514b4eac83235/datasets/SegmentationClass/1.png
--------------------------------------------------------------------------------
/datasets/before/1.jpg:
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/bubbliiiing/deeplabv3-plus-keras/ba440f790bbaac0f947c193eae4514b4eac83235/datasets/before/1.jpg -------------------------------------------------------------------------------- /datasets/before/1.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": "3.16.7", 3 | "flags": {}, 4 | "shapes": [ 5 | { 6 | "label": "cat", 7 | "line_color": null, 8 | "fill_color": null, 9 | "points": [ 10 | [ 11 | 202.77358490566036, 12 | 626.0943396226414 13 | ], 14 | [ 15 | 178.24528301886792, 16 | 552.5094339622641 17 | ], 18 | [ 19 | 195.22641509433961, 20 | 444.9622641509434 21 | ], 22 | [ 23 | 177.30188679245282, 24 | 340.2452830188679 25 | ], 26 | [ 27 | 173.52830188679243, 28 | 201.56603773584905 29 | ], 30 | [ 31 | 211.2641509433962, 32 | 158.16981132075472 33 | ], 34 | [ 35 | 226.35849056603772, 36 | 87.41509433962264 37 | ], 38 | [ 39 | 208.43396226415092, 40 | 6.283018867924525 41 | ], 42 | [ 43 | 277.3018867924528, 44 | 57.226415094339615 45 | ], 46 | [ 47 | 416.92452830188677, 48 | 80.81132075471697 49 | ], 50 | [ 51 | 497.1132075471698, 52 | 64.77358490566037 53 | ], 54 | [ 55 | 578.2452830188679, 56 | 6.283018867924525 57 | ], 58 | [ 59 | 599.0, 60 | 35.52830188679245 61 | ], 62 | [ 63 | 589.566037735849, 64 | 96.84905660377359 65 | ], 66 | [ 67 | 592.3962264150944, 68 | 133.64150943396226 69 | ], 70 | [ 71 | 679.188679245283, 72 | 174.2075471698113 73 | ], 74 | [ 75 | 723.5283018867924, 76 | 165.71698113207546 77 | ], 78 | [ 79 | 726.3584905660377, 80 | 222.32075471698113 81 | ], 82 | [ 83 | 759.377358490566, 84 | 262.88679245283015 85 | ], 86 | [ 87 | 782.9622641509434, 88 | 350.62264150943395 89 | ], 90 | [ 91 | 766.9245283018868, 92 | 428.92452830188677 93 | ], 94 | [ 95 | 712.2075471698113, 96 | 465.71698113207543 97 | ], 98 | [ 99 | 695.2264150943396, 100 | 538.3584905660377 101 | ], 102 | [ 103 | 
657.4905660377358, 104 | 601.566037735849 105 | ], 106 | [ 107 | 606, 108 | 633 109 | ], 110 | [ 111 | 213, 112 | 633 113 | ] 114 | ], 115 | "shape_type": "polygon", 116 | "flags": {} 117 | } 118 | ], 119 | "lineColor": [ 120 | 0, 121 | 255, 122 | 0, 123 | 128 124 | ], 125 | "fillColor": [ 126 | 255, 127 | 0, 128 | 0, 129 | 128 130 | ], 131 | "imagePath": "1.jpg", 132 | "imageData": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAJ6A7YDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwCY06M0jcUJ96vlZLU96JOpp1NFLUmgUlLRRYBKDS0lADaKdTTQAxqYakNNIoAjfoayL/7prXfpWRqHQ1SA43U/vn61jsK2tT++ayQuTXqUPhOSpuQ7DTWT2rQWDIpHgrRVNSOQyWj5pjirskXJqCSPitoyM3Esaef3grr9PbgVxdmdsgrrdNbgVw4tHVh5aWOjQZjqCVOtTQv8tMnbg15yR1XK0d19nk5rdtilxECMGuTuOTVzSr5oJAjHivVwdVr3WceJpXXMjTv9NJBKCuR1GOeCRtynFejRus0OazryxhvcqVFd8qKlqjhVZrRnm3zSnmniGukvPDzW5JUcVnm1YdqyleOhtSs3coJb1ZjgHpUwjxU8UYxXFO7PYo1VEj8n5aiZa0JFwtU5BiuZXPVoVVJFKVapysV6Grs1UpK6qO5zYySsys07evFMMjY605hTgo/GvRVkj5ed+YYm6lbNTrHT/K4rOUjqpwdiiysahMZBrRaPFV5I6OY1dMgWQrV+C5461lyZFNSYqaUoKSM1PlZ00U+RU4esKC4960opsgVxyhynVGfMi5upQ1RBqdnmsTUsLJ709XxVQHFPVqloZcDU4NVZWp4alYRPuoLVDmgmgCbdTd+KiJpjSUJATl6YZKgaWomlppElozcVE89V95Y8VPFavMa0jC4rjAxc8VtadZncCRS2WldyK6O0sfL7V0Qp2MZzvsTWUGAOK0wuAKjii2gVKxq2zEjeoHqVjUbVmxortTDUrCoiOagsBSg0lFQMeDTwaYqk1
ajg3VrGDkZyko7iRgtVtVCrSxxKi46mnBa3jTsYSq3KE65zmuT1+by1YA9q7G9kWGInrxXnWu3PmStzQ3ysI6nOTuNx4qHd6U58ZzTK2VRmZLupjSHHWmEmjBpBzNDTTgfzpoFPC0ydRwJp4oVeKeI6keoscW8+9OksWYdKnsx+9ANbqQKwFXFkSRy8GkzzzhVGB616j4N8IWsFubmeJZJexbtWTZ2iLzgZrt9DvESDyScV1Uqlmc1SF0XZtKhlhKsikfSuS1GySzm44Brup54khLSyqox3NcLr2owTyERHIHeqqu61FTTT0Mi4uFVCKyZZ8mm3d2M1nefubFedUjzHfSlYuCfnrTvOqOOHzBmphb8iuX2TOtTQzc0hwtW4NNlmNWrOz6cV0NpagY4ranh11M54i2xSsNJWLBIret7NfSnpF2q/axYNbciRzObY+KxXHSmyWuO1a0MYxRLDTRPMzGWHaeasp0qdrb5qcIgBTC7Id1SIe1OMftSBRmgBXOBVC4kq3KeKzrg1z1pWR34SHMylK2T71CakbrTTXj1JXZ9DTjZEJFRPUzkCqsj04q5E5JaDXNMPTNRs3NRtP711Ql0OOSFkPNQE4pJJagDZNXpcuK0uy9bqGk5rXiXisKGXaa1YrtfLqK9N2ujz8RUuy6JNtNkkqhJdgng1Cbg+tc0IWepjHUtTXGGorJnuMNRXoJKxrY23pF6096Z3rzpBElFSCogeKXNZmpJSUmaaTQA40ZplFAD80wmkpDQAppppf50mKAIpDxWVf/cNasnQ1j6gflNWhWOR1L/WmqEa5bFX7/8A1hqrbx5kr0YaRMJLUuxQZHSle34PFaUEPy9KdLB8vSuZ1GmaKJzMsPzVBLBwa1pof3lPSxEn3jge3NdMahjKKRz0MREtdNp+QBTk0+BTxCzH/aOP0FaduFiHy28K/gW/mTWlRKS1MIzcXoWYG4p0nIpVuHH8EP8A36X/AAqZZi33ooz/AMBx/KsI0o3NXUZmSQ81WOYpM1uyeR/FAP8AgLEfzzVSeC0cfLMyN6MOPzFdEaNtiPavqWrS7YQZBp8N/tm+aoLO1aOMkYdPVTmoZyscuK74XSOOdrnTq0N3EBxWVe6cFJKrmoLS724Ga2onE0eDg1o4qRMZOLOHuovLnxToga3NUsBu3qKx1UqcYrjqUrHdSrXEaqkvWrTmqzcmuKVNXPTp13FFKWqMi1qSpVdofWqi+UyqV3Mzcc8001ckh21RlOK3jLmOCpGzuTxtzVheazY5cGrkcvNXKJ00JJomeOoXizU4bNKMVJ0GZNBWfLHtNbkwGKzLla0g9TlrwVirHNitK2n6c1kkYNWbcnIp1IpowpTaZ0EcuQKmDVnwkkVZDYrglHU9CMrlnNOBqsJKcJKz5TS5aBqQN+VVBMKXz6nlGXN3SmmSqvniozP701FiLhkqFpKrG496haYk4HNUoNkORZaXNLFG0zYA61Pa6VPMFlmxGjdC3GfwrpLHTLaLbtV5D6t8oraNOxm59jMstKLYJFdHaaUAB8taVrbKoG1FX6D/ABrSSLA5rZJIycmylDZBR0q4kIUU/IFNaSk2QKTTGNIZKYWqLjsBqNqVpKjZqkYGoTStJT4YjJU2vsO9hgUngVPHb5G406TZbLuYj8awNS1wj5IWxXRCiupzzrPobcs8FuPmYZqk+tQK3DiuRuNRc53Sbiaz3vH7CuhWRzttnptrfRTjO8VbF2o4BBry6y1WaGUDdXV6feNKQSabYkaOqzExNn0rzzU+ZGrtNSdpFNcdqC8muNv3jrUfdMUrzTdlSt1pK2uZWIttLt/OnkgUzcKLj5RCPpSgUtPWPvScrFeyYq4qWmBfSlUZasucpQJYn2yA1uWtwCOayktxwTVpFKYxVwqDnSSR0ENwqjrU51Uwj5TzXPrOwHWmtMTXVE4matxqksxyzk/jVKW4yuc1UaTAqpPcYGM1TVyE7Fa9uDuODUFrKWl5qGTMklS20
WJBUtaG1P3mdHanKitKCDcelZ9hGcDNbkAwoFZwpNvU2nJRLNrCARWskqxjFZycUpl9K6HGxyXubcEu+ti0izXM2UvzV0dpPgVlJD5kbCJtFOK5qoLnjrTvtAqLBzIkZRTDHUiyg0UrFERWoynerB6VDKQBQNFOfpWRcNzV+6lwKyJXya4cVPQ9rAxW4maY5oLVBJJXmJXZ67kkhsjVA3WkeSmM4xXbTSscU22yKaqb8VYlkGKpSSZNXy2COohJY1KkRxTYEBNaKRDFZupysyxFSysikyleagkuWXIBrQuFwOKypl5Nd11KF0eXTvKdmCXhzVxZ90ec1k4OeKuQKfLrl5OaR6EsPyq5HcS/PRTbiI76K6lDQ5zsX61Ef1qZ6hYV5MhRHCnU1RT6zsbCGjNKaSgYlLRiloJuJSU6koGIaQ0tBoGQS8CsTUD8prZn6Vg6ieDVx3EcveHMppbKPMlNuOZjV3Toi8oAGTXfqonP9o3baH5akmg+XpV60gSOP5zk+gqzIwEeERV98ZNc/s7vU1ctDljYSSyHCE/QVqW2kT4/493/AO/ZqyWmc481j+NW7eCvSoYZWuefVqu9iFdJcDJhb/vmmvYbf4cfhWuqFR6UySWVf4ifrzVVadka4ePMzEMGD0pVXFaDTIxw8K/VeKX7NFKuYn59G4rnpQdzpqpJGXMaz5Eya1bm3dD8wIql5RLV2qB5kpajrFPKbO4/hVmeaHzP38Ik9+hFPt7XAyetTywREfNitYohlFo0aPdandjqvQj/ABqSzv8AyuHpksCfwNgjvUEjRPhJ22v2lHf61oQbwKXMXHOay7nTf3hNFm8ttJtc5B6EdDWi0oKZNEo8yHTlyswpLFqpSWbCushiWUHvVee0GDxXn1Y8p6VKTkck8BqF0xW9PbY7VnzQe1efOqr2O2nhZPUxZuhqjNHmtma3PpVJoGHbFa0qljaWETWpkeWQ1TL0q40HXimGCuh17mccI47DBLgUCf3qJ0IqFvvVUdSakXEsPNmqsg3U5eak21srI5mnIoNFzUsMeDUzriolfa1Vucko8rNGHgVI0mBVVZOOKRpeDXLKGp0RqaDzPg0onqg8nNSREtVchpGpctmY1G1wanjtiw6U59PYjgUlFFtyKLXhpouy3AqyNGnmbaiMxJ4AHWuh03w5FYbXnVZ7j/nn1VPr6n2rZQic7nO5k2OlXF1D9olYQW+cbm6t/ujvW5bWQiCrbQ7fWVhlj/hW1b6Y0rb5cs3qa1oNPCjpSlboGvUyLTTiTlssfU1u21sEHSp1gVRQ0gUVjsUi0mFFK0wFUDce9RPce9TzFcpdeeoTcVQec+tRiU1LkVyml54prT1Q82mPMR3qbisXHuKjM/NZzTEnir9laPMQWHFOKchNpK7LFvGZiD2qa71CGwhx3qG/vV0+DCda426vZ76Uk7h9a6ox5TknNyLep61LPkBiAfSskTK3XOfep0sFY5aX8KlWyWPpWhkUGi5yuacbZ9vMWR61pfZztOF/EVEsk8R2jp6EVSRJjGIwy7q6nRZgQuPSsS9jST5iNrVc0EsJNo5xQC3Oh1BlER9a5DUGHNdFfybVO6uXvjnNcT+I74x90x5JcNSq2aq3DHzKntjmulxtEinG8rCyA4NVt5zWiy5FUZk5pU5JnQ6FtRyy81aWQbazScU9JHNaSppofOo6M0PM4pqTgSdapOzYqo0jBs1nCkjlrT7HVxTqQMVaEg21yMV6yd6uxX7NjJ4prDWdzJ121Y2zIM0hk5rPS7BHWl+0jPWuhI5ZFx24qv5Jmb2oSXzDWtY2471tCNzCc7Iz1sfapYrTZIM1uC3AHSqlwm01U4ovDykWrXAGK2LeLIBrCtpMEVswTjbU6WNJ3LhHapI4QX7VWD1Zt5l3Dms76mbvY0IbfGCBV6MlRUMLAip6qxyubJRKacs5FVy2KTzRSshxlI0Y7irkc4Nc8bgKetTQ3w9aykjrpttG60tU7ibrUAucr
1qtPNmuecrI6IrUr3UtUDk1PIcmkArx6s3JnpUZuC0IGBxVaQGtErUTxCoib/WGZLqRVWSQjNbE0HFZc9uxNXGepUa1yi8pY4FM8pupq/DZ+oq19kGOldNKalKxnUxXKtDOgyDWkrqFqGSIJVdpsdTXXKlG5xc8qg+4f8qzpDuNSzTqe9U2l5rnl7uiPVweH6tDwKuwgeWOKzFfLDFaEbYAFXR1OrEq0SZofM5oq3AmU4ors5TzTZcVEetTNUJ614kjCIi8U/saaKcDWZogooopMsKKKKQAaSlNJTsAUjUtIadgKtx0Nc9qPQ10Uykise7ty2eK3o0nJ6GcpJI5Xyi8x69a3dLtyBwu0HrzyaSGxzJ0rWtbZkr0fYu2pzwmpMuxIFUU50yKkSLipCtZSSidcafMVo4OatxrigDFTLgCtMPi1sZYjB8quBfiqsz1ZbmqM3BruupnHTk4Mh6tmh5ABxUTy7artPk1UaKQVsQ+pa+0uoxwy/3W5FKI4JuYjsk/ut0P0NVd+RSqpI5rZU9DznUuxJ7ie3k2shX60/eJYct1qZCGXZcLvTt6j6UyazMcG9Tuh9fT61lLRnRFNrUpmReeaqzKjDLCrCFfMxT5hGV4FHQhkFneLB+5c7oSfxH0rYUAxAbtyn7retc1cQ45FXdLvxGfInJMR/Q+tVGRLNm2kaB8E8Gr0uJI8g1TeMjjgkcj3pkVx+82tSqQUkbUKzgxLiHNUZLbJxit7ylkWm/Yu9eRVwTbPbp46PKc69j7VC2mBh0rpfsgz0p4tB6VP1VxH9dvscZNpbL0FUpLQjtXdy2YYHisq7sB1xWUoSjsddHFJ6M5B7bI6VWNgSeAa6drA54FPSw9qqnKVzWvKnY5M6c69qabdx2rsjYe1RyaciDLKCcdK64t9TynNXOHmQjtVNxzXUXth1wKwZrYiQ1tCaMKkOYrxuRxmpDyOKb5TCpF4FEjOMSExkmrVqnNR4qeDg1lKWhrBam/ZQCTrW9aaV55AVc1jaQGnmSJBkk12ME8UUXkwkH+/J/e/wDrVEe7NpS6ISKwisxi35kIw0o/kv8AjUsNgq9qsxFcVKZlArTmMbCxwqgFKzqtVZboCqE18PXis5TKUTQluAO9UJbn3qhNfZ70ixXE0ZlI2RDndKdo/D1/Cs7t7FpJbk7XPPWmGf3qoZLdPv3BbH/PNf6mnR31kv8Aywdz/tP/AIVFu5foiyH3VKtQLfofu28Y/WpPt+P+WUf5VVkS79h7nFV2Yk1aFzFIAHtlOfQkVfs9Pgk/eMCg9zmrjTuYSqcu5TstPaU7nBAqze6kmnw7ImGaffXpiXyrYqfbODXKXknmSEzNhvQ10KPKcspuTC6vnuZSXl/Cq7/MOtVZJdrDac+9Sw5Y5ZqpXJ2JIoGZshgKvwxbcb+fpRDZCUblPNThfLOGXHvWqRDZL5C5DRE/Q1Ex5OR+BFWF4+YU8hZRkcGrJOa1N8nGwCneHZcXbAVb1e2O3cMYrP0MeTekg9ahlI2tTOM7jz6VzV1yDWxqszPLmsaTkVwy+I9SkvdMO7XmoYZth61du0zmsqTKmu2n7yOWcvZyujTFyMdaryzgk1S8w0hkNNUkivrjZeGGFSxgCqCTdqmFwFpSi+gudS1JXHNVpRSPdZNRNNmiMGjGU0NNSRuaizk1at4CxGa3TMJMcsj4p6M+7mr0dluHSn/YSrdKI7mUpFmwXNdHajaAaw7ZfKxWvDL8oFdC0MFFyZphhtqjcdanjYtTmg3VEtTqilAzVJDYFaduxwKjFn81XYoMCo9myXUTBpDiiCUiTrUxi+Wqh+WSk4iUlY6OzmyorREny1gWc3FaBm/d0HLOOpNNPjvVU3Rz1qpPOag80kVnKR1UqehbkueOtFvPk9azpXOKns8mspO51QjY6GKX93UUs1RpkLVW4mwDXLX2NqUeZj2m5pyy1m+YWPFTpuxXjtNs7pU+VGgsualXkVno5Bq3HJxVyg4o5nJXH
yKCKqtCuKfJJioxNXOpWRaYmxVFQvKFFPllGOKz5nPY0o1XF3QcnMNnnBBrJuZRk4NW35BzWdOvWuxYpz3NcPTUHcrNMajaakkzmoNrE1qnc9qlViW7YlpK2oIHbB20zRtJLAPL37V1UVkiqMDpXdh46XZ5+NxSvaJlxIVXpRWjLEFaiuq6PO9qyZqhep3qB68OSCLGDrTxUe7mnhqzsbJj8UYpu6nDJpqm2HMOAoxmnBTSrExPAq/ZMOYZilwKsLaO3Y082bDtR7MpIqYFSLDmpfIOelTJFxW1KknuS0ynJbhu1V3slPatYRU0Q5bpXo0YpHLVjIxjYBTnFTR2+O1bP2QEcipY7Vaud3sZwXLuZaQEjpSNblea3EtB6U2a1+XpWfsObc2hinBnPtHg1IkeatyWrZ6U1Ldx2rP6oovQK+MclYqPHt5rMvJMCt6W3bb92sa8t29K6acWjkjJGO5aUYFENm5rRt7KtWCyAHSu2CMK8lLYxo7UqOacVEda88G0dKxblju21c5e6Y06fvE0cYkNWSDApK/l61Hp67h81XJoDsOOlcTTvc9GUoqNjKa2iuAzwLhh96P/AAqnJA3YVpLvhkyvBFW2VbmPKhVl7j1rRI42zl3jcHkVVlt1Lbvun2rbuY8HpxWbcKKLDLWmzPIBbSvyPuMf5VanOxgV7daxBOYyvqDxWw0y3VuJRwTw3sau4i/bXOa0opVZa5qJnhbDfnV+1uCzkZ6VLLWhsmPPShU9arRXGDyauxzKetTZD5mI0QIqlPEMYxxWkCKieHzD0qXSTNI12jJSx3HI5q5HYKe1XIrYg+1XY4MDpQqMUOWJlLqZTacoHT9Kz7ixwfu5rq2jGMVWktwxxjmpcEKNZnB31uxyAgH0Fc/cafhjlea9TfTEkH3Mms2Xw35rFttR7Fm/1lWPLJbNs/d4qnNFg8V6rP4bXySoX5q5m/8AClyso2IcGnKDFGsmzixwasW8bSzLEilmc4AFbV34buIZFUI2T1rVs/Dl9YW/nxRH7VN8qnH+qXufqayUG9zWVVJaFNpRpUf2GBlZyf30q9/9kH0q/Z3hHFXIPC0rBcoeOtW/+EdlhxsU89ql0ZN3COIjsNS+460yTUPekbR71R9wjNOHh+6lXldoHVm4AqfZSK9rDuU5bxmOBnJ6AVHKvkxB7qbys9Il5f8ALtWqbIwTFbOJsAYMzD5j9PSqMmkSySZYMTml7Kw/bIqfbsR4tYRGf+eh+Zz+Pb8KqSPM7HezMx6ljmuhg0RlXJBpBorsSxWn7NsXtoo5QFyWp8UbseldK2hkjGMVbg0iCJV3Uvq7H9aSMK1gckcGtW20x5ZskfnWgPs0BwoBNSNqUUK5O0VcaSRjLEN7EsdjBaLvlKn2rL1LVv8AllCPyqlqWshySWwPSsGW5luflQ7c9+9a2sc7be5NeyDBLXHz+maz1mnbhm8xP7rVYj0+FW3OxJ96tJBbRn7+PwoSC5Wgs3blAPoa0be2PcVJDEDyjsfwq6kRIGea2iiGxIYNvTip/vfJKvHrShT0xUgizg5q7E3IxG0PBG5D3qK4/dDKnirm2VBx8wqpcsGX0PpSY0Yd9c5U5qrow8y8arF+vXjimaEGFxL04FYmour/ACSGsRpM9au6tKfPbc3PpWOZea5Wrs7YytFCzEGsy4HNXWfNU5ua6aOhhX94q8UnGKGNMLYroucvKGcGkMlITUZNA27Dy1AOaaoJNW4YOKbZCux1vFk1s20PIFVbaCta0iywqCnoadnaAgcVZkseOlWbGDgVrC3G3pXRCJ5lapqcpLbGM5pY5MVs3luOcCsSZdpNOex04Z3NSzbOK2Iowwrn7CSuhtjwKUNS8RId5IpwAAqRqiatTlTEc8VmznBq+xrPuRUSNYMtWctaZkJWsG2kxJtrorWHzFFYMrluzOkVmbpSiJsVuixB7UNY8dKwe52ppI55oyzAVq2Vt0qwum/NnFX7W229RQkPm0Gi3+WqF
1bZBrf2gCqlwo5rlm9Rwm46mNBZjNWmtgFqVSqmlmuEVa5nTs7mzxEpIy5vkNOhmBHJrOv74bjg1Sjvsd66ZwTgRGEpPQ3J5RjrVI3AB61Qm1AY61ly6gSeK8mVO7aR6dDB1JdDoHuBjrUPmAmsP7YT3qaO89TWDoNHRLCSijTkcYqhKMnika6HrSxsCacYuJkotbkSQZ6ipreyDTA44Bq3EoOKvwQAc01WaYRi5OyNCxjwoq5JcJCvzGs8S+UtZ15cE55roWO6I645fzfEXLrWoUfFFcldzZk60VusRUsX/Z9M9FeKoWhJrV8jjpUZgpSpSueZGmrGaLbJqQWvFaUdtmrAsyR0rSGHJk1Ex0tuelWFt/atSLTWY1dTTSo6V1Qo2MHJGLDYl26Vq2+ljH3anS3ZD92tK2HqK6YUEctSvZ6FRdNVR0pGsAf4a2VXcelTJACelN4dExxbRzT6V/s0xtLI7V2AtFx0oNmpHSs3h+xvHGnDyWLjtUQtmXtXayaep7VVm08YPFR7JxOlYuEtzljHgc05AK0rqxPOKoG3ZTU3kmbxjSmieMjHNLJtxVflRUTTEHmtPaWM3hUSlQTUkcCntUKS+1TrOq0lVJlg00SGzUjpWfd6UrD7taaXa055VYV0xldHl1aTg7HLmx8k9KnRRitWWJW7VVa09K2jI5pJlGeLcKyJrDdJnFdRHYnvT304YzgVrK1iIXuc7a2e3irzwkDGeKstbeU3FKVAHIrFWOh3sYF0FQ8iqryeWQU61q3xTByK564nCtxSuSW5MXcTSgfOP9YvqPUVjXMW08Hg8irMV+IZQw7VPOkVxHvix5Uh4x/C1AznJVJPTNX9JVhNjOYn4YfyqJl2ybe/TFXreMIA2MH1poRce3ZW246VGv7pm7GrjszQpKOexqmSDIR60WGPExI4NWY7h2AqkuF5PSpo2Ixt5HrSsBrxTZxzV1ZVXqee1YX26KEAjlv5VWl1BpC3zfrSugtc6QXwz1FNbUyzbVNc5HcsBnvipopWWIStwT0FK9wsdIlwWPWrCt3z1rAhuMEc81eWV5cbeM96ANVG+b71WFIYdayx8pwTwKet9DFVXJsX2jUc0x2t+jAZrJm1lcOB2rOk1PJDc0rjsdK1nZS4kZBxzUU0qLgqoBrHm1Exxxxc7mG5vp2qq9+SNx55qWyje+1qG6DirNuVmwT3rmftJccVqW1x5UHmy/dBx16+1IDoFtosBpeF/nUcvkEbSBgdAKxX1nzpD6DoB2qu2pKG70NlJGw8FsoJ2is6c2652gVRl1MsSv61Rnu1253fWsyrmo14oXtVOa+29DWb9p8w/L0qndz9eOaaiFy5car1w1VDqbSjhqzHmwuDxmoS4QcHihom5oT6iY1IDc1jyX8srEb6q3Fz1qiZiz8HioKNiLypTlz09astf29um2JBn1xWRl3Xao4qzbae8p+8i/7xpWZV0TeYZ2zVyCPmn2+jy5/16H/dNacWmTRY5z+NaKLIbQyBWz3FaEcR/hXmmJDLGeUzUn7wHKlkNaozZKuP4lxUgj3jiljdiP3qiT8KnTYCAAV9qqxNyjJK0APWsq4nSXJArprixM0W5NpPoa565t1RiDEUP04qJRNYtGLdBpFIFJo0ZzNn5TU1yBG2M8VNp+2KCR2/OskaN6GBq1sVLOzVzkkoDda0dfv2lnIU/L6VgeZk5oUB+1LwfNNdc1HDmQ4FaMdmXXpTegRfMZLx1AVramsWA6VnSQ7T0pKRq4aFQ0Bc1L5RJqxFbkmtOYy5G2NtbQyEcVsR2eAOKs6bYj5citkWQCdKxlIrk5THjg2itKxt/mBp5tua0bO3xjitqRyVpJGlZxAAVfPC1Xgj2inTSYFdHQ8xq8ijeuADWBcMC1X7+frWPvzJXPOt0PZwmG0uX7MYNb9s3y1z9qcEVrQS4FaUKhWJwxrA01qrrNkU/fxXVdHmOFhzYqpcAYNPd8fSoZJAaiTSNKcW2Q2y/v8AN
dbp2CBXKRkBs1v6fcDArlkzrUbHTxoMVKIVIqlBPkCrqSishgbdaRlEYqYy1TuZeDTuCTbsQT3GzvWZPfdeaZfTkZrGlkZjiuCtiIxkenh8BOauXGvTnrVa4vCV61W571FKwC1l9aUmdkcuUdyjczNIxqmSfWrExzVcqaVSrc9DCYRRZG5NRVKymk21gmevGmkQnrRuIqbbTCKq4OCY0Sc1etn5qhjmr1nCWb2qZq6OWthU0bdqN1aiAAVTtIMAVbc7Vrzqjtoc9HDKLuQzycVi3s+M1dupwCa56+n3Zwa1w1FyY8RiVTKlxPl+tFZ0hbeaK9ZUtDynjdT6KWzZh0p6aaxPINdTHYKB0qUWiL2r1HTR4n1uRz9vpRHUVbFkiCtORokHWse+v1TODVxpmE8Q+pMqop7VZRUYdq4+fWWEuMmtPT9T3EZNJxsR7Vs3Wtt3QVJFZkdqkt7tGUc1a+0J6iqRBHHAF7VOqgVGZ17VG1xjvTV2S5KO5a3igyCs43NNe64q1TZzvFRNB5VAqrLOtZ8142Kz5b1s0nA1p1uY1HKvVeS1UjIqit971ZivQw5NZOJ206zWxn3UBXtWY45wa6KYCUcVlT2h3ZxXNOGp6dHFX3KyrxTHap9pUVWl5NJwTR1Rq6kLTEHinx3jZ5NM8vdTHhxSppoyxCjJGnHOJB1qwhFYSSsh61ajvSOtayqKO550cO5vQ3oytLLtx1rIW+4605b3J5qViot2udDy9pXLMidcCs+cevFTTX4VetYd7fnJwa6Yy0OGpTaZHex5U81zs0X7w96lur2bJGTWet2xbG7mgxsPeFSOlPsZFt5ij/6l+GHp71GGctzSHJbn86aGXb63VJlLjrwWx39fxpNvljk1dgIvtPKNy0YwPp2rGWcEtB91x0qmSjQjkzbzIrdtwFUjLhwW61Da3BF7GH45wwqtczNDcTQv0ViBU3HYvGUTfLnGanNwIIsKenBrnxdMDlSetWkkaUEngNzS5hlm4nGcgjmoI5f33XrxVYj5ueBU6xBYN/vnmp3LLqTkEtkY9KnaUykLnA96pxwvLIhY4B5GaexVQo681ViTQtnxIQe1XzqHlrhTwKx95+btn0pZgyw8AnPAp2Ea8d8zrlm4qlcXbSHjjn86qTu1tF06FQefanLvmmhQ5xgcUgJJyfs4ycZPJptg3mXcSH5kzlifQcmo9QlYYCn5RkVJp2ItPublztJwkf40AOvLkPcPKep5wP5U2Of92CRj61UyCS+c+lNkmc/KBQBr6exnbJJCDlmHYVLdXZmmXB2oBhV9BWff3psIhaLhXIDTY9ew/CswXZc53YB60Aaj3OJcI2fxqVZiehH1rL+QA4btTbdbiVv3Ks0a9T2H40rDNKeRpDhM+5FVPm3Ac/WrCypHw9yi+y8mnyXenoOS7H8qVh3GApFHlutU5pzK2AOPWnyXdrJx5U/PYNQFtSM75FX0IzTRNzNvGeIfKARWfkNktWpewNJ/qXDr7ViXEJjPLUpFIHjznnioI4kUktgfWrUMRxuY8e9V52DZ284qShAwEnPT2q5Ddqo6c+9Zkb8/MeKuxXKAY60Aa8N/LwEU/gK07e/ufRj9axLfUYoxjOPoK0YtSgI/1xX61SEzoYb4OoE0AFWFltpOjAfjXPi4mIBil3D60oml/iXH4VVyOU6QRA8qw/A1dhhJAyc1yqyk/dfBq5BqE9serfhTuLlOoSD2zWdqMEsOZYoiy9xUtjqwkHzofqK0JpElgJHQ1e4tmcDfsk658kI+elQXCiLTcY+Y1s3yWzXG0HoeQetc/r18u3yoe3esnobR10OE1GL98eaobDmtiaHcTVfyPm6UoyVglTaJdNt90gyK7Cz08NH0rG0W0LS9K7uxttsYyKiWo4aHP3Wmfuz8tcxqFiVY8V6ddW4MZrk9VtBk8Vi9NTppvmdji0gO7pWlbW2WHFOSD96a1rOAFhU+1ud0KFlcu6daYxxWr9lyOlPsYOBxW
xHb5HSqRxVNzAayOelWoICB0rY+x57VKtpjtW8JWPOrU+YzljIFVboHFb4tc9qimsNwPFaSqaGVOhZ3ZwV7ktjHNVFgfPCmu0l0NS2dtKmij+7XHKEmz26NaEI2OVhicYO2rayMo5rpRpAA+7UEuk8421dO8RVK0ZGZCSatc4q0un7BwKV4CF6V0qbOCUUzOeoD1q08bZNQmHPam3dEwaixinmtCyzmqKxHNb2nW+QKzNpWZp2wOBV9M0W8HA4q2kHHSoM7kYziq0+cVoNHtFUZyBWVWVkaUXqYt0vNZzqBmta5rIuTivAq+9O59Rhqq5bFSU1nXE1WZpsZrJuJhk81pCOp0XuP3ZNKSMVUWXBpTNWrO6hHQkdhUfmVC8uTUZc01E6SYyU0tUWTT41JNVYkngiaWQADJJrq7LTfKjAxz3qjoNj5kwYjpXZRWwC9KtQ9255uKxFpcqMsQ+WKoXk+0da1L5hGCBXLajcYzzxXluPNOxpGfu3KV5c9eax5ZdzGpJ5CzHmqx616dGPKjysW1IY0e45NFTBsCiunmPMPrOW9SMdax7vW0TODzXH3fiJmyFJrEudSllbk8V0vEK5ywwb6naTa15rYDcVTlYznrXNWtz8/WtyGcbRXoYeSkjzMXScGQXEPNMjvBARk4q1Nyuaw75TnNaVIaXMKM+h0UWvhQBu/WtG31jzf4q4SKIsRW9ZDywM1EYq1zWpKx2UN2ZB1qbzM96worxY1HPNSrqK561asedUcpGxmmsaox3obvVhZga0OazRBOTis2WTmtSbBU1j3KkEmspno4WSeg3d3qNbho2qMS44pRG0h4rHc71Fx1Nq0uRIKtSQ+YuRWZYwFTW7FEdtZSjcpSszDuItuazH610d5Bwa5+dNsma5ZPlPRoVL6CxAZpJ4+Klh9BTpcbfmI+grSM1a50tXMt15pm1quEpu6Z+tSZGPuL+VefiOaZ0UbR6GeMipVk4qyRn+BfyqJkA/gH4V5zfK9zvjNSQx1DCq0lrFJ14q0FT1K0yWJuSPmHqK9DD4iRy4jDwauZF1Y2+Dk5rnLqwIlLRHmuku2+U1g3F3t6ivUhLmPBrw5WVstGPm61WkucN6UXFyGAwetUZRu5PStUcxq2OqNZ3iN1Rjhh7U/XbMw3glh4yNwx3rGaUbRjt3NdG8h1Lw4J/8AltbHB+lPdCZiyHzxHOuQc4b1zU2owk6ju/hniV/ocVAGG1lHcbhj1rTH+kWlo552qUP507DMqK3JOe1SqAxEQ4wasLA0MYYcZOcH0pig7twXPbNSAwQF5xCuCxOOKv3dod0MA4BbGfYUzR7dxqhb1yRmuhFsZrxm2/JCoP1OKqMRNmPcwmKVNpzgfpVW0VrmdIlGQM5ramtywKZyVHLU7QtKEV7vbLA8inYVygbN413t1rVurdYFhDdhn9BV7VtPkjsomRc/vKoapL5sCP8A7I4/CgEYuqL57ADjLZ9qkEwhuNueVTH40y4k3soAycqMVBJI/wDpDkD0WpKIr5mkjh4zxmrF5G66ZBEONzFv6VHcT+YsQxgrHmpNUn8uODviJeD70CKkG6FiCcirulzLEbm+mUFIOEX+856fl1rMWXfnC5b+6PWrWqziGKPT4AP3HMh9WPU/h0oAzLpp7mQvKe/JzRFbyswhgDOx6k8AD3q7DZCVWubhtkKDj1Y+gqlqWqmRfs9uuyEDkL3Pue9LRDNFJrS3kCMy3UoGMD7g/wDiqpXd/O8u3c20dFHQfhWdDKFjJ/M0n2lnZgep6VIFkXYiO48sael2srZzxWbN/rVCH5u9RyEw453c8UAbpuRkqDgCiNj5RJBIJrDW5xIeRVg6gNuckgdh3qbgXpDKWBQlaTyGuztmK7v+eg71ntfNLFgfL7U6CeYn5d3Tqaq4yS7HkZU8VQXJDELhfWtL7K12A1w3A6Uy6X9yIkwsY9O9ZlIw5JGLEgYA70xZX6bqfPHKR93CCoBE1VYZZRz3arKy8feNU
48Z+arSbSaVhlyKV1GQ7fnV2HULhT99vxrPQEHgVZhiZupFMehsRaiZB86/iKuW96FPDce/FZKwbRyali3BuxFO7FZHU293ESMMFb36Grb3wVduSp/SuTbd0BxVu3lZmWJiTk9auMiXE0bhPMUzY3H1rlr2BnkY7cV3sVli1ArJvdN5PFacqZnzNbHn89uQelRR25LDiurl0zJ6VGNMwelc0o2eh1U+aSGaVAIsHFdTbyALWDFH5Jq8kxxSadjN6M0bmcba5rUXDE1bup2INYlxKSa5akmkduEjzSKwT5q0bIfvKqxrmr1mv7wVxxn7x71SlamdFYL0rchTgVk6evSt2FeBXox2PnKz1HCIU4RCpQBSE4qjnHRxDNWBbhhUUTCriMMVQmVHtF9Kj+ygGrzMKiJqkTqVzbqRUb2o9Kt5ozQx3M5rUelV5bTIrXODUZjBpknOvZZJ4qE2J9K6XyBmgWgNUpD5Tml087q2rG0244rQWzHpVqGALUtlIWGLAFWAuBTlwBTZG4qbisU7mTFY9zcVcvZetYsxrzsXXS0OuhRe5HcT1kXUuQatXDHBrMlbiuCUb6nq4RO5n3UuB1rMkbcau3OSapEc1pTVkevSs3YaOlBJpTTTWh6cVZDaOtKKUCmIFGauW8OSKjhiyRW5Y2RbBxUSYpSUVdm94fttsJb1roHG2Oq2nW/lQAVJePsiNdDdqZ85Vqc9Y53U5/mauR1GfLGtvUpyd1cneS7pcVw0Ic0rnZWnywImfJpyKKYozUy4H1ruaseXOVxdmegoqxAobNFRqYWR3jIajeI4zWqbbioXt8CvS+qNGX1uLM+Fip962LWfKjmstoynSliuNhxXTh1yaHHipKotDpPMDR1mXmKbHdZX71Vp5smu2Tujy4U2mXLKMMK0GIjjrHtLjYKsTXgMeM1g5WQ5RbYkt+4bAohvJieaz0zJIa1be34q6UW9WKdoluK9Zepq7FqWOrVmvBgVUk3KeK0lFow5VI6uK9DjrSThXXiubgvGU8mr6X/y4Jqb9xKDi7ocbcmatixstwHFZ1vMsjCujsGQR1zSVmenGo3HUsRWSp2q0sYApolHrS+ZUkkFxDuBrDurPnJro/vVBPbhhWVSnzI3oVOVnIyboQQBharPNmtu8thg8VhXERjb2ryK/PSdz6TDShUiCHmrcYyKpRsKtLLxW9CrGcdSa1BrYn2io5BxxTPPpQd1Y1KHNI5eacCpKwGaqyXJi5DY9CKuXUQKnHBrm7+68gsko+hrWFBLYiWKlswvdTiyRcLkH+JeGH+Nc/dRPIDLA4ng6kr1H1Hao7q6E+fQVSjmlgl3wSlX9u/1r0aS01PMqy5ncay+auBSx58soeTWrbLbakF2hYbvPK9Ff6ehqjf2rW05IJXP8JFbWMTNkcHcnP0rZ8N3Tfa2sZx+6uVKfjjisz7N5xMoxnvzSxvNb3COmNyYYU46CZNeRvYXu1g37t8EV0mg24ubSdeqo2R+X/1qq63Al8INRi+7MAWx2Peuh8Nae0DTIcbXXOfXH/66qKJbMnUIFxblVG0kjrUWmWbkSiRR0OOPSty4sA1syMvzL80bHsa0tG0xp44OASQQaIxE2QadoSwAOy5LDj8a2oLBIldnTAXk119rpMRht9yj92tZur2LICsSkhgc1T0JucSliZ57lwM7sAD610el6L+4TjsTU2kabm4kOMYK4rsIrZI4wFXpRsFziNYtWj05gpI2f4VwEkhWREfJBIBr1/W7ES2cg6EivKriHymkQr8wyah7lIylBmujhePM2qKztQkaKCZ/4Q5Ue5rp1tvIt4Zwv71iTisHXbTzlt7JG27AHlb1Y0WLMOeYllQFtxCg4o1y7c6yyr/ql2xgH2qW+CxanBbxdiu/j9KyJWe41OVmUsGc0WEmdLpUkEb3N6yhltACAehbtUdrCsqvfajuEByRjgyt6fT1q3DZJ/Y9tFPmFXP2m5x/d6Ko9/8AGqc8k+oRlseRa
+1aoR91h+7tx6ljxu9qaIrbRpG1LxBdGfUZF3W8MZ3Fc/xZ6fStIq25FSpzbEt/r11Y4iaBVvnUGG2h+5aKemR3f+VUYrO6EP23VblbVGOf3nzSN9F61mv4ieNSmn26W2TlpfvSMfdjWe80s7F5XZ3PUsck1M2maUlJI6pPE9par5NjbkN/z8TYLH6DoKadbec5d2Y+pNcXMzKeKfbTuzKq5JJ4FZOnzrU3VRRdmdta3D3U6xJzk9fSrF/e8fZ4RwOGb1rCN0LC1+zq3+kv98j+EelO05bnULtLe3RpJXPAFCVtEbw5Jas6HRtLN1OEXBY8sT91R6mumuNZt9K04W9kzZcEBuhb1Y+3p+dY4vbayt5LS2YGztsG8uM83D9kX2zWbpYk1fUZtTviRaxHc57eyiqbeyJcYyfM9judAMsUAu5Vyz8RKe/vXY2ULzLulb3JNef6ZrX229DYGTwsa9APQV6LZNvjEQ5x94+9bQWh51d6iTfMcAYUdBSRxYJY9qtSw4qu52jbUtGBA4ySaj8upu1AqGWQstNEf51OaTFICDZS4qbbS7RSAjFPwCKdtFAFAGbfWAmUtj6VzM+gkJKT0NdwetMkiWRduKBXPNl0cRyqNmcGoX00eZINvysMmvRWsE6qOaoy6SOoH6UWHc4qKxPlsq98V1GjWflQ4x71bg0kKeVFakFr5YGBTBskhTAq2gpFUAU8YqbgKBSEUZpCaGwDAoOBTSfemHpUXCxOW3RAelQtJg0inkenSmlaTYye3myTE33X4qMloJueoPOahPBqefMsSS9f4W+vaqWwCM5hYTwnaM9B29qnu1SaIXUI4P8ArFHY1UifaSGGVPBFWIGNtMAeYnGD6EetVERVVlNSROY+VOKddWohlyv3D0NRD0o2AmlAkXfGMY+8vp9Kasy3EYgf/W/8s29faohuQ7gSDSywiYedDww5ZR29xV3ArmdeYLgfJ2bHzJ/9asnWLLYMHDAjKsOjfStq6i+0RibHz9Gx61nGUxKYZl8yBuCp7e49DQB57NCsplI++nIpJP8ASdAyyAPHKRg+4rU1rRns5t8LeZbyHKyDv7H0NZ8m6LSTxy0owPpQM5yKAvMwZeOwpzWLfw8Mf1rWih80/dwasrYu2ML+VIDBg80EbhyPwIro4JodRhFte/LMRlbkDJ+jev16/WopbBmYZXr1rTsrDopH04poGZa6JPazlXGfRhyrD1FdPpNiGGGHIrR022xEIJl3p2z2rXi01YeUOV9aqwrjbawWLBWtGOPFJGuBUyigY9amUVGtTLVIQ9akWmCnrVkEymrEZqstTJTAsDpUgqJakqWSLSGimmkAyRcisDVbEyqSBXRHkVXnjDDpVxYmeZXliyMeKypocV6FqFgGBOK5S+sypPFW0CZzUkdQcitGZMGqT9ayaLEzSGlHSo3pMZXnbArNml61euDwayp+ppAVJpM1W3nNSS1CBk1JZYSWrEUvNVQOKniBzQJmzatWrGcisW1bGK2IGBqkBOGIpM7jinmmxD95Uy2Lg9TZ0y0XIOOa6aKNI46wbFgoFX5Lg7cCuGWh3Rk5bj7q5AziseVzI3Wp33OaQQ1mot7jlKy0KwjNAiq4IfaniA+laqJhzFMRVIIqtrbMe1SrZse1apE8xR8vFLt9q0009j2qdNMP92rUWLnRjCNvSplgY1vR6X6irMem+1WoMl1Uc+lqxqwli1dClgB2qdbMDtVchLqHPJp5ParKafW6tuPSpBCKfKiPaMxksParKWI44rSCCnhRTsTzMpJagdqmEAqfFLTFch8oUVNRRcR4w0uahllwOtMMnFVJ5eDSbsbJFa8n4PNc5eSFmPpWneT9ax5zk1yzlc6acbFGQcmqzirT1XcVmasqvxmoG61adalttOe4+diEiHVjVozkUre1mupNkKbjWp5Vvo65VDPeY44yFplxfpbxeRYjb6ydzWabu56ec/51pdGDTHXEeo6hMZXikY/TAFRDSpf+W
00EY/6aOKbJPPIMNM5HoTVcg1asLU1LXT9JMqxS3U9zM5wsVtF1Ppk1vXN/p3hKNU02yUaz/FLM3meQPT03fhWBpWpppUVxKkAa8cbYZSf9V6ke9VLa0n1TUUhVt0szfeY/mSatPoiLdzbt9Vvr6KTU9Znkns4T8kRO1ZZeygDj3NYl1fzX13JcTtudzk+3sKsaxdrI0VlbsTZ2o2xDsx/if6sf6VlAGk30HHQuxsPWrKsMVmqxFSLOQak3hNIszDdVm3VdOg+0Sj/SH/1K+nvTbKNXzPMcQpyff2qteXDXNwXbp0A9BVJ9CJu7uTw755c8szH8zXbxxf2HZx2Nru/tm8GJsdYVPYe571k+HrddI0//AISC6RWwSlnC38cv94j+6tT2888FrPq0zlrq5JWJj1yfvNQ9DSDuhbsG6ubbR7EZjjbBYf8ALVz1Y/0qXVtThtYk0qwlzbQn94w/jfuadEP7D0c3bj/Tb1SIPVF7t+NYmlWR1LUo4GO2PO6Vj/Co5J/Kjd2NJM9E8E2Rggiv3+/JxAD+rV6zpkAitx6mvH9F1qK41P8AcJ5dugCQoT91R0r1nSp/MgGT2rpVraHnVL31L8y5FZs33jWlK3y1myj5qxkJEYHFOFMzTgagYYo20+jmpYDMUUpFJSAKQt6Up71Gc80ihwp1R0uaAH0daaDSigBwAqQU1acBQJC5ozRSgVBQlJTsUfyoAbijHFLS0rCuMFK45pTQ1OwXIGFSwfMrRdmH69qQrTo/3cgPoaEMh24NWE2tGEJ57Uk4xKxHQ8imCmhFuIiWFoH4YdKqOpVsGp1Y/LMPvL1p88ayfMoxkZx7VoBSYcUR7kbK8VMY+KaFxSAcIkbLLwrDDL6e9Z9zbZYjHzD9a0UJU5FPuIhJGsyfQ+1Mk5lYynmxTIHtnP7yNv5j0NZup6D9yW3+e2Awjf4+9dTJbCTJ/OnQQhMxON0D9V/rQO5w8OmASAlefSr8VhtbIFdJNpSxS7hgqeQcdRQtmB2oGZQ01H6r1p0Wm+VL935a20gGOlSGNQKAKsVuI8VZSQoeP/10w0lFxljK9V/L0pytUK09adxWLANSLUS1KpqkBKvWpV+7UamnrzVogkBqVDUQqRDTuBZQ1LUK1KKTJYpphp1NNMBO9BFKKcaAKc8AYVzWq2Q5wK6xuQaz7q08wGtIsk80vbRgTxWRLHg13mqWG0HiuQvLdlJ4qZItMzulRPSyZBqB5cVmyiGfoax7hhk1fuZxg1i3M/JpMoZIaYlQNNzTkmwakZeHap41qrG4PNWo2oAvQg1pQHFZsLVeiNMRoZ4p0AJeoUfOK1NPg8xhSauCdjSs4mKjitNLJ5B0q9pun8Dit2KxUDpUexL9u0c4mmMe1WE0r2roxaKO1SLbr6VSpoj2zZgLpPtUq6UPSt8Qil2Cq5ELnZippg/u1Omnr6Vp7BS4p2RPMyglko7VOtso7VZxRTFch8lR2pQgFSmkqgECj0oxTqMUANxQBThS0ANxQBTqKkLjcUYpaKoYmKKWipsB4NK2BWVdTVeuG4NY1ySTxXPUkdkIlWY7qpSLk1eK5qJ4qxNtjOaM81GISxAUZJrVWyLDc3yr60rSpANsA5/vGnbuLmKQtIrZfNuCC3/POqV5dPcHA+WMdFFWpQzkliSTVcxc0w5e5nGP2qMx81omD2ppg9qZmzOaKmeVWkYPam+RVXIsZxirVt4lsdGluc/6Rc5ijHovc/0psNm086RKPmdgo/GrWrov2zyITuhtwIlI746n881onZXE0c8Y6Qx1o+R7U0wH0qLj5TP2UsUDSyBVHWrxtz6VOsX2eMsP9Yf0FVcTRTuZcRrbp91P1NWdD0s6nqKq/wAttGN88nZVHWofs5Y9ya6DUIn0HRV0pDtursCW8x1C/wAKf1P4VadtSJdipe3z+Idchgtk8uDiGCLsq/55NaVoqarrPkebtsLReW7bV7/if51j6fbva2N3fAENt8iI/wC03X9M1
enUaT4djgH/AB93vzyj0TsPxpvuVGVtBNU1M6hfvN0TpGvZVHQVeSRdL8ON8v8ApN/wD3WIH+prn9Oge81CC3H8bgfhWlrV8l7q7CAYtogIoh/sjiojojfnT0NbwwcXaYyTXuGhhmgUn0ryPwfHbeapON1ez6WyCBRHjpXVHSJxVpJyLrjiqUoq854qlLWUiUVGNIJKV6gY8VmxllW5p+aqK2DUokzSAsZoqMGnZ4oYDsU0x07NGaAIytJt5qbijFRYdyILzTsVIAKXAphcaBTqM0FqBgaM0zzKPMqRWH5opM0ZFUMXrRSg8UtKwCYoYcCnClNMCMUuKcQPxoxQK46RcxxN3xio/Lqx1h+hplAxijBqxGMxFB/D8w/rUdPjba1UiRhX9aTZU0g2sQOnUVFQAzbUsPyk5GQeCKbRQUEkIU5HKmo9tWAw6N90/pTGUqcGgkWPDL5Ln5T0PoaieLYxBpScU9pfPiz/ABoPzFBRCxAFVmkpXkqA8mkMkBqRahGakTikgJwKcBimrUy1aAVakUUgFPAqhD1FSrUYpwqiSUVIpqCpFNUIsxmph0qBKlFMkkFFMzS0gHU2nUw96AFNMaloK0IDKvoA6niuT1CwHPFd1LBkVkXtlkHitNyTzq6sQCeKwL6PZmu71GzMe7iuL1dMZFZyRaZyV5cEGsmWfJrQvkO7NY8qmoZdw8zJqWPJNQopq7AmTUgixBuq/EtMhhAFWlwBSbKJ4hiraNVIN6VftIGlI4oTCxatUZ2Fdlo1mcrkVmaZpvQ4rs9LswoHFaxRnJmxZQhYxxWko4qvEuBVgGmzMXFFFGKChaKPWipGBpKKBQAuKMUtFUAmOaMUtFAxKKWkoFYKKPWigAooooADSUtFACUUGigD58uOlZUi5Nas/NVDFk1wz3PShoU/Kp5RY+2TU/lYpDHxSQMpybn61XMPNaBi9qTyabQrmW0FMMFa3kU0wUJBzGUYPamG39q1/s9L9n9qoi5j/Z/am/Zq2fs9H2fnpQhFTTLPbJNcn/l3iLA/7XQfqaoG256V0kcG2xlH99lH5Zqt9m9qpkpmJ9l9qT7L7VufZfaj7L7UuUdzC+y4OcUjWhZstzW/9k9qBZ5PSgLlTQ9Li86W9uAPItRvI/vN2H51nXMUt3dS3Ex3O53EmuvvYFgsoLFBgj97KfVj0FZ6WQZgPU1T7EeZWttMFw1lYv8ALCuZ5T+p/QVn6lCb/UJZsfLnEY9FHAFdjcW6xQTyr1kxCv8AujrWULbHarYkUdG05LOC8v2+/GmyL/ebj+Wazf7NHO0V19xa+VpttF3ctKf5D+tVo7EE0N3Aq+HLOe3u89q9e0cy+WufSuC0+1aOQYrvtEilKjNbw2MJ7m5/BVWXvVtk2iqsgqJAiowqFhU7moWrNlIjpFbmnMKZUjJxJUgeqm+pFkoAs5pQaiDU4NQSSZpd1RE0oNIokDU7fUQpc0ASFqid6KTFADMn8aATmnYpMVNgJF+7SlwKjzTe9AEwenBuagB5p2eOtArFndgUx7jA4quxJpG6Ci47E6z1MJBiqCgiplzQmJo0FYeW1QmXmmxjKt9Kbtqhkm/Ipd1NA5pTSuBYLeZCrd14qI4606AgkoejDFIe4qhIM80VHmkaQCgZKzYXNMSYP+7J+hqtJP8AujVQyVNwsXJZu3cHFRRzmOUMO1Rs3nR7/wCIfe9x60zB5pjLNxH5c3ynKsNwPtTBT0/e2pHeM5H0pgpgPWngUxalFMB61MtRLUq0CY9RUgpgp4q0SSCnCmCn81QDqVTUdPU81QizHU61WRqlVqYicUhNIDRSELmkpaUUAIBT6KKAGkVDLCGFWKaelCYHLaxaDy2wK861m2OW4r2C8tBNGRXE61peM8VpuidjyG/hwTWLJDk13GqaZljxXPTWRjPSsWjRMy4rcH61digC1IqhKY83YVmykTeaFFNWZpDgVCqPOeOlbNhphOCRSsVcWytmcjNdRp1iBjimWVgFx
xXQ2cGMcVSRLZfsLQADiujtItorPs4ula8IwK2RkyylTDpUS9BUgoEPFLSClpFBS4pKKBhSiigUkAdKKM0UwQUUtFSMKSloqgG0U7FJSuITFFLRimAhopTSUAIaKDRUgeAumTURjrQaA+lNMB9K5eU7+YoGKm+VV8wH0pPI9qOUXMUDFSeXV7yD6UfZz6U+UnmKPlUeTmrwtzTvs59KfKTcoeTR5NaAtz6U77OfSnYLmb5NL5NaP2U+lOFsfSlYLlLyf9FA/wBo/wAhUfkVq/Zj5WMd6QWp9KuxPMjM8mjyPatYWZPalFifSiwcxleR7VasbZTPvlHyRje34dq0BYt/dNWBZeXZ7QPmkOT/ALoquUlyMKaMzTO7dWOaWC3zMgx3rYGnk/w1ZttNImUlehoURcyMi/h/fbB0TiqotjnpXRNpxZiSMkmnx6WSw+WnysXMjGu4C023HCKqj8qhW3YHgV076WZJWbb1NOXSf9mhQYc6M7TIGaQZFd/plvtgBx2rEsNLKyD5a6y3i8uED2reKsjKUrsgnGKz5a0pqoSjms5DRTYVGVqV6jNZM0I2xUZWpDim0gISKTPNSsKjK1FhkqHNSCol4FODVQiYUtRhqeDQA4UtIDQTQA6img0uaACmmlJpcCgBuKTFPxS4qbAM29aXFDHBpRRYVxNtKV5p4FAosMYFqVVxQKC1FgJF4B+lNLc0wNwaTNMSJC9IDk0yk3BTQFidW2kH0pbmUI1UZLj0pJZTJHGx7jafqOlK4WHSz4PHQ1AZ896jbLLz1Hao6ksm3ZU0goQfKacooAfETHJuH5VLJHjBH3T0qNRU8WCCh79PrVIQtr8swz0PBoePZIVPY0gqVm8xt3eqAjC1MKaOKkAFMBVqVRUQNTKaoQ4U4UgpwHpTRI4EinB6jopgSZzTl5NRipEpoCeMVOOKhRqkzVEskBpwNRinCmIeDmnCo804GkBJSimU4UALSGlpDSAjcZFYmqWolU8VuGs++OIzVxEzzzUrBQTxXI6lZgZIFdprVz5bGuUuGac4xSkOJyNxG27Ap9rprzMMg10sWj+cwJWtyx0QL/DWXKaXMGx0cgD5a3rbTio+7XQ22lcD5a04dKHHFWoEcxhW9keOK1rWzPpWtFpwH8NXI7MKOlPlFzFe2t8DpV9I+KckWBUoFMkQClFOoFAAKWilpAFLSUCgtC0UtJQAClpBSmlcAop2KKQAKKKaaAHU2iigB1NopKoANFFFADcUUtFAHk7WNMNj7V1TWPtTDY+1LkL5zlzY+1J9hPpXUfYfaj7B7UuQPaHMfYPal+we1dSLD2pRYe1PlJ5zlRYe1O/s4+ldWNPHpS/YB6Ucguc5Yaf7U4ab7V1QsB6U8WI9KfILnOUGne1PGme1dWLEelOFiOOKfIg5zll03jpT10z2rqRZD0p62i+lHKLnOYXS/apF0r2rpltB6VItsPSnyoXMc4mlc9KmbTMnp04roRABT/JFOyDmZzg0oelTR6cB2rd8kUoiFIm7MQaaM9KlTT1HatfYKXbVC1M0WK+lPFivpWgFoIqRleGBVPSrZ6UxRzT2+7QUipNWfKavzVSkFRItFGSoWOKsuM1AwrE0IicUmaUr+VAwKADFBWgtSZpAJTSaU1GaljH+ZTlk4qA8YpCx5ouFi15wpGlqpk0biaLisXElp2/mqiNinebzRcLFwGnbhVTzaUzUXCxbDChnAFUxN+VRyS5FFwsTefzU6S5rLDHNWUl2imM0Q3FG6qiz0omoFYsmSmF6rmamGTmlcLFvzcKaRZc1WZv3YH40zcelFx2LMs4FQNMWNQvmhakB5JqdRusn9UYH86rirVuMxyL6x5/KmgI8ZOfUU3ZUyLxj8aXZ81FgGKnFPVak2808LVAMAp6ikxg1KtMB0kYb5hxnrSBcVJH6HoeKaw2k0wALTgKRTUo5FNCG4pwHNKKdimACnA0gp1MkXNJmigimAtSLUQWplFAEimpVNRLzUqirE
SKafUYqRRTJF704Uop2KQCCn03FOFACikoozQAxqoXoypFX3rNvJMCmiTjtWsPNJrIh0b950rrJsO1SQ2wJ6VTVx3sZFppQGOK2bbTQv8NXobcDtV6OICjQlsrRWgHarSQgVKFpw6UXJsNVKeFpQKWobLExRijFLilcdhBQKXFFMQlLS4ooKEpRRg0CgA/GgUop1AwooptABRRRQAetFFOxUgNooNFCAQ0UGkqgFpKWk7VICUUUU7gZBhHNHkCrRFGytSCt5A9KBCKs4oxQIreT7U4Q+1T4oAqSbkPlil8qp8UYqrjIREKd5YqXFKBQBDsp20VJijFFxWGbaULT8UuMVIxu2lxS0tO4DcUuKXGadii4WGYoA60+jtUjGYoNPNFAxgpMZNLT0HNCEKsdNccVOBTHHFFxlCUVTk4q/KKpyCpZaKTDmoGWrT1A1ZsohK0wrUjU0j86kojximmnGm4pANpDTz1pMUhIhK0hGKmNMK0FERFNHWpGFNxUAITSZpSKQimAueKTfTRTsUANyQaCaSlApDEFLmngUbaYhAxBp/mcU3FLigBQaVeTSAVIgxzQAh6/SinKOadigCIrQq1LtpwWiwxoWrNmP3pHqrD9KjVantV/fpTQrjUHzZqUrjn8KaBzUrVQEYFLSGnr0oAMcUDiilHNADg1Pb5lz6VFinxnB56UxABTxQRg0DimA+nDtTRS0wHUoYUygUAP3CgGm4xTgcUwJFqQGoc04c0ySdSKmFQIOferKVSESKvFOApBT1FUSKtSCkFLUgLikxTs0UhjKbT6YRVCGP0rLvOa036VnTruamgZmrblj0q/Bb4FSww1aVBTIGpHUoWlAp1IVhuKdiilxQVYQU7FAFPpAMAp+KKKCgxTcU6igLDRQaKKAsIKWjvR/OgB1Np1NoAdRQKKkaA0006g0DG0CnU01Qgoop1ADTSU+m1ICUmKdSGgYlFO/OigClRT6bitDIbilxS0tADdv0pQKdQBQTYTFGKdTqdwsR4pafijFIoZilxTqKBWG4p1GKKRQCilFFAWEpfWlooCwhoFKaQUABpKDSc0APVc08LSIOlTAYpNghMVE/SpTUb9KSApzVSkHNXpjVCU0MpFWSomqV8VExrNlkRpjGldutQtnmpKEJoHNJtpelIApDS0e1ADetBWncCkJ4pDGEVGRTzz9KVqAITTSKl20m3mpsMiA5p2OKk200iiwiMDmlFOxQBQhiGheaUihRigQdzS0oHNKBzTsAKtS4wMUuMClFMBAKdikB5p9KwABTgBQKB1pgOAqaAfvk+tQg81NAf3yfWmAjDk0/8AhprffNHb8aBDRT1plPBoGLSiiigQ4e9LikpRTAl6qPUUmBSrjPsaRvlNVcBcUCmCTFO8ygB1FICKKAHilApgNPGaZJII6kVMUwZqRRTESqKlSolp4NUgJwacDiowcU5eaZLJRTxUY6U+gBwopuaM0rAONNNLmkNAMryZxVF2+ar0prMuGwaaAtxNmrIrNt5a0EbiqJJKUUgp1IAxS0CnUgCm06igBtOoooKCjFLRSGJRRRTAKM0UUAFNp3ajFAgooooGFBoFFSAUUYoxQA3NOoxRVAFFFFSAU00UUDCikJxRQBXNJin0VdzMZilp1NNABTqbTqAsFAooFABilxSUooAWikp1K4Ce1FFFMAoFAooAKWkpTQAGkooFK4CHrT0FMp6UwJAKdQKKRQhqKSpTUbUkSU5qz5a0ZaoygU2Uii/FQ81YkqA1kUMIpmKkNGKksiIwKbipcUmKAIsYpueaVqQCkAp6UwjNP7UoFIY0LxSYp9FADCPypNtPNIaAGYppWpBzSlaAIsYpxFOooAj20h4qTtSGgAFKPU0Cl7UAJnNPHSmCnigApVoxQKAJKYx5pQeKaaAFzzU9uczp9arVPbf69aSAex/eH60ucgVH3pw61QDh1p/8qZ3pRQA8U4UwVIKYBTgaSkoJuSinMNyg1AD+VPRucetAxMUtB
706gBcCnACm0veqAkCgUoOKi3GlElMCcGpFNVg1SA0ybFkNTlOarq1SK3arEWB9akU1AtTLQIlFOpgp2aBC06mA0uaAHGmGlzSE0AVLlsCsi4ky1bFwuVNYdxGfNpMaLNt1rVhHFZdola8Q4qiWSAU7FFLSELTTThR3oKCiiigAopaSgBc0UCnjipGM5pKlpjCi4xpoooqhBRRQKADFFFFABRRRigBRRQKKQCUUppKYBTacTTaACg0lFACUUUUDIqKdSUzMTFFOpO9FwsJRinUGgBtKKKUUAHWjFFLSuAlLRQe1MBKDSmkoAKKWloAbQKcaSgAxSUtJSASnrxTO9PFMB4NOpmaM0h3Anio3p7VGwzTEV35qrKtXHHWq0vWkykUJI6hZTVxwAKrtis2UiDZQVqTNNJqBkZqNqec00igoiNAFSbaUrgUgITzTgOKdgZoNAyJqQU5qbSAcOaRqUcCmHk0APUcUhoBooASg9KQmmA5oAcTSGjvSmgBBT8U2nigBppRS4oAoAUGlpDSigBQKDTwKDQAwCpoR+8z7GmLUsY5J9qYEa/KacOtBHNKKAFpRSCloEOFOBxTVNONMB2aTNIKWgABzThSUUAPPQGgCkHIxTM80ATDilzUOfenA0ASUopoIp4YVVwHLTgtMBBp6mqJJVWnio92actMROhqVahWnrVATg0tRA076UEkgpajBpwNADqDSZprScUwIZ2+WsmQZlq/cScdapqMtUsZbtY60FHFVrdeKuUyB1AopRQUJRRS0AApMUooxSAMUUUuKQwFOFIBS5oGLTWozSGgBKSnZpKYBikopcUCEopaSgAooopgFKaSigBTSUZooAaaSn000AxDSUUlAgopKKAuGKMU6kpkiUUUGgBKDSmkoAMUUoooASnUlKaQCUd6KKLgApaQUUwA0ZpKKVgFNFFKKYBjmkxS0UgG4pwpDTWOM0wH5pDJUPmYprPmgCUy0wyiomaoi1A0SSS1WeSlY1A8gqWNDXaqzNUjMKiODWbLGUuKdxTCwqQQmKSjOaXbQA0Uj04jFMNBQgpDgCjkGmtSGNJyaWgCpKQEeKQDmnE0HgUANPFHagHJpGbmgBjdaFoao1bnFAEoFOFIKDQAUvQUoprGgBRS96h381IpyKAJBS+tMU05aAHijOKDxTCaAJBUy/daqytmpt4EZoQBmnCow3FOBpgOFLmmUoNADhQSaTNBNADgc0/OKgzTw1AEmaQuKjJo60xWJFbFOY85pg4pQeMUhjhQQaYKduIpgOGactNDCnA0ASAU9TioC+KUS1VxFkGplaqiyU8NTuSWg9ODmqoaniSncRaDU8ScVVVjTwasCfzPelDZqLFKKBWJDJxVO4uCoqyWGKrT7WFJjM97hmNWLYEmqzKPM4q9a9qYM0oRgVYFRR1KKDMWlpPalFBQUUU4DFSAAUUuaM0hiEUDrQQTSgYFMYGkNKaQ0gEoopaYkJSU6koAQ0YopaYCGkp2KKAG0tFFA7BSGl7UlAWCjNNNApiCkp9NNAMaaQ0tJmgQYooooAWkxS0UyRvailoxSuAlJinUlFwCiiii4BmgUd6WgBDS0UelMBKX86KTtQAUlLjNFJgApaBS0kAhopDSUwFNRuOKkprcimBSlbbUP2ipLleKzXbDYoAuGemGU1CvNSVBQxpCBVeSXmpnGaheOpY0QtNSeeKikWq7HGai5Vi55lJvGaomU0sb85NFx2NBOKduqmbgUCcnvRcZbNIagElO80UAONFM3flSFqQxc80jNgUmajY0gFDc0rPioxwKYzc0BYlDUwnmmA0pNACs3FMXnmomahG4oGWg1OzUKGlY8UCJd2aYxzUQkxRvoGx5qZKrhqlSQdKBMmNKpqNzSb6AJWkppPFQeaM04NxQBIDTt3FRqRTiKAHBqlVuKiA4oFAFgNQaiWng0AG6nBqYaaDjNAE/WkNNDUuaYCg04Gos4oEopBYn5pB1qLzgKb5/NMLE/IPtT+1QGU
EUwSnpSAn704NUAJp+3NAWHluaep5qHBpy5pgWAakVqrgmnA00BYElPDVCDThVXEWFJp+cVXWSpN4qiCUNSlsVB5oFNaYYphYlkkxVSSbNMecetVmbJp3AnXlq07VelZkCktWzax8ChAy4nSpKRRxS4pki0UUUAKKXNJSVLGOpRSA0ZpDHUgozRQAUlOpKAEpaDSUxAf0ooooGJS0Ud6BBSGig0DCj+dFFAXEpKdSUAJRS0mKBBmmU/FM6jiqASkbpSeYDIyhgSvUDtSnp/WgBpooNFAh+KKKDTJEopaSkAUdqMUUDsFFLRQIQ0Cg0GgAooooAKKMUUAJRS0dqYAKWkozSADSGnUlADaQmnU00wK0/3axp+GrYmPymsO8kwaQ0TREVZAzWZBLk4rVh6UDEMeKhlWrZqBxUtAjMlXrVV461ZIs1Vli5rNotGY681C2avPFiq7x1BaK2fWnLNTXjOah5U1Fxl/fnHNKJOaoebipYpe5p3FYtmWkE3NVWmyaQycUXKsXDNQGBqj5mOaRZ8nrSuFi+xqPPNRedSBqLiLA5prcZpiyUM1MBhpAcUhNR5y3tQUWFkokkqAybagmmLcCi5JM04WmiYk1WwTip448c0FFjfxT42qvnmgy9hRcVi6ZhUfn5+lUnlOMZ5pynEdFx2LKPk1OG4qhGeanEuKLklkPg1KHDDINZzS5ojnKmlzDsaiyCnVlmc5zT4rxs0XCxpgigmqfn5ponOadwsXS3FRmTBqsZTmlzuxTuFifzttBuag2k0oipXEOM5PSo2kbNLs5qRUBqgGgt609M0/wAvFPRcUrAKoJqRRjrSqKkxTENA5pRmnheKeFoAQU7bShaeOlVYBgpcUpFJRYQZxS7xUbmoTJzQMtF8UGaqpmqNpqq5Ni40tQPNx1qo09RmbJouOxKXJNSR5JqFTmrcI5piZftRyK2IBis+1i6VqRDirREicdKU0wuq9TTt2aGIUU7FIKdSGJikIpw5paTGR4NKop9JQACiiigBaKQ0UAFGKKKACiikpghcUhpaSkACkpaDTASilNJQAUlFIaBBmjrQabQAcYpvTpS02qAaFAJPrTs8UUZpWASikNFMRJRiigUEgaQipOPxo4NBRFS0p69KKAEpKWkoJAUd6KKACg0tJQAUtApTQAlFHpRQAlH8qXFJQACig0maB2FpjU+o5KBFC5PBrn72Tk1vXXQ1zt6eamTLihltN+8ret5crXKxviStyzm+Uc0RCRrDFIwzTEYYp5aqEQsBVeRRVphUTrUFIoOlV3jrQZearulQ0UjNeKq7xVpPHVdo+ayaNEZ7RdajKkVfaOmGKpsMoYagEirhiFNMApDKjtxUKuwNW3gqNoMdqTQDFmqZJarshpobbQhlgzEGnCbPeq5OajdtozVXJsXnfjrTEfms9bnJwalWWncLElxLzUURxkmmyNk0L92puOxYRsmrOeKqRCrHNO4Ck4+tM5NSAZFAj5oERbCTUmPlqUR07bxRYCBBinYqULTgtMCALT/LqcR08RUWArCLIpwhqfy8U8CnYLkKx04R81LilAosFyPys0gBU1YxRtzVBciBJqUU3GDUmOlMkTZSgc09akC0ANFKF5p4FOXigQCnAUYpRQA6pFPFRinUwJBS1GGpSeKpEsU0zdil8zdVaVttSMWZuM1UaanSTDFUpWPagZM03vUTTn1qt+9J4FPWBz3qbgKZCaljUk0+O3NWooParQBDFWhBD0pIYqvQx1SJZbtlxiry4AqrH2qcZNUiGO2jOaeKYtSgZoYBmlAzShPWnUhgOKWkzRmgAoJooNABR6UUtACUUUUAFFFFAAe9AoxRigQUUUUDDFJ3paSgBO9IaU9aaapCYUUGikA00UGkoAbRQabVALSUUUkIM/SikopgSU9c9aaKePcUEi96aOpNL2NB7CkUIQc0hBp560dKLgRin4popT60wEIHpSAUH070vagAx7frSY5xT/emL70AGBnNL27Zp
wpnegBeKQ8dqfTeSKCRvWkpTRQUIaMZpaKAExTHqTFNYUAZl0OK568jyTXU3CZBrCu4DnpUSRSMHy8NmrttLgiop0IPSmK4Q0IbOggfIq0tYtpc5wK14jkVVyCUio2FSZpQKYIrMvtULrV5lqJ0qGUZzR1A8daLR1C8dZtFpme0eKjMdXXjqIpWZRUMVBjqyY6aRSKKxiqNoquEVGy0ElB4vSqkkVajiq0kdJoq5nZ2nmmSnip5kqjJJ1FIZCzANmpom3VQZj52O1X4BnFJDLOKkRaRFqVRiqFcEXBqcLSLUqimITaRSAGpgKXbmmgEWlxShadg0AAFLtzQKkXmqExmMVIpFDLTQMUCFemgmn0mOaVgCpAaYaeo4pgOHSlFIBSimApFOFItKBzQA5RzUnemgYpw5piFHSn8VHnFLmkAoNLmmE0x3xTAs5ozUAfIqMy4NFwJmamLcc4NQtL1qq8oBzRcC8745FRvNuHvVT7SCOtQG4+bFFwsXiQTnGPammMGo45M1YSgGNSH2qdIKegFW44807ARJBVhIPap0iqdIqdhEMcNWo4+lPSOpkjqibiIDUmcd6cF4qJgSaaJJUOcVYUVDDHirAoEKKWm5pCaQxTRSGjNAXFpabmjNAx1LSUZoAWikzRQAUUUUAHc0UUUCFpKKKADrSClozQMTrTTTqaaaAQ02nGmmmhCUlFBosAhpppxpppiEooNJQAUU3NFKwFiikpf4jTAXNFJS0AFBFFHagAFJ6UtBoASjnFKKBQSNANLmikNAC/zpMmlpDQULmkoooJCilpBQULRR2paAGmil7UdqAInXNZ9xBkHitJqry9KAOburfGeKx50Iaumu+hrBn+9WbKQ20O1veuituY65yD/AFororT/AFQqoiZYA5pwpDTqoQ3JzQV4qQUGpGVWjqJos1dPSom+7UsZSeKoGWr0lVTWTNEysRUZFTt96om61DGMIpjVIaa1AFdxVd161ZboaiegooTjg1hXUnlyV0M3Q1zOo/epMaI1YNJWlbnpWLF98VsQdKEDNBCKlqvH0FWEqmIcKmSoRUqUASCpQaiHWnigB4p9NXoaUVRIuKUHFIaB1oKH0lApR0pkiUooPSkHegB9LmmDrR3oAlWnDFMFKv3qAH0Dg0dqSgCUnimBsGlPSo260AS5pucYpq/dpDQBIWyKiduKUdKjegBqTYNMnlxzUP8AFTZ/uGgBrXfNVprj0qrL96o3qblFhJCT1qVV3GoIetW46YE8O4Cr8earw9avRdqpEMliQmtO2g4qtF1rShrSJDZKkPFSCOnpTqqxFwVKeFpy/dp1ILjcU3y+af3paoAHSgv70w1G9AE/miml/Sq6dasL92gBytmn0nelFJjENApaSgBaXNNNLSAdmjNNFFFhj80lIKDRYB2cUA000ooAWilpBSAKKWkpiENIaWkNADaYaU0hpoBDSGlNIaYCUhpe9IaYhpphNKaaKAEJoprdaKQH/9k=", 133 | "imageHeight": 634, 134 | "imageWidth": 950 135 | } -------------------------------------------------------------------------------- /deeplab.py: -------------------------------------------------------------------------------- 1 | import colorsys 2 | import copy 3 | import time 4 | 5 | import cv2 6 | import numpy as np 7 | from PIL import Image 8 | 9 | from nets.deeplab import Deeplabv3 10 | from utils.utils import cvtColor, preprocess_input, resize_image, show_config 11 | 12 | 13 | 
#-----------------------------------------------------------------------------------#
14 | # To use your own trained model for prediction, three parameters must be changed:
15 | # model_path, backbone and num_classes all need to be modified!
16 | # If a shape mismatch occurs, make sure model_path, backbone and num_classes match the values used during training
17 | #-----------------------------------------------------------------------------------#
18 | class DeeplabV3(object):
19 | _defaults = {
20 | #-------------------------------------------------------------------#
21 | # model_path points to a weight file in the logs folder
22 | # After training, the logs folder contains several weight files; pick one with a low validation loss.
23 | # A low validation loss does not guarantee a high miou; it only means the weights generalize well on the validation set.
24 | #-------------------------------------------------------------------#
25 | "model_path" : 'model_data/deeplabv3_mobilenetv2.h5',
26 | #----------------------------------------#
27 | # Number of classes to distinguish + 1
28 | #----------------------------------------#
29 | "num_classes" : 21,
30 | #----------------------------------------#
31 | # Backbone network to use:
32 | # mobilenet
33 | # xception
34 | #----------------------------------------#
35 | "backbone" : "mobilenet",
36 | #----------------------------------------#
37 | # Input image size
38 | #----------------------------------------#
39 | "input_shape" : [512, 512],
40 | #----------------------------------------#
41 | # Downsampling factor; 8 and 16 are the usual options
42 | # Use the same value as during training
43 | #----------------------------------------#
44 | "downsample_factor" : 16,
45 | #-------------------------------------------------#
46 | # mix_type controls how the detection result is visualized
47 | #
48 | # mix_type = 0 blends the original image with the generated segmentation map
49 | # mix_type = 1 keeps only the generated segmentation map
50 | # mix_type = 2 removes the background, keeping only the objects of the original image
51 | #-------------------------------------------------#
52 | "mix_type" : 0,
53 | }
54 |
55 | #---------------------------------------------------#
56 | # Initialize Deeplab
57 | #---------------------------------------------------#
58 | def __init__(self, **kwargs):
59 | self.__dict__.update(self._defaults)
60 | for name, value in kwargs.items():
61 | setattr(self, name, value)
62 | #---------------------------------------------------#
63 | # Assign a different color to each class
64 | 
#---------------------------------------------------#
65 | if self.num_classes <= 21:
66 | self.colors = [ (0, 0, 0), (128, 0, 0), (0, 128, 0), (128, 128, 0), (0, 0, 128), (128, 0, 128), (0, 128, 128),
67 | (128, 128, 128), (64, 0, 0), (192, 0, 0), (64, 128, 0), (192, 128, 0), (64, 0, 128), (192, 0, 128),
68 | (64, 128, 128), (192, 128, 128), (0, 64, 0), (128, 64, 0), (0, 192, 0), (128, 192, 0), (0, 64, 128),
69 | (128, 64, 12)]
70 | else:
71 | hsv_tuples = [(x / self.num_classes, 1., 1.) for x in range(self.num_classes)]
72 | self.colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
73 | self.colors = list(map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), self.colors))
74 | #---------------------------------------------------#
75 | # Build the model
76 | #---------------------------------------------------#
77 | self.generate()
78 |
79 | show_config(**self._defaults)
80 |
81 | #---------------------------------------------------#
82 | # Build the model and load the weights
83 | #---------------------------------------------------#
84 | def generate(self):
85 | #-------------------------------#
86 | # Load the model and weights
87 | #-------------------------------#
88 | self.model = Deeplabv3([self.input_shape[0], self.input_shape[1], 3], self.num_classes,
89 | backbone = self.backbone, downsample_factor = self.downsample_factor)
90 |
91 | self.model.load_weights(self.model_path)
92 | print('{} model loaded.'.format(self.model_path))
93 |
94 | #---------------------------------------------------#
95 | # Detect an image
96 | #---------------------------------------------------#
97 | def detect_image(self, image, count=False, name_classes=None):
98 | #---------------------------------------------------------#
99 | # Convert the image to RGB here to keep grayscale images from raising errors during prediction.
100 | # The code only supports prediction on RGB images; all other image types are converted to RGB
101 | #---------------------------------------------------------#
102 | image = cvtColor(image)
103 | #---------------------------------------------------#
104 | # Keep a copy of the input image for drawing later
105 | 
#---------------------------------------------------#
106 | old_img = copy.deepcopy(image)
107 | orininal_h = np.array(image).shape[0]
108 | orininal_w = np.array(image).shape[1]
109 | #---------------------------------------------------------#
110 | # Add gray bars to the image for a distortion-free resize
111 | #---------------------------------------------------------#
112 | image_data, nw, nh = resize_image(image, (self.input_shape[1], self.input_shape[0]))
113 | #---------------------------------------------------------#
114 | # Normalize and add the batch_size dimension
115 | #---------------------------------------------------------#
116 | image_data = np.expand_dims(preprocess_input(np.array(image_data, np.float32)), 0)
117 |
118 | #---------------------------------------------------#
119 | # Feed the image into the network for prediction
120 | #---------------------------------------------------#
121 | pr = self.model.predict(image_data)[0]
122 | #---------------------------------------------------#
123 | # Crop off the gray-bar regions
124 | #---------------------------------------------------#
125 | pr = pr[int((self.input_shape[0] - nh) // 2) : int((self.input_shape[0] - nh) // 2 + nh), \
126 | int((self.input_shape[1] - nw) // 2) : int((self.input_shape[1] - nw) // 2 + nw)]
127 | #---------------------------------------------------#
128 | # Resize the prediction back to the original image size
129 | #---------------------------------------------------#
130 | pr = cv2.resize(pr, (orininal_w, orininal_h), interpolation = cv2.INTER_LINEAR)
131 | #---------------------------------------------------#
132 | # Take the class of each pixel
133 | #---------------------------------------------------#
134 | pr = pr.argmax(axis=-1)
135 |
136 | #---------------------------------------------------------#
137 | # Count pixels per class
138 | #---------------------------------------------------------#
139 | if count:
140 | classes_nums = np.zeros([self.num_classes])
141 | total_points_num = orininal_h * orininal_w
142 | print('-' * 63)
143 | print("|%25s | %15s | %15s|"%("Key", "Value", "Ratio"))
144 | print('-' * 63)
145 | for i in range(self.num_classes):
146 | num = np.sum(pr == i)
147 | ratio = num / total_points_num * 100
148 | if num > 0:
149 | print("|%25s | %15s | %14.2f%%|"%(str(name_classes[i]), str(num), ratio))
150 | print('-' * 63)
151 | classes_nums[i] = num
152 | print("classes_nums:", classes_nums)
153 |
154 | if self.mix_type == 0:
155 | # seg_img = np.zeros((np.shape(pr)[0], np.shape(pr)[1], 3))
156 | # for c in range(self.num_classes):
157 | # seg_img[:, :, 0] += ((pr[:, :] == c ) * self.colors[c][0]).astype('uint8')
158 | # seg_img[:, :, 1] += ((pr[:, :] == c ) * self.colors[c][1]).astype('uint8')
159 | # seg_img[:, :, 2] += ((pr[:, :] == c ) * self.colors[c][2]).astype('uint8')
160 | seg_img = np.reshape(np.array(self.colors, np.uint8)[np.reshape(pr, [-1])], [orininal_h, orininal_w, -1])
161 | #------------------------------------------------#
162 | # Convert the new image into PIL Image form
163 | #------------------------------------------------#
164 | image = Image.fromarray(np.uint8(seg_img))
165 | #------------------------------------------------#
166 | # Blend the new image with the original image
167 | #------------------------------------------------#
168 | image = Image.blend(old_img, image, 0.7)
169 |
170 | elif self.mix_type == 1:
171 | # seg_img = np.zeros((np.shape(pr)[0], np.shape(pr)[1], 3))
172 | # for c in range(self.num_classes):
173 | # seg_img[:, :, 0] += ((pr[:, :] == c ) * self.colors[c][0]).astype('uint8')
174 | # seg_img[:, :, 1] += ((pr[:, :] == c ) * self.colors[c][1]).astype('uint8')
175 | # seg_img[:, :, 2] += ((pr[:, :] == c ) * self.colors[c][2]).astype('uint8')
176 | seg_img = np.reshape(np.array(self.colors, np.uint8)[np.reshape(pr, [-1])], [orininal_h, orininal_w, -1])
177 | #------------------------------------------------#
178 | # Convert the new image into PIL Image form
179 | #------------------------------------------------#
180 | image = Image.fromarray(np.uint8(seg_img))
181 |
182 | elif self.mix_type == 2:
183 | seg_img = (np.expand_dims(pr != 0, -1) * np.array(old_img, np.float32)).astype('uint8')
184 | 
#------------------------------------------------#
185 | # Convert the new image into PIL Image form
186 | #------------------------------------------------#
187 | image = Image.fromarray(np.uint8(seg_img))
188 |
189 | return image
190 |
191 | def get_FPS(self, image, test_interval):
192 | #---------------------------------------------------------#
193 | # Convert the image to RGB here to keep grayscale images from raising errors during prediction.
194 | # The code only supports prediction on RGB images; all other image types are converted to RGB
195 | #---------------------------------------------------------#
196 | image = cvtColor(image)
197 | #---------------------------------------------------------#
198 | # Add gray bars to the image for a distortion-free resize
199 | #---------------------------------------------------------#
200 | image_data, nw, nh = resize_image(image, (self.input_shape[1], self.input_shape[0]))
201 | #---------------------------------------------------------#
202 | # Normalize and add the batch_size dimension
203 | #---------------------------------------------------------#
204 | image_data = np.expand_dims(preprocess_input(np.array(image_data, np.float32)), 0)
205 |
206 | #---------------------------------------------------#
207 | # Feed the image into the network for prediction
208 | #---------------------------------------------------#
209 | pr = self.model.predict(image_data)[0]
210 | #--------------------------------------#
211 | # Crop off the gray-bar regions
212 | #--------------------------------------#
213 | pr = pr[int((self.input_shape[0] - nh) // 2) : int((self.input_shape[0] - nh) // 2 + nh), \
214 | int((self.input_shape[1] - nw) // 2) : int((self.input_shape[1] - nw) // 2 + nw)]
215 | #---------------------------------------------------#
216 | # Take the class of each pixel
217 | #---------------------------------------------------#
218 | pr = pr.argmax(axis=-1).reshape([self.input_shape[0],self.input_shape[1]])
219 |
220 | t1 = time.time()
221 | for _ in range(test_interval):
222 | #---------------------------------------------------#
223 | # Feed the image into the network for prediction
224 | #---------------------------------------------------#
225 | pr = self.model.predict(image_data)[0]
226 | 
#--------------------------------------#
227 | # Crop off the gray-bar regions
228 | #--------------------------------------#
229 | pr = pr[int((self.input_shape[0] - nh) // 2) : int((self.input_shape[0] - nh) // 2 + nh), \
230 | int((self.input_shape[1] - nw) // 2) : int((self.input_shape[1] - nw) // 2 + nw)]
231 | #---------------------------------------------------#
232 | # Take the class of each pixel
233 | #---------------------------------------------------#
234 | pr = pr.argmax(axis=-1).reshape([self.input_shape[0],self.input_shape[1]])
235 |
236 | t2 = time.time()
237 | tact_time = (t2 - t1) / test_interval
238 | return tact_time
239 |
240 | def get_miou_png(self, image):
241 | #---------------------------------------------------------#
242 | # Convert the image to RGB here to keep grayscale images from raising errors during prediction.
243 | # The code only supports prediction on RGB images; all other image types are converted to RGB
244 | #---------------------------------------------------------#
245 | image = cvtColor(image)
246 | orininal_h = np.array(image).shape[0]
247 | orininal_w = np.array(image).shape[1]
248 | #---------------------------------------------------------#
249 | # Add gray bars to the image for a distortion-free resize
250 | #---------------------------------------------------------#
251 | image_data, nw, nh = resize_image(image, (self.input_shape[1], self.input_shape[0]))
252 | #---------------------------------------------------------#
253 | # Normalize and add the batch_size dimension
254 | #---------------------------------------------------------#
255 | image_data = np.expand_dims(preprocess_input(np.array(image_data, np.float32)), 0)
256 |
257 | #--------------------------------------#
258 | # Feed the image into the network for prediction
259 | #--------------------------------------#
260 | pr = self.model.predict(image_data)[0]
261 | #--------------------------------------#
262 | # Crop off the gray-bar regions
263 | #--------------------------------------#
264 | pr = pr[int((self.input_shape[0] - nh) // 2) : int((self.input_shape[0] - nh) // 2 + nh), \
265 | int((self.input_shape[1] - nw) // 2) : int((self.input_shape[1] - nw) // 2 + nw)]
266 | #--------------------------------------#
267 | # 
Resize the prediction back to the original image size
268 | #--------------------------------------#
269 | pr = cv2.resize(pr, (orininal_w, orininal_h), interpolation = cv2.INTER_LINEAR)
270 | #---------------------------------------------------#
271 | # Take the class of each pixel
272 | #---------------------------------------------------#
273 | pr = pr.argmax(axis=-1)
274 |
275 | image = Image.fromarray(np.uint8(pr))
276 | return image
277 |
-------------------------------------------------------------------------------- /get_miou.py: --------------------------------------------------------------------------------
1 | import os
2 |
3 | from PIL import Image
4 | from tqdm import tqdm
5 |
6 | from deeplab import DeeplabV3
7 | from utils.utils_metrics import compute_mIoU, show_results
8 |
9 | '''
10 | Note the following when running the metric evaluation:
11 | 1. The images this file generates are grayscale; because the pixel values are small, they show almost nothing when viewed as PNGs, so near-black images are normal.
12 | 2. This file computes the miou of the validation set; this repo currently uses the test set as the validation set and does not split off a separate test set
13 | '''
14 | if __name__ == "__main__":
15 | #---------------------------------------------------------------------------#
16 | # miou_mode specifies what this file computes when it is run
17 | # miou_mode = 0 runs the whole miou pipeline: obtain the predictions, then compute the miou.
18 | # miou_mode = 1 only obtains the predictions.
19 | # miou_mode = 2 only computes the miou.
20 | #---------------------------------------------------------------------------#
21 | miou_mode = 0
22 | #------------------------------#
23 | # Number of classes + 1, e.g. 2 + 1
24 | #------------------------------#
25 | num_classes = 21
26 | #--------------------------------------------#
27 | # Classes to distinguish; same as in json_to_dataset
28 | #--------------------------------------------#
29 | name_classes = ["background","aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]
30 | # name_classes = ["_background_","cat","dog"]
31 | #-------------------------------------------------------#
32 | # Points to the folder containing the VOC dataset
33 | # By default it points to the VOC dataset in the root directory
34 | #-------------------------------------------------------#
35 | VOCdevkit_path = 'VOCdevkit'
36 |
37 | 
image_ids = open(os.path.join(VOCdevkit_path, "VOC2007/ImageSets/Segmentation/val.txt"),'r').read().splitlines()
38 | gt_dir = os.path.join(VOCdevkit_path, "VOC2007/SegmentationClass/")
39 | miou_out_path = "miou_out"
40 | pred_dir = os.path.join(miou_out_path, 'detection-results')
41 |
42 | if miou_mode == 0 or miou_mode == 1:
43 | if not os.path.exists(pred_dir):
44 | os.makedirs(pred_dir)
45 |
46 | print("Load model.")
47 | deeplab = DeeplabV3()
48 | print("Load model done.")
49 |
50 | print("Get predict result.")
51 | for image_id in tqdm(image_ids):
52 | image_path = os.path.join(VOCdevkit_path, "VOC2007/JPEGImages/"+image_id+".jpg")
53 | image = Image.open(image_path)
54 | image = deeplab.get_miou_png(image)
55 | image.save(os.path.join(pred_dir, image_id + ".png"))
56 | print("Get predict result done.")
57 |
58 | if miou_mode == 0 or miou_mode == 2:
59 | print("Get miou.")
60 | hist, IoUs, PA_Recall, Precision = compute_mIoU(gt_dir, pred_dir, image_ids, num_classes, name_classes) # run the function that computes the mIoU
61 | print("Get miou done.")
62 | show_results(miou_out_path, hist, IoUs, PA_Recall, Precision, name_classes)
63 |
-------------------------------------------------------------------------------- /img/street.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bubbliiiing/deeplabv3-plus-keras/ba440f790bbaac0f947c193eae4514b4eac83235/img/street.jpg -------------------------------------------------------------------------------- /json_to_dataset.py: --------------------------------------------------------------------------------
1 | import base64
2 | import json
3 | import os
4 | import os.path as osp
5 |
6 | import numpy as np
7 | import PIL.Image
8 | from labelme import utils
9 |
10 | '''
11 | Note the following when building your own semantic segmentation dataset:
12 | 1. I used labelme version 3.16.7 and recommend using that version; some labelme versions fail,
13 | with the error: Too many dimensions: 3 > 2
14 | Install it from the command line with pip install labelme==3.16.7
15 | 2. The label images generated here are 8-bit color images, which looks a bit different from the dataset format shown in the video.
16 | 
Although it looks like a color image, it is in fact 8-bit, and each pixel's value is the class that pixel belongs to.
17 |    So it is actually the same format as the VOC dataset in the video, and a dataset made this way can be used normally.
18 | '''
19 | if __name__ == '__main__':
20 |     jpgs_path = "datasets/JPEGImages"
21 |     pngs_path = "datasets/SegmentationClass"
22 |     classes = ["_background_","aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]
23 |     # classes = ["_background_","cat","dog"]
24 | 
25 |     count = os.listdir("./datasets/before/")
26 |     for i in range(0, len(count)):
27 |         path = os.path.join("./datasets/before", count[i])
28 | 
29 |         if os.path.isfile(path) and path.endswith('json'):
30 |             data = json.load(open(path))
31 | 
32 |             if data['imageData']:
33 |                 imageData = data['imageData']
34 |             else:
35 |                 imagePath = os.path.join(os.path.dirname(path), data['imagePath'])
36 |                 with open(imagePath, 'rb') as f:
37 |                     imageData = f.read()
38 |                     imageData = base64.b64encode(imageData).decode('utf-8')
39 | 
40 |             img = utils.img_b64_to_arr(imageData)
41 |             label_name_to_value = {'_background_': 0}
42 |             for shape in data['shapes']:
43 |                 label_name = shape['label']
44 |                 if label_name in label_name_to_value:
45 |                     label_value = label_name_to_value[label_name]
46 |                 else:
47 |                     label_value = len(label_name_to_value)
48 |                     label_name_to_value[label_name] = label_value
49 | 
50 |             # label_values must be dense
51 |             label_values, label_names = [], []
52 |             for ln, lv in sorted(label_name_to_value.items(), key=lambda x: x[1]):
53 |                 label_values.append(lv)
54 |                 label_names.append(ln)
55 |             assert label_values == list(range(len(label_values)))
56 | 
57 |             lbl = utils.shapes_to_label(img.shape, data['shapes'], label_name_to_value)
58 | 
59 | 
60 |             PIL.Image.fromarray(img).save(osp.join(jpgs_path, count[i].split(".")[0]+'.jpg'))
61 | 
62 |             new = np.zeros([np.shape(img)[0],np.shape(img)[1]])
63 |             for name in label_names:
64 |                 index_json = label_names.index(name)
65 |                 index_all = classes.index(name)
66 |                 
new = new + index_all*(np.array(lbl) == index_json)
67 | 
68 |             utils.lblsave(osp.join(pngs_path, count[i].split(".")[0]+'.png'), new)
69 |             print('Saved ' + count[i].split(".")[0] + '.jpg and ' + count[i].split(".")[0] + '.png')
70 | 
--------------------------------------------------------------------------------
/logs/README.md:
--------------------------------------------------------------------------------
1 | This folder stores the weights produced during training.
2 | 
--------------------------------------------------------------------------------
/model_data/README.md:
--------------------------------------------------------------------------------
1 | This folder stores the pretrained weights, which can be downloaded from Baidu Netdisk.
2 | 
--------------------------------------------------------------------------------
/model_data/deeplabv3_mobilenetv2.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bubbliiiing/deeplabv3-plus-keras/ba440f790bbaac0f947c193eae4514b4eac83235/model_data/deeplabv3_mobilenetv2.h5
--------------------------------------------------------------------------------
/nets/Xception.py:
--------------------------------------------------------------------------------
1 | from keras import layers
2 | from keras.layers import (Activation, BatchNormalization, Conv2D,
3 |                           DepthwiseConv2D, ZeroPadding2D)
4 | 
5 | 
6 | def _conv2d_same(x, filters, prefix, stride=1, kernel_size=3, rate=1):
7 |     # compute the amount of padding, i.e. whether h/w need to shrink
8 |     if stride == 1:
9 |         return Conv2D(filters,
10 |                       (kernel_size, kernel_size),
11 |                       strides=(stride, stride),
12 |                       padding='same', use_bias=False,
13 |                       dilation_rate=(rate, rate),
14 |                       name=prefix)(x)
15 |     else:
16 |         kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
17 |         pad_total = kernel_size_effective - 1
18 |         pad_beg = pad_total // 2
19 |         pad_end = pad_total - pad_beg
20 |         x = ZeroPadding2D((pad_beg, pad_end))(x)
21 |         return Conv2D(filters,
22 |                       (kernel_size, kernel_size),
23 |                       strides=(stride, stride),
24 |                       padding='valid', use_bias=False,
25 | dilation_rate=(rate, rate), 26 | name=prefix)(x) 27 | 28 | def SepConv_BN(x, filters, prefix, stride=1, kernel_size=3, rate=1, depth_activation=False, epsilon=1e-3): 29 | # 计算padding的数量,hw是否需要收缩 30 | if stride == 1: 31 | depth_padding = 'same' 32 | else: 33 | kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1) 34 | pad_total = kernel_size_effective - 1 35 | pad_beg = pad_total // 2 36 | pad_end = pad_total - pad_beg 37 | x = ZeroPadding2D((pad_beg, pad_end))(x) 38 | depth_padding = 'valid' 39 | 40 | # 如果需要激活函数 41 | if not depth_activation: 42 | x = Activation('relu')(x) 43 | 44 | # 分离卷积,首先3x3分离卷积,再1x1卷积 45 | # 3x3采用膨胀卷积 46 | x = DepthwiseConv2D((kernel_size, kernel_size), strides=(stride, stride), dilation_rate=(rate, rate), 47 | padding=depth_padding, use_bias=False, name=prefix + '_depthwise')(x) 48 | x = BatchNormalization(name=prefix + '_depthwise_BN', epsilon=epsilon)(x) 49 | if depth_activation: 50 | x = Activation('relu')(x) 51 | 52 | # 1x1卷积,进行压缩 53 | x = Conv2D(filters, (1, 1), padding='same', 54 | use_bias=False, name=prefix + '_pointwise')(x) 55 | x = BatchNormalization(name=prefix + '_pointwise_BN', epsilon=epsilon)(x) 56 | if depth_activation: 57 | x = Activation('relu')(x) 58 | 59 | return x 60 | 61 | def _xception_block(inputs, depth_list, prefix, skip_connection_type, stride, 62 | rate=1, depth_activation=False, return_skip=False): 63 | 64 | residual = inputs 65 | for i in range(3): 66 | residual = SepConv_BN(residual, 67 | depth_list[i], 68 | prefix + '_separable_conv{}'.format(i + 1), 69 | stride=stride if i == 2 else 1, 70 | rate=rate, 71 | depth_activation=depth_activation) 72 | if i == 1: 73 | skip = residual 74 | if skip_connection_type == 'conv': 75 | shortcut = _conv2d_same(inputs, depth_list[-1], prefix + '_shortcut', 76 | kernel_size=1, 77 | stride=stride) 78 | shortcut = BatchNormalization(name=prefix + '_shortcut_BN')(shortcut) 79 | outputs = layers.add([residual, shortcut]) 80 | elif skip_connection_type == 'sum': 
81 | outputs = layers.add([residual, inputs]) 82 | elif skip_connection_type == 'none': 83 | outputs = residual 84 | if return_skip: 85 | return outputs, skip 86 | else: 87 | return outputs 88 | 89 | def Xception(inputs, alpha=1, downsample_factor=16): 90 | if downsample_factor == 8: 91 | entry_block3_stride = 1 92 | middle_block_rate = 2 # ! Not mentioned in paper, but required 93 | exit_block_rates = (2, 4) 94 | atrous_rates = (12, 24, 36) 95 | elif downsample_factor == 16: 96 | entry_block3_stride = 2 97 | middle_block_rate = 1 98 | exit_block_rates = (1, 2) 99 | atrous_rates = (6, 12, 18) 100 | else: 101 | raise ValueError('Unsupported factor - `{}`, Use 8 or 16.'.format(downsample_factor)) 102 | 103 | # 256,256,32 104 | x = Conv2D(32, (3, 3), strides=(2, 2), 105 | name='entry_flow_conv1_1', use_bias=False, padding='same')(inputs) 106 | x = BatchNormalization(name='entry_flow_conv1_1_BN')(x) 107 | x = Activation('relu')(x) 108 | 109 | # 256,256,64 110 | x = _conv2d_same(x, 64, 'entry_flow_conv1_2', kernel_size=3, stride=1) 111 | x = BatchNormalization(name='entry_flow_conv1_2_BN')(x) 112 | x = Activation('relu')(x) 113 | 114 | # 256,256,128 -> 256,256,128 -> 128,128,128 115 | x = _xception_block(x, [128, 128, 128], 'entry_flow_block1', 116 | skip_connection_type='conv', stride=2, 117 | depth_activation=False) 118 | 119 | # 128,128,256 -> 128,128,256 -> 64,64,256 120 | # skip = 128,128,256 121 | x, skip1 = _xception_block(x, [256, 256, 256], 'entry_flow_block2', 122 | skip_connection_type='conv', stride=2, 123 | depth_activation=False, return_skip=True) 124 | 125 | x = _xception_block(x, [728, 728, 728], 'entry_flow_block3', 126 | skip_connection_type='conv', stride=entry_block3_stride, 127 | depth_activation=False) 128 | for i in range(16): 129 | x = _xception_block(x, [728, 728, 728], 'middle_flow_unit_{}'.format(i + 1), 130 | skip_connection_type='sum', stride=1, rate=middle_block_rate, 131 | depth_activation=False) 132 | 133 | x = _xception_block(x, [728, 
1024, 1024], 'exit_flow_block1', 134 | skip_connection_type='conv', stride=1, rate=exit_block_rates[0], 135 | depth_activation=False) 136 | x = _xception_block(x, [1536, 1536, 2048], 'exit_flow_block2', 137 | skip_connection_type='none', stride=1, rate=exit_block_rates[1], 138 | depth_activation=True) 139 | return x,atrous_rates,skip1 140 | -------------------------------------------------------------------------------- /nets/__init__.py: -------------------------------------------------------------------------------- 1 | # -------------------------------------------------------------------------------- /nets/deeplab.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | from keras import backend as K 3 | from keras.layers import (Activation, BatchNormalization, Concatenate, Conv2D, 4 | DepthwiseConv2D, Dropout, GlobalAveragePooling2D, 5 | Input, Lambda, Softmax, ZeroPadding2D) 6 | from keras.models import Model 7 | 8 | from nets.mobilenet import mobilenetV2 9 | from nets.Xception import Xception 10 | 11 | 12 | def SepConv_BN(x, filters, prefix, stride=1, kernel_size=3, rate=1, depth_activation=False, epsilon=1e-3): 13 | # 计算padding的数量,hw是否需要收缩 14 | if stride == 1: 15 | depth_padding = 'same' 16 | else: 17 | kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1) 18 | pad_total = kernel_size_effective - 1 19 | pad_beg = pad_total // 2 20 | pad_end = pad_total - pad_beg 21 | x = ZeroPadding2D((pad_beg, pad_end))(x) 22 | depth_padding = 'valid' 23 | 24 | # 如果需要激活函数 25 | if not depth_activation: 26 | x = Activation('relu')(x) 27 | 28 | # 分离卷积,首先3x3分离卷积,再1x1卷积 29 | # 3x3采用膨胀卷积 30 | x = DepthwiseConv2D((kernel_size, kernel_size), strides=(stride, stride), dilation_rate=(rate, rate), 31 | padding=depth_padding, use_bias=False, name=prefix + '_depthwise')(x) 32 | x = BatchNormalization(name=prefix + '_depthwise_BN', epsilon=epsilon)(x) 33 | if depth_activation: 34 | x = Activation('relu')(x) 35 | 
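The ZeroPadding2D arithmetic at the top of SepConv_BN (and in _conv2d_same) keeps 'same'-style spatial behaviour for strided, dilated convolutions: the kernel's effective extent grows with the dilation rate, and the required padding is split as evenly as possible. The same formulas as a standalone sketch (`same_pad` is a hypothetical name, not in the repo):

```python
def same_pad(kernel_size, rate):
    # effective kernel size of a dilated convolution, then split the
    # required padding, putting the extra pixel at the end when odd
    kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
    pad_total = kernel_size_effective - 1
    pad_beg = pad_total // 2
    pad_end = pad_total - pad_beg
    return pad_beg, pad_end

print(same_pad(3, 1))   # (1, 1): a plain 3x3 kernel
print(same_pad(3, 2))   # (2, 2): dilation 2 makes a 3x3 act like a 5x5
```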
36 | # 1x1卷积,进行压缩 37 | x = Conv2D(filters, (1, 1), padding='same', 38 | use_bias=False, name=prefix + '_pointwise')(x) 39 | x = BatchNormalization(name=prefix + '_pointwise_BN', epsilon=epsilon)(x) 40 | if depth_activation: 41 | x = Activation('relu')(x) 42 | 43 | return x 44 | 45 | def Deeplabv3(input_shape, num_classes, alpha=1., backbone="mobilenet", downsample_factor=16): 46 | img_input = Input(shape=input_shape) 47 | 48 | if backbone=="xception": 49 | #----------------------------------# 50 | # 获得两个特征层 51 | # 浅层特征skip1 [128,128,256] 52 | # 主干部分x [30,30,2048] 53 | #----------------------------------# 54 | x, atrous_rates, skip1 = Xception(img_input, alpha, downsample_factor=downsample_factor) 55 | elif backbone=="mobilenet": 56 | #----------------------------------# 57 | # 获得两个特征层 58 | # 浅层特征skip1 [128,128,24] 59 | # 主干部分x [30,30,320] 60 | #----------------------------------# 61 | x, atrous_rates, skip1 = mobilenetV2(img_input, alpha, downsample_factor=downsample_factor) 62 | else: 63 | raise ValueError('Unsupported backbone - `{}`, Use mobilenet, xception.'.format(backbone)) 64 | 65 | size_before = tf.keras.backend.int_shape(x) 66 | 67 | #-----------------------------------------# 68 | # 一共五个分支 69 | # ASPP特征提取模块 70 | # 利用不同膨胀率的膨胀卷积进行特征提取 71 | #-----------------------------------------# 72 | # 分支0 73 | b0 = Conv2D(256, (1, 1), padding='same', use_bias=False, name='aspp0')(x) 74 | b0 = BatchNormalization(name='aspp0_BN', epsilon=1e-5)(b0) 75 | b0 = Activation('relu', name='aspp0_activation')(b0) 76 | 77 | # 分支1 rate = 6 (12) 78 | b1 = SepConv_BN(x, 256, 'aspp1', 79 | rate=atrous_rates[0], depth_activation=True, epsilon=1e-5) 80 | # 分支2 rate = 12 (24) 81 | b2 = SepConv_BN(x, 256, 'aspp2', 82 | rate=atrous_rates[1], depth_activation=True, epsilon=1e-5) 83 | # 分支3 rate = 18 (36) 84 | b3 = SepConv_BN(x, 256, 'aspp3', 85 | rate=atrous_rates[2], depth_activation=True, epsilon=1e-5) 86 | 87 | # 分支4 全部求平均后,再利用expand_dims扩充维度,之后利用1x1卷积调整通道 88 | b4 = 
GlobalAveragePooling2D()(x) 89 | b4 = Lambda(lambda x: K.expand_dims(x, 1))(b4) 90 | b4 = Lambda(lambda x: K.expand_dims(x, 1))(b4) 91 | b4 = Conv2D(256, (1, 1), padding='same', use_bias=False, name='image_pooling')(b4) 92 | b4 = BatchNormalization(name='image_pooling_BN', epsilon=1e-5)(b4) 93 | b4 = Activation('relu')(b4) 94 | # 直接利用resize_images扩充hw 95 | b4 = Lambda(lambda x: tf.image.resize_images(x, size_before[1:3], align_corners=True))(b4) 96 | 97 | #-----------------------------------------# 98 | # 将五个分支的内容堆叠起来 99 | # 然后1x1卷积整合特征。 100 | #-----------------------------------------# 101 | x = Concatenate()([b4, b0, b1, b2, b3]) 102 | # 利用conv2d压缩 32,32,256 103 | x = Conv2D(256, (1, 1), padding='same', use_bias=False, name='concat_projection')(x) 104 | x = BatchNormalization(name='concat_projection_BN', epsilon=1e-5)(x) 105 | x = Activation('relu')(x) 106 | x = Dropout(0.1)(x) 107 | 108 | skip_size = tf.keras.backend.int_shape(skip1) 109 | #-----------------------------------------# 110 | # 将加强特征边上采样 111 | #-----------------------------------------# 112 | x = Lambda(lambda xx: tf.image.resize_images(xx, skip_size[1:3], align_corners=True))(x) 113 | #----------------------------------# 114 | # 浅层特征边 115 | #----------------------------------# 116 | dec_skip1 = Conv2D(48, (1, 1), padding='same',use_bias=False, name='feature_projection0')(skip1) 117 | dec_skip1 = BatchNormalization(name='feature_projection0_BN', epsilon=1e-5)(dec_skip1) 118 | dec_skip1 = Activation(tf.nn.relu)(dec_skip1) 119 | 120 | #-----------------------------------------# 121 | # 与浅层特征堆叠后利用卷积进行特征提取 122 | #-----------------------------------------# 123 | x = Concatenate()([x, dec_skip1]) 124 | x = SepConv_BN(x, 256, 'decoder_conv0', 125 | depth_activation=True, epsilon=1e-5) 126 | x = SepConv_BN(x, 256, 'decoder_conv1', 127 | depth_activation=True, epsilon=1e-5) 128 | 129 | #-----------------------------------------# 130 | # 获得每个像素点的分类 131 | #-----------------------------------------# 132 | # 
512,512 133 | size_before3 = tf.keras.backend.int_shape(img_input) 134 | # 512,512,21 135 | x = Conv2D(num_classes, (1, 1), padding='same')(x) 136 | x = Lambda(lambda xx:tf.image.resize_images(xx,size_before3[1:3], align_corners=True))(x) 137 | x = Softmax()(x) 138 | 139 | model = Model(img_input, x, name='deeplabv3plus') 140 | return model 141 | -------------------------------------------------------------------------------- /nets/deeplab_training.py: -------------------------------------------------------------------------------- 1 | import math 2 | from functools import partial 3 | 4 | import numpy as np 5 | import tensorflow as tf 6 | from keras import backend as K 7 | 8 | 9 | def dice_loss_with_CE(cls_weights, beta=1, smooth = 1e-5): 10 | cls_weights = np.reshape(cls_weights, [1, 1, 1, -1]) 11 | def _dice_loss_with_CE(y_true, y_pred): 12 | y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon()) 13 | #------------------------------------------------------------------------# 14 | # 在VOC数据集中,部分标签中物体存在白边,这些白边是训练时需要去忽略的。 15 | # 白边的像素点值为255,在数据载入时,默认将白边值调整成num_classes + 1 16 | # 在one_hot处理后,我们只需要取前num_classes序号的内容来进行训练即可 17 | #------------------------------------------------------------------------# 18 | CE_loss = - y_true[...,:-1] * K.log(y_pred) * cls_weights 19 | CE_loss = K.mean(K.sum(CE_loss, axis = -1)) 20 | 21 | tp = K.sum(y_true[...,:-1] * y_pred, axis=[0,1,2]) 22 | fp = K.sum(y_pred , axis=[0,1,2]) - tp 23 | fn = K.sum(y_true[...,:-1], axis=[0,1,2]) - tp 24 | 25 | score = ((1 + beta ** 2) * tp + smooth) / ((1 + beta ** 2) * tp + beta ** 2 * fn + fp + smooth) 26 | score = tf.reduce_mean(score) 27 | dice_loss = 1 - score 28 | # dice_loss = tf.Print(dice_loss, [dice_loss, CE_loss]) 29 | return CE_loss + dice_loss 30 | return _dice_loss_with_CE 31 | 32 | def CE(cls_weights): 33 | cls_weights = np.reshape(cls_weights, [1, 1, 1, -1]) 34 | def _CE(y_true, y_pred): 35 | y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon()) 36 | 
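The losses above rely on a labeling convention: VOC's value-255 white-border pixels are remapped to num_classes during data loading, so after one-hot encoding the label carries num_classes + 1 channels and only `y_true[..., :-1]` enters the loss. A single-pixel sketch of that weighted cross-entropy (`pixel_ce` is a hypothetical helper operating on plain lists, not the repo's tensor code):

```python
import math

def pixel_ce(y_true, y_pred, cls_weights):
    # y_true has num_classes + 1 entries; the extra last channel marks
    # "ignore" pixels (the value-255 border), so only y_true[:-1] is used,
    # mirroring y_true[..., :-1] in the code above.
    return -sum(t * math.log(p) * w
                for t, p, w in zip(y_true[:-1], y_pred, cls_weights))

border_pixel = pixel_ce([0, 0, 1], [0.5, 0.5], [1.0, 1.0])   # 0.0: ignored
class0_pixel = pixel_ce([1, 0, 0], [0.9, 0.1], [1.0, 1.0])   # ~0.105
```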
#------------------------------------------------------------------------# 37 | # 在VOC数据集中,部分标签中物体存在白边,这些白边是训练时需要去忽略的。 38 | # 白边的像素点值为255,在数据载入时,默认将白边值调整成num_classes + 1 39 | # 在one_hot处理后,我们只需要取前num_classes序号的内容来进行训练即可 40 | #------------------------------------------------------------------------# 41 | CE_loss = - y_true[...,:-1] * K.log(y_pred) * cls_weights 42 | CE_loss = K.mean(K.sum(CE_loss, axis = -1)) 43 | # dice_loss = tf.Print(CE_loss, [CE_loss]) 44 | return CE_loss 45 | return _CE 46 | 47 | def dice_loss_with_Focal_Loss(cls_weights, beta=1, smooth = 1e-5, alpha=0.5, gamma=2): 48 | cls_weights = np.reshape(cls_weights, [1, 1, 1, -1]) 49 | def _dice_loss_with_Focal_Loss(y_true, y_pred): 50 | y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon()) 51 | #------------------------------------------------------------------------# 52 | # 在VOC数据集中,部分标签中物体存在白边,这些白边是训练时需要去忽略的。 53 | # 白边的像素点值为255,在数据载入时,默认将白边值调整成num_classes + 1 54 | # 在one_hot处理后,我们只需要取前num_classes序号的内容来进行训练即可 55 | #------------------------------------------------------------------------# 56 | logpt = - y_true[...,:-1] * K.log(y_pred) * cls_weights 57 | logpt = - K.sum(logpt, axis = -1) 58 | 59 | pt = tf.exp(logpt) 60 | if alpha is not None: 61 | logpt *= alpha 62 | CE_loss = -((1 - pt) ** gamma) * logpt 63 | CE_loss = K.mean(CE_loss) 64 | 65 | tp = K.sum(y_true[...,:-1] * y_pred, axis=[0,1,2]) 66 | fp = K.sum(y_pred , axis=[0,1,2]) - tp 67 | fn = K.sum(y_true[...,:-1], axis=[0,1,2]) - tp 68 | 69 | score = ((1 + beta ** 2) * tp + smooth) / ((1 + beta ** 2) * tp + beta ** 2 * fn + fp + smooth) 70 | score = tf.reduce_mean(score) 71 | dice_loss = 1 - score 72 | # dice_loss = tf.Print(dice_loss, [dice_loss, CE_loss]) 73 | return CE_loss + dice_loss 74 | return _dice_loss_with_Focal_Loss 75 | 76 | def Focal_Loss(cls_weights, alpha=0.5, gamma=2): 77 | cls_weights = np.reshape(cls_weights, [1, 1, 1, -1]) 78 | def _Focal_Loss(y_true, y_pred): 79 | y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon()) 80 | 
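The focal variant above first forms logpt (the log-probability assigned to the true class), recovers pt = exp(logpt), scales by alpha, and then modulates by (1 - pt) ** gamma so that confidently classified pixels contribute little. Reduced to a single pixel (`focal_pixel` is a hypothetical helper, not the repo's tensor code):

```python
import math

def focal_pixel(p_true, alpha=0.5, gamma=2):
    # logpt: log-probability of the true class; pt = exp(logpt) recovers
    # that probability, and (1 - pt) ** gamma down-weights easy pixels
    logpt = math.log(p_true)
    pt = math.exp(logpt)
    logpt *= alpha
    return -((1 - pt) ** gamma) * logpt

easy = focal_pixel(0.9)   # heavily down-weighted relative to plain CE
hard = focal_pixel(0.1)   # dominates the loss
```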
#------------------------------------------------------------------------# 81 | # 在VOC数据集中,部分标签中物体存在白边,这些白边是训练时需要去忽略的。 82 | # 白边的像素点值为255,在数据载入时,默认将白边值调整成num_classes + 1 83 | # 在one_hot处理后,我们只需要取前num_classes序号的内容来进行训练即可 84 | #------------------------------------------------------------------------# 85 | logpt = - y_true[...,:-1] * K.log(y_pred) * cls_weights 86 | logpt = - K.sum(logpt, axis = -1) 87 | 88 | pt = tf.exp(logpt) 89 | if alpha is not None: 90 | logpt *= alpha 91 | CE_loss = -((1 - pt) ** gamma) * logpt 92 | CE_loss = K.mean(CE_loss) 93 | return CE_loss 94 | return _Focal_Loss 95 | 96 | def get_lr_scheduler(lr_decay_type, lr, min_lr, total_iters, warmup_iters_ratio = 0.1, warmup_lr_ratio = 0.1, no_aug_iter_ratio = 0.3, step_num = 10): 97 | def yolox_warm_cos_lr(lr, min_lr, total_iters, warmup_total_iters, warmup_lr_start, no_aug_iter, iters): 98 | if iters <= warmup_total_iters: 99 | # lr = (lr - warmup_lr_start) * iters / float(warmup_total_iters) + warmup_lr_start 100 | lr = (lr - warmup_lr_start) * pow(iters / float(warmup_total_iters), 2 101 | ) + warmup_lr_start 102 | elif iters >= total_iters - no_aug_iter: 103 | lr = min_lr 104 | else: 105 | lr = min_lr + 0.5 * (lr - min_lr) * ( 106 | 1.0 107 | + math.cos( 108 | math.pi 109 | * (iters - warmup_total_iters) 110 | / (total_iters - warmup_total_iters - no_aug_iter) 111 | ) 112 | ) 113 | return lr 114 | 115 | def step_lr(lr, decay_rate, step_size, iters): 116 | if step_size < 1: 117 | raise ValueError("step_size must above 1.") 118 | n = iters // step_size 119 | out_lr = lr * decay_rate ** n 120 | return out_lr 121 | 122 | if lr_decay_type == "cos": 123 | warmup_total_iters = min(max(warmup_iters_ratio * total_iters, 1), 3) 124 | warmup_lr_start = max(warmup_lr_ratio * lr, 1e-6) 125 | no_aug_iter = min(max(no_aug_iter_ratio * total_iters, 1), 15) 126 | func = partial(yolox_warm_cos_lr ,lr, min_lr, total_iters, warmup_total_iters, warmup_lr_start, no_aug_iter) 127 | else: 128 | decay_rate = (min_lr / 
lr) ** (1 / (step_num - 1)) 129 | step_size = total_iters / step_num 130 | func = partial(step_lr, lr, decay_rate, step_size) 131 | 132 | return func 133 | 134 | -------------------------------------------------------------------------------- /nets/mobilenet.py: -------------------------------------------------------------------------------- 1 | from keras.activations import relu 2 | from keras.layers import (Activation, Add, BatchNormalization, Conv2D, 3 | DepthwiseConv2D) 4 | 5 | 6 | def _make_divisible(v, divisor, min_value=None): 7 | if min_value is None: 8 | min_value = divisor 9 | new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) 10 | if new_v < 0.9 * v: 11 | new_v += divisor 12 | return new_v 13 | 14 | def relu6(x): 15 | return relu(x, max_value=6) 16 | 17 | def _inverted_res_block(inputs, expansion, stride, alpha, filters, block_id, skip_connection, rate=1): 18 | in_channels = inputs.shape[-1].value 19 | pointwise_filters = _make_divisible(int(filters * alpha), 8) 20 | prefix = 'expanded_conv_{}_'.format(block_id) 21 | 22 | x = inputs 23 | #----------------------------------------------------# 24 | # 利用1x1卷积根据输入进来的通道数进行通道数上升 25 | #----------------------------------------------------# 26 | if block_id: 27 | x = Conv2D(expansion * in_channels, kernel_size=1, padding='same', 28 | use_bias=False, activation=None, 29 | name=prefix + 'expand')(x) 30 | x = BatchNormalization(epsilon=1e-3, momentum=0.999, 31 | name=prefix + 'expand_BN')(x) 32 | x = Activation(relu6, name=prefix + 'expand_relu')(x) 33 | else: 34 | prefix = 'expanded_conv_' 35 | 36 | #----------------------------------------------------# 37 | # 利用深度可分离卷积进行特征提取 38 | #----------------------------------------------------# 39 | x = DepthwiseConv2D(kernel_size=3, strides=stride, activation=None, 40 | use_bias=False, padding='same', dilation_rate=(rate, rate), 41 | name=prefix + 'depthwise')(x) 42 | x = BatchNormalization(epsilon=1e-3, momentum=0.999, 43 | name=prefix + 'depthwise_BN')(x) 
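_make_divisible in nets/mobilenet.py above rounds a channel count to a multiple of `divisor` after the width multiplier alpha is applied, bumping up one step whenever plain rounding would lose more than 10% of the requested channels. Restated standalone:

```python
def make_divisible(v, divisor, min_value=None):
    # round to the nearest multiple of `divisor`, but never drop below
    # 90% of the requested value (or below min_value)
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

print(make_divisible(32 * 1.0, 8))    # 32: already a multiple of 8
print(make_divisible(24 * 0.75, 8))   # 24: 18 rounds down to 16, below 90%, so bump up
```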
44 | 45 | x = Activation(relu6, name=prefix + 'depthwise_relu')(x) 46 | 47 | #----------------------------------------------------# 48 | # 利用1x1的卷积进行通道数的下降 49 | #----------------------------------------------------# 50 | x = Conv2D(pointwise_filters, 51 | kernel_size=1, padding='same', use_bias=False, activation=None, 52 | name=prefix + 'project')(x) 53 | x = BatchNormalization(epsilon=1e-3, momentum=0.999, 54 | name=prefix + 'project_BN')(x) 55 | 56 | #----------------------------------------------------# 57 | # 添加残差边 58 | #----------------------------------------------------# 59 | if skip_connection: 60 | return Add(name=prefix + 'add')([inputs, x]) 61 | return x 62 | 63 | def mobilenetV2(inputs, alpha=1, downsample_factor=8): 64 | if downsample_factor == 8: 65 | block4_dilation = 2 66 | block5_dilation = 4 67 | block4_stride = 1 68 | atrous_rates = (12, 24, 36) 69 | elif downsample_factor == 16: 70 | block4_dilation = 1 71 | block5_dilation = 2 72 | block4_stride = 2 73 | atrous_rates = (6, 12, 18) 74 | else: 75 | raise ValueError('Unsupported factor - `{}`, Use 8 or 16.'.format(downsample_factor)) 76 | 77 | first_block_filters = _make_divisible(32 * alpha, 8) 78 | # 512,512,3 -> 256,256,32 79 | x = Conv2D(first_block_filters, 80 | kernel_size=3, 81 | strides=(2, 2), padding='same', 82 | use_bias=False, name='Conv')(inputs) 83 | x = BatchNormalization( 84 | epsilon=1e-3, momentum=0.999, name='Conv_BN')(x) 85 | x = Activation(relu6, name='Conv_Relu6')(x) 86 | 87 | # 256,256,32 -> 256,256,16 88 | x = _inverted_res_block(x, filters=16, alpha=alpha, stride=1, 89 | expansion=1, block_id=0, skip_connection=False) 90 | 91 | #---------------------------------------------------------------# 92 | # 256,256,16 -> 128,128,24 93 | x = _inverted_res_block(x, filters=24, alpha=alpha, stride=2, 94 | expansion=6, block_id=1, skip_connection=False) 95 | x = _inverted_res_block(x, filters=24, alpha=alpha, stride=1, 96 | expansion=6, block_id=2, skip_connection=True) 97 | skip1 = x 
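The downsample_factor switch at the top of mobilenetV2 trades stride for dilation: output stride 8 keeps block 4 at stride 1 and compensates with larger dilation (and ASPP atrous) rates to preserve the receptive field. The two supported configurations restated as data (`BACKBONE_CONFIGS` and `backbone_config` are hypothetical names mirroring the if/elif above):

```python
BACKBONE_CONFIGS = {
    8:  {"block4_stride": 1, "block4_dilation": 2,
         "block5_dilation": 4, "atrous_rates": (12, 24, 36)},
    16: {"block4_stride": 2, "block4_dilation": 1,
         "block5_dilation": 2, "atrous_rates": (6, 12, 18)},
}

def backbone_config(downsample_factor):
    # same validation behaviour as the code above: only 8 and 16 are supported
    try:
        return BACKBONE_CONFIGS[downsample_factor]
    except KeyError:
        raise ValueError('Unsupported factor - `{}`, Use 8 or 16.'.format(downsample_factor))
```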
98 | #---------------------------------------------------------------# 99 | # 128,128,24 -> 64,64.32 100 | x = _inverted_res_block(x, filters=32, alpha=alpha, stride=2, 101 | expansion=6, block_id=3, skip_connection=False) 102 | x = _inverted_res_block(x, filters=32, alpha=alpha, stride=1, 103 | expansion=6, block_id=4, skip_connection=True) 104 | x = _inverted_res_block(x, filters=32, alpha=alpha, stride=1, 105 | expansion=6, block_id=5, skip_connection=True) 106 | #---------------------------------------------------------------# 107 | # 64,64,32 -> 32,32.64 108 | x = _inverted_res_block(x, filters=64, alpha=alpha, stride=block4_stride, 109 | expansion=6, block_id=6, skip_connection=False) 110 | x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1, rate=block4_dilation, 111 | expansion=6, block_id=7, skip_connection=True) 112 | x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1, rate=block4_dilation, 113 | expansion=6, block_id=8, skip_connection=True) 114 | x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1, rate=block4_dilation, 115 | expansion=6, block_id=9, skip_connection=True) 116 | 117 | # 32,32.64 -> 32,32.96 118 | x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1, rate=block4_dilation, 119 | expansion=6, block_id=10, skip_connection=False) 120 | x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1, rate=block4_dilation, 121 | expansion=6, block_id=11, skip_connection=True) 122 | x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1, rate=block4_dilation, 123 | expansion=6, block_id=12, skip_connection=True) 124 | 125 | #---------------------------------------------------------------# 126 | # 32,32.96 -> 32,32,160 -> 32,32,320 127 | x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, rate=block4_dilation, # 1! 
128 | expansion=6, block_id=13, skip_connection=False) 129 | x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, rate=block5_dilation, 130 | expansion=6, block_id=14, skip_connection=True) 131 | x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, rate=block5_dilation, 132 | expansion=6, block_id=15, skip_connection=True) 133 | 134 | x = _inverted_res_block(x, filters=320, alpha=alpha, stride=1, rate=block5_dilation, 135 | expansion=6, block_id=16, skip_connection=False) 136 | return x,atrous_rates,skip1 137 | -------------------------------------------------------------------------------- /predict.py: -------------------------------------------------------------------------------- 1 | #----------------------------------------------------# 2 | # 将单张图片预测、摄像头检测和FPS测试功能 3 | # 整合到了一个py文件中,通过指定mode进行模式的修改。 4 | #----------------------------------------------------# 5 | import time 6 | 7 | import cv2 8 | import numpy as np 9 | from PIL import Image 10 | 11 | from deeplab import DeeplabV3 12 | 13 | if __name__ == "__main__": 14 | #-------------------------------------------------------------------------# 15 | # 如果想要修改对应种类的颜色,到__init__函数里修改self.colors即可 16 | #-------------------------------------------------------------------------# 17 | deeplab = DeeplabV3() 18 | #----------------------------------------------------------------------------------------------------------# 19 | # mode用于指定测试的模式: 20 | # 'predict' 表示单张图片预测,如果想对预测过程进行修改,如保存图片,截取对象等,可以先看下方详细的注释 21 | # 'video' 表示视频检测,可调用摄像头或者视频进行检测,详情查看下方注释。 22 | # 'fps' 表示测试fps,使用的图片是img里面的street.jpg,详情查看下方注释。 23 | # 'dir_predict' 表示遍历文件夹进行检测并保存。默认遍历img文件夹,保存img_out文件夹,详情查看下方注释。 24 | #----------------------------------------------------------------------------------------------------------# 25 | mode = "predict" 26 | #-------------------------------------------------------------------------# 27 | # count 指定了是否进行目标的像素点计数(即面积)与比例计算 28 | # name_classes 区分的种类,和json_to_dataset里面的一样,用于打印种类和数量 29 | # 30 | # 
count、name_classes仅在mode='predict'时有效 31 | #-------------------------------------------------------------------------# 32 | count = False 33 | name_classes = ["background","aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"] 34 | # name_classes = ["background","cat","dog"] 35 | #----------------------------------------------------------------------------------------------------------# 36 | # video_path 用于指定视频的路径,当video_path=0时表示检测摄像头 37 | # 想要检测视频,则设置如video_path = "xxx.mp4"即可,代表读取出根目录下的xxx.mp4文件。 38 | # video_save_path 表示视频保存的路径,当video_save_path=""时表示不保存 39 | # 想要保存视频,则设置如video_save_path = "yyy.mp4"即可,代表保存为根目录下的yyy.mp4文件。 40 | # video_fps 用于保存的视频的fps 41 | # 42 | # video_path、video_save_path和video_fps仅在mode='video'时有效 43 | # 保存视频时需要ctrl+c退出或者运行到最后一帧才会完成完整的保存步骤。 44 | #----------------------------------------------------------------------------------------------------------# 45 | video_path = 0 46 | video_save_path = "" 47 | video_fps = 25.0 48 | #----------------------------------------------------------------------------------------------------------# 49 | # test_interval 用于指定测量fps的时候,图片检测的次数。理论上test_interval越大,fps越准确。 50 | # fps_image_path 用于指定测试的fps图片 51 | # 52 | # test_interval和fps_image_path仅在mode='fps'有效 53 | #----------------------------------------------------------------------------------------------------------# 54 | test_interval = 100 55 | fps_image_path = "img/street.jpg" 56 | #-------------------------------------------------------------------------# 57 | # dir_origin_path 指定了用于检测的图片的文件夹路径 58 | # dir_save_path 指定了检测完图片的保存路径 59 | # 60 | # dir_origin_path和dir_save_path仅在mode='dir_predict'时有效 61 | #-------------------------------------------------------------------------# 62 | dir_origin_path = "img/" 63 | dir_save_path = "img_out/" 64 | 65 | if mode == "predict": 66 | ''' 67 | predict.py有几个注意点 68 | 
1、该代码无法直接进行批量预测,如果想要批量预测,可以利用os.listdir()遍历文件夹,利用Image.open打开图片文件进行预测。 69 | 具体流程可以参考get_miou_prediction.py,在get_miou_prediction.py即实现了遍历。 70 | 2、如果想要保存,利用r_image.save("img.jpg")即可保存。 71 | 3、如果想要原图和分割图不混合,可以把blend参数设置成False。 72 | 4、如果想根据mask获取对应的区域,可以参考detect_image函数中,利用预测结果绘图的部分,判断每一个像素点的种类,然后根据种类获取对应的部分。 73 | seg_img = np.zeros((np.shape(pr)[0],np.shape(pr)[1],3)) 74 | for c in range(self.num_classes): 75 | seg_img[:, :, 0] += ((pr == c)*( self.colors[c][0] )).astype('uint8') 76 | seg_img[:, :, 1] += ((pr == c)*( self.colors[c][1] )).astype('uint8') 77 | seg_img[:, :, 2] += ((pr == c)*( self.colors[c][2] )).astype('uint8') 78 | ''' 79 | while True: 80 | img = input('Input image filename:') 81 | try: 82 | image = Image.open(img) 83 | except: 84 | print('Open Error! Try again!') 85 | continue 86 | else: 87 | r_image = deeplab.detect_image(image, count=count, name_classes=name_classes) 88 | r_image.show() 89 | 90 | elif mode == "video": 91 | capture=cv2.VideoCapture(video_path) 92 | if video_save_path!="": 93 | fourcc = cv2.VideoWriter_fourcc(*'XVID') 94 | size = (int(capture.get(cv2.CAP_PROP_FRAME_WIDTH)), int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))) 95 | out = cv2.VideoWriter(video_save_path, fourcc, video_fps, size) 96 | 97 | ref, frame = capture.read() 98 | if not ref: 99 | raise ValueError("未能正确读取摄像头(视频),请注意是否正确安装摄像头(是否正确填写视频路径)。") 100 | 101 | fps = 0.0 102 | while(True): 103 | t1 = time.time() 104 | # 读取某一帧 105 | ref, frame = capture.read() 106 | if not ref: 107 | break 108 | # 格式转变,BGRtoRGB 109 | frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB) 110 | # 转变成Image 111 | frame = Image.fromarray(np.uint8(frame)) 112 | # 进行检测 113 | frame = np.array(deeplab.detect_image(frame)) 114 | # RGBtoBGR满足opencv显示格式 115 | frame = cv2.cvtColor(frame,cv2.COLOR_RGB2BGR) 116 | 117 | fps = ( fps + (1./(time.time()-t1)) ) / 2 118 | print("fps= %.2f"%(fps)) 119 | frame = cv2.putText(frame, "fps= %.2f"%(fps), (0, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2) 120 | 121 | 
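The fps readout in the video loop above is a running average of instantaneous rates: each frame's 1/dt is averaged with the previous estimate, so the display converges smoothly toward the true frame rate (`update_fps` is a hypothetical helper extracted for illustration):

```python
def update_fps(fps, frame_time):
    # same smoothing rule as the loop above: fps = (fps + 1/dt) / 2
    return (fps + 1.0 / frame_time) / 2

fps = 0.0
for dt in (0.05, 0.05, 0.05):   # three frames rendered at 20 FPS
    fps = update_fps(fps, dt)
print(fps)                      # 17.5: halfway closer to 20 each frame
```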
cv2.imshow("video",frame) 122 | c= cv2.waitKey(1) & 0xff 123 | if video_save_path!="": 124 | out.write(frame) 125 | 126 | if c==27: 127 | capture.release() 128 | break 129 | print("Video Detection Done!") 130 | capture.release() 131 | if video_save_path!="": 132 | print("Save processed video to the path :" + video_save_path) 133 | out.release() 134 | cv2.destroyAllWindows() 135 | 136 | elif mode == "fps": 137 | img = Image.open(fps_image_path) 138 | tact_time = deeplab.get_FPS(img, test_interval) 139 | print(str(tact_time) + ' seconds, ' + str(1/tact_time) + 'FPS, @batch_size 1') 140 | 141 | elif mode == "dir_predict": 142 | import os 143 | from tqdm import tqdm 144 | 145 | img_names = os.listdir(dir_origin_path) 146 | for img_name in tqdm(img_names): 147 | if img_name.lower().endswith(('.bmp', '.dib', '.png', '.jpg', '.jpeg', '.pbm', '.pgm', '.ppm', '.tif', '.tiff')): 148 | image_path = os.path.join(dir_origin_path, img_name) 149 | image = Image.open(image_path) 150 | r_image = deeplab.detect_image(image) 151 | if not os.path.exists(dir_save_path): 152 | os.makedirs(dir_save_path) 153 | r_image.save(os.path.join(dir_save_path, img_name)) 154 | 155 | else: 156 | raise AssertionError("Please specify the correct mode: 'predict', 'video', 'fps' or 'dir_predict'.") 157 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | scipy==1.2.1 2 | numpy==1.17.0 3 | Keras==2.1.5 4 | matplotlib==3.1.2 5 | opencv_python==4.1.2.30 6 | tensorflow_gpu==1.13.2 7 | tqdm==4.60.0 8 | Pillow==8.2.0 9 | h5py==2.10.0 10 | -------------------------------------------------------------------------------- /summary.py: -------------------------------------------------------------------------------- 1 | #--------------------------------------------# 2 | # 该部分代码用于看网络结构 3 | #--------------------------------------------# 4 | from nets.deeplab import Deeplabv3 5 | from 
utils.utils import net_flops 6 | 7 | if __name__ == "__main__": 8 | input_shape = [512, 512] 9 | num_classes = 21 10 | backbone = 'mobilenet' 11 | 12 | model = Deeplabv3([input_shape[0], input_shape[1], 3], num_classes, backbone=backbone) 13 | #--------------------------------------------# 14 | # Show the network structure 15 | #--------------------------------------------# 16 | model.summary() 17 | #--------------------------------------------# 18 | # Compute the FLOPS of the network 19 | #--------------------------------------------# 20 | net_flops(model, table=False) 21 | 22 | #--------------------------------------------# 23 | # Get the name and index of every layer 24 | #--------------------------------------------# 25 | # for i,layer in enumerate(model.layers): 26 | # print(i,layer.name) 27 | -------------------------------------------------------------------------------- /train.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import os 3 | 4 | import numpy as np 5 | import tensorflow as tf 6 | from keras.callbacks import (EarlyStopping, LearningRateScheduler, 7 | ModelCheckpoint, TensorBoard) 8 | from keras.layers import Conv2D, Dense, DepthwiseConv2D 9 | from keras.optimizers import SGD, Adam 10 | from keras.regularizers import l2 11 | from keras.utils.multi_gpu_utils import multi_gpu_model 12 | 13 | from nets.deeplab import Deeplabv3 14 | from nets.deeplab_training import (CE, Focal_Loss, dice_loss_with_CE, 15 | dice_loss_with_Focal_Loss, get_lr_scheduler) 16 | from utils.callbacks import LossHistory, ParallelModelCheckpoint, EvalCallback 17 | from utils.dataloader import DeeplabDataset 18 | from utils.utils import show_config 19 | from utils.utils_metrics import Iou_score, f_score 20 | 21 | tf.logging.set_verbosity(tf.logging.ERROR) 22 | 23 | ''' 24 | When training your own semantic segmentation model, be sure to note the following points: 25 | 1. Before training, carefully check that your data meets the required format. This repo requires a VOC-format dataset, with input images and labels prepared. 26 | Input images are .jpg files and need no fixed size; they are resized automatically before training. 27 | Grayscale images are converted to RGB automatically; no manual change is needed. 28 | If the input images have an extension other than jpg, batch-convert them to jpg before starting to train. 29 | 30 | Labels are .png images and need no fixed size; they are resized automatically before training. 31 | Many datasets downloaded from the internet do not follow the expected label format and must be converted first. Be careful: the value of each pixel in a label is the class that pixel belongs to. 32 | Datasets commonly found online use two classes, with background pixels = 0 and target pixels = 255. Such a dataset will train without errors, but its predictions will be useless! 33 | It must be changed so that background pixels = 0 and target pixels = 1. 34 | If the format is wrong, see: https://github.com/bubbliiiing/segmentation-format-fix 35 | 36 | 2. The loss value is used to judge convergence. What matters is the trend, i.e. the validation loss keeps decreasing; if the validation loss barely changes, the model has essentially converged. 37 | The absolute magnitude of the loss is meaningless; large or small only depends on how the loss is computed, not on being close to 0. If you want the numbers to look nicer, you can divide by 10000 inside the corresponding loss function. 38 | Loss values produced during training are saved in the loss_%Y_%m_%d_%H_%M_%S folder under logs 39 | 40 | 3. Trained weights are saved in the logs folder. Each epoch contains several steps, and each step performs one gradient-descent update. 41 | Nothing is saved after only a few steps; make sure the concepts of Epoch and Step are clear. 42 | ''' 43 | if __name__ == "__main__": 44 | #---------------------------------------------------------------------# 45 | # train_gpu The GPUs used for training 46 | # Defaults to the first card; use [0, 1] for two cards, [0, 1, 2] for three 47 | # With multiple GPUs, the batch on each card is the total batch divided by the number of cards. 48 | #---------------------------------------------------------------------# 49 | train_gpu = [0,] 50 | #-----------------------------------------------------# 51 | # num_classes Must be modified when training on your own dataset 52 | # Number of classes you need + 1, e.g. 2 + 1 53 | #-----------------------------------------------------# 54 | num_classes = 21 55 | #---------------------------------# 56 | # The backbone network to use: 57 | # mobilenet 58 | # xception 59 | #---------------------------------# 60 | backbone = "mobilenet" 61 | #----------------------------------------------------------------------------------------------------------------------------# 62 | # See the README for downloading the weight files (available via cloud drive). The pretrained weights of the model are generic across datasets, because the features are generic. 63 | # The most important part of the pretrained weights is the weights of the backbone feature-extraction network, used for feature extraction. 64 | # Pretrained weights are necessary in 99% of cases; without them the backbone weights are too random, feature extraction is ineffective, and training results will be poor 65 | # A dimension-mismatch warning when training on your own dataset is normal, since the prediction targets differ 66 | # 67 | # If training was interrupted, you can set model_path to a weight file under the logs folder to reload the partially trained weights. 68 | # Also adjust the freeze-stage or unfreeze-stage parameters below to keep the epochs continuous. 69 | # 70 | # When model_path = '', no weights for the whole model are loaded. 71 | # 72 | # Here the weights of the whole model are used, so they are loaded in train.py. 73 | # To train starting from the backbone's pretrained weights, set model_path to the backbone weights; only the backbone is then loaded. 74 | # To train from scratch, set model_path = '' and, below, Freeze_Train = False; training then starts from scratch with no backbone-freezing stage. 75 | # 76 | # In general, training from scratch works poorly, because the weights are too random and feature extraction is ineffective, so starting from scratch is very, very, very strongly discouraged! 77 | # If you must start from scratch, look into the imagenet dataset: first train a classification model to obtain the backbone weights; the backbone of the classification model is shared with this model, and training proceeds from there. 78 | #----------------------------------------------------------------------------------------------------------------------------# 79 | model_path = "model_data/deeplabv3_mobilenetv2.h5" 80 | #---------------------------------------------------------# 81 | # downsample_factor Downsampling factor: 8 or 16 82 | # 8 downsamples less and theoretically gives better results, 83 | # but it also requires more GPU memory 84 | #---------------------------------------------------------# 85 | downsample_factor = 16 86 | #------------------------------# 87 | # Input image size 88 | #------------------------------# 89 | input_shape = [512, 512] 90 | 91 | #----------------------------------------------------------------------------------------------------------------------------# 92 | # Training has two stages: the freeze stage and the unfreeze stage. The freeze stage exists for users with limited hardware. 93 | # Freeze training needs less GPU memory; on a very weak card you can set Freeze_Epoch equal to UnFreeze_Epoch to do freeze training only. 94 | # 95 | # Some suggested parameter settings; adjust them flexibly to your own needs: 96 | # (1) Training from the pretrained weights of the whole model: 97 | # Adam: 98 | # Init_Epoch = 0, Freeze_Epoch = 50, UnFreeze_Epoch = 100, Freeze_Train = True, optimizer_type = 'adam', Init_lr = 5e-4, weight_decay = 0. (freeze) 99 | # Init_Epoch = 0, UnFreeze_Epoch = 100, Freeze_Train = False, optimizer_type = 'adam', Init_lr = 5e-4, weight_decay = 0. (no freeze) 100 | # SGD: 101 | # Init_Epoch = 0, Freeze_Epoch = 50, UnFreeze_Epoch = 100, Freeze_Train = True, optimizer_type = 'sgd', Init_lr = 7e-3, weight_decay = 1e-4. (freeze) 102 | # Init_Epoch = 0, UnFreeze_Epoch = 100, Freeze_Train = False, optimizer_type = 'sgd', Init_lr = 7e-3, weight_decay = 1e-4. (no freeze) 103 | # Note: UnFreeze_Epoch can be adjusted between 100 and 300. 104 | # (2) Training from the pretrained weights of the backbone: 105 | # Adam: 106 | # Init_Epoch = 0, Freeze_Epoch = 50, UnFreeze_Epoch = 100, Freeze_Train = True, optimizer_type = 'adam', Init_lr = 5e-4, weight_decay = 0. (freeze) 107 | # Init_Epoch = 0, UnFreeze_Epoch = 100, Freeze_Train = False, optimizer_type = 'adam', Init_lr = 5e-4, weight_decay = 0. (no freeze) 108 | # 
SGD: 109 | # Init_Epoch = 0, Freeze_Epoch = 50, UnFreeze_Epoch = 120, Freeze_Train = True, optimizer_type = 'sgd', Init_lr = 7e-3, weight_decay = 1e-4. (freeze) 110 | # Init_Epoch = 0, UnFreeze_Epoch = 120, Freeze_Train = False, optimizer_type = 'sgd', Init_lr = 7e-3, weight_decay = 1e-4. (no freeze) 111 | # Note: when starting from the backbone's pretrained weights, the backbone weights are not necessarily suited to semantic segmentation, so more training is needed to escape local optima. 112 | # UnFreeze_Epoch can be adjusted between 120 and 300. 113 | # Adam converges faster than SGD, so UnFreeze_Epoch can in theory be smaller, though more epochs are still recommended. 114 | # (3) Setting batch_size: 115 | # As large as your GPU allows. Running out of memory is unrelated to dataset size; if you see OOM or CUDA out of memory, reduce batch_size. 116 | # Because of the BatchNorm layers, the minimum batch_size is 2; it cannot be 1. 117 | # Normally Freeze_batch_size should be 1-2x Unfreeze_batch_size. A large gap between them is not recommended, since it affects the automatic learning-rate adjustment. 118 | #----------------------------------------------------------------------------------------------------------------------------# 119 | #------------------------------------------------------------------# 120 | # Freeze-stage training parameters 121 | # The backbone of the model is frozen; the feature-extraction network does not change 122 | # Uses little GPU memory; only fine-tunes the network 123 | # Init_Epoch The epoch training currently starts from; it may be greater than Freeze_Epoch, e.g.: 124 | # Init_Epoch = 60, Freeze_Epoch = 50, UnFreeze_Epoch = 100 125 | # skips the freeze stage, starts directly from epoch 60, and adjusts the learning rate accordingly. 126 | # (used when resuming from a checkpoint) 127 | # Freeze_Epoch Number of epochs of freeze training 128 | # (ignored when Freeze_Train=False) 129 | # Freeze_batch_size batch_size for freeze training 130 | # (ignored when Freeze_Train=False) 131 | #------------------------------------------------------------------# 132 | Init_Epoch = 0 133 | Freeze_Epoch = 50 134 | Freeze_batch_size = 8 135 | #------------------------------------------------------------------# 136 | # Unfreeze-stage training parameters 137 | # The backbone is no longer frozen; the feature-extraction network changes 138 | # Uses more GPU memory; all parameters of the network change 139 | # UnFreeze_Epoch Total number of training epochs 140 | # Unfreeze_batch_size batch_size after unfreezing 141 | #------------------------------------------------------------------# 142 | UnFreeze_Epoch = 100 143 | Unfreeze_batch_size = 4 144 | #------------------------------------------------------------------# 145 | # Freeze_Train Whether to do freeze training 146 | # By default the backbone is trained frozen first, then unfrozen. 147 | #------------------------------------------------------------------# 148 | Freeze_Train = True 149 | 150 | #------------------------------------------------------------------# 151 | # Other training parameters: learning rate, optimizer, learning-rate decay 152 | #------------------------------------------------------------------# 153 | #------------------------------------------------------------------# 154 | # Init_lr Maximum learning rate of the model 155 | # With the Adam optimizer, Init_lr=5e-4 is recommended 156 | # With the SGD optimizer, Init_lr=7e-3 is recommended 157 | # Min_lr Minimum learning rate of the model; defaults to 0.01x the maximum 158 | #------------------------------------------------------------------# 159 | Init_lr = 7e-3 160 | Min_lr = Init_lr * 0.01 161 | #------------------------------------------------------------------# 162 | # optimizer_type Optimizer to use: adam or sgd 163 | # With the Adam optimizer, Init_lr=5e-4 is recommended 164 | # With the SGD optimizer, Init_lr=7e-3 is recommended 165 | # momentum The momentum parameter used inside the optimizer 166 | # weight_decay Weight decay, which helps prevent overfitting 167 | # adam causes weight_decay errors; set it to 0 when using adam. 168 | #------------------------------------------------------------------# 169 | optimizer_type = "sgd" 170 | momentum = 0.9 171 | weight_decay = 1e-4 172 | #------------------------------------------------------------------# 173 | # lr_decay_type Learning-rate decay schedule: 'step' or 'cos' 174 | #------------------------------------------------------------------# 175 | lr_decay_type = 'cos' 176 | #------------------------------------------------------------------# 177 | # save_period Save the weights every save_period epochs 178 | #------------------------------------------------------------------# 179 | save_period = 5 180 | #------------------------------------------------------------------# 181 | # save_dir Folder where the weights and logs are saved 182 | #------------------------------------------------------------------# 183 | save_dir = 'logs' 184 | #------------------------------------------------------------------# 185 | # eval_flag Whether to evaluate during training, on the validation set 186 | # eval_period Evaluate every eval_period epochs; frequent evaluation is not recommended 187 | # since it takes considerable time and slows training down a lot 188 | # The mIoU obtained here will differ from that of get_miou.py, for two reasons: 189 | # (1) The mIoU here is measured on the validation set. 190 | # (2) The evaluation parameters here are conservative, to speed evaluation up. 191 | #------------------------------------------------------------------# 192 | eval_flag = True 193 | eval_period = 5 194 | 195 | #------------------------------------------------------------------# 196 | # VOCdevkit_path Dataset path 197 | #------------------------------------------------------------------# 198 | VOCdevkit_path = 'VOCdevkit' 199 | #------------------------------------------------------------------# 200 | # Suggested settings: 201 | # Few classes (several): set to True 202 | # Many classes (a dozen or more) with a large batch_size (10+): set to True 203 | # Many classes (a dozen or more) with a small batch_size (below 10): set to False 204 | #------------------------------------------------------------------# 205 | dice_loss = False 206 | #------------------------------------------------------------------# 207 | # Whether to use focal loss against positive/negative sample imbalance 208 | #------------------------------------------------------------------# 209 | focal_loss = False 210 | #------------------------------------------------------------------# 211 | # Whether to give different classes different loss weights; balanced by default. 212 | # If set, make it a numpy array with length equal to num_classes. 213 | # E.g.: 214 | # num_classes = 3 215 | # cls_weights = np.array([1, 2, 3], np.float32) 216 | #------------------------------------------------------------------# 217 | cls_weights = np.ones([num_classes], np.float32) 218 | #-------------------------------------------------------------------# 219 | # Whether to use multithreaded data loading; 1 disables multithreading 220 | # Enabling it speeds up data loading but uses more memory 221 | # Enable it only when IO is the bottleneck, i.e. the GPU is much faster than image loading. 222 | #-------------------------------------------------------------------# 223 | num_workers = 1 224 | 225 | #------------------------------------------------------# 226 | # Set the GPUs to use 227 | #------------------------------------------------------# 228 | os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(str(x) for x in train_gpu) 229 | ngpus_per_node = len(train_gpu) 230 | print('Number of devices: {}'.format(ngpus_per_node)) 231 | 232 | #------------------------------------------------------# 233 | # Build the model 234 | 
#------------------------------------------------------# 235 | model_body = Deeplabv3([input_shape[0], input_shape[1], 3], num_classes, backbone = backbone, downsample_factor = downsample_factor) 236 | if model_path != '': 237 | #------------------------------------------------------# 238 | # Load the pretrained weights 239 | #------------------------------------------------------# 240 | print('Load weights {}.'.format(model_path)) 241 | model_body.load_weights(model_path, by_name=True, skip_mismatch=True) 242 | 243 | if ngpus_per_node > 1: 244 | model = multi_gpu_model(model_body, gpus=ngpus_per_node) 245 | else: 246 | model = model_body 247 | 248 | #--------------------------# 249 | # Loss function to use 250 | #--------------------------# 251 | if focal_loss: 252 | if dice_loss: 253 | loss = dice_loss_with_Focal_Loss(cls_weights) 254 | else: 255 | loss = Focal_Loss(cls_weights) 256 | else: 257 | if dice_loss: 258 | loss = dice_loss_with_CE(cls_weights) 259 | else: 260 | loss = CE(cls_weights) 261 | 262 | #---------------------------# 263 | # Read the txt files of the dataset 264 | #---------------------------# 265 | with open(os.path.join(VOCdevkit_path, "VOC2007/ImageSets/Segmentation/train.txt"),"r") as f: 266 | train_lines = f.readlines() 267 | with open(os.path.join(VOCdevkit_path, "VOC2007/ImageSets/Segmentation/val.txt"),"r") as f: 268 | val_lines = f.readlines() 269 | num_train = len(train_lines) 270 | num_val = len(val_lines) 271 | 272 | show_config( 273 | num_classes = num_classes, backbone = backbone, model_path = model_path, input_shape = input_shape, \ 274 | Init_Epoch = Init_Epoch, Freeze_Epoch = Freeze_Epoch, UnFreeze_Epoch = UnFreeze_Epoch, Freeze_batch_size = Freeze_batch_size, Unfreeze_batch_size = Unfreeze_batch_size, Freeze_Train = Freeze_Train, \ 275 | Init_lr = Init_lr, Min_lr = Min_lr, optimizer_type = optimizer_type, momentum = momentum, lr_decay_type = lr_decay_type, \ 276 | save_period = save_period, save_dir = save_dir, num_workers = num_workers, num_train = num_train, num_val = num_val 277 | ) 278 | #---------------------------------------------------------# 279 | # The total number of training epochs is the number of passes over the whole dataset 280 | # The total number of training steps is the number of gradient-descent updates 281 | # Each epoch contains several steps, and each step performs one gradient descent. 282 | # Only a minimum number of epochs is suggested here; there is no upper bound. Only the unfrozen part is considered in the computation 283 | #----------------------------------------------------------# 284 | wanted_step = 1.5e4 if optimizer_type == "sgd" else 0.5e4 285 | total_step = num_train // Unfreeze_batch_size * UnFreeze_Epoch 286 | if total_step <= wanted_step: 287 | if num_train // Unfreeze_batch_size == 0: 288 | raise ValueError('The dataset is too small to train on; please add more data.') 289 | wanted_epoch = wanted_step // (num_train // Unfreeze_batch_size) + 1 290 | print("\n\033[1;33;44m[Warning] When using the %s optimizer, a total of at least %d training steps is recommended.\033[0m"%(optimizer_type, wanted_step)) 291 | print("\033[1;33;44m[Warning] This run has %d training samples, an Unfreeze_batch_size of %d, and %d epochs in total, giving %d total training steps.\033[0m"%(num_train, Unfreeze_batch_size, UnFreeze_Epoch, total_step)) 292 | print("\033[1;33;44m[Warning] Since the total of %d training steps is below the recommended %d, consider setting the total epochs to %d.\033[0m"%(total_step, wanted_step, wanted_epoch)) 293 | 294 | for layer in model_body.layers: 295 | if isinstance(layer, DepthwiseConv2D): 296 | layer.add_loss(l2(weight_decay)(layer.depthwise_kernel)) 297 | elif isinstance(layer, Conv2D) or isinstance(layer, Dense): 298 | layer.add_loss(l2(weight_decay)(layer.kernel)) 299 | 300 | #------------------------------------------------------# 301 | # Backbone features are generic; freeze training speeds up training 302 | # and prevents the weights from being destroyed early in training. 303 | # Init_Epoch is the starting epoch 304 | # Freeze_Epoch is the number of freeze-training epochs 305 | # Epoch is the total number of training epochs 306 | # If you see OOM or run out of GPU memory, reduce Batch_size 307 | #------------------------------------------------------# 308 | if True: 309 | if Freeze_Train: 310 | if backbone=="mobilenet": 311 | freeze_layers = 146 312 | else: 313 | freeze_layers = 358 314 | for i in range(freeze_layers): model_body.layers[i].trainable = False 315 | print('Freeze the first {} layers of total {} layers.'.format(freeze_layers, len(model_body.layers))) 316 | 317 | #-------------------------------------------------------------------# 318 | # If freeze training is not used, set batch_size directly to Unfreeze_batch_size 319 | #-------------------------------------------------------------------# 320 | batch_size = Freeze_batch_size if Freeze_Train else Unfreeze_batch_size 321 | start_epoch = Init_Epoch 322 | end_epoch = Freeze_Epoch if Freeze_Train else UnFreeze_Epoch 323 | 324 | #-------------------------------------------------------------------# 325 | # Adapt the learning rate to the current batch_size 326 | #-------------------------------------------------------------------# 327 | nbs = 16 328 | lr_limit_max = 5e-4 if optimizer_type == 'adam' else 1e-1 329 | lr_limit_min = 3e-4 if optimizer_type == 'adam' else 5e-4 330 | if backbone == "xception": 331 | lr_limit_max = 1e-4 if optimizer_type == 'adam' else 1e-1 332 | lr_limit_min = 1e-4 if optimizer_type == 'adam' else 5e-4 333 | Init_lr_fit = min(max(batch_size / nbs * Init_lr, lr_limit_min), lr_limit_max) 334 | Min_lr_fit = min(max(batch_size / nbs * Min_lr, lr_limit_min * 1e-2), lr_limit_max * 1e-2) 335 | 336 | optimizer = { 337 | 'adam' : Adam(lr = Init_lr_fit, beta_1 = momentum), 338 | 'sgd' : SGD(lr = Init_lr_fit, momentum = momentum, nesterov=True) 339 | }[optimizer_type] 340 | model.compile(loss = loss, 341 | optimizer = optimizer, 342 | metrics = [f_score()]) 343 | 344 | #---------------------------------------# 345 | # Get the learning-rate decay function 346 | #---------------------------------------# 347 | lr_scheduler_func = get_lr_scheduler(lr_decay_type, Init_lr_fit, Min_lr_fit, UnFreeze_Epoch) 348 | 349 | epoch_step = num_train // batch_size 350 | epoch_step_val = num_val // batch_size 351 | 352 | if epoch_step == 0 or epoch_step_val == 0: 353 | raise ValueError('The dataset is too small to train on; please add more data.') 354 | 355 | train_dataloader = DeeplabDataset(train_lines, input_shape, batch_size, num_classes, True, VOCdevkit_path) 356 | val_dataloader = DeeplabDataset(val_lines, input_shape, batch_size, num_classes, False, VOCdevkit_path) 357 | 358 | #-------------------------------------------------------------------------------# 359 | # Training callback settings 
360 | # logging sets the TensorBoard save path 361 | # checkpoint sets the details of weight saving; period controls how many epochs between saves 362 | # lr_scheduler sets how the learning rate decays 363 | # early_stopping sets early stopping: training ends automatically when val_loss stops improving, meaning the model has basically converged 364 | #-------------------------------------------------------------------------------# 365 | time_str = datetime.datetime.strftime(datetime.datetime.now(),'%Y_%m_%d_%H_%M_%S') 366 | log_dir = os.path.join(save_dir, "loss_" + str(time_str)) 367 | logging = TensorBoard(log_dir) 368 | loss_history = LossHistory(log_dir) 369 | if ngpus_per_node > 1: 370 | checkpoint = ParallelModelCheckpoint(model_body, os.path.join(save_dir, "ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5"), 371 | monitor = 'val_loss', save_weights_only = True, save_best_only = False, period = save_period) 372 | checkpoint_last = ParallelModelCheckpoint(model_body, os.path.join(save_dir, "last_epoch_weights.h5"), 373 | monitor = 'val_loss', save_weights_only = True, save_best_only = False, period = 1) 374 | checkpoint_best = ParallelModelCheckpoint(model_body, os.path.join(save_dir, "best_epoch_weights.h5"), 375 | monitor = 'val_loss', save_weights_only = True, save_best_only = True, period = 1) 376 | else: 377 | checkpoint = ModelCheckpoint(os.path.join(save_dir, "ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5"), 378 | monitor = 'val_loss', save_weights_only = True, save_best_only = False, period = save_period) 379 | checkpoint_last = ModelCheckpoint(os.path.join(save_dir, "last_epoch_weights.h5"), 380 | monitor = 'val_loss', save_weights_only = True, save_best_only = False, period = 1) 381 | checkpoint_best = ModelCheckpoint(os.path.join(save_dir, "best_epoch_weights.h5"), 382 | monitor = 'val_loss', save_weights_only = True, save_best_only = True, period = 1) 383 | early_stopping = EarlyStopping(monitor='val_loss', min_delta = 0, patience = 10, verbose = 1) 384 | lr_scheduler = LearningRateScheduler(lr_scheduler_func, verbose = 1) 385 | eval_callback = EvalCallback(model_body, input_shape, num_classes, val_lines, VOCdevkit_path, log_dir, \ 386 | eval_flag=eval_flag, period=eval_period) 387 | callbacks = [logging, loss_history, checkpoint, checkpoint_last, checkpoint_best, lr_scheduler, eval_callback] 388 | 389 | if start_epoch < end_epoch: 390 | print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size)) 391 | model.fit_generator( 392 | generator = train_dataloader, 393 | steps_per_epoch = epoch_step, 394 | validation_data = val_dataloader, 395 | validation_steps = epoch_step_val, 396 | epochs = end_epoch, 397 | initial_epoch = start_epoch, 398 | use_multiprocessing = True if num_workers > 1 else False, 399 | workers = num_workers, 400 | callbacks = callbacks 401 | ) 402 | #---------------------------------------# 403 | # If the model has a frozen part, 404 | # unfreeze it and set the parameters 405 | #---------------------------------------# 406 | if Freeze_Train: 407 | batch_size = Unfreeze_batch_size 408 | start_epoch = Freeze_Epoch if start_epoch < Freeze_Epoch else start_epoch 409 | end_epoch = UnFreeze_Epoch 410 | 411 | #-------------------------------------------------------------------# 412 | # Adapt the learning rate to the current batch_size 413 | #-------------------------------------------------------------------# 414 | nbs = 16 415 | lr_limit_max = 5e-4 if optimizer_type == 'adam' else 1e-1 416 | lr_limit_min = 3e-4 if optimizer_type == 'adam' else 5e-4 417 | if backbone == "xception": 418 | lr_limit_max = 1e-4 if optimizer_type == 'adam' else 1e-1 419 | lr_limit_min = 1e-4 if optimizer_type == 'adam' else 5e-4 420 | Init_lr_fit = min(max(batch_size / nbs * Init_lr, lr_limit_min), lr_limit_max) 421 | Min_lr_fit = min(max(batch_size / nbs * Min_lr, lr_limit_min * 1e-2), lr_limit_max * 1e-2) 422 | #---------------------------------------# 423 | # Get the learning-rate decay function 424 | #---------------------------------------# 425 | lr_scheduler_func = get_lr_scheduler(lr_decay_type, Init_lr_fit, Min_lr_fit, UnFreeze_Epoch) 426 | lr_scheduler = LearningRateScheduler(lr_scheduler_func, verbose = 1) 427 | callbacks = [logging, loss_history, checkpoint, checkpoint_last, checkpoint_best, lr_scheduler, eval_callback] 428 | 429 | for i in range(len(model_body.layers)): 430 | model_body.layers[i].trainable = True 431 | model.compile(loss = loss, 432 | optimizer = optimizer, 433 | metrics = [f_score()]) 434 | 435 | epoch_step = num_train // batch_size 436 | epoch_step_val = num_val // batch_size 437 | 438 | if epoch_step == 0 or epoch_step_val == 0: 439 | raise ValueError("The dataset is too small to continue training; please add more data.") 440 | 441 | train_dataloader.batch_size = Unfreeze_batch_size 442 | val_dataloader.batch_size = Unfreeze_batch_size 443 | 444 | print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size)) 445 | model.fit_generator( 446 | generator = train_dataloader, 447 | steps_per_epoch = epoch_step, 448 | validation_data = val_dataloader, 449 | validation_steps = epoch_step_val, 450 | epochs = end_epoch, 451 | initial_epoch = start_epoch, 452 | use_multiprocessing = True if num_workers > 1 else False, 453 | workers = num_workers, 454 | callbacks = callbacks 455 | ) 456 | -------------------------------------------------------------------------------- /utils/__init__.py: -------------------------------------------------------------------------------- 1 | # -------------------------------------------------------------------------------- /utils/callbacks.py: -------------------------------------------------------------------------------- 1 | import os 2 | import math 3 | 4 | import keras 5 | import matplotlib 6 | matplotlib.use('Agg') 7 | from matplotlib import pyplot as plt 8 | import scipy.signal 9 | 10 | import cv2 11 | import shutil 12 | import numpy as np 13 | 14 | from PIL import Image 15 | from keras import backend as K 16 | from tqdm import tqdm 17 | from .utils import cvtColor, preprocess_input, resize_image 18 | from .utils_metrics import compute_mIoU 19 | 20 | 21 | class LossHistory(keras.callbacks.Callback): 22 | def 
__init__(self, log_dir): 23 | self.log_dir = log_dir 24 | self.losses = [] 25 | self.val_loss = [] 26 | 27 | os.makedirs(self.log_dir) 28 | 29 | def on_epoch_end(self, epoch, logs={}): 30 | if not os.path.exists(self.log_dir): 31 | os.makedirs(self.log_dir) 32 | 33 | self.losses.append(logs.get('loss')) 34 | self.val_loss.append(logs.get('val_loss')) 35 | 36 | with open(os.path.join(self.log_dir, "epoch_loss.txt"), 'a') as f: 37 | f.write(str(logs.get('loss'))) 38 | f.write("\n") 39 | with open(os.path.join(self.log_dir, "epoch_val_loss.txt"), 'a') as f: 40 | f.write(str(logs.get('val_loss'))) 41 | f.write("\n") 42 | self.loss_plot() 43 | 44 | def loss_plot(self): 45 | iters = range(len(self.losses)) 46 | 47 | plt.figure() 48 | plt.plot(iters, self.losses, 'red', linewidth = 2, label='train loss') 49 | plt.plot(iters, self.val_loss, 'coral', linewidth = 2, label='val loss') 50 | try: 51 | if len(self.losses) < 25: 52 | num = 5 53 | else: 54 | num = 15 55 | 56 | plt.plot(iters, scipy.signal.savgol_filter(self.losses, num, 3), 'green', linestyle = '--', linewidth = 2, label='smooth train loss') 57 | plt.plot(iters, scipy.signal.savgol_filter(self.val_loss, num, 3), '#8B4513', linestyle = '--', linewidth = 2, label='smooth val loss') 58 | except: 59 | pass 60 | 61 | plt.grid(True) 62 | plt.xlabel('Epoch') 63 | plt.ylabel('Loss') 64 | plt.title('A Loss Curve') 65 | plt.legend(loc="upper right") 66 | 67 | plt.savefig(os.path.join(self.log_dir, "epoch_loss.png")) 68 | 69 | plt.cla() 70 | plt.close("all") 71 | 72 | class ExponentDecayScheduler(keras.callbacks.Callback): 73 | def __init__(self, 74 | decay_rate, 75 | verbose=0): 76 | super(ExponentDecayScheduler, self).__init__() 77 | self.decay_rate = decay_rate 78 | self.verbose = verbose 79 | self.learning_rates = [] 80 | 81 | def on_epoch_end(self, batch, logs=None): 82 | learning_rate = K.get_value(self.model.optimizer.lr) * self.decay_rate 83 | K.set_value(self.model.optimizer.lr, learning_rate) 84 | if self.verbose > 
0: 85 | print('Setting learning rate to %s.' % (learning_rate)) 86 | 87 | class WarmUpCosineDecayScheduler(keras.callbacks.Callback): 88 | def __init__(self, T_max, eta_min=0, verbose=0): 89 | super(WarmUpCosineDecayScheduler, self).__init__() 90 | self.T_max = T_max 91 | self.eta_min = eta_min 92 | self.verbose = verbose 93 | self.init_lr = 0 94 | self.last_epoch = 0 95 | 96 | def on_train_begin(self, batch, logs=None): 97 | self.init_lr = K.get_value(self.model.optimizer.lr) 98 | 99 | def on_epoch_end(self, batch, logs=None): 100 | learning_rate = self.eta_min + (self.init_lr - self.eta_min) * (1 + math.cos(math.pi * self.last_epoch / self.T_max)) / 2 101 | self.last_epoch += 1 102 | 103 | K.set_value(self.model.optimizer.lr, learning_rate) 104 | if self.verbose > 0: 105 | print('Setting learning rate to %s.' % (learning_rate)) 106 | 107 | class ParallelModelCheckpoint(keras.callbacks.ModelCheckpoint): 108 | def __init__(self, model, filepath, monitor='val_loss', verbose=0, 109 | save_best_only=False, save_weights_only=False, 110 | mode='auto', period=1): 111 | self.single_model = model 112 | super(ParallelModelCheckpoint,self).__init__(filepath, monitor, verbose,save_best_only, save_weights_only,mode, period) 113 | 114 | def set_model(self, model): 115 | super(ParallelModelCheckpoint,self).set_model(self.single_model) 116 | 117 | class EvalCallback(keras.callbacks.Callback): 118 | def __init__(self, model_body, input_shape, num_classes, image_ids, dataset_path, log_dir,\ 119 | miou_out_path=".temp_miou_out", eval_flag=True, period=1): 120 | super(EvalCallback, self).__init__() 121 | 122 | self.model_body = model_body 123 | self.input_shape = input_shape 124 | self.num_classes = num_classes 125 | self.image_ids = image_ids 126 | self.dataset_path = dataset_path 127 | self.log_dir = log_dir 128 | self.miou_out_path = miou_out_path 129 | self.eval_flag = eval_flag 130 | self.period = period 131 | 132 | self.image_ids = [image_id.split()[0] for image_id in 
image_ids] 133 | self.mious = [0] 134 | self.epoches = [0] 135 | if self.eval_flag: 136 | with open(os.path.join(self.log_dir, "epoch_miou.txt"), 'a') as f: 137 | f.write(str(0)) 138 | f.write("\n") 139 | 140 | def get_miou_png(self, image): 141 | #---------------------------------------------------------# 142 | # Convert the image to RGB here, to keep grayscale images from failing at prediction time. 143 | # The code only supports prediction on RGB images; all other image types are converted to RGB 144 | #---------------------------------------------------------# 145 | image = cvtColor(image) 146 | original_h = np.array(image).shape[0] 147 | original_w = np.array(image).shape[1] 148 | #---------------------------------------------------------# 149 | # Add gray bars to the image for a distortion-free resize 150 | #---------------------------------------------------------# 151 | image_data, nw, nh = resize_image(image, (self.input_shape[1], self.input_shape[0])) 152 | #---------------------------------------------------------# 153 | # Normalize and add the batch_size dimension 154 | #---------------------------------------------------------# 155 | image_data = np.expand_dims(preprocess_input(np.array(image_data, np.float32)), 0) 156 | 157 | #--------------------------------------# 158 | # Feed the image through the network for prediction 159 | #--------------------------------------# 160 | pr = self.model_body.predict(image_data)[0] 161 | #--------------------------------------# 162 | # Crop away the gray-bar padding 163 | #--------------------------------------# 164 | pr = pr[int((self.input_shape[0] - nh) // 2) : int((self.input_shape[0] - nh) // 2 + nh), \ 165 | int((self.input_shape[1] - nw) // 2) : int((self.input_shape[1] - nw) // 2 + nw)] 166 | #--------------------------------------# 167 | # Resize back to the original image size 168 | #--------------------------------------# 169 | pr = cv2.resize(pr, (original_w, original_h), interpolation = cv2.INTER_LINEAR) 170 | #---------------------------------------------------# 171 | # Take the class of each pixel 172 | #---------------------------------------------------# 173 | pr = pr.argmax(axis=-1) 174 | 175 | image = Image.fromarray(np.uint8(pr)) 176 | return image 177 | 178 | def 
on_epoch_end(self, epoch, logs=None): 179 | temp_epoch = epoch + 1 180 | if temp_epoch % self.period == 0 and self.eval_flag: 181 | gt_dir = os.path.join(self.dataset_path, "VOC2007/SegmentationClass/") 182 | pred_dir = os.path.join(self.miou_out_path, 'detection-results') 183 | if not os.path.exists(self.miou_out_path): 184 | os.makedirs(self.miou_out_path) 185 | if not os.path.exists(pred_dir): 186 | os.makedirs(pred_dir) 187 | print("Get miou.") 188 | for image_id in tqdm(self.image_ids): 189 | #-------------------------------# 190 | # Read the image from file 191 | #-------------------------------# 192 | image_path = os.path.join(self.dataset_path, "VOC2007/JPEGImages/"+image_id+".jpg") 193 | image = Image.open(image_path) 194 | #------------------------------# 195 | # Get the predicted png 196 | #------------------------------# 197 | image = self.get_miou_png(image) 198 | image.save(os.path.join(pred_dir, image_id + ".png")) 199 | 200 | print("Calculate miou.") 201 | _, IoUs, _, _ = compute_mIoU(gt_dir, pred_dir, self.image_ids, self.num_classes, None) # run the mIoU computation 202 | temp_miou = np.nanmean(IoUs) * 100 203 | 204 | self.mious.append(temp_miou) 205 | self.epoches.append(temp_epoch) 206 | 207 | with open(os.path.join(self.log_dir, "epoch_miou.txt"), 'a') as f: 208 | f.write(str(temp_miou)) 209 | f.write("\n") 210 | 211 | plt.figure() 212 | plt.plot(self.epoches, self.mious, 'red', linewidth = 2, label='val miou') 213 | 214 | plt.grid(True) 215 | plt.xlabel('Epoch') 216 | plt.ylabel('Miou') 217 | plt.title('Miou Curve') 218 | plt.legend(loc="upper right") 219 | 220 | plt.savefig(os.path.join(self.log_dir, "epoch_miou.png")) 221 | plt.cla() 222 | plt.close("all") 223 | 224 | print("Get miou done.") 225 | shutil.rmtree(self.miou_out_path) 226 | -------------------------------------------------------------------------------- /utils/dataloader.py: -------------------------------------------------------------------------------- 1 | import math 2 | import os 3 | from random import shuffle 4 
| 5 | import cv2 6 | import keras 7 | import numpy as np 8 | from PIL import Image 9 | 10 | from utils.utils import cvtColor, preprocess_input 11 | 12 | 13 | class DeeplabDataset(keras.utils.Sequence): 14 | def __init__(self, annotation_lines, input_shape, batch_size, num_classes, train, dataset_path): 15 | self.annotation_lines = annotation_lines 16 | self.length = len(self.annotation_lines) 17 | self.input_shape = input_shape 18 | self.batch_size = batch_size 19 | self.num_classes = num_classes 20 | self.train = train 21 | self.dataset_path = dataset_path 22 | 23 | def __len__(self): 24 | return math.ceil(len(self.annotation_lines) / float(self.batch_size)) 25 | 26 | def __getitem__(self, index): 27 | images = [] 28 | targets = [] 29 | for i in range(index * self.batch_size, (index + 1) * self.batch_size): 30 | i = i % self.length 31 | name = self.annotation_lines[i].split()[0] 32 | #-------------------------------# 33 | # 从文件中读取图像 34 | #-------------------------------# 35 | jpg = Image.open(os.path.join(os.path.join(self.dataset_path, "VOC2007/JPEGImages"), name + ".jpg")) 36 | png = Image.open(os.path.join(os.path.join(self.dataset_path, "VOC2007/SegmentationClass"), name + ".png")) 37 | #-------------------------------# 38 | # 数据增强 39 | #-------------------------------# 40 | jpg, png = self.get_random_data(jpg, png, self.input_shape, random = self.train) 41 | jpg = preprocess_input(np.array(jpg, np.float64)) 42 | png = np.array(png) 43 | png[png >= self.num_classes] = self.num_classes 44 | #-------------------------------------------------------# 45 | # 转化成one_hot的形式 46 | # 在这里需要+1是因为voc数据集有些标签具有白边部分 47 | # 我们需要将白边部分进行忽略,+1的目的是方便忽略。 48 | #-------------------------------------------------------# 49 | seg_labels = np.eye(self.num_classes + 1)[png.reshape([-1])] 50 | seg_labels = seg_labels.reshape((int(self.input_shape[0]), int(self.input_shape[1]), self.num_classes + 1)) 51 | 52 | images.append(jpg) 53 | targets.append(seg_labels) 54 | 55 | images = 
np.array(images) 56 | targets = np.array(targets) 57 | return images, targets 58 | 59 | def on_epoch_end(self): 60 | shuffle(self.annotation_lines) 61 | 62 | def rand(self, a=0, b=1): 63 | return np.random.rand() * (b - a) + a 64 | 65 | def get_random_data(self, image, label, input_shape, jitter=.3, hue=.1, sat=0.7, val=0.3, random=True): 66 | image = cvtColor(image) 67 | label = Image.fromarray(np.array(label)) 68 | #------------------------------# 69 | # 获得图像的高宽与目标高宽 70 | #------------------------------# 71 | iw, ih = image.size 72 | h, w = input_shape 73 | 74 | if not random: 75 | iw, ih = image.size 76 | scale = min(w/iw, h/ih) 77 | nw = int(iw*scale) 78 | nh = int(ih*scale) 79 | 80 | image = image.resize((nw,nh), Image.BICUBIC) 81 | new_image = Image.new('RGB', [w, h], (128,128,128)) 82 | new_image.paste(image, ((w-nw)//2, (h-nh)//2)) 83 | 84 | label = label.resize((nw,nh), Image.NEAREST) 85 | new_label = Image.new('L', [w, h], (0)) 86 | new_label.paste(label, ((w-nw)//2, (h-nh)//2)) 87 | return new_image, new_label 88 | 89 | #------------------------------------------# 90 | # 对图像进行缩放并且进行长和宽的扭曲 91 | #------------------------------------------# 92 | new_ar = iw/ih * self.rand(1-jitter,1+jitter) / self.rand(1-jitter,1+jitter) 93 | scale = self.rand(0.25, 2) 94 | if new_ar < 1: 95 | nh = int(scale*h) 96 | nw = int(nh*new_ar) 97 | else: 98 | nw = int(scale*w) 99 | nh = int(nw/new_ar) 100 | image = image.resize((nw,nh), Image.BICUBIC) 101 | label = label.resize((nw,nh), Image.NEAREST) 102 | 103 | #------------------------------------------# 104 | # 翻转图像 105 | #------------------------------------------# 106 | flip = self.rand()<.5 107 | if flip: 108 | image = image.transpose(Image.FLIP_LEFT_RIGHT) 109 | label = label.transpose(Image.FLIP_LEFT_RIGHT) 110 | 111 | #------------------------------------------# 112 | # 将图像多余的部分加上灰条 113 | #------------------------------------------# 114 | dx = int(self.rand(0, w-nw)) 115 | dy = int(self.rand(0, h-nh)) 116 | new_image = 
Image.new('RGB', (w,h), (128,128,128)) 117 | new_label = Image.new('L', (w,h), (0)) 118 | new_image.paste(image, (dx, dy)) 119 | new_label.paste(label, (dx, dy)) 120 | image = new_image 121 | label = new_label 122 | 123 | image_data = np.array(image, np.uint8) 124 | 125 | #------------------------------------------# 126 | # 高斯模糊 127 | #------------------------------------------# 128 | blur = self.rand() < 0.25 129 | if blur: 130 | image_data = cv2.GaussianBlur(image_data, (5, 5), 0) 131 | 132 | #------------------------------------------# 133 | # 旋转 134 | #------------------------------------------# 135 | rotate = self.rand() < 0.25 136 | if rotate: 137 | center = (w // 2, h // 2) 138 | rotation = np.random.randint(-10, 11) 139 | M = cv2.getRotationMatrix2D(center, -rotation, scale=1) 140 | image_data = cv2.warpAffine(image_data, M, (w, h), flags=cv2.INTER_CUBIC, borderValue=(128,128,128)) 141 | label = cv2.warpAffine(np.array(label, np.uint8), M, (w, h), flags=cv2.INTER_NEAREST, borderValue=(0)) 142 | 143 | #---------------------------------# 144 | # 对图像进行色域变换 145 | # 计算色域变换的参数 146 | #---------------------------------# 147 | r = np.random.uniform(-1, 1, 3) * [hue, sat, val] + 1 148 | #---------------------------------# 149 | # 将图像转到HSV上 150 | #---------------------------------# 151 | hue, sat, val = cv2.split(cv2.cvtColor(image_data, cv2.COLOR_RGB2HSV)) 152 | dtype = image_data.dtype 153 | #---------------------------------# 154 | # 应用变换 155 | #---------------------------------# 156 | x = np.arange(0, 256, dtype=r.dtype) 157 | lut_hue = ((x * r[0]) % 180).astype(dtype) 158 | lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) 159 | lut_val = np.clip(x * r[2], 0, 255).astype(dtype) 160 | 161 | image_data = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))) 162 | image_data = cv2.cvtColor(image_data, cv2.COLOR_HSV2RGB) 163 | 164 | return image_data, label 165 | 
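The one-hot step in `__getitem__` above (clamping out-of-range pixels and indexing `np.eye(num_classes + 1)`) is easy to misread, so here is a minimal stand-alone sketch; the helper name `to_one_hot` is ours for illustration and does not exist in the repo:

```python
import numpy as np

def to_one_hot(png, num_classes):
    # Hypothetical helper mirroring the __getitem__ logic above.
    # Pixel values >= num_classes (e.g. the VOC white-border value 255)
    # are clamped to an extra "ignore" class at index num_classes.
    png = np.array(png)
    png[png >= num_classes] = num_classes
    # Indexing np.eye(num_classes + 1) with the flattened label map gives
    # one one-hot row per pixel; reshape back to (H, W, num_classes + 1).
    seg_labels = np.eye(num_classes + 1)[png.reshape(-1)]
    return seg_labels.reshape(png.shape[0], png.shape[1], num_classes + 1)

label = np.array([[0, 1], [255, 2]], np.uint8)   # 255 = VOC border pixel
one_hot = to_one_hot(label, num_classes=3)
print(one_hot.shape)   # (2, 2, 4): the 4th channel marks ignored pixels
print(one_hot[1, 0])   # [0. 0. 0. 1.]: the border pixel
```

The metrics in `utils/utils_metrics.py` (e.g. `Iou_score`) then slice this last channel off with `y_true[..., :-1]`, so border pixels never contribute to the score.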
-------------------------------------------------------------------------------- /utils/utils.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from PIL import Image 3 | 4 | #---------------------------------------------------------# 5 | # 将图像转换成RGB图像,防止灰度图在预测时报错。 6 | # 代码仅仅支持RGB图像的预测,所有其它类型的图像都会转化成RGB 7 | #---------------------------------------------------------# 8 | def cvtColor(image): 9 | if len(np.shape(image)) == 3 and np.shape(image)[2] == 3: 10 | return image 11 | else: 12 | image = image.convert('RGB') 13 | return image 14 | 15 | #---------------------------------------------------# 16 | # 对输入图像进行resize 17 | #---------------------------------------------------# 18 | def resize_image(image, size): 19 | iw, ih = image.size 20 | w, h = size 21 | 22 | scale = min(w/iw, h/ih) 23 | nw = int(iw*scale) 24 | nh = int(ih*scale) 25 | 26 | image = image.resize((nw,nh), Image.BICUBIC) 27 | new_image = Image.new('RGB', size, (128,128,128)) 28 | new_image.paste(image, ((w-nw)//2, (h-nh)//2)) 29 | 30 | return new_image, nw, nh 31 | 32 | def preprocess_input(image): 33 | image = image / 127.5 - 1 34 | return image 35 | 36 | def show_config(**kwargs): 37 | print('Configurations:') 38 | print('-' * 70) 39 | print('|%25s | %40s|' % ('keys', 'values')) 40 | print('-' * 70) 41 | for key, value in kwargs.items(): 42 | print('|%25s | %40s|' % (str(key), str(value))) 43 | print('-' * 70) 44 | 45 | #-------------------------------------------------------------------------------------------------------------------------------# 46 | # From https://github.com/ckyrkou/Keras_FLOP_Estimator 47 | # Fix lots of bugs 48 | #-------------------------------------------------------------------------------------------------------------------------------# 49 | def net_flops(model, table=False, print_result=True): 50 | if (table == True): 51 | print("\n") 52 | print('%25s | %16s | %16s | %16s | %16s | %6s | %6s' % ( 53 | 'Layer Name', 
'Input Shape', 'Output Shape', 'Kernel Size', 'Filters', 'Strides', 'FLOPS')) 54 | print('=' * 120) 55 | 56 | #---------------------------------------------------# 57 | # 总的FLOPs 58 | #---------------------------------------------------# 59 | t_flops = 0 60 | factor = 1e9 61 | 62 | for l in model.layers: 63 | try: 64 | #--------------------------------------# 65 | # 所需参数的初始化定义 66 | #--------------------------------------# 67 | o_shape, i_shape, strides, ks, filters = ('', '', ''), ('', '', ''), (1, 1), (0, 0), 0 68 | flops = 0 69 | #--------------------------------------# 70 | # 获得层的名字 71 | #--------------------------------------# 72 | name = l.name 73 | 74 | if ('InputLayer' in str(l)): 75 | i_shape = l.get_input_shape_at(0)[1:4] 76 | o_shape = l.get_output_shape_at(0)[1:4] 77 | 78 | #--------------------------------------# 79 | # Reshape层 80 | #--------------------------------------# 81 | elif ('Reshape' in str(l)): 82 | i_shape = l.get_input_shape_at(0)[1:4] 83 | o_shape = l.get_output_shape_at(0)[1:4] 84 | 85 | #--------------------------------------# 86 | # 填充层 87 | #--------------------------------------# 88 | elif ('Padding' in str(l)): 89 | i_shape = l.get_input_shape_at(0)[1:4] 90 | o_shape = l.get_output_shape_at(0)[1:4] 91 | 92 | #--------------------------------------# 93 | # 平铺层 94 | #--------------------------------------# 95 | elif ('Flatten' in str(l)): 96 | i_shape = l.get_input_shape_at(0)[1:4] 97 | o_shape = l.get_output_shape_at(0)[1:4] 98 | 99 | #--------------------------------------# 100 | # 激活函数层 101 | #--------------------------------------# 102 | elif 'Activation' in str(l): 103 | i_shape = l.get_input_shape_at(0)[1:4] 104 | o_shape = l.get_output_shape_at(0)[1:4] 105 | 106 | #--------------------------------------# 107 | # LeakyReLU 108 | #--------------------------------------# 109 | elif 'LeakyReLU' in str(l): 110 | for i in range(len(l._inbound_nodes)): 111 | i_shape = l.get_input_shape_at(i)[1:4] 112 | o_shape = 
l.get_output_shape_at(i)[1:4] 113 | 114 | flops += i_shape[0] * i_shape[1] * i_shape[2] 115 | 116 | #--------------------------------------# 117 | # 池化层 118 | #--------------------------------------# 119 | elif 'MaxPooling' in str(l): 120 | i_shape = l.get_input_shape_at(0)[1:4] 121 | o_shape = l.get_output_shape_at(0)[1:4] 122 | 123 | #--------------------------------------# 124 | # 池化层 125 | #--------------------------------------# 126 | elif ('AveragePooling' in str(l) and 'Global' not in str(l)): 127 | strides = l.strides 128 | ks = l.pool_size 129 | 130 | for i in range(len(l._inbound_nodes)): 131 | i_shape = l.get_input_shape_at(i)[1:4] 132 | o_shape = l.get_output_shape_at(i)[1:4] 133 | 134 | flops += o_shape[0] * o_shape[1] * o_shape[2] 135 | 136 | #--------------------------------------# 137 | # 全局池化层 138 | #--------------------------------------# 139 | elif ('AveragePooling' in str(l) and 'Global' in str(l)): 140 | for i in range(len(l._inbound_nodes)): 141 | i_shape = l.get_input_shape_at(i)[1:4] 142 | o_shape = l.get_output_shape_at(i)[1:4] 143 | 144 | flops += (i_shape[0] * i_shape[1] + 1) * i_shape[2] 145 | 146 | #--------------------------------------# 147 | # 标准化层 148 | #--------------------------------------# 149 | elif ('BatchNormalization' in str(l)): 150 | for i in range(len(l._inbound_nodes)): 151 | i_shape = l.get_input_shape_at(i)[1:4] 152 | o_shape = l.get_output_shape_at(i)[1:4] 153 | 154 | temp_flops = 1 155 | for i in range(len(i_shape)): 156 | temp_flops *= i_shape[i] 157 | temp_flops *= 2 158 | 159 | flops += temp_flops 160 | 161 | #--------------------------------------# 162 | # 全连接层 163 | #--------------------------------------# 164 | elif ('Dense' in str(l)): 165 | for i in range(len(l._inbound_nodes)): 166 | i_shape = l.get_input_shape_at(i)[1:4] 167 | o_shape = l.get_output_shape_at(i)[1:4] 168 | 169 | temp_flops = 1 170 | for i in range(len(o_shape)): 171 | temp_flops *= o_shape[i] 172 | 173 | if (i_shape[-1] == None): 174 | 
temp_flops = temp_flops * o_shape[-1] 175 | else: 176 | temp_flops = temp_flops * i_shape[-1] 177 | flops += temp_flops 178 | 179 | #--------------------------------------# 180 | # 普通卷积层 181 | #--------------------------------------# 182 | elif ('Conv2D' in str(l) and 'DepthwiseConv2D' not in str(l) and 'SeparableConv2D' not in str(l)): 183 | strides = l.strides 184 | ks = l.kernel_size 185 | filters = l.filters 186 | bias = 1 if l.use_bias else 0 187 | 188 | for i in range(len(l._inbound_nodes)): 189 | i_shape = l.get_input_shape_at(i)[1:4] 190 | o_shape = l.get_output_shape_at(i)[1:4] 191 | 192 | if (filters == None): 193 | filters = i_shape[2] 194 | flops += filters * o_shape[0] * o_shape[1] * (ks[0] * ks[1] * i_shape[2] + bias) 195 | 196 | #--------------------------------------# 197 | # 逐层卷积层 198 | #--------------------------------------# 199 | elif ('Conv2D' in str(l) and 'DepthwiseConv2D' in str(l) and 'SeparableConv2D' not in str(l)): 200 | strides = l.strides 201 | ks = l.kernel_size 202 | filters = l.filters 203 | bias = 1 if l.use_bias else 0 204 | 205 | for i in range(len(l._inbound_nodes)): 206 | i_shape = l.get_input_shape_at(i)[1:4] 207 | o_shape = l.get_output_shape_at(i)[1:4] 208 | 209 | if (filters == None): 210 | filters = i_shape[2] 211 | flops += filters * o_shape[0] * o_shape[1] * (ks[0] * ks[1] + bias) 212 | 213 | #--------------------------------------# 214 | # 深度可分离卷积层 215 | #--------------------------------------# 216 | elif ('Conv2D' in str(l) and 'DepthwiseConv2D' not in str(l) and 'SeparableConv2D' in str(l)): 217 | strides = l.strides 218 | ks = l.kernel_size 219 | filters = l.filters 220 | bias = 1 if l.use_bias else 0 221 | for i in range(len(l._inbound_nodes)): 222 | i_shape = l.get_input_shape_at(i)[1:4] 223 | o_shape = l.get_output_shape_at(i)[1:4] 224 | 225 | if (filters == None): 226 | filters = i_shape[2] 227 | flops += i_shape[2] * o_shape[0] * o_shape[1] * (ks[0] * ks[1] + bias) + \ 228 | filters * o_shape[0] * o_shape[1] * (1 * 1 * i_shape[2] + bias) 229
| #--------------------------------------# 230 | # 模型中有模型时 231 | #--------------------------------------# 232 | elif 'Model' in str(l): 233 | flops = net_flops(l, print_result=False) 234 | 235 | t_flops += flops 236 | 237 | if (table == True): 238 | print('%25s | %16s | %16s | %16s | %16s | %6s | %5.4f' % ( 239 | name[:25], str(i_shape), str(o_shape), str(ks), str(filters), str(strides), flops)) 240 | 241 | except: 242 | pass 243 | 244 | t_flops = t_flops * 2 245 | if print_result: 246 | show_flops = t_flops / factor 247 | print('Total GFLOPs: %.3fG' % (show_flops)) 248 | return t_flops -------------------------------------------------------------------------------- /utils/utils_metrics.py: -------------------------------------------------------------------------------- 1 | import csv 2 | import os 3 | from os.path import join 4 | 5 | import matplotlib.pyplot as plt 6 | import numpy as np 7 | from keras import backend 8 | from PIL import Image 9 | 10 | 11 | def Iou_score(smooth = 1e-5, threhold = 0.5): 12 | def _Iou_score(y_true, y_pred): 13 | # score calculation 14 | y_pred = backend.greater(y_pred, threhold) 15 | y_pred = backend.cast(y_pred, backend.floatx()) 16 | intersection = backend.sum(y_true[...,:-1] * y_pred, axis=[0,1,2]) 17 | union = backend.sum(y_true[...,:-1] + y_pred, axis=[0,1,2]) - intersection 18 | 19 | score = (intersection + smooth) / (union + smooth) 20 | return score 21 | return _Iou_score 22 | 23 | def f_score(beta=1, smooth = 1e-5, threhold = 0.5): 24 | def _f_score(y_true, y_pred): 25 | y_pred = backend.greater(y_pred, threhold) 26 | y_pred = backend.cast(y_pred, backend.floatx()) 27 | 28 | tp = backend.sum(y_true[...,:-1] * y_pred, axis=[0,1,2]) 29 | fp = backend.sum(y_pred , axis=[0,1,2]) - tp 30 | fn = backend.sum(y_true[...,:-1], axis=[0,1,2]) - tp 31 | 32 | score = ((1 + beta ** 2) * tp + smooth) \ 33 | / ((1 + beta ** 2) * tp + beta ** 2 * fn + fp + smooth) 34 | return score 35 | return _f_score 36 | 37 | # 设标签宽W,长H 38 | def 
fast_hist(a, b, n): 39 | #--------------------------------------------------------------------------------# 40 | # a是转化成一维数组的标签,形状(H×W,);b是转化成一维数组的预测结果,形状(H×W,) 41 | #--------------------------------------------------------------------------------# 42 | k = (a >= 0) & (a < n) 43 | #--------------------------------------------------------------------------------# 44 | # np.bincount计算了从0到n**2-1这n**2个数中每个数出现的次数,返回值形状(n, n) 45 | # 返回中,写对角线上的为分类正确的像素点 46 | #--------------------------------------------------------------------------------# 47 | return np.bincount(n * a[k].astype(int) + b[k], minlength=n ** 2).reshape(n, n) 48 | 49 | def per_class_iu(hist): 50 | return np.diag(hist) / np.maximum((hist.sum(1) + hist.sum(0) - np.diag(hist)), 1) 51 | 52 | def per_class_PA_Recall(hist): 53 | return np.diag(hist) / np.maximum(hist.sum(1), 1) 54 | 55 | def per_class_Precision(hist): 56 | return np.diag(hist) / np.maximum(hist.sum(0), 1) 57 | 58 | def per_Accuracy(hist): 59 | return np.sum(np.diag(hist)) / np.maximum(np.sum(hist), 1) 60 | 61 | def compute_mIoU(gt_dir, pred_dir, png_name_list, num_classes, name_classes=None): 62 | print('Num classes', num_classes) 63 | #-----------------------------------------# 64 | # 创建一个全是0的矩阵,是一个混淆矩阵 65 | #-----------------------------------------# 66 | hist = np.zeros((num_classes, num_classes)) 67 | 68 | #------------------------------------------------# 69 | # 获得验证集标签路径列表,方便直接读取 70 | # 获得验证集图像分割结果路径列表,方便直接读取 71 | #------------------------------------------------# 72 | gt_imgs = [join(gt_dir, x + ".png") for x in png_name_list] 73 | pred_imgs = [join(pred_dir, x + ".png") for x in png_name_list] 74 | 75 | #------------------------------------------------# 76 | # 读取每一个(图片-标签)对 77 | #------------------------------------------------# 78 | for ind in range(len(gt_imgs)): 79 | #------------------------------------------------# 80 | # 读取一张图像分割结果,转化成numpy数组 81 | #------------------------------------------------# 82 | pred = 
np.array(Image.open(pred_imgs[ind])) 83 | #------------------------------------------------# 84 | # 读取一张对应的标签,转化成numpy数组 85 | #------------------------------------------------# 86 | label = np.array(Image.open(gt_imgs[ind])) 87 | 88 | # 如果图像分割结果与标签的大小不一样,这张图片就不计算 89 | if len(label.flatten()) != len(pred.flatten()): 90 | print( 91 | 'Skipping: len(gt) = {:d}, len(pred) = {:d}, {:s}, {:s}'.format( 92 | len(label.flatten()), len(pred.flatten()), gt_imgs[ind], 93 | pred_imgs[ind])) 94 | continue 95 | 96 | #------------------------------------------------# 97 | # 对一张图片计算21×21的hist矩阵,并累加 98 | #------------------------------------------------# 99 | hist += fast_hist(label.flatten(), pred.flatten(), num_classes) 100 | # 每计算10张就输出一下目前已计算的图片中所有类别平均的mIoU值 101 | if name_classes is not None and ind > 0 and ind % 10 == 0: 102 | print('{:d} / {:d}: mIou-{:0.2f}%; mPA-{:0.2f}%; Accuracy-{:0.2f}%'.format( 103 | ind, 104 | len(gt_imgs), 105 | 100 * np.nanmean(per_class_iu(hist)), 106 | 100 * np.nanmean(per_class_PA_Recall(hist)), 107 | 100 * per_Accuracy(hist) 108 | ) 109 | ) 110 | #------------------------------------------------# 111 | # 计算所有验证集图片的逐类别mIoU值 112 | #------------------------------------------------# 113 | IoUs = per_class_iu(hist) 114 | PA_Recall = per_class_PA_Recall(hist) 115 | Precision = per_class_Precision(hist) 116 | #------------------------------------------------# 117 | # 逐类别输出一下mIoU值 118 | #------------------------------------------------# 119 | if name_classes is not None: 120 | for ind_class in range(num_classes): 121 | print('===>' + name_classes[ind_class] + ':\tIou-' + str(round(IoUs[ind_class] * 100, 2)) \ 122 | + '; Recall (equal to the PA)-' + str(round(PA_Recall[ind_class] * 100, 2))+ '; Precision-' + str(round(Precision[ind_class] * 100, 2))) 123 | 124 | #-----------------------------------------------------------------# 125 | # 在所有验证集图像上求所有类别平均的mIoU值,计算时忽略NaN值 126 | #-----------------------------------------------------------------# 127 | 
print('===> mIoU: ' + str(round(np.nanmean(IoUs) * 100, 2)) + '; mPA: ' + str(round(np.nanmean(PA_Recall) * 100, 2)) + '; Accuracy: ' + str(round(per_Accuracy(hist) * 100, 2))) 128 | return np.array(hist, np.int64), IoUs, PA_Recall, Precision 129 | 130 | def adjust_axes(r, t, fig, axes): 131 | bb = t.get_window_extent(renderer=r) 132 | text_width_inches = bb.width / fig.dpi 133 | current_fig_width = fig.get_figwidth() 134 | new_fig_width = current_fig_width + text_width_inches 135 | proportion = new_fig_width / current_fig_width 136 | x_lim = axes.get_xlim() 137 | axes.set_xlim([x_lim[0], x_lim[1] * proportion]) 138 | 139 | def draw_plot_func(values, name_classes, plot_title, x_label, output_path, tick_font_size = 12, plt_show = True): 140 | fig = plt.gcf() 141 | axes = plt.gca() 142 | plt.barh(range(len(values)), values, color='royalblue') 143 | plt.title(plot_title, fontsize=tick_font_size + 2) 144 | plt.xlabel(x_label, fontsize=tick_font_size) 145 | plt.yticks(range(len(values)), name_classes, fontsize=tick_font_size) 146 | r = fig.canvas.get_renderer() 147 | for i, val in enumerate(values): 148 | str_val = " " + str(val) 149 | if val < 1.0: 150 | str_val = " {0:.2f}".format(val) 151 | t = plt.text(val, i, str_val, color='royalblue', va='center', fontweight='bold') 152 | if i == (len(values)-1): 153 | adjust_axes(r, t, fig, axes) 154 | 155 | fig.tight_layout() 156 | fig.savefig(output_path) 157 | if plt_show: 158 | plt.show() 159 | plt.close() 160 | 161 | def show_results(miou_out_path, hist, IoUs, PA_Recall, Precision, name_classes, tick_font_size = 12): 162 | draw_plot_func(IoUs, name_classes, "mIoU = {0:.2f}%".format(np.nanmean(IoUs)*100), "Intersection over Union", \ 163 | os.path.join(miou_out_path, "mIoU.png"), tick_font_size = tick_font_size, plt_show = True) 164 | print("Save mIoU out to " + os.path.join(miou_out_path, "mIoU.png")) 165 | 166 | draw_plot_func(PA_Recall, name_classes, "mPA = {0:.2f}%".format(np.nanmean(PA_Recall)*100), "Pixel Accuracy", \ 167
| os.path.join(miou_out_path, "mPA.png"), tick_font_size = tick_font_size, plt_show = False) 168 | print("Save mPA out to " + os.path.join(miou_out_path, "mPA.png")) 169 | 170 | draw_plot_func(PA_Recall, name_classes, "mRecall = {0:.2f}%".format(np.nanmean(PA_Recall)*100), "Recall", \ 171 | os.path.join(miou_out_path, "Recall.png"), tick_font_size = tick_font_size, plt_show = False) 172 | print("Save Recall out to " + os.path.join(miou_out_path, "Recall.png")) 173 | 174 | draw_plot_func(Precision, name_classes, "mPrecision = {0:.2f}%".format(np.nanmean(Precision)*100), "Precision", \ 175 | os.path.join(miou_out_path, "Precision.png"), tick_font_size = tick_font_size, plt_show = False) 176 | print("Save Precision out to " + os.path.join(miou_out_path, "Precision.png")) 177 | 178 | with open(os.path.join(miou_out_path, "confusion_matrix.csv"), 'w', newline='') as f: 179 | writer = csv.writer(f) 180 | writer_list = [] 181 | writer_list.append([' '] + [str(c) for c in name_classes]) 182 | for i in range(len(hist)): 183 | writer_list.append([name_classes[i]] + [str(x) for x in hist[i]]) 184 | writer.writerows(writer_list) 185 | print("Save confusion_matrix out to " + os.path.join(miou_out_path, "confusion_matrix.csv")) 186 | -------------------------------------------------------------------------------- /voc_annotation.py: -------------------------------------------------------------------------------- 1 | import os 2 | import random 3 | 4 | import numpy as np 5 | from PIL import Image 6 | from tqdm import tqdm 7 | 8 | #-------------------------------------------------------# 9 | # 想要增加测试集修改trainval_percent 10 | # 修改train_percent用于改变验证集的比例 9:1 11 | # 12 | # 当前该库将测试集当作验证集使用,不单独划分测试集 13 | #-------------------------------------------------------# 14 | trainval_percent = 1 15 | train_percent = 0.9 16 | #-------------------------------------------------------# 17 | # 指向VOC数据集所在的文件夹 18 | # 默认指向根目录下的VOC数据集 19 | #-------------------------------------------------------# 20 | 
VOCdevkit_path = 'VOCdevkit' 21 | 22 | if __name__ == "__main__": 23 | random.seed(0) 24 | print("Generate txt in ImageSets.") 25 | segfilepath = os.path.join(VOCdevkit_path, 'VOC2007/SegmentationClass') 26 | saveBasePath = os.path.join(VOCdevkit_path, 'VOC2007/ImageSets/Segmentation') 27 | 28 | temp_seg = os.listdir(segfilepath) 29 | total_seg = [] 30 | for seg in temp_seg: 31 | if seg.endswith(".png"): 32 | total_seg.append(seg) 33 | 34 | num = len(total_seg) 35 | list = range(num) 36 | tv = int(num*trainval_percent) 37 | tr = int(tv*train_percent) 38 | trainval= random.sample(list,tv) 39 | train = random.sample(trainval,tr) 40 | 41 | print("train and val size",tv) 42 | print("train size",tr) 43 | ftrainval = open(os.path.join(saveBasePath,'trainval.txt'), 'w') 44 | ftest = open(os.path.join(saveBasePath,'test.txt'), 'w') 45 | ftrain = open(os.path.join(saveBasePath,'train.txt'), 'w') 46 | fval = open(os.path.join(saveBasePath,'val.txt'), 'w') 47 | 48 | for i in list: 49 | name = total_seg[i][:-4]+'\n' 50 | if i in trainval: 51 | ftrainval.write(name) 52 | if i in train: 53 | ftrain.write(name) 54 | else: 55 | fval.write(name) 56 | else: 57 | ftest.write(name) 58 | 59 | ftrainval.close() 60 | ftrain.close() 61 | fval.close() 62 | ftest.close() 63 | print("Generate txt in ImageSets done.") 64 | 65 | print("Check datasets format, this may take a while.") 66 | print("检查数据集格式是否符合要求,这可能需要一段时间。") 67 | classes_nums = np.zeros([256], np.int64) 68 | for i in tqdm(list): 69 | name = total_seg[i] 70 | png_file_name = os.path.join(segfilepath, name) 71 | if not os.path.exists(png_file_name): 72 | raise ValueError("未检测到标签图片%s,请查看具体路径下文件是否存在以及后缀是否为png。"%(png_file_name)) 73 | 74 | png = np.array(Image.open(png_file_name), np.uint8) 75 | if len(np.shape(png)) > 2: 76 | print("标签图片%s的shape为%s,不属于灰度图或者八位彩图,请仔细检查数据集格式。"%(name, str(np.shape(png)))) 77 | print("标签图片需要为灰度图或者八位彩图,标签的每个像素点的值就是这个像素点所属的种类。") 78 | 79 | classes_nums += np.bincount(np.reshape(png,
[-1]), minlength=256) 80 | 81 | print("打印像素点的值与数量。") 82 | print('-' * 37) 83 | print("| %15s | %15s |"%("Key", "Value")) 84 | print('-' * 37) 85 | for i in range(256): 86 | if classes_nums[i] > 0: 87 | print("| %15s | %15s |"%(str(i), str(classes_nums[i]))) 88 | print('-' * 37) 89 | 90 | if classes_nums[255] > 0 and classes_nums[0] > 0 and np.sum(classes_nums[1:255]) == 0: 91 | print("检测到标签中像素点的值仅包含0与255,数据格式有误。") 92 | print("二分类问题需要将标签修改为背景的像素点值为0,目标的像素点值为1。") 93 | elif classes_nums[0] > 0 and np.sum(classes_nums[1:]) == 0: 94 | print("检测到标签中仅仅包含背景像素点,数据格式有误,请仔细检查数据集格式。") 95 | 96 | print("JPEGImages中的图片应当为.jpg文件、SegmentationClass中的图片应当为.png文件。") 97 | print("如果格式有误,参考:") 98 | print("https://github.com/bubbliiiing/segmentation-format-fix") -------------------------------------------------------------------------------- /常见问题汇总.md: -------------------------------------------------------------------------------- 1 | 该markdown文档里面包含了许多的问题,大家可在这里面查询了之后再去B站询问。 2 | 3 | 如果在md文档看着不清晰,可以去博客上看[https://blog.csdn.net/weixin_44791964/article/details/107517428](https://blog.csdn.net/weixin_44791964/article/details/107517428),博客上有目录。 4 | 5 | # 问题汇总 6 | ## 1、下载问题 7 | **问:up主,可以给我发一份代码吗,代码在哪里下载啊? 8 | 答:Github上的地址就在视频简介里。复制一下就能进去下载了。** 9 | 10 | **问:up主,为什么我下载的代码里面,model_data下面没有.pth或者.h5文件? 11 | 答:我一般会把权值上传到百度网盘,在GITHUB的README里面就能找到。** 12 | 13 | **问:up主,为什么我下载的代码提示压缩包损坏? 14 | 答:重新去Github下载。** 15 | 16 | ## 2、数据集问题 17 | **问:up主,XXXX数据集在哪里下载啊? 18 | 答:一般数据集的下载地址我会放在README里面,基本上都有,目标检测的数据集我放在了资源汇总帖。**[https://blog.csdn.net/weixin_44791964/article/details/105123842](https://blog.csdn.net/weixin_44791964/article/details/105123842) 19 | 20 | ## 3、GPU利用问题 21 | **问:up主,我好像没有在用gpu进行训练啊,怎么看是不是用了GPU进行训练? 
22 | 答:查看是否使用GPU进行训练一般使用NVIDIA在命令行的查看命令,如果要看任务管理器的话,请看性能部分GPU的显存是否利用,或者查看任务管理器的Cuda,而非Copy。** 23 | ![在这里插入图片描述](https://img-blog.csdnimg.cn/20201013234241524.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80NDc5MTk2NA==,size_16,color_FFFFFF,t_70#pic_center) 24 | ## 4、环境配置问题 25 | **pytorch代码对应的pytorch版本为1.2,博客地址对应**[https://blog.csdn.net/weixin_44791964/article/details/106037141](https://blog.csdn.net/weixin_44791964/article/details/106037141)。 26 | 27 | **keras代码对应的tensorflow版本为1.13.2,keras版本是2.1.5,博客地址对应**[https://blog.csdn.net/weixin_44791964/article/details/104702142](https://blog.csdn.net/weixin_44791964/article/details/104702142)。 28 | 29 | **tf2代码对应的tensorflow版本为2.2.0,无需安装keras,博客地址对应**[https://blog.csdn.net/weixin_44791964/article/details/109161493](https://blog.csdn.net/weixin_44791964/article/details/109161493)。 30 | 31 | **问:为什么我安装了tensorflow-gpu但是却没用利用GPU进行训练呢? 32 | 答:确认tensorflow-gpu已经装好,利用pip list查看tensorflow版本,然后查看任务管理器或者利用nvidia命令看看是否使用了gpu进行训练,任务管理器的话要看显存使用情况。** 33 | 34 | **问:你的代码某某某版本的tensorflow和pytorch能用嘛? 35 | 答:最好按照我推荐的配置,配置教程也有!其它版本的我没有试过!** 36 | 37 | **问:up主,为什么我按照你的环境配置后还是不能使用? 
38 | 答:请把你的GPU、CUDA、CUDNN、TF版本以及PYTORCH版本私聊告诉我。** 39 | 40 | **问:出现如下错误** 41 | ```python 42 | Traceback (most recent call last): 43 | File "C:\Users\focus\Anaconda3\ana\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in 44 | from tensorflow.python.pywrap_tensorflow_internal import * 45 | File "C:\Users\focus\Anaconda3\ana\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in 46 | pywrap_tensorflow_internal = swig_import_helper() 47 | File "C:\Users\focus\Anaconda3\ana\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper 48 | _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) 49 | File "C:\Users\focus\Anaconda3\ana\envs\tensorflow-gpu\lib\imp.py", line 243, in load_modulereturn load_dynamic(name, filename, file) 50 | File "C:\Users\focus\Anaconda3\ana\envs\tensorflow-gpu\lib\imp.py", line 343, in load_dynamic 51 | return _load(spec) 52 | ImportError: DLL load failed: 找不到指定的模块。 53 | ``` 54 | **答:如果没重启过就重启一下,否则重新按照步骤安装,还无法解决则把你的GPU、CUDA、CUDNN、TF版本以及PYTORCH版本私聊告诉我。** 55 | ## 5、shape不匹配问题 56 | ### a、训练时shape不匹配问题 57 | **问:up主,为什么运行train.py会提示shape不匹配啊? 
58 | 答:因为你训练的种类和原始的种类不同,网络结构会变化,所以最尾部的shape会有少量不匹配。** 59 | ### b、预测时shape不匹配问题 60 | **问:为什么我运行predict.py会提示我说shape不匹配呀。 61 | 在Pytorch里面是这样的:** 62 | ![在这里插入图片描述](https://img-blog.csdnimg.cn/20200722171631901.png) 63 | 在Keras里面是这样的: 64 | ![在这里插入图片描述](https://img-blog.csdnimg.cn/20200722171523380.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80NDc5MTk2NA==,size_16,color_FFFFFF,t_70) 65 | 66 | **答:原因主要有仨: 67 | 1、在ssd、FasterRCNN里面,可能是train.py里面的num_classes没改。 68 | 2、model_path没改。 69 | 3、classes_path没改。 70 | 请检查清楚了!确定自己所用的model_path和classes_path是对应的!训练的时候用到的num_classes或者classes_path也需要检查!** 71 | ## 6、no module问题 72 | **问:为什么提示说no module name utils.utils(no module name nets.yolo、no module name nets.ssd等一系列问题)啊? 73 | 答:根目录不对,查查相对目录的概念。查了基本上就明白了。** 74 | 75 | **问:为什么提示说no module name matplotlib(no module name PIL)? 76 | 答:打开命令行安装就好。pip install matplotlib** 77 | 78 | **问:为什么提示说No module named 'torch' ? 79 | 答:其实我也真的很想知道为什么会有这个问题……这个pytorch没装是什么情况?一般就俩情况,一个是真的没装,还有一个是装到其它环境了,当前激活的环境不是自己装的环境。** 80 | 81 | **问:为什么提示说No module named 'tensorflow' ? 82 | 答:同上。** 83 | ## 7、显存问题 84 | **问:为什么我运行train.py下面的命令行闪的贼快,还提示OOM啥的? 85 | 答:爆显存了,可以改小batch_size,如果batch_size=1才能运行的话,那么直接换网络吧,SSD的显存占用率是最小的,建议用SSD; 86 | 2G显存:SSD 87 | 4G显存:YOLOV3 Faster RCNN 88 | 6G显存:YOLOV4 Retinanet M2det Efficientdet等 89 | 8G+显存:随便选吧** 90 | 91 | **问:为什么提示 RuntimeError: CUDA out of memory. Tried to allocate 52.00 MiB (GPU 0; 15.90 GiB total capacity; 14.85 GiB already allocated; 51.88 MiB free; 15.07 GiB reserved in total by PyTorch)? 92 | 答:同上** 93 | ## 8、训练问题(冻结训练,LOSS问题等) 94 | **问:为什么要冻结训练和解冻训练呀? 
95 | 答:这是迁移学习的思想,因为神经网络主干特征提取部分所提取到的特征是通用的,我们冻结起来训练可以加快训练效率,也可以防止权值被破坏。** 96 | 97 | **问:为什么我的网络不收敛啊,LOSS是XXXX。 98 | 答:不同网络的LOSS不同,LOSS只是一个参考指标,用于查看网络是否收敛,而非评价网络好坏,我的yolo代码都没有归一化,所以LOSS值看起来比较高,LOSS的值不重要,重要的是是否在变小,预测是否有效果!** 99 | 100 | **问:为什么我的训练效果不好?预测了没有框(框不准)。 101 | 答: 102 | 1、 数据集过少,小于500的自行考虑增加数据集。 103 | 2、 是否解冻训练。 104 | 3、 如果是yoloV4可以考虑关闭mosaic,mosaic不适用所有的情况。 105 | 4、 网络不适应,比如SSD不适合小目标,因为先验框固定了。 106 | 5、 不同网络的LOSS不同,LOSS只是一个参考指标,用于查看网络是否收敛,而非评价网络好坏,我的yolo代码都没有归一化,所以LOSS值看起来比较高,LOSS的值不重要,重要的是是否收敛! 107 | 6、 测试不同的模型,确认数据集是好的。 108 | 7、 确认自己是否按照步骤去做了,如果比如voc_annotation.py里面的classes是否修改了等。** 109 | 110 | **问:我怎么出现了gbk什么的编码错误啊:** 111 | ```python 112 | UnicodeDecodeError: 'gbk' codec can't decode byte 0xa6 in position 446: illegal multibyte sequence 113 | ``` 114 | **答:标签和路径不要使用中文,如果一定要使用中文,请注意处理的时候编码的问题,改成打开文件的encoding方式改为utf-8。** 115 | 116 | **问:我的图片是xxx*xxx的分辨率的,可以用吗!** 117 | **答:可以用,代码里面会自动进行resize或者数据增强。** 118 | 119 | **问:为什么我yolo的loss降到了0.0几了什么都预测不出来?** 120 | **答:yolo系列的loss是降不到这么多的。查看2007_train.txt文件是否有目标信息。** 121 | 122 | **问:怎么进行多GPU训练? 123 | 答:这个直接百度就好了,实现并不复杂。** 124 | ## 9、灰度图问题 125 | **问:能不能训练灰度图(预测灰度图)啊? 126 | 答:我的大多数库会将灰度图转化成RGB进行训练和预测,如果遇到代码不能训练或者预测灰度图的情况,可以尝试一下在get_random_data里面将Image.open后的结果转换成RGB,预测的时候也这样试试。(仅供参考)** 127 | 128 | ## 10、断点续练问题 129 | **问:我已经训练过几个世代了,能不能从这个基础上继续开始训练 130 | 答:可以,你在训练前,和载入预训练权重一样载入训练过的权重就行了。一般训练好的权重会保存在logs文件夹里面,将model_path修改成你要开始的权值的路径即可。** 131 | 132 | ## 11、预训练权重的问题 133 | **问:如果我要训练其它的数据集,预训练权重要怎么办啊?** 134 | **答:还是查查迁移学习吧,就是同一个思想,用原来的就行。** 135 | 136 | **问:up,我修改了网络,预训练权重还能用吗? 
137 | 答:修改了主干的话,如果不是用的现有的网络,基本上预训练权重是不能用的,要么就自己判断权值里卷积核的shape然后自己匹配,要么只能自己预训练去了;修改了后半部分的话,前半部分的主干部分的预训练权重还是可以用的,如果是pytorch代码的话,需要自己修改一下载入权值的方式,判断shape后载入,如果是keras代码,直接by_name=True,skip_mismatch=True即可。** 138 | 权值匹配的方式可以参考如下: 139 | ```python 140 | # 加快模型训练的效率 141 | print('Loading weights into state dict...') 142 | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') 143 | model_dict = model.state_dict() 144 | pretrained_dict = torch.load(model_path, map_location=device) 145 | a = {} 146 | for k, v in pretrained_dict.items(): 147 | try: 148 | if np.shape(model_dict[k]) == np.shape(v): 149 | a[k]=v 150 | except: 151 | pass 152 | model_dict.update(a) 153 | model.load_state_dict(model_dict) 154 | print('Finished!') 155 | ``` 156 | 157 | 158 | **问:我要怎么不使用预训练权重啊? 159 | 答:把载入预训练权重的代码注释了就行。** 160 | 161 | **问:为什么我不使用预训练权重效果这么差啊? 162 | 答:因为随机初始化的权值不好,提取的特征不好,也就导致了模型训练的效果不好,voc07+12、coco+voc07+12效果都不一样,预训练权重还是非常重要的。** 163 | 164 | ## 12、交流群问题 165 | **问:up,有没有QQ群啥的呢? 166 | 答:没有没有,我没有时间管理QQ群……** 167 | 168 | ## 13、视频检测问题与摄像头检测问题 169 | **问:怎么用摄像头检测呀? 170 | 答:基本上所有目标检测库都有video.py可以进行摄像头检测,也有视频详细解释了摄像头检测的思路。** 171 | 172 | **问:怎么用视频检测呀? 173 | 答:同上** 174 | 175 | ## 14、保存问题 176 | **问:检测完的图片怎么保存? 177 | 答:一般目标检测用的是Image,所以查询一下PIL库的Image如何进行保存。如果有些库用的是cv2,那就是查一下cv2怎么保存图片** 178 | 179 | **问:怎么用视频保存呀? 180 | 答:直接百度查,cv2如何保存图片** 181 | 182 | ## 15、遍历问题 183 | **问:如何对一个文件夹的图片进行遍历? 
184 | A: Use os.listdir to list all the images in the folder, then run detection on each one following the flow in predict.py.**
185 | 
186 | **Q: How do I iterate over a folder of images and save the results?
187 | A: For iterating, use os.listdir to list all the images, then run detection on each one following the flow in predict.py. For saving, detection code generally uses PIL's Image, so look up how to save an image with PIL; if the repo uses cv2, look up how to save an image with cv2.**
188 | 
189 | ## 16. Path problems
190 | **Q: Why do I get an error like this:**
191 | ```python
192 | FileNotFoundError: [Errno 2] No such file or directory
193 | ……………………………………
194 | ……………………………………
195 | ```
196 | **A: Check the folder path and make sure the file exists; also check whether the file paths inside 2007_train.txt are correct.**
197 | A few important points about paths:
198 | **Folder names must not contain spaces.
199 | Mind the difference between relative and absolute paths.
200 | Read up on how paths work if you are unsure.**
201 | 
202 | **Almost every path problem comes down to the working directory; make sure you understand relative paths!**
203 | ## 17. Comparison with the originals
204 | **Q: How does this code compare with the original? Can it match the original results?
205 | A: Basically yes. I have tested everything on VOC data; I don't have a good GPU, so I cannot train or test on coco.**
206 | 
207 | **Q: Did you implement all of yolov4's tricks? How big is the gap with the original?
208 | A: Not all of them. YOLOV4 uses far too many improvements to implement and list completely, so I only included the ones I found interesting and clearly effective. Even the authors' own code does not use the SAM attention module mentioned in the paper. There are many other tricks; not all of them help, and I cannot implement them all. As for the comparison with the original: I cannot train on the coco dataset, but users report the gap is small.**
209 | ## 18. FPS (detection speed)
210 | **Q: What FPS can this reach? Can it reach XX FPS?
211 | A: FPS depends on the machine's hardware: a higher-end machine is faster, a lower-end one slower.**
212 | 
213 | **Q: Why do I only get around ten FPS testing yolov4 (or others) on a server?
214 | A: Check that tensorflow-gpu or the GPU build of pytorch is installed correctly. If it is, use time.time() to find which part of detect_image takes the longest (the network is not the only cost; other processing, such as drawing, also takes time).**
215 | 
216 | **Q: Why does the paper claim XX FPS but I cannot reach it here?
217 | A: Check that tensorflow-gpu or the GPU build of pytorch is installed correctly. If it is, use time.time() to find which part of detect_image takes the longest (the network is not the only cost; other processing, such as drawing, also takes time). Some papers also predict with multi-image batches, which I have not implemented.**
218 | 
219 | ## 19. Predicted image not displayed
220 | **Q: Why doesn't your code display the image after prediction? It only prints the detected objects in the terminal.
221 | A: Install an image viewer on your system.**
222 | 
223 | ## 20. Evaluation (mAP, PR curve, recall, precision for detection)
224 | **Q: How do I compute mAP?
225 | A: Watch the mAP video; the procedure is the same everywhere.**
226 | 
227 | **Q: In get_map.py there is a MINOVERLAP parameter. Is it the iou?
228 | A: Yes, it is the iou threshold: it measures the overlap between a predicted box and a ground-truth box, and if the overlap exceeds MINOVERLAP the prediction counts as correct.**
229 | 
230 | **Q: Why is self.confidence (self.score) in get_map.py set so low?
231 | A: See the theory part of the mAP video: you need all predictions, regardless of confidence, before the PR curve can be drawn.**
232 | 
233 | **Q: Can you explain how to draw a PR curve?
234 | A: Watch the mAP video; the results include a PR curve.**
235 | 
236 | **Q: How do I compute the recall and precision metrics?
237 | A: Those two metrics are defined relative to a specific confidence threshold. If you want to plot recall and precision against confidence, I have not implemented that yet...**
238 | 
239 | ## 21. Training on the coco dataset
240 | **Q: How do I train a detector on the COCO dataset?
241 | A: The txt files needed for coco training have the same format as in qqwweee's yolo3 repo; refer to it.**
242 | 
243 | ## 22. How to learn
244 | **Q: What is your learning path? I am a beginner; how should I learn?
245 | A: A few caveats:
246 | 1. I am not an expert; there is plenty I don't know, and my path may not suit everyone.
247 | 2. My lab does not do deep learning, so I taught myself most of this by trial and error, and I cannot guarantee it is the right way.
248 | 3. In my view, learning mostly comes down to self-study.**
249 | As for the path itself: I first went through Mofan's python tutorials to get started with tensorflow, keras and pytorch, then learned SSD and YOLO, then studied the classic convolutional networks, and after that read many different codebases. My method is to read code line by line, understanding the whole execution flow and how the feature-layer shapes change. It took a great deal of time; there is no shortcut, you simply have to put in the hours.
250 | 
251 | ## 23. Model optimization (model modification)
252 | **Q: Do you have code for the YOLO series with Focal Loss? Does it help?
253 | A: Many people have tried it, and the gain is small (it sometimes even gets worse); YOLO already has its own way of balancing positive and negative samples.**
254 | 
255 | **Q: I modified the network. Can I still use the pretrained weights?
256 | A: If you modified the backbone and it is not an existing off-the-shelf network, the pretrained weights basically cannot be used. Either inspect the shapes of the kernels in the weight file and match them yourself, or pretrain from scratch. If you only modified the later half, the pretrained weights for the backbone part are still usable. In pytorch code you need to change the weight-loading logic to check shapes before loading; in keras code simply pass by_name=True, skip_mismatch=True.**
257 | The weight matching can be done as follows (this assumes torch and numpy as np are already imported and that model and model_path are defined):
258 | ```python
259 | # Load only the pretrained weights whose shapes match the current model
260 | print('Loading weights into state dict...')
261 | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
262 | model_dict = model.state_dict()
263 | pretrained_dict = torch.load(model_path, map_location=device)
264 | a = {}
265 | for k, v in pretrained_dict.items():
266 |     try:
267 |         if np.shape(model_dict[k]) == np.shape(v):
268 |             a[k] = v
269 |     except KeyError:
270 |         pass
271 | model_dict.update(a)
272 | model.load_state_dict(model_dict)
273 | print('Finished!')
274 | ```
275 | ## 24. Deployment
276 | I have never deployed these models to phones or other devices, so I cannot answer most deployment questions...
277 | 
278 | ## 25. CUDA installation failures
279 | CUDA generally requires Visual Studio to be installed first; the 2017 edition is fine.
280 | 
281 | ## 26. Ubuntu
282 | **All the code runs on Ubuntu; I have tested both systems.**
283 | 
284 | ## 27. VSCODE
285 | **Q: Why does VSCODE show a pile of errors?
286 | A: Mine shows a pile of errors too, but nothing is actually wrong; it is a VSCODE linting issue. If the errors bother you, install Pycharm instead.**
287 | 
288 | ## 28. Training and predicting on the CPU
289 | **For the keras and tf2 code, just install the CPU build of tensorflow to train and predict on the CPU.**
290 | 
291 | **For the pytorch code, change cuda=True to cuda=False. The Faster rcnn in Pytorch really needs a GPU because it uses cupy; to run without one you would have to research how to use cupy without a GPU.**
292 | 
293 | ## 29. Other problems
294 | **Q: Why do I get TypeError: cat() got an unexpected keyword argument 'axis', Traceback (most recent call last), or AttributeError: 'Tensor' object has no attribute 'bool'?
295 | A: These are version problems; use torch 1.2 or newer.**
296 | 
297 | **Many other odd problems are version problems too; install Keras and tensorflow following my video tutorials. For example, if you installed tensorflow2, don't ask me why Keras-yolo won't run; of course it won't.**
298 | 
299 | --------------------------------------------------------------------------------
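As a closing illustration, the shape-matching rule behind the weight-loading snippet in sections 11 and 23 can be sketched without torch installed. This is only a toy model of the idea: tuples stand in for tensor shapes, and every layer name below is made up.

```python
# Toy stand-ins for model.state_dict() and a pretrained checkpoint:
# each entry maps a layer name to its weight shape (a tuple instead of a tensor).
model_dict = {'backbone.conv1': (64, 3, 3, 3), 'head.fc': (10, 512)}
pretrained_dict = {
    'backbone.conv1': (64, 3, 3, 3),  # same name, same shape          -> keep
    'head.fc':        (1000, 512),    # same name, class count changed -> skip
    'extra.layer':    (256, 256),     # not present in the model       -> skip
}

# Keep only the entries whose key exists in the model and whose shape matches,
# which is exactly what the try/except shape check does in the torch snippet.
matched = {k: v for k, v in pretrained_dict.items()
           if k in model_dict and model_dict[k] == v}

print(sorted(matched))  # ['backbone.conv1']
```

Only the backbone layer survives, which is why a modified detection head does not stop you from reusing backbone weights.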