├── .gitignore
├── LICENSE
├── README.md
├── data
│   ├── ADE20K_object150_train.txt
│   ├── ADE20K_object150_val.txt
│   ├── color150.mat
│   ├── mapped_25info.txt
│   ├── object150_info.csv
│   ├── object25_info.csv
│   ├── object25_info.xlsx
│   ├── train.odgt
│   └── validation.odgt
├── dataset.py
├── demo_test.sh
├── lib
│   ├── nn
│   │   ├── __init__.py
│   │   ├── modules
│   │   │   ├── __init__.py
│   │   │   ├── batchnorm.py
│   │   │   ├── comm.py
│   │   │   ├── replicate.py
│   │   │   ├── tests
│   │   │   │   ├── test_numeric_batchnorm.py
│   │   │   │   └── test_sync_batchnorm.py
│   │   │   └── unittest.py
│   │   └── parallel
│   │       ├── __init__.py
│   │       └── data_parallel.py
│   └── utils
│       ├── __init__.py
│       ├── data
│       │   ├── __init__.py
│       │   ├── dataloader.py
│       │   ├── dataset.py
│       │   ├── distributed.py
│       │   └── sampler.py
│       └── th.py
├── models
│   ├── __init__.py
│   ├── models.py
│   └── resnet.py
├── test.py
├── utils.py
└── wiki
    └── img
        ├── artifacts.jpg
        ├── bottleneck.jpg
        ├── compare-res.jpg
        ├── dilation.jpg
        ├── drna-drnb-drnc.jpg
        ├── error-rate-dilation.jpg
        ├── example-psp.jpg
        ├── loss-supervision.jpg
        ├── overall-loss-supervision.jpg
        ├── psp-on-imgenet.jpg
        ├── psp-semantic-cityscapes.jpg
        ├── pspnet-comparison.jpg
        ├── pspnet-structure.jpg
        ├── regularization-supervision.jpg
        ├── resblock.jpg
        ├── resplainacc.jpg
        ├── scene-parsing-issues-pyramid.jpg
        ├── semantic-acc-dilated.jpg
        ├── semantic-sample.jpg
        ├── supervision.jpg
        └── top-accuracies-on-places-supervision.jpg
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib64/
18 | parts/
19 | sdist/
20 | var/
21 | wheels/
22 | *.egg-info/
23 | .installed.cfg
24 | *.egg
25 | MANIFEST
26 |
27 | # PyInstaller
28 | # Usually these files are written by a python script from a template
29 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
30 | *.manifest
31 | *.spec
32 |
33 | # Installer logs
34 | pip-log.txt
35 | pip-delete-this-directory.txt
36 |
37 | # Unit test / coverage reports
38 | htmlcov/
39 | .tox/
40 | .coverage
41 | .coverage.*
42 | .cache
43 | nosetests.xml
44 | coverage.xml
45 | *.cover
46 | .hypothesis/
47 | .pytest_cache/
48 |
49 | # Translations
50 | *.mo
51 | *.pot
52 |
53 | # Django stuff:
54 | *.log
55 | local_settings.py
56 | db.sqlite3
57 |
58 | # Flask stuff:
59 | instance/
60 | .webassets-cache
61 |
62 | # Scrapy stuff:
63 | .scrapy
64 |
65 | # Sphinx documentation
66 | docs/_build/
67 |
68 | # PyBuilder
69 | target/
70 |
71 | # Jupyter Notebook
72 | .ipynb_checkpoints
73 |
74 | # pyenv
75 | .python-version
76 |
77 | # celery beat schedule file
78 | celerybeat-schedule
79 |
80 | # SageMath parsed files
81 | *.sage.py
82 |
83 | # Environments
84 | .env
85 | .venv
86 | env/
87 | venv/
88 | ENV/
89 | env.bak/
90 | venv.bak/
91 |
92 | # Spyder project settings
93 | .spyderproject
94 | .spyproject
95 |
96 | # Rope project settings
97 | .ropeproject
98 |
99 | # mkdocs documentation
100 | /site
101 |
102 | # mypy
103 | .mypy_cache/
104 | .idea/
105 |
106 | # pretrained models files
107 | pretrained/
108 | ckpt/
109 | vis/
110 | log/
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2018 M. Doosti Lakhani
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ObjectNet
2 | Note: I have cloned the [official PyTorch](https://github.com/hangzhaomit/semantic-segmentation-pytorch) implementation and added a function that merges the most common classes, reducing the number of classes from 150 to 25.
3 | I could not reflect this modification in the title of the repo.
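The merging step can be sketched as a simple label remap. This is an illustrative example with a made-up mapping; the real 150-to-25 table lives in `data/mapped_25info.txt`:

```python
# Map fine-grained ADE20K label ids (1..150) to merged ids (1..25).
# The entries below are illustrative only; the real mapping is in
# data/mapped_25info.txt.
FINE_TO_MERGED = {
    1: 1,   # wall     -> wall
    2: 2,   # building -> building
    9: 2,   # house    -> building (an example of merging)
}

def remap_labels(label_ids, mapping, ignore_id=0):
    """Replace each fine label id with its merged id.

    Ids missing from the mapping fall back to ignore_id.
    """
    return [mapping.get(i, ignore_id) for i in label_ids]

print(remap_labels([1, 2, 9, 42], FINE_TO_MERGED))  # [1, 2, 2, 0]
```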
4 |
5 | ## ResNet101
6 | Deep Residual Learning for Image Recognition [link to paper](https://arxiv.org/pdf/1512.03385.pdf)
7 |
8 | 
9 |
10 | We address the degradation problem by
11 | introducing a deep residual learning framework. Instead
12 | of hoping each few stacked layers directly fit a
13 | desired underlying mapping, we explicitly let these layers
14 | fit a residual mapping. Formally, denoting the desired
15 | underlying mapping as **H(x)**, we let the stacked nonlinear
16 | layers fit another mapping of **F(x) := H(x)-x**. The original
17 | mapping is recast into **F(x)+x**. We hypothesize that it
18 | is easier to optimize the residual mapping than to optimize
19 | the original, unreferenced mapping. To the extreme, if an
20 | identity mapping were optimal, it would be easier to push
21 | the residual to zero than to fit an identity mapping by a stack
22 | of nonlinear layers.
23 | In our case, the shortcut connections simply
24 | perform identity mapping, and their outputs are added to
25 | the outputs of the stacked layers.
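The residual idea can be illustrated with a toy sketch in plain Python (no deep-learning framework; `f` stands in for the stacked nonlinear layers that learn the residual F):

```python
def residual_block(x, f):
    """Output of a residual block: F(x) + x, where f computes the residual F."""
    return [fi + xi for fi, xi in zip(f(x), x)]

# If the optimal mapping H is the identity, the layers only need to
# drive the residual F toward zero -- the shortcut carries x through.
zero_residual = lambda x: [0.0] * len(x)
x = [1.5, -2.0, 3.0]
print(residual_block(x, zero_residual))  # [1.5, -2.0, 3.0] == x
```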
26 | In the paper they show that:
27 | 1) Our extremely deep residual nets
28 | are easy to optimize, but the counterpart “plain” nets (that
29 | simply stack layers) exhibit higher training error when the
30 | depth increases.
31 | 2) Our deep residual nets can easily enjoy
32 | accuracy gains from greatly increased depth, producing results
33 | substantially better than previous networks.
34 |
35 | Here is the implementation structure they used in their paper:
36 |
37 | 
38 |
39 | And here are the top-1 error rates they obtained:
40 |
41 | 
42 |
43 | Because we use ResNet101, the residual blocks are built from the Bottleneck block. Here is its structure:
44 |
45 | 
46 |
47 |
48 | ## Dilation
49 | Dilated Residual Networks [link to paper](https://arxiv.org/pdf/1705.09914.pdf)
50 |
51 | We show that dilated residual networks
52 | (DRNs) outperform their non-dilated counterparts in image
53 | classification without increasing the model’s depth or
54 | complexity. We then study gridding artifacts introduced by
55 | dilation, develop an approach to removing these artifacts
56 | (‘degridding’), and show that this further increases the performance
57 | of DRNs. In addition, we show that the accuracy
58 | advantage of DRNs is further magnified in downstream applications
59 | such as object localization and semantic segmentation.
60 |
61 | While convolutional networks have done well, the almost
62 | complete elimination of spatial acuity may be preventing
63 | these models from achieving even higher accuracy, for
64 | example by preserving the contribution of small and thin
65 | objects that may be important for correctly understanding
66 | the image.
67 |
68 | 
69 |
70 | The use of dilated convolutions can cause gridding artifacts.
71 |
72 | 
73 |
74 | So they introduced three methods to remove these artifacts. Here is their structure:
75 |
76 | 
77 |
78 | Here is a comparison of different dilated ResNets based on error rates:
79 |
80 | 
81 |
82 | And you can see the accuracy of this model for semantic segmentation on the Cityscapes dataset:
83 |
84 | 
85 |
86 | A sample comparison:
87 |
88 | 
89 |
90 |
91 | ## Supervision
92 | Training Deeper Convolutional Networks with Deep Supervision [link to paper](https://arxiv.org/pdf/1505.02496.pdf)
93 |
94 | In order to train deeper networks, we propose to
95 | add auxiliary supervision branches after certain intermediate
96 | layers during training. We formulate a simple rule of
97 | thumb to determine where these branches should be added.
98 | The resulting deeply supervised structure makes the training
99 | much easier and also produces better classification results
100 | on ImageNet and the recently released, larger MIT
101 | Places dataset.
102 |
103 | Illustration of our deep models with 8 and 13 convolutional layers. The additional supervision loss branches
104 | are indicated by dashed red boxes. Xl denote the intermediate layer outputs and Wl are the weight matrices for each
105 | computational block. Blocks of the same type are shown in the same color. A legend below the network diagrams shows the
106 | internal structure of the different block types.
107 |
108 | 
109 |
110 | Loss functions:
111 |
112 | auxiliary loss
113 |
114 | 
115 |
116 | Note that this loss depends on W, not just Ws, because the
117 | computation of the feature map S8 involves the weights of
118 | the early convolutional layers W1, ..., W4.
119 | The combined loss function for the whole network is
120 | given by a weighted sum of the main loss L0(W) and the
121 | auxiliary supervision loss Ls(Ws):
122 |
123 | 
124 |
125 | where alpha-t controls the trade-off between the two terms. In
126 | the course of training, in order to use the second term
127 | mainly as regularization, we adopt the same strategy as
128 | in [6], where alpha decays as a function of epoch t (with N
129 | being the total number of epochs):
130 |
131 | 
132 |
133 | We train our deeply supervised model using stochastic
134 | gradient descent.
135 |
136 | Here are the top-1 and top-5 accuracies on the Places dataset:
137 |
138 | 
139 |
140 |
141 | ## Pyramid Pooling
142 | Pyramid Scene Parsing Network [link to paper](https://arxiv.org/pdf/1612.01105.pdf)
143 | [Link](https://github.com/hszhao/PSPNet) to implementation repository.
144 |
145 | In this paper, we exploit the
146 | capability of global context information by different-region-based
147 | context aggregation through our pyramid pooling
148 | module together with the proposed pyramid scene parsing
149 | network (PSPNet).
150 |
151 | Our main contributions are threefold.
152 |
153 | - We propose a pyramid scene parsing network to embed
154 | difficult scenery context features in an FCN based
155 | pixel prediction framework.
156 | - We develop an effective optimization strategy for deep
157 | ResNet based on deeply supervised loss.
158 | - We build a practical system for state-of-the-art scene
159 | parsing and semantic segmentation where all crucial
160 | implementation details are included.
161 |
162 | Scene parsing issues:
163 |
164 | 
165 |
166 |
167 | Feature maps at different levels generated by
168 | pyramid pooling were finally flattened and concatenated to
169 | be fed into a fully connected layer for classification. This
170 | global prior is designed to remove the fixed-size constraint
171 | of CNN for image classification. To further reduce context
172 | information loss between different sub-regions, we propose
173 | a hierarchical global prior, containing information with different
174 | scales and varying among different sub-regions.
175 | We call it the pyramid pooling module for global scene prior construction
176 | upon the final-layer feature map of the deep neural
177 | network.
178 |
179 |
180 | Here is the PSPNet structure:
181 |
182 | 
183 |
184 | The pyramid pooling module fuses features under four
185 | different pyramid scales. The coarsest level highlighted in
186 | red is global pooling to generate a single bin output. The
187 | following pyramid level separates the feature map into different
188 | sub-regions and forms pooled representation for different
189 | locations. The output of different levels in the pyramid
190 | pooling module contains the feature map with varied
191 | sizes. To maintain the weight of global feature, we use 1x1
192 | convolution layer after each pyramid level to reduce the dimension
193 | of context representation to 1/N of the original
194 | one if the level size of pyramid is N. Then we directly upsample
195 | the low-dimension feature maps to get the same size
196 | feature as the original feature map via bilinear interpolation.
197 | Finally, different levels of features are concatenated as the
198 | final pyramid pooling global feature.
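The module's shape bookkeeping can be sketched in plain Python (a minimal sketch: the bin sizes 1, 2, 3, 6 follow the paper, while the helper name and the 2048-channel example input are illustrative):

```python
def pyramid_pooling_shapes(channels, size, bins=(1, 2, 3, 6)):
    """Shapes produced by the pyramid pooling module, channels-first (C, H, W).

    Each level pools to bins x bins, a 1x1 conv reduces channels to C/N
    (N = number of levels), the result is upsampled back to the input size,
    and all levels are concatenated with the original feature map.
    """
    n = len(bins)
    reduced = channels // n
    levels = []
    for b in bins:
        pooled = (channels, b, b)          # adaptive average pool to b x b
        conved = (reduced, b, b)           # 1x1 conv: C -> C/N
        upsampled = (reduced, size, size)  # bilinear upsample to input size
        levels.append((pooled, conved, upsampled))
    concat_channels = channels + n * reduced
    return levels, (concat_channels, size, size)

levels, out = pyramid_pooling_shapes(channels=2048, size=60)
print(out)  # (4096, 60, 60): original 2048 + 4 levels x 512 channels
```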
199 |
200 | You can see the comparison of PSPNet with other models:
201 |
202 | 
203 |
204 | There is also a result on the ImageNet dataset:
205 |
206 | 
207 |
208 | Example of PSPNet:
209 |
210 | 
211 |
212 | The results of PSPNet and other methods on the Cityscapes dataset for semantic segmentation:
213 |
214 | 
215 |
216 |
217 | # Reference
218 | Repository of models [link](https://github.com/CSAILVision/sceneparsing)
219 |
220 | Repository of pytorch implementation [link](https://github.com/hangzhaomit/semantic-segmentation-pytorch)
221 |
222 | # Citation
223 | Please cite this project as:
224 | Nikan Doosti. (2020). Nikronic/ObjectNet: DOI Release (v0.1-alpha). Zenodo. https://doi.org/10.5281/zenodo.3838620
225 |
226 | [](https://doi.org/10.5281/zenodo.3838620)
227 |
--------------------------------------------------------------------------------
/data/ADE20K_object150_val.txt:
--------------------------------------------------------------------------------
1 | validation/ADE_val_00000001.jpg
2 | validation/ADE_val_00000002.jpg
3 | validation/ADE_val_00000003.jpg
4 | validation/ADE_val_00000004.jpg
5 | validation/ADE_val_00000005.jpg
6 | validation/ADE_val_00000006.jpg
7 | validation/ADE_val_00000007.jpg
8 | validation/ADE_val_00000008.jpg
9 | validation/ADE_val_00000009.jpg
10 | validation/ADE_val_00000010.jpg
11 | validation/ADE_val_00000011.jpg
12 | validation/ADE_val_00000012.jpg
13 | validation/ADE_val_00000013.jpg
14 | validation/ADE_val_00000014.jpg
15 | validation/ADE_val_00000015.jpg
16 | validation/ADE_val_00000016.jpg
17 | validation/ADE_val_00000017.jpg
18 | validation/ADE_val_00000018.jpg
19 | validation/ADE_val_00000019.jpg
20 | validation/ADE_val_00000020.jpg
21 | validation/ADE_val_00000021.jpg
22 | validation/ADE_val_00000022.jpg
23 | validation/ADE_val_00000023.jpg
24 | validation/ADE_val_00000024.jpg
25 | validation/ADE_val_00000025.jpg
26 | validation/ADE_val_00000026.jpg
27 | validation/ADE_val_00000027.jpg
28 | validation/ADE_val_00000028.jpg
29 | validation/ADE_val_00000029.jpg
30 | validation/ADE_val_00000030.jpg
31 | validation/ADE_val_00000031.jpg
32 | validation/ADE_val_00000032.jpg
33 | validation/ADE_val_00000033.jpg
34 | validation/ADE_val_00000034.jpg
35 | validation/ADE_val_00000035.jpg
36 | validation/ADE_val_00000036.jpg
37 | validation/ADE_val_00000037.jpg
38 | validation/ADE_val_00000038.jpg
39 | validation/ADE_val_00000039.jpg
40 | validation/ADE_val_00000040.jpg
41 | validation/ADE_val_00000041.jpg
42 | validation/ADE_val_00000042.jpg
43 | validation/ADE_val_00000043.jpg
44 | validation/ADE_val_00000044.jpg
45 | validation/ADE_val_00000045.jpg
46 | validation/ADE_val_00000046.jpg
47 | validation/ADE_val_00000047.jpg
48 | validation/ADE_val_00000048.jpg
49 | validation/ADE_val_00000049.jpg
50 | validation/ADE_val_00000050.jpg
51 | validation/ADE_val_00000051.jpg
52 | validation/ADE_val_00000052.jpg
53 | validation/ADE_val_00000053.jpg
54 | validation/ADE_val_00000054.jpg
55 | validation/ADE_val_00000055.jpg
56 | validation/ADE_val_00000056.jpg
57 | validation/ADE_val_00000057.jpg
58 | validation/ADE_val_00000058.jpg
59 | validation/ADE_val_00000059.jpg
60 | validation/ADE_val_00000060.jpg
61 | validation/ADE_val_00000061.jpg
62 | validation/ADE_val_00000062.jpg
63 | validation/ADE_val_00000063.jpg
64 | validation/ADE_val_00000064.jpg
65 | validation/ADE_val_00000065.jpg
66 | validation/ADE_val_00000066.jpg
67 | validation/ADE_val_00000067.jpg
68 | validation/ADE_val_00000068.jpg
69 | validation/ADE_val_00000069.jpg
70 | validation/ADE_val_00000070.jpg
71 | validation/ADE_val_00000071.jpg
72 | validation/ADE_val_00000072.jpg
73 | validation/ADE_val_00000073.jpg
74 | validation/ADE_val_00000074.jpg
75 | validation/ADE_val_00000075.jpg
76 | validation/ADE_val_00000076.jpg
77 | validation/ADE_val_00000077.jpg
78 | validation/ADE_val_00000078.jpg
79 | validation/ADE_val_00000079.jpg
80 | validation/ADE_val_00000080.jpg
81 | validation/ADE_val_00000081.jpg
82 | validation/ADE_val_00000082.jpg
83 | validation/ADE_val_00000083.jpg
84 | validation/ADE_val_00000084.jpg
85 | validation/ADE_val_00000085.jpg
86 | validation/ADE_val_00000086.jpg
87 | validation/ADE_val_00000087.jpg
88 | validation/ADE_val_00000088.jpg
89 | validation/ADE_val_00000089.jpg
90 | validation/ADE_val_00000090.jpg
91 | validation/ADE_val_00000091.jpg
92 | validation/ADE_val_00000092.jpg
93 | validation/ADE_val_00000093.jpg
94 | validation/ADE_val_00000094.jpg
95 | validation/ADE_val_00000095.jpg
96 | validation/ADE_val_00000096.jpg
97 | validation/ADE_val_00000097.jpg
98 | validation/ADE_val_00000098.jpg
99 | validation/ADE_val_00000099.jpg
100 | validation/ADE_val_00000100.jpg
101 | validation/ADE_val_00000101.jpg
102 | validation/ADE_val_00000102.jpg
103 | validation/ADE_val_00000103.jpg
104 | validation/ADE_val_00000104.jpg
105 | validation/ADE_val_00000105.jpg
106 | validation/ADE_val_00000106.jpg
107 | validation/ADE_val_00000107.jpg
108 | validation/ADE_val_00000108.jpg
109 | validation/ADE_val_00000109.jpg
110 | validation/ADE_val_00000110.jpg
111 | validation/ADE_val_00000111.jpg
112 | validation/ADE_val_00000112.jpg
113 | validation/ADE_val_00000113.jpg
114 | validation/ADE_val_00000114.jpg
115 | validation/ADE_val_00000115.jpg
116 | validation/ADE_val_00000116.jpg
117 | validation/ADE_val_00000117.jpg
118 | validation/ADE_val_00000118.jpg
119 | validation/ADE_val_00000119.jpg
120 | validation/ADE_val_00000120.jpg
121 | validation/ADE_val_00000121.jpg
122 | validation/ADE_val_00000122.jpg
123 | validation/ADE_val_00000123.jpg
124 | validation/ADE_val_00000124.jpg
125 | validation/ADE_val_00000125.jpg
126 | validation/ADE_val_00000126.jpg
127 | validation/ADE_val_00000127.jpg
128 | validation/ADE_val_00000128.jpg
129 | validation/ADE_val_00000129.jpg
130 | validation/ADE_val_00000130.jpg
131 | validation/ADE_val_00000131.jpg
132 | validation/ADE_val_00000132.jpg
133 | validation/ADE_val_00000133.jpg
134 | validation/ADE_val_00000134.jpg
135 | validation/ADE_val_00000135.jpg
136 | validation/ADE_val_00000136.jpg
137 | validation/ADE_val_00000137.jpg
138 | validation/ADE_val_00000138.jpg
139 | validation/ADE_val_00000139.jpg
140 | validation/ADE_val_00000140.jpg
141 | validation/ADE_val_00000141.jpg
142 | validation/ADE_val_00000142.jpg
143 | validation/ADE_val_00000143.jpg
144 | validation/ADE_val_00000144.jpg
145 | validation/ADE_val_00000145.jpg
146 | validation/ADE_val_00000146.jpg
147 | validation/ADE_val_00000147.jpg
148 | validation/ADE_val_00000148.jpg
149 | validation/ADE_val_00000149.jpg
150 | validation/ADE_val_00000150.jpg
151 | validation/ADE_val_00000151.jpg
152 | validation/ADE_val_00000152.jpg
153 | validation/ADE_val_00000153.jpg
154 | validation/ADE_val_00000154.jpg
155 | validation/ADE_val_00000155.jpg
156 | validation/ADE_val_00000156.jpg
157 | validation/ADE_val_00000157.jpg
158 | validation/ADE_val_00000158.jpg
159 | validation/ADE_val_00000159.jpg
160 | validation/ADE_val_00000160.jpg
161 | validation/ADE_val_00000161.jpg
162 | validation/ADE_val_00000162.jpg
163 | validation/ADE_val_00000163.jpg
164 | validation/ADE_val_00000164.jpg
165 | validation/ADE_val_00000165.jpg
166 | validation/ADE_val_00000166.jpg
167 | validation/ADE_val_00000167.jpg
168 | validation/ADE_val_00000168.jpg
169 | validation/ADE_val_00000169.jpg
170 | validation/ADE_val_00000170.jpg
171 | validation/ADE_val_00000171.jpg
172 | validation/ADE_val_00000172.jpg
173 | validation/ADE_val_00000173.jpg
174 | validation/ADE_val_00000174.jpg
175 | validation/ADE_val_00000175.jpg
176 | validation/ADE_val_00000176.jpg
177 | validation/ADE_val_00000177.jpg
178 | validation/ADE_val_00000178.jpg
179 | validation/ADE_val_00000179.jpg
180 | validation/ADE_val_00000180.jpg
181 | validation/ADE_val_00000181.jpg
182 | validation/ADE_val_00000182.jpg
183 | validation/ADE_val_00000183.jpg
184 | validation/ADE_val_00000184.jpg
185 | validation/ADE_val_00000185.jpg
186 | validation/ADE_val_00000186.jpg
187 | validation/ADE_val_00000187.jpg
188 | validation/ADE_val_00000188.jpg
189 | validation/ADE_val_00000189.jpg
190 | validation/ADE_val_00000190.jpg
191 | validation/ADE_val_00000191.jpg
192 | validation/ADE_val_00000192.jpg
193 | validation/ADE_val_00000193.jpg
194 | validation/ADE_val_00000194.jpg
195 | validation/ADE_val_00000195.jpg
196 | validation/ADE_val_00000196.jpg
197 | validation/ADE_val_00000197.jpg
198 | validation/ADE_val_00000198.jpg
199 | validation/ADE_val_00000199.jpg
200 | validation/ADE_val_00000200.jpg
201 | validation/ADE_val_00000201.jpg
202 | validation/ADE_val_00000202.jpg
203 | validation/ADE_val_00000203.jpg
204 | validation/ADE_val_00000204.jpg
205 | validation/ADE_val_00000205.jpg
206 | validation/ADE_val_00000206.jpg
207 | validation/ADE_val_00000207.jpg
208 | validation/ADE_val_00000208.jpg
209 | validation/ADE_val_00000209.jpg
210 | validation/ADE_val_00000210.jpg
211 | validation/ADE_val_00000211.jpg
212 | validation/ADE_val_00000212.jpg
213 | validation/ADE_val_00000213.jpg
214 | validation/ADE_val_00000214.jpg
215 | validation/ADE_val_00000215.jpg
216 | validation/ADE_val_00000216.jpg
217 | validation/ADE_val_00000217.jpg
218 | validation/ADE_val_00000218.jpg
219 | validation/ADE_val_00000219.jpg
220 | validation/ADE_val_00000220.jpg
221 | validation/ADE_val_00000221.jpg
222 | validation/ADE_val_00000222.jpg
223 | validation/ADE_val_00000223.jpg
224 | validation/ADE_val_00000224.jpg
225 | validation/ADE_val_00000225.jpg
226 | validation/ADE_val_00000226.jpg
227 | validation/ADE_val_00000227.jpg
228 | validation/ADE_val_00000228.jpg
229 | validation/ADE_val_00000229.jpg
230 | validation/ADE_val_00000230.jpg
231 | validation/ADE_val_00000231.jpg
232 | validation/ADE_val_00000232.jpg
233 | validation/ADE_val_00000233.jpg
234 | validation/ADE_val_00000234.jpg
235 | validation/ADE_val_00000235.jpg
236 | validation/ADE_val_00000236.jpg
237 | validation/ADE_val_00000237.jpg
238 | validation/ADE_val_00000238.jpg
239 | validation/ADE_val_00000239.jpg
240 | validation/ADE_val_00000240.jpg
241 | validation/ADE_val_00000241.jpg
242 | validation/ADE_val_00000242.jpg
243 | validation/ADE_val_00000243.jpg
244 | validation/ADE_val_00000244.jpg
245 | validation/ADE_val_00000245.jpg
246 | validation/ADE_val_00000246.jpg
247 | validation/ADE_val_00000247.jpg
248 | validation/ADE_val_00000248.jpg
249 | validation/ADE_val_00000249.jpg
250 | validation/ADE_val_00000250.jpg
251 | validation/ADE_val_00000251.jpg
252 | validation/ADE_val_00000252.jpg
253 | validation/ADE_val_00000253.jpg
254 | validation/ADE_val_00000254.jpg
255 | validation/ADE_val_00000255.jpg
256 | validation/ADE_val_00000256.jpg
257 | validation/ADE_val_00000257.jpg
258 | validation/ADE_val_00000258.jpg
259 | validation/ADE_val_00000259.jpg
260 | validation/ADE_val_00000260.jpg
261 | validation/ADE_val_00000261.jpg
262 | validation/ADE_val_00000262.jpg
263 | validation/ADE_val_00000263.jpg
264 | validation/ADE_val_00000264.jpg
265 | validation/ADE_val_00000265.jpg
266 | validation/ADE_val_00000266.jpg
267 | validation/ADE_val_00000267.jpg
268 | validation/ADE_val_00000268.jpg
269 | validation/ADE_val_00000269.jpg
270 | validation/ADE_val_00000270.jpg
271 | validation/ADE_val_00000271.jpg
272 | validation/ADE_val_00000272.jpg
273 | validation/ADE_val_00000273.jpg
274 | validation/ADE_val_00000274.jpg
275 | validation/ADE_val_00000275.jpg
276 | validation/ADE_val_00000276.jpg
277 | validation/ADE_val_00000277.jpg
278 | validation/ADE_val_00000278.jpg
279 | validation/ADE_val_00000279.jpg
280 | validation/ADE_val_00000280.jpg
281 | validation/ADE_val_00000281.jpg
282 | validation/ADE_val_00000282.jpg
283 | validation/ADE_val_00000283.jpg
284 | validation/ADE_val_00000284.jpg
285 | validation/ADE_val_00000285.jpg
286 | validation/ADE_val_00000286.jpg
287 | validation/ADE_val_00000287.jpg
288 | validation/ADE_val_00000288.jpg
289 | validation/ADE_val_00000289.jpg
290 | validation/ADE_val_00000290.jpg
291 | validation/ADE_val_00000291.jpg
292 | validation/ADE_val_00000292.jpg
293 | validation/ADE_val_00000293.jpg
294 | validation/ADE_val_00000294.jpg
295 | validation/ADE_val_00000295.jpg
296 | validation/ADE_val_00000296.jpg
297 | validation/ADE_val_00000297.jpg
298 | validation/ADE_val_00000298.jpg
299 | validation/ADE_val_00000299.jpg
300 | validation/ADE_val_00000300.jpg
301 | validation/ADE_val_00000301.jpg
302 | validation/ADE_val_00000302.jpg
303 | validation/ADE_val_00000303.jpg
304 | validation/ADE_val_00000304.jpg
305 | validation/ADE_val_00000305.jpg
306 | validation/ADE_val_00000306.jpg
307 | validation/ADE_val_00000307.jpg
308 | validation/ADE_val_00000308.jpg
309 | validation/ADE_val_00000309.jpg
310 | validation/ADE_val_00000310.jpg
311 | validation/ADE_val_00000311.jpg
312 | validation/ADE_val_00000312.jpg
313 | validation/ADE_val_00000313.jpg
314 | validation/ADE_val_00000314.jpg
315 | validation/ADE_val_00000315.jpg
316 | validation/ADE_val_00000316.jpg
317 | validation/ADE_val_00000317.jpg
318 | validation/ADE_val_00000318.jpg
319 | validation/ADE_val_00000319.jpg
320 | validation/ADE_val_00000320.jpg
321 | validation/ADE_val_00000321.jpg
322 | validation/ADE_val_00000322.jpg
323 | validation/ADE_val_00000323.jpg
324 | validation/ADE_val_00000324.jpg
325 | validation/ADE_val_00000325.jpg
326 | validation/ADE_val_00000326.jpg
327 | validation/ADE_val_00000327.jpg
328 | validation/ADE_val_00000328.jpg
329 | validation/ADE_val_00000329.jpg
330 | validation/ADE_val_00000330.jpg
331 | validation/ADE_val_00000331.jpg
332 | validation/ADE_val_00000332.jpg
333 | validation/ADE_val_00000333.jpg
334 | validation/ADE_val_00000334.jpg
335 | validation/ADE_val_00000335.jpg
336 | validation/ADE_val_00000336.jpg
337 | validation/ADE_val_00000337.jpg
338 | validation/ADE_val_00000338.jpg
339 | validation/ADE_val_00000339.jpg
340 | validation/ADE_val_00000340.jpg
341 | validation/ADE_val_00000341.jpg
342 | validation/ADE_val_00000342.jpg
343 | validation/ADE_val_00000343.jpg
344 | validation/ADE_val_00000344.jpg
345 | validation/ADE_val_00000345.jpg
346 | validation/ADE_val_00000346.jpg
347 | validation/ADE_val_00000347.jpg
348 | validation/ADE_val_00000348.jpg
349 | validation/ADE_val_00000349.jpg
350 | validation/ADE_val_00000350.jpg
351 | validation/ADE_val_00000351.jpg
352 | validation/ADE_val_00000352.jpg
353 | validation/ADE_val_00000353.jpg
354 | validation/ADE_val_00000354.jpg
355 | validation/ADE_val_00000355.jpg
356 | validation/ADE_val_00000356.jpg
357 | validation/ADE_val_00000357.jpg
358 | validation/ADE_val_00000358.jpg
359 | validation/ADE_val_00000359.jpg
360 | validation/ADE_val_00000360.jpg
361 | validation/ADE_val_00000361.jpg
362 | validation/ADE_val_00000362.jpg
363 | validation/ADE_val_00000363.jpg
364 | validation/ADE_val_00000364.jpg
365 | validation/ADE_val_00000365.jpg
366 | validation/ADE_val_00000366.jpg
367 | validation/ADE_val_00000367.jpg
368 | validation/ADE_val_00000368.jpg
369 | validation/ADE_val_00000369.jpg
370 | validation/ADE_val_00000370.jpg
371 | validation/ADE_val_00000371.jpg
372 | validation/ADE_val_00000372.jpg
373 | validation/ADE_val_00000373.jpg
374 | validation/ADE_val_00000374.jpg
375 | validation/ADE_val_00000375.jpg
376 | validation/ADE_val_00000376.jpg
377 | validation/ADE_val_00000377.jpg
378 | validation/ADE_val_00000378.jpg
379 | validation/ADE_val_00000379.jpg
380 | validation/ADE_val_00000380.jpg
381 | validation/ADE_val_00000381.jpg
382 | validation/ADE_val_00000382.jpg
383 | validation/ADE_val_00000383.jpg
384 | validation/ADE_val_00000384.jpg
385 | validation/ADE_val_00000385.jpg
386 | validation/ADE_val_00000386.jpg
387 | validation/ADE_val_00000387.jpg
388 | validation/ADE_val_00000388.jpg
389 | validation/ADE_val_00000389.jpg
390 | validation/ADE_val_00000390.jpg
391 | validation/ADE_val_00000391.jpg
392 | validation/ADE_val_00000392.jpg
393 | validation/ADE_val_00000393.jpg
394 | validation/ADE_val_00000394.jpg
395 | validation/ADE_val_00000395.jpg
396 | validation/ADE_val_00000396.jpg
397 | validation/ADE_val_00000397.jpg
398 | validation/ADE_val_00000398.jpg
399 | validation/ADE_val_00000399.jpg
400 | validation/ADE_val_00000400.jpg
401 | validation/ADE_val_00000401.jpg
402 | validation/ADE_val_00000402.jpg
403 | validation/ADE_val_00000403.jpg
404 | validation/ADE_val_00000404.jpg
405 | validation/ADE_val_00000405.jpg
406 | validation/ADE_val_00000406.jpg
407 | validation/ADE_val_00000407.jpg
408 | validation/ADE_val_00000408.jpg
409 | validation/ADE_val_00000409.jpg
410 | validation/ADE_val_00000410.jpg
411 | validation/ADE_val_00000411.jpg
412 | validation/ADE_val_00000412.jpg
413 | validation/ADE_val_00000413.jpg
414 | validation/ADE_val_00000414.jpg
415 | validation/ADE_val_00000415.jpg
416 | validation/ADE_val_00000416.jpg
417 | validation/ADE_val_00000417.jpg
418 | validation/ADE_val_00000418.jpg
419 | validation/ADE_val_00000419.jpg
420 | validation/ADE_val_00000420.jpg
421 | validation/ADE_val_00000421.jpg
422 | validation/ADE_val_00000422.jpg
423 | validation/ADE_val_00000423.jpg
424 | validation/ADE_val_00000424.jpg
425 | validation/ADE_val_00000425.jpg
426 | validation/ADE_val_00000426.jpg
427 | validation/ADE_val_00000427.jpg
428 | validation/ADE_val_00000428.jpg
429 | validation/ADE_val_00000429.jpg
430 | validation/ADE_val_00000430.jpg
431 | validation/ADE_val_00000431.jpg
432 | validation/ADE_val_00000432.jpg
433 | validation/ADE_val_00000433.jpg
434 | validation/ADE_val_00000434.jpg
435 | validation/ADE_val_00000435.jpg
436 | validation/ADE_val_00000436.jpg
437 | validation/ADE_val_00000437.jpg
438 | validation/ADE_val_00000438.jpg
439 | validation/ADE_val_00000439.jpg
440 | validation/ADE_val_00000440.jpg
441 | validation/ADE_val_00000441.jpg
442 | validation/ADE_val_00000442.jpg
443 | validation/ADE_val_00000443.jpg
444 | validation/ADE_val_00000444.jpg
445 | validation/ADE_val_00000445.jpg
446 | validation/ADE_val_00000446.jpg
447 | validation/ADE_val_00000447.jpg
448 | validation/ADE_val_00000448.jpg
449 | validation/ADE_val_00000449.jpg
450 | validation/ADE_val_00000450.jpg
451 | validation/ADE_val_00000451.jpg
452 | validation/ADE_val_00000452.jpg
453 | validation/ADE_val_00000453.jpg
454 | validation/ADE_val_00000454.jpg
455 | validation/ADE_val_00000455.jpg
456 | validation/ADE_val_00000456.jpg
457 | validation/ADE_val_00000457.jpg
458 | validation/ADE_val_00000458.jpg
459 | validation/ADE_val_00000459.jpg
460 | validation/ADE_val_00000460.jpg
461 | validation/ADE_val_00000461.jpg
462 | validation/ADE_val_00000462.jpg
463 | validation/ADE_val_00000463.jpg
464 | validation/ADE_val_00000464.jpg
465 | validation/ADE_val_00000465.jpg
466 | validation/ADE_val_00000466.jpg
467 | validation/ADE_val_00000467.jpg
468 | validation/ADE_val_00000468.jpg
469 | validation/ADE_val_00000469.jpg
470 | validation/ADE_val_00000470.jpg
471 | validation/ADE_val_00000471.jpg
472 | validation/ADE_val_00000472.jpg
473 | validation/ADE_val_00000473.jpg
474 | validation/ADE_val_00000474.jpg
475 | validation/ADE_val_00000475.jpg
476 | validation/ADE_val_00000476.jpg
477 | validation/ADE_val_00000477.jpg
478 | validation/ADE_val_00000478.jpg
479 | validation/ADE_val_00000479.jpg
480 | validation/ADE_val_00000480.jpg
481 | validation/ADE_val_00000481.jpg
482 | validation/ADE_val_00000482.jpg
483 | validation/ADE_val_00000483.jpg
484 | validation/ADE_val_00000484.jpg
485 | validation/ADE_val_00000485.jpg
486 | validation/ADE_val_00000486.jpg
487 | validation/ADE_val_00000487.jpg
488 | validation/ADE_val_00000488.jpg
489 | validation/ADE_val_00000489.jpg
490 | validation/ADE_val_00000490.jpg
491 | validation/ADE_val_00000491.jpg
492 | validation/ADE_val_00000492.jpg
493 | validation/ADE_val_00000493.jpg
494 | validation/ADE_val_00000494.jpg
495 | validation/ADE_val_00000495.jpg
496 | validation/ADE_val_00000496.jpg
497 | validation/ADE_val_00000497.jpg
498 | validation/ADE_val_00000498.jpg
499 | validation/ADE_val_00000499.jpg
500 | validation/ADE_val_00000500.jpg
501 | validation/ADE_val_00000501.jpg
502 | validation/ADE_val_00000502.jpg
503 | validation/ADE_val_00000503.jpg
504 | validation/ADE_val_00000504.jpg
505 | validation/ADE_val_00000505.jpg
506 | validation/ADE_val_00000506.jpg
507 | validation/ADE_val_00000507.jpg
508 | validation/ADE_val_00000508.jpg
509 | validation/ADE_val_00000509.jpg
510 | validation/ADE_val_00000510.jpg
511 | validation/ADE_val_00000511.jpg
512 | validation/ADE_val_00000512.jpg
513 | validation/ADE_val_00000513.jpg
514 | validation/ADE_val_00000514.jpg
515 | validation/ADE_val_00000515.jpg
516 | validation/ADE_val_00000516.jpg
517 | validation/ADE_val_00000517.jpg
518 | validation/ADE_val_00000518.jpg
519 | validation/ADE_val_00000519.jpg
520 | validation/ADE_val_00000520.jpg
521 | validation/ADE_val_00000521.jpg
522 | validation/ADE_val_00000522.jpg
523 | validation/ADE_val_00000523.jpg
524 | validation/ADE_val_00000524.jpg
525 | validation/ADE_val_00000525.jpg
526 | validation/ADE_val_00000526.jpg
527 | validation/ADE_val_00000527.jpg
528 | validation/ADE_val_00000528.jpg
529 | validation/ADE_val_00000529.jpg
530 | validation/ADE_val_00000530.jpg
531 | validation/ADE_val_00000531.jpg
532 | validation/ADE_val_00000532.jpg
533 | validation/ADE_val_00000533.jpg
534 | validation/ADE_val_00000534.jpg
535 | validation/ADE_val_00000535.jpg
536 | validation/ADE_val_00000536.jpg
537 | validation/ADE_val_00000537.jpg
538 | validation/ADE_val_00000538.jpg
539 | validation/ADE_val_00000539.jpg
540 | validation/ADE_val_00000540.jpg
541 | validation/ADE_val_00000541.jpg
542 | validation/ADE_val_00000542.jpg
543 | validation/ADE_val_00000543.jpg
544 | validation/ADE_val_00000544.jpg
545 | validation/ADE_val_00000545.jpg
546 | validation/ADE_val_00000546.jpg
547 | validation/ADE_val_00000547.jpg
548 | validation/ADE_val_00000548.jpg
549 | validation/ADE_val_00000549.jpg
550 | validation/ADE_val_00000550.jpg
551 | validation/ADE_val_00000551.jpg
552 | validation/ADE_val_00000552.jpg
553 | validation/ADE_val_00000553.jpg
554 | validation/ADE_val_00000554.jpg
555 | validation/ADE_val_00000555.jpg
556 | validation/ADE_val_00000556.jpg
557 | validation/ADE_val_00000557.jpg
558 | validation/ADE_val_00000558.jpg
559 | validation/ADE_val_00000559.jpg
560 | validation/ADE_val_00000560.jpg
561 | validation/ADE_val_00000561.jpg
562 | validation/ADE_val_00000562.jpg
563 | validation/ADE_val_00000563.jpg
564 | validation/ADE_val_00000564.jpg
565 | validation/ADE_val_00000565.jpg
566 | validation/ADE_val_00000566.jpg
567 | validation/ADE_val_00000567.jpg
568 | validation/ADE_val_00000568.jpg
569 | validation/ADE_val_00000569.jpg
570 | validation/ADE_val_00000570.jpg
571 | validation/ADE_val_00000571.jpg
572 | validation/ADE_val_00000572.jpg
573 | validation/ADE_val_00000573.jpg
574 | validation/ADE_val_00000574.jpg
575 | validation/ADE_val_00000575.jpg
576 | validation/ADE_val_00000576.jpg
577 | validation/ADE_val_00000577.jpg
578 | validation/ADE_val_00000578.jpg
579 | validation/ADE_val_00000579.jpg
580 | validation/ADE_val_00000580.jpg
581 | validation/ADE_val_00000581.jpg
582 | validation/ADE_val_00000582.jpg
583 | validation/ADE_val_00000583.jpg
584 | validation/ADE_val_00000584.jpg
585 | validation/ADE_val_00000585.jpg
586 | validation/ADE_val_00000586.jpg
587 | validation/ADE_val_00000587.jpg
588 | validation/ADE_val_00000588.jpg
589 | validation/ADE_val_00000589.jpg
590 | validation/ADE_val_00000590.jpg
591 | validation/ADE_val_00000591.jpg
592 | validation/ADE_val_00000592.jpg
593 | validation/ADE_val_00000593.jpg
594 | validation/ADE_val_00000594.jpg
595 | validation/ADE_val_00000595.jpg
596 | validation/ADE_val_00000596.jpg
597 | validation/ADE_val_00000597.jpg
598 | validation/ADE_val_00000598.jpg
599 | validation/ADE_val_00000599.jpg
600 | validation/ADE_val_00000600.jpg
601 | validation/ADE_val_00000601.jpg
602 | validation/ADE_val_00000602.jpg
603 | validation/ADE_val_00000603.jpg
604 | validation/ADE_val_00000604.jpg
605 | validation/ADE_val_00000605.jpg
606 | validation/ADE_val_00000606.jpg
607 | validation/ADE_val_00000607.jpg
608 | validation/ADE_val_00000608.jpg
609 | validation/ADE_val_00000609.jpg
610 | validation/ADE_val_00000610.jpg
611 | validation/ADE_val_00000611.jpg
612 | validation/ADE_val_00000612.jpg
613 | validation/ADE_val_00000613.jpg
614 | validation/ADE_val_00000614.jpg
615 | validation/ADE_val_00000615.jpg
616 | validation/ADE_val_00000616.jpg
617 | validation/ADE_val_00000617.jpg
618 | validation/ADE_val_00000618.jpg
619 | validation/ADE_val_00000619.jpg
620 | validation/ADE_val_00000620.jpg
621 | validation/ADE_val_00000621.jpg
622 | validation/ADE_val_00000622.jpg
623 | validation/ADE_val_00000623.jpg
624 | validation/ADE_val_00000624.jpg
625 | validation/ADE_val_00000625.jpg
626 | validation/ADE_val_00000626.jpg
627 | validation/ADE_val_00000627.jpg
628 | validation/ADE_val_00000628.jpg
629 | validation/ADE_val_00000629.jpg
630 | validation/ADE_val_00000630.jpg
631 | validation/ADE_val_00000631.jpg
632 | validation/ADE_val_00000632.jpg
633 | validation/ADE_val_00000633.jpg
634 | validation/ADE_val_00000634.jpg
635 | validation/ADE_val_00000635.jpg
636 | validation/ADE_val_00000636.jpg
637 | validation/ADE_val_00000637.jpg
638 | validation/ADE_val_00000638.jpg
639 | validation/ADE_val_00000639.jpg
640 | validation/ADE_val_00000640.jpg
641 | validation/ADE_val_00000641.jpg
642 | validation/ADE_val_00000642.jpg
643 | validation/ADE_val_00000643.jpg
644 | validation/ADE_val_00000644.jpg
645 | validation/ADE_val_00000645.jpg
646 | validation/ADE_val_00000646.jpg
647 | validation/ADE_val_00000647.jpg
648 | validation/ADE_val_00000648.jpg
649 | validation/ADE_val_00000649.jpg
650 | validation/ADE_val_00000650.jpg
651 | validation/ADE_val_00000651.jpg
652 | validation/ADE_val_00000652.jpg
653 | validation/ADE_val_00000653.jpg
654 | validation/ADE_val_00000654.jpg
655 | validation/ADE_val_00000655.jpg
656 | validation/ADE_val_00000656.jpg
657 | validation/ADE_val_00000657.jpg
658 | validation/ADE_val_00000658.jpg
659 | validation/ADE_val_00000659.jpg
660 | validation/ADE_val_00000660.jpg
661 | validation/ADE_val_00000661.jpg
662 | validation/ADE_val_00000662.jpg
663 | validation/ADE_val_00000663.jpg
664 | validation/ADE_val_00000664.jpg
665 | validation/ADE_val_00000665.jpg
666 | validation/ADE_val_00000666.jpg
667 | validation/ADE_val_00000667.jpg
668 | validation/ADE_val_00000668.jpg
669 | validation/ADE_val_00000669.jpg
670 | validation/ADE_val_00000670.jpg
671 | validation/ADE_val_00000671.jpg
672 | validation/ADE_val_00000672.jpg
673 | validation/ADE_val_00000673.jpg
674 | validation/ADE_val_00000674.jpg
675 | validation/ADE_val_00000675.jpg
676 | validation/ADE_val_00000676.jpg
677 | validation/ADE_val_00000677.jpg
678 | validation/ADE_val_00000678.jpg
679 | validation/ADE_val_00000679.jpg
680 | validation/ADE_val_00000680.jpg
681 | validation/ADE_val_00000681.jpg
682 | validation/ADE_val_00000682.jpg
683 | validation/ADE_val_00000683.jpg
684 | validation/ADE_val_00000684.jpg
685 | validation/ADE_val_00000685.jpg
686 | validation/ADE_val_00000686.jpg
687 | validation/ADE_val_00000687.jpg
688 | validation/ADE_val_00000688.jpg
689 | validation/ADE_val_00000689.jpg
690 | validation/ADE_val_00000690.jpg
691 | validation/ADE_val_00000691.jpg
692 | validation/ADE_val_00000692.jpg
693 | validation/ADE_val_00000693.jpg
694 | validation/ADE_val_00000694.jpg
695 | validation/ADE_val_00000695.jpg
696 | validation/ADE_val_00000696.jpg
697 | validation/ADE_val_00000697.jpg
698 | validation/ADE_val_00000698.jpg
699 | validation/ADE_val_00000699.jpg
700 | validation/ADE_val_00000700.jpg
701 | validation/ADE_val_00000701.jpg
702 | validation/ADE_val_00000702.jpg
703 | validation/ADE_val_00000703.jpg
704 | validation/ADE_val_00000704.jpg
705 | validation/ADE_val_00000705.jpg
706 | validation/ADE_val_00000706.jpg
707 | validation/ADE_val_00000707.jpg
708 | validation/ADE_val_00000708.jpg
709 | validation/ADE_val_00000709.jpg
710 | validation/ADE_val_00000710.jpg
711 | validation/ADE_val_00000711.jpg
712 | validation/ADE_val_00000712.jpg
713 | validation/ADE_val_00000713.jpg
714 | validation/ADE_val_00000714.jpg
715 | validation/ADE_val_00000715.jpg
716 | validation/ADE_val_00000716.jpg
717 | validation/ADE_val_00000717.jpg
718 | validation/ADE_val_00000718.jpg
719 | validation/ADE_val_00000719.jpg
720 | validation/ADE_val_00000720.jpg
721 | validation/ADE_val_00000721.jpg
722 | validation/ADE_val_00000722.jpg
723 | validation/ADE_val_00000723.jpg
724 | validation/ADE_val_00000724.jpg
725 | validation/ADE_val_00000725.jpg
726 | validation/ADE_val_00000726.jpg
727 | validation/ADE_val_00000727.jpg
728 | validation/ADE_val_00000728.jpg
729 | validation/ADE_val_00000729.jpg
730 | validation/ADE_val_00000730.jpg
731 | validation/ADE_val_00000731.jpg
732 | validation/ADE_val_00000732.jpg
733 | validation/ADE_val_00000733.jpg
734 | validation/ADE_val_00000734.jpg
735 | validation/ADE_val_00000735.jpg
736 | validation/ADE_val_00000736.jpg
737 | validation/ADE_val_00000737.jpg
738 | validation/ADE_val_00000738.jpg
739 | validation/ADE_val_00000739.jpg
740 | validation/ADE_val_00000740.jpg
741 | validation/ADE_val_00000741.jpg
742 | validation/ADE_val_00000742.jpg
743 | validation/ADE_val_00000743.jpg
744 | validation/ADE_val_00000744.jpg
745 | validation/ADE_val_00000745.jpg
746 | validation/ADE_val_00000746.jpg
747 | validation/ADE_val_00000747.jpg
748 | validation/ADE_val_00000748.jpg
749 | validation/ADE_val_00000749.jpg
750 | validation/ADE_val_00000750.jpg
751 | validation/ADE_val_00000751.jpg
752 | validation/ADE_val_00000752.jpg
753 | validation/ADE_val_00000753.jpg
754 | validation/ADE_val_00000754.jpg
755 | validation/ADE_val_00000755.jpg
756 | validation/ADE_val_00000756.jpg
757 | validation/ADE_val_00000757.jpg
758 | validation/ADE_val_00000758.jpg
759 | validation/ADE_val_00000759.jpg
760 | validation/ADE_val_00000760.jpg
761 | validation/ADE_val_00000761.jpg
762 | validation/ADE_val_00000762.jpg
763 | validation/ADE_val_00000763.jpg
764 | validation/ADE_val_00000764.jpg
765 | validation/ADE_val_00000765.jpg
766 | validation/ADE_val_00000766.jpg
767 | validation/ADE_val_00000767.jpg
768 | validation/ADE_val_00000768.jpg
769 | validation/ADE_val_00000769.jpg
770 | validation/ADE_val_00000770.jpg
771 | validation/ADE_val_00000771.jpg
772 | validation/ADE_val_00000772.jpg
773 | validation/ADE_val_00000773.jpg
774 | validation/ADE_val_00000774.jpg
775 | validation/ADE_val_00000775.jpg
776 | validation/ADE_val_00000776.jpg
777 | validation/ADE_val_00000777.jpg
778 | validation/ADE_val_00000778.jpg
779 | validation/ADE_val_00000779.jpg
780 | validation/ADE_val_00000780.jpg
781 | validation/ADE_val_00000781.jpg
782 | validation/ADE_val_00000782.jpg
783 | validation/ADE_val_00000783.jpg
784 | validation/ADE_val_00000784.jpg
785 | validation/ADE_val_00000785.jpg
786 | validation/ADE_val_00000786.jpg
787 | validation/ADE_val_00000787.jpg
788 | validation/ADE_val_00000788.jpg
789 | validation/ADE_val_00000789.jpg
790 | validation/ADE_val_00000790.jpg
791 | validation/ADE_val_00000791.jpg
792 | validation/ADE_val_00000792.jpg
793 | validation/ADE_val_00000793.jpg
794 | validation/ADE_val_00000794.jpg
795 | validation/ADE_val_00000795.jpg
796 | validation/ADE_val_00000796.jpg
797 | validation/ADE_val_00000797.jpg
798 | validation/ADE_val_00000798.jpg
799 | validation/ADE_val_00000799.jpg
800 | validation/ADE_val_00000800.jpg
801 | validation/ADE_val_00000801.jpg
802 | validation/ADE_val_00000802.jpg
803 | validation/ADE_val_00000803.jpg
804 | validation/ADE_val_00000804.jpg
805 | validation/ADE_val_00000805.jpg
806 | validation/ADE_val_00000806.jpg
807 | validation/ADE_val_00000807.jpg
808 | validation/ADE_val_00000808.jpg
809 | validation/ADE_val_00000809.jpg
810 | validation/ADE_val_00000810.jpg
811 | validation/ADE_val_00000811.jpg
812 | validation/ADE_val_00000812.jpg
813 | validation/ADE_val_00000813.jpg
814 | validation/ADE_val_00000814.jpg
815 | validation/ADE_val_00000815.jpg
816 | validation/ADE_val_00000816.jpg
817 | validation/ADE_val_00000817.jpg
818 | validation/ADE_val_00000818.jpg
819 | validation/ADE_val_00000819.jpg
820 | validation/ADE_val_00000820.jpg
821 | validation/ADE_val_00000821.jpg
822 | validation/ADE_val_00000822.jpg
823 | validation/ADE_val_00000823.jpg
824 | validation/ADE_val_00000824.jpg
825 | validation/ADE_val_00000825.jpg
826 | validation/ADE_val_00000826.jpg
827 | validation/ADE_val_00000827.jpg
828 | validation/ADE_val_00000828.jpg
829 | validation/ADE_val_00000829.jpg
830 | validation/ADE_val_00000830.jpg
831 | validation/ADE_val_00000831.jpg
832 | validation/ADE_val_00000832.jpg
833 | validation/ADE_val_00000833.jpg
834 | validation/ADE_val_00000834.jpg
835 | validation/ADE_val_00000835.jpg
836 | validation/ADE_val_00000836.jpg
837 | validation/ADE_val_00000837.jpg
838 | validation/ADE_val_00000838.jpg
839 | validation/ADE_val_00000839.jpg
840 | validation/ADE_val_00000840.jpg
841 | validation/ADE_val_00000841.jpg
842 | validation/ADE_val_00000842.jpg
843 | validation/ADE_val_00000843.jpg
844 | validation/ADE_val_00000844.jpg
845 | validation/ADE_val_00000845.jpg
846 | validation/ADE_val_00000846.jpg
847 | validation/ADE_val_00000847.jpg
848 | validation/ADE_val_00000848.jpg
849 | validation/ADE_val_00000849.jpg
850 | validation/ADE_val_00000850.jpg
851 | validation/ADE_val_00000851.jpg
852 | validation/ADE_val_00000852.jpg
853 | validation/ADE_val_00000853.jpg
854 | validation/ADE_val_00000854.jpg
855 | validation/ADE_val_00000855.jpg
856 | validation/ADE_val_00000856.jpg
857 | validation/ADE_val_00000857.jpg
858 | validation/ADE_val_00000858.jpg
859 | validation/ADE_val_00000859.jpg
860 | validation/ADE_val_00000860.jpg
861 | validation/ADE_val_00000861.jpg
862 | validation/ADE_val_00000862.jpg
863 | validation/ADE_val_00000863.jpg
864 | validation/ADE_val_00000864.jpg
865 | validation/ADE_val_00000865.jpg
866 | validation/ADE_val_00000866.jpg
867 | validation/ADE_val_00000867.jpg
868 | validation/ADE_val_00000868.jpg
869 | validation/ADE_val_00000869.jpg
870 | validation/ADE_val_00000870.jpg
871 | validation/ADE_val_00000871.jpg
872 | validation/ADE_val_00000872.jpg
873 | validation/ADE_val_00000873.jpg
874 | validation/ADE_val_00000874.jpg
875 | validation/ADE_val_00000875.jpg
876 | validation/ADE_val_00000876.jpg
877 | validation/ADE_val_00000877.jpg
878 | validation/ADE_val_00000878.jpg
879 | validation/ADE_val_00000879.jpg
880 | validation/ADE_val_00000880.jpg
881 | validation/ADE_val_00000881.jpg
882 | validation/ADE_val_00000882.jpg
883 | validation/ADE_val_00000883.jpg
884 | validation/ADE_val_00000884.jpg
885 | validation/ADE_val_00000885.jpg
886 | validation/ADE_val_00000886.jpg
887 | validation/ADE_val_00000887.jpg
888 | validation/ADE_val_00000888.jpg
889 | validation/ADE_val_00000889.jpg
890 | validation/ADE_val_00000890.jpg
891 | validation/ADE_val_00000891.jpg
892 | validation/ADE_val_00000892.jpg
893 | validation/ADE_val_00000893.jpg
894 | validation/ADE_val_00000894.jpg
895 | validation/ADE_val_00000895.jpg
896 | validation/ADE_val_00000896.jpg
897 | validation/ADE_val_00000897.jpg
898 | validation/ADE_val_00000898.jpg
899 | validation/ADE_val_00000899.jpg
900 | validation/ADE_val_00000900.jpg
901 | validation/ADE_val_00000901.jpg
902 | validation/ADE_val_00000902.jpg
903 | validation/ADE_val_00000903.jpg
904 | validation/ADE_val_00000904.jpg
905 | validation/ADE_val_00000905.jpg
906 | validation/ADE_val_00000906.jpg
907 | validation/ADE_val_00000907.jpg
908 | validation/ADE_val_00000908.jpg
909 | validation/ADE_val_00000909.jpg
910 | validation/ADE_val_00000910.jpg
911 | validation/ADE_val_00000911.jpg
912 | validation/ADE_val_00000912.jpg
913 | validation/ADE_val_00000913.jpg
914 | validation/ADE_val_00000914.jpg
915 | validation/ADE_val_00000915.jpg
916 | validation/ADE_val_00000916.jpg
917 | validation/ADE_val_00000917.jpg
918 | validation/ADE_val_00000918.jpg
919 | validation/ADE_val_00000919.jpg
920 | validation/ADE_val_00000920.jpg
921 | validation/ADE_val_00000921.jpg
922 | validation/ADE_val_00000922.jpg
923 | validation/ADE_val_00000923.jpg
924 | validation/ADE_val_00000924.jpg
925 | validation/ADE_val_00000925.jpg
926 | validation/ADE_val_00000926.jpg
927 | validation/ADE_val_00000927.jpg
928 | validation/ADE_val_00000928.jpg
929 | validation/ADE_val_00000929.jpg
930 | validation/ADE_val_00000930.jpg
931 | validation/ADE_val_00000931.jpg
932 | validation/ADE_val_00000932.jpg
933 | validation/ADE_val_00000933.jpg
934 | validation/ADE_val_00000934.jpg
935 | validation/ADE_val_00000935.jpg
936 | validation/ADE_val_00000936.jpg
937 | validation/ADE_val_00000937.jpg
938 | validation/ADE_val_00000938.jpg
939 | validation/ADE_val_00000939.jpg
940 | validation/ADE_val_00000940.jpg
941 | validation/ADE_val_00000941.jpg
942 | validation/ADE_val_00000942.jpg
943 | validation/ADE_val_00000943.jpg
944 | validation/ADE_val_00000944.jpg
945 | validation/ADE_val_00000945.jpg
946 | validation/ADE_val_00000946.jpg
947 | validation/ADE_val_00000947.jpg
948 | validation/ADE_val_00000948.jpg
949 | validation/ADE_val_00000949.jpg
950 | validation/ADE_val_00000950.jpg
951 | validation/ADE_val_00000951.jpg
952 | validation/ADE_val_00000952.jpg
953 | validation/ADE_val_00000953.jpg
954 | validation/ADE_val_00000954.jpg
955 | validation/ADE_val_00000955.jpg
956 | validation/ADE_val_00000956.jpg
957 | validation/ADE_val_00000957.jpg
958 | validation/ADE_val_00000958.jpg
959 | validation/ADE_val_00000959.jpg
960 | validation/ADE_val_00000960.jpg
961 | validation/ADE_val_00000961.jpg
962 | validation/ADE_val_00000962.jpg
963 | validation/ADE_val_00000963.jpg
964 | validation/ADE_val_00000964.jpg
965 | validation/ADE_val_00000965.jpg
966 | validation/ADE_val_00000966.jpg
967 | validation/ADE_val_00000967.jpg
968 | validation/ADE_val_00000968.jpg
969 | validation/ADE_val_00000969.jpg
970 | validation/ADE_val_00000970.jpg
971 | validation/ADE_val_00000971.jpg
972 | validation/ADE_val_00000972.jpg
973 | validation/ADE_val_00000973.jpg
974 | validation/ADE_val_00000974.jpg
975 | validation/ADE_val_00000975.jpg
976 | validation/ADE_val_00000976.jpg
977 | validation/ADE_val_00000977.jpg
978 | validation/ADE_val_00000978.jpg
979 | validation/ADE_val_00000979.jpg
980 | validation/ADE_val_00000980.jpg
981 | validation/ADE_val_00000981.jpg
982 | validation/ADE_val_00000982.jpg
983 | validation/ADE_val_00000983.jpg
984 | validation/ADE_val_00000984.jpg
985 | validation/ADE_val_00000985.jpg
986 | validation/ADE_val_00000986.jpg
987 | validation/ADE_val_00000987.jpg
988 | validation/ADE_val_00000988.jpg
989 | validation/ADE_val_00000989.jpg
990 | validation/ADE_val_00000990.jpg
991 | validation/ADE_val_00000991.jpg
992 | validation/ADE_val_00000992.jpg
993 | validation/ADE_val_00000993.jpg
994 | validation/ADE_val_00000994.jpg
995 | validation/ADE_val_00000995.jpg
996 | validation/ADE_val_00000996.jpg
997 | validation/ADE_val_00000997.jpg
998 | validation/ADE_val_00000998.jpg
999 | validation/ADE_val_00000999.jpg
1000 | validation/ADE_val_00001000.jpg
1001 | validation/ADE_val_00001001.jpg
1002 | validation/ADE_val_00001002.jpg
1003 | validation/ADE_val_00001003.jpg
1004 | validation/ADE_val_00001004.jpg
1005 | validation/ADE_val_00001005.jpg
1006 | validation/ADE_val_00001006.jpg
1007 | validation/ADE_val_00001007.jpg
1008 | validation/ADE_val_00001008.jpg
1009 | validation/ADE_val_00001009.jpg
1010 | validation/ADE_val_00001010.jpg
1011 | validation/ADE_val_00001011.jpg
1012 | validation/ADE_val_00001012.jpg
1013 | validation/ADE_val_00001013.jpg
1014 | validation/ADE_val_00001014.jpg
1015 | validation/ADE_val_00001015.jpg
1016 | validation/ADE_val_00001016.jpg
1017 | validation/ADE_val_00001017.jpg
1018 | validation/ADE_val_00001018.jpg
1019 | validation/ADE_val_00001019.jpg
1020 | validation/ADE_val_00001020.jpg
1021 | validation/ADE_val_00001021.jpg
1022 | validation/ADE_val_00001022.jpg
1023 | validation/ADE_val_00001023.jpg
1024 | validation/ADE_val_00001024.jpg
1025 | validation/ADE_val_00001025.jpg
1026 | validation/ADE_val_00001026.jpg
1027 | validation/ADE_val_00001027.jpg
1028 | validation/ADE_val_00001028.jpg
1029 | validation/ADE_val_00001029.jpg
1030 | validation/ADE_val_00001030.jpg
1031 | validation/ADE_val_00001031.jpg
1032 | validation/ADE_val_00001032.jpg
1033 | validation/ADE_val_00001033.jpg
1034 | validation/ADE_val_00001034.jpg
1035 | validation/ADE_val_00001035.jpg
1036 | validation/ADE_val_00001036.jpg
1037 | validation/ADE_val_00001037.jpg
1038 | validation/ADE_val_00001038.jpg
1039 | validation/ADE_val_00001039.jpg
1040 | validation/ADE_val_00001040.jpg
1041 | validation/ADE_val_00001041.jpg
1042 | validation/ADE_val_00001042.jpg
1043 | validation/ADE_val_00001043.jpg
1044 | validation/ADE_val_00001044.jpg
1045 | validation/ADE_val_00001045.jpg
1046 | validation/ADE_val_00001046.jpg
1047 | validation/ADE_val_00001047.jpg
1048 | validation/ADE_val_00001048.jpg
1049 | validation/ADE_val_00001049.jpg
1050 | validation/ADE_val_00001050.jpg
1051 | validation/ADE_val_00001051.jpg
1052 | validation/ADE_val_00001052.jpg
1053 | validation/ADE_val_00001053.jpg
1054 | validation/ADE_val_00001054.jpg
1055 | validation/ADE_val_00001055.jpg
1056 | validation/ADE_val_00001056.jpg
1057 | validation/ADE_val_00001057.jpg
1058 | validation/ADE_val_00001058.jpg
1059 | validation/ADE_val_00001059.jpg
1060 | validation/ADE_val_00001060.jpg
1061 | validation/ADE_val_00001061.jpg
1062 | validation/ADE_val_00001062.jpg
1063 | validation/ADE_val_00001063.jpg
1064 | validation/ADE_val_00001064.jpg
1065 | validation/ADE_val_00001065.jpg
1066 | validation/ADE_val_00001066.jpg
1067 | validation/ADE_val_00001067.jpg
1068 | validation/ADE_val_00001068.jpg
1069 | validation/ADE_val_00001069.jpg
1070 | validation/ADE_val_00001070.jpg
1071 | validation/ADE_val_00001071.jpg
1072 | validation/ADE_val_00001072.jpg
1073 | validation/ADE_val_00001073.jpg
1074 | validation/ADE_val_00001074.jpg
1075 | validation/ADE_val_00001075.jpg
1076 | validation/ADE_val_00001076.jpg
1077 | validation/ADE_val_00001077.jpg
1078 | validation/ADE_val_00001078.jpg
1079 | validation/ADE_val_00001079.jpg
1080 | validation/ADE_val_00001080.jpg
1081 | validation/ADE_val_00001081.jpg
1082 | validation/ADE_val_00001082.jpg
1083 | validation/ADE_val_00001083.jpg
1084 | validation/ADE_val_00001084.jpg
1085 | validation/ADE_val_00001085.jpg
1086 | validation/ADE_val_00001086.jpg
1087 | validation/ADE_val_00001087.jpg
1088 | validation/ADE_val_00001088.jpg
1089 | validation/ADE_val_00001089.jpg
1090 | validation/ADE_val_00001090.jpg
1091 | validation/ADE_val_00001091.jpg
1092 | validation/ADE_val_00001092.jpg
1093 | validation/ADE_val_00001093.jpg
1094 | validation/ADE_val_00001094.jpg
1095 | validation/ADE_val_00001095.jpg
1096 | validation/ADE_val_00001096.jpg
1097 | validation/ADE_val_00001097.jpg
1098 | validation/ADE_val_00001098.jpg
1099 | validation/ADE_val_00001099.jpg
1100 | validation/ADE_val_00001100.jpg
1101 | validation/ADE_val_00001101.jpg
1102 | validation/ADE_val_00001102.jpg
1103 | validation/ADE_val_00001103.jpg
1104 | validation/ADE_val_00001104.jpg
1105 | validation/ADE_val_00001105.jpg
1106 | validation/ADE_val_00001106.jpg
1107 | validation/ADE_val_00001107.jpg
1108 | validation/ADE_val_00001108.jpg
1109 | validation/ADE_val_00001109.jpg
1110 | validation/ADE_val_00001110.jpg
1111 | validation/ADE_val_00001111.jpg
1112 | validation/ADE_val_00001112.jpg
1113 | validation/ADE_val_00001113.jpg
1114 | validation/ADE_val_00001114.jpg
1115 | validation/ADE_val_00001115.jpg
1116 | validation/ADE_val_00001116.jpg
1117 | validation/ADE_val_00001117.jpg
1118 | validation/ADE_val_00001118.jpg
1119 | validation/ADE_val_00001119.jpg
1120 | validation/ADE_val_00001120.jpg
1121 | validation/ADE_val_00001121.jpg
1122 | validation/ADE_val_00001122.jpg
1123 | validation/ADE_val_00001123.jpg
1124 | validation/ADE_val_00001124.jpg
1125 | validation/ADE_val_00001125.jpg
1126 | validation/ADE_val_00001126.jpg
1127 | validation/ADE_val_00001127.jpg
1128 | validation/ADE_val_00001128.jpg
1129 | validation/ADE_val_00001129.jpg
1130 | validation/ADE_val_00001130.jpg
1131 | validation/ADE_val_00001131.jpg
1132 | validation/ADE_val_00001132.jpg
1133 | validation/ADE_val_00001133.jpg
1134 | validation/ADE_val_00001134.jpg
1135 | validation/ADE_val_00001135.jpg
1136 | validation/ADE_val_00001136.jpg
1137 | validation/ADE_val_00001137.jpg
1138 | validation/ADE_val_00001138.jpg
1139 | validation/ADE_val_00001139.jpg
1140 | validation/ADE_val_00001140.jpg
1141 | validation/ADE_val_00001141.jpg
1142 | validation/ADE_val_00001142.jpg
1143 | validation/ADE_val_00001143.jpg
1144 | validation/ADE_val_00001144.jpg
1145 | validation/ADE_val_00001145.jpg
1146 | validation/ADE_val_00001146.jpg
1147 | validation/ADE_val_00001147.jpg
1148 | validation/ADE_val_00001148.jpg
1149 | validation/ADE_val_00001149.jpg
1150 | validation/ADE_val_00001150.jpg
1151 | validation/ADE_val_00001151.jpg
1152 | validation/ADE_val_00001152.jpg
1153 | validation/ADE_val_00001153.jpg
1154 | validation/ADE_val_00001154.jpg
1155 | validation/ADE_val_00001155.jpg
1156 | validation/ADE_val_00001156.jpg
1157 | validation/ADE_val_00001157.jpg
1158 | validation/ADE_val_00001158.jpg
1159 | validation/ADE_val_00001159.jpg
1160 | validation/ADE_val_00001160.jpg
1161 | validation/ADE_val_00001161.jpg
1162 | validation/ADE_val_00001162.jpg
1163 | validation/ADE_val_00001163.jpg
1164 | validation/ADE_val_00001164.jpg
1165 | validation/ADE_val_00001165.jpg
1166 | validation/ADE_val_00001166.jpg
1167 | validation/ADE_val_00001167.jpg
1168 | validation/ADE_val_00001168.jpg
1169 | validation/ADE_val_00001169.jpg
1170 | validation/ADE_val_00001170.jpg
1171 | validation/ADE_val_00001171.jpg
1172 | validation/ADE_val_00001172.jpg
1173 | validation/ADE_val_00001173.jpg
1174 | validation/ADE_val_00001174.jpg
1175 | validation/ADE_val_00001175.jpg
1176 | validation/ADE_val_00001176.jpg
1177 | validation/ADE_val_00001177.jpg
1178 | validation/ADE_val_00001178.jpg
1179 | validation/ADE_val_00001179.jpg
1180 | validation/ADE_val_00001180.jpg
1181 | validation/ADE_val_00001181.jpg
1182 | validation/ADE_val_00001182.jpg
1183 | validation/ADE_val_00001183.jpg
1184 | validation/ADE_val_00001184.jpg
1185 | validation/ADE_val_00001185.jpg
1186 | validation/ADE_val_00001186.jpg
1187 | validation/ADE_val_00001187.jpg
1188 | validation/ADE_val_00001188.jpg
1189 | validation/ADE_val_00001189.jpg
1190 | validation/ADE_val_00001190.jpg
1191 | validation/ADE_val_00001191.jpg
1192 | validation/ADE_val_00001192.jpg
1193 | validation/ADE_val_00001193.jpg
1194 | validation/ADE_val_00001194.jpg
1195 | validation/ADE_val_00001195.jpg
1196 | validation/ADE_val_00001196.jpg
1197 | validation/ADE_val_00001197.jpg
1198 | validation/ADE_val_00001198.jpg
1199 | validation/ADE_val_00001199.jpg
1200 | validation/ADE_val_00001200.jpg
1201 | validation/ADE_val_00001201.jpg
1202 | validation/ADE_val_00001202.jpg
1203 | validation/ADE_val_00001203.jpg
1204 | validation/ADE_val_00001204.jpg
1205 | validation/ADE_val_00001205.jpg
1206 | validation/ADE_val_00001206.jpg
1207 | validation/ADE_val_00001207.jpg
1208 | validation/ADE_val_00001208.jpg
1209 | validation/ADE_val_00001209.jpg
1210 | validation/ADE_val_00001210.jpg
1211 | validation/ADE_val_00001211.jpg
1212 | validation/ADE_val_00001212.jpg
1213 | validation/ADE_val_00001213.jpg
1214 | validation/ADE_val_00001214.jpg
1215 | validation/ADE_val_00001215.jpg
1216 | validation/ADE_val_00001216.jpg
1217 | validation/ADE_val_00001217.jpg
1218 | validation/ADE_val_00001218.jpg
1219 | validation/ADE_val_00001219.jpg
1220 | validation/ADE_val_00001220.jpg
1221 | validation/ADE_val_00001221.jpg
1222 | validation/ADE_val_00001222.jpg
1223 | validation/ADE_val_00001223.jpg
1224 | validation/ADE_val_00001224.jpg
1225 | validation/ADE_val_00001225.jpg
1226 | validation/ADE_val_00001226.jpg
1227 | validation/ADE_val_00001227.jpg
1228 | validation/ADE_val_00001228.jpg
1229 | validation/ADE_val_00001229.jpg
1230 | validation/ADE_val_00001230.jpg
1231 | validation/ADE_val_00001231.jpg
1232 | validation/ADE_val_00001232.jpg
1233 | validation/ADE_val_00001233.jpg
1234 | validation/ADE_val_00001234.jpg
1235 | validation/ADE_val_00001235.jpg
1236 | validation/ADE_val_00001236.jpg
1237 | validation/ADE_val_00001237.jpg
1238 | validation/ADE_val_00001238.jpg
1239 | validation/ADE_val_00001239.jpg
1240 | validation/ADE_val_00001240.jpg
1241 | validation/ADE_val_00001241.jpg
1242 | validation/ADE_val_00001242.jpg
1243 | validation/ADE_val_00001243.jpg
1244 | validation/ADE_val_00001244.jpg
1245 | validation/ADE_val_00001245.jpg
1246 | validation/ADE_val_00001246.jpg
1247 | validation/ADE_val_00001247.jpg
1248 | validation/ADE_val_00001248.jpg
1249 | validation/ADE_val_00001249.jpg
1250 | validation/ADE_val_00001250.jpg
1251 | validation/ADE_val_00001251.jpg
1252 | validation/ADE_val_00001252.jpg
1253 | validation/ADE_val_00001253.jpg
1254 | validation/ADE_val_00001254.jpg
1255 | validation/ADE_val_00001255.jpg
1256 | validation/ADE_val_00001256.jpg
1257 | validation/ADE_val_00001257.jpg
1258 | validation/ADE_val_00001258.jpg
1259 | validation/ADE_val_00001259.jpg
1260 | validation/ADE_val_00001260.jpg
1261 | validation/ADE_val_00001261.jpg
1262 | validation/ADE_val_00001262.jpg
1263 | validation/ADE_val_00001263.jpg
1264 | validation/ADE_val_00001264.jpg
1265 | validation/ADE_val_00001265.jpg
1266 | validation/ADE_val_00001266.jpg
1267 | validation/ADE_val_00001267.jpg
1268 | validation/ADE_val_00001268.jpg
1269 | validation/ADE_val_00001269.jpg
1270 | validation/ADE_val_00001270.jpg
1271 | validation/ADE_val_00001271.jpg
1272 | validation/ADE_val_00001272.jpg
1273 | validation/ADE_val_00001273.jpg
1274 | validation/ADE_val_00001274.jpg
1275 | validation/ADE_val_00001275.jpg
1276 | validation/ADE_val_00001276.jpg
1277 | validation/ADE_val_00001277.jpg
1278 | validation/ADE_val_00001278.jpg
1279 | validation/ADE_val_00001279.jpg
1280 | validation/ADE_val_00001280.jpg
1281 | validation/ADE_val_00001281.jpg
1282 | validation/ADE_val_00001282.jpg
1283 | validation/ADE_val_00001283.jpg
1284 | validation/ADE_val_00001284.jpg
1285 | validation/ADE_val_00001285.jpg
1286 | validation/ADE_val_00001286.jpg
1287 | validation/ADE_val_00001287.jpg
1288 | validation/ADE_val_00001288.jpg
1289 | validation/ADE_val_00001289.jpg
1290 | validation/ADE_val_00001290.jpg
1291 | validation/ADE_val_00001291.jpg
1292 | validation/ADE_val_00001292.jpg
1293 | validation/ADE_val_00001293.jpg
1294 | validation/ADE_val_00001294.jpg
1295 | validation/ADE_val_00001295.jpg
1296 | validation/ADE_val_00001296.jpg
1297 | validation/ADE_val_00001297.jpg
1298 | validation/ADE_val_00001298.jpg
1299 | validation/ADE_val_00001299.jpg
1300 | validation/ADE_val_00001300.jpg
1301 | validation/ADE_val_00001301.jpg
1302 | validation/ADE_val_00001302.jpg
1303 | validation/ADE_val_00001303.jpg
1304 | validation/ADE_val_00001304.jpg
1305 | validation/ADE_val_00001305.jpg
1306 | validation/ADE_val_00001306.jpg
1307 | validation/ADE_val_00001307.jpg
1308 | validation/ADE_val_00001308.jpg
1309 | validation/ADE_val_00001309.jpg
1310 | validation/ADE_val_00001310.jpg
1311 | validation/ADE_val_00001311.jpg
1312 | validation/ADE_val_00001312.jpg
1313 | validation/ADE_val_00001313.jpg
1314 | validation/ADE_val_00001314.jpg
1315 | validation/ADE_val_00001315.jpg
1316 | validation/ADE_val_00001316.jpg
1317 | validation/ADE_val_00001317.jpg
1318 | validation/ADE_val_00001318.jpg
1319 | validation/ADE_val_00001319.jpg
1320 | validation/ADE_val_00001320.jpg
1321 | validation/ADE_val_00001321.jpg
1322 | validation/ADE_val_00001322.jpg
1323 | validation/ADE_val_00001323.jpg
1324 | validation/ADE_val_00001324.jpg
1325 | validation/ADE_val_00001325.jpg
1326 | validation/ADE_val_00001326.jpg
1327 | validation/ADE_val_00001327.jpg
1328 | validation/ADE_val_00001328.jpg
1329 | validation/ADE_val_00001329.jpg
1330 | validation/ADE_val_00001330.jpg
1331 | validation/ADE_val_00001331.jpg
1332 | validation/ADE_val_00001332.jpg
1333 | validation/ADE_val_00001333.jpg
1334 | validation/ADE_val_00001334.jpg
1335 | validation/ADE_val_00001335.jpg
1336 | validation/ADE_val_00001336.jpg
1337 | validation/ADE_val_00001337.jpg
1338 | validation/ADE_val_00001338.jpg
1339 | validation/ADE_val_00001339.jpg
1340 | validation/ADE_val_00001340.jpg
1341 | validation/ADE_val_00001341.jpg
1342 | validation/ADE_val_00001342.jpg
1343 | validation/ADE_val_00001343.jpg
1344 | validation/ADE_val_00001344.jpg
1345 | validation/ADE_val_00001345.jpg
1346 | validation/ADE_val_00001346.jpg
1347 | validation/ADE_val_00001347.jpg
1348 | validation/ADE_val_00001348.jpg
1349 | validation/ADE_val_00001349.jpg
1350 | validation/ADE_val_00001350.jpg
1351 | validation/ADE_val_00001351.jpg
1352 | validation/ADE_val_00001352.jpg
1353 | validation/ADE_val_00001353.jpg
1354 | validation/ADE_val_00001354.jpg
1355 | validation/ADE_val_00001355.jpg
1356 | validation/ADE_val_00001356.jpg
1357 | validation/ADE_val_00001357.jpg
1358 | validation/ADE_val_00001358.jpg
1359 | validation/ADE_val_00001359.jpg
1360 | validation/ADE_val_00001360.jpg
1361 | validation/ADE_val_00001361.jpg
1362 | validation/ADE_val_00001362.jpg
1363 | validation/ADE_val_00001363.jpg
1364 | validation/ADE_val_00001364.jpg
1365 | validation/ADE_val_00001365.jpg
1366 | validation/ADE_val_00001366.jpg
1367 | validation/ADE_val_00001367.jpg
1368 | validation/ADE_val_00001368.jpg
1369 | validation/ADE_val_00001369.jpg
1370 | validation/ADE_val_00001370.jpg
1371 | validation/ADE_val_00001371.jpg
1372 | validation/ADE_val_00001372.jpg
1373 | validation/ADE_val_00001373.jpg
1374 | validation/ADE_val_00001374.jpg
1375 | validation/ADE_val_00001375.jpg
1376 | validation/ADE_val_00001376.jpg
1377 | validation/ADE_val_00001377.jpg
1378 | validation/ADE_val_00001378.jpg
1379 | validation/ADE_val_00001379.jpg
1380 | validation/ADE_val_00001380.jpg
1381 | validation/ADE_val_00001381.jpg
1382 | validation/ADE_val_00001382.jpg
1383 | validation/ADE_val_00001383.jpg
1384 | validation/ADE_val_00001384.jpg
1385 | validation/ADE_val_00001385.jpg
1386 | validation/ADE_val_00001386.jpg
1387 | validation/ADE_val_00001387.jpg
1388 | validation/ADE_val_00001388.jpg
1389 | validation/ADE_val_00001389.jpg
1390 | validation/ADE_val_00001390.jpg
1391 | validation/ADE_val_00001391.jpg
1392 | validation/ADE_val_00001392.jpg
1393 | validation/ADE_val_00001393.jpg
1394 | validation/ADE_val_00001394.jpg
1395 | validation/ADE_val_00001395.jpg
1396 | validation/ADE_val_00001396.jpg
1397 | validation/ADE_val_00001397.jpg
1398 | validation/ADE_val_00001398.jpg
1399 | validation/ADE_val_00001399.jpg
1400 | validation/ADE_val_00001400.jpg
1401 | validation/ADE_val_00001401.jpg
1402 | validation/ADE_val_00001402.jpg
1403 | validation/ADE_val_00001403.jpg
1404 | validation/ADE_val_00001404.jpg
1405 | validation/ADE_val_00001405.jpg
1406 | validation/ADE_val_00001406.jpg
1407 | validation/ADE_val_00001407.jpg
1408 | validation/ADE_val_00001408.jpg
1409 | validation/ADE_val_00001409.jpg
1410 | validation/ADE_val_00001410.jpg
1411 | validation/ADE_val_00001411.jpg
1412 | validation/ADE_val_00001412.jpg
1413 | validation/ADE_val_00001413.jpg
1414 | validation/ADE_val_00001414.jpg
1415 | validation/ADE_val_00001415.jpg
1416 | validation/ADE_val_00001416.jpg
1417 | validation/ADE_val_00001417.jpg
1418 | validation/ADE_val_00001418.jpg
1419 | validation/ADE_val_00001419.jpg
1420 | validation/ADE_val_00001420.jpg
1421 | validation/ADE_val_00001421.jpg
1422 | validation/ADE_val_00001422.jpg
1423 | validation/ADE_val_00001423.jpg
1424 | validation/ADE_val_00001424.jpg
1425 | validation/ADE_val_00001425.jpg
1426 | validation/ADE_val_00001426.jpg
1427 | validation/ADE_val_00001427.jpg
1428 | validation/ADE_val_00001428.jpg
1429 | validation/ADE_val_00001429.jpg
1430 | validation/ADE_val_00001430.jpg
1431 | validation/ADE_val_00001431.jpg
1432 | validation/ADE_val_00001432.jpg
1433 | validation/ADE_val_00001433.jpg
1434 | validation/ADE_val_00001434.jpg
1435 | validation/ADE_val_00001435.jpg
1436 | validation/ADE_val_00001436.jpg
1437 | validation/ADE_val_00001437.jpg
1438 | validation/ADE_val_00001438.jpg
1439 | validation/ADE_val_00001439.jpg
1440 | validation/ADE_val_00001440.jpg
1441 | validation/ADE_val_00001441.jpg
1442 | validation/ADE_val_00001442.jpg
1443 | validation/ADE_val_00001443.jpg
1444 | validation/ADE_val_00001444.jpg
1445 | validation/ADE_val_00001445.jpg
1446 | validation/ADE_val_00001446.jpg
1447 | validation/ADE_val_00001447.jpg
1448 | validation/ADE_val_00001448.jpg
1449 | validation/ADE_val_00001449.jpg
1450 | validation/ADE_val_00001450.jpg
1451 | validation/ADE_val_00001451.jpg
1452 | validation/ADE_val_00001452.jpg
1453 | validation/ADE_val_00001453.jpg
1454 | validation/ADE_val_00001454.jpg
1455 | validation/ADE_val_00001455.jpg
1456 | validation/ADE_val_00001456.jpg
1457 | validation/ADE_val_00001457.jpg
1458 | validation/ADE_val_00001458.jpg
1459 | validation/ADE_val_00001459.jpg
1460 | validation/ADE_val_00001460.jpg
1461 | validation/ADE_val_00001461.jpg
1462 | validation/ADE_val_00001462.jpg
1463 | validation/ADE_val_00001463.jpg
1464 | validation/ADE_val_00001464.jpg
1465 | validation/ADE_val_00001465.jpg
1466 | validation/ADE_val_00001466.jpg
1467 | validation/ADE_val_00001467.jpg
1468 | validation/ADE_val_00001468.jpg
1469 | validation/ADE_val_00001469.jpg
1470 | validation/ADE_val_00001470.jpg
1471 | validation/ADE_val_00001471.jpg
1472 | validation/ADE_val_00001472.jpg
1473 | validation/ADE_val_00001473.jpg
1474 | validation/ADE_val_00001474.jpg
1475 | validation/ADE_val_00001475.jpg
1476 | validation/ADE_val_00001476.jpg
1477 | validation/ADE_val_00001477.jpg
1478 | validation/ADE_val_00001478.jpg
1479 | validation/ADE_val_00001479.jpg
1480 | validation/ADE_val_00001480.jpg
1481 | validation/ADE_val_00001481.jpg
1482 | validation/ADE_val_00001482.jpg
1483 | validation/ADE_val_00001483.jpg
1484 | validation/ADE_val_00001484.jpg
1485 | validation/ADE_val_00001485.jpg
1486 | validation/ADE_val_00001486.jpg
1487 | validation/ADE_val_00001487.jpg
1488 | validation/ADE_val_00001488.jpg
1489 | validation/ADE_val_00001489.jpg
1490 | validation/ADE_val_00001490.jpg
1491 | validation/ADE_val_00001491.jpg
1492 | validation/ADE_val_00001492.jpg
1493 | validation/ADE_val_00001493.jpg
1494 | validation/ADE_val_00001494.jpg
1495 | validation/ADE_val_00001495.jpg
1496 | validation/ADE_val_00001496.jpg
1497 | validation/ADE_val_00001497.jpg
1498 | validation/ADE_val_00001498.jpg
1499 | validation/ADE_val_00001499.jpg
1500 | validation/ADE_val_00001500.jpg
1501 | validation/ADE_val_00001501.jpg
1502 | validation/ADE_val_00001502.jpg
1503 | validation/ADE_val_00001503.jpg
1504 | validation/ADE_val_00001504.jpg
1505 | validation/ADE_val_00001505.jpg
1506 | validation/ADE_val_00001506.jpg
1507 | validation/ADE_val_00001507.jpg
1508 | validation/ADE_val_00001508.jpg
1509 | validation/ADE_val_00001509.jpg
1510 | validation/ADE_val_00001510.jpg
1511 | validation/ADE_val_00001511.jpg
1512 | validation/ADE_val_00001512.jpg
1513 | validation/ADE_val_00001513.jpg
1514 | validation/ADE_val_00001514.jpg
1515 | validation/ADE_val_00001515.jpg
1516 | validation/ADE_val_00001516.jpg
1517 | validation/ADE_val_00001517.jpg
1518 | validation/ADE_val_00001518.jpg
1519 | validation/ADE_val_00001519.jpg
1520 | validation/ADE_val_00001520.jpg
1521 | validation/ADE_val_00001521.jpg
1522 | validation/ADE_val_00001522.jpg
1523 | validation/ADE_val_00001523.jpg
1524 | validation/ADE_val_00001524.jpg
1525 | validation/ADE_val_00001525.jpg
1526 | validation/ADE_val_00001526.jpg
1527 | validation/ADE_val_00001527.jpg
1528 | validation/ADE_val_00001528.jpg
1529 | validation/ADE_val_00001529.jpg
1530 | validation/ADE_val_00001530.jpg
1531 | validation/ADE_val_00001531.jpg
1532 | validation/ADE_val_00001532.jpg
1533 | validation/ADE_val_00001533.jpg
1534 | validation/ADE_val_00001534.jpg
1535 | validation/ADE_val_00001535.jpg
1536 | validation/ADE_val_00001536.jpg
1537 | validation/ADE_val_00001537.jpg
1538 | validation/ADE_val_00001538.jpg
1539 | validation/ADE_val_00001539.jpg
1540 | validation/ADE_val_00001540.jpg
1541 | validation/ADE_val_00001541.jpg
1542 | validation/ADE_val_00001542.jpg
1543 | validation/ADE_val_00001543.jpg
1544 | validation/ADE_val_00001544.jpg
1545 | validation/ADE_val_00001545.jpg
1546 | validation/ADE_val_00001546.jpg
1547 | validation/ADE_val_00001547.jpg
1548 | validation/ADE_val_00001548.jpg
1549 | validation/ADE_val_00001549.jpg
1550 | validation/ADE_val_00001550.jpg
1551 | validation/ADE_val_00001551.jpg
1552 | validation/ADE_val_00001552.jpg
1553 | validation/ADE_val_00001553.jpg
1554 | validation/ADE_val_00001554.jpg
1555 | validation/ADE_val_00001555.jpg
1556 | validation/ADE_val_00001556.jpg
1557 | validation/ADE_val_00001557.jpg
1558 | validation/ADE_val_00001558.jpg
1559 | validation/ADE_val_00001559.jpg
1560 | validation/ADE_val_00001560.jpg
1561 | validation/ADE_val_00001561.jpg
1562 | validation/ADE_val_00001562.jpg
1563 | validation/ADE_val_00001563.jpg
1564 | validation/ADE_val_00001564.jpg
1565 | validation/ADE_val_00001565.jpg
1566 | validation/ADE_val_00001566.jpg
1567 | validation/ADE_val_00001567.jpg
1568 | validation/ADE_val_00001568.jpg
1569 | validation/ADE_val_00001569.jpg
1570 | validation/ADE_val_00001570.jpg
1571 | validation/ADE_val_00001571.jpg
1572 | validation/ADE_val_00001572.jpg
1573 | validation/ADE_val_00001573.jpg
1574 | validation/ADE_val_00001574.jpg
1575 | validation/ADE_val_00001575.jpg
1576 | validation/ADE_val_00001576.jpg
1577 | validation/ADE_val_00001577.jpg
1578 | validation/ADE_val_00001578.jpg
1579 | validation/ADE_val_00001579.jpg
1580 | validation/ADE_val_00001580.jpg
1581 | validation/ADE_val_00001581.jpg
1582 | validation/ADE_val_00001582.jpg
1583 | validation/ADE_val_00001583.jpg
1584 | validation/ADE_val_00001584.jpg
1585 | validation/ADE_val_00001585.jpg
1586 | validation/ADE_val_00001586.jpg
1587 | validation/ADE_val_00001587.jpg
1588 | validation/ADE_val_00001588.jpg
1589 | validation/ADE_val_00001589.jpg
1590 | validation/ADE_val_00001590.jpg
1591 | validation/ADE_val_00001591.jpg
1592 | validation/ADE_val_00001592.jpg
1593 | validation/ADE_val_00001593.jpg
1594 | validation/ADE_val_00001594.jpg
1595 | validation/ADE_val_00001595.jpg
1596 | validation/ADE_val_00001596.jpg
1597 | validation/ADE_val_00001597.jpg
1598 | validation/ADE_val_00001598.jpg
1599 | validation/ADE_val_00001599.jpg
1600 | validation/ADE_val_00001600.jpg
1601 | validation/ADE_val_00001601.jpg
1602 | validation/ADE_val_00001602.jpg
1603 | validation/ADE_val_00001603.jpg
1604 | validation/ADE_val_00001604.jpg
1605 | validation/ADE_val_00001605.jpg
1606 | validation/ADE_val_00001606.jpg
1607 | validation/ADE_val_00001607.jpg
1608 | validation/ADE_val_00001608.jpg
1609 | validation/ADE_val_00001609.jpg
1610 | validation/ADE_val_00001610.jpg
1611 | validation/ADE_val_00001611.jpg
1612 | validation/ADE_val_00001612.jpg
1613 | validation/ADE_val_00001613.jpg
1614 | validation/ADE_val_00001614.jpg
1615 | validation/ADE_val_00001615.jpg
1616 | validation/ADE_val_00001616.jpg
1617 | validation/ADE_val_00001617.jpg
1618 | validation/ADE_val_00001618.jpg
1619 | validation/ADE_val_00001619.jpg
1620 | validation/ADE_val_00001620.jpg
1621 | validation/ADE_val_00001621.jpg
1622 | validation/ADE_val_00001622.jpg
1623 | validation/ADE_val_00001623.jpg
1624 | validation/ADE_val_00001624.jpg
1625 | validation/ADE_val_00001625.jpg
1626 | validation/ADE_val_00001626.jpg
1627 | validation/ADE_val_00001627.jpg
1628 | validation/ADE_val_00001628.jpg
1629 | validation/ADE_val_00001629.jpg
1630 | validation/ADE_val_00001630.jpg
1631 | validation/ADE_val_00001631.jpg
1632 | validation/ADE_val_00001632.jpg
1633 | validation/ADE_val_00001633.jpg
1634 | validation/ADE_val_00001634.jpg
1635 | validation/ADE_val_00001635.jpg
1636 | validation/ADE_val_00001636.jpg
1637 | validation/ADE_val_00001637.jpg
1638 | validation/ADE_val_00001638.jpg
1639 | validation/ADE_val_00001639.jpg
1640 | validation/ADE_val_00001640.jpg
1641 | validation/ADE_val_00001641.jpg
1642 | validation/ADE_val_00001642.jpg
1643 | validation/ADE_val_00001643.jpg
1644 | validation/ADE_val_00001644.jpg
1645 | validation/ADE_val_00001645.jpg
1646 | validation/ADE_val_00001646.jpg
1647 | validation/ADE_val_00001647.jpg
1648 | validation/ADE_val_00001648.jpg
1649 | validation/ADE_val_00001649.jpg
1650 | validation/ADE_val_00001650.jpg
1651 | validation/ADE_val_00001651.jpg
1652 | validation/ADE_val_00001652.jpg
1653 | validation/ADE_val_00001653.jpg
1654 | validation/ADE_val_00001654.jpg
1655 | validation/ADE_val_00001655.jpg
1656 | validation/ADE_val_00001656.jpg
1657 | validation/ADE_val_00001657.jpg
1658 | validation/ADE_val_00001658.jpg
1659 | validation/ADE_val_00001659.jpg
1660 | validation/ADE_val_00001660.jpg
1661 | validation/ADE_val_00001661.jpg
1662 | validation/ADE_val_00001662.jpg
1663 | validation/ADE_val_00001663.jpg
1664 | validation/ADE_val_00001664.jpg
1665 | validation/ADE_val_00001665.jpg
1666 | validation/ADE_val_00001666.jpg
1667 | validation/ADE_val_00001667.jpg
1668 | validation/ADE_val_00001668.jpg
1669 | validation/ADE_val_00001669.jpg
1670 | validation/ADE_val_00001670.jpg
1671 | validation/ADE_val_00001671.jpg
1672 | validation/ADE_val_00001672.jpg
1673 | validation/ADE_val_00001673.jpg
1674 | validation/ADE_val_00001674.jpg
1675 | validation/ADE_val_00001675.jpg
1676 | validation/ADE_val_00001676.jpg
1677 | validation/ADE_val_00001677.jpg
1678 | validation/ADE_val_00001678.jpg
1679 | validation/ADE_val_00001679.jpg
1680 | validation/ADE_val_00001680.jpg
1681 | validation/ADE_val_00001681.jpg
1682 | validation/ADE_val_00001682.jpg
1683 | validation/ADE_val_00001683.jpg
1684 | validation/ADE_val_00001684.jpg
1685 | validation/ADE_val_00001685.jpg
1686 | validation/ADE_val_00001686.jpg
1687 | validation/ADE_val_00001687.jpg
1688 | validation/ADE_val_00001688.jpg
1689 | validation/ADE_val_00001689.jpg
1690 | validation/ADE_val_00001690.jpg
1691 | validation/ADE_val_00001691.jpg
1692 | validation/ADE_val_00001692.jpg
1693 | validation/ADE_val_00001693.jpg
1694 | validation/ADE_val_00001694.jpg
1695 | validation/ADE_val_00001695.jpg
1696 | validation/ADE_val_00001696.jpg
1697 | validation/ADE_val_00001697.jpg
1698 | validation/ADE_val_00001698.jpg
1699 | validation/ADE_val_00001699.jpg
1700 | validation/ADE_val_00001700.jpg
1701 | validation/ADE_val_00001701.jpg
1702 | validation/ADE_val_00001702.jpg
1703 | validation/ADE_val_00001703.jpg
1704 | validation/ADE_val_00001704.jpg
1705 | validation/ADE_val_00001705.jpg
1706 | validation/ADE_val_00001706.jpg
1707 | validation/ADE_val_00001707.jpg
1708 | validation/ADE_val_00001708.jpg
1709 | validation/ADE_val_00001709.jpg
1710 | validation/ADE_val_00001710.jpg
1711 | validation/ADE_val_00001711.jpg
1712 | validation/ADE_val_00001712.jpg
1713 | validation/ADE_val_00001713.jpg
1714 | validation/ADE_val_00001714.jpg
1715 | validation/ADE_val_00001715.jpg
1716 | validation/ADE_val_00001716.jpg
1717 | validation/ADE_val_00001717.jpg
1718 | validation/ADE_val_00001718.jpg
1719 | validation/ADE_val_00001719.jpg
1720 | validation/ADE_val_00001720.jpg
1721 | validation/ADE_val_00001721.jpg
1722 | validation/ADE_val_00001722.jpg
1723 | validation/ADE_val_00001723.jpg
1724 | validation/ADE_val_00001724.jpg
1725 | validation/ADE_val_00001725.jpg
1726 | validation/ADE_val_00001726.jpg
1727 | validation/ADE_val_00001727.jpg
1728 | validation/ADE_val_00001728.jpg
1729 | validation/ADE_val_00001729.jpg
1730 | validation/ADE_val_00001730.jpg
1731 | validation/ADE_val_00001731.jpg
1732 | validation/ADE_val_00001732.jpg
1733 | validation/ADE_val_00001733.jpg
1734 | validation/ADE_val_00001734.jpg
1735 | validation/ADE_val_00001735.jpg
1736 | validation/ADE_val_00001736.jpg
1737 | validation/ADE_val_00001737.jpg
1738 | validation/ADE_val_00001738.jpg
1739 | validation/ADE_val_00001739.jpg
1740 | validation/ADE_val_00001740.jpg
1741 | validation/ADE_val_00001741.jpg
1742 | validation/ADE_val_00001742.jpg
1743 | validation/ADE_val_00001743.jpg
1744 | validation/ADE_val_00001744.jpg
1745 | validation/ADE_val_00001745.jpg
1746 | validation/ADE_val_00001746.jpg
1747 | validation/ADE_val_00001747.jpg
1748 | validation/ADE_val_00001748.jpg
1749 | validation/ADE_val_00001749.jpg
1750 | validation/ADE_val_00001750.jpg
1751 | validation/ADE_val_00001751.jpg
1752 | validation/ADE_val_00001752.jpg
1753 | validation/ADE_val_00001753.jpg
1754 | validation/ADE_val_00001754.jpg
1755 | validation/ADE_val_00001755.jpg
1756 | validation/ADE_val_00001756.jpg
1757 | validation/ADE_val_00001757.jpg
1758 | validation/ADE_val_00001758.jpg
1759 | validation/ADE_val_00001759.jpg
1760 | validation/ADE_val_00001760.jpg
1761 | validation/ADE_val_00001761.jpg
1762 | validation/ADE_val_00001762.jpg
1763 | validation/ADE_val_00001763.jpg
1764 | validation/ADE_val_00001764.jpg
1765 | validation/ADE_val_00001765.jpg
1766 | validation/ADE_val_00001766.jpg
1767 | validation/ADE_val_00001767.jpg
1768 | validation/ADE_val_00001768.jpg
1769 | validation/ADE_val_00001769.jpg
1770 | validation/ADE_val_00001770.jpg
1771 | validation/ADE_val_00001771.jpg
1772 | validation/ADE_val_00001772.jpg
1773 | validation/ADE_val_00001773.jpg
1774 | validation/ADE_val_00001774.jpg
1775 | validation/ADE_val_00001775.jpg
1776 | validation/ADE_val_00001776.jpg
1777 | validation/ADE_val_00001777.jpg
1778 | validation/ADE_val_00001778.jpg
1779 | validation/ADE_val_00001779.jpg
1780 | validation/ADE_val_00001780.jpg
1781 | validation/ADE_val_00001781.jpg
1782 | validation/ADE_val_00001782.jpg
1783 | validation/ADE_val_00001783.jpg
1784 | validation/ADE_val_00001784.jpg
1785 | validation/ADE_val_00001785.jpg
1786 | validation/ADE_val_00001786.jpg
1787 | validation/ADE_val_00001787.jpg
1788 | validation/ADE_val_00001788.jpg
1789 | validation/ADE_val_00001789.jpg
1790 | validation/ADE_val_00001790.jpg
1791 | validation/ADE_val_00001791.jpg
1792 | validation/ADE_val_00001792.jpg
1793 | validation/ADE_val_00001793.jpg
1794 | validation/ADE_val_00001794.jpg
1795 | validation/ADE_val_00001795.jpg
1796 | validation/ADE_val_00001796.jpg
1797 | validation/ADE_val_00001797.jpg
1798 | validation/ADE_val_00001798.jpg
1799 | validation/ADE_val_00001799.jpg
1800 | validation/ADE_val_00001800.jpg
1801 | validation/ADE_val_00001801.jpg
1802 | validation/ADE_val_00001802.jpg
1803 | validation/ADE_val_00001803.jpg
1804 | validation/ADE_val_00001804.jpg
1805 | validation/ADE_val_00001805.jpg
1806 | validation/ADE_val_00001806.jpg
1807 | validation/ADE_val_00001807.jpg
1808 | validation/ADE_val_00001808.jpg
1809 | validation/ADE_val_00001809.jpg
1810 | validation/ADE_val_00001810.jpg
1811 | validation/ADE_val_00001811.jpg
1812 | validation/ADE_val_00001812.jpg
1813 | validation/ADE_val_00001813.jpg
1814 | validation/ADE_val_00001814.jpg
1815 | validation/ADE_val_00001815.jpg
1816 | validation/ADE_val_00001816.jpg
1817 | validation/ADE_val_00001817.jpg
1818 | validation/ADE_val_00001818.jpg
1819 | validation/ADE_val_00001819.jpg
1820 | validation/ADE_val_00001820.jpg
1821 | validation/ADE_val_00001821.jpg
1822 | validation/ADE_val_00001822.jpg
1823 | validation/ADE_val_00001823.jpg
1824 | validation/ADE_val_00001824.jpg
1825 | validation/ADE_val_00001825.jpg
1826 | validation/ADE_val_00001826.jpg
1827 | validation/ADE_val_00001827.jpg
1828 | validation/ADE_val_00001828.jpg
1829 | validation/ADE_val_00001829.jpg
1830 | validation/ADE_val_00001830.jpg
1831 | validation/ADE_val_00001831.jpg
1832 | validation/ADE_val_00001832.jpg
1833 | validation/ADE_val_00001833.jpg
1834 | validation/ADE_val_00001834.jpg
1835 | validation/ADE_val_00001835.jpg
1836 | validation/ADE_val_00001836.jpg
1837 | validation/ADE_val_00001837.jpg
1838 | validation/ADE_val_00001838.jpg
1839 | validation/ADE_val_00001839.jpg
1840 | validation/ADE_val_00001840.jpg
1841 | validation/ADE_val_00001841.jpg
1842 | validation/ADE_val_00001842.jpg
1843 | validation/ADE_val_00001843.jpg
1844 | validation/ADE_val_00001844.jpg
1845 | validation/ADE_val_00001845.jpg
1846 | validation/ADE_val_00001846.jpg
1847 | validation/ADE_val_00001847.jpg
1848 | validation/ADE_val_00001848.jpg
1849 | validation/ADE_val_00001849.jpg
1850 | validation/ADE_val_00001850.jpg
1851 | validation/ADE_val_00001851.jpg
1852 | validation/ADE_val_00001852.jpg
1853 | validation/ADE_val_00001853.jpg
1854 | validation/ADE_val_00001854.jpg
1855 | validation/ADE_val_00001855.jpg
1856 | validation/ADE_val_00001856.jpg
1857 | validation/ADE_val_00001857.jpg
1858 | validation/ADE_val_00001858.jpg
1859 | validation/ADE_val_00001859.jpg
1860 | validation/ADE_val_00001860.jpg
1861 | validation/ADE_val_00001861.jpg
1862 | validation/ADE_val_00001862.jpg
1863 | validation/ADE_val_00001863.jpg
1864 | validation/ADE_val_00001864.jpg
1865 | validation/ADE_val_00001865.jpg
1866 | validation/ADE_val_00001866.jpg
1867 | validation/ADE_val_00001867.jpg
1868 | validation/ADE_val_00001868.jpg
1869 | validation/ADE_val_00001869.jpg
1870 | validation/ADE_val_00001870.jpg
1871 | validation/ADE_val_00001871.jpg
1872 | validation/ADE_val_00001872.jpg
1873 | validation/ADE_val_00001873.jpg
1874 | validation/ADE_val_00001874.jpg
1875 | validation/ADE_val_00001875.jpg
1876 | validation/ADE_val_00001876.jpg
1877 | validation/ADE_val_00001877.jpg
1878 | validation/ADE_val_00001878.jpg
1879 | validation/ADE_val_00001879.jpg
1880 | validation/ADE_val_00001880.jpg
1881 | validation/ADE_val_00001881.jpg
1882 | validation/ADE_val_00001882.jpg
1883 | validation/ADE_val_00001883.jpg
1884 | validation/ADE_val_00001884.jpg
1885 | validation/ADE_val_00001885.jpg
1886 | validation/ADE_val_00001886.jpg
1887 | validation/ADE_val_00001887.jpg
1888 | validation/ADE_val_00001888.jpg
1889 | validation/ADE_val_00001889.jpg
1890 | validation/ADE_val_00001890.jpg
1891 | validation/ADE_val_00001891.jpg
1892 | validation/ADE_val_00001892.jpg
1893 | validation/ADE_val_00001893.jpg
1894 | validation/ADE_val_00001894.jpg
1895 | validation/ADE_val_00001895.jpg
1896 | validation/ADE_val_00001896.jpg
1897 | validation/ADE_val_00001897.jpg
1898 | validation/ADE_val_00001898.jpg
1899 | validation/ADE_val_00001899.jpg
1900 | validation/ADE_val_00001900.jpg
1901 | validation/ADE_val_00001901.jpg
1902 | validation/ADE_val_00001902.jpg
1903 | validation/ADE_val_00001903.jpg
1904 | validation/ADE_val_00001904.jpg
1905 | validation/ADE_val_00001905.jpg
1906 | validation/ADE_val_00001906.jpg
1907 | validation/ADE_val_00001907.jpg
1908 | validation/ADE_val_00001908.jpg
1909 | validation/ADE_val_00001909.jpg
1910 | validation/ADE_val_00001910.jpg
1911 | validation/ADE_val_00001911.jpg
1912 | validation/ADE_val_00001912.jpg
1913 | validation/ADE_val_00001913.jpg
1914 | validation/ADE_val_00001914.jpg
1915 | validation/ADE_val_00001915.jpg
1916 | validation/ADE_val_00001916.jpg
1917 | validation/ADE_val_00001917.jpg
1918 | validation/ADE_val_00001918.jpg
1919 | validation/ADE_val_00001919.jpg
1920 | validation/ADE_val_00001920.jpg
1921 | validation/ADE_val_00001921.jpg
1922 | validation/ADE_val_00001922.jpg
1923 | validation/ADE_val_00001923.jpg
1924 | validation/ADE_val_00001924.jpg
1925 | validation/ADE_val_00001925.jpg
1926 | validation/ADE_val_00001926.jpg
1927 | validation/ADE_val_00001927.jpg
1928 | validation/ADE_val_00001928.jpg
1929 | validation/ADE_val_00001929.jpg
1930 | validation/ADE_val_00001930.jpg
1931 | validation/ADE_val_00001931.jpg
1932 | validation/ADE_val_00001932.jpg
1933 | validation/ADE_val_00001933.jpg
1934 | validation/ADE_val_00001934.jpg
1935 | validation/ADE_val_00001935.jpg
1936 | validation/ADE_val_00001936.jpg
1937 | validation/ADE_val_00001937.jpg
1938 | validation/ADE_val_00001938.jpg
1939 | validation/ADE_val_00001939.jpg
1940 | validation/ADE_val_00001940.jpg
1941 | validation/ADE_val_00001941.jpg
1942 | validation/ADE_val_00001942.jpg
1943 | validation/ADE_val_00001943.jpg
1944 | validation/ADE_val_00001944.jpg
1945 | validation/ADE_val_00001945.jpg
1946 | validation/ADE_val_00001946.jpg
1947 | validation/ADE_val_00001947.jpg
1948 | validation/ADE_val_00001948.jpg
1949 | validation/ADE_val_00001949.jpg
1950 | validation/ADE_val_00001950.jpg
1951 | validation/ADE_val_00001951.jpg
1952 | validation/ADE_val_00001952.jpg
1953 | validation/ADE_val_00001953.jpg
1954 | validation/ADE_val_00001954.jpg
1955 | validation/ADE_val_00001955.jpg
1956 | validation/ADE_val_00001956.jpg
1957 | validation/ADE_val_00001957.jpg
1958 | validation/ADE_val_00001958.jpg
1959 | validation/ADE_val_00001959.jpg
1960 | validation/ADE_val_00001960.jpg
1961 | validation/ADE_val_00001961.jpg
1962 | validation/ADE_val_00001962.jpg
1963 | validation/ADE_val_00001963.jpg
1964 | validation/ADE_val_00001964.jpg
1965 | validation/ADE_val_00001965.jpg
1966 | validation/ADE_val_00001966.jpg
1967 | validation/ADE_val_00001967.jpg
1968 | validation/ADE_val_00001968.jpg
1969 | validation/ADE_val_00001969.jpg
1970 | validation/ADE_val_00001970.jpg
1971 | validation/ADE_val_00001971.jpg
1972 | validation/ADE_val_00001972.jpg
1973 | validation/ADE_val_00001973.jpg
1974 | validation/ADE_val_00001974.jpg
1975 | validation/ADE_val_00001975.jpg
1976 | validation/ADE_val_00001976.jpg
1977 | validation/ADE_val_00001977.jpg
1978 | validation/ADE_val_00001978.jpg
1979 | validation/ADE_val_00001979.jpg
1980 | validation/ADE_val_00001980.jpg
1981 | validation/ADE_val_00001981.jpg
1982 | validation/ADE_val_00001982.jpg
1983 | validation/ADE_val_00001983.jpg
1984 | validation/ADE_val_00001984.jpg
1985 | validation/ADE_val_00001985.jpg
1986 | validation/ADE_val_00001986.jpg
1987 | validation/ADE_val_00001987.jpg
1988 | validation/ADE_val_00001988.jpg
1989 | validation/ADE_val_00001989.jpg
1990 | validation/ADE_val_00001990.jpg
1991 | validation/ADE_val_00001991.jpg
1992 | validation/ADE_val_00001992.jpg
1993 | validation/ADE_val_00001993.jpg
1994 | validation/ADE_val_00001994.jpg
1995 | validation/ADE_val_00001995.jpg
1996 | validation/ADE_val_00001996.jpg
1997 | validation/ADE_val_00001997.jpg
1998 | validation/ADE_val_00001998.jpg
1999 | validation/ADE_val_00001999.jpg
2000 | validation/ADE_val_00002000.jpg
2001 |
--------------------------------------------------------------------------------
/data/color150.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/data/color150.mat
--------------------------------------------------------------------------------
/data/mapped_25info.txt:
--------------------------------------------------------------------------------
1 | 42:0
2 | 25:1
3 | 48:1
4 | 84:1
5 | 72:4
6 | 10:6
7 | 51:6
8 | 53:6
9 | 57:7
10 | 44:10
11 | 35:10
12 | 29:13
13 | 46:13
14 | 58:14
15 | 19:15
16 | 30:15
17 | 33:15
18 | 45:15
19 | 56:15
20 | 64:15
21 | 69:15
22 | 70:15
23 | 75:15
24 | 99:15
25 | 110:15
26 | 34:16
27 | 68:16
28 | 8:17
29 | 66:17
30 | 101:18
31 | 80:20
32 | 83:20
33 | 90:20
34 | 102:20
35 | 116:20
36 | 127:20
37 | 21:26
38 | 60:26
39 | 109:26
40 | 128:26
41 | 140:26
42 | 82:36
43 | 87:36
44 | 100:43
45 | 123:43
46 | 142:43
47 | 144:43
48 | 59:53
49 | 96:53
50 | 121:53
51 | 37:65
52 | 78:74
53 | 89:74
54 | 141:74
55 | 143:74
56 |
--------------------------------------------------------------------------------
/data/object150_info.csv:
--------------------------------------------------------------------------------
1 | Idx,Ratio,Train,Val,Stuff,Name
2 | 1,0.1576,11664,1172,1,wall
3 | 2,0.1072,6046,612,1,building;edifice
4 | 3,0.0878,8265,796,1,sky
5 | 4,0.0621,9336,917,1,floor;flooring
6 | 5,0.0480,6678,641,0,tree
7 | 6,0.0450,6604,643,1,ceiling
8 | 7,0.0398,4023,408,1,road;route
9 | 8,0.0231,1906,199,0,bed
10 | 9,0.0198,4688,460,0,windowpane;window
11 | 10,0.0183,2423,225,1,grass
12 | 11,0.0181,2874,294,0,cabinet
13 | 12,0.0166,3068,310,1,sidewalk;pavement
14 | 13,0.0160,5075,526,0,person;individual;someone;somebody;mortal;soul
15 | 14,0.0151,1804,190,1,earth;ground
16 | 15,0.0118,6666,796,0,door;double;door
17 | 16,0.0110,4269,411,0,table
18 | 17,0.0109,1691,160,1,mountain;mount
19 | 18,0.0104,3999,441,0,plant;flora;plant;life
20 | 19,0.0104,2149,217,0,curtain;drape;drapery;mantle;pall
21 | 20,0.0103,3261,318,0,chair
22 | 21,0.0098,3164,306,0,car;auto;automobile;machine;motorcar
23 | 22,0.0074,709,75,1,water
24 | 23,0.0067,3296,315,0,painting;picture
25 | 24,0.0065,1191,106,0,sofa;couch;lounge
26 | 25,0.0061,1516,162,0,shelf
27 | 26,0.0060,667,69,1,house
28 | 27,0.0053,651,57,1,sea
29 | 28,0.0052,1847,224,0,mirror
30 | 29,0.0046,1158,128,1,rug;carpet;carpeting
31 | 30,0.0044,480,44,1,field
32 | 31,0.0044,1172,98,0,armchair
33 | 32,0.0044,1292,184,0,seat
34 | 33,0.0033,1386,138,0,fence;fencing
35 | 34,0.0031,698,61,0,desk
36 | 35,0.0030,781,73,0,rock;stone
37 | 36,0.0027,380,43,0,wardrobe;closet;press
38 | 37,0.0026,3089,302,0,lamp
39 | 38,0.0024,404,37,0,bathtub;bathing;tub;bath;tub
40 | 39,0.0024,804,99,0,railing;rail
41 | 40,0.0023,1453,153,0,cushion
42 | 41,0.0023,411,37,0,base;pedestal;stand
43 | 42,0.0022,1440,162,0,box
44 | 43,0.0022,800,77,0,column;pillar
45 | 44,0.0020,2650,298,0,signboard;sign
46 | 45,0.0019,549,46,0,chest;of;drawers;chest;bureau;dresser
47 | 46,0.0019,367,36,0,counter
48 | 47,0.0018,311,30,1,sand
49 | 48,0.0018,1181,122,0,sink
50 | 49,0.0018,287,23,1,skyscraper
51 | 50,0.0018,468,38,0,fireplace;hearth;open;fireplace
52 | 51,0.0018,402,43,0,refrigerator;icebox
53 | 52,0.0018,130,12,1,grandstand;covered;stand
54 | 53,0.0018,561,64,1,path
55 | 54,0.0017,880,102,0,stairs;steps
56 | 55,0.0017,86,12,1,runway
57 | 56,0.0017,172,11,0,case;display;case;showcase;vitrine
58 | 57,0.0017,198,18,0,pool;table;billiard;table;snooker;table
59 | 58,0.0017,930,109,0,pillow
60 | 59,0.0015,139,18,0,screen;door;screen
61 | 60,0.0015,564,52,1,stairway;staircase
62 | 61,0.0015,320,26,1,river
63 | 62,0.0015,261,29,1,bridge;span
64 | 63,0.0014,275,22,0,bookcase
65 | 64,0.0014,335,60,0,blind;screen
66 | 65,0.0014,792,75,0,coffee;table;cocktail;table
67 | 66,0.0014,395,49,0,toilet;can;commode;crapper;pot;potty;stool;throne
68 | 67,0.0014,1309,138,0,flower
69 | 68,0.0013,1112,113,0,book
70 | 69,0.0013,266,27,1,hill
71 | 70,0.0013,659,66,0,bench
72 | 71,0.0012,331,31,0,countertop
73 | 72,0.0012,531,56,0,stove;kitchen;stove;range;kitchen;range;cooking;stove
74 | 73,0.0012,369,36,0,palm;palm;tree
75 | 74,0.0012,144,9,0,kitchen;island
76 | 75,0.0011,265,29,0,computer;computing;machine;computing;device;data;processor;electronic;computer;information;processing;system
77 | 76,0.0010,324,33,0,swivel;chair
78 | 77,0.0009,304,27,0,boat
79 | 78,0.0009,170,20,0,bar
80 | 79,0.0009,68,6,0,arcade;machine
81 | 80,0.0009,65,8,1,hovel;hut;hutch;shack;shanty
82 | 81,0.0009,248,25,0,bus;autobus;coach;charabanc;double-decker;jitney;motorbus;motorcoach;omnibus;passenger;vehicle
83 | 82,0.0008,492,49,0,towel
84 | 83,0.0008,2510,269,0,light;light;source
85 | 84,0.0008,440,39,0,truck;motortruck
86 | 85,0.0008,147,18,1,tower
87 | 86,0.0008,583,56,0,chandelier;pendant;pendent
88 | 87,0.0007,533,61,0,awning;sunshade;sunblind
89 | 88,0.0007,1989,239,0,streetlight;street;lamp
90 | 89,0.0007,71,5,0,booth;cubicle;stall;kiosk
91 | 90,0.0007,618,53,0,television;television;receiver;television;set;tv;tv;set;idiot;box;boob;tube;telly;goggle;box
92 | 91,0.0007,135,12,0,airplane;aeroplane;plane
93 | 92,0.0007,83,5,1,dirt;track
94 | 93,0.0007,178,17,0,apparel;wearing;apparel;dress;clothes
95 | 94,0.0006,1003,104,0,pole
96 | 95,0.0006,182,12,1,land;ground;soil
97 | 96,0.0006,452,50,0,bannister;banister;balustrade;balusters;handrail
98 | 97,0.0006,42,6,1,escalator;moving;staircase;moving;stairway
99 | 98,0.0006,307,31,0,ottoman;pouf;pouffe;puff;hassock
100 | 99,0.0006,965,114,0,bottle
101 | 100,0.0006,117,13,0,buffet;counter;sideboard
102 | 101,0.0006,354,35,0,poster;posting;placard;notice;bill;card
103 | 102,0.0006,108,9,1,stage
104 | 103,0.0006,557,55,0,van
105 | 104,0.0006,52,4,0,ship
106 | 105,0.0005,99,5,0,fountain
107 | 106,0.0005,57,4,1,conveyer;belt;conveyor;belt;conveyer;conveyor;transporter
108 | 107,0.0005,292,31,0,canopy
109 | 108,0.0005,77,9,0,washer;automatic;washer;washing;machine
110 | 109,0.0005,340,38,0,plaything;toy
111 | 110,0.0005,66,3,1,swimming;pool;swimming;bath;natatorium
112 | 111,0.0005,465,49,0,stool
113 | 112,0.0005,50,4,0,barrel;cask
114 | 113,0.0005,622,75,0,basket;handbasket
115 | 114,0.0005,80,9,1,waterfall;falls
116 | 115,0.0005,59,3,0,tent;collapsible;shelter
117 | 116,0.0005,531,72,0,bag
118 | 117,0.0005,282,30,0,minibike;motorbike
119 | 118,0.0005,73,7,0,cradle
120 | 119,0.0005,435,44,0,oven
121 | 120,0.0005,136,25,0,ball
122 | 121,0.0005,116,24,0,food;solid;food
123 | 122,0.0004,266,31,0,step;stair
124 | 123,0.0004,58,12,0,tank;storage;tank
125 | 124,0.0004,418,83,0,trade;name;brand;name;brand;marque
126 | 125,0.0004,319,43,0,microwave;microwave;oven
127 | 126,0.0004,1193,139,0,pot;flowerpot
128 | 127,0.0004,97,23,0,animal;animate;being;beast;brute;creature;fauna
129 | 128,0.0004,347,36,0,bicycle;bike;wheel;cycle
130 | 129,0.0004,52,5,1,lake
131 | 130,0.0004,246,22,0,dishwasher;dish;washer;dishwashing;machine
132 | 131,0.0004,108,13,0,screen;silver;screen;projection;screen
133 | 132,0.0004,201,30,0,blanket;cover
134 | 133,0.0004,285,21,0,sculpture
135 | 134,0.0004,268,27,0,hood;exhaust;hood
136 | 135,0.0003,1020,108,0,sconce
137 | 136,0.0003,1282,122,0,vase
138 | 137,0.0003,528,65,0,traffic;light;traffic;signal;stoplight
139 | 138,0.0003,453,57,0,tray
140 | 139,0.0003,671,100,0,ashcan;trash;can;garbage;can;wastebin;ash;bin;ash-bin;ashbin;dustbin;trash;barrel;trash;bin
141 | 140,0.0003,397,44,0,fan
142 | 141,0.0003,92,8,1,pier;wharf;wharfage;dock
143 | 142,0.0003,228,18,0,crt;screen
144 | 143,0.0003,570,59,0,plate
145 | 144,0.0003,217,22,0,monitor;monitoring;device
146 | 145,0.0003,206,19,0,bulletin;board;notice;board
147 | 146,0.0003,130,14,0,shower
148 | 147,0.0003,178,28,0,radiator
149 | 148,0.0002,504,57,0,glass;drinking;glass
150 | 149,0.0002,775,96,0,clock
151 | 150,0.0002,421,56,0,flag
152 |
--------------------------------------------------------------------------------
/data/object25_info.csv:
--------------------------------------------------------------------------------
1 | 1,43
2 | 2,26 49 85
3 | 3,
4 | 4,
5 | 5,73
6 | 6,
7 | 7,11 52 54
8 | 8,58
9 | 11,45 36
10 | 13,
11 | 14,30 47
12 | 15,59
13 | 16,20 31 34 46 57 65 70 71 76 100 111
14 | 17,35 69
15 | 18,9 67
16 | 19,102
17 | 21,81 84 91 103 117 128
18 | 24,
19 | 25,
20 | 27,141 129 110 61 22
21 | 37,83 88
22 | 44,101 124 143 145
23 | 54,60 97 122
24 | 66,38
25 | 75,79 90 142 144
26 |
--------------------------------------------------------------------------------
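The file above maps each of the 25 merged classes (first column, an ADE20K-150 id kept as the representative) to the additional 150-class ids folded into it; an empty second field means no extra classes are merged. A minimal stdlib-only sketch of parsing these rows (the helper name is hypothetical):

```python
def parse_mapping(lines):
    # "<kept_id>,<space-separated merged 150-ids>" -> {kept_id: [merged ids]}
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        key, _, rest = line.partition(',')
        mapping[int(key)] = [int(v) for v in rest.split()]
    return mapping

sample = [
    "1,43",
    "2,26 49 85",
    "3,",                    # no extra classes merged
    "27,141 129 110 61 22",
]
m = parse_mapping(sample)
print(m[2])  # [26, 49, 85]
print(m[3])  # []
```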
/data/object25_info.xlsx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/data/object25_info.xlsx
--------------------------------------------------------------------------------
/dataset.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | import torch
4 | import lib.utils.data as torchdata
5 | import cv2
6 | from torchvision import transforms
7 | from scipy.misc import imread, imresize
8 | import numpy as np
9 |
10 |
11 | # Round x up to the nearest multiple of p (result x' >= x)
12 | def round2nearest_multiple(x, p):
13 | return ((x - 1) // p + 1) * p
14 |
15 |
16 | class TrainDataset(torchdata.Dataset):
17 | def __init__(self, odgt, opt, max_sample=-1, batch_per_gpu=1):
18 | self.root_dataset = opt.root_dataset
19 | self.imgSize = opt.imgSize
20 | self.imgMaxSize = opt.imgMaxSize
21 | self.random_flip = opt.random_flip
22 | # max down sampling rate of network to avoid rounding during conv or pooling
23 | self.padding_constant = opt.padding_constant
24 | # down sampling rate of segm label
25 | self.segm_downsampling_rate = opt.segm_downsampling_rate
26 | self.batch_per_gpu = batch_per_gpu
27 |
28 | # classify images into two classes: 1. h > w and 2. h <= w
29 | self.batch_record_list = [[], []]
30 |
31 | # override dataset length when training with batch_per_gpu > 1
32 | self.cur_idx = 0
33 |
34 | # mean and std
35 | self.img_transform = transforms.Compose([
36 | transforms.Normalize(mean=[102.9801, 115.9465, 122.7717], std=[1., 1., 1.])
37 | ])
38 |
39 | self.list_sample = [json.loads(x.rstrip()) for x in open(odgt, 'r')]
40 |
41 | self.if_shuffled = False
42 | if max_sample > 0:
43 | self.list_sample = self.list_sample[0:max_sample]
44 | self.num_sample = len(self.list_sample)
45 | assert self.num_sample > 0
46 | print('# samples: {}'.format(self.num_sample))
47 |
48 | def _get_sub_batch(self):
49 | while True:
50 | # get a sample record
51 | this_sample = self.list_sample[self.cur_idx]
52 | if this_sample['height'] > this_sample['width']:
53 | self.batch_record_list[0].append(this_sample) # h > w, go to 1st class
54 | else:
55 | self.batch_record_list[1].append(this_sample) # h <= w, go to 2nd class
56 |
57 | # update current sample pointer
58 | self.cur_idx += 1
59 | if self.cur_idx >= self.num_sample:
60 | self.cur_idx = 0
61 | np.random.shuffle(self.list_sample)
62 |
63 | if len(self.batch_record_list[0]) == self.batch_per_gpu:
64 | batch_records = self.batch_record_list[0]
65 | self.batch_record_list[0] = []
66 | break
67 | elif len(self.batch_record_list[1]) == self.batch_per_gpu:
68 | batch_records = self.batch_record_list[1]
69 | self.batch_record_list[1] = []
70 | break
71 | return batch_records
72 |
73 | def __getitem__(self, index):
74 | # NOTE: random shuffle on the first access; shuffling in __init__ is useless
75 | if not self.if_shuffled:
76 | np.random.shuffle(self.list_sample)
77 | self.if_shuffled = True
78 |
79 | # get sub-batch candidates
80 | batch_records = self._get_sub_batch()
81 |
82 | # resize all images' short edges to the chosen size
83 | if isinstance(self.imgSize, list):
84 | this_short_size = np.random.choice(self.imgSize)
85 | else:
86 | this_short_size = self.imgSize
87 |
88 | # calculate the BATCH's height and width
89 | # since we concatenate more than one sample, the batch's h and w must cover EACH sample
90 | batch_resized_size = np.zeros((self.batch_per_gpu, 2), np.int32)
91 | for i in range(self.batch_per_gpu):
92 | img_height, img_width = batch_records[i]['height'], batch_records[i]['width']
93 | this_scale = min(this_short_size / min(img_height, img_width), \
94 | self.imgMaxSize / max(img_height, img_width))
95 | img_resized_height, img_resized_width = img_height * this_scale, img_width * this_scale
96 | batch_resized_size[i, :] = img_resized_height, img_resized_width
97 | batch_resized_height = np.max(batch_resized_size[:, 0])
98 | batch_resized_width = np.max(batch_resized_size[:, 1])
99 |
100 | # Here we must pad both input image and segmentation map to size h' and w' so that p | h' and p | w'
101 | batch_resized_height = int(round2nearest_multiple(batch_resized_height, self.padding_constant))
102 | batch_resized_width = int(round2nearest_multiple(batch_resized_width, self.padding_constant))
103 |
104 | assert self.padding_constant >= self.segm_downsampling_rate, \
105 | 'padding constant must be equal to or larger than segm downsampling rate'
106 | batch_images = torch.zeros(self.batch_per_gpu, 3, batch_resized_height, batch_resized_width)
107 | batch_segms = torch.zeros(self.batch_per_gpu, batch_resized_height // self.segm_downsampling_rate, \
108 | batch_resized_width // self.segm_downsampling_rate).long()
109 |
110 | for i in range(self.batch_per_gpu):
111 | this_record = batch_records[i]
112 |
113 | # load image and label
114 | image_path = os.path.join(self.root_dataset, this_record['fpath_img'])
115 | segm_path = os.path.join(self.root_dataset, this_record['fpath_segm'])
116 | img = imread(image_path, mode='RGB')
117 | segm = imread(segm_path)
118 |
119 | assert (img.ndim == 3)
120 | assert (segm.ndim == 2)
121 | assert (img.shape[0] == segm.shape[0])
122 | assert (img.shape[1] == segm.shape[1])
123 |
124 | if self.random_flip == True:
125 | random_flip = np.random.choice([0, 1])
126 | if random_flip == 1:
127 | img = cv2.flip(img, 1)
128 | segm = cv2.flip(segm, 1)
129 |
130 | # note that each sample within a mini batch has different scale param
131 | img = imresize(img, (batch_resized_size[i, 0], batch_resized_size[i, 1]), interp='bilinear')
132 | segm = imresize(segm, (batch_resized_size[i, 0], batch_resized_size[i, 1]), interp='nearest')
133 |
134 | # to avoid seg label misalignment
135 | segm_rounded_height = round2nearest_multiple(segm.shape[0], self.segm_downsampling_rate)
136 | segm_rounded_width = round2nearest_multiple(segm.shape[1], self.segm_downsampling_rate)
137 | segm_rounded = np.zeros((segm_rounded_height, segm_rounded_width), dtype='uint8')
138 | segm_rounded[:segm.shape[0], :segm.shape[1]] = segm
139 |
140 | segm = imresize(segm_rounded, (segm_rounded.shape[0] // self.segm_downsampling_rate, \
141 | segm_rounded.shape[1] // self.segm_downsampling_rate), \
142 | interp='nearest')
143 | # image to float
144 | img = img.astype(np.float32)[:, :, ::-1] # RGB to BGR!!!
145 | img = img.transpose((2, 0, 1))
146 | img = self.img_transform(torch.from_numpy(img.copy()))
147 |
148 | batch_images[i][:, :img.shape[1], :img.shape[2]] = img
149 | batch_segms[i][:segm.shape[0], :segm.shape[1]] = torch.from_numpy(segm.astype(np.int)).long()
150 |
151 | batch_segms = batch_segms - 1 # label from -1 to 149
152 | output = dict()
153 | output['img_data'] = batch_images
154 | output['seg_label'] = batch_segms
155 | return output
156 |
157 | def __len__(self):
158 | return int(1e10) # It's a fake length due to the trick that every loader maintains its own list
159 | # return self.num_sample
160 |
161 |
162 | class ValDataset(torchdata.Dataset):
163 | def __init__(self, odgt, opt, max_sample=-1, start_idx=-1, end_idx=-1):
164 | self.root_dataset = opt.root_dataset
165 | self.imgSize = opt.imgSize
166 | self.imgMaxSize = opt.imgMaxSize
167 | # max down sampling rate of network to avoid rounding during conv or pooling
168 | self.padding_constant = opt.padding_constant
169 |
170 | # mean and std
171 | self.img_transform = transforms.Compose([
172 | transforms.Normalize(mean=[102.9801, 115.9465, 122.7717], std=[1., 1., 1.])
173 | ])
174 |
175 | self.list_sample = [json.loads(x.rstrip()) for x in open(odgt, 'r')]
176 |
177 | if max_sample > 0:
178 | self.list_sample = self.list_sample[0:max_sample]
179 |
180 | if start_idx >= 0 and end_idx >= 0: # divide file list
181 | self.list_sample = self.list_sample[start_idx:end_idx]
182 |
183 | self.num_sample = len(self.list_sample)
184 | assert self.num_sample > 0
185 | print('# samples: {}'.format(self.num_sample))
186 |
187 | def __getitem__(self, index):
188 | this_record = self.list_sample[index]
189 | # load image and label
190 | image_path = os.path.join(self.root_dataset, this_record['fpath_img'])
191 | segm_path = os.path.join(self.root_dataset, this_record['fpath_segm'])
192 | img = imread(image_path, mode='RGB')
193 | img = img[:, :, ::-1] # RGB to BGR!!!
194 | segm = imread(segm_path)
195 |
196 | ori_height, ori_width, _ = img.shape
197 |
198 | img_resized_list = []
199 | for this_short_size in self.imgSize:
200 | # calculate target height and width
201 | scale = min(this_short_size / float(min(ori_height, ori_width)),
202 | self.imgMaxSize / float(max(ori_height, ori_width)))
203 | target_height, target_width = int(ori_height * scale), int(ori_width * scale)
204 |
205 | # to avoid rounding in network
206 | target_height = round2nearest_multiple(target_height, self.padding_constant)
207 | target_width = round2nearest_multiple(target_width, self.padding_constant)
208 |
209 | # resize
210 | img_resized = cv2.resize(img.copy(), (target_width, target_height))
211 |
212 | # image to float
213 | img_resized = img_resized.astype(np.float32)
214 | img_resized = img_resized.transpose((2, 0, 1))
215 | img_resized = self.img_transform(torch.from_numpy(img_resized))
216 |
217 | img_resized = torch.unsqueeze(img_resized, 0)
218 | img_resized_list.append(img_resized)
219 |
220 | segm = torch.from_numpy(segm.astype(np.int)).long()
221 |
222 | batch_segms = torch.unsqueeze(segm, 0)
223 |
224 | batch_segms = batch_segms - 1 # label from -1 to 149
225 | output = dict()
226 | output['img_ori'] = img.copy()
227 | output['img_data'] = [x.contiguous() for x in img_resized_list]
228 | output['seg_label'] = batch_segms.contiguous()
229 | output['info'] = this_record['fpath_img']
230 | return output
231 |
232 | def __len__(self):
233 | return self.num_sample
234 |
235 |
236 | class TestDataset(torchdata.Dataset):
237 | def __init__(self, odgt, opt, max_sample=-1):
238 | self.imgSize = opt.imgSize
239 | self.imgMaxSize = opt.imgMaxSize
240 | # max down sampling rate of network to avoid rounding during conv or pooling
241 | self.padding_constant = opt.padding_constant
242 | # down sampling rate of segm label
243 | self.segm_downsampling_rate = opt.segm_downsampling_rate
244 |
245 | # mean and std
246 | self.img_transform = transforms.Compose([
247 | transforms.Normalize(mean=[102.9801, 115.9465, 122.7717], std=[1., 1., 1.])
248 | ])
249 |
250 | if isinstance(odgt, list):
251 | self.list_sample = odgt
252 | elif isinstance(odgt, str):
253 | self.list_sample = [json.loads(x.rstrip()) for x in open(odgt, 'r')]
254 |
255 | if max_sample > 0:
256 | self.list_sample = self.list_sample[0:max_sample]
257 | self.num_sample = len(self.list_sample)
258 | assert self.num_sample > 0
259 | print('# samples: {}'.format(self.num_sample))
260 |
261 | def __getitem__(self, index):
262 | this_record = self.list_sample[index]
263 | # load image and label
264 | image_path = this_record['fpath_img']
265 | img = imread(image_path, mode='RGB')
266 | img = img[:, :, ::-1] # RGB to BGR!!!
267 |
268 | ori_height, ori_width, _ = img.shape
269 |
270 | img_resized_list = []
271 | for this_short_size in self.imgSize:
272 | # calculate target height and width
273 | scale = min(this_short_size / float(min(ori_height, ori_width)),
274 | self.imgMaxSize / float(max(ori_height, ori_width)))
275 | target_height, target_width = int(ori_height * scale), int(ori_width * scale)
276 |
277 | # to avoid rounding in network
278 | target_height = round2nearest_multiple(target_height, self.padding_constant)
279 | target_width = round2nearest_multiple(target_width, self.padding_constant)
280 |
281 | # resize
282 | img_resized = cv2.resize(img.copy(), (target_width, target_height))
283 |
284 | # image to float
285 | img_resized = img_resized.astype(np.float32)
286 | img_resized = img_resized.transpose((2, 0, 1))
287 | img_resized = self.img_transform(torch.from_numpy(img_resized))
288 |
289 | img_resized = torch.unsqueeze(img_resized, 0)
290 | img_resized_list.append(img_resized)
291 |
292 | # segm = torch.from_numpy(segm.astype(np.int)).long()
293 |
294 | # batch_segms = torch.unsqueeze(segm, 0)
295 |
296 | # batch_segms = batch_segms - 1 # label from -1 to 149
297 | output = dict()
298 | output['img_ori'] = img.copy()
299 | output['img_data'] = [x.contiguous() for x in img_resized_list]
300 | # output['seg_label'] = batch_segms.contiguous()
301 | output['info'] = this_record['fpath_img']
302 | return output
303 |
304 | def __len__(self):
305 | return self.num_sample
306 |
--------------------------------------------------------------------------------
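As a sanity check on the two size computations in `dataset.py` above: `round2nearest_multiple` rounds a dimension up to the next multiple of `padding_constant`, so padded tensors stay divisible by the network's total stride, and the scale factor targets the short edge unless the long edge would exceed `imgMaxSize`. A stdlib-only sketch (`round2nearest_multiple` mirrors the file; `fit_scale` is a hypothetical helper extracted from the inline `min(...)` expression):

```python
def round2nearest_multiple(x, p):
    # smallest multiple of p that is >= x (as in dataset.py)
    return ((x - 1) // p + 1) * p

def fit_scale(h, w, short_size, max_size):
    # scale the short edge to ~short_size, capped so the long edge stays <= max_size
    return min(short_size / min(h, w), max_size / max(h, w))

assert round2nearest_multiple(600, 8) == 600        # already a multiple of 8
assert round2nearest_multiple(601, 8) == 608        # rounded up, never down
assert fit_scale(300, 500, 450, 1000) == 1.5        # short-edge target wins
assert fit_scale(300, 900, 450, 1000) == 1000 / 900 # long-edge cap wins
```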
/demo_test.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Image and model names
4 | TEST_IMG=ADE_val_00001519.jpg
5 | MODEL_PATH=baseline-resnet50dilated-ppm_deepsup
6 | RESULT_PATH=./
7 |
8 | ENCODER=$MODEL_PATH/encoder_epoch_20.pth
9 | DECODER=$MODEL_PATH/decoder_epoch_20.pth
10 |
11 | # Download model weights and image
12 | if [ ! -e $MODEL_PATH ]; then
13 | mkdir $MODEL_PATH
14 | fi
15 | if [ ! -e $ENCODER ]; then
16 | wget -P $MODEL_PATH http://sceneparsing.csail.mit.edu/model/pytorch/$ENCODER
17 | fi
18 | if [ ! -e $DECODER ]; then
19 | wget -P $MODEL_PATH http://sceneparsing.csail.mit.edu/model/pytorch/$DECODER
20 | fi
21 | if [ ! -e $TEST_IMG ]; then
22 | wget -P $RESULT_PATH http://sceneparsing.csail.mit.edu/data/ADEChallengeData2016/images/validation/$TEST_IMG
23 | fi
24 |
25 | # Inference
26 | python3 -u test.py \
27 | --model_path $MODEL_PATH \
28 | --test_imgs $TEST_IMG \
29 | --arch_encoder resnet50dilated \
30 | --arch_decoder ppm_deepsup \
31 | --fc_dim 2048 \
32 | --result $RESULT_PATH
33 |
--------------------------------------------------------------------------------
/lib/nn/__init__.py:
--------------------------------------------------------------------------------
1 | from .modules import *
2 | from .parallel import UserScatteredDataParallel, user_scattered_collate, async_copy_to
3 |
--------------------------------------------------------------------------------
/lib/nn/modules/__init__.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | # File : __init__.py
3 | # Author : Jiayuan Mao
4 | # Email : maojiayuan@gmail.com
5 | # Date : 27/01/2018
6 | #
7 | # This file is part of Synchronized-BatchNorm-PyTorch.
8 | # https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
9 | # Distributed under MIT License.
10 |
11 | from .batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d
12 | from .replicate import DataParallelWithCallback, patch_replication_callback
13 |
--------------------------------------------------------------------------------
/lib/nn/modules/batchnorm.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | # File : batchnorm.py
3 | # Author : Jiayuan Mao
4 | # Email : maojiayuan@gmail.com
5 | # Date : 27/01/2018
6 | #
7 | # This file is part of Synchronized-BatchNorm-PyTorch.
8 | # https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
9 | # Distributed under MIT License.
10 |
11 | import collections
12 |
13 | import torch
14 | import torch.nn.functional as F
15 |
16 | from torch.nn.modules.batchnorm import _BatchNorm
17 | from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast
18 |
19 | from .comm import SyncMaster
20 |
21 | __all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d']
22 |
23 |
24 | def _sum_ft(tensor):
25 | """sum over the first and last dimension"""
26 | return tensor.sum(dim=0).sum(dim=-1)
27 |
28 |
29 | def _unsqueeze_ft(tensor):
30 | """add new dimensions at the front and the tail"""
31 | return tensor.unsqueeze(0).unsqueeze(-1)
32 |
33 |
34 | _ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size'])
35 | _MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std'])
36 |
37 |
38 | class _SynchronizedBatchNorm(_BatchNorm):
39 | def __init__(self, num_features, eps=1e-5, momentum=0.001, affine=True):
40 | super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine)
41 |
42 | self._sync_master = SyncMaster(self._data_parallel_master)
43 |
44 | self._is_parallel = False
45 | self._parallel_id = None
46 | self._slave_pipe = None
47 |
48 | # customed batch norm statistics
49 | self._moving_average_fraction = 1. - momentum
50 | self.register_buffer('_tmp_running_mean', torch.zeros(self.num_features))
51 | self.register_buffer('_tmp_running_var', torch.ones(self.num_features))
52 | self.register_buffer('_running_iter', torch.ones(1))
53 | self._tmp_running_mean = self.running_mean.clone() * self._running_iter
54 | self._tmp_running_var = self.running_var.clone() * self._running_iter
55 |
56 | def forward(self, input):
57 | # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation.
58 | if not (self._is_parallel and self.training):
59 | return F.batch_norm(
60 | input, self.running_mean, self.running_var, self.weight, self.bias,
61 | self.training, self.momentum, self.eps)
62 |
63 | # Resize the input to (B, C, -1).
64 | input_shape = input.size()
65 | input = input.view(input.size(0), self.num_features, -1)
66 |
67 | # Compute the sum and square-sum.
68 | sum_size = input.size(0) * input.size(2)
69 | input_sum = _sum_ft(input)
70 | input_ssum = _sum_ft(input ** 2)
71 |
72 | # Reduce-and-broadcast the statistics.
73 | if self._parallel_id == 0:
74 | mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size))
75 | else:
76 | mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size))
77 |
78 | # Compute the output.
79 | if self.affine:
80 | # MJY:: Fuse the multiplication for speed.
81 | output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias)
82 | else:
83 | output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std)
84 |
85 | # Reshape it.
86 | return output.view(input_shape)
87 |
88 | def __data_parallel_replicate__(self, ctx, copy_id):
89 | self._is_parallel = True
90 | self._parallel_id = copy_id
91 |
92 | # parallel_id == 0 means master device.
93 | if self._parallel_id == 0:
94 | ctx.sync_master = self._sync_master
95 | else:
96 | self._slave_pipe = ctx.sync_master.register_slave(copy_id)
97 |
98 | def _data_parallel_master(self, intermediates):
99 | """Reduce the sum and square-sum, compute the statistics, and broadcast it."""
100 | intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device())
101 |
102 | to_reduce = [i[1][:2] for i in intermediates]
103 | to_reduce = [j for i in to_reduce for j in i] # flatten
104 | target_gpus = [i[1].sum.get_device() for i in intermediates]
105 |
106 | sum_size = sum([i[1].sum_size for i in intermediates])
107 | sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce)
108 |
109 | mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size)
110 |
111 | broadcasted = Broadcast.apply(target_gpus, mean, inv_std)
112 |
113 | outputs = []
114 | for i, rec in enumerate(intermediates):
115 | outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2])))
116 |
117 | return outputs
118 |
119 | def _add_weighted(self, dest, delta, alpha=1, beta=1, bias=0):
120 | """return the weighted combination `dest*alpha + delta*beta + bias` (a new tensor; *dest* is not modified in place)"""
121 | return dest * alpha + delta * beta + bias
122 |
123 | def _compute_mean_std(self, sum_, ssum, size):
124 | """Compute the mean and standard-deviation with sum and square-sum. This method
125 | also maintains the moving average on the master device."""
126 | assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'
127 | mean = sum_ / size
128 | sumvar = ssum - sum_ * mean
129 | unbias_var = sumvar / (size - 1)
130 | bias_var = sumvar / size
131 |
132 | self._tmp_running_mean = self._add_weighted(self._tmp_running_mean, mean.data, alpha=self._moving_average_fraction)
133 | self._tmp_running_var = self._add_weighted(self._tmp_running_var, unbias_var.data, alpha=self._moving_average_fraction)
134 | self._running_iter = self._add_weighted(self._running_iter, 1, alpha=self._moving_average_fraction)
135 |
136 | self.running_mean = self._tmp_running_mean / self._running_iter
137 | self.running_var = self._tmp_running_var / self._running_iter
138 |
139 | return mean, bias_var.clamp(self.eps) ** -0.5
140 |
141 |
142 | class SynchronizedBatchNorm1d(_SynchronizedBatchNorm):
143 | r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a
144 | mini-batch.
145 |
146 | .. math::
147 |
148 | y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
149 |
150 | This module differs from the built-in PyTorch BatchNorm1d as the mean and
151 | standard-deviation are reduced across all devices during training.
152 |
153 | For example, when one uses `nn.DataParallel` to wrap the network during
154 | training, PyTorch's implementation normalizes the tensor on each device using
155 | the statistics only on that device, which accelerates the computation and
156 | is also easy to implement, but the statistics might be inaccurate.
157 | Instead, in this synchronized version, the statistics will be computed
158 | over all training samples distributed on multiple devices.
159 |
160 | Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
161 | as the built-in PyTorch implementation.
162 |
163 | The mean and standard-deviation are calculated per-dimension over
164 | the mini-batches and gamma and beta are learnable parameter vectors
165 | of size C (where C is the input size).
166 |
167 | During training, this layer keeps a running estimate of its computed mean
168 | and variance. The running sum is kept with a default momentum of 0.1.
169 |
170 | During evaluation, this running mean/variance is used for normalization.
171 |
172 | Because the BatchNorm is done over the `C` dimension, computing statistics
173 | on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm
174 |
175 | Args:
176 | num_features: num_features from an expected input of size
177 | `batch_size x num_features [x width]`
178 | eps: a value added to the denominator for numerical stability.
179 | Default: 1e-5
180 | momentum: the value used for the running_mean and running_var
181 | computation. Default: 0.1
182 | affine: a boolean value that when set to ``True``, gives the layer learnable
183 | affine parameters. Default: ``True``
184 |
185 | Shape:
186 | - Input: :math:`(N, C)` or :math:`(N, C, L)`
187 | - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)
188 |
189 | Examples:
190 | >>> # With Learnable Parameters
191 | >>> m = SynchronizedBatchNorm1d(100)
192 | >>> # Without Learnable Parameters
193 | >>> m = SynchronizedBatchNorm1d(100, affine=False)
194 | >>> input = torch.autograd.Variable(torch.randn(20, 100))
195 | >>> output = m(input)
196 | """
197 |
198 | def _check_input_dim(self, input):
199 | if input.dim() != 2 and input.dim() != 3:
200 | raise ValueError('expected 2D or 3D input (got {}D input)'
201 | .format(input.dim()))
202 | super(SynchronizedBatchNorm1d, self)._check_input_dim(input)
203 |
204 |
205 | class SynchronizedBatchNorm2d(_SynchronizedBatchNorm):
206 | r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch
207 | of 3d inputs
208 |
209 | .. math::
210 |
211 | y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
212 |
213 | This module differs from the built-in PyTorch BatchNorm2d as the mean and
214 | standard-deviation are reduced across all devices during training.
215 |
216 | For example, when one uses `nn.DataParallel` to wrap the network during
217 | training, PyTorch's implementation normalizes the tensor on each device using
218 | the statistics only on that device, which accelerates the computation and
219 | is also easy to implement, but the statistics might be inaccurate.
220 | Instead, in this synchronized version, the statistics will be computed
221 | over all training samples distributed on multiple devices.
222 |
223 | Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
224 | as the built-in PyTorch implementation.
225 |
226 | The mean and standard-deviation are calculated per-dimension over
227 | the mini-batches and gamma and beta are learnable parameter vectors
228 | of size C (where C is the input size).
229 |
230 | During training, this layer keeps a running estimate of its computed mean
231 | and variance. The running sum is kept with a default momentum of 0.1.
232 |
233 | During evaluation, this running mean/variance is used for normalization.
234 |
235 | Because the BatchNorm is done over the `C` dimension, computing statistics
236 | on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm
237 |
238 | Args:
239 | num_features: num_features from an expected input of
240 | size batch_size x num_features x height x width
241 | eps: a value added to the denominator for numerical stability.
242 | Default: 1e-5
243 | momentum: the value used for the running_mean and running_var
244 | computation. Default: 0.1
245 | affine: a boolean value that when set to ``True``, gives the layer learnable
246 | affine parameters. Default: ``True``
247 |
248 | Shape:
249 | - Input: :math:`(N, C, H, W)`
250 | - Output: :math:`(N, C, H, W)` (same shape as input)
251 |
252 | Examples:
253 | >>> # With Learnable Parameters
254 | >>> m = SynchronizedBatchNorm2d(100)
255 | >>> # Without Learnable Parameters
256 | >>> m = SynchronizedBatchNorm2d(100, affine=False)
257 | >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45))
258 | >>> output = m(input)
259 | """
260 |
261 | def _check_input_dim(self, input):
262 | if input.dim() != 4:
263 | raise ValueError('expected 4D input (got {}D input)'
264 | .format(input.dim()))
265 | super(SynchronizedBatchNorm2d, self)._check_input_dim(input)
266 |
267 |
268 | class SynchronizedBatchNorm3d(_SynchronizedBatchNorm):
269 | r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch
270 | of 4d inputs
271 |
272 | .. math::
273 |
274 | y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
275 |
276 | This module differs from the built-in PyTorch BatchNorm3d as the mean and
277 | standard-deviation are reduced across all devices during training.
278 |
279 | For example, when one uses `nn.DataParallel` to wrap the network during
280 | training, PyTorch's implementation normalizes the tensor on each device using
281 | the statistics only on that device, which accelerates the computation and
282 | is also easy to implement, but the statistics might be inaccurate.
283 | Instead, in this synchronized version, the statistics will be computed
284 | over all training samples distributed on multiple devices.
285 |
286 | Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
287 | as the built-in PyTorch implementation.
288 |
289 | The mean and standard-deviation are calculated per-dimension over
290 | the mini-batches and gamma and beta are learnable parameter vectors
291 | of size C (where C is the input size).
292 |
293 | During training, this layer keeps a running estimate of its computed mean
294 | and variance. The running sum is kept with a default momentum of 0.1.
295 |
296 | During evaluation, this running mean/variance is used for normalization.
297 |
298 | Because the BatchNorm is done over the `C` dimension, computing statistics
299 | on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm
300 | or Spatio-temporal BatchNorm
301 |
302 | Args:
303 | num_features: num_features from an expected input of
304 | size batch_size x num_features x depth x height x width
305 | eps: a value added to the denominator for numerical stability.
306 | Default: 1e-5
307 | momentum: the value used for the running_mean and running_var
308 | computation. Default: 0.1
309 | affine: a boolean value that when set to ``True``, gives the layer learnable
310 | affine parameters. Default: ``True``
311 |
312 | Shape:
313 | - Input: :math:`(N, C, D, H, W)`
314 | - Output: :math:`(N, C, D, H, W)` (same shape as input)
315 |
316 | Examples:
317 | >>> # With Learnable Parameters
318 | >>> m = SynchronizedBatchNorm3d(100)
319 | >>> # Without Learnable Parameters
320 | >>> m = SynchronizedBatchNorm3d(100, affine=False)
321 | >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10))
322 | >>> output = m(input)
323 | """
324 |
325 | def _check_input_dim(self, input):
326 | if input.dim() != 5:
327 | raise ValueError('expected 5D input (got {}D input)'
328 | .format(input.dim()))
329 | super(SynchronizedBatchNorm3d, self)._check_input_dim(input)
330 |
--------------------------------------------------------------------------------
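The docstring above says the running mean/variance are kept with a default momentum of 0.1. A minimal stdlib-only sketch of that exponential update (the helper name is illustrative, not part of the library):

```python
# Sketch of the running-statistics update described above: with momentum m,
# each batch statistic is blended into the running estimate as
#   running = (1 - m) * running + m * batch_stat
# (mirroring the PyTorch momentum convention; names here are illustrative).

def update_running_stat(running, batch_stat, momentum=0.1):
    """Blend one batch statistic into the running estimate."""
    return (1.0 - momentum) * running + momentum * batch_stat

# Feed a constant batch mean of 5.0; the running mean converges toward it.
running_mean = 0.0
for _ in range(100):
    running_mean = update_running_stat(running_mean, 5.0)
```

With momentum 0.1, the running estimate closes about 10% of the remaining gap per batch, so it converges geometrically toward the batch statistic.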
/lib/nn/modules/comm.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | # File : comm.py
3 | # Author : Jiayuan Mao
4 | # Email : maojiayuan@gmail.com
5 | # Date : 27/01/2018
6 | #
7 | # This file is part of Synchronized-BatchNorm-PyTorch.
8 | # https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
9 | # Distributed under MIT License.
10 |
11 | import queue
12 | import collections
13 | import threading
14 |
15 | __all__ = ['FutureResult', 'SlavePipe', 'SyncMaster']
16 |
17 |
18 | class FutureResult(object):
19 | """A thread-safe future implementation. Used only as one-to-one pipe."""
20 |
21 | def __init__(self):
22 | self._result = None
23 | self._lock = threading.Lock()
24 | self._cond = threading.Condition(self._lock)
25 |
26 | def put(self, result):
27 | with self._lock:
28 |             assert self._result is None, 'Previous result hasn\'t been fetched.'
29 | self._result = result
30 | self._cond.notify()
31 |
32 | def get(self):
33 | with self._lock:
34 | if self._result is None:
35 | self._cond.wait()
36 |
37 | res = self._result
38 | self._result = None
39 | return res
40 |
41 |
42 | _MasterRegistry = collections.namedtuple('MasterRegistry', ['result'])
43 | _SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result'])
44 |
45 |
46 | class SlavePipe(_SlavePipeBase):
47 | """Pipe for master-slave communication."""
48 |
49 | def run_slave(self, msg):
50 | self.queue.put((self.identifier, msg))
51 | ret = self.result.get()
52 | self.queue.put(True)
53 | return ret
54 |
55 |
56 | class SyncMaster(object):
57 | """An abstract `SyncMaster` object.
58 |
59 |     - During replication, as the data parallel will trigger a callback on each module, all slave devices should
60 |     call `register_slave(id)` and obtain a `SlavePipe` to communicate with the master.
61 |     - During the forward pass, the master device invokes `run_master`; all messages from the slave devices will be
62 |     collected and passed to a registered callback.
63 |     - After receiving the messages, the master device should gather the information and determine the message to be
64 |     passed back to each slave device.
65 | """
66 |
67 | def __init__(self, master_callback):
68 | """
69 |
70 | Args:
71 | master_callback: a callback to be invoked after having collected messages from slave devices.
72 | """
73 | self._master_callback = master_callback
74 | self._queue = queue.Queue()
75 | self._registry = collections.OrderedDict()
76 | self._activated = False
77 |
78 | def register_slave(self, identifier):
79 | """
80 |         Register a slave device.
81 |
82 | Args:
83 | identifier: an identifier, usually is the device id.
84 |
85 | Returns: a `SlavePipe` object which can be used to communicate with the master device.
86 |
87 | """
88 | if self._activated:
89 | assert self._queue.empty(), 'Queue is not clean before next initialization.'
90 | self._activated = False
91 | self._registry.clear()
92 | future = FutureResult()
93 | self._registry[identifier] = _MasterRegistry(future)
94 | return SlavePipe(identifier, self._queue, future)
95 |
96 | def run_master(self, master_msg):
97 | """
98 | Main entry for the master device in each forward pass.
99 |         The messages are first collected from each device (including the master device), and then
100 |         a callback will be invoked to compute the message to be sent back to each device
101 |         (including the master device).
102 |
103 | Args:
104 |             master_msg: the message that the master wants to send to itself. This will be placed as the first
105 | message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example.
106 |
107 | Returns: the message to be sent back to the master device.
108 |
109 | """
110 | self._activated = True
111 |
112 | intermediates = [(0, master_msg)]
113 | for i in range(self.nr_slaves):
114 | intermediates.append(self._queue.get())
115 |
116 | results = self._master_callback(intermediates)
117 |         assert results[0][0] == 0, 'The first result should belong to the master.'
118 |
119 | for i, res in results:
120 | if i == 0:
121 | continue
122 | self._registry[i].result.put(res)
123 |
124 | for i in range(self.nr_slaves):
125 | assert self._queue.get() is True
126 |
127 | return results[0][1]
128 |
129 | @property
130 | def nr_slaves(self):
131 | return len(self._registry)
132 |
--------------------------------------------------------------------------------
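The `SyncMaster`/`SlavePipe` round trip in comm.py can be illustrated with a stripped-down, stdlib-only sketch (names are illustrative; the per-slave `FutureResult` is replaced by a one-slot `queue.Queue`): each slave sends `(identifier, msg)` to a shared queue, the master collects one message per slave, runs a callback over all of them, and routes each result back.

```python
import queue
import threading

def run_sync_round(master_msg, slave_msgs, master_callback):
    """Toy version of SyncMaster.run_master: collect one message per slave,
    invoke the callback on all messages, then route results back per slave."""
    inbox = queue.Queue()
    futures = {i: queue.Queue(maxsize=1) for i in range(1, len(slave_msgs) + 1)}
    slave_results = {}

    def slave(identifier, msg):
        inbox.put((identifier, msg))                  # like SlavePipe.run_slave
        slave_results[identifier] = futures[identifier].get()

    threads = [threading.Thread(target=slave, args=(i, m))
               for i, m in enumerate(slave_msgs, start=1)]
    for t in threads:
        t.start()

    intermediates = [(0, master_msg)]                 # master's own message first
    for _ in slave_msgs:
        intermediates.append(inbox.get())

    results = dict(master_callback(intermediates))
    for identifier, fut in futures.items():           # send results back to slaves
        fut.put(results[identifier])
    for t in threads:
        t.join()
    return results[0], slave_results

# Callback: broadcast the sum of all messages back to every participant,
# much like synchronized BN broadcasting global statistics.
total_cb = lambda pairs: [(i, sum(m for _, m in pairs)) for i, _ in pairs]
master_out, slave_outs = run_sync_round(1, [2, 3], total_cb)
```

This mirrors the library's pattern of reducing per-device partial sums on the master and broadcasting the result, without any torch dependency.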
/lib/nn/modules/replicate.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | # File : replicate.py
3 | # Author : Jiayuan Mao
4 | # Email : maojiayuan@gmail.com
5 | # Date : 27/01/2018
6 | #
7 | # This file is part of Synchronized-BatchNorm-PyTorch.
8 | # https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
9 | # Distributed under MIT License.
10 |
11 | import functools
12 |
13 | from torch.nn.parallel.data_parallel import DataParallel
14 |
15 | __all__ = [
16 | 'CallbackContext',
17 | 'execute_replication_callbacks',
18 | 'DataParallelWithCallback',
19 | 'patch_replication_callback'
20 | ]
21 |
22 |
23 | class CallbackContext(object):
24 | pass
25 |
26 |
27 | def execute_replication_callbacks(modules):
28 | """
29 |     Execute a replication callback `__data_parallel_replicate__` on each module created by the original replication.
30 |
31 |     The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`.
32 |
33 |     Note that, as all copies of a module are isomorphic, we assign each sub-module a context
34 |     (shared among the multiple copies of this module on different devices).
35 |     Through this context, different copies can share some information.
36 |
37 |     We guarantee that the callback on the master copy (the first copy) will be called before the callbacks
38 |     of any slave copies.
39 | """
40 | master_copy = modules[0]
41 | nr_modules = len(list(master_copy.modules()))
42 | ctxs = [CallbackContext() for _ in range(nr_modules)]
43 |
44 | for i, module in enumerate(modules):
45 | for j, m in enumerate(module.modules()):
46 | if hasattr(m, '__data_parallel_replicate__'):
47 | m.__data_parallel_replicate__(ctxs[j], i)
48 |
49 |
50 | class DataParallelWithCallback(DataParallel):
51 | """
52 | Data Parallel with a replication callback.
53 |
54 |     A replication callback `__data_parallel_replicate__` will be invoked on each module after it is created by
55 |     the original `replicate` function.
56 |     The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`.
57 |
58 | Examples:
59 | > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
60 | > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
61 | # sync_bn.__data_parallel_replicate__ will be invoked.
62 | """
63 |
64 | def replicate(self, module, device_ids):
65 | modules = super(DataParallelWithCallback, self).replicate(module, device_ids)
66 | execute_replication_callbacks(modules)
67 | return modules
68 |
69 |
70 | def patch_replication_callback(data_parallel):
71 | """
72 | Monkey-patch an existing `DataParallel` object. Add the replication callback.
73 |     Useful when you have a customized `DataParallel` implementation.
74 |
75 | Examples:
76 | > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
77 | > sync_bn = DataParallel(sync_bn, device_ids=[0, 1])
78 | > patch_replication_callback(sync_bn)
79 | # this is equivalent to
80 | > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
81 | > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
82 | """
83 |
84 | assert isinstance(data_parallel, DataParallel)
85 |
86 | old_replicate = data_parallel.replicate
87 |
88 | @functools.wraps(old_replicate)
89 | def new_replicate(module, device_ids):
90 | modules = old_replicate(module, device_ids)
91 | execute_replication_callbacks(modules)
92 | return modules
93 |
94 | data_parallel.replicate = new_replicate
95 |
--------------------------------------------------------------------------------
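`patch_replication_callback` above monkey-patches a bound method in place, wrapping it with `functools.wraps` so a hook runs on its return value. The same pattern stripped of any torch dependency (the class and hook here are purely illustrative stand-ins):

```python
import functools

class Replicator(object):
    """Stand-in for DataParallel: has a replicate-like method we want to hook."""
    def replicate(self, module, n):
        return [module for _ in range(n)]

def patch_with_post_hook(obj, method_name, hook):
    """Monkey-patch obj.<method_name> so `hook` runs on its return value,
    mirroring the structure of patch_replication_callback above."""
    old_method = getattr(obj, method_name)

    @functools.wraps(old_method)
    def new_method(*args, **kwargs):
        result = old_method(*args, **kwargs)
        hook(result)                      # post-hook sees the replicas
        return result

    setattr(obj, method_name, new_method)

calls = []
rep = Replicator()
patch_with_post_hook(rep, 'replicate', lambda mods: calls.append(len(mods)))
copies = rep.replicate('m', 3)
```

Patching the instance rather than the class keeps other `Replicator` objects unaffected, which is why the library patches the `DataParallel` object itself.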
/lib/nn/modules/tests/test_numeric_batchnorm.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | # File : test_numeric_batchnorm.py
3 | # Author : Jiayuan Mao
4 | # Email : maojiayuan@gmail.com
5 | # Date : 27/01/2018
6 | #
7 | # This file is part of Synchronized-BatchNorm-PyTorch.
8 |
9 | import unittest
10 |
11 | import torch
12 | import torch.nn as nn
13 | from torch.autograd import Variable
14 |
15 | from sync_batchnorm.unittest import TorchTestCase
16 |
17 |
18 | def handy_var(a, unbias=True):
19 | n = a.size(0)
20 | asum = a.sum(dim=0)
21 | as_sum = (a ** 2).sum(dim=0) # a square sum
22 | sumvar = as_sum - asum * asum / n
23 | if unbias:
24 | return sumvar / (n - 1)
25 | else:
26 | return sumvar / n
27 |
28 |
29 | class NumericTestCase(TorchTestCase):
30 | def testNumericBatchNorm(self):
31 | a = torch.rand(16, 10)
32 | bn = nn.BatchNorm2d(10, momentum=1, eps=1e-5, affine=False)
33 | bn.train()
34 |
35 | a_var1 = Variable(a, requires_grad=True)
36 | b_var1 = bn(a_var1)
37 | loss1 = b_var1.sum()
38 | loss1.backward()
39 |
40 | a_var2 = Variable(a, requires_grad=True)
41 | a_mean2 = a_var2.mean(dim=0, keepdim=True)
42 | a_std2 = torch.sqrt(handy_var(a_var2, unbias=False).clamp(min=1e-5))
43 | # a_std2 = torch.sqrt(a_var2.var(dim=0, keepdim=True, unbiased=False) + 1e-5)
44 | b_var2 = (a_var2 - a_mean2) / a_std2
45 | loss2 = b_var2.sum()
46 | loss2.backward()
47 |
48 | self.assertTensorClose(bn.running_mean, a.mean(dim=0))
49 | self.assertTensorClose(bn.running_var, handy_var(a))
50 | self.assertTensorClose(a_var1.data, a_var2.data)
51 | self.assertTensorClose(b_var1.data, b_var2.data)
52 | self.assertTensorClose(a_var1.grad, a_var2.grad)
53 |
54 |
55 | if __name__ == '__main__':
56 | unittest.main()
57 |
--------------------------------------------------------------------------------
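`handy_var` above computes variance via the one-pass identity `sum(a**2) - (sum(a))**2 / n` instead of a two-pass mean subtraction. A quick stdlib check of the same identity on a 1-D list (a pure-Python analogue of `handy_var`, cross-checked against the `statistics` module):

```python
import statistics

def handy_var_1d(a, unbias=True):
    """Variance via the sum-of-squares identity, mirroring handy_var above."""
    n = len(a)
    asum = sum(a)
    as_sum = sum(x * x for x in a)        # sum of squares
    sumvar = as_sum - asum * asum / n     # = sum((x - mean)**2)
    return sumvar / (n - 1) if unbias else sumvar / n

data = [0.5, 1.5, 2.0, 4.0]
v_unbiased = handy_var_1d(data)                # matches statistics.variance
v_biased = handy_var_1d(data, unbias=False)    # matches statistics.pvariance
```

The one-pass form is what makes synchronized BN cheap: each device only needs to communicate its local `sum` and `sum of squares`, and the master can reconstruct the global variance.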
/lib/nn/modules/tests/test_sync_batchnorm.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | # File : test_sync_batchnorm.py
3 | # Author : Jiayuan Mao
4 | # Email : maojiayuan@gmail.com
5 | # Date : 27/01/2018
6 | #
7 | # This file is part of Synchronized-BatchNorm-PyTorch.
8 |
9 | import unittest
10 |
11 | import torch
12 | import torch.nn as nn
13 | from torch.autograd import Variable
14 |
15 | from sync_batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, DataParallelWithCallback
16 | from sync_batchnorm.unittest import TorchTestCase
17 |
18 |
19 | def handy_var(a, unbias=True):
20 | n = a.size(0)
21 | asum = a.sum(dim=0)
22 | as_sum = (a ** 2).sum(dim=0) # a square sum
23 | sumvar = as_sum - asum * asum / n
24 | if unbias:
25 | return sumvar / (n - 1)
26 | else:
27 | return sumvar / n
28 |
29 |
30 | def _find_bn(module):
31 | for m in module.modules():
32 | if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, SynchronizedBatchNorm1d, SynchronizedBatchNorm2d)):
33 | return m
34 |
35 |
36 | class SyncTestCase(TorchTestCase):
37 | def _syncParameters(self, bn1, bn2):
38 | bn1.reset_parameters()
39 | bn2.reset_parameters()
40 | if bn1.affine and bn2.affine:
41 | bn2.weight.data.copy_(bn1.weight.data)
42 | bn2.bias.data.copy_(bn1.bias.data)
43 |
44 | def _checkBatchNormResult(self, bn1, bn2, input, is_train, cuda=False):
45 | """Check the forward and backward for the customized batch normalization."""
46 | bn1.train(mode=is_train)
47 | bn2.train(mode=is_train)
48 |
49 | if cuda:
50 | input = input.cuda()
51 |
52 | self._syncParameters(_find_bn(bn1), _find_bn(bn2))
53 |
54 | input1 = Variable(input, requires_grad=True)
55 | output1 = bn1(input1)
56 | output1.sum().backward()
57 | input2 = Variable(input, requires_grad=True)
58 | output2 = bn2(input2)
59 | output2.sum().backward()
60 |
61 | self.assertTensorClose(input1.data, input2.data)
62 | self.assertTensorClose(output1.data, output2.data)
63 | self.assertTensorClose(input1.grad, input2.grad)
64 | self.assertTensorClose(_find_bn(bn1).running_mean, _find_bn(bn2).running_mean)
65 | self.assertTensorClose(_find_bn(bn1).running_var, _find_bn(bn2).running_var)
66 |
67 | def testSyncBatchNormNormalTrain(self):
68 | bn = nn.BatchNorm1d(10)
69 | sync_bn = SynchronizedBatchNorm1d(10)
70 |
71 | self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10), True)
72 |
73 | def testSyncBatchNormNormalEval(self):
74 | bn = nn.BatchNorm1d(10)
75 | sync_bn = SynchronizedBatchNorm1d(10)
76 |
77 | self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10), False)
78 |
79 | def testSyncBatchNormSyncTrain(self):
80 | bn = nn.BatchNorm1d(10, eps=1e-5, affine=False)
81 | sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
82 | sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
83 |
84 | bn.cuda()
85 | sync_bn.cuda()
86 |
87 | self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10), True, cuda=True)
88 |
89 | def testSyncBatchNormSyncEval(self):
90 | bn = nn.BatchNorm1d(10, eps=1e-5, affine=False)
91 | sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
92 | sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
93 |
94 | bn.cuda()
95 | sync_bn.cuda()
96 |
97 | self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10), False, cuda=True)
98 |
99 | def testSyncBatchNorm2DSyncTrain(self):
100 | bn = nn.BatchNorm2d(10)
101 | sync_bn = SynchronizedBatchNorm2d(10)
102 | sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
103 |
104 | bn.cuda()
105 | sync_bn.cuda()
106 |
107 | self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10, 16, 16), True, cuda=True)
108 |
109 |
110 | if __name__ == '__main__':
111 | unittest.main()
112 |
--------------------------------------------------------------------------------
/lib/nn/modules/unittest.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | # File : unittest.py
3 | # Author : Jiayuan Mao
4 | # Email : maojiayuan@gmail.com
5 | # Date : 27/01/2018
6 | #
7 | # This file is part of Synchronized-BatchNorm-PyTorch.
8 | # https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
9 | # Distributed under MIT License.
10 |
11 | import unittest
12 |
13 | import numpy as np
14 | from torch.autograd import Variable
15 |
16 |
17 | def as_numpy(v):
18 | if isinstance(v, Variable):
19 | v = v.data
20 | return v.cpu().numpy()
21 |
22 |
23 | class TorchTestCase(unittest.TestCase):
24 | def assertTensorClose(self, a, b, atol=1e-3, rtol=1e-3):
25 | npa, npb = as_numpy(a), as_numpy(b)
26 | self.assertTrue(
27 | np.allclose(npa, npb, atol=atol),
28 | 'Tensor close check failed\n{}\n{}\nadiff={}, rdiff={}'.format(a, b, np.abs(npa - npb).max(), np.abs((npa - npb) / np.fmax(npa, 1e-5)).max())
29 | )
30 |
--------------------------------------------------------------------------------
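`assertTensorClose` above delegates to `np.allclose` with absolute and relative tolerances. A stdlib-only helper in the same spirit for flat numeric lists (illustrative name; note `math.isclose` uses a symmetric relative test, so this is analogous rather than identical to NumPy's check):

```python
import math

def lists_close(a, b, atol=1e-3, rtol=1e-3):
    """Elementwise closeness check for flat numeric lists, analogous in
    spirit to assertTensorClose above (tolerances, not exact equality)."""
    return len(a) == len(b) and all(
        math.isclose(x, y, rel_tol=rtol, abs_tol=atol) for x, y in zip(a, b))

ok = lists_close([1.0, 2.0], [1.0005, 2.0005])   # within tolerance
bad = lists_close([1.0, 2.0], [1.1, 2.0])        # first element too far off
```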
/lib/nn/parallel/__init__.py:
--------------------------------------------------------------------------------
1 | from .data_parallel import UserScatteredDataParallel, user_scattered_collate, async_copy_to
2 |
--------------------------------------------------------------------------------
/lib/nn/parallel/data_parallel.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf8 -*-
2 |
3 | import torch.cuda as cuda
4 | import torch.nn as nn
5 | import torch
6 | import collections
7 | from torch.nn.parallel._functions import Gather
8 |
9 |
10 | __all__ = ['UserScatteredDataParallel', 'user_scattered_collate', 'async_copy_to']
11 |
12 |
13 | def async_copy_to(obj, dev, main_stream=None):
14 | if torch.is_tensor(obj):
15 | v = obj.cuda(dev, non_blocking=True)
16 | if main_stream is not None:
17 | v.data.record_stream(main_stream)
18 | return v
19 |     elif isinstance(obj, collections.abc.Mapping):
20 |         return {k: async_copy_to(o, dev, main_stream) for k, o in obj.items()}
21 |     elif isinstance(obj, collections.abc.Sequence):
22 |         return [async_copy_to(o, dev, main_stream) for o in obj]
23 | else:
24 | return obj
25 |
26 |
27 | def dict_gather(outputs, target_device, dim=0):
28 | """
29 | Gathers variables from different GPUs on a specified device
30 | (-1 means the CPU), with dictionary support.
31 | """
32 | def gather_map(outputs):
33 | out = outputs[0]
34 | if torch.is_tensor(out):
35 |             # MJY(20180330) HACK: force nr_dims > 0
36 | if out.dim() == 0:
37 | outputs = [o.unsqueeze(0) for o in outputs]
38 | return Gather.apply(target_device, dim, *outputs)
39 | elif out is None:
40 | return None
41 |         elif isinstance(out, collections.abc.Mapping):
42 |             return {k: gather_map([o[k] for o in outputs]) for k in out}
43 |         elif isinstance(out, collections.abc.Sequence):
44 | return type(out)(map(gather_map, zip(*outputs)))
45 | return gather_map(outputs)
46 |
47 |
48 | class DictGatherDataParallel(nn.DataParallel):
49 | def gather(self, outputs, output_device):
50 | return dict_gather(outputs, output_device, dim=self.dim)
51 |
52 |
53 | class UserScatteredDataParallel(DictGatherDataParallel):
54 | def scatter(self, inputs, kwargs, device_ids):
55 | assert len(inputs) == 1
56 | inputs = inputs[0]
57 | inputs = _async_copy_stream(inputs, device_ids)
58 | inputs = [[i] for i in inputs]
59 | assert len(kwargs) == 0
60 | kwargs = [{} for _ in range(len(inputs))]
61 |
62 | return inputs, kwargs
63 |
64 |
65 | def user_scattered_collate(batch):
66 | return batch
67 |
68 |
69 | def _async_copy(inputs, device_ids):
70 | nr_devs = len(device_ids)
71 | assert type(inputs) in (tuple, list)
72 | assert len(inputs) == nr_devs
73 |
74 | outputs = []
75 | for i, dev in zip(inputs, device_ids):
76 | with cuda.device(dev):
77 | outputs.append(async_copy_to(i, dev))
78 |
79 | return tuple(outputs)
80 |
81 |
82 | def _async_copy_stream(inputs, device_ids):
83 | nr_devs = len(device_ids)
84 | assert type(inputs) in (tuple, list)
85 | assert len(inputs) == nr_devs
86 |
87 | outputs = []
88 | streams = [_get_stream(d) for d in device_ids]
89 | for i, dev, stream in zip(inputs, device_ids, streams):
90 | with cuda.device(dev):
91 | main_stream = cuda.current_stream()
92 | with cuda.stream(stream):
93 | outputs.append(async_copy_to(i, dev, main_stream=main_stream))
94 | main_stream.wait_stream(stream)
95 |
96 | return outputs
97 |
98 |
99 | """Adapted from: torch/nn/parallel/_functions.py"""
100 | # background streams used for copying
101 | _streams = None
102 |
103 |
104 | def _get_stream(device):
105 | """Gets a background stream for copying between CPU and GPU"""
106 | global _streams
107 | if device == -1:
108 | return None
109 | if _streams is None:
110 | _streams = [None] * cuda.device_count()
111 | if _streams[device] is None: _streams[device] = cuda.Stream(device)
112 | return _streams[device]
113 |
--------------------------------------------------------------------------------
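`dict_gather` above recurses over nested containers, rebuilding the same structure while combining the leaves. The traversal pattern in isolation, stdlib-only (a hypothetical helper; leaves are combined with a user-supplied function instead of `Gather.apply`):

```python
def map_structure(fn, outputs):
    """Apply `fn` to corresponding leaves of several same-shaped nested
    structures (dicts/lists/tuples), mirroring gather_map's recursion."""
    out = outputs[0]
    if isinstance(out, dict):
        return {k: map_structure(fn, [o[k] for o in outputs]) for k in out}
    if isinstance(out, (list, tuple)):
        return type(out)(map_structure(fn, group) for group in zip(*outputs))
    return fn(outputs)            # leaf: combine the values from all structures

# "Gather" two per-device result dicts by summing matching leaves.
dev0 = {'loss': 1.0, 'correct': [3, 4]}
dev1 = {'loss': 2.0, 'correct': [5, 6]}
merged = map_structure(sum, [dev0, dev1])
```

As in `gather_map`, the recursion preserves container types (`type(out)(...)`) so a tuple of outputs gathers back into a tuple.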
/lib/utils/__init__.py:
--------------------------------------------------------------------------------
1 | from .th import *
2 |
--------------------------------------------------------------------------------
/lib/utils/data/__init__.py:
--------------------------------------------------------------------------------
1 |
2 | from .dataset import Dataset, TensorDataset, ConcatDataset
3 | from .dataloader import DataLoader
4 |
--------------------------------------------------------------------------------
/lib/utils/data/dataloader.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.multiprocessing as multiprocessing
3 | from torch._C import _set_worker_signal_handlers, _update_worker_pids, \
4 | _remove_worker_pids, _error_if_any_worker_fails
5 | from .sampler import SequentialSampler, RandomSampler, BatchSampler
6 | import signal
7 | import functools
8 | import collections
9 | import re
10 | import sys
11 | import threading
12 | import traceback
13 | from torch._six import string_classes, int_classes
14 | import numpy as np
15 |
16 | if sys.version_info[0] == 2:
17 | import Queue as queue
18 | else:
19 | import queue
20 |
21 |
22 | class ExceptionWrapper(object):
23 | r"Wraps an exception plus traceback to communicate across threads"
24 |
25 | def __init__(self, exc_info):
26 | self.exc_type = exc_info[0]
27 | self.exc_msg = "".join(traceback.format_exception(*exc_info))
28 |
29 |
30 | _use_shared_memory = False
31 | """Whether to use shared memory in default_collate"""
32 |
33 |
34 | def _worker_loop(dataset, index_queue, data_queue, collate_fn, seed, init_fn, worker_id):
35 | global _use_shared_memory
36 | _use_shared_memory = True
37 |
38 |     # Initialize C side signal handlers for SIGBUS and SIGSEGV. Python signal
39 | # module's handlers are executed after Python returns from C low-level
40 | # handlers, likely when the same fatal signal happened again already.
41 | # https://docs.python.org/3/library/signal.html Sec. 18.8.1.1
42 | _set_worker_signal_handlers()
43 |
44 | torch.set_num_threads(1)
45 | torch.manual_seed(seed)
46 | np.random.seed(seed)
47 |
48 | if init_fn is not None:
49 | init_fn(worker_id)
50 |
51 | while True:
52 | r = index_queue.get()
53 | if r is None:
54 | break
55 | idx, batch_indices = r
56 | try:
57 | samples = collate_fn([dataset[i] for i in batch_indices])
58 | except Exception:
59 | data_queue.put((idx, ExceptionWrapper(sys.exc_info())))
60 | else:
61 | data_queue.put((idx, samples))
62 |
63 |
64 | def _worker_manager_loop(in_queue, out_queue, done_event, pin_memory, device_id):
65 | if pin_memory:
66 | torch.cuda.set_device(device_id)
67 |
68 | while True:
69 | try:
70 | r = in_queue.get()
71 | except Exception:
72 | if done_event.is_set():
73 | return
74 | raise
75 | if r is None:
76 | break
77 | if isinstance(r[1], ExceptionWrapper):
78 | out_queue.put(r)
79 | continue
80 | idx, batch = r
81 | try:
82 | if pin_memory:
83 | batch = pin_memory_batch(batch)
84 | except Exception:
85 | out_queue.put((idx, ExceptionWrapper(sys.exc_info())))
86 | else:
87 | out_queue.put((idx, batch))
88 |
89 | numpy_type_map = {
90 | 'float64': torch.DoubleTensor,
91 | 'float32': torch.FloatTensor,
92 | 'float16': torch.HalfTensor,
93 | 'int64': torch.LongTensor,
94 | 'int32': torch.IntTensor,
95 | 'int16': torch.ShortTensor,
96 | 'int8': torch.CharTensor,
97 | 'uint8': torch.ByteTensor,
98 | }
99 |
100 |
101 | def default_collate(batch):
102 | "Puts each data field into a tensor with outer dimension batch size"
103 |
104 | error_msg = "batch must contain tensors, numbers, dicts or lists; found {}"
105 | elem_type = type(batch[0])
106 | if torch.is_tensor(batch[0]):
107 | out = None
108 | if _use_shared_memory:
109 | # If we're in a background process, concatenate directly into a
110 | # shared memory tensor to avoid an extra copy
111 | numel = sum([x.numel() for x in batch])
112 | storage = batch[0].storage()._new_shared(numel)
113 | out = batch[0].new(storage)
114 | return torch.stack(batch, 0, out=out)
115 | elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
116 | and elem_type.__name__ != 'string_':
117 | elem = batch[0]
118 | if elem_type.__name__ == 'ndarray':
119 | # array of string classes and object
120 | if re.search('[SaUO]', elem.dtype.str) is not None:
121 | raise TypeError(error_msg.format(elem.dtype))
122 |
123 | return torch.stack([torch.from_numpy(b) for b in batch], 0)
124 | if elem.shape == (): # scalars
125 | py_type = float if elem.dtype.name.startswith('float') else int
126 | return numpy_type_map[elem.dtype.name](list(map(py_type, batch)))
127 | elif isinstance(batch[0], int_classes):
128 | return torch.LongTensor(batch)
129 | elif isinstance(batch[0], float):
130 | return torch.DoubleTensor(batch)
131 | elif isinstance(batch[0], string_classes):
132 | return batch
133 |     elif isinstance(batch[0], collections.abc.Mapping):
134 |         return {key: default_collate([d[key] for d in batch]) for key in batch[0]}
135 |     elif isinstance(batch[0], collections.abc.Sequence):
136 | transposed = zip(*batch)
137 | return [default_collate(samples) for samples in transposed]
138 |
139 | raise TypeError((error_msg.format(type(batch[0]))))
140 |
141 |
142 | def pin_memory_batch(batch):
143 | if torch.is_tensor(batch):
144 | return batch.pin_memory()
145 | elif isinstance(batch, string_classes):
146 | return batch
147 |     elif isinstance(batch, collections.abc.Mapping):
148 |         return {k: pin_memory_batch(sample) for k, sample in batch.items()}
149 |     elif isinstance(batch, collections.abc.Sequence):
150 | return [pin_memory_batch(sample) for sample in batch]
151 | else:
152 | return batch
153 |
154 |
155 | _SIGCHLD_handler_set = False
156 | """Whether SIGCHLD handler is set for DataLoader worker failures. Only one
157 | handler needs to be set for all DataLoaders in a process."""
158 |
159 |
160 | def _set_SIGCHLD_handler():
161 | # Windows doesn't support SIGCHLD handler
162 | if sys.platform == 'win32':
163 | return
164 | # can't set signal in child threads
165 | if not isinstance(threading.current_thread(), threading._MainThread):
166 | return
167 | global _SIGCHLD_handler_set
168 | if _SIGCHLD_handler_set:
169 | return
170 | previous_handler = signal.getsignal(signal.SIGCHLD)
171 | if not callable(previous_handler):
172 | previous_handler = None
173 |
174 | def handler(signum, frame):
175 | # This following call uses `waitid` with WNOHANG from C side. Therefore,
176 | # Python can still get and update the process status successfully.
177 | _error_if_any_worker_fails()
178 | if previous_handler is not None:
179 | previous_handler(signum, frame)
180 |
181 | signal.signal(signal.SIGCHLD, handler)
182 | _SIGCHLD_handler_set = True
183 |
184 |
185 | class DataLoaderIter(object):
186 | "Iterates once over the DataLoader's dataset, as specified by the sampler"
187 |
188 | def __init__(self, loader):
189 | self.dataset = loader.dataset
190 | self.collate_fn = loader.collate_fn
191 | self.batch_sampler = loader.batch_sampler
192 | self.num_workers = loader.num_workers
193 | self.pin_memory = loader.pin_memory and torch.cuda.is_available()
194 | self.timeout = loader.timeout
195 | self.done_event = threading.Event()
196 |
197 | self.sample_iter = iter(self.batch_sampler)
198 |
199 | if self.num_workers > 0:
200 | self.worker_init_fn = loader.worker_init_fn
201 | self.index_queue = multiprocessing.SimpleQueue()
202 | self.worker_result_queue = multiprocessing.SimpleQueue()
203 | self.batches_outstanding = 0
204 | self.worker_pids_set = False
205 | self.shutdown = False
206 | self.send_idx = 0
207 | self.rcvd_idx = 0
208 | self.reorder_dict = {}
209 |
210 | base_seed = torch.LongTensor(1).random_(0, 2**31-1)[0]
211 | self.workers = [
212 | multiprocessing.Process(
213 | target=_worker_loop,
214 | args=(self.dataset, self.index_queue, self.worker_result_queue, self.collate_fn,
215 | base_seed + i, self.worker_init_fn, i))
216 | for i in range(self.num_workers)]
217 |
218 | if self.pin_memory or self.timeout > 0:
219 | self.data_queue = queue.Queue()
220 | if self.pin_memory:
221 | maybe_device_id = torch.cuda.current_device()
222 | else:
223 | # do not initialize cuda context if not necessary
224 | maybe_device_id = None
225 | self.worker_manager_thread = threading.Thread(
226 | target=_worker_manager_loop,
227 | args=(self.worker_result_queue, self.data_queue, self.done_event, self.pin_memory,
228 | maybe_device_id))
229 | self.worker_manager_thread.daemon = True
230 | self.worker_manager_thread.start()
231 | else:
232 | self.data_queue = self.worker_result_queue
233 |
234 | for w in self.workers:
235 | w.daemon = True # ensure that the worker exits on process exit
236 | w.start()
237 |
238 | _update_worker_pids(id(self), tuple(w.pid for w in self.workers))
239 | _set_SIGCHLD_handler()
240 | self.worker_pids_set = True
241 |
242 | # prime the prefetch loop
243 | for _ in range(2 * self.num_workers):
244 | self._put_indices()
245 |
246 | def __len__(self):
247 | return len(self.batch_sampler)
248 |
249 | def _get_batch(self):
250 | if self.timeout > 0:
251 | try:
252 | return self.data_queue.get(timeout=self.timeout)
253 | except queue.Empty:
254 | raise RuntimeError('DataLoader timed out after {} seconds'.format(self.timeout))
255 | else:
256 | return self.data_queue.get()
257 |
258 | def __next__(self):
259 | if self.num_workers == 0: # same-process loading
260 | indices = next(self.sample_iter) # may raise StopIteration
261 | batch = self.collate_fn([self.dataset[i] for i in indices])
262 | if self.pin_memory:
263 | batch = pin_memory_batch(batch)
264 | return batch
265 |
266 | # check if the next sample has already been generated
267 | if self.rcvd_idx in self.reorder_dict:
268 | batch = self.reorder_dict.pop(self.rcvd_idx)
269 | return self._process_next_batch(batch)
270 |
271 | if self.batches_outstanding == 0:
272 | self._shutdown_workers()
273 | raise StopIteration
274 |
275 | while True:
276 | assert (not self.shutdown and self.batches_outstanding > 0)
277 | idx, batch = self._get_batch()
278 | self.batches_outstanding -= 1
279 | if idx != self.rcvd_idx:
280 | # store out-of-order samples
281 | self.reorder_dict[idx] = batch
282 | continue
283 | return self._process_next_batch(batch)
284 |
285 | next = __next__ # Python 2 compatibility
286 |
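The `reorder_dict` bookkeeping in `__next__` above stashes batches that arrive out of order until their index comes up. The same logic in isolation (an illustrative generator over `(idx, batch)` arrival pairs):

```python
def in_order(arrivals):
    """Yield batches in index order even if they arrive out of order,
    mirroring the reorder_dict bookkeeping in __next__ above."""
    reorder_dict = {}
    rcvd_idx = 0
    for idx, batch in arrivals:
        if idx != rcvd_idx:
            reorder_dict[idx] = batch        # store out-of-order batch
            continue
        yield batch
        rcvd_idx += 1
        while rcvd_idx in reorder_dict:      # flush any now-ready batches
            yield reorder_dict.pop(rcvd_idx)
            rcvd_idx += 1

ordered = list(in_order([(1, 'b'), (0, 'a'), (3, 'd'), (2, 'c')]))
```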
287 | def __iter__(self):
288 | return self
289 |
290 | def _put_indices(self):
291 | assert self.batches_outstanding < 2 * self.num_workers
292 | indices = next(self.sample_iter, None)
293 | if indices is None:
294 | return
295 | self.index_queue.put((self.send_idx, indices))
296 | self.batches_outstanding += 1
297 | self.send_idx += 1
298 |
299 | def _process_next_batch(self, batch):
300 | self.rcvd_idx += 1
301 | self._put_indices()
302 | if isinstance(batch, ExceptionWrapper):
303 | raise batch.exc_type(batch.exc_msg)
304 | return batch
305 |
306 | def __getstate__(self):
307 | # TODO: add limited pickling support for sharing an iterator
308 | # across multiple threads for HOGWILD.
309 | # Probably the best way to do this is by moving the sample pushing
310 | # to a separate thread and then just sharing the data queue
311 | # but signalling the end is tricky without a non-blocking API
312 | raise NotImplementedError("DataLoaderIterator cannot be pickled")
313 |
314 | def _shutdown_workers(self):
315 | try:
316 | if not self.shutdown:
317 | self.shutdown = True
318 | self.done_event.set()
319 | # if worker_manager_thread is waiting to put
320 | while not self.data_queue.empty():
321 | self.data_queue.get()
322 | for _ in self.workers:
323 | self.index_queue.put(None)
324 | # done_event should be sufficient to exit worker_manager_thread,
325 | # but be safe here and put another None
326 | self.worker_result_queue.put(None)
327 | finally:
328 | # removes pids no matter what
329 | if self.worker_pids_set:
330 | _remove_worker_pids(id(self))
331 | self.worker_pids_set = False
332 |
333 | def __del__(self):
334 | if self.num_workers > 0:
335 | self._shutdown_workers()
336 |
337 |
338 | class DataLoader(object):
339 | """
340 | Data loader. Combines a dataset and a sampler, and provides
341 | single- or multi-process iterators over the dataset.
342 |
343 | Arguments:
344 | dataset (Dataset): dataset from which to load the data.
345 | batch_size (int, optional): how many samples per batch to load
346 | (default: 1).
347 | shuffle (bool, optional): set to ``True`` to have the data reshuffled
348 | at every epoch (default: False).
349 | sampler (Sampler, optional): defines the strategy to draw samples from
350 | the dataset. If specified, ``shuffle`` must be False.
351 | batch_sampler (Sampler, optional): like sampler, but returns a batch of
352 | indices at a time. Mutually exclusive with batch_size, shuffle,
353 | sampler, and drop_last.
354 | num_workers (int, optional): how many subprocesses to use for data
355 | loading. 0 means that the data will be loaded in the main process.
356 | (default: 0)
357 | collate_fn (callable, optional): merges a list of samples to form a mini-batch.
358 | pin_memory (bool, optional): If ``True``, the data loader will copy tensors
359 | into CUDA pinned memory before returning them.
360 | drop_last (bool, optional): set to ``True`` to drop the last incomplete batch,
361 | if the dataset size is not divisible by the batch size. If ``False`` and
362 | the size of dataset is not divisible by the batch size, then the last batch
363 | will be smaller. (default: False)
364 | timeout (numeric, optional): if positive, the timeout value for collecting a batch
365 | from workers. Should always be non-negative. (default: 0)
366 | worker_init_fn (callable, optional): If not None, this will be called on each
367 | worker subprocess with the worker id (an int in ``[0, num_workers - 1]``) as
368 | input, after seeding and before data loading. (default: None)
369 |
370 | .. note:: By default, each worker will have its PyTorch seed set to
371 | ``base_seed + worker_id``, where ``base_seed`` is a long generated
372 | by the main process using its RNG. You may use ``torch.initial_seed()`` to access
373 | this value in :attr:`worker_init_fn`, which can be used to set other seeds
374 | (e.g. NumPy) before data loading.
375 |
376 | .. warning:: If the ``spawn`` start method is used, :attr:`worker_init_fn` cannot be an
377 | unpicklable object, e.g., a lambda function.
378 | """
379 |
380 | def __init__(self, dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None,
381 | num_workers=0, collate_fn=default_collate, pin_memory=False, drop_last=False,
382 | timeout=0, worker_init_fn=None):
383 | self.dataset = dataset
384 | self.batch_size = batch_size
385 | self.num_workers = num_workers
386 | self.collate_fn = collate_fn
387 | self.pin_memory = pin_memory
388 | self.drop_last = drop_last
389 | self.timeout = timeout
390 | self.worker_init_fn = worker_init_fn
391 |
392 | if timeout < 0:
393 | raise ValueError('timeout option should be non-negative')
394 |
395 | if batch_sampler is not None:
396 | if batch_size > 1 or shuffle or sampler is not None or drop_last:
397 | raise ValueError('batch_sampler is mutually exclusive with '
398 | 'batch_size, shuffle, sampler, and drop_last')
399 |
400 | if sampler is not None and shuffle:
401 | raise ValueError('sampler is mutually exclusive with shuffle')
402 |
403 | if self.num_workers < 0:
404 | raise ValueError('num_workers cannot be negative; '
405 | 'use num_workers=0 to disable multiprocessing.')
406 |
407 | if batch_sampler is None:
408 | if sampler is None:
409 | if shuffle:
410 | sampler = RandomSampler(dataset)
411 | else:
412 | sampler = SequentialSampler(dataset)
413 | batch_sampler = BatchSampler(sampler, batch_size, drop_last)
414 |
415 | self.sampler = sampler
416 | self.batch_sampler = batch_sampler
417 |
418 | def __iter__(self):
419 | return DataLoaderIter(self)
420 |
421 | def __len__(self):
422 | return len(self.batch_sampler)
423 |
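Editor's aside on the listing above: the default pipeline that `DataLoader.__init__` assembles (sampler choice, `BatchSampler` chunking, `drop_last`) can be sketched in pure Python. `make_batches` is a hypothetical stand-in for illustration, not part of this repo, and uses `random.shuffle` where the real loader uses `RandomSampler`:

```python
import random

# Pure-Python sketch of the defaults set up in DataLoader.__init__:
# shuffle -> random permutation (RandomSampler), otherwise sequential
# order (SequentialSampler), then BatchSampler-style chunking;
# drop_last discards a trailing partial batch.
def make_batches(n, batch_size, shuffle=False, drop_last=False, seed=0):
    indices = list(range(n))
    if shuffle:
        random.Random(seed).shuffle(indices)
    batches = [indices[i:i + batch_size] for i in range(0, n, batch_size)]
    if drop_last and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

print(make_batches(10, 3))  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
print(len(make_batches(10, 3, drop_last=True)))  # 3
```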
--------------------------------------------------------------------------------
/lib/utils/data/dataset.py:
--------------------------------------------------------------------------------
1 | import bisect
2 | import warnings
3 |
4 | from torch._utils import _accumulate
5 | from torch import randperm
6 |
7 |
8 | class Dataset(object):
9 | """An abstract class representing a Dataset.
10 |
11 | All other datasets should subclass it. All subclasses should override
12 | ``__len__``, which provides the size of the dataset, and ``__getitem__``,
13 | supporting integer indexing in the range from 0 to len(self), exclusive.
14 | """
15 |
16 | def __getitem__(self, index):
17 | raise NotImplementedError
18 |
19 | def __len__(self):
20 | raise NotImplementedError
21 |
22 | def __add__(self, other):
23 | return ConcatDataset([self, other])
24 |
25 |
26 | class TensorDataset(Dataset):
27 | """Dataset wrapping data and target tensors.
28 |
29 | Each sample will be retrieved by indexing both tensors along the first
30 | dimension.
31 |
32 | Arguments:
33 | data_tensor (Tensor): contains sample data.
34 | target_tensor (Tensor): contains sample targets (labels).
35 | """
36 |
37 | def __init__(self, data_tensor, target_tensor):
38 | assert data_tensor.size(0) == target_tensor.size(0)
39 | self.data_tensor = data_tensor
40 | self.target_tensor = target_tensor
41 |
42 | def __getitem__(self, index):
43 | return self.data_tensor[index], self.target_tensor[index]
44 |
45 | def __len__(self):
46 | return self.data_tensor.size(0)
47 |
48 |
49 | class ConcatDataset(Dataset):
50 | """
51 | Dataset to concatenate multiple datasets.
52 | Useful for assembling different existing datasets, possibly
53 | large-scale ones, since the concatenation operation is done
54 | on the fly.
55 |
56 | Arguments:
57 | datasets (iterable): List of datasets to be concatenated
58 | """
59 |
60 | @staticmethod
61 | def cumsum(sequence):
62 | r, s = [], 0
63 | for e in sequence:
64 | l = len(e)
65 | r.append(l + s)
66 | s += l
67 | return r
68 |
69 | def __init__(self, datasets):
70 | super(ConcatDataset, self).__init__()
71 | assert len(datasets) > 0, 'datasets should not be an empty iterable'
72 | self.datasets = list(datasets)
73 | self.cumulative_sizes = self.cumsum(self.datasets)
74 |
75 | def __len__(self):
76 | return self.cumulative_sizes[-1]
77 |
78 | def __getitem__(self, idx):
79 | dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)
80 | if dataset_idx == 0:
81 | sample_idx = idx
82 | else:
83 | sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]
84 | return self.datasets[dataset_idx][sample_idx]
85 |
86 | @property
87 | def cummulative_sizes(self):
88 | warnings.warn("cummulative_sizes attribute is renamed to "
89 | "cumulative_sizes", DeprecationWarning, stacklevel=2)
90 | return self.cumulative_sizes
91 |
92 |
93 | class Subset(Dataset):
94 | def __init__(self, dataset, indices):
95 | self.dataset = dataset
96 | self.indices = indices
97 |
98 | def __getitem__(self, idx):
99 | return self.dataset[self.indices[idx]]
100 |
101 | def __len__(self):
102 | return len(self.indices)
103 |
104 |
105 | def random_split(dataset, lengths):
106 | """
107 | Randomly split a dataset into non-overlapping new datasets of the
108 | given lengths.
109 |
110 | Arguments:
111 | dataset (Dataset): Dataset to be split
112 | lengths (iterable): lengths of splits to be produced
113 | """
114 | if sum(lengths) != len(dataset):
115 | raise ValueError("Sum of input lengths does not equal the length of the input dataset!")
116 |
117 | indices = randperm(sum(lengths))
118 | return [Subset(dataset, indices[offset - length:offset]) for offset, length in zip(_accumulate(lengths), lengths)]
119 |
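The index arithmetic in `ConcatDataset.__getitem__` above is compact enough to check by hand: `cumsum()` builds cumulative sizes and `bisect_right` maps a global index to a `(dataset_idx, sample_idx)` pair. `locate` below is a hypothetical helper for illustration, not part of the repo:

```python
import bisect

# Mirror of ConcatDataset.__getitem__'s index mapping: given the
# cumulative sizes of the member datasets, resolve a global index
# to (which dataset, which sample within it).
def locate(cumulative_sizes, idx):
    dataset_idx = bisect.bisect_right(cumulative_sizes, idx)
    if dataset_idx == 0:
        sample_idx = idx
    else:
        sample_idx = idx - cumulative_sizes[dataset_idx - 1]
    return dataset_idx, sample_idx

# two datasets of lengths 3 and 5 -> cumulative sizes [3, 8]
print(locate([3, 8], 0))  # (0, 0): first sample of the first dataset
print(locate([3, 8], 3))  # (1, 0): first sample of the second dataset
print(locate([3, 8], 7))  # (1, 4): last sample overall
```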
--------------------------------------------------------------------------------
/lib/utils/data/distributed.py:
--------------------------------------------------------------------------------
1 | import math
2 | import torch
3 | from .sampler import Sampler
4 | from torch.distributed import get_world_size, get_rank
5 |
6 |
7 | class DistributedSampler(Sampler):
8 | """Sampler that restricts data loading to a subset of the dataset.
9 |
10 | It is especially useful in conjunction with
11 | :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each
12 | process can pass a DistributedSampler instance as a DataLoader sampler,
13 | and load a subset of the original dataset that is exclusive to it.
14 |
15 | .. note::
16 | Dataset is assumed to be of constant size.
17 |
18 | Arguments:
19 | dataset: Dataset used for sampling.
20 | num_replicas (optional): Number of processes participating in
21 | distributed training.
22 | rank (optional): Rank of the current process within num_replicas.
23 | """
24 |
25 | def __init__(self, dataset, num_replicas=None, rank=None):
26 | if num_replicas is None:
27 | num_replicas = get_world_size()
28 | if rank is None:
29 | rank = get_rank()
30 | self.dataset = dataset
31 | self.num_replicas = num_replicas
32 | self.rank = rank
33 | self.epoch = 0
34 | self.num_samples = int(math.ceil(len(self.dataset) * 1.0 / self.num_replicas))
35 | self.total_size = self.num_samples * self.num_replicas
36 |
37 | def __iter__(self):
38 | # deterministically shuffle based on epoch
39 | g = torch.Generator()
40 | g.manual_seed(self.epoch)
41 | indices = list(torch.randperm(len(self.dataset), generator=g))
42 |
43 | # add extra samples to make it evenly divisible
44 | indices += indices[:(self.total_size - len(indices))]
45 | assert len(indices) == self.total_size
46 |
47 | # subsample
48 | offset = self.num_samples * self.rank
49 | indices = indices[offset:offset + self.num_samples]
50 | assert len(indices) == self.num_samples
51 |
52 | return iter(indices)
53 |
54 | def __len__(self):
55 | return self.num_samples
56 |
57 | def set_epoch(self, epoch):
58 | self.epoch = epoch
59 |
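The partitioning arithmetic of `DistributedSampler.__iter__` above can be sketched in pure Python: each rank gets `ceil(N / num_replicas)` indices, and the permutation is padded with its own head so every rank receives exactly `num_samples` entries. `shard` is a hypothetical helper mirroring the code above:

```python
import math

# Mirror of DistributedSampler's subsampling: pad the permutation to a
# multiple of num_replicas, then slice out this rank's contiguous chunk.
def shard(indices, num_replicas, rank):
    num_samples = int(math.ceil(len(indices) * 1.0 / num_replicas))
    total_size = num_samples * num_replicas
    padded = indices + indices[:total_size - len(indices)]
    offset = num_samples * rank
    return padded[offset:offset + num_samples]

perm = list(range(10))  # stand-in for the epoch-seeded permutation
print(shard(perm, 3, 0))  # [0, 1, 2, 3]
print(shard(perm, 3, 2))  # [8, 9, 0, 1] -- the padding repeats the head
```

Note that `set_epoch` must be called once per epoch, since the permutation is seeded with the epoch number; otherwise every epoch sees the same shuffle.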
--------------------------------------------------------------------------------
/lib/utils/data/sampler.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 |
4 | class Sampler(object):
5 | """Base class for all Samplers.
6 |
7 | Every Sampler subclass has to provide an __iter__ method, providing a way
8 | to iterate over indices of dataset elements, and a __len__ method that
9 | returns the length of the returned iterator.
10 | """
11 |
12 | def __init__(self, data_source):
13 | pass
14 |
15 | def __iter__(self):
16 | raise NotImplementedError
17 |
18 | def __len__(self):
19 | raise NotImplementedError
20 |
21 |
22 | class SequentialSampler(Sampler):
23 | """Samples elements sequentially, always in the same order.
24 |
25 | Arguments:
26 | data_source (Dataset): dataset to sample from
27 | """
28 |
29 | def __init__(self, data_source):
30 | self.data_source = data_source
31 |
32 | def __iter__(self):
33 | return iter(range(len(self.data_source)))
34 |
35 | def __len__(self):
36 | return len(self.data_source)
37 |
38 |
39 | class RandomSampler(Sampler):
40 | """Samples elements randomly, without replacement.
41 |
42 | Arguments:
43 | data_source (Dataset): dataset to sample from
44 | """
45 |
46 | def __init__(self, data_source):
47 | self.data_source = data_source
48 |
49 | def __iter__(self):
50 | return iter(torch.randperm(len(self.data_source)).long())
51 |
52 | def __len__(self):
53 | return len(self.data_source)
54 |
55 |
56 | class SubsetRandomSampler(Sampler):
57 | """Samples elements randomly from a given list of indices, without replacement.
58 |
59 | Arguments:
60 | indices (list): a list of indices
61 | """
62 |
63 | def __init__(self, indices):
64 | self.indices = indices
65 |
66 | def __iter__(self):
67 | return (self.indices[i] for i in torch.randperm(len(self.indices)))
68 |
69 | def __len__(self):
70 | return len(self.indices)
71 |
72 |
73 | class WeightedRandomSampler(Sampler):
74 | """Samples elements from [0,..,len(weights)-1] with given probabilities (weights).
75 |
76 | Arguments:
77 | weights (list): a list of weights, not necessarily summing up to one
78 | num_samples (int): number of samples to draw
79 | replacement (bool): if ``True``, samples are drawn with replacement.
80 | If not, they are drawn without replacement, which means that when a
81 | sample index is drawn for a row, it cannot be drawn again for that row.
82 | """
83 |
84 | def __init__(self, weights, num_samples, replacement=True):
85 | self.weights = torch.DoubleTensor(weights)
86 | self.num_samples = num_samples
87 | self.replacement = replacement
88 |
89 | def __iter__(self):
90 | return iter(torch.multinomial(self.weights, self.num_samples, self.replacement))
91 |
92 | def __len__(self):
93 | return self.num_samples
94 |
95 |
96 | class BatchSampler(object):
97 | """Wraps another sampler to yield a mini-batch of indices.
98 |
99 | Args:
100 | sampler (Sampler): Base sampler.
101 | batch_size (int): Size of mini-batch.
102 | drop_last (bool): If ``True``, the sampler will drop the last batch if
103 | its size would be less than ``batch_size``.
104 |
105 | Example:
106 | >>> list(BatchSampler(range(10), batch_size=3, drop_last=False))
107 | [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
108 | >>> list(BatchSampler(range(10), batch_size=3, drop_last=True))
109 | [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
110 | """
111 |
112 | def __init__(self, sampler, batch_size, drop_last):
113 | self.sampler = sampler
114 | self.batch_size = batch_size
115 | self.drop_last = drop_last
116 |
117 | def __iter__(self):
118 | batch = []
119 | for idx in self.sampler:
120 | batch.append(idx)
121 | if len(batch) == self.batch_size:
122 | yield batch
123 | batch = []
124 | if len(batch) > 0 and not self.drop_last:
125 | yield batch
126 |
127 | def __len__(self):
128 | if self.drop_last:
129 | return len(self.sampler) // self.batch_size
130 | else:
131 | return (len(self.sampler) + self.batch_size - 1) // self.batch_size
132 |
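A pure-Python analogue of `WeightedRandomSampler` with replacement, for readers without a GPU environment at hand: `random.choices` plays the role of `torch.multinomial`, and as the docstring above notes, the weights need not sum to one. `weighted_indices` is a hypothetical helper, not part of the repo:

```python
import random

# Draw num_samples indices from [0, len(weights) - 1] with replacement,
# with probability proportional to the (unnormalized) weights.
def weighted_indices(weights, num_samples, seed=0):
    rng = random.Random(seed)
    return rng.choices(range(len(weights)), weights=weights, k=num_samples)

# All of the probability mass on index 2, so every draw returns 2.
print(weighted_indices([0.0, 0.0, 1.0], num_samples=5))  # [2, 2, 2, 2, 2]
```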
--------------------------------------------------------------------------------
/lib/utils/th.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torch.autograd import Variable
3 | import numpy as np
4 | import collections.abc
5 |
6 | __all__ = ['as_variable', 'as_numpy', 'mark_volatile']
7 |
8 | def as_variable(obj):
9 | if isinstance(obj, Variable):
10 | return obj
11 | if isinstance(obj, collections.abc.Sequence):
12 | return [as_variable(v) for v in obj]
13 | elif isinstance(obj, collections.abc.Mapping):
14 | return {k: as_variable(v) for k, v in obj.items()}
15 | else:
16 | return Variable(obj)
17 |
18 | def as_numpy(obj):
19 | if isinstance(obj, collections.abc.Sequence):
20 | return [as_numpy(v) for v in obj]
21 | elif isinstance(obj, collections.abc.Mapping):
22 | return {k: as_numpy(v) for k, v in obj.items()}
23 | elif isinstance(obj, Variable):
24 | return obj.data.cpu().numpy()
25 | elif torch.is_tensor(obj):
26 | return obj.cpu().numpy()
27 | else:
28 | return np.array(obj)
29 |
30 | def mark_volatile(obj):
31 | if torch.is_tensor(obj):
32 | obj = Variable(obj)
33 | if isinstance(obj, Variable):
34 | obj.no_grad = True
35 | return obj
36 | elif isinstance(obj, collections.abc.Mapping):
37 | return {k: mark_volatile(o) for k, o in obj.items()}
38 | elif isinstance(obj, collections.abc.Sequence):
39 | return [mark_volatile(o) for o in obj]
40 | else:
41 | return obj
42 |
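The three helpers above share one recursive-container pattern: walk mappings and sequences, applying a leaf converter at the bottom. A torch-free sketch of that pattern (`convert` and `to_leaf` are hypothetical names; unlike th.py, this sketch explicitly excludes strings, which are also Sequences and would otherwise recurse forever):

```python
from collections.abc import Mapping, Sequence

# Recursively rebuild nested dicts/lists, converting each leaf value.
# to_leaf stands in for the tensor conversions done in th.py.
def convert(obj, to_leaf):
    if isinstance(obj, Mapping):
        return {k: convert(v, to_leaf) for k, v in obj.items()}
    if isinstance(obj, Sequence) and not isinstance(obj, str):
        return [convert(v, to_leaf) for v in obj]
    return to_leaf(obj)

print(convert({'a': [1, 2], 'b': {'c': 3}}, to_leaf=float))
# {'a': [1.0, 2.0], 'b': {'c': 3.0}}
```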
--------------------------------------------------------------------------------
/models/__init__.py:
--------------------------------------------------------------------------------
1 | from .models import ModelBuilder, SegmentationModule
2 |
--------------------------------------------------------------------------------
/models/models.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torchvision
4 | from . import resnet
5 | from lib.nn import SynchronizedBatchNorm2d
6 |
7 |
8 | class SegmentationModuleBase(nn.Module):
9 | def __init__(self):
10 | super(SegmentationModuleBase, self).__init__()
11 |
12 | def pixel_acc(self, pred, label):
13 | _, preds = torch.max(pred, dim=1)
14 | valid = (label >= 0).long()
15 | acc_sum = torch.sum(valid * (preds == label).long())
16 | pixel_sum = torch.sum(valid)
17 | acc = acc_sum.float() / (pixel_sum.float() + 1e-10)
18 | return acc
19 |
20 |
21 | class SegmentationModule(SegmentationModuleBase):
22 | def __init__(self, net_enc, net_dec, crit, deep_sup_scale=None):
23 | super(SegmentationModule, self).__init__()
24 | self.encoder = net_enc
25 | self.decoder = net_dec
26 | self.crit = crit
27 | self.deep_sup_scale = deep_sup_scale
28 |
29 | def forward(self, feed_dict, *, segSize=None):
30 | # training
31 | if segSize is None:
32 | if self.deep_sup_scale is not None: # use deep supervision technique
33 | (pred, pred_deepsup) = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True))
34 | else:
35 | pred = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True))
36 |
37 | loss = self.crit(pred, feed_dict['seg_label'])
38 | if self.deep_sup_scale is not None:
39 | loss_deepsup = self.crit(pred_deepsup, feed_dict['seg_label'])
40 | loss = loss + loss_deepsup * self.deep_sup_scale
41 |
42 | acc = self.pixel_acc(pred, feed_dict['seg_label'])
43 | return loss, acc
44 | # inference
45 | else:
46 | pred = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True), segSize=segSize)
47 | return pred
48 |
49 |
50 | def conv3x3(in_planes, out_planes, stride=1, has_bias=False):
51 | """3x3 convolution with padding"""
52 | return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
53 | padding=1, bias=has_bias)
54 |
55 |
56 | def conv3x3_bn_relu(in_planes, out_planes, stride=1):
57 | return nn.Sequential(
58 | conv3x3(in_planes, out_planes, stride),
59 | SynchronizedBatchNorm2d(out_planes),
60 | nn.ReLU(inplace=True))
61 |
62 |
63 | class ModelBuilder:
64 | # custom weights initialization
65 | def weights_init(self, m):
66 | classname = m.__class__.__name__
67 | if classname.find('Conv') != -1:
68 | nn.init.kaiming_normal_(m.weight.data)
69 | elif classname.find('BatchNorm') != -1:
70 | m.weight.data.fill_(1.)
71 | m.bias.data.fill_(1e-4)
72 |
73 | def build_encoder(self, arch='resnet101dilated', fc_dim=512, weights=''):
74 | pretrained = len(weights) == 0
75 | arch = arch.lower()
76 | if arch == 'resnet101dilated':
77 | orig_resnet = resnet.__dict__['resnet101'](pretrained=pretrained)
78 | net_encoder = ResnetDilated(orig_resnet, dilate_scale=8)
79 | else:
80 | raise ValueError('Unknown encoder architecture: {}'.format(arch))
81 |
82 | # net_encoder.apply(self.weights_init)
83 | if len(weights) > 0:
84 | print('Loading weights for net_encoder')
85 | net_encoder.load_state_dict(
86 | torch.load(weights, map_location=lambda storage, loc: storage), strict=False)
87 | return net_encoder
88 |
89 | def build_decoder(self, arch='ppm_deepsup',
90 | fc_dim=512, num_class=150,
91 | weights='', use_softmax=False):
92 | arch = arch.lower()
93 | if arch == 'ppm_deepsup':
94 | net_decoder = PPMDeepsup(
95 | num_class=num_class,
96 | fc_dim=fc_dim,
97 | use_softmax=use_softmax)
98 | else:
99 | raise ValueError('Unknown decoder architecture: {}'.format(arch))
100 |
101 | net_decoder.apply(self.weights_init)
102 | if len(weights) > 0:
103 | print('Loading weights for net_decoder')
104 | net_decoder.load_state_dict(
105 | torch.load(weights, map_location=lambda storage, loc: storage), strict=False)
106 | return net_decoder
107 |
108 |
109 | class ResnetDilated(nn.Module):
110 | def __init__(self, orig_resnet, dilate_scale=8):
111 | super(ResnetDilated, self).__init__()
112 | from functools import partial
113 |
114 | if dilate_scale == 8:
115 | orig_resnet.layer3.apply(
116 | partial(self._nostride_dilate, dilate=2))
117 | orig_resnet.layer4.apply(
118 | partial(self._nostride_dilate, dilate=4))
119 | elif dilate_scale == 16:
120 | orig_resnet.layer4.apply(
121 | partial(self._nostride_dilate, dilate=2))
122 |
123 | # take pretrained resnet, except AvgPool and FC
124 | self.conv1 = orig_resnet.conv1
125 | self.bn1 = orig_resnet.bn1
126 | self.relu1 = orig_resnet.relu1
127 | self.conv2 = orig_resnet.conv2
128 | self.bn2 = orig_resnet.bn2
129 | self.relu2 = orig_resnet.relu2
130 | self.conv3 = orig_resnet.conv3
131 | self.bn3 = orig_resnet.bn3
132 | self.relu3 = orig_resnet.relu3
133 | self.maxpool = orig_resnet.maxpool
134 | self.layer1 = orig_resnet.layer1
135 | self.layer2 = orig_resnet.layer2
136 | self.layer3 = orig_resnet.layer3
137 | self.layer4 = orig_resnet.layer4
138 |
139 | def _nostride_dilate(self, m, dilate):
140 | classname = m.__class__.__name__
141 | if classname.find('Conv') != -1:
142 | # the convolution with stride
143 | if m.stride == (2, 2):
144 | m.stride = (1, 1)
145 | if m.kernel_size == (3, 3):
146 | m.dilation = (dilate // 2, dilate // 2)
147 | m.padding = (dilate // 2, dilate // 2)
148 | # other convolutions
149 | else:
150 | if m.kernel_size == (3, 3):
151 | m.dilation = (dilate, dilate)
152 | m.padding = (dilate, dilate)
153 |
154 | def forward(self, x, return_feature_maps=False):
155 | conv_out = []
156 |
157 | x = self.relu1(self.bn1(self.conv1(x)))
158 | x = self.relu2(self.bn2(self.conv2(x)))
159 | x = self.relu3(self.bn3(self.conv3(x)))
160 | x = self.maxpool(x)
161 |
162 | x = self.layer1(x)
163 | conv_out.append(x)
164 | x = self.layer2(x)
165 | conv_out.append(x)
166 | x = self.layer3(x)
167 | conv_out.append(x)
168 | x = self.layer4(x)
169 | conv_out.append(x)
170 |
171 | if return_feature_maps:
172 | return conv_out
173 | return [x]
174 |
175 |
176 | # pyramid pooling, deep supervision
177 | class PPMDeepsup(nn.Module):
178 | def __init__(self, num_class=150, fc_dim=4096,
179 | use_softmax=False, pool_scales=(1, 2, 3, 6)):
180 | super(PPMDeepsup, self).__init__()
181 | self.use_softmax = use_softmax
182 |
183 | self.ppm = []
184 | for scale in pool_scales:
185 | self.ppm.append(nn.Sequential(
186 | nn.AdaptiveAvgPool2d(scale),
187 | nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False),
188 | SynchronizedBatchNorm2d(512),
189 | nn.ReLU(inplace=True)
190 | ))
191 | self.ppm = nn.ModuleList(self.ppm)
192 | self.cbr_deepsup = conv3x3_bn_relu(fc_dim // 2, fc_dim // 4, 1)
193 |
194 | self.conv_last = nn.Sequential(
195 | nn.Conv2d(fc_dim + len(pool_scales) * 512, 512,
196 | kernel_size=3, padding=1, bias=False),
197 | SynchronizedBatchNorm2d(512),
198 | nn.ReLU(inplace=True),
199 | nn.Dropout2d(0.1),
200 | nn.Conv2d(512, num_class, kernel_size=1)
201 | )
202 | self.conv_last_deepsup = nn.Conv2d(fc_dim // 4, num_class, 1, 1, 0)
203 | self.dropout_deepsup = nn.Dropout2d(0.1)
204 |
205 | def forward(self, conv_out, segSize=None):
206 | conv5 = conv_out[-1]
207 |
208 | input_size = conv5.size()
209 | ppm_out = [conv5]
210 | for pool_scale in self.ppm:
211 | ppm_out.append(nn.functional.interpolate(
212 | pool_scale(conv5),
213 | (input_size[2], input_size[3]),
214 | mode='bilinear', align_corners=False))
215 | ppm_out = torch.cat(ppm_out, 1)
216 |
217 | x = self.conv_last(ppm_out)
218 |
219 | if self.use_softmax: # is True during inference
220 | x = nn.functional.interpolate(
221 | x, size=segSize, mode='bilinear', align_corners=False)
222 | x = nn.functional.softmax(x, dim=1)
223 | return x
224 |
225 | # deep sup
226 | conv4 = conv_out[-2]
227 | ds = self.cbr_deepsup(conv4)
228 | ds = self.dropout_deepsup(ds)
229 | ds = self.conv_last_deepsup(ds)
230 |
231 | x = nn.functional.log_softmax(x, dim=1)
232 | ds = nn.functional.log_softmax(ds, dim=1)
233 |
234 | return x, ds
235 |
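The `pixel_acc` metric at the top of this file computes accuracy over valid pixels only (`label >= 0`; label -1 marks ignored pixels, matching the `NLLLoss(ignore_index=-1)` criterion used in test.py). A torch-free scalar sketch of the same computation (`pixel_acc_py` is a hypothetical name for illustration):

```python
# Scalar mirror of SegmentationModuleBase.pixel_acc: count agreements
# on valid pixels, divide by the valid-pixel count.
def pixel_acc_py(preds, labels):
    valid = [label >= 0 for label in labels]
    correct = sum(1 for p, l, v in zip(preds, labels, valid) if v and p == l)
    total = sum(valid)
    return correct / (total + 1e-10)  # epsilon guards against zero valid pixels

acc = pixel_acc_py([1, 2, 0, 3], [1, 2, 1, -1])  # last pixel is ignored
print(round(acc, 4))  # 2 correct out of 3 valid pixels -> 0.6667
```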
--------------------------------------------------------------------------------
/models/resnet.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import torch
4 | import torch.nn as nn
5 | import math
6 | from lib.nn import SynchronizedBatchNorm2d
7 |
8 | try:
9 | from urllib import urlretrieve
10 | except ImportError:
11 | from urllib.request import urlretrieve
12 |
13 | __all__ = ['resnet101']
14 |
15 | model_urls = {
16 | 'resnet101': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnet101-imagenet.pth'
17 | }
18 |
19 |
20 | def conv3x3(in_planes, out_planes, stride=1):
21 | """3x3 convolution with padding"""
22 | return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
23 | padding=1, bias=False)
24 |
25 |
26 | class BasicBlock(nn.Module):
27 | expansion = 1
28 |
29 | def __init__(self, inplanes, planes, stride=1, downsample=None):
30 | super(BasicBlock, self).__init__()
31 | self.conv1 = conv3x3(inplanes, planes, stride)
32 | self.bn1 = SynchronizedBatchNorm2d(planes)
33 | self.relu = nn.ReLU(inplace=True)
34 | self.conv2 = conv3x3(planes, planes)
35 | self.bn2 = SynchronizedBatchNorm2d(planes)
36 | self.downsample = downsample
37 | self.stride = stride
38 |
39 | def forward(self, x):
40 | residual = x
41 |
42 | out = self.conv1(x)
43 | out = self.bn1(out)
44 | out = self.relu(out)
45 |
46 | out = self.conv2(out)
47 | out = self.bn2(out)
48 |
49 | if self.downsample is not None:
50 | residual = self.downsample(x)
51 |
52 | out += residual
53 | out = self.relu(out)
54 |
55 | return out
56 |
57 |
58 | class Bottleneck(nn.Module):
59 | expansion = 4
60 |
61 | def __init__(self, inplanes, planes, stride=1, downsample=None):
62 | super(Bottleneck, self).__init__()
63 | self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
64 | self.bn1 = SynchronizedBatchNorm2d(planes)
65 | self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
66 | padding=1, bias=False)
67 | self.bn2 = SynchronizedBatchNorm2d(planes)
68 | self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
69 | self.bn3 = SynchronizedBatchNorm2d(planes * 4)
70 | self.relu = nn.ReLU(inplace=True)
71 | self.downsample = downsample
72 | self.stride = stride
73 |
74 | def forward(self, x):
75 | residual = x
76 |
77 | out = self.conv1(x)
78 | out = self.bn1(out)
79 | out = self.relu(out)
80 |
81 | out = self.conv2(out)
82 | out = self.bn2(out)
83 | out = self.relu(out)
84 |
85 | out = self.conv3(out)
86 | out = self.bn3(out)
87 |
88 | if self.downsample is not None:
89 | residual = self.downsample(x)
90 |
91 | out += residual
92 | out = self.relu(out)
93 |
94 | return out
95 |
96 |
97 | class ResNet(nn.Module):
98 |
99 | def __init__(self, block, layers, num_classes=1000):
100 | self.inplanes = 128
101 | super(ResNet, self).__init__()
102 | self.conv1 = conv3x3(3, 64, stride=2)
103 | self.bn1 = SynchronizedBatchNorm2d(64)
104 | self.relu1 = nn.ReLU(inplace=True)
105 | self.conv2 = conv3x3(64, 64)
106 | self.bn2 = SynchronizedBatchNorm2d(64)
107 | self.relu2 = nn.ReLU(inplace=True)
108 | self.conv3 = conv3x3(64, 128)
109 | self.bn3 = SynchronizedBatchNorm2d(128)
110 | self.relu3 = nn.ReLU(inplace=True)
111 | self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
112 |
113 | self.layer1 = self._make_layer(block, 64, layers[0])
114 | self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
115 | self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
116 | self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
117 | self.avgpool = nn.AvgPool2d(7, stride=1)
118 | self.fc = nn.Linear(512 * block.expansion, num_classes)
119 |
120 | for m in self.modules():
121 | if isinstance(m, nn.Conv2d):
122 | n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
123 | m.weight.data.normal_(0, math.sqrt(2. / n))
124 | elif isinstance(m, SynchronizedBatchNorm2d):
125 | m.weight.data.fill_(1)
126 | m.bias.data.zero_()
127 |
128 | def _make_layer(self, block, planes, blocks, stride=1):
129 | downsample = None
130 | if stride != 1 or self.inplanes != planes * block.expansion:
131 | downsample = nn.Sequential(
132 | nn.Conv2d(self.inplanes, planes * block.expansion,
133 | kernel_size=1, stride=stride, bias=False),
134 | SynchronizedBatchNorm2d(planes * block.expansion),
135 | )
136 |
137 | layers = []
138 | layers.append(block(self.inplanes, planes, stride, downsample))
139 | self.inplanes = planes * block.expansion
140 | for _ in range(1, blocks):
141 | layers.append(block(self.inplanes, planes))
142 |
143 | return nn.Sequential(*layers)
144 |
145 | def forward(self, x):
146 | x = self.relu1(self.bn1(self.conv1(x)))
147 | x = self.relu2(self.bn2(self.conv2(x)))
148 | x = self.relu3(self.bn3(self.conv3(x)))
149 | x = self.maxpool(x)
150 |
151 | x = self.layer1(x)
152 | x = self.layer2(x)
153 | x = self.layer3(x)
154 | x = self.layer4(x)
155 |
156 | x = self.avgpool(x)
157 | x = x.view(x.size(0), -1)
158 | x = self.fc(x)
159 |
160 | return x
161 |
162 |
163 | def resnet101(pretrained=False, **kwargs):
164 | """Constructs a ResNet-101 model.
165 |
166 | Args:
167 | pretrained (bool): If True, returns a model pre-trained on ImageNet
168 | """
169 | model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
170 | if pretrained:
171 | model.load_state_dict(load_url(model_urls['resnet101']), strict=False)
172 | return model
173 |
174 |
175 | def load_url(url, model_dir='./pretrained', map_location=None):
176 | if not os.path.exists(model_dir):
177 | os.makedirs(model_dir)
178 | filename = url.split('/')[-1]
179 | cached_file = os.path.join(model_dir, filename)
180 | if not os.path.exists(cached_file):
181 | sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file))
182 | urlretrieve(url, cached_file)
183 | return torch.load(cached_file, map_location=map_location)
184 |
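Quick sanity arithmetic for the `resnet101` configuration above: `layers=[3, 4, 23, 3]` means 33 `Bottleneck` blocks of 3 convolutions each, i.e. 99 convolutions in layer1..layer4, and this variant additionally uses a deep stem of three 3x3 convolutions in place of the usual single 7x7. `count_body_convs` is a hypothetical helper for illustration:

```python
# Count the convolutions contributed by the residual body: each
# Bottleneck has conv1/conv2/conv3, so it is sum(layers) * 3.
def count_body_convs(layers, convs_per_block=3):
    return sum(layers) * convs_per_block

print(sum([3, 4, 23, 3]))               # 33 bottleneck blocks
print(count_body_convs([3, 4, 23, 3]))  # 99 convolutions in layer1..layer4
```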
--------------------------------------------------------------------------------
/test.py:
--------------------------------------------------------------------------------
1 | # System libs
2 | import os
3 | import argparse
4 | from distutils.version import LooseVersion
5 | # Numerical libs
6 | import numpy as np
7 | import torch
8 | import torch.nn as nn
9 | from scipy.io import loadmat
10 | # Our libs
11 | from dataset import TestDataset
12 | from models import ModelBuilder, SegmentationModule
13 | from utils import colorEncode
14 | from lib.nn import user_scattered_collate, async_copy_to
15 | from lib.utils import as_numpy
16 | import lib.utils.data as torchdata
17 | import cv2
18 | from tqdm import tqdm
19 |
20 | colors = loadmat('data/color150.mat')['colors']
21 |
22 |
23 | def visualize_result(data, pred, args):
24 | (img, info) = data
25 |
26 | # prediction
27 | pred_color = colorEncode(pred, colors)
28 |
29 | # aggregate images and save
30 | im_vis = np.concatenate((img, pred_color),
31 | axis=1).astype(np.uint8)
32 |
33 | img_name = info.split('/')[-1]
34 | cv2.imwrite(os.path.join(args.result,
35 | img_name.replace('.jpg', '.png')), im_vis)
36 |
37 |
38 | def test(segmentation_module, loader, args):
39 | segmentation_module.eval()
40 |
41 | pbar = tqdm(total=len(loader))
42 | for batch_data in loader:
43 | # process data
44 | batch_data = batch_data[0]
45 | segSize = (batch_data['img_ori'].shape[0],
46 | batch_data['img_ori'].shape[1])
47 | img_resized_list = batch_data['img_data']
48 |
49 | with torch.no_grad():
50 | scores = torch.zeros(1, args.num_class, segSize[0], segSize[1])
51 | scores = async_copy_to(scores, args.gpu)
52 |
53 | for img in img_resized_list:
54 | feed_dict = batch_data.copy()
55 | feed_dict['img_data'] = img
56 | del feed_dict['img_ori']
57 | del feed_dict['info']
58 | feed_dict = async_copy_to(feed_dict, args.gpu)
59 |
60 | # forward pass
61 | pred_tmp = segmentation_module(feed_dict, segSize=segSize)
62 | scores = scores + pred_tmp / len(args.imgSize)
63 |
64 | _, pred = torch.max(scores, dim=1)
65 | pred = as_numpy(pred.squeeze(0).cpu())
66 |
67 | # visualization
68 | visualize_result(
69 | (batch_data['img_ori'], batch_data['info']),
70 | pred, args)
71 |
72 | pbar.update(1)
73 |
74 |
75 | def main(args):
76 | torch.cuda.set_device(args.gpu)
77 |
78 | # Network Builders
79 | builder = ModelBuilder()
80 | net_encoder = builder.build_encoder(
81 | arch=args.arch_encoder,
82 | fc_dim=args.fc_dim,
83 | weights=args.weights_encoder)
84 | net_decoder = builder.build_decoder(
85 | arch=args.arch_decoder,
86 | fc_dim=args.fc_dim,
87 | num_class=args.num_class,
88 | weights=args.weights_decoder,
89 | use_softmax=True)
90 |
91 | crit = nn.NLLLoss(ignore_index=-1)
92 |
93 | segmentation_module = SegmentationModule(net_encoder, net_decoder, crit)
94 |
95 | # Dataset and Loader
96 | # list_test = [{'fpath_img': args.test_img}]
97 | list_test = [{'fpath_img': x} for x in args.test_imgs]
98 | dataset_test = TestDataset(
99 | list_test, args, max_sample=args.num_val)
100 | loader_test = torchdata.DataLoader(
101 | dataset_test,
102 | batch_size=args.batch_size,
103 | shuffle=False,
104 | collate_fn=user_scattered_collate,
105 | num_workers=5,
106 | drop_last=True)
107 |
108 | segmentation_module.cuda()
109 |
110 | # Main loop
111 | test(segmentation_module, loader_test, args)
112 |
113 | print('Inference done!')
114 |
115 |
116 | if __name__ == '__main__':
117 | assert LooseVersion(torch.__version__) >= LooseVersion('0.4.0'), \
118 | 'PyTorch>=0.4.0 is required'
119 |
120 | parser = argparse.ArgumentParser()
121 | # Path related arguments
122 | parser.add_argument('--test_imgs', required=True, nargs='+', type=str,
123 | help='a list of image paths that need to be tested')
124 | parser.add_argument('--model_path', required=True,
125 |                         help='folder containing the trained model weights')
126 | parser.add_argument('--suffix', default='_epoch_20.pth',
127 | help="which snapshot to load")
128 |
129 | # Model related arguments
130 | parser.add_argument('--arch_encoder', default='resnet50dilated',
131 | help="architecture of net_encoder")
132 | parser.add_argument('--arch_decoder', default='ppm_deepsup',
133 | help="architecture of net_decoder")
134 | parser.add_argument('--fc_dim', default=2048, type=int,
135 | help='number of features between encoder and decoder')
136 |
137 | # Data related arguments
138 | parser.add_argument('--num_val', default=-1, type=int,
139 |                         help='number of images to evaluate')
140 | parser.add_argument('--num_class', default=150, type=int,
141 | help='number of classes')
142 | parser.add_argument('--batch_size', default=1, type=int,
143 |                         help='batch size; currently only 1 is supported')
144 | parser.add_argument('--imgSize', default=[300, 400, 500, 600],
145 | nargs='+', type=int,
146 |                         help='list of input image sizes '
147 |                              'for multiscale testing, e.g. 300 400 500')
148 | parser.add_argument('--imgMaxSize', default=1000, type=int,
149 | help='maximum input image size of long edge')
150 | parser.add_argument('--padding_constant', default=8, type=int,
151 |                         help='maximum downsampling rate of the network')
152 | parser.add_argument('--segm_downsampling_rate', default=8, type=int,
153 | help='downsampling rate of the segmentation label')
154 |
155 | # Misc arguments
156 | parser.add_argument('--result', default='.',
157 | help='folder to output visualization results')
158 | parser.add_argument('--gpu', default=0, type=int,
159 | help='gpu id for evaluation')
160 |
161 | args = parser.parse_args()
162 | args.arch_encoder = args.arch_encoder.lower()
163 | args.arch_decoder = args.arch_decoder.lower()
164 | print("Input arguments:")
165 | for key, val in vars(args).items():
166 | print("{:16} {}".format(key, val))
167 |
168 | # absolute paths of model weights
169 | args.weights_encoder = os.path.join(args.model_path,
170 | 'encoder' + args.suffix)
171 | args.weights_decoder = os.path.join(args.model_path,
172 | 'decoder' + args.suffix)
173 |
174 |     assert os.path.exists(args.weights_encoder) and \
175 |         os.path.exists(args.weights_decoder), 'checkpoint does not exist!'
176 |
177 | if not os.path.isdir(args.result):
178 | os.makedirs(args.result)
179 |
180 | main(args)
181 |
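The loop above accumulates `pred_tmp / len(args.imgSize)` into `scores` for each tested scale, i.e. it averages the class-probability maps across input sizes before taking the per-pixel argmax. A minimal NumPy sketch of that averaging (names here are illustrative, not from the repo):

```python
import numpy as np

def multiscale_average(per_scale_scores):
    """Average class-probability maps predicted at several input scales.

    per_scale_scores: list of (C, H, W) arrays, one per tested scale,
    all resized back to the same output resolution.
    """
    acc = np.zeros_like(per_scale_scores[0], dtype=np.float64)
    for s in per_scale_scores:
        acc += s / len(per_scale_scores)   # same running average as the loop above
    return acc.argmax(axis=0)              # per-pixel class, like torch.max(scores, 1)

# two toy "scales" over 3 classes on a 2x2 image
a = np.array([[[0.8, 0.1], [0.1, 0.1]],
              [[0.1, 0.8], [0.1, 0.1]],
              [[0.1, 0.1], [0.8, 0.8]]])
b = a.copy()
pred = multiscale_average([a, b])
```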
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import re
3 | import functools
4 |
5 |
6 | class AverageMeter(object):
7 | """Computes and stores the average and current value"""
8 | def __init__(self):
9 | self.initialized = False
10 | self.val = None
11 | self.avg = None
12 | self.sum = None
13 | self.count = None
14 |
15 | def initialize(self, val, weight):
16 | self.val = val
17 | self.avg = val
18 | self.sum = val * weight
19 | self.count = weight
20 | self.initialized = True
21 |
22 | def update(self, val, weight=1):
23 | if not self.initialized:
24 | self.initialize(val, weight)
25 | else:
26 | self.add(val, weight)
27 |
28 | def add(self, val, weight):
29 | self.val = val
30 | self.sum += val * weight
31 | self.count += weight
32 | self.avg = self.sum / self.count
33 |
34 | def value(self):
35 | return self.val
36 |
37 | def average(self):
38 | return self.avg
39 |
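`AverageMeter` maintains a weighted running average: each `update(val, weight)` adds `val * weight` to the sum and `weight` to the count. A self-contained sketch of that contract (a simplified stand-in, not the class itself):

```python
# Minimal replica of AverageMeter's weighted running average,
# matching the update/average semantics of the class above.
class Meter:
    def __init__(self):
        self.sum = 0.0
        self.count = 0.0

    def update(self, val, weight=1):
        self.sum += val * weight
        self.count += weight

    def average(self):
        return self.sum / self.count

m = Meter()
m.update(1.0)            # one sample of value 1.0
m.update(3.0, weight=3)  # three samples of value 3.0
avg = m.average()        # (1*1 + 3*3) / 4 = 2.5
```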
40 |
41 | def unique(ar, return_index=False, return_inverse=False, return_counts=False):
42 | ar = np.asanyarray(ar).flatten()
43 |
44 | optional_indices = return_index or return_inverse
45 | optional_returns = optional_indices or return_counts
46 |
47 | if ar.size == 0:
48 | if not optional_returns:
49 | ret = ar
50 | else:
51 | ret = (ar,)
52 | if return_index:
53 |                 ret += (np.empty(0, np.intp),)  # index arrays are integer-typed
54 |             if return_inverse:
55 |                 ret += (np.empty(0, np.intp),)
56 | if return_counts:
57 | ret += (np.empty(0, np.intp),)
58 | return ret
59 | if optional_indices:
60 | perm = ar.argsort(kind='mergesort' if return_index else 'quicksort')
61 | aux = ar[perm]
62 | else:
63 | ar.sort()
64 | aux = ar
65 | flag = np.concatenate(([True], aux[1:] != aux[:-1]))
66 |
67 | if not optional_returns:
68 | ret = aux[flag]
69 | else:
70 | ret = (aux[flag],)
71 | if return_index:
72 | ret += (perm[flag],)
73 | if return_inverse:
74 | iflag = np.cumsum(flag) - 1
75 | inv_idx = np.empty(ar.shape, dtype=np.intp)
76 | inv_idx[perm] = iflag
77 | ret += (inv_idx,)
78 | if return_counts:
79 | idx = np.concatenate(np.nonzero(flag) + ([ar.size],))
80 | ret += (np.diff(idx),)
81 | return ret
82 |
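The `unique` function above mirrors NumPy's own `np.unique`: sort the flattened array, then flag the positions where consecutive sorted values differ; the flags yield the unique values, and the gaps between flagged positions yield the counts. A compact standalone illustration of that core trick (not the full function):

```python
import numpy as np

# Sort, then mark each position where a new value starts; the distances
# between marked positions are the per-value counts.
def unique_counts(ar):
    ar = np.sort(np.asanyarray(ar).ravel())
    flag = np.concatenate(([True], ar[1:] != ar[:-1]))   # True at each new value
    idx = np.concatenate(np.nonzero(flag) + ([ar.size],))
    return ar[flag], np.diff(idx)                        # values, counts

vals, counts = unique_counts([3, 1, 2, 3, 3, 1])
```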
83 |
84 | def colorEncode(labelmap, colors, mode='BGR'):
85 | labelmap = labelmap.astype('int')
86 | labelmap_rgb = np.zeros((labelmap.shape[0], labelmap.shape[1], 3),
87 | dtype=np.uint8)
88 | for label in unique(labelmap):
89 | if label < 0:
90 | continue
91 | labelmap_rgb += (labelmap == label)[:, :, np.newaxis] * \
92 | np.tile(colors[label],
93 | (labelmap.shape[0], labelmap.shape[1], 1))
94 |
95 | if mode == 'BGR':
96 | return labelmap_rgb[:, :, ::-1]
97 | else:
98 | return labelmap_rgb
99 |
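`colorEncode` paints each label id into an RGB image by broadcasting a boolean mask against that label's palette color and accumulating. A toy sketch of the same masking trick, with a hypothetical two-color palette (the real palette lives in `data/color150.mat`):

```python
import numpy as np

# For each label id, broadcast a boolean mask against that label's
# palette color and accumulate, as colorEncode above does.
def color_encode(labelmap, colors):
    h, w = labelmap.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for label in np.unique(labelmap):
        if label < 0:
            continue  # skip unlabeled pixels
        out += (labelmap == label)[:, :, np.newaxis] * \
            np.tile(colors[label], (h, w, 1)).astype(np.uint8)
    return out

palette = np.array([[255, 0, 0], [0, 255, 0]], dtype=np.uint8)  # toy 2-class palette
rgb = color_encode(np.array([[0, 1]]), palette)
```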
100 |
101 | def accuracy(preds, label):
102 | valid = (label >= 0)
103 | acc_sum = (valid * (preds == label)).sum()
104 | valid_sum = valid.sum()
105 | acc = float(acc_sum) / (valid_sum + 1e-10)
106 | return acc, valid_sum
107 |
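`accuracy` counts correct pixels only where the label is valid (`label >= 0`), so pixels marked `-1` (ignored) affect neither the numerator nor the denominator. A quick self-contained check of that behavior:

```python
import numpy as np

# Pixel accuracy over valid pixels only, mirroring accuracy() above;
# -1 marks ignored pixels.
def pixel_accuracy(preds, label):
    valid = label >= 0
    acc_sum = (valid & (preds == label)).sum()
    return float(acc_sum) / (valid.sum() + 1e-10)

preds = np.array([0, 1, 2, 2])
label = np.array([0, 1, 1, -1])     # last pixel ignored
acc = pixel_accuracy(preds, label)  # 2 correct out of 3 valid
```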
108 |
109 | def intersectionAndUnion(imPred, imLab, numClass):
110 | imPred = np.asarray(imPred).copy()
111 | imLab = np.asarray(imLab).copy()
112 |
113 | imPred += 1
114 | imLab += 1
115 | # Remove classes from unlabeled pixels in gt image.
116 | # We should not penalize detections in unlabeled portions of the image.
117 | imPred = imPred * (imLab > 0)
118 |
119 | # Compute area intersection:
120 | intersection = imPred * (imPred == imLab)
121 | (area_intersection, _) = np.histogram(
122 | intersection, bins=numClass, range=(1, numClass))
123 |
124 | # Compute area union:
125 | (area_pred, _) = np.histogram(imPred, bins=numClass, range=(1, numClass))
126 | (area_lab, _) = np.histogram(imLab, bins=numClass, range=(1, numClass))
127 | area_union = area_pred + area_lab - area_intersection
128 |
129 | return (area_intersection, area_union)
130 |
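`intersectionAndUnion` uses a histogram trick: shifting all labels by 1 maps the ignore label `-1` to 0, which falls outside the histogram range `(1, numClass)`, so unlabeled pixels drop out of every count. A sketch extending it one step to per-class IoU (the division step is an illustrative addition, not in the function above):

```python
import numpy as np

# Per-class IoU via the same histogram trick as intersectionAndUnion above:
# shift labels by 1 so ignored pixels (originally -1) map to 0, outside range.
def iou_per_class(pred, lab, num_class):
    pred, lab = pred + 1, lab + 1
    pred = pred * (lab > 0)               # zero-out predictions on unlabeled pixels
    inter = pred * (pred == lab)          # matching pixels keep their class id
    area_i, _ = np.histogram(inter, bins=num_class, range=(1, num_class))
    area_p, _ = np.histogram(pred, bins=num_class, range=(1, num_class))
    area_l, _ = np.histogram(lab, bins=num_class, range=(1, num_class))
    union = area_p + area_l - area_i
    return area_i / np.maximum(union, 1)  # avoid divide-by-zero for absent classes

pred = np.array([0, 0, 1, 1])
lab = np.array([0, 1, 1, -1])
iou = iou_per_class(pred, lab, num_class=2)
```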
131 |
132 | class NotSupportedCliException(Exception):
133 | pass
134 |
135 |
136 | def process_range(xpu, inp):
137 | start, end = map(int, inp)
138 | if start > end:
139 | end, start = start, end
140 | return map(lambda x: '{}{}'.format(xpu, x), range(start, end+1))
141 |
142 |
143 | REGEX = [
144 | (re.compile(r'^gpu(\d+)$'), lambda x: ['gpu%s' % x[0]]),
145 | (re.compile(r'^(\d+)$'), lambda x: ['gpu%s' % x[0]]),
146 | (re.compile(r'^gpu(\d+)-(?:gpu)?(\d+)$'),
147 | functools.partial(process_range, 'gpu')),
148 | (re.compile(r'^(\d+)-(\d+)$'),
149 | functools.partial(process_range, 'gpu')),
150 | ]
151 |
152 |
153 | def parse_devices(input_devices):
154 |     """Parse user's devices input str to standard format.
155 | 
156 |     e.g. [gpu0, gpu1, ...]
157 | 
158 |     """
159 | ret = []
160 | for d in input_devices.split(','):
161 | for regex, func in REGEX:
162 | m = regex.match(d.lower().strip())
163 | if m:
164 | tmp = func(m.groups())
165 | # prevent duplicate
166 | for x in tmp:
167 | if x not in ret:
168 | ret.append(x)
169 | break
170 | else:
171 | raise NotSupportedCliException(
172 | 'Can not recognize device: "{}"'.format(d))
173 | return ret
174 |
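`parse_devices` expands comma-separated items of the forms `gpuN`, `N`, `gpuN-gpuM`, and `N-M` into a de-duplicated list of gpu names. A self-contained sketch of the same grammar with a single combined regex (a simplification; the original dispatches over the `REGEX` table above):

```python
import re

# Expand a device spec like "gpu0,2-3,1" into ['gpu0', 'gpu2', 'gpu3', 'gpu1'],
# following the same item forms parse_devices accepts.
def expand(spec):
    out = []
    for item in spec.split(','):
        m = re.match(r'^(?:gpu)?(\d+)(?:-(?:gpu)?(\d+))?$', item.strip().lower())
        if not m:
            raise ValueError('Can not recognize device: "%s"' % item)
        start = int(m.group(1))
        end = int(m.group(2)) if m.group(2) else start
        start, end = min(start, end), max(start, end)  # allow reversed ranges
        for i in range(start, end + 1):
            name = 'gpu%d' % i
            if name not in out:   # de-duplicate, like the original
                out.append(name)
    return out

devs = expand('gpu0,2-3,1')
```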
--------------------------------------------------------------------------------
/wiki/img/artifacts.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/artifacts.jpg
--------------------------------------------------------------------------------
/wiki/img/bottleneck.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/bottleneck.jpg
--------------------------------------------------------------------------------
/wiki/img/compare-res.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/compare-res.jpg
--------------------------------------------------------------------------------
/wiki/img/dilation.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/dilation.jpg
--------------------------------------------------------------------------------
/wiki/img/drna-drnb-drnc.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/drna-drnb-drnc.jpg
--------------------------------------------------------------------------------
/wiki/img/error-rate-dilation.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/error-rate-dilation.jpg
--------------------------------------------------------------------------------
/wiki/img/example-psp.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/example-psp.jpg
--------------------------------------------------------------------------------
/wiki/img/loss-supervision.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/loss-supervision.jpg
--------------------------------------------------------------------------------
/wiki/img/overall-loss-supervision.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/overall-loss-supervision.jpg
--------------------------------------------------------------------------------
/wiki/img/psp-on-imgenet.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/psp-on-imgenet.jpg
--------------------------------------------------------------------------------
/wiki/img/psp-semantic-cityscapes.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/psp-semantic-cityscapes.jpg
--------------------------------------------------------------------------------
/wiki/img/pspnet-comparison.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/pspnet-comparison.jpg
--------------------------------------------------------------------------------
/wiki/img/pspnet-structure.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/pspnet-structure.jpg
--------------------------------------------------------------------------------
/wiki/img/regularization-supervision.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/regularization-supervision.jpg
--------------------------------------------------------------------------------
/wiki/img/resblock.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/resblock.jpg
--------------------------------------------------------------------------------
/wiki/img/resplainacc.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/resplainacc.jpg
--------------------------------------------------------------------------------
/wiki/img/scene-parsing-issues-pyramid.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/scene-parsing-issues-pyramid.jpg
--------------------------------------------------------------------------------
/wiki/img/semantic-acc-dilated.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/semantic-acc-dilated.jpg
--------------------------------------------------------------------------------
/wiki/img/semantic-sample.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/semantic-sample.jpg
--------------------------------------------------------------------------------
/wiki/img/supervision.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/supervision.jpg
--------------------------------------------------------------------------------
/wiki/img/top-accuracies-on-places-supervision.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Nikronic/ObjectNet/8eceec7a86b6bc9a8ef719bd0c404d03429161e3/wiki/img/top-accuracies-on-places-supervision.jpg
--------------------------------------------------------------------------------