├── .gitignore
├── README.md
├── data
│   └── test-demo
│       ├── T49QGF_20191017_6Bands_Urban_Subs.tif
│       ├── T49QGF_20191017_6Bands_Urban_Subs_water.tif
│       └── val_sam.csv
├── dataloader
│   ├── img_aug.py
│   ├── loader.ipynb
│   ├── path_io.py
│   ├── tfrecord_io.py
│   └── tfrecord_writter.ipynb
├── environmental.yml
├── figures
│   ├── cloudy
│   │   ├── cloudy-awei.png
│   │   ├── cloudy-obia.png
│   │   ├── cloudy-scene.png
│   │   └── cloudy-watnet.png
│   ├── dataset.png
│   ├── label_sam_1.png
│   ├── label_sam_2.png
│   ├── mountain
│   │   ├── mountain-awei.png
│   │   ├── mountain-obia.png
│   │   ├── mountain-scene.png
│   │   └── mountain-watnet.png
│   ├── urban
│   │   ├── urban-awei.png
│   │   ├── urban-mndwi.png
│   │   ├── urban-scene.png
│   │   └── urban-watnet.png
│   └── watnet_structure.png
├── model
│   ├── base_model
│   │   ├── mobilenetv2.py
│   │   └── xception.py
│   ├── pretrained
│   │   └── watnet.h5
│   └── seg_model
│       ├── deeplabv3_plus.py
│       ├── deepwatermapv2.py
│       └── watnet.py
├── notebooks
│   ├── config.py
│   ├── infer_demo.ipynb
│   ├── metric_plot.ipynb
│   ├── model_eval.ipynb
│   └── trainer.ipynb
├── utils
│   ├── acc_patch.py
│   ├── acc_pixel.py
│   ├── geotif_io.py
│   ├── imgPatch.py
│   └── imgShow.py
└── watnet_infer.py
/.gitignore:
--------------------------------------------------------------------------------
1 | data/dset-s2/*
2 | data/tfrecord-s2/*
3 | data/test-data
4 | result/
5 | data/visual-data-gis
6 | model/pretrained/*
7 | !model/pretrained/watnet*
8 | .vscode
9 | .DS_Store
10 | *__pycache__/
11 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5205674.svg)](https://doi.org/10.5281/zenodo.5205674)
2 |
3 | # WatNet
4 |
5 | - A deep ConvNet for surface water mapping based on Sentinel-2 images
6 |
7 | ## -- Model
8 | - We use DeepLabv3+ with a MobileNetV2 backbone as the main model structure; additionally, some simple yet effective modifications are designed to improve satellite image-based surface water mapping.
9 |
10 | ![watnet structure](figures/watnet_structure.png)
11 |
12 | ## -- DataSet
13 | - The surface water dataset for deep learning can be downloaded from Zenodo [**[Link]**](https://doi.org/10.5281/zenodo.5205674).
14 |
15 | ![dataset](figures/dataset.png)
16 | |Labeling example 1:|Labeling example 2:|
17 | |:--|:--|
18 | |![label_sam_1](figures/label_sam_1.png)|![label_sam_2](figures/label_sam_2.png)|
19 |
20 | ## -- Performance
21 | - Examples of surface water mapping
22 |
23 | **Urban region**
24 | |Urban scene|AWEI|MNDWI|WatNet|
25 | |:--|:--|:--|:--|
26 | |![urban-scene](figures/urban/urban-scene.png)|![urban-awei](figures/urban/urban-awei.png)|![urban-mndwi](figures/urban/urban-mndwi.png)|![urban-watnet](figures/urban/urban-watnet.png)|
27 |
28 | **Cloudy region**
29 | |Cloudy scene|AWEI|OBIA|WatNet|
30 | |:--|:--|:--|:--|
31 | |![cloudy-scene](figures/cloudy/cloudy-scene.png)|![cloudy-awei](figures/cloudy/cloudy-awei.png)|![cloudy-obia](figures/cloudy/cloudy-obia.png)|![cloudy-watnet](figures/cloudy/cloudy-watnet.png)|
32 |
33 | **Mountainous region**
34 | |Mountain scene|AWEI|OBIA|WatNet|
35 | |:--|:--|:--|:--|
36 | |![mountain-scene](figures/mountain/mountain-scene.png)|![mountain-awei](figures/mountain/mountain-awei.png)|![mountain-obia](figures/mountain/mountain-obia.png)|![mountain-watnet](figures/mountain/mountain-watnet.png)|
37 |
38 |
39 | ## **-- How to use the trained WatNet?**
40 |
41 | ### -- Step 1
42 | - Run the following command to download the code, and then configure the Python and deep learning environment. The deep learning framework used in this repo is [Tensorflow 2.5](https://www.tensorflow.org/).
43 |
44 | ~~~console
45 | git clone https://github.com/xinluo2018/WatNet.git
46 | ~~~
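
- A conda environment for this repo can be created from the provided **_environmental.yml_** (one possible way, assuming conda is installed; any environment with Tensorflow 2.5 should also work):

~~~console
cd WatNet
conda env create -f environmental.yml
conda activate venv-tf
~~~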
47 |
48 | ### -- Step 2
49 | - Download Sentinel-2 images, and select the four visible/near-infrared 10-m bands and the two shortwave-infrared 20-m bands, which correspond to bands 2, 3, 4, 8, 11, and 12 of the Sentinel-2 image (as sketched below).
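
- For reference, a minimal band-preparation sketch, assuming [rasterio](https://rasterio.readthedocs.io) is installed; the band file names are hypothetical and depend on the downloaded Sentinel-2 product:

~~~python
import numpy as np
import rasterio
from rasterio.enums import Resampling

## bands 2/3/4/8 (10 m) and 11/12 (20 m): blue, green, red, nir, swir1, swir2
bands = ['B02', 'B03', 'B04', 'B08', 'B11', 'B12']
with rasterio.open('B02.jp2') as ref:          # a 10-m band defines the target grid
    profile, height, width = ref.profile, ref.height, ref.width
layers = []
for band in bands:
    with rasterio.open(f'{band}.jp2') as src:  # 20-m bands are upsampled to 10 m
        layers.append(src.read(1, out_shape=(height, width),
                               resampling=Resampling.bilinear))
profile.update(driver='GTiff', count=len(bands))
with rasterio.open('data/test-demo/s2_6bands.tif', 'w', **profile) as dst:
    dst.write(np.stack(layers))
~~~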
50 |
51 | ### -- Step 3
52 | - Add the prepared Sentinel-2 image (6 bands) to the **_data/test-demo_** directory, modify the data name in the **_notebooks/infer_demo.ipynb_** file, and then run **_notebooks/infer_demo.ipynb_** to generate the surface water map.
53 | - Users can also perform surface water mapping with the **_watnet_infer.py_** script, specifically:
54 | - --- functional API:
55 | ~~~python
56 | from watnet_infer import watnet_infer
57 | water_map = watnet_infer(rsimg) # full example in notebooks/infer_demo.ipynb.
58 | ~~~
59 | - --- command-line API:
60 | ~~~console
61 | python watnet_infer.py data/test-demo/*.tif -o data/test-demo/result
62 | ~~~
63 |
64 | ## **-- How to train the WatNet?**
65 |
66 | - With the dataset, the user can train the WatNet by running the code file **_notebooks/trainer.ipynb_**. Since [**tfrecords**](https://www.tensorflow.org/tutorials/load_data/tfrecord?hl=zh-tw)-format data is required for model training, the user should first convert the .tif dataset to .tfrecords format by running the code file **_dataloader/tfrecord_writter.ipynb_** (sketched below).
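
- For reference, the conversion performed in **_dataloader/tfrecord_writter.ipynb_** boils down to the following sketch (paths assume the dataset is unpacked under **_data/dset-s2_**):

~~~python
import glob
import numpy as np
import tensorflow as tf
from utils.geotif_io import readTiff
from dataloader.tfrecord_io import image_example

scene_paths = sorted(glob.glob('data/dset-s2/tra_scene/*.tif'))
truth_paths = sorted(glob.glob('data/dset-s2/tra_truth/*.tif'))
with tf.io.TFRecordWriter('data/tfrecord-s2/tra_scene.tfrecords') as writer:
    for path_scene, path_truth in zip(scene_paths, truth_paths):
        scene, _ = readTiff(path_scene)
        truth, _ = readTiff(path_truth)
        scene = np.clip(scene/10000, 0, 1)   # scale reflectance to [0, 1]
        writer.write(image_example(scene, truth).SerializeToString())
~~~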
67 |
68 |
69 | ## -- Citation
70 |
71 | - Xin Luo, Xiaohua Tong, Zhongwen Hu. An applicable and automatic method for earth surface water mapping based on multispectral images. International Journal of Applied Earth Observation and Geoinformation, 2021, 103, 102472. [[**Link**](https://www.sciencedirect.com/science/article/pii/S0303243421001793)]
72 | ```
73 | @article{luo2021_watnet,
74 | title = {An applicable and automatic method for earth surface water mapping based on multispectral images},
75 | author = {Xin Luo and Xiaohua Tong and Zhongwen Hu},
76 | journal = {International Journal of Applied Earth Observation and Geoinformation},
77 | volume = {103},
78 | pages = {102472},
79 | year = {2021}
80 | }
81 | ```
82 |
83 |
84 | ## -- Acknowledgement
85 | - We thank the authors who provided some of the code in this repo:
86 | [deeplabv3_plus](https://github.com/luyanger1799/amazing-semantic-segmentation) and [deepwatermapv2](https://github.com/isikdogan/deepwatermap)
87 |
88 |
--------------------------------------------------------------------------------
/data/test-demo/T49QGF_20191017_6Bands_Urban_Subs.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/data/test-demo/T49QGF_20191017_6Bands_Urban_Subs.tif
--------------------------------------------------------------------------------
/data/test-demo/T49QGF_20191017_6Bands_Urban_Subs_water.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/data/test-demo/T49QGF_20191017_6Bands_Urban_Subs_water.tif
--------------------------------------------------------------------------------
/data/test-demo/val_sam.csv:
--------------------------------------------------------------------------------
1 | row,col,label
2 | 14,2207,1
3 | 14,2208,1
4 | 14,2206,1
5 | 15,2207,1
6 | 15,2208,1
7 | 15,2206,1
8 | 16,2206,1
9 | 16,2207,1
10 | 16,2208,1
11 | 316,1621,1
12 | 316,1622,1
13 | 316,1620,1
14 | 317,1620,1
15 | 317,1621,1
16 | 317,1622,1
17 | 318,1621,1
18 | 318,1622,1
19 | 318,1620,1
20 | 319,1620,1
21 | 319,1621,1
22 | 319,1622,1
23 | 826,238,1
24 | 826,237,1
25 | 827,238,1
26 | 827,237,1
27 | 828,237,1
28 | 828,238,1
29 | 990,2079,1
30 | 990,2078,1
31 | 990,2077,1
32 | 990,2080,1
33 | 991,2077,1
34 | 991,2078,1
35 | 991,2079,1
36 | 991,2080,1
37 | 996,1490,1
38 | 996,1489,1
39 | 997,1490,1
40 | 997,1489,1
41 | 998,1489,1
42 | 998,1490,1
43 | 1071,425,1
44 | 1071,424,1
45 | 1071,426,1
46 | 1072,425,1
47 | 1072,424,1
48 | 1072,426,1
49 | 1073,426,1
50 | 1073,425,1
51 | 1073,424,1
52 | 1074,426,1
53 | 1074,425,1
54 | 1074,424,1
55 | 1075,426,1
56 | 1075,425,1
57 | 1075,424,1
58 | 1076,424,1
59 | 1076,426,1
60 | 1076,425,1
61 | 1077,425,1
62 | 1077,424,1
63 | 1077,426,1
64 | 1089,836,1
65 | 1089,835,1
66 | 1090,835,1
67 | 1090,836,1
68 | 1091,835,1
69 | 1091,836,1
70 | 1116,350,1
71 | 1116,348,1
72 | 1116,349,1
73 | 1117,350,1
74 | 1117,349,1
75 | 1117,348,1
76 | 1118,349,1
77 | 1118,350,1
78 | 1118,348,1
79 | 1119,349,1
80 | 1119,348,1
81 | 1119,350,1
82 | 1153,1818,1
83 | 1153,1819,1
84 | 1153,1817,1
85 | 1154,1819,1
86 | 1154,1818,1
87 | 1154,1817,1
88 | 1155,1818,1
89 | 1155,1819,1
90 | 1155,1817,1
91 | 1169,1058,1
92 | 1169,1057,1
93 | 1170,1058,1
94 | 1170,1057,1
95 | 1171,1057,1
96 | 1171,1058,1
97 | 1212,43,1
98 | 1212,44,1
99 | 1212,45,1
100 | 1212,42,1
101 | 1213,43,1
102 | 1213,44,1
103 | 1213,45,1
104 | 1213,42,1
105 | 1286,1084,1
106 | 1286,1083,1
107 | 1287,1083,1
108 | 1287,1084,1
109 | 1311,613,1
110 | 1311,615,1
111 | 1311,614,1
112 | 1312,613,1
113 | 1312,615,1
114 | 1312,614,1
115 | 1313,615,1
116 | 1313,613,1
117 | 1313,614,1
118 | 1314,613,1
119 | 1314,615,1
120 | 1314,614,1
121 | 1356,2270,1
122 | 1356,2269,1
123 | 1356,1402,1
124 | 1356,2271,1
125 | 1357,2269,1
126 | 1357,2270,1
127 | 1357,2271,1
128 | 1357,1402,1
129 | 1358,1402,1
130 | 1358,2270,1
131 | 1358,2269,1
132 | 1358,2271,1
133 | 1405,1587,1
134 | 1405,1586,1
135 | 1406,1586,1
136 | 1406,1587,1
137 | 1407,1587,1
138 | 1407,1586,1
139 | 1408,1586,1
140 | 1408,1587,1
141 | 1422,366,1
142 | 1422,367,1
143 | 1423,367,1
144 | 1423,366,1
145 | 1424,367,1
146 | 1424,366,1
147 | 1425,366,1
148 | 1425,367,1
149 | 1430,2218,1
150 | 1430,2219,1
151 | 1430,2217,1
152 | 1431,2218,1
153 | 1431,2219,1
154 | 1431,2217,1
155 | 1432,2218,1
156 | 1432,2219,1
157 | 1432,2217,1
158 | 1433,2218,1
159 | 1433,2217,1
160 | 1433,2219,1
161 | 1489,1111,1
162 | 1489,1112,1
163 | 1489,1113,1
164 | 1490,1112,1
165 | 1490,1113,1
166 | 1490,1111,1
167 | 1491,1112,1
168 | 1491,1113,1
169 | 1491,1111,1
170 | 1586,209,1
171 | 1587,209,1
172 | 1588,209,1
173 | 1672,2005,1
174 | 1672,2004,1
175 | 1672,2003,1
176 | 1672,2007,1
177 | 1672,2006,1
178 | 1673,2005,1
179 | 1673,2006,1
180 | 1673,2007,1
181 | 1673,2003,1
182 | 1673,2004,1
183 | 1674,2007,1
184 | 1674,2005,1
185 | 1674,2004,1
186 | 1674,2003,1
187 | 1674,2006,1
188 | 1675,2004,1
189 | 1675,2005,1
190 | 1675,2006,1
191 | 1675,2007,1
192 | 1675,2003,1
193 | 1676,2004,1
194 | 1676,2003,1
195 | 1676,2006,1
196 | 1676,2007,1
197 | 1676,2005,1
198 | 1677,2005,1
199 | 1677,2007,1
200 | 1677,2004,1
201 | 1677,2003,1
202 | 1677,2006,1
203 | 1678,2005,1
204 | 1678,2004,1
205 | 1678,2003,1
206 | 1678,2006,1
207 | 1678,2007,1
208 | 1748,1012,1
209 | 1748,1011,1
210 | 1749,1012,1
211 | 1749,1011,1
212 | 1750,1012,1
213 | 1750,1011,1
214 | 1842,1062,1
215 | 1842,1063,1
216 | 1842,1061,1
217 | 1843,1061,1
218 | 1843,1062,1
219 | 1843,1063,1
220 | 1929,1436,1
221 | 1929,1435,1
222 | 1930,1436,1
223 | 1930,1435,1
224 | 1968,1822,1
225 | 1968,1823,1
226 | 1968,1824,1
227 | 1969,1823,1
228 | 1969,1824,1
229 | 1969,1822,1
230 | 1970,1822,1
231 | 1970,1823,1
232 | 1970,1824,1
233 | 1971,1823,1
234 | 1971,1824,1
235 | 1971,1822,1
236 | 35,1440,0
237 | 35,1441,0
238 | 35,1439,0
239 | 36,1440,0
240 | 36,1441,0
241 | 36,1439,0
242 | 37,1439,0
243 | 37,1440,0
244 | 37,1441,0
245 | 38,1440,0
246 | 38,1441,0
247 | 38,1439,0
248 | 39,1439,0
249 | 39,1440,0
250 | 39,1441,0
251 | 40,1440,0
252 | 40,1441,0
253 | 40,1439,0
254 | 109,1493,0
255 | 109,1494,0
256 | 109,1492,0
257 | 110,1492,0
258 | 110,1493,0
259 | 110,1494,0
260 | 111,1493,0
261 | 111,1494,0
262 | 111,1492,0
263 | 112,1492,0
264 | 112,1493,0
265 | 112,1494,0
266 | 113,1493,0
267 | 113,1494,0
268 | 113,1492,0
269 | 114,1492,0
270 | 114,1493,0
271 | 114,1494,0
272 | 138,1802,0
273 | 138,1803,0
274 | 138,1804,0
275 | 138,1801,0
276 | 139,1801,0
277 | 139,1802,0
278 | 139,1803,0
279 | 139,1804,0
280 | 140,1802,0
281 | 140,1803,0
282 | 140,1804,0
283 | 140,1801,0
284 | 141,1801,0
285 | 141,1802,0
286 | 141,1803,0
287 | 141,1804,0
288 | 278,1004,0
289 | 303,1034,0
290 | 377,1257,0
291 | 482,1351,0
292 | 483,1318,0
293 | 543,513,0
294 | 543,514,0
295 | 543,515,0
296 | 543,516,0
297 | 544,514,0
298 | 544,515,0
299 | 544,516,0
300 | 544,513,0
301 | 545,513,0
302 | 545,514,0
303 | 545,515,0
304 | 545,516,0
305 | 546,514,0
306 | 546,515,0
307 | 546,516,0
308 | 546,513,0
309 | 547,513,0
310 | 547,514,0
311 | 547,515,0
312 | 547,516,0
313 | 697,2218,0
314 | 697,2219,0
315 | 697,2217,0
316 | 698,2217,0
317 | 698,2218,0
318 | 698,2219,0
319 | 699,2218,0
320 | 699,2219,0
321 | 699,2217,0
322 | 700,2217,0
323 | 700,2218,0
324 | 700,2219,0
325 | 701,2218,0
326 | 701,2219,0
327 | 701,2217,0
328 | 822,324,0
329 | 822,325,0
330 | 822,323,0
331 | 822,326,0
332 | 823,325,0
333 | 823,326,0
334 | 823,324,0
335 | 823,323,0
336 | 824,324,0
337 | 824,325,0
338 | 824,323,0
339 | 824,326,0
340 | 825,323,0
341 | 825,325,0
342 | 825,326,0
343 | 825,324,0
344 | 826,325,0
345 | 826,323,0
346 | 826,326,0
347 | 826,324,0
348 | 827,325,0
349 | 827,326,0
350 | 827,324,0
351 | 827,323,0
352 | 828,325,0
353 | 828,324,0
354 | 828,323,0
355 | 828,326,0
356 | 964,136,0
357 | 964,137,0
358 | 964,135,0
359 | 965,136,0
360 | 965,137,0
361 | 965,135,0
362 | 966,136,0
363 | 966,135,0
364 | 966,137,0
365 | 967,136,0
366 | 967,137,0
367 | 967,135,0
368 | 968,136,0
369 | 968,135,0
370 | 968,137,0
371 | 1094,136,0
372 | 1194,1276,0
373 | 1207,953,0
374 | 1315,415,0
375 | 1395,425,0
376 | 1659,550,0
377 | 1659,551,0
378 | 1659,549,0
379 | 1660,549,0
380 | 1660,550,0
381 | 1660,551,0
382 | 1661,550,0
383 | 1661,551,0
384 | 1661,549,0
385 | 1708,2065,0
386 | 1708,2064,0
387 | 1708,2066,0
388 | 1708,2067,0
389 | 1708,2068,0
390 | 1708,2069,0
391 | 1709,2065,0
392 | 1709,2066,0
393 | 1709,2067,0
394 | 1709,2069,0
395 | 1709,2068,0
396 | 1709,2064,0
397 | 1710,2065,0
398 | 1710,2066,0
399 | 1710,2067,0
400 | 1710,2068,0
401 | 1710,2069,0
402 | 1710,2064,0
403 | 1711,2064,0
404 | 1711,2065,0
405 | 1711,2066,0
406 | 1711,2067,0
407 | 1711,2068,0
408 | 1711,2069,0
409 | 1712,2064,0
410 | 1712,2065,0
411 | 1712,2066,0
412 | 1712,2067,0
413 | 1712,2068,0
414 | 1712,2069,0
415 | 1713,2065,0
416 | 1713,2066,0
417 | 1713,2068,0
418 | 1713,2067,0
419 | 1713,2064,0
420 | 1713,2069,0
421 | 1714,2067,0
422 | 1714,2065,0
423 | 1714,2064,0
424 | 1714,2068,0
425 | 1714,2069,0
426 | 1714,2066,0
427 | 1715,2067,0
428 | 1715,2064,0
429 | 1715,2068,0
430 | 1715,2065,0
431 | 1715,2069,0
432 | 1715,2066,0
433 | 1771,1552,0
434 | 1771,1550,0
435 | 1771,1553,0
436 | 1771,1551,0
437 | 1772,1550,0
438 | 1772,1551,0
439 | 1772,1552,0
440 | 1772,1553,0
441 | 1773,1551,0
442 | 1773,1552,0
443 | 1773,1553,0
444 | 1773,1550,0
445 | 1774,1550,0
446 | 1774,1551,0
447 | 1774,1552,0
448 | 1774,1553,0
449 | 1775,1553,0
450 | 1775,1551,0
451 | 1775,1552,0
452 | 1775,1550,0
453 | 1776,1552,0
454 | 1776,1551,0
455 | 1776,1550,0
456 | 1776,1553,0
457 | 1777,1551,0
458 | 1777,1552,0
459 | 1777,1553,0
460 | 1777,1550,0
461 | 1796,304,0
462 | 1796,303,0
463 | 1797,304,0
464 | 1797,303,0
465 | 1798,303,0
466 | 1798,304,0
467 | 1851,961,0
468 | 1851,960,0
469 | 1851,959,0
470 | 1851,962,0
471 | 1852,959,0
472 | 1852,960,0
473 | 1852,961,0
474 | 1852,962,0
475 | 1853,960,0
476 | 1853,962,0
477 | 1853,961,0
478 | 1853,959,0
479 | 1854,959,0
480 | 1854,960,0
481 | 1854,961,0
482 | 1854,962,0
483 | 1859,605,0
484 | 1859,606,0
485 | 1859,607,0
486 | 1860,606,0
487 | 1860,605,0
488 | 1860,607,0
489 | 1861,606,0
490 | 1861,607,0
491 | 1861,605,0
492 | 1862,606,0
493 | 1862,607,0
494 | 1862,605,0
495 |
--------------------------------------------------------------------------------
/dataloader/img_aug.py:
--------------------------------------------------------------------------------
1 | import random
2 | import tensorflow as tf
3 |
4 | def img_aug(image, truth, flip = True, rot = True, noisy = True):
5 |     '''Data augmentation: noise, flip, rotation.'''
6 | # image = tf.convert_to_tensor(image, dtype=tf.float32)
7 | # truth = tf.convert_to_tensor(truth, dtype=tf.float32)
8 | if len(truth.shape) == 2:
9 | truth = tf.expand_dims(truth, axis=-1)
10 |     if flip:
11 | if tf.random.uniform(()) > 0.5:
12 | if random.randint(1,2) == 1: ## horizontal or vertical mirroring
13 | image = tf.image.flip_left_right(image)
14 | truth = tf.image.flip_left_right(truth)
15 | else:
16 | image = tf.image.flip_up_down(image)
17 | truth = tf.image.flip_up_down(truth)
18 |     if rot:
19 | if tf.random.uniform(()) > 0.5:
20 | degree = random.randint(1,3)
21 | image = tf.image.rot90(image, k=degree)
22 | truth = tf.image.rot90(truth, k=degree)
23 |     if noisy:
24 | if tf.random.uniform(()) > 0.5:
25 | std = random.uniform(0.001, 0.05)
26 | gnoise = tf.random.normal(shape=tf.shape(image), mean=0.0, stddev=std, dtype=tf.float32)
27 | image = tf.add(image, gnoise)
28 | return image, truth
29 |
30 |
31 |
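32 | if __name__ == '__main__':
33 |     ## A minimal usage sketch: augment one 512x512 6-band patch; the random
34 |     ## arrays below are placeholder data for illustration only.
35 |     import numpy as np
36 |     image = tf.constant(np.random.rand(512, 512, 6), dtype=tf.float32)
37 |     truth = tf.constant(np.random.randint(0, 2, (512, 512)), dtype=tf.float32)
38 |     image_aug, truth_aug = img_aug(image, truth)
39 |     print(image_aug.shape, truth_aug.shape)  # (512, 512, 6) (512, 512, 1)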
--------------------------------------------------------------------------------
/dataloader/path_io.py:
--------------------------------------------------------------------------------
1 | import random
2 | import numpy as np
3 | from utils.geotif_io import readTiff
4 |
5 | def read_scene_pair(paths_scene, paths_truth):
6 |     '''read data from paths and normalize values to [0, 1]
7 |     '''
8 | scenes = []
9 | truths = []
10 | paths_scene_pair = zip(paths_scene, paths_truth)
11 | for path_scene, path_truth in paths_scene_pair:
12 | scene,_ = readTiff(path_scene)
13 | truth,_ = readTiff(path_truth)
14 | scene = np.clip(scene/10000,0,1) # normalization
15 | scenes.append(scene)
16 | truths.append(truth)
17 | return scenes, truths
18 |
19 | def crop_patch(img, truth, width=512, height=512, _random=True):
20 | '''crop image to patch'''
21 | if _random:
22 | row_start = random.randint(0, img.shape[0]-height)
23 | col_start = random.randint(0, img.shape[1]-width)
24 | else:
25 | row_start = 0
26 | col_start = 0
27 | patch = img[row_start:row_start+height, col_start:col_start+width]
28 | truth = truth[row_start:row_start+height, col_start:col_start+width]
29 | patch, truth = patch.astype(np.float32), truth.astype(np.float32)
30 | return patch, truth
31 |
32 | def crop_patches(imgs, truths, width=512, height=512, _random=True):
33 |     '''crop images to patches
34 |     args:
35 |         imgs: a list containing images
36 |         truths: a list containing image truths
37 |     return:
38 |         lists of cropped patches and corresponding truths
39 |     '''
40 | patches, ptruths = [],[]
41 | for i in range(len(imgs)):
42 | patch, truth = crop_patch(img=imgs[i], truth=truths[i], \
43 | width=width, height=height, _random=_random)
44 | patches.append(patch)
45 | ptruths.append(truth)
46 | return patches, ptruths
47 |
48 |
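49 | if __name__ == '__main__':
50 |     ## A minimal usage sketch: read scene/truth pairs and crop random 512x512
51 |     ## patches; the glob patterns assume the data/dset-s2 training layout.
52 |     import glob
53 |     paths_scene = sorted(glob.glob('data/dset-s2/tra_scene/*.tif'))
54 |     paths_truth = sorted(glob.glob('data/dset-s2/tra_truth/*.tif'))
55 |     scenes, truths = read_scene_pair(paths_scene, paths_truth)
56 |     patches, ptruths = crop_patches(scenes, truths, width=512, height=512)
57 |     print(len(patches), patches[0].shape)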
--------------------------------------------------------------------------------
/dataloader/tfrecord_io.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 | feaBand = ['blue','green','red','nir','swir1','swir2']
4 | truBand = ['truth']
5 | mergeBand = feaBand+truBand
6 |
7 |
8 | '''-------------write out-------------'''
9 | def int64_feature(value):
10 | """Returns an int64_list from a bool / enum / int / uint."""
11 | return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
12 |
13 | def float_feature(value):
14 | """Returns a float_list from a float / double."""
15 | return tf.train.Feature(float_list=tf.train.FloatList(value=value))
16 |
17 | # Create a dictionary with features that may be relevant.
18 | def image_example(image,truth):
19 | feature = {
20 | 'bandShape': int64_feature(image[:,:,0].shape),
21 | 'blue': float_feature(image[:,:,0].flatten()),
22 | 'green': float_feature(image[:,:,1].flatten()),
23 | 'red': float_feature(image[:,:,2].flatten()),
24 | 'nir': float_feature(image[:,:,3].flatten()),
25 | 'swir1': float_feature(image[:,:,4].flatten()),
26 | 'swir2': float_feature(image[:,:,5].flatten()),
27 | 'truth': float_feature(truth.flatten()),
28 | }
29 | return tf.train.Example(features=tf.train.Features(feature=feature))
30 |
31 | '''-------------read in-------------'''
32 | featuresDict = {
33 | 'bandShape': tf.io.FixedLenFeature([2,],dtype=tf.int64),
34 | 'blue': tf.io.VarLenFeature(dtype=tf.float32),
35 | 'green': tf.io.VarLenFeature(dtype=tf.float32),
36 | 'red': tf.io.VarLenFeature(dtype=tf.float32),
37 | 'nir': tf.io.VarLenFeature(dtype=tf.float32),
38 | 'swir1': tf.io.VarLenFeature(dtype=tf.float32),
39 | 'swir2': tf.io.VarLenFeature(dtype=tf.float32),
40 | 'truth': tf.io.VarLenFeature(dtype=tf.float32),
41 | }
42 |
43 | def parse_image(example_proto):
44 | # Parse the input tf.train.Example proto using the dictionary above.
45 | return tf.io.parse_single_example(example_proto, featuresDict)
46 |
47 | def parse_shape(example_parsed):
48 | for fea in mergeBand:
49 | example_parsed[fea] = tf.sparse.to_dense(example_parsed[fea])
50 | example_parsed[fea] = tf.reshape(example_parsed[fea], example_parsed['bandShape'])
51 | return example_parsed
52 |
53 | def toPatchPair(inputs):
54 | inputsList = [inputs.get(key) for key in mergeBand]
55 | stacked = tf.stack(inputsList, axis=2)
56 | cropped_stacked = tf.image.random_crop(
57 | stacked, size=[512, 512, len(mergeBand)])
58 | return cropped_stacked[:,:,:len(feaBand)], cropped_stacked[:,:,len(feaBand):]
59 |
60 |
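61 | if __name__ == '__main__':
62 |     ## A minimal reading-pipeline sketch: turn a tfrecord file into batched
63 |     ## (image, truth) patch pairs; the path assumes the data/tfrecord-s2
64 |     ## layout produced by dataloader/tfrecord_writter.ipynb.
65 |     dataset = (tf.data.TFRecordDataset('data/tfrecord-s2/tra_scene.tfrecords')
66 |                .map(parse_image)
67 |                .map(parse_shape)
68 |                .map(toPatchPair)
69 |                .batch(4))
70 |     for image, truth in dataset.take(1):
71 |         print(image.shape, truth.shape)  # (4, 512, 512, 6) (4, 512, 512, 1)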
--------------------------------------------------------------------------------
/dataloader/tfrecord_writter.ipynb:
--------------------------------------------------------------------------------
1 | {"nbformat":4,"nbformat_minor":2,"metadata":{"colab":{"name":"tfrecord_writer.ipynb","provenance":[],"collapsed_sections":[]},"kernelspec":{"name":"python3","display_name":"Python 3.6.13 64-bit ('venv-tf': conda)"},"language_info":{"name":"python","version":"3.6.13","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"},"interpreter":{"hash":"540331ab3874cc423b0568d77e5ba6321eb807f34352bd2dfeeaef70fdba3ae8"}},"cells":[{"cell_type":"code","execution_count":2,"source":["# ### mount on google drive if you use colab\n","# from google.colab import drive\n","# drive.mount('/content/drive/')\n","# import os\n","# os.chdir(\"/content/drive/My Drive/WatNet/notebooks\")\n"],"outputs":[{"output_type":"stream","name":"stdout","text":["Drive already mounted at /content/drive/; to attempt to forcibly remount, call drive.mount(\"/content/drive/\", force_remount=True).\n"]}],"metadata":{"id":"g1ppHTj0WylH","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1622636666681,"user_tz":-480,"elapsed":447,"user":{"displayName":"Xin Luo","photoUrl":"","userId":"06301970496892076570"}},"outputId":"39d90e1a-8e25-41af-e7d0-4725abb7d679"}},{"cell_type":"code","execution_count":1,"source":["import os\n","os.chdir('..')\n","import glob\n","import numpy as np\n","import tensorflow as tf\n","from utils.geotif_io import readTiff\n","from dataloader.path_io import crop_patch\n","from dataloader.tfrecord_io import image_example\n"],"outputs":[],"metadata":{"id":"gFkuYIjIYwj6","executionInfo":{"status":"ok","timestamp":1622636673866,"user_tz":-480,"elapsed":2810,"user":{"displayName":"Xin Luo","photoUrl":"","userId":"06301970496892076570"}}}},{"cell_type":"code","execution_count":3,"source":["path_tra_tfrecord_scene = 'data/tfrecord-s2/tra_scene.tfrecords'\n","path_val_tfrecord_patch = 'data/tfrecord-s2/val_patch.tfrecords'\n","path_val_tfrecord_scene = 'data/tfrecord-s2/val_scene.tfrecords'\n","tra_scene_paths = sorted(glob.glob('data/dset-s2/tra_scene/*.tif'))\n","tra_truth_paths = sorted(glob.glob('data/dset-s2/tra_truth/*.tif'))\n","tra_pair_data = list(zip(tra_scene_paths, tra_truth_paths))\n","print('tra_data length:', len(tra_pair_data))\n","val_scene_paths = sorted(glob.glob('data/dset-s2/val_scene/*.tif'))\n","val_truth_paths = sorted(glob.glob('data/dset-s2/val_truth/*.tif'))\n","val_pair_data = list(zip(val_scene_paths, val_truth_paths))\n","print('val_data length:', len(val_pair_data))\n","\n"],"outputs":[{"output_type":"stream","name":"stdout","text":["tra_data length: 64\n","val_data length: 31\n"]}],"metadata":{"id":"uhy8whHOWzps","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1622636679077,"user_tz":-480,"elapsed":943,"user":{"displayName":"Xin Luo","photoUrl":"","userId":"06301970496892076570"}},"outputId":"d9de8cf7-773a-40ff-8dca-93dc6d354904"}},{"cell_type":"code","execution_count":4,"source":["# Trainging data: Write to a `.tfrecords` file.\n","with tf.io.TFRecordWriter(path_tra_tfrecord_scene) as writer:\n"," for path_scene, path_truth in tra_pair_data:\n"," print(path_scene)\n"," scene,_ = readTiff(path_scene)\n"," truth,_ = readTiff(path_truth)\n"," scene = np.clip(scene/10000,0,1) \n"," tf_example = image_example(scene, truth)\n"," # 
writer.write(tf_example.SerializeToString())\n"],"outputs":[{"output_type":"stream","name":"stdout","text":["data/dset-s2/tra_scene/S2A_L2A_20190125_N0211_R034_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190125_N0211_R034_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190125_N0211_R034_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190206_N0211_R067_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190206_N0211_R067_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190206_N0211_R067_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190314_N0211_R008_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190314_N0211_R008_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190314_N0211_R008_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190716_N0213_R063_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190716_N0213_R063_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190716_N0213_R063_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190811_N0213_R013_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190811_N0213_R013_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190811_N0213_R013_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190817_N0213_R089_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190817_N0213_R089_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190817_N0213_R089_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190817_N0213_R089_6Bands_S4.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190830_N0213_R133_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190830_N0213_R133_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2A_L2A_20190830_N0213_R133_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20181226_N0211_R102_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20181226_N0211_R102_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20181226_N0211_R102_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190225_N0211_R117_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190225_N0211_R117_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190225_N0211_R117_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190430_N0211_R035_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190430_N0211_R035_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190430_N0211_R035_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190506_N0212_R126_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190506_N0212_R126_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190506_N0212_R126_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190607_N0212_R012_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190607_N0212_R012_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190607_N0212_R012_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190620_N0212_R053_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190620_N0212_R053_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190620_N0212_R053_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190801_N0213_R084_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190801_N0213_R084_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190801_N0213_R084_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190807_N0213_R018_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190807_N0213_R018_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190807_N0213_R018_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190811_N0213_R079_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190811_N0213_R079_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190811_N0213_R079_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190818_N0213_R035_6Bands_S1.tif\
n","data/dset-s2/tra_scene/S2B_L2A_20190818_N0213_R035_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190818_N0213_R035_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190831_N0213_R082_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190831_N0213_R082_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190831_N0213_R082_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190904_N0213_R132_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190904_N0213_R132_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190904_N0213_R132_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190912_N0213_R102_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190912_N0213_R102_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20190912_N0213_R102_6Bands_S3.tif\n","data/dset-s2/tra_scene/S2B_L2A_20191023_N0213_R121_6Bands_S1.tif\n","data/dset-s2/tra_scene/S2B_L2A_20191023_N0213_R121_6Bands_S2.tif\n","data/dset-s2/tra_scene/S2B_L2A_20191023_N0213_R121_6Bands_S3.tif\n"]}],"metadata":{"id":"4vu-8itudUwx"}},{"cell_type":"code","execution_count":5,"source":["# Val data: Write to a `_scene.tfrecords` file.\n","with tf.io.TFRecordWriter(path_val_tfrecord_scene) as writer:\n"," for path_scene, path_truth in val_pair_data:\n"," print(path_scene)\n"," scene,_ = readTiff(path_scene)\n"," truth,_ = readTiff(path_truth)\n"," scene = np.clip(scene/10000,0,1)\n"," tf_example = image_example(scene, truth)\n"," # writer.write(tf_example.SerializeToString())\n"],"outputs":[{"output_type":"stream","name":"stdout","text":["data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S4.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_2019101
5_N0213_R007_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S3.tif\n"]}],"metadata":{}},{"cell_type":"code","execution_count":6,"source":["## Validation data: Write to a `_patch.tfrecords` file.\n","with tf.io.TFRecordWriter(path_val_tfrecord_patch) as writer:\n"," for i in range(10): # random croping 10 times for each scene\n"," for path_scene, path_truth in val_pair_data:\n"," print(path_scene)\n"," scene,_ = readTiff(path_scene)\n"," truth,_ = readTiff(path_truth)\n"," scene = np.clip(scene/10000,0,1)\n"," patch, truth = crop_patch(img=scene, truth=truth)\n"," tf_example = image_example(patch, truth)\n"," # writer.write(tf_example.SerializeToString())\n"],"outputs":[{"output_type":"stream","name":"stdout","text":["data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S4.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508
_N0212_R078_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S4.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S4.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S2.tif\n","data/d
set-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S4.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0
213_R054_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S4.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S4.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S3.tif\n","data/dset
-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S4.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S4.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213
_R007_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S4.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190318_N0211_R061_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190426_N0211_R053_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190429_N0211_R096_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190508_N0212_R078_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190518_N0212_R073_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S1.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190724_N0213_R033_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S1.tif\n","data/dset-s2
/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S2.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S3.tif\n","data/dset-s2/val_scene/S2A_L2A_20190725_N0213_R054_6Bands_S4.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190303_N0211_R065_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20190620_N0212_R047_6Bands_S3.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S1.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S2.tif\n","data/dset-s2/val_scene/S2B_L2A_20191015_N0213_R007_6Bands_S3.tif\n"]}],"metadata":{"id":"A4_vq76_rb_O","executionInfo":{"status":"ok","timestamp":1622636805400,"user_tz":-480,"elapsed":368,"user":{"displayName":"Xin Luo","photoUrl":"","userId":"06301970496892076570"}},"tags":[]}}]}
--------------------------------------------------------------------------------
/environmental.yml:
--------------------------------------------------------------------------------
1 | name: venv-tf
2 | channels:
3 | - conda-forge
4 | - defaults
5 | dependencies:
6 | - gdal=3.2.1=py36h5adf297_0
7 | - geos=3.8.1=he1b5a44_0
8 | - geotiff=1.6.0=h5d11630_3
9 | - gettext=0.19.8.1=h0b5b191_1005
10 | - matplotlib=3.2.2=1
11 | - numpy=1.19.5=py36h2aa4a07_1
12 | - pandas=1.1.5=py36h284efc9_0
13 | - proj=7.1.1=h966b41f_3
14 | - python=3.6.13=hffdb5ce_0_cpython
15 | - scikit-learn=0.24.2=py36h2fdd933_0
16 | - scipy=1.5.3=py36h9e8f40b_0
17 | - six=1.16.0=pyh6c4a22f_0
18 | - pip:
19 | - tensorflow==2.5.0
20 | prefix: /home/yons/miniconda3/envs/venv-tf
21 |
--------------------------------------------------------------------------------
/figures/cloudy/cloudy-awei.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/cloudy/cloudy-awei.png
--------------------------------------------------------------------------------
/figures/cloudy/cloudy-obia.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/cloudy/cloudy-obia.png
--------------------------------------------------------------------------------
/figures/cloudy/cloudy-scene.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/cloudy/cloudy-scene.png
--------------------------------------------------------------------------------
/figures/cloudy/cloudy-watnet.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/cloudy/cloudy-watnet.png
--------------------------------------------------------------------------------
/figures/dataset.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/dataset.png
--------------------------------------------------------------------------------
/figures/label_sam_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/label_sam_1.png
--------------------------------------------------------------------------------
/figures/label_sam_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/label_sam_2.png
--------------------------------------------------------------------------------
/figures/mountain/mountain-awei.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/mountain/mountain-awei.png
--------------------------------------------------------------------------------
/figures/mountain/mountain-obia.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/mountain/mountain-obia.png
--------------------------------------------------------------------------------
/figures/mountain/mountain-scene.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/mountain/mountain-scene.png
--------------------------------------------------------------------------------
/figures/mountain/mountain-watnet.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/mountain/mountain-watnet.png
--------------------------------------------------------------------------------
/figures/urban/urban-awei.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/urban/urban-awei.png
--------------------------------------------------------------------------------
/figures/urban/urban-mndwi.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/urban/urban-mndwi.png
--------------------------------------------------------------------------------
/figures/urban/urban-scene.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/urban/urban-scene.png
--------------------------------------------------------------------------------
/figures/urban/urban-watnet.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/urban/urban-watnet.png
--------------------------------------------------------------------------------
/figures/watnet_structure.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/figures/watnet_structure.png
--------------------------------------------------------------------------------
/model/base_model/mobilenetv2.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 | ##### MobileNetV2
4 | relu6 = tf.keras.layers.ReLU(6.)
5 | def _conv_block(inputs, filters, kernel, strides):
6 | x = tf.keras.layers.Conv2D(filters, kernel, padding='same', strides=strides)(inputs)
7 | x = tf.keras.layers.BatchNormalization()(x)
8 | return relu6(x)
9 | def _bottleneck(inputs, filters, kernel, t, s, r=False):
10 | tchannel = inputs.shape[-1] * t
11 | x = _conv_block(inputs, tchannel, (1, 1), (1, 1))
12 | x = tf.keras.layers.DepthwiseConv2D(kernel, strides=(s, s), depth_multiplier=1, padding='same')(x)
13 | x = tf.keras.layers.BatchNormalization()(x)
14 | x = relu6(x)
15 |         x = tf.keras.layers.Conv2D(filters, (1, 1), strides=(1, 1), padding='same')(x) # dimension reduction: this layer is the (linear) bottleneck layer
16 | x = tf.keras.layers.BatchNormalization()(x)
17 | if r:
18 | x = tf.keras.layers.add([x, inputs])
19 | return x
20 |
21 | def _inverted_residual_block(inputs, filters, kernel, t, strides, n):
22 | x = _bottleneck(inputs, filters, kernel, t, strides)
23 | for i in range(1, n):
24 | x = _bottleneck(x, filters, kernel, t, 1, True)
25 | return x
26 |
27 | def MobileNetV2(input_shape, nclasses=2):
28 |
29 | """
30 | # Arguments
31 | input_shape: An integer or tuple/list of 3 integers, shape
32 | of input tensor.
33 |         nclasses: Integer, number of classes.
34 | # Returns
35 | MobileNetv2 model.
36 | """
37 |
38 | inputs = tf.keras.layers.Input(shape=input_shape, name='input')
39 | x = _conv_block(inputs, 32, (3, 3), strides=(2, 2)) # 0.5*size n_layers = 1
40 |
41 | x = _inverted_residual_block(x, 16, (3, 3), t=1, strides=1, n=1) # n_layers = 3
42 | x = _inverted_residual_block(x, 24, (3, 3), t=6, strides=2, n=2) # 0.5*size, n_layers = 6
43 |
44 | x = _inverted_residual_block(x, 32, (3, 3), t=6, strides=2, n=3) # 0.5*size, n_layers = 9
45 | x = _inverted_residual_block(x, 64, (3, 3), t=6, strides=2, n=4) # 0.5*size, n_layers = 12
46 | x = _inverted_residual_block(x, 96, (3, 3), t=6, strides=1, n=3) # n_layers = 9
47 | x = _inverted_residual_block(x, 160, (3, 3), t=6, strides=2, n=3) # 0.5*size, n_layers = 9
48 | x = _inverted_residual_block(x, 320, (3, 3), t=6, strides=1, n=1) # n_layers = 3
49 |
50 | x = _conv_block(x, 1280, (1, 1), strides=(1, 1)) # n_layers = 1
51 | x = tf.keras.layers.GlobalAveragePooling2D()(x)
52 | x = tf.keras.layers.Reshape((1, 1, 1280))(x)
53 | x = tf.keras.layers.Dropout(0.3, name='Dropout')(x)
54 | x = tf.keras.layers.Conv2D(nclasses, (1, 1), padding='same')(x) # n_layers = 1
55 | x = tf.keras.layers.Activation('softmax', name='final_activation')(x)
56 | output = tf.keras.layers.Reshape((nclasses,), name='output')(x)
57 | model = tf.keras.models.Model(inputs, output)
58 | return model
59 |
60 | # model = MobileNetV2(input_shape=(512,512,6), nclasses=2)
61 | # model.summary()
62 |
63 |
--------------------------------------------------------------------------------
/model/base_model/xception.py:
--------------------------------------------------------------------------------
1 |
2 | '''reference: https://github.com/luyanger1799/Amazing-Semantic-Segmentation
3 | '''
4 |
5 | import os
6 | import sys
7 | sys.path.append(os.getcwd())
8 | import tensorflow as tf
9 | import tensorflow.keras.layers as layers
10 | import tensorflow.keras.backend as backend
11 |
12 | class Xception():
13 | def __init__(self, version='Xception', dilation=None, **kwargs):
14 |         """
15 |         Xception backbone used to build DeepLabV3+ with TensorFlow.
16 |         :param dilation: 2-element list; a rate of 1 keeps the stride-2 downsampling of the corresponding stage, other rates switch it to stride 1; None disables the dilation strategy.
17 |         """
18 | super(Xception, self).__init__(**kwargs)
19 | self.version = version
20 | if dilation is None:
21 | self.strides = [2, 2]
22 | else:
23 | self.strides = [2 if dilation[0] == 1 else 1] + [2 if dilation[1] == 1 else 1]
24 | assert len(self.strides) == 2
25 | assert version in ['Xception', 'Xception-DeepLab']
26 |
27 | def __call__(self, inputs, output_stages='c5', **kwargs):
28 | """
29 | call for Xception or Xception-DeepLab.
30 | :param inputs: a 4-D tensor.
31 | :param output_stages: str or a list of str containing the output stages.
32 | :param kwargs: other parameters.
33 | :return: the output of different stages.
34 | """
35 | strides = self.strides
36 | if self.version == 'Xception-DeepLab':
37 | rm_pool = True
38 | num_middle_flow = 16
39 | else:
40 | rm_pool = False
41 | num_middle_flow = 8
42 |
43 | channel_axis = 1 if backend.image_data_format() == 'channels_first' else -1
44 |
45 | x = layers.Conv2D(32, (3, 3),
46 | strides=(2, 2),
47 | use_bias=False,
48 | padding='same',
49 | name='block1_conv1')(inputs)
50 | x = layers.BatchNormalization(axis=channel_axis, name='block1_conv1_bn')(x)
51 | x = layers.Activation('relu', name='block1_conv1_act')(x)
52 | x = layers.Conv2D(64, (3, 3), use_bias=False, padding='same', name='block1_conv2')(x)
53 | x = layers.BatchNormalization(axis=channel_axis, name='block1_conv2_bn')(x)
54 | x = layers.Activation('relu', name='block1_conv2_act')(x)
55 |
56 | residual = layers.Conv2D(128, (1, 1),
57 | strides=(2, 2),
58 | padding='same',
59 | use_bias=False)(x)
60 | residual = layers.BatchNormalization(axis=channel_axis)(residual)
61 |
62 | x = layers.SeparableConv2D(128, (3, 3),
63 | padding='same',
64 | use_bias=False,
65 | name='block2_sepconv1')(x)
66 | x = layers.BatchNormalization(axis=channel_axis, name='block2_sepconv1_bn')(x)
67 | x = layers.Activation('relu', name='block2_sepconv2_act')(x)
68 | x = layers.SeparableConv2D(128, (3, 3),
69 | padding='same',
70 | use_bias=False,
71 | name='block2_sepconv2')(x)
72 | x = layers.BatchNormalization(axis=channel_axis, name='block2_sepconv2_bn')(x)
73 |
74 | x = layers.MaxPooling2D((3, 3),
75 | strides=(2, 2),
76 | padding='same',
77 | name='block2_pool')(x)
78 | x = layers.add([x, residual])
79 | c1 = x
80 |
81 | residual = layers.Conv2D(256, (1, 1), strides=(2, 2),
82 | padding='same', use_bias=False)(x)
83 | residual = layers.BatchNormalization(axis=channel_axis)(residual)
84 |
85 | x = layers.Activation('relu', name='block3_sepconv1_act')(x)
86 | x = layers.SeparableConv2D(256, (3, 3),
87 | padding='same',
88 | use_bias=False,
89 | name='block3_sepconv1')(x)
90 | x = layers.BatchNormalization(axis=channel_axis, name='block3_sepconv1_bn')(x)
91 | x = layers.Activation('relu', name='block3_sepconv2_act')(x)
92 | x = layers.SeparableConv2D(256, (3, 3),
93 | padding='same',
94 | use_bias=False,
95 | name='block3_sepconv2')(x)
96 | x = layers.BatchNormalization(axis=channel_axis, name='block3_sepconv2_bn')(x)
97 |
98 | if rm_pool:
99 | x = layers.Activation('relu', name='block3_sepconv3_act')(x)
100 | x = layers.SeparableConv2D(256, (3, 3),
101 | strides=(2, 2),
102 | padding='same',
103 | use_bias=False,
104 | name='block3_sepconv3')(x)
105 | x = layers.BatchNormalization(axis=channel_axis, name='block3_sepconv3_bn')(x)
106 | else:
107 | x = layers.MaxPooling2D((3, 3), strides=(2, 2),
108 | padding='same',
109 | name='block3_pool')(x)
110 | x = layers.add([x, residual])
111 | c2 = x
112 |
113 | residual = layers.Conv2D(728, (1, 1),
114 | strides=strides[0],
115 | padding='same',
116 | use_bias=False)(x)
117 | residual = layers.BatchNormalization(axis=channel_axis)(residual)
118 |
119 | x = layers.Activation('relu', name='block4_sepconv1_act')(x)
120 | x = layers.SeparableConv2D(728, (3, 3),
121 | padding='same',
122 | use_bias=False,
123 | name='block4_sepconv1')(x)
124 | x = layers.BatchNormalization(axis=channel_axis, name='block4_sepconv1_bn')(x)
125 | x = layers.Activation('relu', name='block4_sepconv2_act')(x)
126 | x = layers.SeparableConv2D(728, (3, 3),
127 | padding='same',
128 | use_bias=False,
129 | name='block4_sepconv2')(x)
130 | x = layers.BatchNormalization(axis=channel_axis, name='block4_sepconv2_bn')(x)
131 |
132 | if rm_pool:
133 | x = layers.Activation('relu', name='block4_sepconv3_act')(x)
134 | x = layers.SeparableConv2D(728, (3, 3),
135 | strides=strides[0],
136 | padding='same',
137 | use_bias=False,
138 | name='block4_sepconv3')(x)
139 | x = layers.BatchNormalization(axis=channel_axis, name='block4_sepconv3_bn')(x)
140 | else:
141 | x = layers.MaxPooling2D((3, 3), strides=(2, 2),
142 | padding='same',
143 | name='block4_pool')(x)
144 | x = layers.add([x, residual])
145 | c3 = x
146 |
147 | for i in range(num_middle_flow):
148 | residual = x
149 | prefix = 'block' + str(i + 5)
150 |
151 | x = layers.Activation('relu', name=prefix + '_sepconv1_act')(x)
152 | x = layers.SeparableConv2D(728, (3, 3),
153 | padding='same',
154 | use_bias=False,
155 | name=prefix + '_sepconv1')(x)
156 | x = layers.BatchNormalization(axis=channel_axis,
157 | name=prefix + '_sepconv1_bn')(x)
158 | x = layers.Activation('relu', name=prefix + '_sepconv2_act')(x)
159 | x = layers.SeparableConv2D(728, (3, 3),
160 | padding='same',
161 | use_bias=False,
162 | name=prefix + '_sepconv2')(x)
163 | x = layers.BatchNormalization(axis=channel_axis,
164 | name=prefix + '_sepconv2_bn')(x)
165 | x = layers.Activation('relu', name=prefix + '_sepconv3_act')(x)
166 | x = layers.SeparableConv2D(728, (3, 3),
167 | padding='same',
168 | use_bias=False,
169 | name=prefix + '_sepconv3')(x)
170 | x = layers.BatchNormalization(axis=channel_axis,
171 | name=prefix + '_sepconv3_bn')(x)
172 |
173 | x = layers.add([x, residual])
174 | c4 = x
175 |
176 | residual = layers.Conv2D(1024, (1, 1), strides=strides[1],
177 | padding='same', use_bias=False)(x)
178 | residual = layers.BatchNormalization(axis=channel_axis)(residual)
179 |
180 | id = 5 + num_middle_flow
181 | x = layers.Activation('relu', name='block{id}_sepconv1_act'.format(id=id))(x)
182 | x = layers.SeparableConv2D(728, (3, 3),
183 | padding='same',
184 | use_bias=False,
185 | name='block{id}_sepconv1'.format(id=id))(x)
186 | x = layers.BatchNormalization(axis=channel_axis, name='block{id}_sepconv1_bn'.format(id=id))(x)
187 | x = layers.Activation('relu', name='block{id}_sepconv2_act'.format(id=id))(x)
188 | x = layers.SeparableConv2D(1024, (3, 3),
189 | padding='same',
190 | use_bias=False,
191 | name='block{id}_sepconv2'.format(id=id))(x)
192 | x = layers.BatchNormalization(axis=channel_axis, name='block{id}_sepconv2_bn'.format(id=id))(x)
193 |
194 | if rm_pool:
195 | x = layers.Activation('relu', name='block{id}_sepconv3_act'.format(id=id))(x)
196 | x = layers.SeparableConv2D(1024, (3, 3),
197 | strides=strides[1],
198 | padding='same',
199 | use_bias=False,
200 | name='block{id}_sepconv3'.format(id=id))(x)
201 | x = layers.BatchNormalization(axis=channel_axis, name='block{id}_sepconv3_bn'.format(id=id))(x)
202 | else:
203 | x = layers.MaxPooling2D((3, 3),
204 | strides=(2, 2),
205 | padding='same',
206 | name='block{id}_pool'.format(id=id))(x)
207 | x = layers.add([x, residual])
208 |
209 | x = layers.SeparableConv2D(1536, (3, 3),
210 | padding='same',
211 | use_bias=False,
212 | name='block{id}_sepconv1'.format(id=id + 1))(x)
213 | x = layers.BatchNormalization(axis=channel_axis, name='block{id}_sepconv1_bn'.format(id=id + 1))(x)
214 | x = layers.Activation('relu', name='block{id}_sepconv1_act'.format(id=id + 1))(x)
215 |
216 | if self.version == 'Xception-DeepLab':
217 | x = layers.SeparableConv2D(1536, (3, 3),
218 | padding='same',
219 | use_bias=False,
220 | name='block{id}_sepconv1_1'.format(id=id + 1))(x)
221 | x = layers.BatchNormalization(axis=channel_axis, name='block{id}_sepconv1_1_bn'.format(id=id + 1))(x)
222 | x = layers.Activation('relu', name='block{id}_sepconv1_1_act'.format(id=id + 1))(x)
223 |
224 | x = layers.SeparableConv2D(2048, (3, 3),
225 | padding='same',
226 | use_bias=False,
227 | name='block{id}_sepconv2'.format(id=id + 1))(x)
228 | x = layers.BatchNormalization(axis=channel_axis, name='block{id}_sepconv2_bn'.format(id=id + 1))(x)
229 | x = layers.Activation('relu', name='block{id}_sepconv2_act'.format(id=id + 1))(x)
230 |
231 | c5 = x
232 |
233 | self.outputs = {'c1': c1,
234 | 'c2': c2,
235 | 'c3': c3,
236 | 'c4': c4,
237 | 'c5': c5}
238 |
239 |         if not isinstance(output_stages, list):
240 | return self.outputs[output_stages]
241 | else:
242 | return [self.outputs[ci] for ci in output_stages]
243 |
244 |
245 | # input = tf.ones([4, 256, 256, 4],tf.float32)
246 | # model = Xception()
247 | # oupt = model(inputs=input, output_stages=['c1', 'c5'])
248 | # print(oupt[0].shape, oupt[1].shape)
249 | # print(model.summary())
250 |
251 |
--------------------------------------------------------------------------------
/model/pretrained/watnet.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinluo2018/WatNet/7ff53a8e89dcf74c24392f6ba8f31d9277662122/model/pretrained/watnet.h5
--------------------------------------------------------------------------------
/model/seg_model/deeplabv3_plus.py:
--------------------------------------------------------------------------------
1 |
2 | """
3 | @Reference: https://github.com/luyanger1799/amazing-semantic-segmentation
4 | """
5 | import os
6 | import sys
7 | sys.path.append(os.getcwd())
8 | import tensorflow as tf
9 | import tensorflow.keras.layers as layers
10 | import tensorflow.keras.models as models
11 | import tensorflow.keras.backend as backend
12 | from model.base_model.xception import Xception
13 |
14 | class GlobalAveragePooling2D(layers.GlobalAveragePooling2D):
15 | def __init__(self, keep_dims=False, **kwargs):
16 | super(GlobalAveragePooling2D, self).__init__(**kwargs)
17 | self.keep_dims = keep_dims
18 | def call(self, inputs):
19 | if self.keep_dims is False:
20 | return super(GlobalAveragePooling2D, self).call(inputs)
21 | else:
22 | return backend.mean(inputs, axis=[1, 2], keepdims=True)
23 |
24 | class Concatenate(layers.Concatenate):
25 | def __init__(self, out_size=None, axis=-1, name=None):
26 | super(Concatenate, self).__init__(axis=axis, name=name)
27 | self.out_size = out_size
28 | def call(self, inputs):
29 | return backend.concatenate(inputs, self.axis)
30 |
31 | def _conv_bn_relu(x, filters, kernel_size, strides=1):
32 | x = layers.Conv2D(filters, kernel_size, strides=strides, padding='same')(x)
33 | x = layers.BatchNormalization()(x)
34 | x = layers.ReLU()(x)
35 | return x
36 |
37 | def _aspp(x, out_filters, aspp_size):
38 | xs = list()
39 | x1 = layers.Conv2D(out_filters, 1, strides=1)(x)
40 | xs.append(x1)
41 |
42 | for i in range(3):
43 | xi = layers.Conv2D(out_filters, 3, strides=1, padding='same', dilation_rate=6 * (i + 1))(x)
44 | xs.append(xi)
45 | img_pool = GlobalAveragePooling2D(keep_dims=True)(x)
46 | img_pool = layers.Conv2D(out_filters, 1, 1, kernel_initializer='he_normal')(img_pool)
47 | img_pool = layers.UpSampling2D(size=aspp_size, interpolation='bilinear')(img_pool)
48 | xs.append(img_pool)
49 |
50 | x = Concatenate(out_size=aspp_size)(xs)
51 | x = layers.Conv2D(out_filters, 1, strides=1, kernel_initializer='he_normal')(x)
52 | x = layers.BatchNormalization()(x)
53 | return x
54 |
55 | def deeplabv3_plus(nclasses, input_shape=(256,256,6)):
56 | dilation = [1, 2]
57 |     inputs = layers.Input(shape=input_shape)
58 | aspp_size = (input_shape[0] // 16, input_shape[1] // 16)
59 | encoder = Xception(version='Xception-DeepLab', dilation=dilation)
60 |     c2, c5 = encoder(inputs, output_stages=['c1', 'c5'])
61 | x = _aspp(c5, 256, aspp_size)
62 | x = layers.Dropout(rate=0.5)(x)
63 |
64 | x = layers.UpSampling2D(size=(4, 4), interpolation='bilinear')(x)
65 | x = _conv_bn_relu(x, 48, 1, strides=1)
66 |
67 | x = Concatenate(out_size=aspp_size)([x, c2])
68 | x = _conv_bn_relu(x, 256, 3, 1)
69 | x = layers.Dropout(rate=0.5)(x)
70 |
71 | x = _conv_bn_relu(x, 256, 3, 1)
72 | x = layers.Dropout(rate=0.1)(x)
73 | if nclasses == 2:
74 | x = layers.Conv2D(1, 1, strides=1, activation= 'sigmoid')(x)
75 | else:
76 | x = layers.Conv2D(nclasses, 1, strides=1, activation='softmax')(x)
77 | x = layers.UpSampling2D(size=(4, 4), interpolation='bilinear')(x)
78 | outputs = x
79 | # return outputs
80 |     return models.Model(inputs, outputs, name='deeplabv3_plus')
81 |
82 |
83 | # input_img = tf.ones([4, 512, 512, 6], tf.float32)
84 | # model = deeplabv3_plus(nclasses=2, input_shape=(512,512,6))
85 | # oupt = model(inputs=input_img)
86 | # print(oupt.shape)
87 | # model.summary()
88 |
89 |
--------------------------------------------------------------------------------
/model/seg_model/deepwatermapv2.py:
--------------------------------------------------------------------------------
1 |
2 | ''' Implementation of DeepWaterMapV2.
3 | The model architecture is explained in:
4 | L.F. Isikdogan, A.C. Bovik, and P. Passalacqua,
5 | "Seeing Through the Clouds with DeepWaterMap," IEEE GRSL, 2019.
6 | '''
7 |
8 | import tensorflow as tf
9 |
10 | def deepwatermapv2(min_width=4):
11 | inputs = tf.keras.layers.Input(shape=[None, None, 6])
12 |
13 | def conv_block(x, num_filters, kernel_size, stride=1, use_relu=True):
14 | x = tf.keras.layers.Conv2D(
15 | filters=num_filters,
16 | kernel_size=kernel_size,
17 | kernel_initializer='he_uniform',
18 | strides=stride,
19 | padding='same',
20 | use_bias=False)(x)
21 | x = tf.keras.layers.BatchNormalization()(x)
22 | if use_relu:
23 | x = tf.keras.layers.Activation('relu')(x)
24 | return x
25 |
26 | def downscaling_unit(x):
27 | num_filters = int(x.get_shape()[-1]) * 4
28 | x_1 = conv_block(x, num_filters, kernel_size=5, stride=2)
29 | x_2 = conv_block(x_1, num_filters, kernel_size=3, stride=1)
30 | x = tf.keras.layers.Add()([x_1, x_2])
31 | return x
32 |
33 | def upscaling_unit(x):
34 | num_filters = int(x.get_shape()[-1]) // 4
35 | x = tf.keras.layers.Lambda(lambda x: tf.nn.depth_to_space(x, 2))(x)
36 | x_1 = conv_block(x, num_filters, kernel_size=3)
37 | x_2 = conv_block(x_1, num_filters, kernel_size=3)
38 | x = tf.keras.layers.Add()([x_1, x_2])
39 | return x
40 |
41 | def bottleneck_unit(x):
42 | num_filters = int(x.get_shape()[-1])
43 | x_1 = conv_block(x, num_filters, kernel_size=3)
44 | x_2 = conv_block(x_1, num_filters, kernel_size=3)
45 | x = tf.keras.layers.Add()([x_1, x_2])
46 | return x
47 |
48 | # model flow
49 | skip_connections = []
50 | num_filters = min_width
51 |
52 | # first layer
53 | x = conv_block(inputs, num_filters, kernel_size=1, use_relu=False)
54 | skip_connections.append(x)
55 |
56 | # encoder
57 | for i in range(4):
58 | x = downscaling_unit(x)
59 | skip_connections.append(x)
60 |
61 | # bottleneck
62 | x = bottleneck_unit(x)
63 |
64 | # decoder
65 | for i in range(4):
66 | x = tf.keras.layers.Add()([x, skip_connections.pop()])
67 | x = upscaling_unit(x)
68 |
69 | # last layer
70 | x = tf.keras.layers.Add()([x, skip_connections.pop()])
71 | x = conv_block(x, 1, kernel_size=1, use_relu=False)
72 | x = tf.keras.layers.Activation('sigmoid')(x)
73 |
74 | model = tf.keras.Model(inputs=inputs, outputs=x)
75 | return model
76 |
77 | # model = deepwatermapv2(min_width=4)
78 | # model.summary()
--------------------------------------------------------------------------------
/model/seg_model/watnet.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | sys.path.append(os.getcwd())
4 | import tensorflow as tf
5 | import tensorflow.keras.backend as backend
6 | import tensorflow.keras.layers as layers
7 | import tensorflow.keras.models as models
8 | from model.base_model.mobilenetv2 import MobileNetV2
9 |
10 | ##### improved DeepLabV3+
11 | def upsample(tensor, size_1):
12 | '''bilinear upsampling'''
13 | y = tf.image.resize(images=tensor, size=size_1)
14 | return y
15 |
16 | def aspp_2(tensor):
17 | '''atrous spatial pyramid pooling'''
18 | dims = backend.int_shape(tensor)
19 | y_pool = tf.keras.layers.AveragePooling2D(pool_size=(
20 | dims[1], dims[2]), name='average_pooling')(tensor)
21 | y_pool = tf.keras.layers.Conv2D(filters=128, kernel_size=1,
22 | padding='same',
23 | kernel_initializer='he_normal',
24 | name='pool_1x1conv2d',
25 | use_bias=False)(y_pool)
26 | y_pool = tf.keras.layers.BatchNormalization(name=f'bn_1')(y_pool)
27 | y_pool = tf.keras.layers.Activation('relu', name=f'relu_1')(y_pool)
28 | y_pool = upsample(tensor=y_pool, size_1=[dims[1], dims[2]])
29 |
30 | ## 1x1 conv
31 | y_1 = tf.keras.layers.Conv2D(filters=128, kernel_size=1,
32 | dilation_rate=1, padding='same',
33 | kernel_initializer='he_normal',
34 | name='ASPP_conv2d_d1',
35 | use_bias=False)(tensor)
36 | y_1 = tf.keras.layers.BatchNormalization(name=f'bn_2')(y_1)
37 | y_1 = tf.keras.layers.Activation('relu', name=f'relu_2')(y_1)
38 |
39 | ## 3x3 dilated conv
40 | y_6 = tf.keras.layers.Conv2D(filters=128, kernel_size=3,
41 | dilation_rate=6, padding='same',
42 | kernel_initializer='he_normal',
43 | name='ASPP_conv2d_d6',
44 | use_bias=False)(tensor)
45 | y_6 = tf.keras.layers.BatchNormalization(name=f'bn_3')(y_6)
46 | y_6 = tf.keras.layers.Activation('relu', name=f'relu_3')(y_6)
47 |
48 | ## 3x3 dilated conv
49 | y_12 = tf.keras.layers.Conv2D(filters=128, kernel_size=3,
50 | dilation_rate=12, padding='same',
51 | kernel_initializer='he_normal',
52 | name='ASPP_conv2d_d12',
53 | use_bias=False)(tensor)
54 | y_12 = tf.keras.layers.BatchNormalization(name=f'bn_4')(y_12)
55 | y_12 = tf.keras.layers.Activation('relu', name=f'relu_4')(y_12)
56 |
57 | ## 3x3 dilated conv
58 | y_18 = tf.keras.layers.Conv2D(filters=128, kernel_size=3,
59 | dilation_rate=18, padding='same',
60 | kernel_initializer='he_normal',
61 | name='ASPP_conv2d_d18',
62 | use_bias=False)(tensor)
63 | y_18 = tf.keras.layers.BatchNormalization(name=f'bn_5')(y_18)
64 | y_18 = tf.keras.layers.Activation('relu', name=f'relu_5')(y_18)
65 |
66 | ## concat
67 | y = tf.keras.layers.concatenate([y_pool, y_1, y_6, y_12, y_18], name='ASPP_concat')
68 | y = tf.keras.layers.Conv2D(filters=128, kernel_size=1,
69 | dilation_rate=1, padding='same',
70 | kernel_initializer='he_normal',
71 | name='ASPP_conv2d_final',
72 | use_bias=False)(y)
73 | y = tf.keras.layers.BatchNormalization(name=f'bn_final')(y)
74 | y = tf.keras.layers.Activation('relu', name=f'relu_final')(y)
75 | return y
76 |
77 | def watnet(input_shape, nclasses=2):
78 |     '''
79 |     Arguments:
80 |         input_shape: (img_height, img_width, img_channel)
81 |         nclasses: number of classes.
82 |     Note:
83 |         d_feature, m_feature, l_feature: layer indices of the deep,
84 |         middle, and low layers of the MobileNetV2 backbone.
85 |     '''
86 | print('*** Building watnet network ***')
87 | d_feature, m_feature, l_feature = 91, 24, 11
88 | (img_height, img_width, img_channel) = input_shape
89 | ## deep features
90 | base_model = MobileNetV2(input_shape, nclasses)
91 | image_features = base_model.get_layer(index = d_feature).output
92 | x_a = aspp_2(image_features)
93 | x_a = upsample(tensor=x_a, size_1=[img_height // 4, img_width // 4])
94 | ## middle features (1/4 patch size)
95 | x_b = base_model.get_layer(index = m_feature).output
96 | x_b = layers.Conv2D(filters=48, kernel_size=1, padding='same',
97 | kernel_initializer='he_normal', name='low_level_projection', use_bias=False)(x_b)
98 | x_b = layers.BatchNormalization(name=f'bn_low_level_projection')(x_b)
99 | x_b = layers.Activation('relu', name='low_level_activation')(x_b)
100 |     ## low features (1/2 patch size)
101 | x_c = base_model.get_layer(index = l_feature).output
102 | x_c = layers.Conv2D(filters=48,
103 | kernel_size=1,
104 | padding='same',
105 | kernel_initializer='he_normal',
106 | name='low_level_projection_2',
107 | use_bias=False)(x_c)
108 | x_c = layers.BatchNormalization(name=f'bn_low_level_projection_2')(x_c)
109 | x_c = layers.Activation('relu', name='low_level_activation_2')(x_c)
110 | ## concat
111 | x = layers.concatenate([x_a, x_b], name='decoder_concat_1')
112 | x = layers.Conv2D(filters=128,
113 | kernel_size=3,
114 | padding='same',
115 | activation='relu',
116 | kernel_initializer='he_normal',
117 | name='decoder_conv2d_1',
118 | use_bias=False)(x)
119 | x = layers.BatchNormalization(name=f'bn_decoder_1')(x)
120 | x = layers.Activation('relu', name='activation_decoder_1')(x)
121 | x = layers.Conv2D(filters=128,
122 | kernel_size=3,
123 | padding='same',
124 | activation='relu',
125 | kernel_initializer='he_normal',
126 | name='decoder_conv2d_2',
127 | use_bias=False)(x)
128 | x = layers.BatchNormalization(name=f'bn_decoder_2')(x)
129 | x = layers.Activation('relu', name='activation_decoder_2')(x)
130 | x = upsample(x, [img_height//2, img_width//2])
131 | ## concat
132 | x_2 = layers.concatenate([x, x_c], name='decoder_concat_3')
133 | x_2 = layers.Conv2DTranspose(filters=128,
134 | kernel_size=3,
135 | strides=2,
136 | padding='same',
137 | kernel_initializer='he_normal',
138 | name='decoder_deconv2d', use_bias=False)(x_2)
139 | x_2 = layers.BatchNormalization(name=f'bn_decoder_4')(x_2)
140 | x_2 = layers.Activation('relu', name='activation_decoder_4')(x_2)
141 | last = tf.keras.layers.Conv2D(1, (1,1),
142 | strides=1,
143 | padding='same',
144 | kernel_initializer='he_normal',
145 | activation= 'sigmoid') ## (bs, 256, 256, 1)
146 | x_2 = last(x_2)
147 | model = models.Model(inputs=base_model.input, outputs=x_2, name='watnet')
148 | print(f'*** Output_Shape => {model.output_shape} ***')
149 | return model
150 |
151 | # model = watnet(input_shape=(512, 512, 6), nclasses=2)
152 | # model.summary()
153 |
154 |
--------------------------------------------------------------------------------
/notebooks/config.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from utils.acc_patch import miou_binary
3 | import math
4 | ## ---- root dir ---- ##
5 | root = '/home/yons/Desktop/developer-luo/WatNet' # local server
6 | # root = '/content/drive/My Drive/WatNet' # colab
7 |
8 | ## ---- hyper-parameters for model training ---- ##
9 | patch_size = 512
10 | num_bands = 6
11 | epochs = 200
12 | lr = 0.002
13 | batch_size = 4
14 | buffer_size = 200
15 | # size_tra_scene = 64
16 | size_scene = 95
17 | step_per_epoch = math.ceil(size_scene/batch_size)
18 |
19 |
20 | ## ---- configuration for model training ---- ##
21 | class lr_schedule(tf.keras.optimizers.schedules.LearningRateSchedule):
22 | def __init__(self, initial_learning_rate, steps_all):
23 | self.initial_learning_rate = initial_learning_rate
24 | self.steps_all = steps_all
25 | def __call__(self, step):
26 | return self.initial_learning_rate*((1-step/self.steps_all)**0.9)
27 | # lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
28 | # initial_learning_rate=lr,
29 | # decay_steps=100, # 1 step = 1 batch data
30 | # decay_rate=0.9)
31 | loss_bce = tf.keras.losses.BinaryCrossentropy()
32 | opt_adam = tf.keras.optimizers.Adam(learning_rate=\
33 | lr_schedule(lr,step_per_epoch*epochs))
34 |
35 | ## ---- metrics ---- ##
36 | tra_loss = tf.keras.metrics.Mean(name="tra_loss")
37 | tra_oa = tf.keras.metrics.BinaryAccuracy('tra_oa')
38 | tra_miou = miou_binary(num_classes=2, name='tra_miou')
39 | val_loss = tf.keras.metrics.Mean(name="test_loss")
40 | val_oa = tf.keras.metrics.BinaryAccuracy('test_oa')
41 | val_miou = miou_binary(num_classes=2, name='test_miou')
42 |
43 |
44 |
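45 | ## A hedged sketch of how the objects above could fit a custom training
46 | ## step (the actual loop presumably lives in notebooks/trainer.ipynb):
47 | # @tf.function
48 | # def train_step(model, x, y):
49 | #     with tf.GradientTape() as tape:
50 | #         pred = model(x, training=True)
51 | #         loss = loss_bce(y, pred)
52 | #     grads = tape.gradient(loss, model.trainable_variables)
53 | #     opt_adam.apply_gradients(zip(grads, model.trainable_variables))
54 | #     tra_loss(loss)
55 | #     tra_oa(y, pred)
56 | #     tra_miou(y, pred)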
--------------------------------------------------------------------------------
/utils/acc_patch.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 | class miou_binary(tf.keras.metrics.MeanIoU):
4 | def update_state(self, y_true, y_pred, sample_weight=None):
5 | y_pred = tf.where(y_pred>0.5, 1, 0)
6 | super().update_state(y_true, y_pred, sample_weight)
7 |
8 |
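9 | ## A usage sketch (hypothetical tensors): the metric thresholds the sigmoid
10 | ## output at 0.5 before accumulating the IoU over the two classes.
11 | # miou = miou_binary(num_classes=2, name='miou')
12 | # y_true = tf.constant([[0], [1], [1], [0]])
13 | # y_pred = tf.constant([[0.2], [0.8], [0.6], [0.4]])   # sigmoid probabilities
14 | # miou.update_state(y_true, y_pred)
15 | # print(miou.result().numpy())   # 1.0: both classes fully correct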
--------------------------------------------------------------------------------
/utils/acc_pixel.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from sklearn.metrics import accuracy_score, confusion_matrix
3 |
4 | def acc_matrix(cla_map, sam_pixel, id_label):
5 |     '''
6 |     Arguments:
7 |         cla_map: classification result of the full image
8 |         sam_pixel: array(num_samples, 3); the columns are the row, col, and label of each sample.
9 |         id_label: the class of interest (0, 1, 2, ...)
10 |     Return:
11 |         the overall accuracy, the producer's and user's accuracy of id_label, and the confusion matrix
12 |     '''
13 | sam_result = []
14 | num_cla = sam_pixel[:,2].max()+1
15 | labels = list(range(num_cla))
16 | for i in range(sam_pixel.shape[0]):
17 | sam_result.append(cla_map[sam_pixel[i,0], sam_pixel[i,1]])
18 | sam_result = np.array(sam_result)
19 | acc_oa = np.around(accuracy_score(sam_pixel[:,2], sam_result), 4)
20 | confus_mat = confusion_matrix(sam_pixel[:,2], sam_result, labels=labels)
21 | acc_prod=np.around(confus_mat[id_label,id_label]/confus_mat[id_label,:].sum(), 4)
22 | acc_user=np.around(confus_mat[id_label,id_label]/confus_mat[:,id_label].sum(), 4)
23 |
24 | return acc_oa, acc_prod, acc_user, confus_mat
25 |
26 |
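27 | ## A minimal sketch with a hypothetical 2x2 map and four samples (label 1 = water):
28 | # cla_map = np.array([[0, 1], [1, 1]])
29 | # sam_pixel = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]])
30 | # oa, prod, user, cm = acc_matrix(cla_map, sam_pixel, id_label=1)
31 | # print(oa, prod, user)   # 0.75 1.0 0.6667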
--------------------------------------------------------------------------------
/utils/geotif_io.py:
--------------------------------------------------------------------------------
1 | ## author: luo xin, created: 2021.6.18, modified: 2021.7.14
2 |
3 | import numpy as np
4 | from osgeo import gdal
5 | from osgeo import osr
6 |
7 | ### tiff image reading
8 | def readTiff(path_in):
9 | '''
10 | return:
11 |         img: numpy array, and img_info: dict with the geo-extent (x_min, x_max, y_min, y_max),
12 |         geo-transform, projection (EPSG code), and dimensions (row, col, bands)
13 | '''
14 | RS_Data=gdal.Open(path_in)
15 | im_col = RS_Data.RasterXSize #
16 | im_row = RS_Data.RasterYSize #
17 | im_bands =RS_Data.RasterCount #
18 | im_geotrans = RS_Data.GetGeoTransform() #
19 | im_proj = RS_Data.GetProjection() #
20 |     img_array = RS_Data.ReadAsArray(0, 0, im_col, im_row).astype(float)  ## np.float is deprecated
21 | left = im_geotrans[0]
22 | up = im_geotrans[3]
23 | right = left + im_geotrans[1] * im_col + im_geotrans[2] * im_row
24 | bottom = up + im_geotrans[5] * im_row + im_geotrans[4] * im_col
25 | extent = (left, right, bottom, up)
26 |     epsg_code = osr.SpatialReference(wkt=im_proj).GetAttrValue('AUTHORITY',1)
27 |
28 | img_info = {'geoextent': extent, 'geotrans':im_geotrans, \
29 |                 'geosrs': epsg_code, 'row': im_row, 'col': im_col,\
30 | 'bands': im_bands}
31 |
32 |     if im_bands > 1:
33 |         ## (band, row, col) -> (row, col, band)
34 |         img_array = np.transpose(img_array, (1, 2, 0))
35 | 
36 |     return img_array, img_info
37 |
38 | ### .tiff image write
39 | def writeTiff(im_data, im_geotrans, im_geosrs, path_out):
40 | '''
41 | input:
42 |         im_data: two dimensions (order: row, col) or three dimensions (order: row, col, band)
43 |         im_geosrs: EPSG code corresponding to the image spatial reference system.
44 | '''
45 | im_data = np.squeeze(im_data)
46 | if 'int8' in im_data.dtype.name:
47 | datatype = gdal.GDT_Byte
48 | elif 'int16' in im_data.dtype.name:
49 | datatype = gdal.GDT_UInt16
50 | else:
51 | datatype = gdal.GDT_Float32
52 | if len(im_data.shape) == 3:
53 | im_data = np.transpose(im_data, (2, 0, 1))
54 | im_bands, im_height, im_width = im_data.shape
55 | else:
56 | im_bands,(im_height, im_width) = 1,im_data.shape
57 | driver = gdal.GetDriverByName("GTiff")
58 | dataset = driver.Create(path_out, im_width, im_height, im_bands, datatype)
59 |     if dataset is not None:
60 |         dataset.SetGeoTransform(im_geotrans)
61 |         dataset.SetProjection("EPSG:" + str(im_geosrs))
62 | if im_bands > 1:
63 | for i in range(im_bands):
64 | dataset.GetRasterBand(i+1).WriteArray(im_data[i])
65 | del dataset
66 | else:
67 | dataset.GetRasterBand(1).WriteArray(im_data)
68 | del dataset
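69 | 
70 | ## A round-trip sketch, assuming the demo image shipped in data/test-demo:
71 | # img, info = readTiff('data/test-demo/T49QGF_20191017_6Bands_Urban_Subs.tif')
72 | # print(info['row'], info['col'], info['bands'], info['geosrs'])
73 | # writeTiff(img.astype(np.int16), info['geotrans'], info['geosrs'], 'copy.tif')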
--------------------------------------------------------------------------------
/utils/imgPatch.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | class imgPatch():
4 | '''
5 | author: xin luo, date: 2021.3.19
6 | description: 1. remote sensing image to multi-scale patches
7 | 2. patches to remote sensing image
8 | '''
9 | def __init__(self, img, patch_size, edge_overlay):
10 |         ''' edge_overlay: total overlap between adjacent patches (half is cropped
11 |             from each patch edge when stitching back); should be an even number. '''
12 | self.patch_size = patch_size
13 | self.edge_overlay = edge_overlay
14 | self.img = img[:,:,np.newaxis] if len(img.shape) == 2 else img
15 | self.img_row = img.shape[0]
16 | self.img_col = img.shape[1]
17 |
18 | def toPatch(self):
19 | '''
20 | description: convert img to patches.
21 | return:
22 | patch_list, contains all generated patches.
23 | start_list, contains all start positions(row, col) of the generated patches.
24 | '''
25 | patch_list = []
26 | start_list = []
27 | patch_step = self.patch_size - self.edge_overlay
28 | img_expand = np.pad(self.img, ((self.edge_overlay, patch_step),
29 | (self.edge_overlay, patch_step), (0,0)), 'constant')
30 | img_patch_row = (img_expand.shape[0]-self.edge_overlay)//patch_step
31 | img_patch_col = (img_expand.shape[1]-self.edge_overlay)//patch_step
32 | for i in range(img_patch_row):
33 | for j in range(img_patch_col):
34 | patch_list.append(img_expand[i*patch_step:i*patch_step+self.patch_size,
35 | j*patch_step:j*patch_step+self.patch_size, :])
36 | start_list.append([i*patch_step-self.edge_overlay, j*patch_step-self.edge_overlay])
37 | return patch_list, start_list, img_patch_row, img_patch_col
38 |
39 | def higher_patch_crop(self, higher_patch_size, start_list):
40 | '''
41 | author: xin luo, date: 2021.3.19
42 |         description: crop the higher-scale patch (centered on the given lower-scale patch)
43 |         input:
44 |             higher_patch_size, int, the higher-scale patch size
45 |                 (the lower-scale patch size and the image are taken from the instance)
46 |             start_list, list, the start positions (row, col) corresponding to the original
47 |                 image (generated by the toPatch function)
48 | return:
49 | higher_patch_list, list, contains higher-scale patches corresponding to the lower-scale patches.
50 | '''
51 | higher_patch_list = []
52 | radius_bias = higher_patch_size//2-self.patch_size//2
53 | patch_step = self.patch_size - self.edge_overlay
54 | img_expand = np.pad(self.img, ((self.edge_overlay, patch_step), (self.edge_overlay, patch_step), (0,0)), 'constant')
55 | img_expand_higher = np.pad(img_expand, ((radius_bias, radius_bias), (radius_bias, radius_bias), (0,0)), 'constant')
56 | start_list_new = list(np.array(start_list)+self.edge_overlay+radius_bias)
57 | for start_i in start_list_new:
58 | higher_row_start, higher_col_start = start_i[0]-radius_bias, start_i[1]-radius_bias
59 | higher_patch = img_expand_higher[higher_row_start:higher_row_start+higher_patch_size,higher_col_start:higher_col_start+higher_patch_size,:]
60 | higher_patch_list.append(higher_patch)
61 | return higher_patch_list
62 |
63 | def toImage(self, patch_list, img_patch_row, img_patch_col):
64 | patch_list = [patch[self.edge_overlay//2:-self.edge_overlay//2, self.edge_overlay//2:-self.edge_overlay//2,:]
65 | for patch in patch_list]
66 | patch_list = [np.hstack((patch_list[i*img_patch_col:i*img_patch_col+img_patch_col]))
67 | for i in range(img_patch_row)]
68 | img_array = np.vstack(patch_list)
69 | img_array = img_array[self.edge_overlay//2:self.img_row+self.edge_overlay//2, \
70 | self.edge_overlay//2:self.img_col+self.edge_overlay//2,:]
71 |
72 | return img_array
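73 | 
74 | ## A round-trip sketch (hypothetical array): split an image into overlapping
75 | ## patches, then stitch them back; the rebuilt image equals the input.
76 | # img = np.random.rand(600, 600, 3)
77 | # ip = imgPatch(img, patch_size=256, edge_overlay=80)
78 | # patches, starts, n_row, n_col = ip.toPatch()
79 | # img_rebuilt = ip.toImage(patches, n_row, n_col)
80 | # print(np.allclose(img, img_rebuilt))   # True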
--------------------------------------------------------------------------------
/utils/imgShow.py:
--------------------------------------------------------------------------------
1 | ## date: 2021.6.2
2 |
3 | import matplotlib.pyplot as plt
4 | import numpy as np
5 |
6 | def imgShow(img, extent=None, color_bands=(2,1,0), \
7 |             clip_percent=2, per_band_clip=False):
8 |     '''
9 |     Arguments:
10 |         img: (row, col, band) or (row, col)
11 |         color_bands: a list/tuple, [red_band, green_band, blue_band]
12 |         clip_percent: for linear stretch, value within the range of 0-100.
13 |         per_band_clip: if True, the band values will be clipped by each band respectively.
14 |     '''
15 | img = img/(np.amax(img)+0.00001) # normalization
16 | img = np.squeeze(img)
17 |     if np.isnan(np.sum(img)):   ## replace NaNs with zero
18 |         where_are_NaNs = np.isnan(img)
19 |         img[where_are_NaNs] = 0
20 | if np.min(img) == np.max(img):
21 | if len(img.shape) == 2:
22 | plt.imshow(np.clip(img, 0, 1), extent=extent, vmin=0,vmax=1)
23 | else:
24 | plt.imshow(np.clip(img[:,:,0], 0, 1), extent=extent, vmin=0,vmax=1)
25 | else:
26 | if len(img.shape) == 2:
27 | img_color = np.expand_dims(img, axis=2)
28 | else:
29 | img_color = img[:,:,[color_bands[0], color_bands[1], color_bands[2]]]
30 | img_color_clip = np.zeros_like(img_color)
31 |         if per_band_clip:
32 | for i in range(img_color.shape[-1]):
33 | img_color_hist = np.percentile(img_color[:,:,i], [clip_percent, 100-clip_percent])
34 | img_color_clip[:,:,i] = (img_color[:,:,i]-img_color_hist[0])\
35 | /(img_color_hist[1]-img_color_hist[0]+0.0001)
36 | else:
37 | img_color_hist = np.percentile(img_color, [clip_percent, 100-clip_percent])
38 | img_color_clip = (img_color-img_color_hist[0])\
39 | /(img_color_hist[1]-img_color_hist[0]+0.0001)
40 |
41 | img_color_clip = np.squeeze(img_color_clip)
42 | plt.imshow(np.clip(img_color_clip, 0, 1), extent=extent, vmin=0,vmax=1)
43 |
44 |
45 | def imsShow(img_list, img_name_list, clip_list=None, color_bands_list=None):
46 | ''' des: visualize multiple images.
47 | input:
48 |             img_list: contains all images
49 |             img_name_list: image names corresponding to the images
50 | clip_list: percent clips (histogram) corresponding to the images
51 | color_bands_list: color bands combination corresponding to the images
52 | '''
53 | if not clip_list:
54 | clip_list = [0 for i in range(len(img_list))]
55 | if not color_bands_list:
56 | color_bands_list = [[2, 1, 0] for i in range(len(img_list))]
57 | for i in range(len(img_list)):
58 | plt.subplot(1, len(img_list), i+1)
59 | plt.title(img_name_list[i])
60 | imgShow(img=img_list[i],\
61 | color_bands=color_bands_list[i], clip_percent=clip_list[i])
62 | plt.axis('off')
63 |
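64 | ## A display sketch, assuming a hypothetical 6-band array in Sentinel-2 band
65 | ## order (blue, green, red, nir, swir-1, swir-2); (2,1,0) gives true color.
66 | # img = np.random.rand(128, 128, 6)
67 | # imgShow(img, color_bands=(2,1,0), clip_percent=2)
68 | # plt.show()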
--------------------------------------------------------------------------------
/watnet_infer.py:
--------------------------------------------------------------------------------
1 | ## author: xin luo, created: 2021.8.11
2 |
3 | '''
4 | des: perform surface water mapping by using pretrained watnet
5 | through funtional api and command line, respectively.
6 |
7 | example:
8 | funtional api:
9 | water_map = watnet_infer(rsimg)
10 | command line:
11 | python watnet_infer.py data/test-demo/*.tif
12 | python watnet_infer.py data/test-demo/*.tif -o data/test-demo/result
13 | note:
14 |     rsimg is a np.array (row, col, band) with values scaled to [0, 1]
15 |     data/test-demo/*.tif is the sentinel-2 image path
16 |     data/test-demo/result is the output directory
17 | '''
18 |
19 | import os
20 | import numpy as np
21 | import tensorflow as tf
22 | import argparse
23 | from utils.imgPatch import imgPatch
24 | from utils.geotif_io import readTiff,writeTiff
25 |
26 | ## default path of the pretrained watnet model
27 | path_watnet = 'model/pretrained/watnet.h5'
28 |
29 | def get_args():
30 |
31 | description = 'surface water mapping by using pretrained watnet'
32 | parser = argparse.ArgumentParser(description=description)
33 |
34 | parser.add_argument(
35 | 'ifile', metavar='ifile', type=str, nargs='+',
36 | help=('file(s) to process (.tiff)'))
37 |
38 | parser.add_argument(
39 |         '-m', metavar='watnet', dest='watnet', type=str,
40 |         default=path_watnet,
41 | help=('pretrained watnet model (tensorflow2, .h5)'))
42 |
43 | parser.add_argument(
44 | '-o', metavar='odir', dest='odir', type=str, nargs='+',
45 | help=('directory to write'))
46 |
47 | return parser.parse_args()
48 |
49 |
50 | def watnet_infer(rsimg, path_model=path_watnet):
51 | 
52 |     ''' des: surface water mapping by using the pretrained watnet
53 |         arg:
54 |             rsimg: np.array, surface reflectance data (data values must be scaled to 0-1),
55 |                 consisting of 6 bands (blue, green, red, nir, swir-1, swir-2).
56 |             path_model: str, the path of the pretrained model.
57 |         return:
58 |             water_map: np.array.
59 |     '''
60 | ### ----- load the pretrained model -----#
61 | model = tf.keras.models.load_model(path_model, compile=False)
62 | ### ------ apply the pre-trained model
63 | imgPatch_ins = imgPatch(rsimg, patch_size=512, edge_overlay=80)
64 | patch_list, start_list, img_patch_row, img_patch_col = imgPatch_ins.toPatch()
65 | result_patch_list = [model(patch[np.newaxis, :]) for patch in patch_list]
66 | result_patch_list = [np.squeeze(patch, axis = 0) for patch in result_patch_list]
67 | pro_map = imgPatch_ins.toImage(result_patch_list, img_patch_row, img_patch_col)
68 | water_map = np.where(pro_map>0.5, 1, 0)
69 |
70 | return water_map
71 |
72 |
73 | if __name__ == '__main__':
74 | args = get_args()
75 | ifile = args.ifile
76 | path_model = args.watnet
77 | odir = args.odir
78 | ## write path
79 | if odir:
80 | if not os.path.exists(odir[0]):
81 | os.makedirs(odir[0])
82 | ofile = [os.path.splitext(file)[0] + '_water.tif' for file in ifile]
83 | ofile = [os.path.join(odir[0], os.path.split(file)[1]) for file in ofile]
84 | else:
85 | ofile = [os.path.splitext(file)[0] + '_water.tif' for file in ifile]
86 |
87 | for i in range(len(ifile)):
88 | print('file in -->', ifile[i])
89 | ## image reading and normalization
90 | sen2_img, img_info = readTiff(path_in=ifile[i])
91 | sen2_img = np.float32(np.clip(sen2_img/10000, a_min=0, a_max=1)) ## normalization
92 | ## surface water mapping by using watnet
93 |         water_map = watnet_infer(rsimg=sen2_img, path_model=path_model)
94 | # write out the result
95 | print('write out -->', ofile[i])
96 | writeTiff(im_data = water_map.astype(np.int8),
97 | im_geotrans = img_info['geotrans'],
98 | im_geosrs = img_info['geosrs'],
99 | path_out = ofile[i])
100 |
101 |
102 |
--------------------------------------------------------------------------------