├── README.md
├── data
│   ├── image
│   │   ├── potsdam
│   │   │   └── instruction.txt
│   │   └── vaihingen
│   │       └── instruction.txt
│   └── label
│       ├── potsdam
│       │   └── instruction.txt
│       └── vaihingen
│           └── instruction.txt
├── evaluation.so
├── model
│   ├── potsdam
│   │   └── instruction.txt
│   └── vaihingen
│       └── instruction.txt
├── prediction.so
├── res101.so
├── resnet_utils.so
├── result
│   ├── potsdam
│   │   └── instruction.txt
│   └── vaihingen
│       └── instruction.txt
└── run.py
/README.md:
--------------------------------------------------------------------------------
1 | # AFNet
2 | Code for the Adaptive Fusion Network (AFNet) for Remote Sensing Image Semantic Segmentation.
3 |
4 | ## Requirements
5 | Platform: Linux Ubuntu 16.04.5 LTS
6 | Python 3.5
7 | tensorflow-gpu>=1.8.0
8 | numpy
9 | opencv-python
10 | tqdm
11 | argparse
12 |
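The dependencies listed above can be installed with pip, for example as follows (choose the TensorFlow build that matches your CUDA setup; `argparse` is part of the Python standard library):

```bash
pip install "tensorflow-gpu>=1.8.0" numpy opencv-python tqdm
```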
13 | ## Instructions for the folders
14 | ./data/image/potsdam/ folder for test images of the Potsdam dataset.
15 | ./data/image/vaihingen/ folder for test images of the Vaihingen dataset.
16 | ./data/label/potsdam/ folder for test labels (boundaries eroded) of the Potsdam dataset.
17 | ./data/label/vaihingen/ folder for test labels (boundaries eroded) of the Vaihingen dataset.
18 | You can download the images and labels from the official website of the [ISPRS 2D Semantic Labeling Contest](http://www2.isprs.org/commissions/comm3/wg4/semantic-labeling.html).
19 | After downloading the images and labels, extract them to the corresponding folders and delete the *instruction.txt* in each folder.
20 |
21 | ---------
22 | ./model/potsdam/ folder for the trained model of the Potsdam dataset.
23 | ./model/vaihingen/ folder for the trained model of the Vaihingen dataset.
24 | You can download the trained models from [OneDrive](https://1drv.ms/u/s!ArZbL8yRkEMfh0HiEGHbLZQ-HVdD?e=HT0rnX).
25 | After downloading the models, extract them to the corresponding folders and delete the *instruction.txt* in each folder.
26 |
27 | ---------
28 | ./result/potsdam/ folder for the predictions of the Potsdam dataset.
29 | ./result/vaihingen/ folder for the predictions of the Vaihingen dataset.
30 | ./result/ folder for the accuracy statistics of the two datasets. The statistics will be generated after running the evaluation code.
31 | The predictions for both datasets can be obtained either by running the prediction code or by downloading them from the hyperlink.
32 | **Notice**: Remove all *instruction.txt* files from the folders before running.
33 |
34 | ---------
35 | ## Running code
36 | Generally, prediction should be done before evaluation, unless the predictions have been downloaded. Before running prediction, the aforementioned test images should be placed in the correct folders and the *instruction.txt* files should be deleted.
37 | If you want to run predictions yourself, you can run the following script:
38 | `python run.py dataset prediction`
39 | where *dataset* takes one of two values, *potsdam* or *vaihingen*, and specifies the dataset on which the prediction will be conducted. The predictions will be written to the ./result/*dataset*/ folder. Note that for each sample, this version of the prediction code applies strategies such as flipping, rotation, and overlapping sampling to the input image patches, so the running time is longer; a rough sketch of this idea is given below.
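The flipping, rotation, and overlapping-sampling strategy is a form of test-time augmentation. The actual implementation is inside the compiled *prediction.so* and is not shown here; the following is only a minimal sketch of the flip/rotation averaging, assuming a hypothetical `model_fn` that maps an (H, W, C) patch to an (H, W, num_classes) score map:

```python
import numpy as np

def predict_with_augmentation(model_fn, patch):
    """Sketch of test-time augmentation: average class scores over
    horizontal flips and 90-degree rotations of one image patch.
    `model_fn` is a hypothetical callable, not part of this repository."""
    accumulated = None
    count = 0
    for k in range(4):                               # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(patch, k)
        for flip in (False, True):                   # with and without a horizontal flip
            variant = np.fliplr(rotated) if flip else rotated
            scores = model_fn(variant)
            if flip:                                 # undo the flip on the score map
                scores = np.fliplr(scores)
            scores = np.rot90(scores, -k)            # undo the rotation
            accumulated = scores if accumulated is None else accumulated + scores
            count += 1
    return accumulated / count                       # average over all 8 variants
```

Overlapping sampling typically works in the same spirit: the score maps of overlapping patches are averaged in the overlap regions before the final per-pixel argmax.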
40 | Once the predictions are in the corresponding folders and the labels (boundaries eroded) have been placed correctly, with the *instruction.txt* removed from the label folders, you can test the accuracy of the predictions with the following script:
41 | `python run.py dataset evaluation`
42 | where *dataset* takes one of two values, *potsdam* or *vaihingen*, and specifies the dataset on which the evaluation will be conducted. The statistics are recorded in a *.txt* file under the ./result/ folder.
43 | **Notice**: The evaluation code is implemented in Python, but its results are consistent with those of the official test code released by ISPRS. You can also run the official test code to do the evaluation. In addition to the OA and F1 calculated by the official test code, our evaluation code also calculates the IoU. The IoU calculation refers to [ADE20k](https://github.com/CSAILVision/sceneparsing/tree/master/evaluationCode), with some minor changes; a sketch of the per-class IoU computation is given below.
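A minimal sketch of such a per-class IoU computation (an illustration assuming integer class maps of identical shape, not the contents of the compiled *evaluation.so*):

```python
import numpy as np

def per_class_iou(prediction, label, num_classes, ignore_index=None):
    """Sketch of per-class IoU: IoU_c = |pred==c AND label==c| / |pred==c OR label==c|.
    `prediction` and `label` are integer class maps with identical shapes."""
    valid = np.ones(label.shape, dtype=bool) if ignore_index is None else (label != ignore_index)
    ious = []
    for c in range(num_classes):
        pred_c = (prediction == c) & valid
        label_c = (label == c) & valid
        union = np.logical_or(pred_c, label_c).sum()
        if union == 0:
            ious.append(float('nan'))    # class absent from both maps: IoU undefined
        else:
            ious.append(float(np.logical_and(pred_c, label_c).sum()) / union)
    return ious                          # mean IoU is the nanmean over classes
```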
44 |
--------------------------------------------------------------------------------
/data/image/potsdam/instruction.txt:
--------------------------------------------------------------------------------
1 | 1) Download the images and labels from http://www2.isprs.org/commissions/comm3/wg4/semantic-labeling.html
2 | 2) Extract the test images of the Potsdam dataset from the downloaded file to this folder.
3 | 3) Remove this instruction.txt.
--------------------------------------------------------------------------------
/data/image/vaihingen/instruction.txt:
--------------------------------------------------------------------------------
1 | 1) Download the images and labels from http://www2.isprs.org/commissions/comm3/wg4/semantic-labeling.html
2 | 2) Extract the test images of the Vaihingen dataset from the downloaded file to this folder.
3 | 3) Remove this instruction.txt.
--------------------------------------------------------------------------------
/data/label/potsdam/instruction.txt:
--------------------------------------------------------------------------------
1 | 1) Download the images and labels from http://www2.isprs.org/commissions/comm3/wg4/semantic-labeling.html
2 | 2) Extract the test labels (boundaries eroded) of the Potsdam dataset from the downloaded file to this folder.
3 | 3) Remove this instruction.txt.
--------------------------------------------------------------------------------
/data/label/vaihingen/instruction.txt:
--------------------------------------------------------------------------------
1 | 1) Download the images and labels from http://www2.isprs.org/commissions/comm3/wg4/semantic-labeling.html
2 | 2) Extract the test labels (boundaries eroded) of the Vaihingen dataset from the downloaded file to this folder.
3 | 3) Remove this instruction.txt.
--------------------------------------------------------------------------------
/evaluation.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/athauna/AFNet/6a1a7c9943845d406fcf63d4382a6663f0b60291/evaluation.so
--------------------------------------------------------------------------------
/model/potsdam/instruction.txt:
--------------------------------------------------------------------------------
1 | 1) Download the trained model of the Potsdam dataset and place it in this folder.
2 | 2) Remove the instruction.txt.
--------------------------------------------------------------------------------
/model/vaihingen/instruction.txt:
--------------------------------------------------------------------------------
1 | 1) Download the trained model of the Vaihingen dataset and place it in this folder.
2 | 2) Remove the instruction.txt.
--------------------------------------------------------------------------------
/prediction.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/athauna/AFNet/6a1a7c9943845d406fcf63d4382a6663f0b60291/prediction.so
--------------------------------------------------------------------------------
/res101.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/athauna/AFNet/6a1a7c9943845d406fcf63d4382a6663f0b60291/res101.so
--------------------------------------------------------------------------------
/resnet_utils.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/athauna/AFNet/6a1a7c9943845d406fcf63d4382a6663f0b60291/resnet_utils.so
--------------------------------------------------------------------------------
/result/potsdam/instruction.txt:
--------------------------------------------------------------------------------
1 | 1) Download the predictions of the Potsdam dataset and place them in this folder, or just run run.py to obtain the predictions.
2 | 2) Remove the instruction.txt.
--------------------------------------------------------------------------------
/result/vaihingen/instruction.txt:
--------------------------------------------------------------------------------
1 | 1) Download the predictions of the Vaihingen dataset and place them in this folder, or just run run.py to obtain the predictions.
2 | 2) Remove the instruction.txt.
--------------------------------------------------------------------------------
/run.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import prediction
3 | import evaluation
4 |
5 | def main():
6 |     parser = argparse.ArgumentParser(description="Code for the prediction and evaluation of AFNet.")
7 |     parser.add_argument("dataset", type=str, choices=["potsdam", "vaihingen"], help="dataset on which to run prediction or evaluation.")
8 |     parser.add_argument("mode", type=str, choices=["prediction", "evaluation"], help="whether to run prediction or evaluation.")
9 |     args = parser.parse_args()
10 | 
11 |     if args.mode == 'prediction':
12 |         prediction.main(args.dataset)
13 |     elif args.mode == 'evaluation':
14 |         evaluation.main(args.dataset)
15 | 
16 | if __name__ == "__main__":
17 |     main()
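18 | 
19 | # Example usage (from the repository root, with the data and models in place):
20 | #   python run.py vaihingen prediction
21 | #   python run.py vaihingen evaluation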
--------------------------------------------------------------------------------