├── .gitignore
├── Figs
│   ├── framework.png
│   ├── gta5_results.png
│   ├── idea.png
│   ├── synthia_results.png
│   └── visualization.png
├── README.md
├── SSL.py
├── compute_iou.py
├── dataset
│   ├── __init__.py
│   ├── cityscapes_dataset.py
│   ├── cityscapes_list
│   │   ├── .DS_Store
│   │   ├── info.json
│   │   ├── label.txt
│   │   ├── train.txt
│   │   └── val.txt
│   ├── gta5_dataset.py
│   ├── gta5_list
│   │   └── train.txt
│   ├── synthia_dataset.py
│   └── synthia_list
│       └── train.txt
├── eva.sh
├── evaluate_cityscapes.py
├── model
│   ├── __init__.py
│   ├── anchor_label.py
│   ├── deeplab.py
│   ├── deeplab_multi.py
│   ├── deeplab_vgg.py
│   └── fcn8s.py
├── requirements.txt
├── train_sim.py
├── train_sim_ssl.py
└── utils
    ├── __init__.py
    ├── constant.py
    ├── functions.py
    └── loss.py
/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
2 | *.pth
3 | *.jpg
4 | data/*
5 | __pycache__
6 | result*
7 | snapshots*
8 | ssl*
9 | target_ssl_gt
10 | *.out
11 | eva.bash
12 |
--------------------------------------------------------------------------------
/Figs/framework.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SHI-Labs/Unsupervised-Domain-Adaptation-with-Differential-Treatment/7438f06387e559ebaa09a02b4e9bc45c272d5edc/Figs/framework.png
--------------------------------------------------------------------------------
/Figs/gta5_results.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SHI-Labs/Unsupervised-Domain-Adaptation-with-Differential-Treatment/7438f06387e559ebaa09a02b4e9bc45c272d5edc/Figs/gta5_results.png
--------------------------------------------------------------------------------
/Figs/idea.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SHI-Labs/Unsupervised-Domain-Adaptation-with-Differential-Treatment/7438f06387e559ebaa09a02b4e9bc45c272d5edc/Figs/idea.png
--------------------------------------------------------------------------------
/Figs/synthia_results.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SHI-Labs/Unsupervised-Domain-Adaptation-with-Differential-Treatment/7438f06387e559ebaa09a02b4e9bc45c272d5edc/Figs/synthia_results.png
--------------------------------------------------------------------------------
/Figs/visualization.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SHI-Labs/Unsupervised-Domain-Adaptation-with-Differential-Treatment/7438f06387e559ebaa09a02b4e9bc45c272d5edc/Figs/visualization.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation
2 |
3 | This repository contains the Stuff Instance Matching (SIM) framework introduced in the following paper, accepted at CVPR 2020:
4 |
5 | **[Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation](https://arxiv.org/abs/2003.08040)**
6 | [Zhonghao Wang](https://scholar.google.com/citations?user=opL6CL8AAAAJ&hl=en),
7 | [Mo Yu](https://sites.google.com/site/moyunlp/),
8 | [Yunchao Wei](https://weiyc.github.io/),
9 | [Rogerio Feris](http://rogerioferis.com/),
10 | [Jinjun Xiong](https://scholar.google.com/citations?user=tRt1xPYAAAAJ&hl=en),
11 | [Wen-mei Hwu](https://scholar.google.com/citations?user=ohjQPx8AAAAJ&hl=en),
12 | [Thomas S. Huang](https://scholar.google.com/citations?user=rGF6-WkAAAAJ&hl=en),
13 | and [Humphrey Shi](https://www.humphreyshi.com/)
14 |
15 |
16 | ![idea](Figs/idea.png)
17 |
18 |
19 |
20 | ## Introduction
21 |
22 | We consider the problem of unsupervised domain adaptation for semantic segmentation by easing the domain shift between the source domain (synthetic data) and the target domain (real data) in this work. State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue. Based on the observation that stuff categories usually share similar appearances across images of different domains while things (i.e., object instances) have much larger differences, we propose to improve the semantic-level alignment with different strategies for stuff regions and for things: 1) for the stuff categories, we generate feature representation for each class and conduct the alignment operation from the target domain to the source domain; 2) for the thing categories, we generate feature representation for each individual instance and encourage the instance in the target domain to align with the most similar one in the source domain. In this way, the individual differences within thing categories will also be considered to alleviate over-alignment. In addition to our proposed method, we further reveal the reason why the current adversarial loss is often unstable in minimizing the distribution discrepancy and show that our method can help ease this issue by minimizing the most similar stuff and instance features between the source and the target domains. We conduct extensive experiments on two unsupervised domain adaptation tasks, i.e., GTA5 to Cityscapes and SYNTHIA to Cityscapes, and achieve new state-of-the-art segmentation accuracy.
23 |
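At its core, SIM turns alignment into a retrieval problem: pooled class features (stuff) and instance features (things) from the target domain are pulled toward their most similar source counterparts. Below is a minimal sketch of such a similarity-guided matching loss; it is an illustration only, not the exact loss used in ```train_sim.py```, and the helper name and tensor shapes are assumptions:
```
import torch
import torch.nn.functional as F

def sim_matching_loss(src_feats, tgt_feats):
    # src_feats: (Ns, C) pooled stuff-class / thing-instance features (source)
    # tgt_feats: (Nt, C) pooled features from the target domain
    sim = F.cosine_similarity(tgt_feats.unsqueeze(1),
                              src_feats.unsqueeze(0), dim=2)  # (Nt, Ns)
    best, _ = sim.max(dim=1)    # most similar source feature per target
    return (1.0 - best).mean()  # pull each target toward its best match

src = torch.randn(6, 256)  # e.g. six pooled source instance features
tgt = torch.randn(4, 256)  # four pooled target instance features
print(sim_matching_loss(src, tgt))
```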
24 |
25 | ![framework](Figs/framework.png)
26 | Training framework
27 |
28 |
29 |
30 | ## Prerequisites
31 |
32 | Download our repo:
33 | ```
34 | git clone https://github.com/SHI-Labs/Unsupervised-Domain-Adaptation-with-Differential-Treatment.git
35 | cd Unsupervised-Domain-Adaptation-with-Differential-Treatment
36 | ```
37 |
38 | ### Data preparation
39 |
40 | Download [Cityscapes](https://www.cityscapes-dataset.com/), the [CycleGAN-transferred GTA5](https://drive.google.com/open?id=1OBvYVz2ND4ipdfnkhSaseT8yu2ru5n5l) and the [GTA5 labels](https://drive.google.com/file/d/11E42F_4InoZTnoATi-Ob1yEHfz7lfZWg/view?usp=sharing). Create symbolic links to them under the ```data``` folder:
41 | ```
42 | ln -s path_to_Cityscapes_folder ./data/Cityscapes
43 | ln -s path_to_gta5_deeplab_folder ./data/gta5_deeplab
44 | ln -s path_to_gta5_labels_folder ./data/gta5_deeplab/labels
45 | ```
46 |
47 | ### Environment setup
48 |
49 | The code is built on Ubuntu 18.04 with CUDA 10.0 and cuDNN 7.6.5, and is trained and tested on an NVIDIA RTX 2080 Ti GPU.
50 |
51 | Create a new conda environment and install dependencies:
52 | ```
53 | conda create -n simenv python=3.7
54 | conda activate simenv
55 | pip install -r requirements.txt
56 | ```
57 | Please install apex from the [official repo](https://github.com/NVIDIA/apex).
58 |
59 | ## Train
60 |
61 | ### First phase
62 |
63 | Train the SIM model:
64 | ```
65 | python train_sim.py
66 | ```
67 |
68 | ### Second phase
69 | Generate the pseudo-labels for the Cityscapes training set:
70 | ```
71 | python SSL.py
72 | ```
73 | Train SIM with self-supervised learning:
74 | ```
75 | python train_sim_ssl.py
76 | ```
77 |
78 | ## Test
79 | Test the final model ([trained model](https://drive.google.com/file/d/1PgAaM6ySMdxPiZWJhgSCdFKP4c88E4y0/view?usp=sharing)):
80 | ```
81 | ./eva.sh snapshots_ssl/BestGTA5.pth
82 | ```
83 |
84 | Test the SIM model without self-supervised training ([trained model](https://drive.google.com/file/d/1ulC_GTSxVPiUW6jMylZrTgiKfggqM4Sr/view?usp=sharing)):
85 | ```
86 | ./eva.sh snapshots/BestGTA5.pth
87 | ```
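```eva.sh``` presumably runs the evaluation script and then scores the predictions with ```compute_iou.py```. You can also score a folder of saved predictions directly (the prediction path below is an assumption; point it at wherever your predictions were written):
```
python compute_iou.py ./data/Cityscapes/gtFine/val ./result/cityscapes
```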
88 |
89 | ## Results
90 |
91 |
92 | ![gta5_results](Figs/gta5_results.png)
93 |
94 | Comparison to the state-of-the-art results of adapting GTA5 to Cityscapes.
95 |
96 |
97 |
98 | ![synthia_results](Figs/synthia_results.png)
99 |
100 | Comparison to the state-of-the-art results of adapting SYNTHIA to Cityscapes.
101 |
102 |
103 | ## Visual results
104 |
105 |
106 | ![visualization](Figs/visualization.png)
107 | Visualization of the segmentation results.
108 |
109 |
110 | ## Citation
111 |
112 | ```
113 | @InProceedings{Wang_2020_CVPR,
114 | author = {Wang, Zhonghao and Yu, Mo and Wei, Yunchao and Feris, Rogerio and Xiong, Jinjun and Hwu, Wen-mei and Huang, Thomas S. and Shi, Honghui},
115 | title = {Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation},
116 | booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
117 | month = {June},
118 | year = {2020}
119 | }
120 | @InProceedings{Wang_2020_CVPR_Workshops,
121 | author = {Wang, Zhonghao and Wei, Yunchao and Feris, Rogerio and Xiong, Jinjun and Hwu, Wen-mei and Huang, Thomas S. and Shi, Honghui},
122 | title = {Alleviating Semantic-Level Shift: A Semi-Supervised Domain Adaptation Method for Semantic Segmentation},
123 | booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
124 | month = {June},
125 | year = {2020}
126 | }
127 | ```
128 |
129 | ## Acknowledgements
130 |
131 | This code is developed on top of the codebase of [AdaptSegNet](https://github.com/wasidennis/AdaptSegNet) and uses the CycleGAN-transferred target images released by [BDL](https://github.com/liyunsheng13/BDL). Many thanks to the authors of these works.
132 |
--------------------------------------------------------------------------------
/SSL.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import scipy
3 | from scipy import ndimage
4 | import numpy as np
5 | import sys
6 |
7 | import torch
8 | from torch.autograd import Variable
9 | import torchvision.models as models
10 | import torch.nn.functional as F
11 | from torch.utils import data, model_zoo
12 | from model.deeplab import Res_Deeplab
13 | from model.deeplab_multi import DeeplabMultiFeature
14 | from model.deeplab_vgg import DeeplabVGG
15 | from dataset.cityscapes_dataset import cityscapesDataSet
16 | from collections import OrderedDict
17 | import os
18 | from PIL import Image
19 |
20 | import torch.nn as nn
21 | IMG_MEAN = np.array((104.00698793,116.66876762,122.67891434), dtype=np.float32)
22 |
23 | DATA_DIRECTORY = './data/Cityscapes'
24 | DATA_LIST_PATH = './dataset/cityscapes_list/train.txt'
25 | SAVE_PATH = './target_ssl_gt'
26 |
27 | IGNORE_LABEL = 255
28 | NUM_CLASSES = 19
29 | RESTORE_FROM = './snapshots/BestGTA5.pth'
30 | SET = 'train'
31 |
32 | MODEL = 'Deeplab'
33 |
34 | palette = [128, 64, 128, 244, 35, 232, 70, 70, 70, 102, 102, 156, 190, 153, 153, 153, 153, 153, 250, 170, 30,
35 | 220, 220, 0, 107, 142, 35, 152, 251, 152, 70, 130, 180, 220, 20, 60, 255, 0, 0, 0, 0, 142, 0, 0, 70,
36 | 0, 60, 100, 0, 80, 100, 0, 0, 230, 119, 11, 32]
37 | zero_pad = 256 * 3 - len(palette)
38 | for i in range(zero_pad):
39 | palette.append(0)
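# NOTE: PIL "P"-mode palettes hold 256 RGB triples (768 values in total), so
# the 19-class palette above is zero-padded and unused indices map to black.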
40 |
41 |
42 | def colorize_mask(mask):
43 | # mask: numpy array of the mask
44 | new_mask = Image.fromarray(mask.astype(np.uint8)).convert('P')
45 | new_mask.putpalette(palette)
46 |
47 | return new_mask
48 |
49 | def get_arguments():
50 | """Parse all the arguments provided from the CLI.
51 |
52 | Returns:
53 | A Namespace containing the parsed arguments.
54 | """
55 | parser = argparse.ArgumentParser(description="DeepLab-ResNet Network")
56 | parser.add_argument("--model", type=str, default=MODEL,
57 | help="Model choice: Deeplab.")
58 | parser.add_argument("--data-dir", type=str, default=DATA_DIRECTORY,
59 | help="Path to the directory containing the Cityscapes dataset.")
60 | parser.add_argument("--data-list", type=str, default=DATA_LIST_PATH,
61 | help="Path to the file listing the images in the dataset.")
62 | parser.add_argument("--ignore-label", type=int, default=IGNORE_LABEL,
63 | help="The index of the label to ignore during the training.")
64 | parser.add_argument("--num-classes", type=int, default=NUM_CLASSES,
65 | help="Number of classes to predict (including background).")
66 | parser.add_argument("--restore-from", type=str, default=RESTORE_FROM,
67 | help="Where to restore model parameters from.")
68 | parser.add_argument("--set", type=str, default=SET,
69 | help="choose evaluation set.")
70 | parser.add_argument("--save", type=str, default=SAVE_PATH,
71 | help="Path to save result.")
72 | parser.add_argument("--cpu", action='store_true', help="choose to use cpu device.")
73 | return parser.parse_args()
74 |
75 |
76 | def main():
77 | """Create the model and start the evaluation process."""
78 |
79 | args = get_arguments()
80 |
81 | if not os.path.exists(args.save):
82 | os.makedirs(args.save)
83 |
84 | model = DeeplabMultiFeature(num_classes=args.num_classes)
85 | saved_state_dict = torch.load(args.restore_from)
86 | if list(saved_state_dict.keys())[0].split('.')[0] == 'module':
87 | for key in list(saved_state_dict.keys()):
88 | saved_state_dict['.'.join(key.split('.')[1:])] = saved_state_dict.pop(key)
89 | model.load_state_dict(saved_state_dict)
90 |
91 | device = torch.device("cuda" if not args.cpu else "cpu")
92 | model = model.to(device)
93 | model.eval()
94 |
95 | trainset = cityscapesDataSet(args.data_dir, args.data_list,
96 | crop_size=(1024, 512), mean=IMG_MEAN, scale=False, mirror=False, set=args.set)
97 | trainloader = data.DataLoader(trainset, batch_size=1, shuffle=False, pin_memory=True)
98 |
99 | interp = nn.Upsample(size=(512, 1024), mode='bilinear', align_corners=True)
100 |
101 | predicted_label = np.zeros((len(trainset), 512, 1024))
102 | predicted_prob = np.zeros((len(trainset), 512, 1024))
103 | image_name = []
104 |
105 | for index, batch in enumerate(trainloader):
106 | if index % 100 == 0:
107 | print('%d processed' % index)
108 |
109 | image, _, name = batch
110 | image = image.to(device)
111 | with torch.no_grad():
112 | _, output = model(image)
113 | output = F.softmax(output, dim=1)
114 | output = interp(output).cpu().data[0].numpy()
115 | output = output.transpose(1,2,0)
116 |
117 | label, prob = np.argmax(output, axis=2), np.max(output, axis=2)
118 | predicted_label[index] = label.copy()
119 | predicted_prob[index] = prob.copy()
120 | image_name.append(name[0])
121 |
122 | thres = []
123 | for i in range(19):
124 | x = predicted_prob[predicted_label==i]
125 | if len(x) == 0:
126 | thres.append(0)
127 | continue
128 | x = np.sort(x)
129 | thres.append(x[np.int(np.round(len(x)*0.5))])
130 | print(thres)
131 | thres = np.array(thres)
132 | thres[thres>0.9]=0.9
133 | print(thres)
134 | for index in range(len(trainset)):
135 | name = image_name[index]
136 | label = predicted_label[index]
137 | prob = predicted_prob[index]
138 | for i in range(19):
139 |             label[(prob < thres[i]) & (label == i)] = 255
140 |         output = np.asarray(label, dtype=np.uint8)
141 |         output = Image.fromarray(output)
142 |         name = name.split('/')[-1]
143 |         output.save('%s/%s' % (args.save, name))
144 |
145 |
146 | if __name__ == '__main__':
147 |     main()
148 |
--------------------------------------------------------------------------------
/compute_iou.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import argparse
3 | import json
4 | from PIL import Image
5 | from os.path import join
6 |
7 |
8 | def fast_hist(a, b, n):
9 |     k = (a >= 0) & (a < n)
10 | return np.bincount(n * a[k].astype(int) + b[k], minlength=n ** 2).reshape(n, n)
11 |
12 |
13 | def per_class_iu(hist):
14 | return np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist))
15 |
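# Example (illustrative): per_class_iu(fast_hist(gt.flatten(), pred.flatten(), n))
# yields one IoU per class; a perfect 3-class prediction gives [1., 1., 1.].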
16 |
17 | def label_mapping(input, mapping):
18 | output = np.copy(input)
19 | for ind in range(len(mapping)):
20 | output[input == mapping[ind][0]] = mapping[ind][1]
21 | return np.array(output, dtype=np.int64)
22 |
23 |
24 | def compute_mIoU(gt_dir, pred_dir, devkit_dir='', silence=False):
25 | """
26 | Compute IoU given the predicted colorized images and the ground-truth labels.
27 | """
28 | with open(join(devkit_dir, 'info.json'), 'r') as fp:
29 | info = json.load(fp)
30 | num_classes = np.int(info['classes'])
31 | print('Num classes', num_classes)
32 | name_classes = np.array(info['label'], dtype=np.str)
33 | mapping = np.array(info['label2train'], dtype=np.int)
34 | hist = np.zeros((num_classes, num_classes))
35 |
36 | image_path_list = join(devkit_dir, 'val.txt')
37 | label_path_list = join(devkit_dir, 'label.txt')
38 | gt_imgs = open(label_path_list, 'r').read().splitlines()
39 | gt_imgs = [join(gt_dir, x) for x in gt_imgs]
40 | pred_imgs = open(image_path_list, 'r').read().splitlines()
41 | pred_imgs = [join(pred_dir, x.split('/')[-1]) for x in pred_imgs]
42 |
43 | for ind in range(len(gt_imgs)):
44 | pred = np.array(Image.open(pred_imgs[ind]))
45 | label = np.array(Image.open(gt_imgs[ind]))
46 | label = label_mapping(label, mapping)
47 | if len(label.flatten()) != len(pred.flatten()):
48 | print('Skipping: len(gt) = {:d}, len(pred) = {:d}, {:s}, {:s}'.format(len(label.flatten()), len(pred.flatten()), gt_imgs[ind], pred_imgs[ind]))
49 | continue
50 | hist += fast_hist(label.flatten(), pred.flatten(), num_classes)
51 | if ind > 0 and ind % 10 == 0:
52 | print('{:d} / {:d}: {:0.2f}'.format(ind, len(gt_imgs), 100*np.mean(per_class_iu(hist))))
53 |
54 | mIoUs = per_class_iu(hist)
55 | if not silence:
56 | for ind_class in range(num_classes):
57 | print('===>' + name_classes[ind_class] + ':\t' + str(round(mIoUs[ind_class] * 100, 2)))
58 | print('===> mIoU: ' + str(round(np.nanmean(mIoUs) * 100, 2)))
59 | return mIoUs
60 |
61 |
62 | def main(args):
63 | compute_mIoU(args.gt_dir, args.pred_dir, args.devkit_dir)
64 |
65 |
66 | if __name__ == "__main__":
67 | parser = argparse.ArgumentParser()
68 | parser.add_argument('gt_dir', type=str, help='directory which stores CityScapes val gt images')
69 | parser.add_argument('pred_dir', type=str, help='directory which stores CityScapes val pred images')
70 | parser.add_argument('--devkit_dir', default='dataset/cityscapes_list', help='base directory of cityscapes')
71 | args = parser.parse_args()
72 | main(args)
73 |
--------------------------------------------------------------------------------
/dataset/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SHI-Labs/Unsupervised-Domain-Adaptation-with-Differential-Treatment/7438f06387e559ebaa09a02b4e9bc45c272d5edc/dataset/__init__.py
--------------------------------------------------------------------------------
/dataset/cityscapes_dataset.py:
--------------------------------------------------------------------------------
1 | import os
2 | import os.path as osp
3 | import numpy as np
4 | import random
5 | import matplotlib.pyplot as plt
6 | import collections
7 | import torch
8 | import torchvision
9 | from torch.utils import data
10 | from PIL import Image
11 |
12 | class cityscapesDataSet(data.Dataset):
13 | def __init__(self, root, list_path, max_iters=None, crop_size=(321, 321),
14 | mean=(128, 128, 128), scale=True, mirror=True, ignore_label=255, set='val', ssl_dir=''):
15 | self.root = root
16 | self.list_path = list_path
17 | self.crop_size = crop_size
18 | self.scale = scale
19 | self.ignore_label = ignore_label
20 | self.mean = mean
21 | self.is_mirror = mirror
22 | self.ssl_dir = ssl_dir
23 | # self.mean_bgr = np.array([104.00698793, 116.66876762, 122.67891434])
24 | self.img_ids = [i_id.strip() for i_id in open(list_path)]
25 | if not max_iters==None:
26 | self.img_ids = self.img_ids * int(np.ceil(float(max_iters) / len(self.img_ids)))
27 | self.files = []
28 | self.set = set
29 | # for split in ["train", "trainval", "val"]:
30 | for name in self.img_ids:
31 | img_file = osp.join(self.root, "leftImg8bit/%s/%s" % (self.set, name))
32 | self.files.append({
33 | "img": img_file,
34 | "name": name
35 | })
36 |
37 | def __len__(self):
38 | return len(self.files)
39 |
40 | def __getitem__(self, index):
41 | datafiles = self.files[index]
42 |
43 | image = Image.open(datafiles["img"]).convert('RGB')
44 | name = datafiles["name"]
45 |
46 | # resize
47 | image = image.resize(self.crop_size, Image.BICUBIC)
48 |
49 | image = np.asarray(image, np.float32)
50 |
51 | size = image.shape
52 | image = image[:, :, ::-1] # change to BGR
53 | image -= self.mean
54 | image = image.transpose((2, 0, 1))
55 |
56 | if len(self.ssl_dir)>0:
57 | label = Image.open(osp.join(self.ssl_dir, name.split('/')[-1]))
58 | label = label.resize(self.crop_size, Image.NEAREST)
59 | label = np.asarray(label, np.int64)
60 | return image.copy(), label.copy(), np.array(size), name
61 |
62 | return image.copy(), np.array(size), name
63 |
64 |
65 | if __name__ == '__main__':
66 |     dst = cityscapesDataSet("./data/Cityscapes", "./dataset/cityscapes_list/train.txt", set='train')
67 |     trainloader = data.DataLoader(dst, batch_size=4)
68 |     for i, batch in enumerate(trainloader):
69 |         imgs, _, _ = batch
70 |         if i == 0:
71 |             img = torchvision.utils.make_grid(imgs).numpy()
72 |             img = np.transpose(img, (1, 2, 0))
73 |             img = img[:, :, ::-1]
74 |             plt.imshow(img)
75 |             plt.show()
76 |
--------------------------------------------------------------------------------
/dataset/cityscapes_list/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SHI-Labs/Unsupervised-Domain-Adaptation-with-Differential-Treatment/7438f06387e559ebaa09a02b4e9bc45c272d5edc/dataset/cityscapes_list/.DS_Store
--------------------------------------------------------------------------------
/dataset/cityscapes_list/info.json:
--------------------------------------------------------------------------------
1 | {
2 | "classes":19,
3 | "label2train":[
4 | [0, 255],
5 | [1, 255],
6 | [2, 255],
7 | [3, 255],
8 | [4, 255],
9 | [5, 255],
10 | [6, 255],
11 | [7, 0],
12 | [8, 1],
13 | [9, 255],
14 | [10, 255],
15 | [11, 2],
16 | [12, 3],
17 | [13, 4],
18 | [14, 255],
19 | [15, 255],
20 | [16, 255],
21 | [17, 5],
22 | [18, 255],
23 | [19, 6],
24 | [20, 7],
25 | [21, 8],
26 | [22, 9],
27 | [23, 10],
28 | [24, 11],
29 | [25, 12],
30 | [26, 13],
31 | [27, 14],
32 | [28, 15],
33 | [29, 255],
34 | [30, 255],
35 | [31, 16],
36 | [32, 17],
37 | [33, 18],
38 | [-1, 255]],
39 | "label":[
40 | "road",
41 | "sidewalk",
42 | "building",
43 | "wall",
44 | "fence",
45 | "pole",
46 | "light",
47 | "sign",
48 | "vegetation",
49 | "terrain",
50 | "sky",
51 | "person",
52 | "rider",
53 | "car",
54 | "truck",
55 | "bus",
56 | "train",
57 | "motocycle",
58 | "bicycle"],
59 | "palette":[
60 | [128,64,128],
61 | [244,35,232],
62 | [70,70,70],
63 | [102,102,156],
64 | [190,153,153],
65 | [153,153,153],
66 | [250,170,30],
67 | [220,220,0],
68 | [107,142,35],
69 | [152,251,152],
70 | [70,130,180],
71 | [220,20,60],
72 | [255,0,0],
73 | [0,0,142],
74 | [0,0,70],
75 | [0,60,100],
76 | [0,80,100],
77 | [0,0,230],
78 | [119,11,32],
79 | [0,0,0]],
80 | "mean":[
81 | 73.158359210711552,
82 | 82.908917542625858,
83 | 72.392398761941593],
84 | "std":[
85 | 47.675755341814678,
86 | 48.494214368814916,
87 | 47.736546325441594]
88 | }
89 |
--------------------------------------------------------------------------------
/dataset/cityscapes_list/label.txt:
--------------------------------------------------------------------------------
1 | frankfurt/frankfurt_000001_007973_gtFine_labelIds.png
2 | frankfurt/frankfurt_000001_025921_gtFine_labelIds.png
3 | frankfurt/frankfurt_000001_062016_gtFine_labelIds.png
4 | frankfurt/frankfurt_000001_049078_gtFine_labelIds.png
5 | frankfurt/frankfurt_000000_009561_gtFine_labelIds.png
6 | frankfurt/frankfurt_000001_013710_gtFine_labelIds.png
7 | frankfurt/frankfurt_000001_041664_gtFine_labelIds.png
8 | frankfurt/frankfurt_000000_013240_gtFine_labelIds.png
9 | frankfurt/frankfurt_000001_044787_gtFine_labelIds.png
10 | frankfurt/frankfurt_000001_015328_gtFine_labelIds.png
11 | frankfurt/frankfurt_000001_073243_gtFine_labelIds.png
12 | frankfurt/frankfurt_000001_034816_gtFine_labelIds.png
13 | frankfurt/frankfurt_000001_041074_gtFine_labelIds.png
14 | frankfurt/frankfurt_000001_005898_gtFine_labelIds.png
15 | frankfurt/frankfurt_000000_022254_gtFine_labelIds.png
16 | frankfurt/frankfurt_000001_044658_gtFine_labelIds.png
17 | frankfurt/frankfurt_000001_009504_gtFine_labelIds.png
18 | frankfurt/frankfurt_000001_024927_gtFine_labelIds.png
19 | frankfurt/frankfurt_000001_017842_gtFine_labelIds.png
20 | frankfurt/frankfurt_000001_068208_gtFine_labelIds.png
21 | frankfurt/frankfurt_000001_013016_gtFine_labelIds.png
22 | frankfurt/frankfurt_000001_010156_gtFine_labelIds.png
23 | frankfurt/frankfurt_000000_002963_gtFine_labelIds.png
24 | frankfurt/frankfurt_000001_020693_gtFine_labelIds.png
25 | frankfurt/frankfurt_000001_078803_gtFine_labelIds.png
26 | frankfurt/frankfurt_000001_025713_gtFine_labelIds.png
27 | frankfurt/frankfurt_000001_007285_gtFine_labelIds.png
28 | frankfurt/frankfurt_000001_070099_gtFine_labelIds.png
29 | frankfurt/frankfurt_000000_009291_gtFine_labelIds.png
30 | frankfurt/frankfurt_000000_019607_gtFine_labelIds.png
31 | frankfurt/frankfurt_000001_068063_gtFine_labelIds.png
32 | frankfurt/frankfurt_000000_003920_gtFine_labelIds.png
33 | frankfurt/frankfurt_000001_077233_gtFine_labelIds.png
34 | frankfurt/frankfurt_000001_029086_gtFine_labelIds.png
35 | frankfurt/frankfurt_000001_060545_gtFine_labelIds.png
36 | frankfurt/frankfurt_000001_001464_gtFine_labelIds.png
37 | frankfurt/frankfurt_000001_028590_gtFine_labelIds.png
38 | frankfurt/frankfurt_000001_016462_gtFine_labelIds.png
39 | frankfurt/frankfurt_000001_060422_gtFine_labelIds.png
40 | frankfurt/frankfurt_000001_009058_gtFine_labelIds.png
41 | frankfurt/frankfurt_000001_080830_gtFine_labelIds.png
42 | frankfurt/frankfurt_000001_012870_gtFine_labelIds.png
43 | frankfurt/frankfurt_000001_077434_gtFine_labelIds.png
44 | frankfurt/frankfurt_000001_033655_gtFine_labelIds.png
45 | frankfurt/frankfurt_000001_051516_gtFine_labelIds.png
46 | frankfurt/frankfurt_000001_044413_gtFine_labelIds.png
47 | frankfurt/frankfurt_000001_055172_gtFine_labelIds.png
48 | frankfurt/frankfurt_000001_040575_gtFine_labelIds.png
49 | frankfurt/frankfurt_000000_020215_gtFine_labelIds.png
50 | frankfurt/frankfurt_000000_017228_gtFine_labelIds.png
51 | frankfurt/frankfurt_000001_041354_gtFine_labelIds.png
52 | frankfurt/frankfurt_000000_008206_gtFine_labelIds.png
53 | frankfurt/frankfurt_000001_043564_gtFine_labelIds.png
54 | frankfurt/frankfurt_000001_032711_gtFine_labelIds.png
55 | frankfurt/frankfurt_000001_064130_gtFine_labelIds.png
56 | frankfurt/frankfurt_000001_053102_gtFine_labelIds.png
57 | frankfurt/frankfurt_000001_082087_gtFine_labelIds.png
58 | frankfurt/frankfurt_000001_057478_gtFine_labelIds.png
59 | frankfurt/frankfurt_000001_007407_gtFine_labelIds.png
60 | frankfurt/frankfurt_000001_008200_gtFine_labelIds.png
61 | frankfurt/frankfurt_000001_038844_gtFine_labelIds.png
62 | frankfurt/frankfurt_000001_016029_gtFine_labelIds.png
63 | frankfurt/frankfurt_000001_058176_gtFine_labelIds.png
64 | frankfurt/frankfurt_000001_057181_gtFine_labelIds.png
65 | frankfurt/frankfurt_000001_039895_gtFine_labelIds.png
66 | frankfurt/frankfurt_000000_000294_gtFine_labelIds.png
67 | frankfurt/frankfurt_000001_055062_gtFine_labelIds.png
68 | frankfurt/frankfurt_000001_083029_gtFine_labelIds.png
69 | frankfurt/frankfurt_000001_010444_gtFine_labelIds.png
70 | frankfurt/frankfurt_000001_041517_gtFine_labelIds.png
71 | frankfurt/frankfurt_000001_069633_gtFine_labelIds.png
72 | frankfurt/frankfurt_000001_020287_gtFine_labelIds.png
73 | frankfurt/frankfurt_000001_012038_gtFine_labelIds.png
74 | frankfurt/frankfurt_000001_046504_gtFine_labelIds.png
75 | frankfurt/frankfurt_000001_032556_gtFine_labelIds.png
76 | frankfurt/frankfurt_000000_001751_gtFine_labelIds.png
77 | frankfurt/frankfurt_000001_000538_gtFine_labelIds.png
78 | frankfurt/frankfurt_000001_083852_gtFine_labelIds.png
79 | frankfurt/frankfurt_000001_077092_gtFine_labelIds.png
80 | frankfurt/frankfurt_000001_017101_gtFine_labelIds.png
81 | frankfurt/frankfurt_000001_044525_gtFine_labelIds.png
82 | frankfurt/frankfurt_000001_005703_gtFine_labelIds.png
83 | frankfurt/frankfurt_000001_080391_gtFine_labelIds.png
84 | frankfurt/frankfurt_000001_038418_gtFine_labelIds.png
85 | frankfurt/frankfurt_000001_066832_gtFine_labelIds.png
86 | frankfurt/frankfurt_000000_003357_gtFine_labelIds.png
87 | frankfurt/frankfurt_000000_020880_gtFine_labelIds.png
88 | frankfurt/frankfurt_000001_062396_gtFine_labelIds.png
89 | frankfurt/frankfurt_000001_046272_gtFine_labelIds.png
90 | frankfurt/frankfurt_000001_062509_gtFine_labelIds.png
91 | frankfurt/frankfurt_000001_054415_gtFine_labelIds.png
92 | frankfurt/frankfurt_000001_021406_gtFine_labelIds.png
93 | frankfurt/frankfurt_000001_030310_gtFine_labelIds.png
94 | frankfurt/frankfurt_000000_014480_gtFine_labelIds.png
95 | frankfurt/frankfurt_000001_005410_gtFine_labelIds.png
96 | frankfurt/frankfurt_000000_022797_gtFine_labelIds.png
97 | frankfurt/frankfurt_000001_035144_gtFine_labelIds.png
98 | frankfurt/frankfurt_000001_014565_gtFine_labelIds.png
99 | frankfurt/frankfurt_000001_065850_gtFine_labelIds.png
100 | frankfurt/frankfurt_000000_000576_gtFine_labelIds.png
101 | frankfurt/frankfurt_000001_065617_gtFine_labelIds.png
102 | frankfurt/frankfurt_000000_005543_gtFine_labelIds.png
103 | frankfurt/frankfurt_000001_055709_gtFine_labelIds.png
104 | frankfurt/frankfurt_000001_027325_gtFine_labelIds.png
105 | frankfurt/frankfurt_000001_011835_gtFine_labelIds.png
106 | frankfurt/frankfurt_000001_046779_gtFine_labelIds.png
107 | frankfurt/frankfurt_000001_064305_gtFine_labelIds.png
108 | frankfurt/frankfurt_000001_012738_gtFine_labelIds.png
109 | frankfurt/frankfurt_000001_048355_gtFine_labelIds.png
110 | frankfurt/frankfurt_000001_019969_gtFine_labelIds.png
111 | frankfurt/frankfurt_000001_080091_gtFine_labelIds.png
112 | frankfurt/frankfurt_000000_011007_gtFine_labelIds.png
113 | frankfurt/frankfurt_000000_015676_gtFine_labelIds.png
114 | frankfurt/frankfurt_000001_044227_gtFine_labelIds.png
115 | frankfurt/frankfurt_000001_055387_gtFine_labelIds.png
116 | frankfurt/frankfurt_000001_038245_gtFine_labelIds.png
117 | frankfurt/frankfurt_000001_059642_gtFine_labelIds.png
118 | frankfurt/frankfurt_000001_030669_gtFine_labelIds.png
119 | frankfurt/frankfurt_000001_068772_gtFine_labelIds.png
120 | frankfurt/frankfurt_000001_079206_gtFine_labelIds.png
121 | frankfurt/frankfurt_000001_055306_gtFine_labelIds.png
122 | frankfurt/frankfurt_000001_012699_gtFine_labelIds.png
123 | frankfurt/frankfurt_000001_042384_gtFine_labelIds.png
124 | frankfurt/frankfurt_000001_054077_gtFine_labelIds.png
125 | frankfurt/frankfurt_000001_010830_gtFine_labelIds.png
126 | frankfurt/frankfurt_000001_052120_gtFine_labelIds.png
127 | frankfurt/frankfurt_000001_032018_gtFine_labelIds.png
128 | frankfurt/frankfurt_000001_051737_gtFine_labelIds.png
129 | frankfurt/frankfurt_000001_028335_gtFine_labelIds.png
130 | frankfurt/frankfurt_000001_049770_gtFine_labelIds.png
131 | frankfurt/frankfurt_000001_054884_gtFine_labelIds.png
132 | frankfurt/frankfurt_000001_019698_gtFine_labelIds.png
133 | frankfurt/frankfurt_000000_011461_gtFine_labelIds.png
134 | frankfurt/frankfurt_000000_001016_gtFine_labelIds.png
135 | frankfurt/frankfurt_000001_062250_gtFine_labelIds.png
136 | frankfurt/frankfurt_000001_004736_gtFine_labelIds.png
137 | frankfurt/frankfurt_000001_068682_gtFine_labelIds.png
138 | frankfurt/frankfurt_000000_006589_gtFine_labelIds.png
139 | frankfurt/frankfurt_000000_011810_gtFine_labelIds.png
140 | frankfurt/frankfurt_000001_066574_gtFine_labelIds.png
141 | frankfurt/frankfurt_000001_048654_gtFine_labelIds.png
142 | frankfurt/frankfurt_000001_049209_gtFine_labelIds.png
143 | frankfurt/frankfurt_000001_042098_gtFine_labelIds.png
144 | frankfurt/frankfurt_000001_031416_gtFine_labelIds.png
145 | frankfurt/frankfurt_000000_009969_gtFine_labelIds.png
146 | frankfurt/frankfurt_000001_038645_gtFine_labelIds.png
147 | frankfurt/frankfurt_000001_020046_gtFine_labelIds.png
148 | frankfurt/frankfurt_000001_054219_gtFine_labelIds.png
149 | frankfurt/frankfurt_000001_002759_gtFine_labelIds.png
150 | frankfurt/frankfurt_000001_066438_gtFine_labelIds.png
151 | frankfurt/frankfurt_000000_020321_gtFine_labelIds.png
152 | frankfurt/frankfurt_000001_002646_gtFine_labelIds.png
153 | frankfurt/frankfurt_000001_046126_gtFine_labelIds.png
154 | frankfurt/frankfurt_000000_002196_gtFine_labelIds.png
155 | frankfurt/frankfurt_000001_057954_gtFine_labelIds.png
156 | frankfurt/frankfurt_000001_011715_gtFine_labelIds.png
157 | frankfurt/frankfurt_000000_021879_gtFine_labelIds.png
158 | frankfurt/frankfurt_000001_082466_gtFine_labelIds.png
159 | frankfurt/frankfurt_000000_003025_gtFine_labelIds.png
160 | frankfurt/frankfurt_000001_023369_gtFine_labelIds.png
161 | frankfurt/frankfurt_000001_061682_gtFine_labelIds.png
162 | frankfurt/frankfurt_000001_017459_gtFine_labelIds.png
163 | frankfurt/frankfurt_000001_059789_gtFine_labelIds.png
164 | frankfurt/frankfurt_000001_073464_gtFine_labelIds.png
165 | frankfurt/frankfurt_000001_063045_gtFine_labelIds.png
166 | frankfurt/frankfurt_000001_064651_gtFine_labelIds.png
167 | frankfurt/frankfurt_000000_013382_gtFine_labelIds.png
168 | frankfurt/frankfurt_000001_002512_gtFine_labelIds.png
169 | frankfurt/frankfurt_000001_032942_gtFine_labelIds.png
170 | frankfurt/frankfurt_000001_010600_gtFine_labelIds.png
171 | frankfurt/frankfurt_000001_030067_gtFine_labelIds.png
172 | frankfurt/frankfurt_000001_014741_gtFine_labelIds.png
173 | frankfurt/frankfurt_000000_021667_gtFine_labelIds.png
174 | frankfurt/frankfurt_000001_051807_gtFine_labelIds.png
175 | frankfurt/frankfurt_000001_019854_gtFine_labelIds.png
176 | frankfurt/frankfurt_000001_015768_gtFine_labelIds.png
177 | frankfurt/frankfurt_000001_007857_gtFine_labelIds.png
178 | frankfurt/frankfurt_000001_058914_gtFine_labelIds.png
179 | frankfurt/frankfurt_000000_012868_gtFine_labelIds.png
180 | frankfurt/frankfurt_000000_013942_gtFine_labelIds.png
181 | frankfurt/frankfurt_000001_014406_gtFine_labelIds.png
182 | frankfurt/frankfurt_000001_049298_gtFine_labelIds.png
183 | frankfurt/frankfurt_000001_023769_gtFine_labelIds.png
184 | frankfurt/frankfurt_000001_012519_gtFine_labelIds.png
185 | frankfurt/frankfurt_000001_064925_gtFine_labelIds.png
186 | frankfurt/frankfurt_000001_072295_gtFine_labelIds.png
187 | frankfurt/frankfurt_000001_058504_gtFine_labelIds.png
188 | frankfurt/frankfurt_000001_059119_gtFine_labelIds.png
189 | frankfurt/frankfurt_000001_015091_gtFine_labelIds.png
190 | frankfurt/frankfurt_000001_058057_gtFine_labelIds.png
191 | frankfurt/frankfurt_000001_003056_gtFine_labelIds.png
192 | frankfurt/frankfurt_000001_007622_gtFine_labelIds.png
193 | frankfurt/frankfurt_000001_016273_gtFine_labelIds.png
194 | frankfurt/frankfurt_000001_035864_gtFine_labelIds.png
195 | frankfurt/frankfurt_000001_067092_gtFine_labelIds.png
196 | frankfurt/frankfurt_000000_013067_gtFine_labelIds.png
197 | frankfurt/frankfurt_000001_067474_gtFine_labelIds.png
198 | frankfurt/frankfurt_000001_060135_gtFine_labelIds.png
199 | frankfurt/frankfurt_000000_018797_gtFine_labelIds.png
200 | frankfurt/frankfurt_000000_005898_gtFine_labelIds.png
201 | frankfurt/frankfurt_000001_055603_gtFine_labelIds.png
202 | frankfurt/frankfurt_000001_060906_gtFine_labelIds.png
203 | frankfurt/frankfurt_000001_062653_gtFine_labelIds.png
204 | frankfurt/frankfurt_000000_004617_gtFine_labelIds.png
205 | frankfurt/frankfurt_000001_055538_gtFine_labelIds.png
206 | frankfurt/frankfurt_000000_008451_gtFine_labelIds.png
207 | frankfurt/frankfurt_000001_052594_gtFine_labelIds.png
208 | frankfurt/frankfurt_000001_004327_gtFine_labelIds.png
209 | frankfurt/frankfurt_000001_075296_gtFine_labelIds.png
210 | frankfurt/frankfurt_000001_073088_gtFine_labelIds.png
211 | frankfurt/frankfurt_000001_005184_gtFine_labelIds.png
212 | frankfurt/frankfurt_000000_016286_gtFine_labelIds.png
213 | frankfurt/frankfurt_000001_008688_gtFine_labelIds.png
214 | frankfurt/frankfurt_000000_011074_gtFine_labelIds.png
215 | frankfurt/frankfurt_000001_056580_gtFine_labelIds.png
216 | frankfurt/frankfurt_000001_067735_gtFine_labelIds.png
217 | frankfurt/frankfurt_000001_034047_gtFine_labelIds.png
218 | frankfurt/frankfurt_000001_076502_gtFine_labelIds.png
219 | frankfurt/frankfurt_000001_071288_gtFine_labelIds.png
220 | frankfurt/frankfurt_000001_067295_gtFine_labelIds.png
221 | frankfurt/frankfurt_000001_071781_gtFine_labelIds.png
222 | frankfurt/frankfurt_000000_012121_gtFine_labelIds.png
223 | frankfurt/frankfurt_000001_004859_gtFine_labelIds.png
224 | frankfurt/frankfurt_000001_073911_gtFine_labelIds.png
225 | frankfurt/frankfurt_000001_047552_gtFine_labelIds.png
226 | frankfurt/frankfurt_000001_037705_gtFine_labelIds.png
227 | frankfurt/frankfurt_000001_025512_gtFine_labelIds.png
228 | frankfurt/frankfurt_000001_047178_gtFine_labelIds.png
229 | frankfurt/frankfurt_000001_014221_gtFine_labelIds.png
230 | frankfurt/frankfurt_000000_007365_gtFine_labelIds.png
231 | frankfurt/frankfurt_000001_049698_gtFine_labelIds.png
232 | frankfurt/frankfurt_000001_065160_gtFine_labelIds.png
233 | frankfurt/frankfurt_000001_061763_gtFine_labelIds.png
234 | frankfurt/frankfurt_000000_010351_gtFine_labelIds.png
235 | frankfurt/frankfurt_000001_072155_gtFine_labelIds.png
236 | frankfurt/frankfurt_000001_023235_gtFine_labelIds.png
237 | frankfurt/frankfurt_000000_015389_gtFine_labelIds.png
238 | frankfurt/frankfurt_000000_009688_gtFine_labelIds.png
239 | frankfurt/frankfurt_000000_016005_gtFine_labelIds.png
240 | frankfurt/frankfurt_000001_054640_gtFine_labelIds.png
241 | frankfurt/frankfurt_000001_029600_gtFine_labelIds.png
242 | frankfurt/frankfurt_000001_028232_gtFine_labelIds.png
243 | frankfurt/frankfurt_000001_050686_gtFine_labelIds.png
244 | frankfurt/frankfurt_000001_013496_gtFine_labelIds.png
245 | frankfurt/frankfurt_000001_066092_gtFine_labelIds.png
246 | frankfurt/frankfurt_000001_009854_gtFine_labelIds.png
247 | frankfurt/frankfurt_000001_067178_gtFine_labelIds.png
248 | frankfurt/frankfurt_000001_028854_gtFine_labelIds.png
249 | frankfurt/frankfurt_000001_083199_gtFine_labelIds.png
250 | frankfurt/frankfurt_000001_064798_gtFine_labelIds.png
251 | frankfurt/frankfurt_000001_018113_gtFine_labelIds.png
252 | frankfurt/frankfurt_000001_050149_gtFine_labelIds.png
253 | frankfurt/frankfurt_000001_048196_gtFine_labelIds.png
254 | frankfurt/frankfurt_000000_001236_gtFine_labelIds.png
255 | frankfurt/frankfurt_000000_017476_gtFine_labelIds.png
256 | frankfurt/frankfurt_000001_003588_gtFine_labelIds.png
257 | frankfurt/frankfurt_000001_021825_gtFine_labelIds.png
258 | frankfurt/frankfurt_000000_010763_gtFine_labelIds.png
259 | frankfurt/frankfurt_000001_062793_gtFine_labelIds.png
260 | frankfurt/frankfurt_000001_029236_gtFine_labelIds.png
261 | frankfurt/frankfurt_000001_075984_gtFine_labelIds.png
262 | frankfurt/frankfurt_000001_031266_gtFine_labelIds.png
263 | frankfurt/frankfurt_000001_043395_gtFine_labelIds.png
264 | frankfurt/frankfurt_000001_040732_gtFine_labelIds.png
265 | frankfurt/frankfurt_000001_011162_gtFine_labelIds.png
266 | frankfurt/frankfurt_000000_012009_gtFine_labelIds.png
267 | frankfurt/frankfurt_000001_042733_gtFine_labelIds.png
268 | lindau/lindau_000052_000019_gtFine_labelIds.png
269 | lindau/lindau_000009_000019_gtFine_labelIds.png
270 | lindau/lindau_000037_000019_gtFine_labelIds.png
271 | lindau/lindau_000047_000019_gtFine_labelIds.png
272 | lindau/lindau_000015_000019_gtFine_labelIds.png
273 | lindau/lindau_000030_000019_gtFine_labelIds.png
274 | lindau/lindau_000012_000019_gtFine_labelIds.png
275 | lindau/lindau_000032_000019_gtFine_labelIds.png
276 | lindau/lindau_000046_000019_gtFine_labelIds.png
277 | lindau/lindau_000000_000019_gtFine_labelIds.png
278 | lindau/lindau_000031_000019_gtFine_labelIds.png
279 | lindau/lindau_000011_000019_gtFine_labelIds.png
280 | lindau/lindau_000027_000019_gtFine_labelIds.png
281 | lindau/lindau_000054_000019_gtFine_labelIds.png
282 | lindau/lindau_000026_000019_gtFine_labelIds.png
283 | lindau/lindau_000017_000019_gtFine_labelIds.png
284 | lindau/lindau_000023_000019_gtFine_labelIds.png
285 | lindau/lindau_000005_000019_gtFine_labelIds.png
286 | lindau/lindau_000056_000019_gtFine_labelIds.png
287 | lindau/lindau_000025_000019_gtFine_labelIds.png
288 | lindau/lindau_000045_000019_gtFine_labelIds.png
289 | lindau/lindau_000014_000019_gtFine_labelIds.png
290 | lindau/lindau_000004_000019_gtFine_labelIds.png
291 | lindau/lindau_000021_000019_gtFine_labelIds.png
292 | lindau/lindau_000049_000019_gtFine_labelIds.png
293 | lindau/lindau_000033_000019_gtFine_labelIds.png
294 | lindau/lindau_000042_000019_gtFine_labelIds.png
295 | lindau/lindau_000013_000019_gtFine_labelIds.png
296 | lindau/lindau_000024_000019_gtFine_labelIds.png
297 | lindau/lindau_000002_000019_gtFine_labelIds.png
298 | lindau/lindau_000043_000019_gtFine_labelIds.png
299 | lindau/lindau_000016_000019_gtFine_labelIds.png
300 | lindau/lindau_000050_000019_gtFine_labelIds.png
301 | lindau/lindau_000018_000019_gtFine_labelIds.png
302 | lindau/lindau_000007_000019_gtFine_labelIds.png
303 | lindau/lindau_000048_000019_gtFine_labelIds.png
304 | lindau/lindau_000022_000019_gtFine_labelIds.png
305 | lindau/lindau_000053_000019_gtFine_labelIds.png
306 | lindau/lindau_000038_000019_gtFine_labelIds.png
307 | lindau/lindau_000001_000019_gtFine_labelIds.png
308 | lindau/lindau_000036_000019_gtFine_labelIds.png
309 | lindau/lindau_000035_000019_gtFine_labelIds.png
310 | lindau/lindau_000003_000019_gtFine_labelIds.png
311 | lindau/lindau_000034_000019_gtFine_labelIds.png
312 | lindau/lindau_000010_000019_gtFine_labelIds.png
313 | lindau/lindau_000055_000019_gtFine_labelIds.png
314 | lindau/lindau_000006_000019_gtFine_labelIds.png
315 | lindau/lindau_000019_000019_gtFine_labelIds.png
316 | lindau/lindau_000029_000019_gtFine_labelIds.png
317 | lindau/lindau_000039_000019_gtFine_labelIds.png
318 | lindau/lindau_000051_000019_gtFine_labelIds.png
319 | lindau/lindau_000020_000019_gtFine_labelIds.png
320 | lindau/lindau_000057_000019_gtFine_labelIds.png
321 | lindau/lindau_000041_000019_gtFine_labelIds.png
322 | lindau/lindau_000040_000019_gtFine_labelIds.png
323 | lindau/lindau_000044_000019_gtFine_labelIds.png
324 | lindau/lindau_000028_000019_gtFine_labelIds.png
325 | lindau/lindau_000058_000019_gtFine_labelIds.png
326 | lindau/lindau_000008_000019_gtFine_labelIds.png
327 | munster/munster_000000_000019_gtFine_labelIds.png
328 | munster/munster_000012_000019_gtFine_labelIds.png
329 | munster/munster_000032_000019_gtFine_labelIds.png
330 | munster/munster_000068_000019_gtFine_labelIds.png
331 | munster/munster_000101_000019_gtFine_labelIds.png
332 | munster/munster_000153_000019_gtFine_labelIds.png
333 | munster/munster_000115_000019_gtFine_labelIds.png
334 | munster/munster_000029_000019_gtFine_labelIds.png
335 | munster/munster_000019_000019_gtFine_labelIds.png
336 | munster/munster_000156_000019_gtFine_labelIds.png
337 | munster/munster_000129_000019_gtFine_labelIds.png
338 | munster/munster_000169_000019_gtFine_labelIds.png
339 | munster/munster_000150_000019_gtFine_labelIds.png
340 | munster/munster_000165_000019_gtFine_labelIds.png
341 | munster/munster_000050_000019_gtFine_labelIds.png
342 | munster/munster_000025_000019_gtFine_labelIds.png
343 | munster/munster_000116_000019_gtFine_labelIds.png
344 | munster/munster_000132_000019_gtFine_labelIds.png
345 | munster/munster_000066_000019_gtFine_labelIds.png
346 | munster/munster_000096_000019_gtFine_labelIds.png
347 | munster/munster_000030_000019_gtFine_labelIds.png
348 | munster/munster_000146_000019_gtFine_labelIds.png
349 | munster/munster_000098_000019_gtFine_labelIds.png
350 | munster/munster_000059_000019_gtFine_labelIds.png
351 | munster/munster_000093_000019_gtFine_labelIds.png
352 | munster/munster_000122_000019_gtFine_labelIds.png
353 | munster/munster_000024_000019_gtFine_labelIds.png
354 | munster/munster_000036_000019_gtFine_labelIds.png
355 | munster/munster_000086_000019_gtFine_labelIds.png
356 | munster/munster_000163_000019_gtFine_labelIds.png
357 | munster/munster_000001_000019_gtFine_labelIds.png
358 | munster/munster_000053_000019_gtFine_labelIds.png
359 | munster/munster_000071_000019_gtFine_labelIds.png
360 | munster/munster_000079_000019_gtFine_labelIds.png
361 | munster/munster_000159_000019_gtFine_labelIds.png
362 | munster/munster_000038_000019_gtFine_labelIds.png
363 | munster/munster_000138_000019_gtFine_labelIds.png
364 | munster/munster_000135_000019_gtFine_labelIds.png
365 | munster/munster_000065_000019_gtFine_labelIds.png
366 | munster/munster_000139_000019_gtFine_labelIds.png
367 | munster/munster_000108_000019_gtFine_labelIds.png
368 | munster/munster_000020_000019_gtFine_labelIds.png
369 | munster/munster_000074_000019_gtFine_labelIds.png
370 | munster/munster_000035_000019_gtFine_labelIds.png
371 | munster/munster_000067_000019_gtFine_labelIds.png
372 | munster/munster_000151_000019_gtFine_labelIds.png
373 | munster/munster_000083_000019_gtFine_labelIds.png
374 | munster/munster_000118_000019_gtFine_labelIds.png
375 | munster/munster_000046_000019_gtFine_labelIds.png
376 | munster/munster_000147_000019_gtFine_labelIds.png
377 | munster/munster_000047_000019_gtFine_labelIds.png
378 | munster/munster_000043_000019_gtFine_labelIds.png
379 | munster/munster_000168_000019_gtFine_labelIds.png
380 | munster/munster_000167_000019_gtFine_labelIds.png
381 | munster/munster_000021_000019_gtFine_labelIds.png
382 | munster/munster_000073_000019_gtFine_labelIds.png
383 | munster/munster_000089_000019_gtFine_labelIds.png
384 | munster/munster_000060_000019_gtFine_labelIds.png
385 | munster/munster_000155_000019_gtFine_labelIds.png
386 | munster/munster_000140_000019_gtFine_labelIds.png
387 | munster/munster_000145_000019_gtFine_labelIds.png
388 | munster/munster_000077_000019_gtFine_labelIds.png
389 | munster/munster_000018_000019_gtFine_labelIds.png
390 | munster/munster_000045_000019_gtFine_labelIds.png
391 | munster/munster_000166_000019_gtFine_labelIds.png
392 | munster/munster_000037_000019_gtFine_labelIds.png
393 | munster/munster_000112_000019_gtFine_labelIds.png
394 | munster/munster_000080_000019_gtFine_labelIds.png
395 | munster/munster_000144_000019_gtFine_labelIds.png
396 | munster/munster_000142_000019_gtFine_labelIds.png
397 | munster/munster_000070_000019_gtFine_labelIds.png
398 | munster/munster_000044_000019_gtFine_labelIds.png
399 | munster/munster_000137_000019_gtFine_labelIds.png
400 | munster/munster_000041_000019_gtFine_labelIds.png
401 | munster/munster_000113_000019_gtFine_labelIds.png
402 | munster/munster_000075_000019_gtFine_labelIds.png
403 | munster/munster_000157_000019_gtFine_labelIds.png
404 | munster/munster_000158_000019_gtFine_labelIds.png
405 | munster/munster_000109_000019_gtFine_labelIds.png
406 | munster/munster_000033_000019_gtFine_labelIds.png
407 | munster/munster_000088_000019_gtFine_labelIds.png
408 | munster/munster_000090_000019_gtFine_labelIds.png
409 | munster/munster_000114_000019_gtFine_labelIds.png
410 | munster/munster_000171_000019_gtFine_labelIds.png
411 | munster/munster_000013_000019_gtFine_labelIds.png
412 | munster/munster_000130_000019_gtFine_labelIds.png
413 | munster/munster_000016_000019_gtFine_labelIds.png
414 | munster/munster_000136_000019_gtFine_labelIds.png
415 | munster/munster_000007_000019_gtFine_labelIds.png
416 | munster/munster_000014_000019_gtFine_labelIds.png
417 | munster/munster_000052_000019_gtFine_labelIds.png
418 | munster/munster_000104_000019_gtFine_labelIds.png
419 | munster/munster_000173_000019_gtFine_labelIds.png
420 | munster/munster_000057_000019_gtFine_labelIds.png
421 | munster/munster_000072_000019_gtFine_labelIds.png
422 | munster/munster_000003_000019_gtFine_labelIds.png
423 | munster/munster_000161_000019_gtFine_labelIds.png
424 | munster/munster_000002_000019_gtFine_labelIds.png
425 | munster/munster_000028_000019_gtFine_labelIds.png
426 | munster/munster_000051_000019_gtFine_labelIds.png
427 | munster/munster_000105_000019_gtFine_labelIds.png
428 | munster/munster_000061_000019_gtFine_labelIds.png
429 | munster/munster_000058_000019_gtFine_labelIds.png
430 | munster/munster_000094_000019_gtFine_labelIds.png
431 | munster/munster_000027_000019_gtFine_labelIds.png
432 | munster/munster_000062_000019_gtFine_labelIds.png
433 | munster/munster_000127_000019_gtFine_labelIds.png
434 | munster/munster_000110_000019_gtFine_labelIds.png
435 | munster/munster_000170_000019_gtFine_labelIds.png
436 | munster/munster_000023_000019_gtFine_labelIds.png
437 | munster/munster_000084_000019_gtFine_labelIds.png
438 | munster/munster_000121_000019_gtFine_labelIds.png
439 | munster/munster_000087_000019_gtFine_labelIds.png
440 | munster/munster_000097_000019_gtFine_labelIds.png
441 | munster/munster_000119_000019_gtFine_labelIds.png
442 | munster/munster_000128_000019_gtFine_labelIds.png
443 | munster/munster_000078_000019_gtFine_labelIds.png
444 | munster/munster_000010_000019_gtFine_labelIds.png
445 | munster/munster_000015_000019_gtFine_labelIds.png
446 | munster/munster_000048_000019_gtFine_labelIds.png
447 | munster/munster_000085_000019_gtFine_labelIds.png
448 | munster/munster_000164_000019_gtFine_labelIds.png
449 | munster/munster_000111_000019_gtFine_labelIds.png
450 | munster/munster_000099_000019_gtFine_labelIds.png
451 | munster/munster_000117_000019_gtFine_labelIds.png
452 | munster/munster_000009_000019_gtFine_labelIds.png
453 | munster/munster_000049_000019_gtFine_labelIds.png
454 | munster/munster_000148_000019_gtFine_labelIds.png
455 | munster/munster_000022_000019_gtFine_labelIds.png
456 | munster/munster_000131_000019_gtFine_labelIds.png
457 | munster/munster_000006_000019_gtFine_labelIds.png
458 | munster/munster_000005_000019_gtFine_labelIds.png
459 | munster/munster_000102_000019_gtFine_labelIds.png
460 | munster/munster_000160_000019_gtFine_labelIds.png
461 | munster/munster_000107_000019_gtFine_labelIds.png
462 | munster/munster_000095_000019_gtFine_labelIds.png
463 | munster/munster_000106_000019_gtFine_labelIds.png
464 | munster/munster_000034_000019_gtFine_labelIds.png
465 | munster/munster_000143_000019_gtFine_labelIds.png
466 | munster/munster_000017_000019_gtFine_labelIds.png
467 | munster/munster_000040_000019_gtFine_labelIds.png
468 | munster/munster_000152_000019_gtFine_labelIds.png
469 | munster/munster_000154_000019_gtFine_labelIds.png
470 | munster/munster_000100_000019_gtFine_labelIds.png
471 | munster/munster_000004_000019_gtFine_labelIds.png
472 | munster/munster_000141_000019_gtFine_labelIds.png
473 | munster/munster_000011_000019_gtFine_labelIds.png
474 | munster/munster_000055_000019_gtFine_labelIds.png
475 | munster/munster_000134_000019_gtFine_labelIds.png
476 | munster/munster_000054_000019_gtFine_labelIds.png
477 | munster/munster_000064_000019_gtFine_labelIds.png
478 | munster/munster_000039_000019_gtFine_labelIds.png
479 | munster/munster_000103_000019_gtFine_labelIds.png
480 | munster/munster_000092_000019_gtFine_labelIds.png
481 | munster/munster_000172_000019_gtFine_labelIds.png
482 | munster/munster_000042_000019_gtFine_labelIds.png
483 | munster/munster_000124_000019_gtFine_labelIds.png
484 | munster/munster_000069_000019_gtFine_labelIds.png
485 | munster/munster_000026_000019_gtFine_labelIds.png
486 | munster/munster_000120_000019_gtFine_labelIds.png
487 | munster/munster_000031_000019_gtFine_labelIds.png
488 | munster/munster_000162_000019_gtFine_labelIds.png
489 | munster/munster_000056_000019_gtFine_labelIds.png
490 | munster/munster_000081_000019_gtFine_labelIds.png
491 | munster/munster_000123_000019_gtFine_labelIds.png
492 | munster/munster_000125_000019_gtFine_labelIds.png
493 | munster/munster_000082_000019_gtFine_labelIds.png
494 | munster/munster_000133_000019_gtFine_labelIds.png
495 | munster/munster_000126_000019_gtFine_labelIds.png
496 | munster/munster_000063_000019_gtFine_labelIds.png
497 | munster/munster_000008_000019_gtFine_labelIds.png
498 | munster/munster_000149_000019_gtFine_labelIds.png
499 | munster/munster_000076_000019_gtFine_labelIds.png
500 | munster/munster_000091_000019_gtFine_labelIds.png
501 |
--------------------------------------------------------------------------------
/dataset/cityscapes_list/val.txt:
--------------------------------------------------------------------------------
1 | frankfurt/frankfurt_000001_007973_leftImg8bit.png
2 | frankfurt/frankfurt_000001_025921_leftImg8bit.png
3 | frankfurt/frankfurt_000001_062016_leftImg8bit.png
4 | frankfurt/frankfurt_000001_049078_leftImg8bit.png
5 | frankfurt/frankfurt_000000_009561_leftImg8bit.png
6 | frankfurt/frankfurt_000001_013710_leftImg8bit.png
7 | frankfurt/frankfurt_000001_041664_leftImg8bit.png
8 | frankfurt/frankfurt_000000_013240_leftImg8bit.png
9 | frankfurt/frankfurt_000001_044787_leftImg8bit.png
10 | frankfurt/frankfurt_000001_015328_leftImg8bit.png
11 | frankfurt/frankfurt_000001_073243_leftImg8bit.png
12 | frankfurt/frankfurt_000001_034816_leftImg8bit.png
13 | frankfurt/frankfurt_000001_041074_leftImg8bit.png
14 | frankfurt/frankfurt_000001_005898_leftImg8bit.png
15 | frankfurt/frankfurt_000000_022254_leftImg8bit.png
16 | frankfurt/frankfurt_000001_044658_leftImg8bit.png
17 | frankfurt/frankfurt_000001_009504_leftImg8bit.png
18 | frankfurt/frankfurt_000001_024927_leftImg8bit.png
19 | frankfurt/frankfurt_000001_017842_leftImg8bit.png
20 | frankfurt/frankfurt_000001_068208_leftImg8bit.png
21 | frankfurt/frankfurt_000001_013016_leftImg8bit.png
22 | frankfurt/frankfurt_000001_010156_leftImg8bit.png
23 | frankfurt/frankfurt_000000_002963_leftImg8bit.png
24 | frankfurt/frankfurt_000001_020693_leftImg8bit.png
25 | frankfurt/frankfurt_000001_078803_leftImg8bit.png
26 | frankfurt/frankfurt_000001_025713_leftImg8bit.png
27 | frankfurt/frankfurt_000001_007285_leftImg8bit.png
28 | frankfurt/frankfurt_000001_070099_leftImg8bit.png
29 | frankfurt/frankfurt_000000_009291_leftImg8bit.png
30 | frankfurt/frankfurt_000000_019607_leftImg8bit.png
31 | frankfurt/frankfurt_000001_068063_leftImg8bit.png
32 | frankfurt/frankfurt_000000_003920_leftImg8bit.png
33 | frankfurt/frankfurt_000001_077233_leftImg8bit.png
34 | frankfurt/frankfurt_000001_029086_leftImg8bit.png
35 | frankfurt/frankfurt_000001_060545_leftImg8bit.png
36 | frankfurt/frankfurt_000001_001464_leftImg8bit.png
37 | frankfurt/frankfurt_000001_028590_leftImg8bit.png
38 | frankfurt/frankfurt_000001_016462_leftImg8bit.png
39 | frankfurt/frankfurt_000001_060422_leftImg8bit.png
40 | frankfurt/frankfurt_000001_009058_leftImg8bit.png
41 | frankfurt/frankfurt_000001_080830_leftImg8bit.png
42 | frankfurt/frankfurt_000001_012870_leftImg8bit.png
43 | frankfurt/frankfurt_000001_077434_leftImg8bit.png
44 | frankfurt/frankfurt_000001_033655_leftImg8bit.png
45 | frankfurt/frankfurt_000001_051516_leftImg8bit.png
46 | frankfurt/frankfurt_000001_044413_leftImg8bit.png
47 | frankfurt/frankfurt_000001_055172_leftImg8bit.png
48 | frankfurt/frankfurt_000001_040575_leftImg8bit.png
49 | frankfurt/frankfurt_000000_020215_leftImg8bit.png
50 | frankfurt/frankfurt_000000_017228_leftImg8bit.png
51 | frankfurt/frankfurt_000001_041354_leftImg8bit.png
52 | frankfurt/frankfurt_000000_008206_leftImg8bit.png
53 | frankfurt/frankfurt_000001_043564_leftImg8bit.png
54 | frankfurt/frankfurt_000001_032711_leftImg8bit.png
55 | frankfurt/frankfurt_000001_064130_leftImg8bit.png
56 | frankfurt/frankfurt_000001_053102_leftImg8bit.png
57 | frankfurt/frankfurt_000001_082087_leftImg8bit.png
58 | frankfurt/frankfurt_000001_057478_leftImg8bit.png
59 | frankfurt/frankfurt_000001_007407_leftImg8bit.png
60 | frankfurt/frankfurt_000001_008200_leftImg8bit.png
61 | frankfurt/frankfurt_000001_038844_leftImg8bit.png
62 | frankfurt/frankfurt_000001_016029_leftImg8bit.png
63 | frankfurt/frankfurt_000001_058176_leftImg8bit.png
64 | frankfurt/frankfurt_000001_057181_leftImg8bit.png
65 | frankfurt/frankfurt_000001_039895_leftImg8bit.png
66 | frankfurt/frankfurt_000000_000294_leftImg8bit.png
67 | frankfurt/frankfurt_000001_055062_leftImg8bit.png
68 | frankfurt/frankfurt_000001_083029_leftImg8bit.png
69 | frankfurt/frankfurt_000001_010444_leftImg8bit.png
70 | frankfurt/frankfurt_000001_041517_leftImg8bit.png
71 | frankfurt/frankfurt_000001_069633_leftImg8bit.png
72 | frankfurt/frankfurt_000001_020287_leftImg8bit.png
73 | frankfurt/frankfurt_000001_012038_leftImg8bit.png
74 | frankfurt/frankfurt_000001_046504_leftImg8bit.png
75 | frankfurt/frankfurt_000001_032556_leftImg8bit.png
76 | frankfurt/frankfurt_000000_001751_leftImg8bit.png
77 | frankfurt/frankfurt_000001_000538_leftImg8bit.png
78 | frankfurt/frankfurt_000001_083852_leftImg8bit.png
79 | frankfurt/frankfurt_000001_077092_leftImg8bit.png
80 | frankfurt/frankfurt_000001_017101_leftImg8bit.png
81 | frankfurt/frankfurt_000001_044525_leftImg8bit.png
82 | frankfurt/frankfurt_000001_005703_leftImg8bit.png
83 | frankfurt/frankfurt_000001_080391_leftImg8bit.png
84 | frankfurt/frankfurt_000001_038418_leftImg8bit.png
85 | frankfurt/frankfurt_000001_066832_leftImg8bit.png
86 | frankfurt/frankfurt_000000_003357_leftImg8bit.png
87 | frankfurt/frankfurt_000000_020880_leftImg8bit.png
88 | frankfurt/frankfurt_000001_062396_leftImg8bit.png
89 | frankfurt/frankfurt_000001_046272_leftImg8bit.png
90 | frankfurt/frankfurt_000001_062509_leftImg8bit.png
91 | frankfurt/frankfurt_000001_054415_leftImg8bit.png
92 | frankfurt/frankfurt_000001_021406_leftImg8bit.png
93 | frankfurt/frankfurt_000001_030310_leftImg8bit.png
94 | frankfurt/frankfurt_000000_014480_leftImg8bit.png
95 | frankfurt/frankfurt_000001_005410_leftImg8bit.png
96 | frankfurt/frankfurt_000000_022797_leftImg8bit.png
97 | frankfurt/frankfurt_000001_035144_leftImg8bit.png
98 | frankfurt/frankfurt_000001_014565_leftImg8bit.png
99 | frankfurt/frankfurt_000001_065850_leftImg8bit.png
100 | frankfurt/frankfurt_000000_000576_leftImg8bit.png
101 | frankfurt/frankfurt_000001_065617_leftImg8bit.png
102 | frankfurt/frankfurt_000000_005543_leftImg8bit.png
103 | frankfurt/frankfurt_000001_055709_leftImg8bit.png
104 | frankfurt/frankfurt_000001_027325_leftImg8bit.png
105 | frankfurt/frankfurt_000001_011835_leftImg8bit.png
106 | frankfurt/frankfurt_000001_046779_leftImg8bit.png
107 | frankfurt/frankfurt_000001_064305_leftImg8bit.png
108 | frankfurt/frankfurt_000001_012738_leftImg8bit.png
109 | frankfurt/frankfurt_000001_048355_leftImg8bit.png
110 | frankfurt/frankfurt_000001_019969_leftImg8bit.png
111 | frankfurt/frankfurt_000001_080091_leftImg8bit.png
112 | frankfurt/frankfurt_000000_011007_leftImg8bit.png
113 | frankfurt/frankfurt_000000_015676_leftImg8bit.png
114 | frankfurt/frankfurt_000001_044227_leftImg8bit.png
115 | frankfurt/frankfurt_000001_055387_leftImg8bit.png
116 | frankfurt/frankfurt_000001_038245_leftImg8bit.png
117 | frankfurt/frankfurt_000001_059642_leftImg8bit.png
118 | frankfurt/frankfurt_000001_030669_leftImg8bit.png
119 | frankfurt/frankfurt_000001_068772_leftImg8bit.png
120 | frankfurt/frankfurt_000001_079206_leftImg8bit.png
121 | frankfurt/frankfurt_000001_055306_leftImg8bit.png
122 | frankfurt/frankfurt_000001_012699_leftImg8bit.png
123 | frankfurt/frankfurt_000001_042384_leftImg8bit.png
124 | frankfurt/frankfurt_000001_054077_leftImg8bit.png
125 | frankfurt/frankfurt_000001_010830_leftImg8bit.png
126 | frankfurt/frankfurt_000001_052120_leftImg8bit.png
127 | frankfurt/frankfurt_000001_032018_leftImg8bit.png
128 | frankfurt/frankfurt_000001_051737_leftImg8bit.png
129 | frankfurt/frankfurt_000001_028335_leftImg8bit.png
130 | frankfurt/frankfurt_000001_049770_leftImg8bit.png
131 | frankfurt/frankfurt_000001_054884_leftImg8bit.png
132 | frankfurt/frankfurt_000001_019698_leftImg8bit.png
133 | frankfurt/frankfurt_000000_011461_leftImg8bit.png
134 | frankfurt/frankfurt_000000_001016_leftImg8bit.png
135 | frankfurt/frankfurt_000001_062250_leftImg8bit.png
136 | frankfurt/frankfurt_000001_004736_leftImg8bit.png
137 | frankfurt/frankfurt_000001_068682_leftImg8bit.png
138 | frankfurt/frankfurt_000000_006589_leftImg8bit.png
139 | frankfurt/frankfurt_000000_011810_leftImg8bit.png
140 | frankfurt/frankfurt_000001_066574_leftImg8bit.png
141 | frankfurt/frankfurt_000001_048654_leftImg8bit.png
142 | frankfurt/frankfurt_000001_049209_leftImg8bit.png
143 | frankfurt/frankfurt_000001_042098_leftImg8bit.png
144 | frankfurt/frankfurt_000001_031416_leftImg8bit.png
145 | frankfurt/frankfurt_000000_009969_leftImg8bit.png
146 | frankfurt/frankfurt_000001_038645_leftImg8bit.png
147 | frankfurt/frankfurt_000001_020046_leftImg8bit.png
148 | frankfurt/frankfurt_000001_054219_leftImg8bit.png
149 | frankfurt/frankfurt_000001_002759_leftImg8bit.png
150 | frankfurt/frankfurt_000001_066438_leftImg8bit.png
151 | frankfurt/frankfurt_000000_020321_leftImg8bit.png
152 | frankfurt/frankfurt_000001_002646_leftImg8bit.png
153 | frankfurt/frankfurt_000001_046126_leftImg8bit.png
154 | frankfurt/frankfurt_000000_002196_leftImg8bit.png
155 | frankfurt/frankfurt_000001_057954_leftImg8bit.png
156 | frankfurt/frankfurt_000001_011715_leftImg8bit.png
157 | frankfurt/frankfurt_000000_021879_leftImg8bit.png
158 | frankfurt/frankfurt_000001_082466_leftImg8bit.png
159 | frankfurt/frankfurt_000000_003025_leftImg8bit.png
160 | frankfurt/frankfurt_000001_023369_leftImg8bit.png
161 | frankfurt/frankfurt_000001_061682_leftImg8bit.png
162 | frankfurt/frankfurt_000001_017459_leftImg8bit.png
163 | frankfurt/frankfurt_000001_059789_leftImg8bit.png
164 | frankfurt/frankfurt_000001_073464_leftImg8bit.png
165 | frankfurt/frankfurt_000001_063045_leftImg8bit.png
166 | frankfurt/frankfurt_000001_064651_leftImg8bit.png
167 | frankfurt/frankfurt_000000_013382_leftImg8bit.png
168 | frankfurt/frankfurt_000001_002512_leftImg8bit.png
169 | frankfurt/frankfurt_000001_032942_leftImg8bit.png
170 | frankfurt/frankfurt_000001_010600_leftImg8bit.png
171 | frankfurt/frankfurt_000001_030067_leftImg8bit.png
172 | frankfurt/frankfurt_000001_014741_leftImg8bit.png
173 | frankfurt/frankfurt_000000_021667_leftImg8bit.png
174 | frankfurt/frankfurt_000001_051807_leftImg8bit.png
175 | frankfurt/frankfurt_000001_019854_leftImg8bit.png
176 | frankfurt/frankfurt_000001_015768_leftImg8bit.png
177 | frankfurt/frankfurt_000001_007857_leftImg8bit.png
178 | frankfurt/frankfurt_000001_058914_leftImg8bit.png
179 | frankfurt/frankfurt_000000_012868_leftImg8bit.png
180 | frankfurt/frankfurt_000000_013942_leftImg8bit.png
181 | frankfurt/frankfurt_000001_014406_leftImg8bit.png
182 | frankfurt/frankfurt_000001_049298_leftImg8bit.png
183 | frankfurt/frankfurt_000001_023769_leftImg8bit.png
184 | frankfurt/frankfurt_000001_012519_leftImg8bit.png
185 | frankfurt/frankfurt_000001_064925_leftImg8bit.png
186 | frankfurt/frankfurt_000001_072295_leftImg8bit.png
187 | frankfurt/frankfurt_000001_058504_leftImg8bit.png
188 | frankfurt/frankfurt_000001_059119_leftImg8bit.png
189 | frankfurt/frankfurt_000001_015091_leftImg8bit.png
190 | frankfurt/frankfurt_000001_058057_leftImg8bit.png
191 | frankfurt/frankfurt_000001_003056_leftImg8bit.png
192 | frankfurt/frankfurt_000001_007622_leftImg8bit.png
193 | frankfurt/frankfurt_000001_016273_leftImg8bit.png
194 | frankfurt/frankfurt_000001_035864_leftImg8bit.png
195 | frankfurt/frankfurt_000001_067092_leftImg8bit.png
196 | frankfurt/frankfurt_000000_013067_leftImg8bit.png
197 | frankfurt/frankfurt_000001_067474_leftImg8bit.png
198 | frankfurt/frankfurt_000001_060135_leftImg8bit.png
199 | frankfurt/frankfurt_000000_018797_leftImg8bit.png
200 | frankfurt/frankfurt_000000_005898_leftImg8bit.png
201 | frankfurt/frankfurt_000001_055603_leftImg8bit.png
202 | frankfurt/frankfurt_000001_060906_leftImg8bit.png
203 | frankfurt/frankfurt_000001_062653_leftImg8bit.png
204 | frankfurt/frankfurt_000000_004617_leftImg8bit.png
205 | frankfurt/frankfurt_000001_055538_leftImg8bit.png
206 | frankfurt/frankfurt_000000_008451_leftImg8bit.png
207 | frankfurt/frankfurt_000001_052594_leftImg8bit.png
208 | frankfurt/frankfurt_000001_004327_leftImg8bit.png
209 | frankfurt/frankfurt_000001_075296_leftImg8bit.png
210 | frankfurt/frankfurt_000001_073088_leftImg8bit.png
211 | frankfurt/frankfurt_000001_005184_leftImg8bit.png
212 | frankfurt/frankfurt_000000_016286_leftImg8bit.png
213 | frankfurt/frankfurt_000001_008688_leftImg8bit.png
214 | frankfurt/frankfurt_000000_011074_leftImg8bit.png
215 | frankfurt/frankfurt_000001_056580_leftImg8bit.png
216 | frankfurt/frankfurt_000001_067735_leftImg8bit.png
217 | frankfurt/frankfurt_000001_034047_leftImg8bit.png
218 | frankfurt/frankfurt_000001_076502_leftImg8bit.png
219 | frankfurt/frankfurt_000001_071288_leftImg8bit.png
220 | frankfurt/frankfurt_000001_067295_leftImg8bit.png
221 | frankfurt/frankfurt_000001_071781_leftImg8bit.png
222 | frankfurt/frankfurt_000000_012121_leftImg8bit.png
223 | frankfurt/frankfurt_000001_004859_leftImg8bit.png
224 | frankfurt/frankfurt_000001_073911_leftImg8bit.png
225 | frankfurt/frankfurt_000001_047552_leftImg8bit.png
226 | frankfurt/frankfurt_000001_037705_leftImg8bit.png
227 | frankfurt/frankfurt_000001_025512_leftImg8bit.png
228 | frankfurt/frankfurt_000001_047178_leftImg8bit.png
229 | frankfurt/frankfurt_000001_014221_leftImg8bit.png
230 | frankfurt/frankfurt_000000_007365_leftImg8bit.png
231 | frankfurt/frankfurt_000001_049698_leftImg8bit.png
232 | frankfurt/frankfurt_000001_065160_leftImg8bit.png
233 | frankfurt/frankfurt_000001_061763_leftImg8bit.png
234 | frankfurt/frankfurt_000000_010351_leftImg8bit.png
235 | frankfurt/frankfurt_000001_072155_leftImg8bit.png
236 | frankfurt/frankfurt_000001_023235_leftImg8bit.png
237 | frankfurt/frankfurt_000000_015389_leftImg8bit.png
238 | frankfurt/frankfurt_000000_009688_leftImg8bit.png
239 | frankfurt/frankfurt_000000_016005_leftImg8bit.png
240 | frankfurt/frankfurt_000001_054640_leftImg8bit.png
241 | frankfurt/frankfurt_000001_029600_leftImg8bit.png
242 | frankfurt/frankfurt_000001_028232_leftImg8bit.png
243 | frankfurt/frankfurt_000001_050686_leftImg8bit.png
244 | frankfurt/frankfurt_000001_013496_leftImg8bit.png
245 | frankfurt/frankfurt_000001_066092_leftImg8bit.png
246 | frankfurt/frankfurt_000001_009854_leftImg8bit.png
247 | frankfurt/frankfurt_000001_067178_leftImg8bit.png
248 | frankfurt/frankfurt_000001_028854_leftImg8bit.png
249 | frankfurt/frankfurt_000001_083199_leftImg8bit.png
250 | frankfurt/frankfurt_000001_064798_leftImg8bit.png
251 | frankfurt/frankfurt_000001_018113_leftImg8bit.png
252 | frankfurt/frankfurt_000001_050149_leftImg8bit.png
253 | frankfurt/frankfurt_000001_048196_leftImg8bit.png
254 | frankfurt/frankfurt_000000_001236_leftImg8bit.png
255 | frankfurt/frankfurt_000000_017476_leftImg8bit.png
256 | frankfurt/frankfurt_000001_003588_leftImg8bit.png
257 | frankfurt/frankfurt_000001_021825_leftImg8bit.png
258 | frankfurt/frankfurt_000000_010763_leftImg8bit.png
259 | frankfurt/frankfurt_000001_062793_leftImg8bit.png
260 | frankfurt/frankfurt_000001_029236_leftImg8bit.png
261 | frankfurt/frankfurt_000001_075984_leftImg8bit.png
262 | frankfurt/frankfurt_000001_031266_leftImg8bit.png
263 | frankfurt/frankfurt_000001_043395_leftImg8bit.png
264 | frankfurt/frankfurt_000001_040732_leftImg8bit.png
265 | frankfurt/frankfurt_000001_011162_leftImg8bit.png
266 | frankfurt/frankfurt_000000_012009_leftImg8bit.png
267 | frankfurt/frankfurt_000001_042733_leftImg8bit.png
268 | lindau/lindau_000052_000019_leftImg8bit.png
269 | lindau/lindau_000009_000019_leftImg8bit.png
270 | lindau/lindau_000037_000019_leftImg8bit.png
271 | lindau/lindau_000047_000019_leftImg8bit.png
272 | lindau/lindau_000015_000019_leftImg8bit.png
273 | lindau/lindau_000030_000019_leftImg8bit.png
274 | lindau/lindau_000012_000019_leftImg8bit.png
275 | lindau/lindau_000032_000019_leftImg8bit.png
276 | lindau/lindau_000046_000019_leftImg8bit.png
277 | lindau/lindau_000000_000019_leftImg8bit.png
278 | lindau/lindau_000031_000019_leftImg8bit.png
279 | lindau/lindau_000011_000019_leftImg8bit.png
280 | lindau/lindau_000027_000019_leftImg8bit.png
281 | lindau/lindau_000054_000019_leftImg8bit.png
282 | lindau/lindau_000026_000019_leftImg8bit.png
283 | lindau/lindau_000017_000019_leftImg8bit.png
284 | lindau/lindau_000023_000019_leftImg8bit.png
285 | lindau/lindau_000005_000019_leftImg8bit.png
286 | lindau/lindau_000056_000019_leftImg8bit.png
287 | lindau/lindau_000025_000019_leftImg8bit.png
288 | lindau/lindau_000045_000019_leftImg8bit.png
289 | lindau/lindau_000014_000019_leftImg8bit.png
290 | lindau/lindau_000004_000019_leftImg8bit.png
291 | lindau/lindau_000021_000019_leftImg8bit.png
292 | lindau/lindau_000049_000019_leftImg8bit.png
293 | lindau/lindau_000033_000019_leftImg8bit.png
294 | lindau/lindau_000042_000019_leftImg8bit.png
295 | lindau/lindau_000013_000019_leftImg8bit.png
296 | lindau/lindau_000024_000019_leftImg8bit.png
297 | lindau/lindau_000002_000019_leftImg8bit.png
298 | lindau/lindau_000043_000019_leftImg8bit.png
299 | lindau/lindau_000016_000019_leftImg8bit.png
300 | lindau/lindau_000050_000019_leftImg8bit.png
301 | lindau/lindau_000018_000019_leftImg8bit.png
302 | lindau/lindau_000007_000019_leftImg8bit.png
303 | lindau/lindau_000048_000019_leftImg8bit.png
304 | lindau/lindau_000022_000019_leftImg8bit.png
305 | lindau/lindau_000053_000019_leftImg8bit.png
306 | lindau/lindau_000038_000019_leftImg8bit.png
307 | lindau/lindau_000001_000019_leftImg8bit.png
308 | lindau/lindau_000036_000019_leftImg8bit.png
309 | lindau/lindau_000035_000019_leftImg8bit.png
310 | lindau/lindau_000003_000019_leftImg8bit.png
311 | lindau/lindau_000034_000019_leftImg8bit.png
312 | lindau/lindau_000010_000019_leftImg8bit.png
313 | lindau/lindau_000055_000019_leftImg8bit.png
314 | lindau/lindau_000006_000019_leftImg8bit.png
315 | lindau/lindau_000019_000019_leftImg8bit.png
316 | lindau/lindau_000029_000019_leftImg8bit.png
317 | lindau/lindau_000039_000019_leftImg8bit.png
318 | lindau/lindau_000051_000019_leftImg8bit.png
319 | lindau/lindau_000020_000019_leftImg8bit.png
320 | lindau/lindau_000057_000019_leftImg8bit.png
321 | lindau/lindau_000041_000019_leftImg8bit.png
322 | lindau/lindau_000040_000019_leftImg8bit.png
323 | lindau/lindau_000044_000019_leftImg8bit.png
324 | lindau/lindau_000028_000019_leftImg8bit.png
325 | lindau/lindau_000058_000019_leftImg8bit.png
326 | lindau/lindau_000008_000019_leftImg8bit.png
327 | munster/munster_000000_000019_leftImg8bit.png
328 | munster/munster_000012_000019_leftImg8bit.png
329 | munster/munster_000032_000019_leftImg8bit.png
330 | munster/munster_000068_000019_leftImg8bit.png
331 | munster/munster_000101_000019_leftImg8bit.png
332 | munster/munster_000153_000019_leftImg8bit.png
333 | munster/munster_000115_000019_leftImg8bit.png
334 | munster/munster_000029_000019_leftImg8bit.png
335 | munster/munster_000019_000019_leftImg8bit.png
336 | munster/munster_000156_000019_leftImg8bit.png
337 | munster/munster_000129_000019_leftImg8bit.png
338 | munster/munster_000169_000019_leftImg8bit.png
339 | munster/munster_000150_000019_leftImg8bit.png
340 | munster/munster_000165_000019_leftImg8bit.png
341 | munster/munster_000050_000019_leftImg8bit.png
342 | munster/munster_000025_000019_leftImg8bit.png
343 | munster/munster_000116_000019_leftImg8bit.png
344 | munster/munster_000132_000019_leftImg8bit.png
345 | munster/munster_000066_000019_leftImg8bit.png
346 | munster/munster_000096_000019_leftImg8bit.png
347 | munster/munster_000030_000019_leftImg8bit.png
348 | munster/munster_000146_000019_leftImg8bit.png
349 | munster/munster_000098_000019_leftImg8bit.png
350 | munster/munster_000059_000019_leftImg8bit.png
351 | munster/munster_000093_000019_leftImg8bit.png
352 | munster/munster_000122_000019_leftImg8bit.png
353 | munster/munster_000024_000019_leftImg8bit.png
354 | munster/munster_000036_000019_leftImg8bit.png
355 | munster/munster_000086_000019_leftImg8bit.png
356 | munster/munster_000163_000019_leftImg8bit.png
357 | munster/munster_000001_000019_leftImg8bit.png
358 | munster/munster_000053_000019_leftImg8bit.png
359 | munster/munster_000071_000019_leftImg8bit.png
360 | munster/munster_000079_000019_leftImg8bit.png
361 | munster/munster_000159_000019_leftImg8bit.png
362 | munster/munster_000038_000019_leftImg8bit.png
363 | munster/munster_000138_000019_leftImg8bit.png
364 | munster/munster_000135_000019_leftImg8bit.png
365 | munster/munster_000065_000019_leftImg8bit.png
366 | munster/munster_000139_000019_leftImg8bit.png
367 | munster/munster_000108_000019_leftImg8bit.png
368 | munster/munster_000020_000019_leftImg8bit.png
369 | munster/munster_000074_000019_leftImg8bit.png
370 | munster/munster_000035_000019_leftImg8bit.png
371 | munster/munster_000067_000019_leftImg8bit.png
372 | munster/munster_000151_000019_leftImg8bit.png
373 | munster/munster_000083_000019_leftImg8bit.png
374 | munster/munster_000118_000019_leftImg8bit.png
375 | munster/munster_000046_000019_leftImg8bit.png
376 | munster/munster_000147_000019_leftImg8bit.png
377 | munster/munster_000047_000019_leftImg8bit.png
378 | munster/munster_000043_000019_leftImg8bit.png
379 | munster/munster_000168_000019_leftImg8bit.png
380 | munster/munster_000167_000019_leftImg8bit.png
381 | munster/munster_000021_000019_leftImg8bit.png
382 | munster/munster_000073_000019_leftImg8bit.png
383 | munster/munster_000089_000019_leftImg8bit.png
384 | munster/munster_000060_000019_leftImg8bit.png
385 | munster/munster_000155_000019_leftImg8bit.png
386 | munster/munster_000140_000019_leftImg8bit.png
387 | munster/munster_000145_000019_leftImg8bit.png
388 | munster/munster_000077_000019_leftImg8bit.png
389 | munster/munster_000018_000019_leftImg8bit.png
390 | munster/munster_000045_000019_leftImg8bit.png
391 | munster/munster_000166_000019_leftImg8bit.png
392 | munster/munster_000037_000019_leftImg8bit.png
393 | munster/munster_000112_000019_leftImg8bit.png
394 | munster/munster_000080_000019_leftImg8bit.png
395 | munster/munster_000144_000019_leftImg8bit.png
396 | munster/munster_000142_000019_leftImg8bit.png
397 | munster/munster_000070_000019_leftImg8bit.png
398 | munster/munster_000044_000019_leftImg8bit.png
399 | munster/munster_000137_000019_leftImg8bit.png
400 | munster/munster_000041_000019_leftImg8bit.png
401 | munster/munster_000113_000019_leftImg8bit.png
402 | munster/munster_000075_000019_leftImg8bit.png
403 | munster/munster_000157_000019_leftImg8bit.png
404 | munster/munster_000158_000019_leftImg8bit.png
405 | munster/munster_000109_000019_leftImg8bit.png
406 | munster/munster_000033_000019_leftImg8bit.png
407 | munster/munster_000088_000019_leftImg8bit.png
408 | munster/munster_000090_000019_leftImg8bit.png
409 | munster/munster_000114_000019_leftImg8bit.png
410 | munster/munster_000171_000019_leftImg8bit.png
411 | munster/munster_000013_000019_leftImg8bit.png
412 | munster/munster_000130_000019_leftImg8bit.png
413 | munster/munster_000016_000019_leftImg8bit.png
414 | munster/munster_000136_000019_leftImg8bit.png
415 | munster/munster_000007_000019_leftImg8bit.png
416 | munster/munster_000014_000019_leftImg8bit.png
417 | munster/munster_000052_000019_leftImg8bit.png
418 | munster/munster_000104_000019_leftImg8bit.png
419 | munster/munster_000173_000019_leftImg8bit.png
420 | munster/munster_000057_000019_leftImg8bit.png
421 | munster/munster_000072_000019_leftImg8bit.png
422 | munster/munster_000003_000019_leftImg8bit.png
423 | munster/munster_000161_000019_leftImg8bit.png
424 | munster/munster_000002_000019_leftImg8bit.png
425 | munster/munster_000028_000019_leftImg8bit.png
426 | munster/munster_000051_000019_leftImg8bit.png
427 | munster/munster_000105_000019_leftImg8bit.png
428 | munster/munster_000061_000019_leftImg8bit.png
429 | munster/munster_000058_000019_leftImg8bit.png
430 | munster/munster_000094_000019_leftImg8bit.png
431 | munster/munster_000027_000019_leftImg8bit.png
432 | munster/munster_000062_000019_leftImg8bit.png
433 | munster/munster_000127_000019_leftImg8bit.png
434 | munster/munster_000110_000019_leftImg8bit.png
435 | munster/munster_000170_000019_leftImg8bit.png
436 | munster/munster_000023_000019_leftImg8bit.png
437 | munster/munster_000084_000019_leftImg8bit.png
438 | munster/munster_000121_000019_leftImg8bit.png
439 | munster/munster_000087_000019_leftImg8bit.png
440 | munster/munster_000097_000019_leftImg8bit.png
441 | munster/munster_000119_000019_leftImg8bit.png
442 | munster/munster_000128_000019_leftImg8bit.png
443 | munster/munster_000078_000019_leftImg8bit.png
444 | munster/munster_000010_000019_leftImg8bit.png
445 | munster/munster_000015_000019_leftImg8bit.png
446 | munster/munster_000048_000019_leftImg8bit.png
447 | munster/munster_000085_000019_leftImg8bit.png
448 | munster/munster_000164_000019_leftImg8bit.png
449 | munster/munster_000111_000019_leftImg8bit.png
450 | munster/munster_000099_000019_leftImg8bit.png
451 | munster/munster_000117_000019_leftImg8bit.png
452 | munster/munster_000009_000019_leftImg8bit.png
453 | munster/munster_000049_000019_leftImg8bit.png
454 | munster/munster_000148_000019_leftImg8bit.png
455 | munster/munster_000022_000019_leftImg8bit.png
456 | munster/munster_000131_000019_leftImg8bit.png
457 | munster/munster_000006_000019_leftImg8bit.png
458 | munster/munster_000005_000019_leftImg8bit.png
459 | munster/munster_000102_000019_leftImg8bit.png
460 | munster/munster_000160_000019_leftImg8bit.png
461 | munster/munster_000107_000019_leftImg8bit.png
462 | munster/munster_000095_000019_leftImg8bit.png
463 | munster/munster_000106_000019_leftImg8bit.png
464 | munster/munster_000034_000019_leftImg8bit.png
465 | munster/munster_000143_000019_leftImg8bit.png
466 | munster/munster_000017_000019_leftImg8bit.png
467 | munster/munster_000040_000019_leftImg8bit.png
468 | munster/munster_000152_000019_leftImg8bit.png
469 | munster/munster_000154_000019_leftImg8bit.png
470 | munster/munster_000100_000019_leftImg8bit.png
471 | munster/munster_000004_000019_leftImg8bit.png
472 | munster/munster_000141_000019_leftImg8bit.png
473 | munster/munster_000011_000019_leftImg8bit.png
474 | munster/munster_000055_000019_leftImg8bit.png
475 | munster/munster_000134_000019_leftImg8bit.png
476 | munster/munster_000054_000019_leftImg8bit.png
477 | munster/munster_000064_000019_leftImg8bit.png
478 | munster/munster_000039_000019_leftImg8bit.png
479 | munster/munster_000103_000019_leftImg8bit.png
480 | munster/munster_000092_000019_leftImg8bit.png
481 | munster/munster_000172_000019_leftImg8bit.png
482 | munster/munster_000042_000019_leftImg8bit.png
483 | munster/munster_000124_000019_leftImg8bit.png
484 | munster/munster_000069_000019_leftImg8bit.png
485 | munster/munster_000026_000019_leftImg8bit.png
486 | munster/munster_000120_000019_leftImg8bit.png
487 | munster/munster_000031_000019_leftImg8bit.png
488 | munster/munster_000162_000019_leftImg8bit.png
489 | munster/munster_000056_000019_leftImg8bit.png
490 | munster/munster_000081_000019_leftImg8bit.png
491 | munster/munster_000123_000019_leftImg8bit.png
492 | munster/munster_000125_000019_leftImg8bit.png
493 | munster/munster_000082_000019_leftImg8bit.png
494 | munster/munster_000133_000019_leftImg8bit.png
495 | munster/munster_000126_000019_leftImg8bit.png
496 | munster/munster_000063_000019_leftImg8bit.png
497 | munster/munster_000008_000019_leftImg8bit.png
498 | munster/munster_000149_000019_leftImg8bit.png
499 | munster/munster_000076_000019_leftImg8bit.png
500 | munster/munster_000091_000019_leftImg8bit.png
501 |
--------------------------------------------------------------------------------
/dataset/gta5_dataset.py:
--------------------------------------------------------------------------------
1 | import os
2 | import os.path as osp
3 | import numpy as np
4 | import random
5 | import matplotlib.pyplot as plt
6 | import collections
7 | import torch
8 | import torchvision
9 | from torch.utils import data
10 | from PIL import Image
11 |
12 |
13 | class GTA5DataSet(data.Dataset):
14 | def __init__(self, root, list_path, max_iters=None, crop_size=(321, 321), mean=(128, 128, 128), scale=True, mirror=True, ignore_label=255):
15 | self.root = root
16 | self.list_path = list_path
17 | self.crop_size = crop_size
18 | self.scale = scale
19 | self.ignore_label = ignore_label
20 | self.mean = mean
21 | self.is_mirror = mirror
22 | # self.mean_bgr = np.array([104.00698793, 116.66876762, 122.67891434])
23 | self.img_ids = [i_id.strip() for i_id in open(list_path)]
24 | if max_iters is not None:
25 | self.img_ids = self.img_ids * int(np.ceil(float(max_iters) / len(self.img_ids)))
26 | self.files = []
27 |
28 | self.id_to_trainid = {7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5,
29 | 19: 6, 20: 7, 21: 8, 22: 9, 23: 10, 24: 11, 25: 12,
30 | 26: 13, 27: 14, 28: 15, 31: 16, 32: 17, 33: 18}
31 |
32 | # for split in ["train", "trainval", "val"]:
33 | for name in self.img_ids:
34 | img_file = osp.join(self.root, "images/%s" % name)
35 | label_file = osp.join(self.root, "labels/%s" % name)
36 | self.files.append({
37 | "img": img_file,
38 | "label": label_file,
39 | "name": name
40 | })
41 |
42 | def __len__(self):
43 | return len(self.files)
44 |
45 |
46 | def __getitem__(self, index):
47 | datafiles = self.files[index]
48 |
49 | image = Image.open(datafiles["img"]).convert('RGB')
50 | label = Image.open(datafiles["label"])
51 | name = datafiles["name"]
52 |
53 | # resize
54 | image = image.resize(self.crop_size, Image.BICUBIC)
55 | label = label.resize(self.crop_size, Image.NEAREST)
56 |
57 | image = np.asarray(image, np.float32)
58 | label = np.asarray(label, np.float32)
59 |
60 | # re-assign labels to match the format of Cityscapes
61 | label_copy = 255 * np.ones(label.shape, dtype=np.float32)
62 | for k, v in self.id_to_trainid.items():
63 | label_copy[label == k] = v
64 |
65 | size = image.shape
66 | image = image[:, :, ::-1] # change to BGR
67 | image -= self.mean
68 | image = image.transpose((2, 0, 1))
69 |
70 | return image.copy(), label_copy.copy(), np.array(size), name
71 |
72 |
73 | if __name__ == '__main__':
74 | dst = GTA5DataSet("./data", is_transform=True)
75 | trainloader = data.DataLoader(dst, batch_size=4)
76 | for i, data in enumerate(trainloader):
77 | imgs, labels = data
78 | if i == 0:
79 | img = torchvision.utils.make_grid(imgs).numpy()
80 | img = np.transpose(img, (1, 2, 0))
81 | img = img[:, :, ::-1]
82 | plt.imshow(img)
83 | plt.show()
84 |
--------------------------------------------------------------------------------
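A minimal usage sketch for the loader above (not part of the repo; the `./data/gta5_deeplab` root and the `(1280, 720)` crop are assumptions taken from the defaults in `train_sim.py`):

```python
from torch.utils import data
from dataset.gta5_dataset import GTA5DataSet

# crop_size is (width, height), as passed to PIL's Image.resize.
dst = GTA5DataSet("./data/gta5_deeplab", "./dataset/gta5_list/train.txt",
                  crop_size=(1280, 720))
loader = data.DataLoader(dst, batch_size=1, shuffle=True)

images, labels, size, name = next(iter(loader))
print(images.shape)  # torch.Size([1, 3, 720, 1280]) -- BGR, mean-subtracted
print(labels.shape)  # torch.Size([1, 720, 1280])    -- Cityscapes train IDs, 255 = ignore
```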
/dataset/synthia_dataset.py:
--------------------------------------------------------------------------------
1 | import os
2 | import os.path as osp
3 | import numpy as np
4 | import random
5 | import matplotlib
6 | matplotlib.use('agg')
7 | import matplotlib.pyplot as plt
8 | import collections
9 | import torch
10 | import torchvision
11 | from torch.utils import data
12 | from PIL import Image
13 | import pdb
14 | import cv2
15 | #from scipy.misc import imresize
16 |
17 | class synthiaDataSet(data.Dataset):
18 | def __init__(self, root, list_path, max_iters=None, crop_size=(321, 321), mean=(128, 128, 128), scale=True, mirror=True, ignore_label=255):
19 | self.root = root
20 | self.list_path = list_path
21 | self.crop_size = crop_size
22 | self.scale = scale
23 | self.ignore_label = ignore_label
24 | self.mean = mean
25 | self.is_mirror = mirror
26 | # self.mean_bgr = np.array([104.00698793, 116.66876762, 122.67891434])
27 | self.img_ids = [i_id.strip() for i_id in open(list_path)]
28 | if max_iters is not None:
29 | self.img_ids = self.img_ids * int(np.ceil(float(max_iters) / len(self.img_ids)))
30 | self.files = []
31 |
32 | self.id_to_trainid = {3: 0, 4: 1, 2: 2, 21: 3, 5: 4, 7: 5,
33 | 15: 6, 9: 7, 6: 8, 16: 9, 1: 10, 10: 11, 17: 12,
34 | 8: 13, 18: 14, 19: 15, 20: 16, 12: 17, 11: 18}
35 |
36 | # for split in ["train", "trainval", "val"]:
37 | for name in self.img_ids:
38 | img_file = osp.join(self.root, "RGB/%s" % name)
39 | label_file = osp.join(self.root, "synthia_mapped_to_cityscapes/%s" % name)
40 | self.files.append({
41 | "img": img_file,
42 | "label": label_file,
43 | "name": name
44 | })
45 |
46 | def __len__(self):
47 | return len(self.files)
48 |
49 |
50 | def __getitem__(self, index):
51 | datafiles = self.files[index]
52 |
53 | image = Image.open(datafiles["img"]).convert('RGB')
54 | label = Image.open(datafiles["label"])
55 | name = datafiles["name"]
56 |
57 | # resize
58 | image = image.resize(self.crop_size, Image.BICUBIC)
59 | label = label.resize(self.crop_size, Image.NEAREST)
60 |
61 | image = np.asarray(image, np.float32)
62 | label = np.asarray(label, np.float32)
63 | #labelcv = cv2.imread(datafiles["label"], cv2.IMREAD_UNCHANGED)[:,:,2].astype(np.float32)
64 | #print(label.shape, labelcv.shape)
65 | #assert label == labelcv
66 | #pdb.set_trace()
67 | #label = imresize(label[:,:,2], self.crop_size, interp='nearest').astype(np.float32)
68 | #for i in range(23):
69 | # print(np.sum(label==i))
70 | #pdb.set_trace()
71 | # re-assign labels to match the format of Cityscapes
72 | label_copy = 255 * np.ones(label.shape, dtype=np.float32)
73 | for k, v in self.id_to_trainid.items():
74 | label_copy[label == k] = v
75 |
76 | size = image.shape
77 | image = image[:, :, ::-1] # change to BGR
78 | image -= self.mean
79 | image = image.transpose((2, 0, 1))
80 |
81 | return image.copy(), label_copy.copy(), np.array(size), name
82 |
83 |
84 | if __name__ == '__main__':
85 | dst = synthiaDataSet("./data/SYNTHIA", "./dataset/synthia_list/train.txt", crop_size=(1280,760))
86 | trainloader = data.DataLoader(dst, batch_size=1)
87 | for i, batch in enumerate(trainloader):
88 | imgs, labels, _, _ = batch
89 | for j in range(13):
90 |
91 | print(j, np.sum((labels == j).numpy()))
92 | pdb.set_trace()
93 | plt.imshow(np.moveaxis(imgs.int().numpy()[0,:,:,:]+128, 0, -1))
94 | plt.show()
95 | if i == 0:
96 | img = torchvision.utils.make_grid(imgs).numpy()
97 | img = np.transpose(img, (1, 2, 0))
98 | img = img[:, :, ::-1]
99 | plt.imshow(img)
100 | plt.show()
101 | print(i)
102 |
--------------------------------------------------------------------------------
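Both loaders remap raw label IDs to Cityscapes train IDs with a per-class Python loop over `id_to_trainid`. An equivalent single-pass lookup table, sketched here only as an optional optimization (the mapping dict is copied from `synthia_dataset.py` above; the random array stands in for a decoded label map):

```python
import numpy as np

id_to_trainid = {3: 0, 4: 1, 2: 2, 21: 3, 5: 4, 7: 5,
                 15: 6, 9: 7, 6: 8, 16: 9, 1: 10, 10: 11, 17: 12,
                 8: 13, 18: 14, 19: 15, 20: 16, 12: 17, 11: 18}

# 256-entry LUT: any ID not in the mapping falls through to 255 (ignore label).
lut = np.full(256, 255, dtype=np.float32)
for k, v in id_to_trainid.items():
    lut[k] = v

label = np.random.randint(0, 23, size=(760, 1280))  # stand-in for the loaded label
label_copy = lut[label]                             # same result as the per-class loop
```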
/eva.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | file="$1"
3 | python evaluate_cityscapes.py --restore-from "$file"
4 | python compute_iou.py ./data/Cityscapes/gtFine/val result/cityscapes
5 |
--------------------------------------------------------------------------------
/evaluate_cityscapes.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import scipy
3 | from scipy import ndimage
4 | import numpy as np
5 | import sys
6 | from packaging import version
7 |
8 | import torch
9 | from torch.autograd import Variable
10 | import torchvision.models as models
11 | import torch.nn.functional as F
12 | from torch.utils import data, model_zoo
13 | from model.deeplab import Res_Deeplab
14 | from model.deeplab_multi import DeeplabMulti
15 | from model.deeplab_vgg import DeeplabVGG
16 | from dataset.cityscapes_dataset import cityscapesDataSet
17 | from collections import OrderedDict
18 | import os
19 | from PIL import Image
20 |
21 | import matplotlib.pyplot as plt
22 | import torch.nn as nn
23 | IMG_MEAN = np.array((104.00698793,116.66876762,122.67891434), dtype=np.float32)
24 |
25 | DATA_DIRECTORY = './data/Cityscapes'
26 | DATA_LIST_PATH = './dataset/cityscapes_list/val.txt'
27 | SAVE_PATH = './result/cityscapes'
28 |
29 | IGNORE_LABEL = 255
30 | NUM_CLASSES = 19
31 | NUM_STEPS = 500 # Number of images in the validation set.
32 | RESTORE_FROM = 'http://vllab.ucmerced.edu/ytsai/CVPR18/GTA2Cityscapes_multi-ed35151c.pth'
33 | RESTORE_FROM_VGG = 'http://vllab.ucmerced.edu/ytsai/CVPR18/GTA2Cityscapes_vgg-ac4ac9f6.pth'
34 | RESTORE_FROM_ORC = 'http://vllab1.ucmerced.edu/~whung/adaptSeg/cityscapes_oracle-b7b9934.pth'
35 | SET = 'val'
36 |
37 | MODEL = 'DeeplabMulti'
38 |
39 | palette = [128, 64, 128, 244, 35, 232, 70, 70, 70, 102, 102, 156, 190, 153, 153, 153, 153, 153, 250, 170, 30,
40 | 220, 220, 0, 107, 142, 35, 152, 251, 152, 70, 130, 180, 220, 20, 60, 255, 0, 0, 0, 0, 142, 0, 0, 70,
41 | 0, 60, 100, 0, 80, 100, 0, 0, 230, 119, 11, 32]
42 | zero_pad = 256 * 3 - len(palette)
43 | for i in range(zero_pad):
44 | palette.append(0)
45 |
46 |
47 | def colorize_mask(mask):
48 | # mask: numpy array of the mask
49 | new_mask = Image.fromarray(mask.astype(np.uint8)).convert('P')
50 | new_mask.putpalette(palette)
51 |
52 | return new_mask
53 |
54 | def get_arguments():
55 | """Parse all the arguments provided from the CLI.
56 |
57 | Returns:
58 | The parsed arguments as an argparse.Namespace.
59 | """
60 | parser = argparse.ArgumentParser(description="DeepLab-ResNet Network")
61 | parser.add_argument("--model", type=str, default=MODEL,
62 | help="Model Choice (DeeplabMulti/DeeplabVGG/Oracle).")
63 | parser.add_argument("--data-dir", type=str, default=DATA_DIRECTORY,
64 | help="Path to the directory containing the Cityscapes dataset.")
65 | parser.add_argument("--data-list", type=str, default=DATA_LIST_PATH,
66 | help="Path to the file listing the images in the dataset.")
67 | parser.add_argument("--ignore-label", type=int, default=IGNORE_LABEL,
68 | help="The index of the label to ignore during the training.")
69 | parser.add_argument("--num-classes", type=int, default=NUM_CLASSES,
70 | help="Number of classes to predict (including background).")
71 | parser.add_argument("--restore-from", type=str, default=RESTORE_FROM,
72 | help="Where restore model parameters from.")
73 | parser.add_argument("--gpu", type=int, default=0,
74 | help="choose gpu device.")
75 | parser.add_argument("--set", type=str, default=SET,
76 | help="choose evaluation set.")
77 | parser.add_argument("--save", type=str, default=SAVE_PATH,
78 | help="Path to save result.")
79 | return parser.parse_args()
80 |
81 |
82 | def main():
83 | """Create the model and start the evaluation process."""
84 |
85 | args = get_arguments()
86 |
87 | gpu0 = args.gpu
88 |
89 | if not os.path.exists(args.save):
90 | os.makedirs(args.save)
91 |
92 | if args.model == 'DeeplabMulti':
93 | model = DeeplabMulti(num_classes=args.num_classes)
94 | elif args.model == 'Oracle':
95 | model = Res_Deeplab(num_classes=args.num_classes)
96 | if args.restore_from == RESTORE_FROM:
97 | args.restore_from = RESTORE_FROM_ORC
98 | elif args.model == 'DeeplabVGG':
99 | model = DeeplabVGG(num_classes=args.num_classes)
100 | if args.restore_from == RESTORE_FROM:
101 | args.restore_from = RESTORE_FROM_VGG
102 |
103 | if args.restore_from[:4] == 'http' :
104 | saved_state_dict = model_zoo.load_url(args.restore_from)
105 | else:
106 | saved_state_dict = torch.load(args.restore_from)
107 | model.load_state_dict(saved_state_dict)
108 |
109 | model.eval()
110 | model.cuda(gpu0)
111 |
112 | testloader = data.DataLoader(cityscapesDataSet(args.data_dir, args.data_list, crop_size=(1024, 512), mean=IMG_MEAN, scale=False, mirror=False, set=args.set),
113 | batch_size=1, shuffle=False, pin_memory=True)
114 |
115 |
116 | if version.parse(torch.__version__) >= version.parse('0.4.0'):
117 | interp = nn.Upsample(size=(1024, 2048), mode='bilinear', align_corners=True)
118 | else:
119 | interp = nn.Upsample(size=(1024, 2048), mode='bilinear')
120 |
121 | for index, batch in enumerate(testloader):
122 | if index % 100 == 0:
123 | print('%d processed' % index)
124 | image, _, name = batch
125 | if args.model == 'DeeplabMulti':
126 | with torch.no_grad():
127 | output1, output2 = model(Variable(image).cuda(gpu0))
128 | output = interp(output2).cpu().data[0].numpy()
129 | elif args.model == 'DeeplabVGG' or args.model == 'Oracle':
130 | with torch.no_grad():
131 | output = interp(model(image.cuda(gpu0))).cpu().data[0].numpy()
132 |
133 | output = output.transpose(1,2,0)
134 | output = np.asarray(np.argmax(output, axis=2), dtype=np.uint8)
135 |
136 | output_col = colorize_mask(output)
137 | output = Image.fromarray(output)
138 |
139 | name = name[0].split('/')[-1]
140 | output.save('%s/%s' % (args.save, name))
141 | output_col.save('%s/%s_color.png' % (args.save, name.split('.')[0]))
142 |
143 |
144 | if __name__ == '__main__':
145 | main()
146 |
--------------------------------------------------------------------------------
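A short sketch of how `colorize_mask` above turns an argmax map into an indexed-color PNG (the random input is only a stand-in for a real prediction, and only the first three palette entries are spelled out):

```python
import numpy as np
from PIL import Image

# Same palette layout as evaluate_cityscapes.py: 3 ints (R, G, B) per train ID,
# zero-padded to 256 entries.
palette = [128, 64, 128, 244, 35, 232, 70, 70, 70]   # road, sidewalk, building only
palette += [0] * (256 * 3 - len(palette))

pred = np.random.randint(0, 3, size=(1024, 2048)).astype(np.uint8)  # fake argmax map
mask = Image.fromarray(pred).convert('P')
mask.putpalette(palette)
mask.save('pred_color.png')  # each pixel value indexes into the palette
```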
/model/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SHI-Labs/Unsupervised-Domain-Adaptation-with-Differential-Treatment/7438f06387e559ebaa09a02b4e9bc45c272d5edc/model/__init__.py
--------------------------------------------------------------------------------
/model/anchor_label.py:
--------------------------------------------------------------------------------
1 | from PIL import Image
2 | import numpy as np
3 | #import cv2 as cv
4 | #import cv2
5 | import random
6 | import matplotlib.pyplot as plt
7 | import matplotlib.image as mpimg
8 | import time
9 | from multiprocessing import Pool
10 | import torch
11 | import numpy as np
12 | import matplotlib.pyplot as plt
13 | from scipy import ndimage as ndi
14 |
15 | from skimage.measure import label as sklabel
16 | from skimage.feature import peak_local_max
17 | import queue
18 | import pdb
19 |
20 | #FG_LABEL = [6,7,11,12,13,14,15,16,17,18] #light,sign,person,rider,car,truck,bus,train,motor,bike
21 | FG_LABEL = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]
22 | BG_LABEL = [0,1,2,3,4,8,9,10]
23 | #w, h= 161, 91
24 | palette = [128, 64, 128, 244, 35, 232, 70, 70, 70, 102, 102, 156, 190, 153, 153, 153, 153, 153, 250, 170, 30,
25 | 220, 220, 0, 107, 142, 35, 152, 251, 152, 70, 130, 180, 220, 20, 60, 255, 0, 0, 0, 0, 142, 0, 0, 70,
26 | 0, 60, 100, 0, 80, 100, 0, 0, 230, 119, 11, 32]
27 |
28 | def save_img(img_path, anchors):
29 | img = Image.open(img_path).convert('RGB')
30 | w, h= 1280, 720
31 | img = img.resize((w,h), Image.BICUBIC)
32 |
33 | img = np.asarray(img)
34 | ih, iw, _ = img.shape
35 | for i in range(len(FG_LABEL)):
36 | if len(anchors[i]) > 0:
37 | color = (palette[3*FG_LABEL[i]], palette[3*FG_LABEL[i]+1], palette[3*FG_LABEL[i]+2])
38 | for j in range(len(anchors[i])):
39 | x1, y1, x2, y2, _ = anchors[i][j,:].astype(int)
40 | #cv2.rectangle(img, (y1, x1), (y2, x2), color, 2)
41 | imgsave = Image.fromarray(img, 'RGB')
42 | imgsave.save('sample.png')
43 | #pdb.set_trace()
44 | #img=mpimg.imread('sample.png')
45 | #imgplot = plt.imshow(img)
46 | #plt.show()
47 |
48 |
49 | def colorize_mask(mask):
50 | # mask: numpy array of the mask
51 | new_mask = Image.fromarray(mask.astype(np.uint8)).convert('P')
52 | new_mask.putpalette(palette)
53 |
54 | return new_mask
55 |
56 | def anchorsbi(label, origin_size=(720,1280), iou_thresh=0.4):
57 | h,w = label.shape[0], label.shape[1]
58 | h_scale, w_scale = float(origin_size[0])/h, float(origin_size[1])/w
59 | hthre = np.ceil(32./h_scale)
60 | wthre = np.ceil(32./w_scale)
61 | anchors = []
62 | for fg in FG_LABEL:
63 | mask = label==fg
64 | foreidx = 1
65 | if torch.sum(mask)>0:
66 |
67 | masknp = mask.cpu().clone().detach().numpy().astype(int)
68 | seg, foreidx = sklabel(masknp, background=0, return_num=True, connectivity=2)
69 | foreidx += 1
70 |
71 | anc_cls = np.zeros((foreidx-1,5))
72 | for fi in range(1, foreidx):
73 | x,y = np.where(seg==fi)
74 | anc_cls[fi-1,:4] = np.min(x), np.min(y), np.max(x), np.max(y)
75 | area = (anc_cls[fi-1,2] - anc_cls[fi-1,0])*(anc_cls[fi-1,3] - anc_cls[fi-1,1])
76 | anc_cls[fi-1,4] = float(len(x)) / max(area, 1e-5)
77 | if len(anc_cls) > 0:
78 | hdis = anc_cls[:,2] - anc_cls[:,0]
79 | wdis = anc_cls[:,3] - anc_cls[:,1]
80 | anc_cls = anc_cls[np.where((hdis>=hthre)&(wdis>=wthre))[0],:]
81 | area = (anc_cls[:,2] - anc_cls[:,0])*(anc_cls[:,3] - anc_cls[:,1])
82 | sortidx = np.argsort(area)[::-1]
83 | anc_cls = anc_cls[sortidx,:]
84 | if len(anc_cls) > 0:
85 | anc_cls = anc_cls[np.where(anc_cls[:,4]>=iou_thresh)[0],:]
86 | if len(anc_cls) > 0:
87 | anc_cls[:,0] = np.floor(h_scale*anc_cls[:,0])
88 | anc_cls[:,2] = np.ceil(h_scale*anc_cls[:,2])
89 | anc_cls[:,2] = np.where(anc_cls[:,2]
--------------------------------------------------------------------------------
/model/fcn8s.py:
--------------------------------------------------------------------------------
189 | if len(optimizer.param_groups) > 1:
190 | optimizer.param_groups[1]['lr'] = args.learning_rate * (0.1**(int(i/50000))) * 2
191 |
192 | def CrossEntropy2d(self, predict, target, weight=None, size_average=True):
193 | assert not target.requires_grad
194 | assert predict.dim() == 4
195 | assert target.dim() == 3
196 | assert predict.size(0) == target.size(0), "{0} vs {1} ".format(predict.size(0), target.size(0))
197 | assert predict.size(2) == target.size(1), "{0} vs {1} ".format(predict.size(2), target.size(1))
198 | assert predict.size(3) == target.size(2), "{0} vs {1} ".format(predict.size(3), target.size(2))
199 | n, c, h, w = predict.size()
200 | target_mask = (target >= 0) * (target != 255)
201 | target = target[target_mask]
202 | if not target.data.dim():
203 | return Variable(torch.zeros(1))
204 | predict = predict.transpose(1, 2).transpose(2, 3).contiguous()
205 | predict = predict[target_mask.view(n, h, w, 1).repeat(1, 1, 1, c)].view(-1, c)
206 | loss = F.cross_entropy(predict, target, weight=weight, size_average=size_average)
207 | return loss
208 |
209 | def VGG16_FCN8s(num_classes=21, init_weights=None, restore_from=None):
210 | model = FCN8s(num_classes=num_classes)
211 | if init_weights is not None:
212 | model.load_state_dict(torch.load(init_weights, map_location=lambda storage, loc: storage))
213 | if restore_from is not None:
214 | model.load_state_dict(torch.load(restore_from + '.pth', map_location=lambda storage, loc: storage))
215 | return model
--------------------------------------------------------------------------------
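The anchor extraction in `anchorsbi` (and `seg_label` in `train_sim.py`) rests on connected-component labelling. A self-contained sketch of that core step, without the size and density filtering the repo adds on top (the two blobs are fabricated for illustration):

```python
import numpy as np
from skimage.measure import label as sklabel

mask = np.zeros((91, 161), dtype=int)
mask[10:20, 30:50] = 1      # a fake instance of some foreground class
mask[60:70, 100:120] = 1    # a second, disconnected instance

# connectivity=2 treats diagonal neighbours as connected, as in anchor_label.py.
seg, num = sklabel(mask, background=0, return_num=True, connectivity=2)

boxes = []
for idx in range(1, num + 1):            # component 0 is background
    xs, ys = np.where(seg == idx)
    boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
print(boxes)  # one (x_min, y_min, x_max, y_max) box per component
```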
/requirements.txt:
--------------------------------------------------------------------------------
1 | torch==1.3
2 | torchvision==0.4.1
3 | scipy==1.4
4 | tensorboardX==2.0
5 | scikit-image==0.16.2
6 | Pillow==7.1
7 | opencv-python==4.1.1.26
8 |
--------------------------------------------------------------------------------
/train_sim.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 | import torch.nn as nn
4 | from torch.utils import data, model_zoo
5 | import numpy as np
6 | import pickle
7 | from torch.autograd import Variable
8 | import torch.optim as optim
9 | import scipy.misc
10 | import torch.backends.cudnn as cudnn
11 | import torch.nn.functional as F
12 | import sys
13 | import os
14 | import os.path as osp
15 | import random
16 | from tensorboardX import SummaryWriter
17 | import PIL.Image as Image
18 | try:
19 | from apex import amp
20 | APEX_AVAILABLE = True
21 | except ModuleNotFoundError:
22 | APEX_AVAILABLE = False
23 | from model.deeplab_multi import DeeplabMultiFeature
24 | from model.discriminator import FCDiscriminator
25 | from utils.loss import CrossEntropy2d
26 | from utils.functions import *
27 | from dataset.gta5_dataset import GTA5DataSet
28 | from dataset.synthia_dataset import synthiaDataSet
29 | from dataset.cityscapes_dataset import cityscapesDataSet
30 | from skimage.measure import label as sklabel
31 | from compute_iou import compute_mIoU
32 | import pdb
33 |
34 | IMG_MEAN = np.array((104.00698793, 116.66876762, 122.67891434), dtype=np.float32)
35 | BG_LABEL = [0,1,2,3,4,8,9,10]
36 | FG_LABEL = [5,6,7,11,12,13,14,15,16,17,18]
37 |
38 | MODEL = 'DeepLab'
39 | BATCH_SIZE = 1
40 | ITER_SIZE = 1
41 | NUM_WORKERS = 4
42 | DATA_DIRECTORY = './data/gta5_deeplab'
43 | DATA_LIST_PATH = './dataset/gta5_list/train.txt'
44 | IGNORE_LABEL = 255
45 | INPUT_SIZE = '1280,720'
46 | DATA_DIRECTORY_TARGET = './data/Cityscapes'
47 | DATA_LIST_PATH_TARGET = './dataset/cityscapes_list/train.txt'
48 | DATA_LIST_PATH_TARGET_TEST = './dataset/cityscapes_list/val.txt'
49 | INPUT_SIZE_TARGET = '1024,512'
50 | LEARNING_RATE = 2.5e-4
51 | MOMENTUM = 0.9
52 | NUM_CLASSES = 19
53 | NUM_STEPS = 250000
54 | NUM_STEPS_STOP = 200000 # early stopping
55 | NUM_PROTOTYPE = 50
56 | POWER = 0.9
57 | RANDOM_SEED = 1234
58 | RESTORE_FROM = 'http://vllab.ucmerced.edu/ytsai/CVPR18/DeepLab_resnet_pretrained_init-f81d91e8.pth'
59 | SAVE_NUM_IMAGES = 2
60 | SAVE_PRED_EVERY = 5000
61 | SNAPSHOT_DIR = './snapshots/'
62 | WEIGHT_DECAY = 0.0005
63 | LOG_DIR = './log'
64 | SAVE_PATH = './result/cityscapes'
65 |
66 | LEARNING_RATE_D = 1e-4
67 | LAMBDA_ADV_TARGET = 0.001
68 |
69 | TARGET = 'cityscapes'
70 | SET = 'train'
71 |
72 | LAMBDA_ADV_CLS = 0.01
73 | LAMBDA_ADV_INS = 0.01
74 |
75 | def get_arguments():
76 | """Parse all the arguments provided from the CLI.
77 |
78 | Returns:
79 | The parsed arguments as an argparse.Namespace.
80 | """
81 | parser = argparse.ArgumentParser(description="DeepLab-ResNet Network")
82 | parser.add_argument("--model", type=str, default=MODEL,
83 | help="available options : DeepLab")
84 | parser.add_argument("--target", type=str, default=TARGET,
85 | help="available options : cityscapes")
86 | parser.add_argument("--batch-size", type=int, default=BATCH_SIZE,
87 | help="Number of images sent to the network in one step.")
88 | parser.add_argument("--iter-size", type=int, default=ITER_SIZE,
89 | help="Accumulate gradients for ITER_SIZE iterations.")
90 | parser.add_argument("--num-workers", type=int, default=NUM_WORKERS,
91 | help="number of workers for multithread dataloading.")
92 | parser.add_argument("--data-dir", type=str, default=DATA_DIRECTORY,
93 | help="Path to the directory containing the source dataset.")
94 | parser.add_argument("--data-list", type=str, default=DATA_LIST_PATH,
95 | help="Path to the file listing the images in the source dataset.")
96 | parser.add_argument("--ignore-label", type=int, default=IGNORE_LABEL,
97 | help="The index of the label to ignore during the training.")
98 | parser.add_argument("--input-size", type=str, default=INPUT_SIZE,
99 | help="Comma-separated string with height and width of source images.")
100 | parser.add_argument("--data-dir-target", type=str, default=DATA_DIRECTORY_TARGET,
101 | help="Path to the directory containing the target dataset.")
102 | parser.add_argument("--data-list-target", type=str, default=DATA_LIST_PATH_TARGET,
103 | help="Path to the file listing the images in the target dataset.")
104 | parser.add_argument("--data-list-target-test", type=str, default=DATA_LIST_PATH_TARGET_TEST,
105 | help="Path to the file listing the images in the target val dataset.")
106 | parser.add_argument("--input-size-target", type=str, default=INPUT_SIZE_TARGET,
107 | help="Comma-separated string with height and width of target images.")
108 | parser.add_argument("--is-training", action="store_true",
109 | help="Whether to updates the running means and variances during the training.")
110 | parser.add_argument("--learning-rate", type=float, default=LEARNING_RATE,
111 | help="Base learning rate for training with polynomial decay.")
112 | parser.add_argument("--learning-rate-D", type=float, default=LEARNING_RATE_D,
113 | help="Base learning rate for discriminator.")
114 | parser.add_argument("--lambda-adv-target", type=float, default=LAMBDA_ADV_TARGET,
115 | help="lambda_adv for adversarial training.")
116 | parser.add_argument("--lambda-adv-cls", type=float, default=LAMBDA_ADV_CLS,
117 | help="lambda_cls for adversarial training.")
118 | parser.add_argument("--lambda-adv-ins", type=float, default=LAMBDA_ADV_INS,
119 | help="lambda_ins for adversarial training.")
120 | parser.add_argument("--momentum", type=float, default=MOMENTUM,
121 | help="Momentum component of the optimiser.")
122 | parser.add_argument("--not-restore-last", action="store_true",
123 | help="Whether to not restore last (FC) layers.")
124 | parser.add_argument("--num-classes", type=int, default=NUM_CLASSES,
125 | help="Number of classes to predict (including background).")
126 | parser.add_argument("--num-steps", type=int, default=NUM_STEPS,
127 | help="Number of training steps.")
128 | parser.add_argument("--num-steps-stop", type=int, default=NUM_STEPS_STOP,
129 | help="Number of training steps for early stopping.")
130 | parser.add_argument("--num-prototype", type=int, default=NUM_PROTOTYPE,
131 | help="Number of prototypes.")
132 | parser.add_argument("--power", type=float, default=POWER,
133 | help="Decay parameter to compute the learning rate.")
134 | parser.add_argument("--random-mirror", action="store_true",
135 | help="Whether to randomly mirror the inputs during the training.")
136 | parser.add_argument("--random-scale", action="store_true",
137 | help="Whether to randomly scale the inputs during the training.")
138 | parser.add_argument("--random-seed", type=int, default=RANDOM_SEED,
139 | help="Random seed to have reproducible results.")
140 | parser.add_argument("--restore-from", type=str, default=RESTORE_FROM,
141 | help="Where restore model parameters from.")
142 | parser.add_argument("--save-num-images", type=int, default=SAVE_NUM_IMAGES,
143 | help="How many images to save.")
144 | parser.add_argument("--save-pred-every", type=int, default=SAVE_PRED_EVERY,
145 | help="Save summaries and checkpoint every often.")
146 | parser.add_argument("--snapshot-dir", type=str, default=SNAPSHOT_DIR,
147 | help="Where to save snapshots of the model.")
148 | parser.add_argument("--weight-decay", type=float, default=WEIGHT_DECAY,
149 | help="Regularisation parameter for L2-loss.")
150 | parser.add_argument("--cpu", action='store_true', help="choose to use cpu device.")
151 | parser.add_argument("--tensorboard", action='store_true', help="choose whether to use tensorboard.")
152 | parser.add_argument("--log-dir", type=str, default=LOG_DIR,
153 | help="Path to the directory of log.")
154 | parser.add_argument("--set", type=str, default=SET,
155 | help="choose adaptation set.")
156 | parser.add_argument("--continue-train", action="store_true",
157 | help="continue training")
158 | parser.add_argument("--save", type=str, default=SAVE_PATH,
159 | help="Path to save result.")
160 | return parser.parse_args()
161 |
162 |
163 | args = get_arguments()
164 |
165 |
166 | def lr_poly(base_lr, iter, max_iter, power):
167 | return base_lr * ((1 - float(iter) / max_iter) ** (power))
168 |
169 |
170 | def adjust_learning_rate(optimizer, i_iter):
171 | lr = lr_poly(args.learning_rate, i_iter, args.num_steps, args.power)
172 | optimizer.param_groups[0]['lr'] = lr
173 | if len(optimizer.param_groups) > 1:
174 | optimizer.param_groups[1]['lr'] = lr * 10
175 |
176 |
177 | def adjust_learning_rate_D(optimizer, i_iter):
178 | lr = lr_poly(args.learning_rate_D, i_iter, args.num_steps, args.power)
179 | optimizer.param_groups[0]['lr'] = lr
180 | if len(optimizer.param_groups) > 1:
181 | optimizer.param_groups[1]['lr'] = lr * 10
182 |
183 |
184 | def amp_backward(loss, optimizer, retain_graph=False):
185 | if APEX_AVAILABLE:
186 | with amp.scale_loss(loss, optimizer) as scaled_loss:
187 | scaled_loss.backward(retain_graph=retain_graph)
188 | else:
189 | loss.backward(retain_graph=retain_graph)
190 |
191 | def seg_label(label):
192 | segs = []
193 | for fg in FG_LABEL:
194 | mask = label==fg
195 | if torch.sum(mask)>0:
196 | masknp = mask.cpu().numpy().astype(int)
197 | seg, forenum = sklabel(masknp, background=0, return_num=True, connectivity=2)
198 | seg = torch.LongTensor(seg).cuda()
199 | pixelnum = np.zeros(forenum, dtype=int)
200 | for i in range(forenum):
201 | pixelnum[i] = torch.sum(seg==(i+1)).item()
202 | segs.append([seg, pixelnum])
203 | else:
204 | segs.append([mask.long(), np.zeros(0)])
205 | return segs
206 |
207 |
208 |
209 | def main():
210 | """Create the model and start the training."""
211 |
212 | device = torch.device("cuda" if not args.cpu else "cpu")
213 | cudnn.benchmark = True
214 | cudnn.enabled = True
215 |
216 | w, h = map(int, args.input_size.split(','))
217 | input_size = (w, h)
218 |
219 | w, h = map(int, args.input_size_target.split(','))
220 | input_size_target = (w, h)
221 |
222 | Iter = 0
223 | bestIoU = 0
224 |
225 | # Create network
226 | # init G
227 | if args.model == 'DeepLab':
228 | model = DeeplabMultiFeature(num_classes=args.num_classes)
229 | if args.restore_from[:4] == 'http' :
230 | saved_state_dict = model_zoo.load_url(args.restore_from)
231 | else:
232 | saved_state_dict = torch.load(args.restore_from)
233 | if args.continue_train:
234 | if list(saved_state_dict.keys())[0].split('.')[0] == 'module':
235 | for key in saved_state_dict.keys():
236 | saved_state_dict['.'.join(key.split('.')[1:])] = saved_state_dict.pop(key)
237 | model.load_state_dict(saved_state_dict)
238 | else:
239 | new_params = model.state_dict().copy()
240 | for i in saved_state_dict:
241 | i_parts = i.split('.')
242 | if not args.num_classes == 19 or not i_parts[1] == 'layer5':
243 | new_params['.'.join(i_parts[1:])] = saved_state_dict[i]
244 | model.load_state_dict(new_params)
245 |
246 | # init D
247 | model_D = FCDiscriminator(num_classes=args.num_classes).to(device)
248 |
249 | if args.continue_train:
250 | model_weights_path = args.restore_from
251 | temp = model_weights_path.split('.')
252 | temp[-2] = temp[-2] + '_D'
253 | model_D_weights_path = '.'.join(temp)
254 | model_D.load_state_dict(torch.load(model_D_weights_path))
255 | temp = model_weights_path.split('.')
256 | temp = temp[-2][-9:]
257 | Iter = int(temp.split('_')[1]) + 1
258 |
259 | model.train()
260 | model.to(device)
261 |
262 | model_D.train()
263 | model_D.to(device)
264 |
265 | if not os.path.exists(args.snapshot_dir):
266 | os.makedirs(args.snapshot_dir)
267 |
268 | # init data loader
269 | if args.data_dir.split('/')[-1] == 'gta5_deeplab':
270 | trainset = GTA5DataSet(args.data_dir, args.data_list, max_iters=args.num_steps * args.iter_size * args.batch_size,
271 | crop_size=input_size,
272 | scale=args.random_scale, mirror=args.random_mirror, mean=IMG_MEAN)
273 | elif args.data_dir.split('/')[-1] == 'syn_deeplab':
274 | trainset = synthiaDataSet(args.data_dir, args.data_list, max_iters=args.num_steps * args.iter_size * args.batch_size,
275 | crop_size=input_size,
276 | scale=args.random_scale, mirror=args.random_mirror, mean=IMG_MEAN)
277 |
278 | trainloader = data.DataLoader(trainset,
279 | batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers, pin_memory=True)
280 | trainloader_iter = enumerate(trainloader)
281 |
282 | targetloader = data.DataLoader(cityscapesDataSet(args.data_dir_target, args.data_list_target,
283 | max_iters=args.num_steps * args.iter_size * args.batch_size,
284 | crop_size=input_size_target,
285 | scale=False, mirror=args.random_mirror, mean=IMG_MEAN,
286 | set=args.set),
287 | batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers,
288 | pin_memory=True)
289 | targetloader_iter = enumerate(targetloader)
290 |
291 | # init optimizer
292 | optimizer = optim.SGD(model.optim_parameters(args),
293 | lr=args.learning_rate, momentum=args.momentum, weight_decay=args.weight_decay)
294 | optimizer.zero_grad()
295 |
296 | optimizer_D = optim.Adam(model_D.parameters(), lr=args.learning_rate_D, betas=(0.9, 0.99))
297 | optimizer_D.zero_grad()
298 |
299 | if APEX_AVAILABLE:
300 | model, optimizer = amp.initialize(
301 | model, optimizer, opt_level="O2",
302 | keep_batchnorm_fp32=True, loss_scale="dynamic"
303 | )
304 | model_D, optimizer_D = amp.initialize(
305 | model_D, optimizer_D, opt_level="O2",
306 | keep_batchnorm_fp32=True, loss_scale="dynamic"
307 | )
308 |
309 | # init loss
310 | bce_loss = torch.nn.BCEWithLogitsLoss()
311 | seg_loss = torch.nn.CrossEntropyLoss(ignore_index=255)
312 | L1_loss = torch.nn.L1Loss(reduction='none')
313 |
314 | interp = nn.Upsample(size=(input_size[1], input_size[0]), mode='bilinear', align_corners=True)
315 | interp_target = nn.Upsample(size=(input_size_target[1], input_size_target[0]), mode='bilinear', align_corners=True)
316 | test_interp = nn.Upsample(size=(1024, 2048), mode='bilinear', align_corners=True)
317 |
318 | # labels for adversarial training
319 | source_label = 0
320 | target_label = 1
321 |
322 | # init prototype
323 | num_prototype = args.num_prototype
324 | num_ins = args.num_prototype * 10
325 | src_cls_features = torch.zeros([len(BG_LABEL),num_prototype,2048], dtype=torch.float32).to(device)
326 | src_cls_ptr = np.zeros(len(BG_LABEL), dtype=np.uint64)
327 | src_ins_features = torch.zeros([len(FG_LABEL),num_ins,2048], dtype=torch.float32).to(device)
328 | src_ins_ptr = np.zeros(len(FG_LABEL), dtype=np.uint64)
329 |
330 |
331 | # set up tensor board
332 | if args.tensorboard:
333 | if not os.path.exists(args.log_dir):
334 | os.makedirs(args.log_dir)
335 | writer = SummaryWriter(args.log_dir)
336 |
337 | # start training
338 | for i_iter in range(Iter, args.num_steps):
339 |
340 | loss_seg_value = 0
341 | loss_adv_target_value = 0
342 | loss_D_value = 0
343 | loss_cls_value = 0
344 | loss_ins_value = 0
345 |
346 | optimizer.zero_grad()
347 | adjust_learning_rate(optimizer, i_iter)
348 |
349 | optimizer_D.zero_grad()
350 | adjust_learning_rate_D(optimizer_D, i_iter)
351 |
352 | for sub_i in range(args.iter_size):
353 |
354 | # train G
355 |
356 | # don't accumulate grads in D
357 |
358 | for param in model_D.parameters():
359 | param.requires_grad = False
360 |
361 | # train with source
362 |
363 | _, batch = trainloader_iter.__next__()
364 |
365 | images, labels, _, _ = batch
366 | images = images.to(device)
367 | labels = labels.long().to(device)
368 |
369 | src_feature, pred = model(images)
370 | pred_softmax = F.softmax(pred, dim=1)
371 | pred_idx = torch.argmax(pred_softmax, dim=1)
372 |
373 | right_label = F.interpolate(labels.unsqueeze(0).float(), (pred_idx.size(1),pred_idx.size(2)), mode='nearest').squeeze(0).long()
374 | right_label[right_label!=pred_idx] = 255
375 |
376 | for ii in range(len(BG_LABEL)):
377 | cls_idx = BG_LABEL[ii]
378 | mask = right_label==cls_idx
379 | if torch.sum(mask) == 0:
380 | continue
381 | feature = global_avg_pool(src_feature, mask.float())
382 | if cls_idx != torch.argmax(torch.squeeze(model.layer6(feature.half() if APEX_AVAILABLE else feature).float())).item():
383 | continue
384 | src_cls_features[ii,int(src_cls_ptr[ii]%num_prototype),:] = torch.squeeze(feature).clone().detach()
385 | src_cls_ptr[ii] += 1
386 |
387 | seg_ins = seg_label(right_label.squeeze())
388 | for ii in range(len(FG_LABEL)):
389 | cls_idx = FG_LABEL[ii]
390 | segmask, pixelnum = seg_ins[ii]
391 | if len(pixelnum) == 0:
392 | continue
393 | sortmax = np.argsort(pixelnum)[::-1]
394 | for i in range(min(10, len(sortmax))):
395 | mask = segmask==(sortmax[i]+1)
396 | feature = global_avg_pool(src_feature, mask.float())
397 | if cls_idx != torch.argmax(torch.squeeze(model.layer6(feature.half() if APEX_AVAILABLE else feature).float())).item():
398 | continue
399 | src_ins_features[ii,int(src_ins_ptr[ii]%num_ins),:] = torch.squeeze(feature).clone().detach()
400 | src_ins_ptr[ii] += 1
401 |
402 | pred = interp(pred)
403 | loss_seg = seg_loss(pred, labels)
404 | loss = loss_seg
405 |
406 | # proper normalization
407 | loss = loss / args.iter_size
408 | amp_backward(loss, optimizer)
409 | loss_seg_value += loss_seg.item() / args.iter_size
410 |
411 | # train with target
412 |
413 | _, batch = targetloader_iter.__next__()
414 | images, _, _ = batch
415 | images = images.to(device)
416 |
417 | trg_feature, pred_target = model(images)
418 |
419 | pred_target_softmax = F.softmax(pred_target, dim=1)
420 | pred_target_idx = torch.argmax(pred_target_softmax, dim=1)
421 |
422 | loss_cls = torch.zeros(1).to(device)
423 | loss_ins = torch.zeros(1).to(device)
424 | if i_iter > 0:
425 | for ii in range(len(BG_LABEL)):
426 | cls_idx = BG_LABEL[ii]
427 | if src_cls_ptr[ii] / num_prototype <= 1:
428 | continue
429 | mask = pred_target_idx==cls_idx
430 | feature = global_avg_pool(trg_feature, mask.float())
431 | if cls_idx != torch.argmax(torch.squeeze(model.layer6(feature.half()).float())).item():
432 | continue
433 | ext_feature = feature.squeeze().expand(num_prototype, 2048)
434 | loss_cls += torch.min(torch.sum(L1_loss(ext_feature, src_cls_features[ii,:,:]),dim=1) / 2048.)
435 |
436 | seg_ins = seg_label(pred_target_idx.squeeze())
437 | for ii in range(len(FG_LABEL)):
438 | cls_idx = FG_LABEL[ii]
439 | if src_ins_ptr[ii] / num_ins <= 1:
440 | continue
441 | segmask, pixelnum = seg_ins[ii]
442 | if len(pixelnum) == 0:
443 | continue
444 | sortmax = np.argsort(pixelnum)[::-1]
445 | for i in range(min(10, len(sortmax))):
446 | mask = segmask==(sortmax[i]+1)
447 | feature = global_avg_pool(trg_feature, mask.float())
448 | feature = feature.squeeze().expand(num_ins, 2048)
449 | loss_ins += torch.min(torch.sum(L1_loss(feature, src_ins_features[ii,:,:]),dim=1) / 2048.) / min(10, len(sortmax))
450 |
451 | pred_target = interp_target(pred_target)
452 |
453 | D_out = model_D(F.softmax(pred_target, dim=1))
454 | loss_adv_target = bce_loss(D_out, torch.FloatTensor(D_out.data.size()).fill_(source_label).to(device))
455 |
456 | loss = args.lambda_adv_target * loss_adv_target + args.lambda_adv_cls * loss_cls + args.lambda_adv_ins * loss_ins
457 | loss = loss / args.iter_size
458 | amp_backward(loss, optimizer)
459 | loss_adv_target_value += loss_adv_target.item() / args.iter_size
460 |
461 | # train D
462 |
463 | # bring back requires_grad
464 |
465 | for param in model_D.parameters():
466 | param.requires_grad = True
467 |
468 | # train with source
469 | pred = pred.detach()
470 | D_out = model_D(F.softmax(pred, dim=1))
471 |
472 | loss_D = bce_loss(D_out, torch.FloatTensor(D_out.data.size()).fill_(source_label).to(device))
473 | loss_D = loss_D / args.iter_size / 2
474 | amp_backward(loss_D, optimizer_D)
475 | loss_D_value += loss_D.item()
476 |
477 | # train with target
478 | pred_target = pred_target.detach()
479 | D_out = model_D(F.softmax(pred_target, dim=1))
480 |
481 | loss_D = bce_loss(D_out, torch.FloatTensor(D_out.data.size()).fill_(target_label).to(device))
482 | loss_D = loss_D / args.iter_size / 2
483 | amp_backward(loss_D, optimizer_D)
484 | loss_D_value += loss_D.item()
485 |
486 | optimizer.step()
487 | optimizer_D.step()
488 |
489 | if args.tensorboard:
490 | scalar_info = {
491 | 'loss_seg': loss_seg_value,
492 | 'loss_adv_target': loss_adv_target_value,
493 | 'loss_D': loss_D_value,
494 | }
495 |
496 | if i_iter % 10 == 0:
497 | for key, val in scalar_info.items():
498 | writer.add_scalar(key, val, i_iter)
499 |
500 | print('exp = {}'.format(args.snapshot_dir))
501 | print(
502 | 'iter = {0:8d}/{1:8d}, loss_seg = {2:.3f}, loss_adv = {3:.3f} loss_D = {4:.3f} loss_cls = {5:.3f} loss_ins = {6:.3f}'.format(
503 | i_iter, args.num_steps, loss_seg_value, loss_adv_target_value, loss_D_value, loss_cls.item(), loss_ins.item()))
504 |
505 | if i_iter >= args.num_steps_stop - 1:
506 | print('save model ...')
507 | torch.save(model.state_dict(), osp.join(args.snapshot_dir, 'GTA5_' + str(args.num_steps_stop) + '.pth'))
508 | torch.save(model_D.state_dict(), osp.join(args.snapshot_dir, 'GTA5_' + str(args.num_steps_stop) + '_D.pth'))
509 | break
510 |
511 | if i_iter % args.save_pred_every == 0 and i_iter != 0:
512 | print('taking snapshot ...')
513 | if not os.path.exists(args.save):
514 | os.makedirs(args.save)
515 | testloader = data.DataLoader(cityscapesDataSet(args.data_dir_target, args.data_list_target_test,
516 | crop_size=(1024, 512), mean=IMG_MEAN, scale=False, mirror=False, set='val'),
517 | batch_size=1, shuffle=False, pin_memory=True)
518 | model.eval()
519 | for index, batch in enumerate(testloader):
520 | image, _, name = batch
521 | with torch.no_grad():
522 | output1, output2 = model(Variable(image).to(device))
523 | output = test_interp(output2).cpu().data[0].numpy()
524 | output = output.transpose(1,2,0)
525 | output = np.asarray(np.argmax(output, axis=2), dtype=np.uint8)
526 | output = Image.fromarray(output)
527 | name = name[0].split('/')[-1]
528 | output.save('%s/%s' % (args.save, name))
529 | mIoUs = compute_mIoU(osp.join(args.data_dir_target,'gtFine/val'), args.save, 'dataset/cityscapes_list')
530 | mIoU = round(np.nanmean(mIoUs) * 100, 2)
531 | if mIoU > bestIoU:
532 | bestIoU = mIoU
533 | torch.save(model.state_dict(), osp.join(args.snapshot_dir, 'BestGTA5.pth'))
534 | torch.save(model_D.state_dict(), osp.join(args.snapshot_dir, 'BestGTA5_D.pth'))
535 | torch.save(model.state_dict(), osp.join(args.snapshot_dir, 'GTA5_' + str(i_iter) + '.pth'))
536 | torch.save(model_D.state_dict(), osp.join(args.snapshot_dir, 'GTA5_' + str(i_iter) + '_D.pth'))
537 | model.train()
538 |
539 | if args.tensorboard:
540 | writer.close()
541 |
542 |
543 | if __name__ == '__main__':
544 | main()
545 |
--------------------------------------------------------------------------------
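Note on the training loop above: both `train_sim.py` and `train_sim_ssl.py` keep circular banks of source features (`src_cls_features` for stuff classes, `src_ins_features` for thing instances) and pull each target feature toward its nearest bank entry. The sketch below re-creates that mechanism in isolation; it is a simplified illustration, and the `push`/`match_loss` helper names are invented for this example, not part of the repository.

```python
import torch
import numpy as np

# One bank per class; each new feature overwrites the oldest slot (ptr % size),
# so the bank always holds the most recent `num_prototype` source features.
num_classes, num_prototype, dim = 8, 50, 2048
bank = torch.zeros(num_classes, num_prototype, dim)
ptr = np.zeros(num_classes, dtype=np.uint64)

def push(cls_ii, feature):
    """Store a detached (dim,) source feature for class `cls_ii`."""
    bank[cls_ii, int(ptr[cls_ii] % num_prototype), :] = feature.detach()
    ptr[cls_ii] += 1

def match_loss(cls_ii, feature):
    """Mean-absolute distance from a target feature to its closest prototype,
    mirroring torch.min(torch.sum(L1_loss(...), dim=1) / 2048.) in the loop."""
    diffs = (feature.expand(num_prototype, dim) - bank[cls_ii]).abs()
    return torch.min(diffs.sum(dim=1) / dim)

for _ in range(3):                       # fill a few slots with source features
    push(0, torch.randn(dim))
print(match_loss(0, torch.randn(dim)))   # distance of a target feature to the bank
```

Writing at `ptr % num_prototype` means stale source statistics age out as training progresses, which is also why the loop only enables the matching losses once a bank has wrapped around at least once (`src_cls_ptr[ii] / num_prototype > 1`).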
/train_sim_ssl.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 | import torch.nn as nn
4 | from torch.utils import data, model_zoo
5 | import numpy as np
6 | import pickle
7 | from torch.autograd import Variable
8 | import torch.optim as optim
9 | import scipy.misc
10 | import torch.backends.cudnn as cudnn
11 | import torch.nn.functional as F
12 | import sys
13 | import os
14 | import os.path as osp
15 | import random
16 | from tensorboardX import SummaryWriter
17 | import PIL.Image as Image
18 | try:
19 | from apex import amp
20 | APEX_AVAILABLE = True
21 | except ModuleNotFoundError:
22 | APEX_AVAILABLE = False
23 | from model.deeplab_multi import DeeplabMultiFeature
24 | from model.discriminator import FCDiscriminator
25 | from utils.loss import CrossEntropy2d
26 | from utils.functions import *
27 | from dataset.gta5_dataset import GTA5DataSet
28 | from dataset.synthia_dataset import synthiaDataSet
29 | from dataset.cityscapes_dataset import cityscapesDataSet
30 | from skimage.measure import label as sklabel
31 | from compute_iou import compute_mIoU
32 | import pdb
33 |
34 | IMG_MEAN = np.array((104.00698793, 116.66876762, 122.67891434), dtype=np.float32)
35 | BG_LABEL = [0,1,2,3,4,8,9,10]
36 | FG_LABEL = [5,6,7,11,12,13,14,15,16,17,18]
37 |
38 | MODEL = 'DeepLab'
39 | BATCH_SIZE = 1
40 | ITER_SIZE = 1
41 | NUM_WORKERS = 4
42 | DATA_DIRECTORY = './data/gta5_deeplab'
43 | DATA_LIST_PATH = './dataset/gta5_list/train.txt'
44 | IGNORE_LABEL = 255
45 | INPUT_SIZE = '1280,720'
46 | DATA_DIRECTORY_TARGET = './data/Cityscapes'
47 | DATA_LIST_PATH_TARGET = './dataset/cityscapes_list/train.txt'
48 | DATA_LIST_PATH_TARGET_TEST = './dataset/cityscapes_list/val.txt'
49 | INPUT_SIZE_TARGET = '1024,512'
50 | LEARNING_RATE = 2.5e-4
51 | MOMENTUM = 0.9
52 | NUM_CLASSES = 19
53 | NUM_STEPS = 250000
54 | NUM_STEPS_STOP = 200000 # early stopping
55 | NUM_PROTOTYPE = 50
56 | POWER = 0.9
57 | RANDOM_SEED = 1234
58 | RESTORE_FROM = 'http://vllab.ucmerced.edu/ytsai/CVPR18/DeepLab_resnet_pretrained_init-f81d91e8.pth'
59 | SAVE_NUM_IMAGES = 2
60 | SAVE_PRED_EVERY = 5000
61 | SNAPSHOT_DIR = './snapshots_ssl/'
62 | WEIGHT_DECAY = 0.0005
63 | LOG_DIR = './log'
64 | SAVE_PATH = './result/cityscapes'
65 | SSL_TARGET_DIR = './target_ssl_gt'
66 |
67 | LEARNING_RATE_D = 1e-4
68 | LAMBDA_ADV_TARGET = 0.001
69 |
70 | TARGET = 'cityscapes'
71 | SET = 'train'
72 |
73 | LAMBDA_ADV_CLS = 0.001
74 | LAMBDA_ADV_INS = 0.001
75 |
76 | def get_arguments():
77 | """Parse all the arguments provided from the CLI.
78 |
79 | Returns:
80 |       The parsed arguments as an argparse.Namespace.
81 | """
82 | parser = argparse.ArgumentParser(description="DeepLab-ResNet Network")
83 | parser.add_argument("--model", type=str, default=MODEL,
84 | help="available options : DeepLab")
85 | parser.add_argument("--target", type=str, default=TARGET,
86 | help="available options : cityscapes")
87 | parser.add_argument("--batch-size", type=int, default=BATCH_SIZE,
88 | help="Number of images sent to the network in one step.")
89 | parser.add_argument("--iter-size", type=int, default=ITER_SIZE,
90 | help="Accumulate gradients for ITER_SIZE iterations.")
91 | parser.add_argument("--num-workers", type=int, default=NUM_WORKERS,
92 | help="number of workers for multithread dataloading.")
93 | parser.add_argument("--data-dir", type=str, default=DATA_DIRECTORY,
94 | help="Path to the directory containing the source dataset.")
95 | parser.add_argument("--data-list", type=str, default=DATA_LIST_PATH,
96 | help="Path to the file listing the images in the source dataset.")
97 | parser.add_argument("--ignore-label", type=int, default=IGNORE_LABEL,
98 | help="The index of the label to ignore during the training.")
99 | parser.add_argument("--input-size", type=str, default=INPUT_SIZE,
100 | help="Comma-separated string with height and width of source images.")
101 | parser.add_argument("--data-dir-target", type=str, default=DATA_DIRECTORY_TARGET,
102 | help="Path to the directory containing the target dataset.")
103 | parser.add_argument("--data-list-target", type=str, default=DATA_LIST_PATH_TARGET,
104 | help="Path to the file listing the images in the target dataset.")
105 | parser.add_argument("--ssl-target-dir", type=str, default=SSL_TARGET_DIR,
106 |                         help="Path to the folder storing pseudo labels for the target dataset (used as self-training ground truth).")
107 | parser.add_argument("--data-list-target-test", type=str, default=DATA_LIST_PATH_TARGET_TEST,
108 | help="Path to the file listing the images in the target val dataset.")
109 | parser.add_argument("--input-size-target", type=str, default=INPUT_SIZE_TARGET,
110 | help="Comma-separated string with height and width of target images.")
111 | parser.add_argument("--is-training", action="store_true",
112 |                         help="Whether to update the running means and variances during training.")
113 | parser.add_argument("--learning-rate", type=float, default=LEARNING_RATE,
114 | help="Base learning rate for training with polynomial decay.")
115 | parser.add_argument("--learning-rate-D", type=float, default=LEARNING_RATE_D,
116 | help="Base learning rate for discriminator.")
117 | parser.add_argument("--lambda-adv-target", type=float, default=LAMBDA_ADV_TARGET,
118 | help="lambda_adv for adversarial training.")
119 |     parser.add_argument("--lambda-adv-cls", type=float, default=LAMBDA_ADV_CLS,
120 |                         help="weight of the stuff (class prototype) matching loss.")
121 |     parser.add_argument("--lambda-adv-ins", type=float, default=LAMBDA_ADV_INS,
122 |                         help="weight of the thing (instance) matching loss.")
123 | parser.add_argument("--momentum", type=float, default=MOMENTUM,
124 | help="Momentum component of the optimiser.")
125 | parser.add_argument("--not-restore-last", action="store_true",
126 |                         help="Whether to skip restoring the last (FC) layers.")
127 | parser.add_argument("--num-classes", type=int, default=NUM_CLASSES,
128 | help="Number of classes to predict (including background).")
129 | parser.add_argument("--num-steps", type=int, default=NUM_STEPS,
130 | help="Number of training steps.")
131 | parser.add_argument("--num-steps-stop", type=int, default=NUM_STEPS_STOP,
132 | help="Number of training steps for early stopping.")
133 | parser.add_argument("--num-prototype", type=int, default=NUM_PROTOTYPE,
134 | help="Number of prototypes.")
135 | parser.add_argument("--power", type=float, default=POWER,
136 | help="Decay parameter to compute the learning rate.")
137 | parser.add_argument("--random-mirror", action="store_true",
138 | help="Whether to randomly mirror the inputs during the training.")
139 | parser.add_argument("--random-scale", action="store_true",
140 | help="Whether to randomly scale the inputs during the training.")
141 | parser.add_argument("--random-seed", type=int, default=RANDOM_SEED,
142 | help="Random seed to have reproducible results.")
143 | parser.add_argument("--restore-from", type=str, default=RESTORE_FROM,
144 |                         help="Where to restore model parameters from.")
145 | parser.add_argument("--save-num-images", type=int, default=SAVE_NUM_IMAGES,
146 | help="How many images to save.")
147 | parser.add_argument("--save-pred-every", type=int, default=SAVE_PRED_EVERY,
148 |                         help="Save summaries and a checkpoint every this many steps.")
149 | parser.add_argument("--snapshot-dir", type=str, default=SNAPSHOT_DIR,
150 | help="Where to save snapshots of the model.")
151 | parser.add_argument("--weight-decay", type=float, default=WEIGHT_DECAY,
152 | help="Regularisation parameter for L2-loss.")
153 |     parser.add_argument("--cpu", action='store_true', help="run on the CPU instead of the GPU.")
154 |     parser.add_argument("--tensorboard", action='store_true', help="enable tensorboard logging.")
155 |     parser.add_argument("--log-dir", type=str, default=LOG_DIR,
156 |                         help="Path to the log directory.")
157 | parser.add_argument("--set", type=str, default=SET,
158 | help="choose adaptation set.")
159 | parser.add_argument("--continue-train", action="store_true",
160 | help="continue training")
161 | parser.add_argument("--save", type=str, default=SAVE_PATH,
162 |                         help="Path to save evaluation results.")
163 | return parser.parse_args()
164 |
165 |
166 | args = get_arguments()
167 |
168 |
169 | def lr_poly(base_lr, iter, max_iter, power):
170 | return base_lr * ((1 - float(iter) / max_iter) ** (power))
171 |
172 |
173 | def adjust_learning_rate(optimizer, i_iter):
174 | lr = lr_poly(args.learning_rate, i_iter, args.num_steps, args.power)
175 | optimizer.param_groups[0]['lr'] = lr
176 | if len(optimizer.param_groups) > 1:
177 | optimizer.param_groups[1]['lr'] = lr * 10
178 |
179 |
180 | def adjust_learning_rate_D(optimizer, i_iter):
181 | lr = lr_poly(args.learning_rate_D, i_iter, args.num_steps, args.power)
182 | optimizer.param_groups[0]['lr'] = lr
183 | if len(optimizer.param_groups) > 1:
184 | optimizer.param_groups[1]['lr'] = lr * 10
185 |
186 |
187 | def amp_backward(loss, optimizer, retain_graph=False):
188 | if APEX_AVAILABLE:
189 | with amp.scale_loss(loss, optimizer) as scaled_loss:
190 | scaled_loss.backward(retain_graph=retain_graph)
191 | else:
192 | loss.backward(retain_graph=retain_graph)
193 |
194 | def seg_label(label):
195 | segs = []
196 | for fg in FG_LABEL:
197 | mask = label==fg
198 | if torch.sum(mask)>0:
199 | masknp = mask.cpu().numpy().astype(int)
200 | seg, forenum = sklabel(masknp, background=0, return_num=True, connectivity=2)
201 |             seg = torch.LongTensor(seg).to(label.device)
202 | pixelnum = np.zeros(forenum, dtype=int)
203 | for i in range(forenum):
204 | pixelnum[i] = torch.sum(seg==(i+1)).item()
205 | segs.append([seg, pixelnum])
206 | else:
207 | segs.append([mask.long(), np.zeros(0)])
208 | return segs
209 |
210 |
211 |
212 | def main():
213 | """Create the model and start the training."""
214 |
215 | device = torch.device("cuda" if not args.cpu else "cpu")
216 | cudnn.benchmark = True
217 | cudnn.enabled = True
218 |
219 | w, h = map(int, args.input_size.split(','))
220 | input_size = (w, h)
221 |
222 | w, h = map(int, args.input_size_target.split(','))
223 | input_size_target = (w, h)
224 |
225 | Iter = 0
226 | bestIoU = 0
227 |
228 | # Create network
229 | # init G
230 | if args.model == 'DeepLab':
231 | model = DeeplabMultiFeature(num_classes=args.num_classes)
232 | if args.restore_from[:4] == 'http' :
233 | saved_state_dict = model_zoo.load_url(args.restore_from)
234 | else:
235 | saved_state_dict = torch.load(args.restore_from)
236 | if args.continue_train:
237 | if list(saved_state_dict.keys())[0].split('.')[0] == 'module':
238 |                 for key in list(saved_state_dict.keys()):  # copy keys; the dict is mutated below
239 | saved_state_dict['.'.join(key.split('.')[1:])] = saved_state_dict.pop(key)
240 | model.load_state_dict(saved_state_dict)
241 | else:
242 | new_params = model.state_dict().copy()
243 | for i in saved_state_dict:
244 | i_parts = i.split('.')
245 | if not args.num_classes == 19 or not i_parts[1] == 'layer5':
246 | new_params['.'.join(i_parts[1:])] = saved_state_dict[i]
247 | model.load_state_dict(new_params)
248 |
249 | # init D
250 | model_D = FCDiscriminator(num_classes=args.num_classes).to(device)
251 |
252 | if args.continue_train:
253 | model_weights_path = args.restore_from
254 | temp = model_weights_path.split('.')
255 | temp[-2] = temp[-2] + '_D'
256 | model_D_weights_path = '.'.join(temp)
257 | model_D.load_state_dict(torch.load(model_D_weights_path))
258 | temp = model_weights_path.split('.')
259 | temp = temp[-2][-9:]
260 | Iter = int(temp.split('_')[1]) + 1
261 |
262 | model.train()
263 | model.to(device)
264 |
265 | model_D.train()
266 | model_D.to(device)
267 |
268 | if not os.path.exists(args.snapshot_dir):
269 | os.makedirs(args.snapshot_dir)
270 |
271 | # init data loader
272 | if args.data_dir.split('/')[-1] == 'gta5_deeplab':
273 | trainset = GTA5DataSet(args.data_dir, args.data_list, max_iters=args.num_steps * args.iter_size * args.batch_size,
274 | crop_size=input_size,
275 | scale=args.random_scale, mirror=args.random_mirror, mean=IMG_MEAN)
276 | elif args.data_dir.split('/')[-1] == 'syn_deeplab':
277 | trainset = synthiaDataSet(args.data_dir, args.data_list, max_iters=args.num_steps * args.iter_size * args.batch_size,
278 | crop_size=input_size,
279 | scale=args.random_scale, mirror=args.random_mirror, mean=IMG_MEAN)
280 |
281 | trainloader = data.DataLoader(trainset,
282 | batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers, pin_memory=True)
283 | trainloader_iter = enumerate(trainloader)
284 |
285 | targetloader = data.DataLoader(cityscapesDataSet(args.data_dir_target, args.data_list_target,
286 | max_iters=args.num_steps * args.iter_size * args.batch_size,
287 | crop_size=input_size_target,
288 | scale=False, mirror=args.random_mirror, mean=IMG_MEAN,
289 | set=args.set, ssl_dir=args.ssl_target_dir),
290 | batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers,
291 | pin_memory=True)
292 | targetloader_iter = enumerate(targetloader)
293 |
294 | # init optimizer
295 | optimizer = optim.SGD(model.optim_parameters(args),
296 | lr=args.learning_rate, momentum=args.momentum, weight_decay=args.weight_decay)
297 | optimizer.zero_grad()
298 |
299 | optimizer_D = optim.Adam(model_D.parameters(), lr=args.learning_rate_D, betas=(0.9, 0.99))
300 | optimizer_D.zero_grad()
301 |
302 |     # only wrap the models with apex amp when it is actually installed
303 |     if APEX_AVAILABLE:
304 |         model, optimizer = amp.initialize(
305 |             model, optimizer, opt_level="O2",
306 |             keep_batchnorm_fp32=True, loss_scale="dynamic")
307 | 
308 |         model_D, optimizer_D = amp.initialize(
309 |             model_D, optimizer_D, opt_level="O2",
310 |             keep_batchnorm_fp32=True, loss_scale="dynamic")
311 |
312 | # init loss
313 | bce_loss = torch.nn.BCEWithLogitsLoss()
314 | seg_loss = torch.nn.CrossEntropyLoss(ignore_index=255)
315 | L1_loss = torch.nn.L1Loss(reduction='none')
316 |
317 | interp = nn.Upsample(size=(input_size[1], input_size[0]), mode='bilinear', align_corners=True)
318 | interp_target = nn.Upsample(size=(input_size_target[1], input_size_target[0]), mode='bilinear', align_corners=True)
319 | test_interp = nn.Upsample(size=(1024, 2048), mode='bilinear', align_corners=True)
320 |
321 | # labels for adversarial training
322 | source_label = 0
323 | target_label = 1
324 |
325 | # init prototype
326 | num_prototype = args.num_prototype
327 | num_ins = args.num_prototype * 10
328 | src_cls_features = torch.zeros([len(BG_LABEL),num_prototype,2048], dtype=torch.float32).to(device)
329 | src_cls_ptr = np.zeros(len(BG_LABEL), dtype=np.uint64)
330 | src_ins_features = torch.zeros([len(FG_LABEL),num_ins,2048], dtype=torch.float32).to(device)
331 | src_ins_ptr = np.zeros(len(FG_LABEL), dtype=np.uint64)
332 |
333 |
334 |     # set up tensorboard
335 | if args.tensorboard:
336 | if not os.path.exists(args.log_dir):
337 | os.makedirs(args.log_dir)
338 | writer = SummaryWriter(args.log_dir)
339 |
340 | # start training
341 | for i_iter in range(Iter, args.num_steps):
342 |
343 | loss_seg_value = 0
344 | loss_adv_target_value = 0
345 | loss_D_value = 0
346 | loss_cls_value = 0
347 | loss_ins_value = 0
348 |
349 | optimizer.zero_grad()
350 | adjust_learning_rate(optimizer, i_iter)
351 |
352 | optimizer_D.zero_grad()
353 | adjust_learning_rate_D(optimizer_D, i_iter)
354 |
355 | for sub_i in range(args.iter_size):
356 |
357 | # train G
358 |
359 | # don't accumulate grads in D
360 |
361 | for param in model_D.parameters():
362 | param.requires_grad = False
363 |
364 | # train with source
365 |
366 | _, batch = trainloader_iter.__next__()
367 |
368 | images, labels, _, _ = batch
369 | images = images.to(device)
370 | labels = labels.long().to(device)
371 |
372 | src_feature, pred = model(images)
373 | pred_softmax = F.softmax(pred, dim=1)
374 | pred_idx = torch.argmax(pred_softmax, dim=1)
375 |
376 | right_label = F.interpolate(labels.unsqueeze(0).float(), (pred_idx.size(1),pred_idx.size(2)), mode='nearest').squeeze(0).long()
377 | right_label[right_label!=pred_idx] = 255
378 |
379 | for ii in range(len(BG_LABEL)):
380 | cls_idx = BG_LABEL[ii]
381 | mask = right_label==cls_idx
382 | if torch.sum(mask) == 0:
383 | continue
384 | feature = global_avg_pool(src_feature, mask.float())
385 |                 if cls_idx != torch.argmax(torch.squeeze(model.layer6(feature.half() if APEX_AVAILABLE else feature).float())).item():
386 | continue
387 | src_cls_features[ii,int(src_cls_ptr[ii]%num_prototype),:] = torch.squeeze(feature).clone().detach()
388 | src_cls_ptr[ii] += 1
389 |
390 | seg_ins = seg_label(right_label.squeeze())
391 | for ii in range(len(FG_LABEL)):
392 | cls_idx = FG_LABEL[ii]
393 | segmask, pixelnum = seg_ins[ii]
394 | if len(pixelnum) == 0:
395 | continue
396 | sortmax = np.argsort(pixelnum)[::-1]
397 | for i in range(min(10, len(sortmax))):
398 | mask = segmask==(sortmax[i]+1)
399 | feature = global_avg_pool(src_feature, mask.float())
400 |                     if cls_idx != torch.argmax(torch.squeeze(model.layer6(feature.half() if APEX_AVAILABLE else feature).float())).item():
401 | continue
402 | src_ins_features[ii,int(src_ins_ptr[ii]%num_ins),:] = torch.squeeze(feature).clone().detach()
403 | src_ins_ptr[ii] += 1
404 |
405 | pred = interp(pred)
406 | loss_seg = seg_loss(pred, labels)
407 | loss = loss_seg
408 |
409 | # proper normalization
410 | loss = loss / args.iter_size
411 | amp_backward(loss, optimizer)
412 | loss_seg_value += loss_seg.item() / args.iter_size
413 |
414 | # train with target
415 |
416 | _, batch = targetloader_iter.__next__()
417 | images, trg_labels, _, _ = batch
418 | images = images.to(device)
419 | trg_labels = trg_labels.long().to(device)
420 |
421 | trg_feature, pred_target = model(images)
422 |
423 | pred_target_softmax = F.softmax(pred_target, dim=1)
424 | pred_target_idx = torch.argmax(pred_target_softmax, dim=1)
425 |
426 | loss_cls = torch.zeros(1).to(device)
427 | loss_ins = torch.zeros(1).to(device)
428 | if i_iter > 0:
429 | for ii in range(len(BG_LABEL)):
430 | cls_idx = BG_LABEL[ii]
431 | if src_cls_ptr[ii] / num_prototype <= 1:
432 | continue
433 | mask = pred_target_idx==cls_idx
434 | feature = global_avg_pool(trg_feature, mask.float())
435 |                     if cls_idx != torch.argmax(torch.squeeze(model.layer6(feature.half() if APEX_AVAILABLE else feature).float())).item():
436 | continue
437 | ext_feature = feature.squeeze().expand(num_prototype, 2048)
438 | loss_cls += torch.min(torch.sum(L1_loss(ext_feature, src_cls_features[ii,:,:]),dim=1) / 2048.)
439 |
440 | seg_ins = seg_label(pred_target_idx.squeeze())
441 | for ii in range(len(FG_LABEL)):
442 | cls_idx = FG_LABEL[ii]
443 | if src_ins_ptr[ii] / num_ins <= 1:
444 | continue
445 | segmask, pixelnum = seg_ins[ii]
446 | if len(pixelnum) == 0:
447 | continue
448 | sortmax = np.argsort(pixelnum)[::-1]
449 | for i in range(min(10, len(sortmax))):
450 | mask = segmask==(sortmax[i]+1)
451 | feature = global_avg_pool(trg_feature, mask.float())
452 | feature = feature.squeeze().expand(num_ins, 2048)
453 | loss_ins += torch.min(torch.sum(L1_loss(feature, src_ins_features[ii,:,:]),dim=1) / 2048.) / min(10, len(sortmax))
454 |
455 | pred_target = interp_target(pred_target)
456 |
457 | D_out = model_D(F.softmax(pred_target, dim=1))
458 | loss_adv_target = bce_loss(D_out, torch.FloatTensor(D_out.data.size()).fill_(source_label).to(device))
459 |
460 |             # pred_target has already been upsampled by interp_target above
461 | loss_seg_trg = seg_loss(pred_target, trg_labels)
462 |
463 | loss = loss_seg_trg + args.lambda_adv_target * loss_adv_target + args.lambda_adv_cls * loss_cls + args.lambda_adv_ins * loss_ins
464 | loss = loss / args.iter_size
465 | amp_backward(loss, optimizer)
466 | loss_adv_target_value += loss_adv_target.item() / args.iter_size
467 |
468 | # train D
469 |
470 | # bring back requires_grad
471 |
472 | for param in model_D.parameters():
473 | param.requires_grad = True
474 |
475 | # train with source
476 | pred = pred.detach()
477 | D_out = model_D(F.softmax(pred, dim=1))
478 |
479 | loss_D = bce_loss(D_out, torch.FloatTensor(D_out.data.size()).fill_(source_label).to(device))
480 | loss_D = loss_D / args.iter_size / 2
481 | amp_backward(loss_D, optimizer_D)
482 | loss_D_value += loss_D.item()
483 |
484 | # train with target
485 | pred_target = pred_target.detach()
486 | D_out = model_D(F.softmax(pred_target, dim=1))
487 |
488 | loss_D = bce_loss(D_out, torch.FloatTensor(D_out.data.size()).fill_(target_label).to(device))
489 | loss_D = loss_D / args.iter_size / 2
490 | amp_backward(loss_D, optimizer_D)
491 | loss_D_value += loss_D.item()
492 |
493 | optimizer.step()
494 | optimizer_D.step()
495 |
496 | if args.tensorboard:
497 | scalar_info = {
498 | 'loss_seg': loss_seg_value,
499 | 'loss_adv_target': loss_adv_target_value,
500 | 'loss_D': loss_D_value,
501 | }
502 |
503 | if i_iter % 10 == 0:
504 | for key, val in scalar_info.items():
505 | writer.add_scalar(key, val, i_iter)
506 |
507 | print('exp = {}'.format(args.snapshot_dir))
508 | print(
509 | 'iter = {0:8d}/{1:8d}, loss_seg = {2:.3f}, loss_seg_trg = {3:.3f}, loss_adv = {4:.3f} loss_D = {5:.3f} loss_cls = {6:.3f} loss_ins = {7:.3f}'.format(
510 | i_iter, args.num_steps, loss_seg_value, loss_seg_trg.item(), loss_adv_target_value, loss_D_value, loss_cls.item(), loss_ins.item()))
511 |
512 | if i_iter >= args.num_steps_stop - 1:
513 | print('save model ...')
514 | torch.save(model.state_dict(), osp.join(args.snapshot_dir, 'GTA5_' + str(args.num_steps_stop) + '.pth'))
515 | torch.save(model_D.state_dict(), osp.join(args.snapshot_dir, 'GTA5_' + str(args.num_steps_stop) + '_D.pth'))
516 | break
517 |
518 | if i_iter % args.save_pred_every == 0 and i_iter != 0:
519 | print('taking snapshot ...')
520 | if not os.path.exists(args.save):
521 | os.makedirs(args.save)
522 | testloader = data.DataLoader(cityscapesDataSet(args.data_dir_target, args.data_list_target_test,
523 | crop_size=(1024, 512), mean=IMG_MEAN, scale=False, mirror=False, set='val'),
524 | batch_size=1, shuffle=False, pin_memory=True)
525 | model.eval()
526 | for index, batch in enumerate(testloader):
527 | image, _, name = batch
528 | with torch.no_grad():
529 | output1, output2 = model(Variable(image).to(device))
530 | output = test_interp(output2).cpu().data[0].numpy()
531 | output = output.transpose(1,2,0)
532 | output = np.asarray(np.argmax(output, axis=2), dtype=np.uint8)
533 | output = Image.fromarray(output)
534 | name = name[0].split('/')[-1]
535 | output.save('%s/%s' % (args.save, name))
536 | mIoUs = compute_mIoU(osp.join(args.data_dir_target,'gtFine/val'), args.save, 'dataset/cityscapes_list')
537 | mIoU = round(np.nanmean(mIoUs) * 100, 2)
538 | if mIoU > bestIoU:
539 | bestIoU = mIoU
540 | torch.save(model.state_dict(), osp.join(args.snapshot_dir, 'BestGTA5.pth'))
541 | torch.save(model_D.state_dict(), osp.join(args.snapshot_dir, 'BestGTA5_D.pth'))
542 | torch.save(model.state_dict(), osp.join(args.snapshot_dir, 'GTA5_' + str(i_iter) + '.pth'))
543 | torch.save(model_D.state_dict(), osp.join(args.snapshot_dir, 'GTA5_' + str(i_iter) + '_D.pth'))
544 | model.train()
545 |
546 | if args.tensorboard:
547 | writer.close()
548 |
549 |
550 | if __name__ == '__main__':
551 | main()
552 |
--------------------------------------------------------------------------------
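`seg_label()` above converts a predicted label map into per-class instance masks via 8-connected components (`connectivity=2` in `skimage.measure.label`), then ranks the components by pixel count; the training loops keep at most the 10 largest. A toy walk-through of that logic, with a made-up 3x4 prediction map:

```python
import numpy as np
import torch
from skimage.measure import label as sklabel

pred = torch.tensor([[13, 13, 0, 13],
                     [13,  0, 0, 13],
                     [ 0,  0, 0, 13]])      # toy prediction map; 13 is in FG_LABEL
mask = (pred == 13).numpy().astype(int)
seg, forenum = sklabel(mask, background=0, return_num=True, connectivity=2)
pixelnum = np.array([np.sum(seg == i + 1) for i in range(forenum)])
order = np.argsort(pixelnum)[::-1]           # largest component first
largest = seg == (order[0] + 1)              # instance mask fed to global_avg_pool
print(forenum, pixelnum.tolist(), int(largest.sum()))  # -> 2 [3, 3] 3
```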
/utils/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SHI-Labs/Unsupervised-Domain-Adaptation-with-Differential-Treatment/7438f06387e559ebaa09a02b4e9bc45c272d5edc/utils/__init__.py
--------------------------------------------------------------------------------
/utils/constant.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 | cls_vector = torch.tensor([[-2.0905e-01, 7.6159e-02, 2.0476e-01, -6.1146e-02, 3.8538e-01,
4 | 2.8977e-01, -1.5205e-01, -1.5475e-01, 2.0594e-01, 1.5075e-02,
5 | -3.0165e-01, 3.0277e-02, -5.2649e-02, -3.1679e-01, -1.3434e-01,
6 | 3.3429e-02, -5.3884e-01, -2.7324e-01, 5.8548e-02],
7 | [-7.2004e-02, 1.3898e-01, 1.2121e-01, -3.4833e-02, -1.6457e-01,
8 | -1.1275e-01, -2.0859e-01, 1.3453e-01, 1.2065e-01, 1.1246e-01,
9 | -1.0470e-01, 4.1838e-01, -3.0120e-01, 3.0297e-01, 1.2694e-01,
10 | -4.5950e-01, -4.4619e-02, -2.6992e-01, -4.0047e-01],
11 | [-8.7340e-02, -4.7691e-01, -5.4570e-01, -1.6203e-01, -3.4535e-01,
12 | 2.5958e-01, 8.6487e-02, 1.3437e-01, 2.4876e-01, -7.6262e-02,
13 | 6.7205e-02, 1.7503e-01, -9.2149e-02, 3.6307e-02, -1.3047e-01,
14 | 9.1401e-02, -1.8027e-01, -2.1754e-01, 1.0411e-01],
15 | [-3.1545e-02, -1.0727e-01, -4.3146e-02, 8.1858e-02, -3.5247e-02,
16 | 3.7605e-01, -1.4239e-01, -1.1872e-01, -4.4906e-01, -5.2841e-01,
17 | 1.2744e-01, 2.3622e-02, -1.1965e-01, -2.8065e-01, 3.2957e-01,
18 | -1.0401e-01, 2.1484e-02, 3.8672e-05, -3.0210e-01],
19 | [-4.9508e-01, -4.1679e-01, 3.3606e-01, -2.4871e-01, -1.0965e-01,
20 | -1.6376e-01, -1.0669e-01, -8.7130e-02, -3.7808e-01, 1.8349e-01,
21 | 9.5753e-04, -8.9594e-02, 1.2110e-01, 1.1447e-01, 9.2191e-02,
22 | 9.7822e-02, 1.0322e-01, -2.9997e-01, 1.3214e-01],
23 | [-1.9718e-01, -5.6295e-02, 3.1861e-01, 2.9729e-01, -2.3540e-01,
24 | 3.0471e-01, -2.5572e-02, -3.9709e-02, -4.2028e-02, 1.8793e-02,
25 | 1.0739e-01, 1.7314e-01, -2.4264e-01, 1.9300e-01, -1.2943e-01,
26 | -1.7033e-01, -1.6070e-01, 4.7821e-01, 4.2135e-01],
27 | [ 1.8954e-01, -2.8798e-01, 3.2754e-01, -1.4448e-01, -6.2873e-02,
28 | -1.0477e-01, 1.7213e-01, -5.8604e-02, 8.6229e-03, -1.2372e-01,
29 | -8.3111e-02, 2.8022e-01, 1.3977e-01, 1.2273e-01, -1.4383e-01,
30 | 3.8119e-01, -2.8298e-01, 3.1767e-01, -4.7181e-01],
31 | [ 3.6380e-01, -2.9240e-01, -1.3951e-01, 9.0889e-02, 2.4272e-01,
32 | 3.0823e-01, -4.9492e-01, -9.6840e-02, -1.5726e-01, 3.0838e-01,
33 | -1.2193e-01, 8.1617e-02, 3.2946e-01, 2.3771e-01, -9.7246e-02,
34 | -1.3251e-01, 9.4791e-02, 5.8477e-02, -1.8633e-02],
35 | [-3.5953e-01, 1.0411e-01, -3.2194e-01, -1.0584e-01, 4.2926e-01,
36 | 1.5669e-01, 4.4000e-01, 3.0125e-03, -3.3333e-01, 7.3241e-02,
37 | -1.5836e-01, 1.0312e-01, -8.7544e-02, 2.9820e-01, -1.8928e-01,
38 | -8.1306e-02, 5.8836e-02, 1.5589e-01, -1.5376e-01],
39 | [-8.1522e-02, -1.2881e-01, -1.9082e-02, 3.8732e-01, 2.8086e-01,
40 | 6.8053e-02, 9.1492e-02, -4.4414e-02, 1.5546e-01, 3.7200e-01,
41 | 4.8248e-01, 2.7771e-02, -1.7260e-01, 8.3057e-02, 4.0174e-01,
42 | 3.0313e-01, -9.9150e-02, -8.8184e-02, -1.6982e-01],
43 | [ 5.5283e-03, -2.1310e-01, 8.8797e-02, 3.2461e-01, -1.7232e-01,
44 | 1.6161e-01, 1.8501e-01, 7.4985e-02, 1.3260e-01, 1.8081e-02,
45 | -4.7101e-01, -6.2111e-01, -9.6465e-02, 1.9528e-01, 9.0882e-02,
46 | -7.8360e-02, -5.6922e-03, -3.6667e-02, -2.3852e-01],
47 | [ 3.3081e-01, -3.0518e-01, -4.8841e-02, -2.6317e-01, 2.3669e-01,
48 | -2.5247e-01, -1.7162e-02, -1.2871e-02, -1.2269e-01, 7.1143e-02,
49 | -2.1269e-01, -2.8869e-02, -6.3032e-01, -8.8002e-02, 2.2597e-01,
50 | -2.3682e-02, -4.9539e-02, 1.6642e-01, 2.2926e-01],
51 | [-2.1415e-01, -5.7692e-02, -1.3425e-01, 1.6528e-01, -1.8066e-01,
52 | -9.7596e-02, -1.9554e-01, -1.8169e-01, -2.1039e-02, 3.4623e-01,
53 | -6.7646e-02, 1.3857e-02, -2.5975e-01, -4.8695e-01, -3.9392e-01,
54 | 8.1133e-02, 2.5744e-01, 1.8590e-01, -3.1590e-01],
55 | [-3.3737e-01, 1.2382e-01, -1.0949e-01, -2.2890e-01, 1.1268e-01,
56 | 2.8695e-02, -5.3011e-01, 1.7168e-01, 2.7354e-01, -2.2202e-01,
57 | -4.2928e-02, -1.7395e-01, -1.0442e-01, 2.6223e-01, 1.2330e-01,
58 | 3.3314e-01, 1.1853e-01, 3.1755e-01, -7.4114e-02],
59 | [ 1.3139e-01, 3.3645e-01, -2.0257e-01, -4.2801e-03, -2.7894e-01,
60 | -7.7428e-02, -1.8733e-01, -5.4272e-03, -4.5748e-01, 1.6368e-01,
61 | 9.6491e-02, -2.4920e-01, -1.3524e-01, 1.9494e-01, -1.2413e-01,
62 | 2.2121e-01, -5.2175e-01, -8.7228e-02, -4.0087e-02],
63 | [ 6.2680e-02, 1.0564e-01, 5.1959e-02, 3.5334e-01, -1.8097e-02,
64 | 6.2542e-02, -2.0771e-02, 3.9429e-01, -2.0583e-01, -5.9528e-02,
65 | -3.8636e-01, 3.5918e-01, -8.6004e-02, -2.0779e-02, 1.6253e-02,
66 | 4.8008e-01, 2.2287e-01, -2.3209e-01, 1.7588e-01],
67 | [ 1.8460e-01, 2.3874e-01, 2.2342e-01, -4.8515e-01, -1.9486e-01,
68 | 5.7049e-01, 1.5704e-01, 7.6239e-04, 3.2375e-02, 3.2915e-01,
69 | 3.8903e-02, -4.4624e-02, -1.2967e-01, -4.3929e-02, 1.0203e-01,
70 | 1.5980e-01, 2.5364e-01, -2.7999e-02, -5.3304e-02],
71 | [-1.7283e-01, 7.4303e-02, -2.0853e-01, -4.3103e-02, -1.9818e-01,
72 | -4.7509e-02, 4.5480e-02, 1.0534e-01, -3.5979e-02, 3.0042e-01,
73 | -3.0807e-01, 1.5283e-01, 3.4009e-01, -2.5162e-01, 5.5012e-01,
74 | -7.6136e-02, -2.2719e-01, 3.3888e-01, 3.0235e-02],
75 | [ 6.0282e-03, 1.3820e-01, -1.4528e-01, 7.1170e-02, -1.5495e-01,
76 | -4.3258e-02, 3.1540e-02, -8.1669e-01, 1.1314e-01, -1.0214e-01,
77 | -2.4002e-01, 1.5728e-01, -3.8528e-02, 2.3365e-01, 1.7082e-01,
78 | 1.8335e-01, 1.2335e-01, -9.2474e-02, 1.1235e-01]])
79 |
--------------------------------------------------------------------------------
/utils/functions.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import numpy as np
4 | import torch.nn.functional as F
5 |
6 |
7 | def global_avg_pool(inputs, weight):
8 |     """Masked global average pooling: average `inputs` of shape (b, c, h, w)
9 |     over the spatial positions where the binary mask `weight` (h*w elements) is 1."""
10 |     b,c,h,w = inputs.shape[-4], inputs.shape[-3], inputs.shape[-2], inputs.shape[-1]
11 |     weight_new = weight.detach().clone()
12 |     weight_sum = torch.sum(weight_new)
13 |     weight_new = weight_new.view(h,w)
14 |     weight_new = weight_new.expand(b,c,h,w)
15 |     weight_sum = max(weight_sum, 1e-12)
16 |     return torch.sum(inputs*weight_new,dim=(-1,-2),keepdim=True) / weight_sum
17 | 
18 | 
--------------------------------------------------------------------------------
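A minimal usage sketch for `global_avg_pool`, assuming batch size 1 as in the training scripts (the feature map and mask values are made up for illustration):

```python
import torch
from utils.functions import global_avg_pool

feat = torch.randn(1, 2048, 4, 4)            # (b, c, h, w) feature map
mask = torch.zeros(4, 4)
mask[1:3, 1:3] = 1.0                         # pixels assumed to belong to one class
pooled = global_avg_pool(feat, mask)         # (1, 2048, 1, 1): mean over masked pixels
assert torch.allclose(pooled.squeeze(),
                      feat[0, :, 1:3, 1:3].mean(dim=(-1, -2)), atol=1e-5)
```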
/utils/loss.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn.functional as F
3 | import torch.nn as nn
4 | from torch.autograd import Variable
5 |
6 |
7 | class CrossEntropy2d(nn.Module):
8 |
9 | def __init__(self, size_average=True, ignore_label=255):
10 | super(CrossEntropy2d, self).__init__()
11 | self.size_average = size_average
12 | self.ignore_label = ignore_label
13 |
14 | def forward(self, predict, target, weight=None):
15 | """
16 | Args:
17 | predict:(n, c, h, w)
18 | target:(n, h, w)
19 | weight (Tensor, optional): a manual rescaling weight given to each class.
20 | If given, has to be a Tensor of size "nclasses"
21 | """
22 | assert not target.requires_grad
23 | assert predict.dim() == 4
24 | assert target.dim() == 3
25 | assert predict.size(0) == target.size(0), "{0} vs {1} ".format(predict.size(0), target.size(0))
26 | assert predict.size(2) == target.size(1), "{0} vs {1} ".format(predict.size(2), target.size(1))
27 |         assert predict.size(3) == target.size(2), "{0} vs {1} ".format(predict.size(3), target.size(2))
28 | n, c, h, w = predict.size()
29 | target_mask = (target >= 0) * (target != self.ignore_label)
30 | target = target[target_mask]
31 |         if target.numel() == 0:  # no valid pixels remain after masking the ignore label
32 | return Variable(torch.zeros(1))
33 | predict = predict.transpose(1, 2).transpose(2, 3).contiguous()
34 | predict = predict[target_mask.view(n, h, w, 1).repeat(1, 1, 1, c)].view(-1, c)
35 | loss = F.cross_entropy(predict, target, weight=weight, size_average=self.size_average)
36 | return loss
37 |
--------------------------------------------------------------------------------
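A minimal usage sketch for `CrossEntropy2d`, showing that pixels equal to `ignore_label` contribute nothing to the loss (shapes and values are illustrative):

```python
import torch
from utils.loss import CrossEntropy2d

criterion = CrossEntropy2d(ignore_label=255)
logits = torch.randn(1, 19, 4, 4)            # (n, c, h, w) predictions
target = torch.randint(0, 19, (1, 4, 4))     # (n, h, w) labels
target[0, 0, 0] = 255                        # ignored pixel: masked out in forward()
loss = criterion(logits, target)             # cross-entropy over the 15 valid pixels
print(loss.item())
```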