├── ClassDist
│   ├── ClassDist_adapt.npy
│   ├── ClassDist_bapa.npy
│   ├── ClassDist_dsp.npy
│   ├── ClassDist_ltir.npy
│   └── ClassDist_sfdaseg.npy
├── LICENSE
├── README.md
├── dataset
│   ├── cityscapes_dataset.py
│   ├── cityscapes_list
│   │   ├── info.json
│   │   ├── label.txt
│   │   ├── pseudo_adapt.lst
│   │   ├── pseudo_bapa.lst
│   │   ├── pseudo_dsp.lst
│   │   ├── pseudo_ltir.lst
│   │   ├── pseudo_sfdaseg.lst
│   │   ├── pseudo_sfdaseg_so.lst
│   │   ├── train.txt
│   │   └── val.txt
│   ├── gta5_dataset.py
│   └── gta5_list
│       └── train.txt
├── logs
│   ├── BAPA_SimT_lr25.out
│   ├── BAPA_SimT_lr6.out
│   └── SFDA_SimT.out
├── model
│   ├── deeplab.py
│   ├── deeplab_multi.py
│   ├── deeplab_vgg.py
│   ├── deeplabv3.py
│   └── discriminator.py
├── network.png
├── network1.png
├── network2.png
├── sh_simt.sh
├── sh_warmup.sh
├── tools
│   ├── __pycache__
│   │   ├── _init_paths.cpython-36.pyc
│   │   └── evaluate_cityscapes.cpython-36.pyc
│   ├── _init_paths.py
│   ├── compute_ClassDistribution.py
│   ├── compute_ConfusionMatrix.py
│   ├── compute_iou.py
│   ├── evaluate_cityscapes.py
│   ├── test.py
│   ├── trainV1_warmup.py
│   └── trainV2_simt.py
└── utils
    └── loss.py
--------------------------------------------------------------------------------
/ClassDist/ClassDist_adapt.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CityU-AIM-Group/SimT/0abe8bae984b1bb18b6fe5d32358d15f034db5a5/ClassDist/ClassDist_adapt.npy
--------------------------------------------------------------------------------
/ClassDist/ClassDist_bapa.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CityU-AIM-Group/SimT/0abe8bae984b1bb18b6fe5d32358d15f034db5a5/ClassDist/ClassDist_bapa.npy
--------------------------------------------------------------------------------
/ClassDist/ClassDist_dsp.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CityU-AIM-Group/SimT/0abe8bae984b1bb18b6fe5d32358d15f034db5a5/ClassDist/ClassDist_dsp.npy
--------------------------------------------------------------------------------
/ClassDist/ClassDist_ltir.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CityU-AIM-Group/SimT/0abe8bae984b1bb18b6fe5d32358d15f034db5a5/ClassDist/ClassDist_ltir.npy
--------------------------------------------------------------------------------
/ClassDist/ClassDist_sfdaseg.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CityU-AIM-Group/SimT/0abe8bae984b1bb18b6fe5d32358d15f034db5a5/ClassDist/ClassDist_sfdaseg.npy
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
 1 | MIT License
 2 | 
 3 | Copyright (c) 2022 CityU-AIM-Group
 4 | 
 5 | Permission is hereby granted, free of charge, to any person obtaining a copy
 6 | of this software and associated documentation files (the "Software"), to deal
 7 | in the Software without restriction, including without limitation the rights
 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | # SimT: Handling Open-set Noise for Domain Adaptive Semantic Segmentation
 2 | 
 3 | by [Xiaoqing Guo](https://guo-xiaoqing.github.io/).
 4 | 
 5 | ## Summary:
 6 | 
 7 | ### Introduction:
 8 | This repository is for our CVPR 2022 paper ["SimT: Handling Open-set Noise for Domain Adaptive Semantic Segmentation"](https://arxiv.org/abs/2203.15202) ([知乎](https://zhuanlan.zhihu.com/p/475830652)) and our IEEE TPAMI 2023 paper ["Handling Open-set Noise and Novel Target Recognition in Domain Adaptive Semantic Segmentation"]().
 9 | 
10 | The project has two branches:
11 | - Main branch (SimT-CVPR): ```git clone https://github.com/CityU-AIM-Group/SimT.git```
12 | - [SimT-TPAMI](https://github.com/CityU-AIM-Group/SimT/tree/SimT-TPAMI23) branch: ```git clone -b SimT-TPAMI23 https://github.com/CityU-AIM-Group/SimT.git```
13 | 
14 | ### Framework:
15 | ![](https://github.com/CityU-AIM-Group/SimT/blob/main/network.png)
16 | 
17 | ## Usage:
18 | ### Requirements:
19 | PyTorch 1.3 and PyTorch 1.7 are both supported
20 | 
21 | Python 3.6
22 | 
23 | ### Preprocessing:
24 | Clone the repository and run the two training stages:
25 | ```
26 | git clone https://github.com/Guo-Xiaoqing/SimT.git
27 | cd SimT
28 | bash sh_warmup.sh ## Stage 1: warmup
29 | bash sh_simt.sh ## Stage 2: training with SimT
30 | ```
31 | 
32 | ### Data preparation:
33 | The pseudo labels generated from the UDA black box of BAPA-Net [1] can be downloaded from [Google Drive](https://drive.google.com/drive/folders/1Y1ujTw9PzrX61jirSZgypMumQrjZXp18?usp=sharing)
34 | 
35 | The pseudo labels generated from the SFDA black box of SFDASeg [2] can be downloaded from [Google Drive](https://drive.google.com/drive/folders/1oi98NhGngXCoCQPhJ9IpX_GvRY1XgA2R?usp=sharing)
36 | 
37 | [1] Yahao Liu, Jinhong Deng, Xinchen Gao, Wen Li, and Lixin Duan. BAPA-Net: Boundary adaptation and prototype alignment for cross-domain semantic segmentation. In ICCV, pages 8801–8811, 2021.
38 | 
39 | [2] Jogendra Nath Kundu, Akshay Kulkarni, Amit Singh, Varun Jampani, and R. Venkatesh Babu. Generalize then adapt: Source-free domain adaptive semantic segmentation. In ICCV, pages 7046–7056, 2021.
40 | 
41 | ### Pretrained model:
42 | Download the pretrained model, the warmup UDA model, and the warmup SFDA model from [Google Drive](https://drive.google.com/file/d/18do3btOhtW4q_9d24S9HsFQSvcI1p0iK/view?usp=sharing), then put them in the './snapshots' folder for initialization.
43 | 
44 | ### Well-trained models:
45 | The well-trained UDA and SFDA models can be downloaded from [Google Drive](https://drive.google.com/file/d/18do3btOhtW4q_9d24S9HsFQSvcI1p0iK/view?usp=sharing).
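
### Class distribution files:
The `./ClassDist` folder ships one `ClassDist_*.npy` file per pseudo-label source (adapt, bapa, dsp, ltir, sfdaseg), presumably produced by `tools/compute_ClassDistribution.py`. Below is a minimal inspection sketch, assuming only that each file stores a NumPy array of per-class statistics over the 19 Cityscapes training classes; verify the actual layout before relying on it:
```
import numpy as np

# ClassDist_bapa.npy is one of the files shipped in ./ClassDist;
# its exact layout is an assumption here, so inspect it first.
dist = np.load('./ClassDist/ClassDist_bapa.npy')
print(dist.shape)
print(dist / (dist.sum() + 1e-12))  # view as a normalized class distribution
```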
46 | 
47 | ## Log files
48 | Log files can be found [here](https://github.com/CityU-AIM-Group/SimT/blob/main/logs/).
49 | 
50 | ## Citation:
51 | ```
52 | @inproceedings{guo2022simt,
53 | title={SimT: Handling Open-set Noise for Domain Adaptive Semantic Segmentation},
54 | author={Guo, Xiaoqing and Liu, Jie and Liu, Tongliang and Yuan, Yixuan},
55 | booktitle={CVPR},
56 | year={2022}
57 | }
58 | ```
59 | 
60 | ## Questions:
61 | Please contact xiaoqingguo1128@gmail.com
62 | 
--------------------------------------------------------------------------------
/dataset/cityscapes_dataset.py:
--------------------------------------------------------------------------------
 1 | import os
 2 | import os.path as osp
 3 | import numpy as np
 4 | import random
 5 | import matplotlib.pyplot as plt
 6 | import collections
 7 | import torch
 8 | import torchvision
 9 | from torch.utils import data
10 | from PIL import Image
11 | from torchvision.transforms import functional as tf
12 | 
13 | class RandomRotate(object):
14 |     def __init__(self, degree):
15 |         self.degree = degree
16 | 
17 |     def __call__(self, img, mask):
18 |         rotate_degree = random.random() * 2 * self.degree - self.degree  # uniform in [-degree, degree]
19 |         return tf.affine(img, translate=(0, 0), scale=1.0, angle=rotate_degree, resample=Image.BILINEAR, fillcolor=(0, 0, 0), shear=0.0), tf.affine(mask, translate=(0, 0), scale=1.0, angle=rotate_degree, resample=Image.NEAREST, fillcolor=255, shear=0.0)  # resample/fillcolor are the older torchvision keyword names (renamed to interpolation/fill in newer releases)
20 | 
21 | class cityscapesDataSet(data.Dataset):
22 |     def __init__(self, root, list_path, max_iters=None, crop_size=(321, 321), mean=(128, 128, 128), scale=True, mirror=True, ignore_label=255, set='val'):
23 |         self.root = root
24 |         self.list_path = list_path
25 |         self.crop_size = crop_size
26 |         self.scale = scale
27 |         self.ignore_label = ignore_label
28 |         self.mean = mean
29 |         self.is_mirror = mirror
30 |         # self.mean_bgr = np.array([104.00698793, 116.66876762, 122.67891434])
31 |         self.img_ids = [i_id.strip() for i_id in open(list_path)]
32 |         if max_iters is not None:
33 |             self.img_ids = self.img_ids * int(np.ceil(float(max_iters) / len(self.img_ids)))  # tile the list so it covers max_iters samples
34 |         self.files = []
35 |         self.set = set
36 |         # for split in ["train", "trainval", "val"]:
37 |         for name in self.img_ids:
38 |             img_file = osp.join(self.root, "%s/%s" % (self.set, name))
39 |             self.files.append({
40 |                 "img": img_file,
41 |                 "name": name
42 |             })
43 | 
44 |     def __len__(self):
45 |         return len(self.files)
46 | 
47 |     def __getitem__(self, index):
48 |         datafiles = self.files[index]
49 | 
50 |         image = Image.open(datafiles["img"]).convert('RGB')
51 |         name = datafiles["name"]
52 | 
53 |         # resize
54 |         image = image.resize(self.crop_size, Image.BICUBIC)
55 | 
56 |         image = np.asarray(image, np.float32)
57 | 
58 |         size = image.shape
59 |         image = image[:, :, ::-1]  # change to BGR
60 |         image -= self.mean
61 |         image = image.transpose((2, 0, 1))
62 | 
63 |         return image.copy(), np.array(size), name
64 | 
65 | 
66 | class cityscapesPseudo(data.Dataset):
67 |     def __init__(self, root, list_path, max_iters=None, crop_size=(321, 321), mean=(128, 128, 128), scale=True, mirror=False, ignore_label=255):
68 |         self.root = root
69 |         self.list_path = list_path
70 |         self.crop_size = crop_size
71 |         self.scale = scale
72 |         self.ignore_label = ignore_label
73 |         self.mean = mean
74 |         self.is_mirror = mirror
75 |         # self.mean_bgr = np.array([104.00698793, 116.66876762, 122.67891434])
76 |         self.img_ids = [i_id.strip().split() for i_id in open(list_path)]
77 |         if max_iters is not None:
78 |             self.img_ids = self.img_ids * int(np.ceil(float(max_iters) / len(self.img_ids)))
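        # Each parsed entry (one line of a pseudo-label list such as
        # dataset/cityscapes_list/pseudo_bapa.lst) is assumed to hold two
        # whitespace-separated paths: the RGB image and its pseudo-label map,
        # which the loop below unpacks.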
79 |         self.files = []
80 |         self.rotate = RandomRotate(5)
81 | 
82 |         for item in self.img_ids:
83 |             image_path, label_path = item
84 |             name = osp.splitext(osp.basename(label_path))[0]
85 |             img_file = osp.join(self.root, image_path)
86 |             label_file = osp.join(self.root, label_path)
87 |             self.files.append({
88 |                 "img": img_file,
89 |                 "label": label_file,
90 |                 "name": name
91 |             })
92 |         #print(self.files)
93 | 
94 |     def __len__(self):
95 |         return len(self.files)
96 | 
97 |     def __getitem__(self, index):
98 |         datafiles = self.files[index]
99 | 
100 |         image = Image.open(datafiles["img"]).convert('RGB')
101 |         label = Image.open(datafiles["label"])
102 |         name = datafiles["name"]
103 | 
104 |         # resize
105 |         image = image.resize(self.crop_size, Image.BICUBIC)
106 |         label = label.resize(self.crop_size, Image.NEAREST)
107 | 
108 |         image = np.asarray(image, np.float32)
109 |         label = np.asarray(label, np.float32)
110 | 
111 |         if self.is_mirror:
112 |             flip = np.random.choice(2) * 2 - 1  # -1: mirror horizontally, 1: keep
113 |             image = image[:, ::flip, :]  # flip along the width axis, matching the label below
114 |             label = label[:, ::flip]
115 | 
116 |         size = image.shape
117 |         image = image[:, :, ::-1]  # change to BGR
118 |         image -= self.mean
119 |         image = image.transpose((2, 0, 1))
120 |         return image.copy(), label.copy(), np.array(size), name
121 | 
122 | 
123 | if __name__ == '__main__':
124 |     # Smoke test with cityscapesDataSet (GTA5DataSet lives in gta5_dataset.py); paths are examples
125 |     dst = cityscapesDataSet("./data/Cityscapes/leftImg8bit", "./dataset/cityscapes_list/val.txt")
126 |     trainloader = data.DataLoader(dst, batch_size=4)
127 |     for i, batch in enumerate(trainloader):
128 |         imgs, _, _ = batch
129 |         if i == 0:
130 |             img = torchvision.utils.make_grid(imgs).numpy()
131 |             img = np.transpose(img, (1, 2, 0))
132 |             img = img[:, :, ::-1]  # BGR back to RGB for display
133 |             plt.imshow(img)
134 |             plt.show()
--------------------------------------------------------------------------------
/dataset/cityscapes_list/info.json:
--------------------------------------------------------------------------------
 1 | {
 2 |     "classes":19,
 3 |     "label2train":[
 4 |     [0, 255],
 5 |     [1, 255],
 6 |     [2, 255],
 7 |     [3, 255],
 8 |     [4, 255],
 9 |     [5, 255],
10 |     [6, 255],
11 |     [7, 0],
12 |     [8, 1],
13 |     [9, 255],
14 |     [10, 255],
15 |     [11, 2],
16 |     [12, 3],
17 |     [13, 4],
18 |     [14, 255],
19 |     [15, 255],
20 |     [16, 255],
21 |     [17, 5],
22 |     [18, 255],
23 |     [19, 6],
24 |     [20, 7],
25 |     [21, 8],
26 |     [22, 9],
27 |     [23, 10],
28 |     [24, 11],
29 |     [25, 12],
30 |     [26, 13],
31 |     [27, 14],
32 |     [28, 15],
33 |     [29, 255],
34 |     [30, 255],
35 |     [31, 16],
36 |     [32, 17],
37 |     [33, 18],
38 |     [-1, 255]],
39 |     "label":[
40 |     "road",
41 |     "sidewalk",
42 |     "building",
43 |     "wall",
44 |     "fence",
45 |     "pole",
46 |     "light",
47 |     "sign",
48 |     "vegetation",
49 |     "terrain",
50 |     "sky",
51 |     "person",
52 |     "rider",
53 |     "car",
54 |     "truck",
55 |     "bus",
56 |     "train",
57 |     "motocycle",
58 |     "bicycle"],
59 |     "palette":[
60 |     [128,64,128],
61 |     [244,35,232],
62 |     [70,70,70],
63 |     [102,102,156],
64 |     [190,153,153],
65 |     [153,153,153],
66 |     [250,170,30],
67 |     [220,220,0],
68 |     [107,142,35],
69 |     [152,251,152],
70 |     [70,130,180],
71 |     [220,20,60],
72 |     [255,0,0],
73 |     [0,0,142],
74 |     [0,0,70],
75 |     [0,60,100],
76 |     [0,80,100],
77 |     [0,0,230],
78 |     [119,11,32],
79 |     [0,0,0]],
80 |     "mean":[
81 |     73.158359210711552,
82 |     82.908917542625858,
83 |     72.392398761941593],
84 |     "std":[
85 |     47.675755341814678,
86 |     48.494214368814916,
87 |     47.736546325441594]
88 | }
89 | 
--------------------------------------------------------------------------------
/dataset/cityscapes_list/label.txt:
--------------------------------------------------------------------------------
 1 | frankfurt/frankfurt_000001_007973_gtFine_labelIds.png
 2 | 
frankfurt/frankfurt_000001_025921_gtFine_labelIds.png 3 | frankfurt/frankfurt_000001_062016_gtFine_labelIds.png 4 | frankfurt/frankfurt_000001_049078_gtFine_labelIds.png 5 | frankfurt/frankfurt_000000_009561_gtFine_labelIds.png 6 | frankfurt/frankfurt_000001_013710_gtFine_labelIds.png 7 | frankfurt/frankfurt_000001_041664_gtFine_labelIds.png 8 | frankfurt/frankfurt_000000_013240_gtFine_labelIds.png 9 | frankfurt/frankfurt_000001_044787_gtFine_labelIds.png 10 | frankfurt/frankfurt_000001_015328_gtFine_labelIds.png 11 | frankfurt/frankfurt_000001_073243_gtFine_labelIds.png 12 | frankfurt/frankfurt_000001_034816_gtFine_labelIds.png 13 | frankfurt/frankfurt_000001_041074_gtFine_labelIds.png 14 | frankfurt/frankfurt_000001_005898_gtFine_labelIds.png 15 | frankfurt/frankfurt_000000_022254_gtFine_labelIds.png 16 | frankfurt/frankfurt_000001_044658_gtFine_labelIds.png 17 | frankfurt/frankfurt_000001_009504_gtFine_labelIds.png 18 | frankfurt/frankfurt_000001_024927_gtFine_labelIds.png 19 | frankfurt/frankfurt_000001_017842_gtFine_labelIds.png 20 | frankfurt/frankfurt_000001_068208_gtFine_labelIds.png 21 | frankfurt/frankfurt_000001_013016_gtFine_labelIds.png 22 | frankfurt/frankfurt_000001_010156_gtFine_labelIds.png 23 | frankfurt/frankfurt_000000_002963_gtFine_labelIds.png 24 | frankfurt/frankfurt_000001_020693_gtFine_labelIds.png 25 | frankfurt/frankfurt_000001_078803_gtFine_labelIds.png 26 | frankfurt/frankfurt_000001_025713_gtFine_labelIds.png 27 | frankfurt/frankfurt_000001_007285_gtFine_labelIds.png 28 | frankfurt/frankfurt_000001_070099_gtFine_labelIds.png 29 | frankfurt/frankfurt_000000_009291_gtFine_labelIds.png 30 | frankfurt/frankfurt_000000_019607_gtFine_labelIds.png 31 | frankfurt/frankfurt_000001_068063_gtFine_labelIds.png 32 | frankfurt/frankfurt_000000_003920_gtFine_labelIds.png 33 | frankfurt/frankfurt_000001_077233_gtFine_labelIds.png 34 | frankfurt/frankfurt_000001_029086_gtFine_labelIds.png 35 | frankfurt/frankfurt_000001_060545_gtFine_labelIds.png 36 | frankfurt/frankfurt_000001_001464_gtFine_labelIds.png 37 | frankfurt/frankfurt_000001_028590_gtFine_labelIds.png 38 | frankfurt/frankfurt_000001_016462_gtFine_labelIds.png 39 | frankfurt/frankfurt_000001_060422_gtFine_labelIds.png 40 | frankfurt/frankfurt_000001_009058_gtFine_labelIds.png 41 | frankfurt/frankfurt_000001_080830_gtFine_labelIds.png 42 | frankfurt/frankfurt_000001_012870_gtFine_labelIds.png 43 | frankfurt/frankfurt_000001_077434_gtFine_labelIds.png 44 | frankfurt/frankfurt_000001_033655_gtFine_labelIds.png 45 | frankfurt/frankfurt_000001_051516_gtFine_labelIds.png 46 | frankfurt/frankfurt_000001_044413_gtFine_labelIds.png 47 | frankfurt/frankfurt_000001_055172_gtFine_labelIds.png 48 | frankfurt/frankfurt_000001_040575_gtFine_labelIds.png 49 | frankfurt/frankfurt_000000_020215_gtFine_labelIds.png 50 | frankfurt/frankfurt_000000_017228_gtFine_labelIds.png 51 | frankfurt/frankfurt_000001_041354_gtFine_labelIds.png 52 | frankfurt/frankfurt_000000_008206_gtFine_labelIds.png 53 | frankfurt/frankfurt_000001_043564_gtFine_labelIds.png 54 | frankfurt/frankfurt_000001_032711_gtFine_labelIds.png 55 | frankfurt/frankfurt_000001_064130_gtFine_labelIds.png 56 | frankfurt/frankfurt_000001_053102_gtFine_labelIds.png 57 | frankfurt/frankfurt_000001_082087_gtFine_labelIds.png 58 | frankfurt/frankfurt_000001_057478_gtFine_labelIds.png 59 | frankfurt/frankfurt_000001_007407_gtFine_labelIds.png 60 | frankfurt/frankfurt_000001_008200_gtFine_labelIds.png 61 | frankfurt/frankfurt_000001_038844_gtFine_labelIds.png 62 | 
frankfurt/frankfurt_000001_016029_gtFine_labelIds.png 63 | frankfurt/frankfurt_000001_058176_gtFine_labelIds.png 64 | frankfurt/frankfurt_000001_057181_gtFine_labelIds.png 65 | frankfurt/frankfurt_000001_039895_gtFine_labelIds.png 66 | frankfurt/frankfurt_000000_000294_gtFine_labelIds.png 67 | frankfurt/frankfurt_000001_055062_gtFine_labelIds.png 68 | frankfurt/frankfurt_000001_083029_gtFine_labelIds.png 69 | frankfurt/frankfurt_000001_010444_gtFine_labelIds.png 70 | frankfurt/frankfurt_000001_041517_gtFine_labelIds.png 71 | frankfurt/frankfurt_000001_069633_gtFine_labelIds.png 72 | frankfurt/frankfurt_000001_020287_gtFine_labelIds.png 73 | frankfurt/frankfurt_000001_012038_gtFine_labelIds.png 74 | frankfurt/frankfurt_000001_046504_gtFine_labelIds.png 75 | frankfurt/frankfurt_000001_032556_gtFine_labelIds.png 76 | frankfurt/frankfurt_000000_001751_gtFine_labelIds.png 77 | frankfurt/frankfurt_000001_000538_gtFine_labelIds.png 78 | frankfurt/frankfurt_000001_083852_gtFine_labelIds.png 79 | frankfurt/frankfurt_000001_077092_gtFine_labelIds.png 80 | frankfurt/frankfurt_000001_017101_gtFine_labelIds.png 81 | frankfurt/frankfurt_000001_044525_gtFine_labelIds.png 82 | frankfurt/frankfurt_000001_005703_gtFine_labelIds.png 83 | frankfurt/frankfurt_000001_080391_gtFine_labelIds.png 84 | frankfurt/frankfurt_000001_038418_gtFine_labelIds.png 85 | frankfurt/frankfurt_000001_066832_gtFine_labelIds.png 86 | frankfurt/frankfurt_000000_003357_gtFine_labelIds.png 87 | frankfurt/frankfurt_000000_020880_gtFine_labelIds.png 88 | frankfurt/frankfurt_000001_062396_gtFine_labelIds.png 89 | frankfurt/frankfurt_000001_046272_gtFine_labelIds.png 90 | frankfurt/frankfurt_000001_062509_gtFine_labelIds.png 91 | frankfurt/frankfurt_000001_054415_gtFine_labelIds.png 92 | frankfurt/frankfurt_000001_021406_gtFine_labelIds.png 93 | frankfurt/frankfurt_000001_030310_gtFine_labelIds.png 94 | frankfurt/frankfurt_000000_014480_gtFine_labelIds.png 95 | frankfurt/frankfurt_000001_005410_gtFine_labelIds.png 96 | frankfurt/frankfurt_000000_022797_gtFine_labelIds.png 97 | frankfurt/frankfurt_000001_035144_gtFine_labelIds.png 98 | frankfurt/frankfurt_000001_014565_gtFine_labelIds.png 99 | frankfurt/frankfurt_000001_065850_gtFine_labelIds.png 100 | frankfurt/frankfurt_000000_000576_gtFine_labelIds.png 101 | frankfurt/frankfurt_000001_065617_gtFine_labelIds.png 102 | frankfurt/frankfurt_000000_005543_gtFine_labelIds.png 103 | frankfurt/frankfurt_000001_055709_gtFine_labelIds.png 104 | frankfurt/frankfurt_000001_027325_gtFine_labelIds.png 105 | frankfurt/frankfurt_000001_011835_gtFine_labelIds.png 106 | frankfurt/frankfurt_000001_046779_gtFine_labelIds.png 107 | frankfurt/frankfurt_000001_064305_gtFine_labelIds.png 108 | frankfurt/frankfurt_000001_012738_gtFine_labelIds.png 109 | frankfurt/frankfurt_000001_048355_gtFine_labelIds.png 110 | frankfurt/frankfurt_000001_019969_gtFine_labelIds.png 111 | frankfurt/frankfurt_000001_080091_gtFine_labelIds.png 112 | frankfurt/frankfurt_000000_011007_gtFine_labelIds.png 113 | frankfurt/frankfurt_000000_015676_gtFine_labelIds.png 114 | frankfurt/frankfurt_000001_044227_gtFine_labelIds.png 115 | frankfurt/frankfurt_000001_055387_gtFine_labelIds.png 116 | frankfurt/frankfurt_000001_038245_gtFine_labelIds.png 117 | frankfurt/frankfurt_000001_059642_gtFine_labelIds.png 118 | frankfurt/frankfurt_000001_030669_gtFine_labelIds.png 119 | frankfurt/frankfurt_000001_068772_gtFine_labelIds.png 120 | frankfurt/frankfurt_000001_079206_gtFine_labelIds.png 121 | 
frankfurt/frankfurt_000001_055306_gtFine_labelIds.png 122 | frankfurt/frankfurt_000001_012699_gtFine_labelIds.png 123 | frankfurt/frankfurt_000001_042384_gtFine_labelIds.png 124 | frankfurt/frankfurt_000001_054077_gtFine_labelIds.png 125 | frankfurt/frankfurt_000001_010830_gtFine_labelIds.png 126 | frankfurt/frankfurt_000001_052120_gtFine_labelIds.png 127 | frankfurt/frankfurt_000001_032018_gtFine_labelIds.png 128 | frankfurt/frankfurt_000001_051737_gtFine_labelIds.png 129 | frankfurt/frankfurt_000001_028335_gtFine_labelIds.png 130 | frankfurt/frankfurt_000001_049770_gtFine_labelIds.png 131 | frankfurt/frankfurt_000001_054884_gtFine_labelIds.png 132 | frankfurt/frankfurt_000001_019698_gtFine_labelIds.png 133 | frankfurt/frankfurt_000000_011461_gtFine_labelIds.png 134 | frankfurt/frankfurt_000000_001016_gtFine_labelIds.png 135 | frankfurt/frankfurt_000001_062250_gtFine_labelIds.png 136 | frankfurt/frankfurt_000001_004736_gtFine_labelIds.png 137 | frankfurt/frankfurt_000001_068682_gtFine_labelIds.png 138 | frankfurt/frankfurt_000000_006589_gtFine_labelIds.png 139 | frankfurt/frankfurt_000000_011810_gtFine_labelIds.png 140 | frankfurt/frankfurt_000001_066574_gtFine_labelIds.png 141 | frankfurt/frankfurt_000001_048654_gtFine_labelIds.png 142 | frankfurt/frankfurt_000001_049209_gtFine_labelIds.png 143 | frankfurt/frankfurt_000001_042098_gtFine_labelIds.png 144 | frankfurt/frankfurt_000001_031416_gtFine_labelIds.png 145 | frankfurt/frankfurt_000000_009969_gtFine_labelIds.png 146 | frankfurt/frankfurt_000001_038645_gtFine_labelIds.png 147 | frankfurt/frankfurt_000001_020046_gtFine_labelIds.png 148 | frankfurt/frankfurt_000001_054219_gtFine_labelIds.png 149 | frankfurt/frankfurt_000001_002759_gtFine_labelIds.png 150 | frankfurt/frankfurt_000001_066438_gtFine_labelIds.png 151 | frankfurt/frankfurt_000000_020321_gtFine_labelIds.png 152 | frankfurt/frankfurt_000001_002646_gtFine_labelIds.png 153 | frankfurt/frankfurt_000001_046126_gtFine_labelIds.png 154 | frankfurt/frankfurt_000000_002196_gtFine_labelIds.png 155 | frankfurt/frankfurt_000001_057954_gtFine_labelIds.png 156 | frankfurt/frankfurt_000001_011715_gtFine_labelIds.png 157 | frankfurt/frankfurt_000000_021879_gtFine_labelIds.png 158 | frankfurt/frankfurt_000001_082466_gtFine_labelIds.png 159 | frankfurt/frankfurt_000000_003025_gtFine_labelIds.png 160 | frankfurt/frankfurt_000001_023369_gtFine_labelIds.png 161 | frankfurt/frankfurt_000001_061682_gtFine_labelIds.png 162 | frankfurt/frankfurt_000001_017459_gtFine_labelIds.png 163 | frankfurt/frankfurt_000001_059789_gtFine_labelIds.png 164 | frankfurt/frankfurt_000001_073464_gtFine_labelIds.png 165 | frankfurt/frankfurt_000001_063045_gtFine_labelIds.png 166 | frankfurt/frankfurt_000001_064651_gtFine_labelIds.png 167 | frankfurt/frankfurt_000000_013382_gtFine_labelIds.png 168 | frankfurt/frankfurt_000001_002512_gtFine_labelIds.png 169 | frankfurt/frankfurt_000001_032942_gtFine_labelIds.png 170 | frankfurt/frankfurt_000001_010600_gtFine_labelIds.png 171 | frankfurt/frankfurt_000001_030067_gtFine_labelIds.png 172 | frankfurt/frankfurt_000001_014741_gtFine_labelIds.png 173 | frankfurt/frankfurt_000000_021667_gtFine_labelIds.png 174 | frankfurt/frankfurt_000001_051807_gtFine_labelIds.png 175 | frankfurt/frankfurt_000001_019854_gtFine_labelIds.png 176 | frankfurt/frankfurt_000001_015768_gtFine_labelIds.png 177 | frankfurt/frankfurt_000001_007857_gtFine_labelIds.png 178 | frankfurt/frankfurt_000001_058914_gtFine_labelIds.png 179 | frankfurt/frankfurt_000000_012868_gtFine_labelIds.png 180 | 
frankfurt/frankfurt_000000_013942_gtFine_labelIds.png 181 | frankfurt/frankfurt_000001_014406_gtFine_labelIds.png 182 | frankfurt/frankfurt_000001_049298_gtFine_labelIds.png 183 | frankfurt/frankfurt_000001_023769_gtFine_labelIds.png 184 | frankfurt/frankfurt_000001_012519_gtFine_labelIds.png 185 | frankfurt/frankfurt_000001_064925_gtFine_labelIds.png 186 | frankfurt/frankfurt_000001_072295_gtFine_labelIds.png 187 | frankfurt/frankfurt_000001_058504_gtFine_labelIds.png 188 | frankfurt/frankfurt_000001_059119_gtFine_labelIds.png 189 | frankfurt/frankfurt_000001_015091_gtFine_labelIds.png 190 | frankfurt/frankfurt_000001_058057_gtFine_labelIds.png 191 | frankfurt/frankfurt_000001_003056_gtFine_labelIds.png 192 | frankfurt/frankfurt_000001_007622_gtFine_labelIds.png 193 | frankfurt/frankfurt_000001_016273_gtFine_labelIds.png 194 | frankfurt/frankfurt_000001_035864_gtFine_labelIds.png 195 | frankfurt/frankfurt_000001_067092_gtFine_labelIds.png 196 | frankfurt/frankfurt_000000_013067_gtFine_labelIds.png 197 | frankfurt/frankfurt_000001_067474_gtFine_labelIds.png 198 | frankfurt/frankfurt_000001_060135_gtFine_labelIds.png 199 | frankfurt/frankfurt_000000_018797_gtFine_labelIds.png 200 | frankfurt/frankfurt_000000_005898_gtFine_labelIds.png 201 | frankfurt/frankfurt_000001_055603_gtFine_labelIds.png 202 | frankfurt/frankfurt_000001_060906_gtFine_labelIds.png 203 | frankfurt/frankfurt_000001_062653_gtFine_labelIds.png 204 | frankfurt/frankfurt_000000_004617_gtFine_labelIds.png 205 | frankfurt/frankfurt_000001_055538_gtFine_labelIds.png 206 | frankfurt/frankfurt_000000_008451_gtFine_labelIds.png 207 | frankfurt/frankfurt_000001_052594_gtFine_labelIds.png 208 | frankfurt/frankfurt_000001_004327_gtFine_labelIds.png 209 | frankfurt/frankfurt_000001_075296_gtFine_labelIds.png 210 | frankfurt/frankfurt_000001_073088_gtFine_labelIds.png 211 | frankfurt/frankfurt_000001_005184_gtFine_labelIds.png 212 | frankfurt/frankfurt_000000_016286_gtFine_labelIds.png 213 | frankfurt/frankfurt_000001_008688_gtFine_labelIds.png 214 | frankfurt/frankfurt_000000_011074_gtFine_labelIds.png 215 | frankfurt/frankfurt_000001_056580_gtFine_labelIds.png 216 | frankfurt/frankfurt_000001_067735_gtFine_labelIds.png 217 | frankfurt/frankfurt_000001_034047_gtFine_labelIds.png 218 | frankfurt/frankfurt_000001_076502_gtFine_labelIds.png 219 | frankfurt/frankfurt_000001_071288_gtFine_labelIds.png 220 | frankfurt/frankfurt_000001_067295_gtFine_labelIds.png 221 | frankfurt/frankfurt_000001_071781_gtFine_labelIds.png 222 | frankfurt/frankfurt_000000_012121_gtFine_labelIds.png 223 | frankfurt/frankfurt_000001_004859_gtFine_labelIds.png 224 | frankfurt/frankfurt_000001_073911_gtFine_labelIds.png 225 | frankfurt/frankfurt_000001_047552_gtFine_labelIds.png 226 | frankfurt/frankfurt_000001_037705_gtFine_labelIds.png 227 | frankfurt/frankfurt_000001_025512_gtFine_labelIds.png 228 | frankfurt/frankfurt_000001_047178_gtFine_labelIds.png 229 | frankfurt/frankfurt_000001_014221_gtFine_labelIds.png 230 | frankfurt/frankfurt_000000_007365_gtFine_labelIds.png 231 | frankfurt/frankfurt_000001_049698_gtFine_labelIds.png 232 | frankfurt/frankfurt_000001_065160_gtFine_labelIds.png 233 | frankfurt/frankfurt_000001_061763_gtFine_labelIds.png 234 | frankfurt/frankfurt_000000_010351_gtFine_labelIds.png 235 | frankfurt/frankfurt_000001_072155_gtFine_labelIds.png 236 | frankfurt/frankfurt_000001_023235_gtFine_labelIds.png 237 | frankfurt/frankfurt_000000_015389_gtFine_labelIds.png 238 | frankfurt/frankfurt_000000_009688_gtFine_labelIds.png 239 | 
frankfurt/frankfurt_000000_016005_gtFine_labelIds.png 240 | frankfurt/frankfurt_000001_054640_gtFine_labelIds.png 241 | frankfurt/frankfurt_000001_029600_gtFine_labelIds.png 242 | frankfurt/frankfurt_000001_028232_gtFine_labelIds.png 243 | frankfurt/frankfurt_000001_050686_gtFine_labelIds.png 244 | frankfurt/frankfurt_000001_013496_gtFine_labelIds.png 245 | frankfurt/frankfurt_000001_066092_gtFine_labelIds.png 246 | frankfurt/frankfurt_000001_009854_gtFine_labelIds.png 247 | frankfurt/frankfurt_000001_067178_gtFine_labelIds.png 248 | frankfurt/frankfurt_000001_028854_gtFine_labelIds.png 249 | frankfurt/frankfurt_000001_083199_gtFine_labelIds.png 250 | frankfurt/frankfurt_000001_064798_gtFine_labelIds.png 251 | frankfurt/frankfurt_000001_018113_gtFine_labelIds.png 252 | frankfurt/frankfurt_000001_050149_gtFine_labelIds.png 253 | frankfurt/frankfurt_000001_048196_gtFine_labelIds.png 254 | frankfurt/frankfurt_000000_001236_gtFine_labelIds.png 255 | frankfurt/frankfurt_000000_017476_gtFine_labelIds.png 256 | frankfurt/frankfurt_000001_003588_gtFine_labelIds.png 257 | frankfurt/frankfurt_000001_021825_gtFine_labelIds.png 258 | frankfurt/frankfurt_000000_010763_gtFine_labelIds.png 259 | frankfurt/frankfurt_000001_062793_gtFine_labelIds.png 260 | frankfurt/frankfurt_000001_029236_gtFine_labelIds.png 261 | frankfurt/frankfurt_000001_075984_gtFine_labelIds.png 262 | frankfurt/frankfurt_000001_031266_gtFine_labelIds.png 263 | frankfurt/frankfurt_000001_043395_gtFine_labelIds.png 264 | frankfurt/frankfurt_000001_040732_gtFine_labelIds.png 265 | frankfurt/frankfurt_000001_011162_gtFine_labelIds.png 266 | frankfurt/frankfurt_000000_012009_gtFine_labelIds.png 267 | frankfurt/frankfurt_000001_042733_gtFine_labelIds.png 268 | lindau/lindau_000052_000019_gtFine_labelIds.png 269 | lindau/lindau_000009_000019_gtFine_labelIds.png 270 | lindau/lindau_000037_000019_gtFine_labelIds.png 271 | lindau/lindau_000047_000019_gtFine_labelIds.png 272 | lindau/lindau_000015_000019_gtFine_labelIds.png 273 | lindau/lindau_000030_000019_gtFine_labelIds.png 274 | lindau/lindau_000012_000019_gtFine_labelIds.png 275 | lindau/lindau_000032_000019_gtFine_labelIds.png 276 | lindau/lindau_000046_000019_gtFine_labelIds.png 277 | lindau/lindau_000000_000019_gtFine_labelIds.png 278 | lindau/lindau_000031_000019_gtFine_labelIds.png 279 | lindau/lindau_000011_000019_gtFine_labelIds.png 280 | lindau/lindau_000027_000019_gtFine_labelIds.png 281 | lindau/lindau_000054_000019_gtFine_labelIds.png 282 | lindau/lindau_000026_000019_gtFine_labelIds.png 283 | lindau/lindau_000017_000019_gtFine_labelIds.png 284 | lindau/lindau_000023_000019_gtFine_labelIds.png 285 | lindau/lindau_000005_000019_gtFine_labelIds.png 286 | lindau/lindau_000056_000019_gtFine_labelIds.png 287 | lindau/lindau_000025_000019_gtFine_labelIds.png 288 | lindau/lindau_000045_000019_gtFine_labelIds.png 289 | lindau/lindau_000014_000019_gtFine_labelIds.png 290 | lindau/lindau_000004_000019_gtFine_labelIds.png 291 | lindau/lindau_000021_000019_gtFine_labelIds.png 292 | lindau/lindau_000049_000019_gtFine_labelIds.png 293 | lindau/lindau_000033_000019_gtFine_labelIds.png 294 | lindau/lindau_000042_000019_gtFine_labelIds.png 295 | lindau/lindau_000013_000019_gtFine_labelIds.png 296 | lindau/lindau_000024_000019_gtFine_labelIds.png 297 | lindau/lindau_000002_000019_gtFine_labelIds.png 298 | lindau/lindau_000043_000019_gtFine_labelIds.png 299 | lindau/lindau_000016_000019_gtFine_labelIds.png 300 | lindau/lindau_000050_000019_gtFine_labelIds.png 301 | 
lindau/lindau_000018_000019_gtFine_labelIds.png 302 | lindau/lindau_000007_000019_gtFine_labelIds.png 303 | lindau/lindau_000048_000019_gtFine_labelIds.png 304 | lindau/lindau_000022_000019_gtFine_labelIds.png 305 | lindau/lindau_000053_000019_gtFine_labelIds.png 306 | lindau/lindau_000038_000019_gtFine_labelIds.png 307 | lindau/lindau_000001_000019_gtFine_labelIds.png 308 | lindau/lindau_000036_000019_gtFine_labelIds.png 309 | lindau/lindau_000035_000019_gtFine_labelIds.png 310 | lindau/lindau_000003_000019_gtFine_labelIds.png 311 | lindau/lindau_000034_000019_gtFine_labelIds.png 312 | lindau/lindau_000010_000019_gtFine_labelIds.png 313 | lindau/lindau_000055_000019_gtFine_labelIds.png 314 | lindau/lindau_000006_000019_gtFine_labelIds.png 315 | lindau/lindau_000019_000019_gtFine_labelIds.png 316 | lindau/lindau_000029_000019_gtFine_labelIds.png 317 | lindau/lindau_000039_000019_gtFine_labelIds.png 318 | lindau/lindau_000051_000019_gtFine_labelIds.png 319 | lindau/lindau_000020_000019_gtFine_labelIds.png 320 | lindau/lindau_000057_000019_gtFine_labelIds.png 321 | lindau/lindau_000041_000019_gtFine_labelIds.png 322 | lindau/lindau_000040_000019_gtFine_labelIds.png 323 | lindau/lindau_000044_000019_gtFine_labelIds.png 324 | lindau/lindau_000028_000019_gtFine_labelIds.png 325 | lindau/lindau_000058_000019_gtFine_labelIds.png 326 | lindau/lindau_000008_000019_gtFine_labelIds.png 327 | munster/munster_000000_000019_gtFine_labelIds.png 328 | munster/munster_000012_000019_gtFine_labelIds.png 329 | munster/munster_000032_000019_gtFine_labelIds.png 330 | munster/munster_000068_000019_gtFine_labelIds.png 331 | munster/munster_000101_000019_gtFine_labelIds.png 332 | munster/munster_000153_000019_gtFine_labelIds.png 333 | munster/munster_000115_000019_gtFine_labelIds.png 334 | munster/munster_000029_000019_gtFine_labelIds.png 335 | munster/munster_000019_000019_gtFine_labelIds.png 336 | munster/munster_000156_000019_gtFine_labelIds.png 337 | munster/munster_000129_000019_gtFine_labelIds.png 338 | munster/munster_000169_000019_gtFine_labelIds.png 339 | munster/munster_000150_000019_gtFine_labelIds.png 340 | munster/munster_000165_000019_gtFine_labelIds.png 341 | munster/munster_000050_000019_gtFine_labelIds.png 342 | munster/munster_000025_000019_gtFine_labelIds.png 343 | munster/munster_000116_000019_gtFine_labelIds.png 344 | munster/munster_000132_000019_gtFine_labelIds.png 345 | munster/munster_000066_000019_gtFine_labelIds.png 346 | munster/munster_000096_000019_gtFine_labelIds.png 347 | munster/munster_000030_000019_gtFine_labelIds.png 348 | munster/munster_000146_000019_gtFine_labelIds.png 349 | munster/munster_000098_000019_gtFine_labelIds.png 350 | munster/munster_000059_000019_gtFine_labelIds.png 351 | munster/munster_000093_000019_gtFine_labelIds.png 352 | munster/munster_000122_000019_gtFine_labelIds.png 353 | munster/munster_000024_000019_gtFine_labelIds.png 354 | munster/munster_000036_000019_gtFine_labelIds.png 355 | munster/munster_000086_000019_gtFine_labelIds.png 356 | munster/munster_000163_000019_gtFine_labelIds.png 357 | munster/munster_000001_000019_gtFine_labelIds.png 358 | munster/munster_000053_000019_gtFine_labelIds.png 359 | munster/munster_000071_000019_gtFine_labelIds.png 360 | munster/munster_000079_000019_gtFine_labelIds.png 361 | munster/munster_000159_000019_gtFine_labelIds.png 362 | munster/munster_000038_000019_gtFine_labelIds.png 363 | munster/munster_000138_000019_gtFine_labelIds.png 364 | munster/munster_000135_000019_gtFine_labelIds.png 365 | 
munster/munster_000065_000019_gtFine_labelIds.png 366 | munster/munster_000139_000019_gtFine_labelIds.png 367 | munster/munster_000108_000019_gtFine_labelIds.png 368 | munster/munster_000020_000019_gtFine_labelIds.png 369 | munster/munster_000074_000019_gtFine_labelIds.png 370 | munster/munster_000035_000019_gtFine_labelIds.png 371 | munster/munster_000067_000019_gtFine_labelIds.png 372 | munster/munster_000151_000019_gtFine_labelIds.png 373 | munster/munster_000083_000019_gtFine_labelIds.png 374 | munster/munster_000118_000019_gtFine_labelIds.png 375 | munster/munster_000046_000019_gtFine_labelIds.png 376 | munster/munster_000147_000019_gtFine_labelIds.png 377 | munster/munster_000047_000019_gtFine_labelIds.png 378 | munster/munster_000043_000019_gtFine_labelIds.png 379 | munster/munster_000168_000019_gtFine_labelIds.png 380 | munster/munster_000167_000019_gtFine_labelIds.png 381 | munster/munster_000021_000019_gtFine_labelIds.png 382 | munster/munster_000073_000019_gtFine_labelIds.png 383 | munster/munster_000089_000019_gtFine_labelIds.png 384 | munster/munster_000060_000019_gtFine_labelIds.png 385 | munster/munster_000155_000019_gtFine_labelIds.png 386 | munster/munster_000140_000019_gtFine_labelIds.png 387 | munster/munster_000145_000019_gtFine_labelIds.png 388 | munster/munster_000077_000019_gtFine_labelIds.png 389 | munster/munster_000018_000019_gtFine_labelIds.png 390 | munster/munster_000045_000019_gtFine_labelIds.png 391 | munster/munster_000166_000019_gtFine_labelIds.png 392 | munster/munster_000037_000019_gtFine_labelIds.png 393 | munster/munster_000112_000019_gtFine_labelIds.png 394 | munster/munster_000080_000019_gtFine_labelIds.png 395 | munster/munster_000144_000019_gtFine_labelIds.png 396 | munster/munster_000142_000019_gtFine_labelIds.png 397 | munster/munster_000070_000019_gtFine_labelIds.png 398 | munster/munster_000044_000019_gtFine_labelIds.png 399 | munster/munster_000137_000019_gtFine_labelIds.png 400 | munster/munster_000041_000019_gtFine_labelIds.png 401 | munster/munster_000113_000019_gtFine_labelIds.png 402 | munster/munster_000075_000019_gtFine_labelIds.png 403 | munster/munster_000157_000019_gtFine_labelIds.png 404 | munster/munster_000158_000019_gtFine_labelIds.png 405 | munster/munster_000109_000019_gtFine_labelIds.png 406 | munster/munster_000033_000019_gtFine_labelIds.png 407 | munster/munster_000088_000019_gtFine_labelIds.png 408 | munster/munster_000090_000019_gtFine_labelIds.png 409 | munster/munster_000114_000019_gtFine_labelIds.png 410 | munster/munster_000171_000019_gtFine_labelIds.png 411 | munster/munster_000013_000019_gtFine_labelIds.png 412 | munster/munster_000130_000019_gtFine_labelIds.png 413 | munster/munster_000016_000019_gtFine_labelIds.png 414 | munster/munster_000136_000019_gtFine_labelIds.png 415 | munster/munster_000007_000019_gtFine_labelIds.png 416 | munster/munster_000014_000019_gtFine_labelIds.png 417 | munster/munster_000052_000019_gtFine_labelIds.png 418 | munster/munster_000104_000019_gtFine_labelIds.png 419 | munster/munster_000173_000019_gtFine_labelIds.png 420 | munster/munster_000057_000019_gtFine_labelIds.png 421 | munster/munster_000072_000019_gtFine_labelIds.png 422 | munster/munster_000003_000019_gtFine_labelIds.png 423 | munster/munster_000161_000019_gtFine_labelIds.png 424 | munster/munster_000002_000019_gtFine_labelIds.png 425 | munster/munster_000028_000019_gtFine_labelIds.png 426 | munster/munster_000051_000019_gtFine_labelIds.png 427 | munster/munster_000105_000019_gtFine_labelIds.png 428 | 
munster/munster_000061_000019_gtFine_labelIds.png 429 | munster/munster_000058_000019_gtFine_labelIds.png 430 | munster/munster_000094_000019_gtFine_labelIds.png 431 | munster/munster_000027_000019_gtFine_labelIds.png 432 | munster/munster_000062_000019_gtFine_labelIds.png 433 | munster/munster_000127_000019_gtFine_labelIds.png 434 | munster/munster_000110_000019_gtFine_labelIds.png 435 | munster/munster_000170_000019_gtFine_labelIds.png 436 | munster/munster_000023_000019_gtFine_labelIds.png 437 | munster/munster_000084_000019_gtFine_labelIds.png 438 | munster/munster_000121_000019_gtFine_labelIds.png 439 | munster/munster_000087_000019_gtFine_labelIds.png 440 | munster/munster_000097_000019_gtFine_labelIds.png 441 | munster/munster_000119_000019_gtFine_labelIds.png 442 | munster/munster_000128_000019_gtFine_labelIds.png 443 | munster/munster_000078_000019_gtFine_labelIds.png 444 | munster/munster_000010_000019_gtFine_labelIds.png 445 | munster/munster_000015_000019_gtFine_labelIds.png 446 | munster/munster_000048_000019_gtFine_labelIds.png 447 | munster/munster_000085_000019_gtFine_labelIds.png 448 | munster/munster_000164_000019_gtFine_labelIds.png 449 | munster/munster_000111_000019_gtFine_labelIds.png 450 | munster/munster_000099_000019_gtFine_labelIds.png 451 | munster/munster_000117_000019_gtFine_labelIds.png 452 | munster/munster_000009_000019_gtFine_labelIds.png 453 | munster/munster_000049_000019_gtFine_labelIds.png 454 | munster/munster_000148_000019_gtFine_labelIds.png 455 | munster/munster_000022_000019_gtFine_labelIds.png 456 | munster/munster_000131_000019_gtFine_labelIds.png 457 | munster/munster_000006_000019_gtFine_labelIds.png 458 | munster/munster_000005_000019_gtFine_labelIds.png 459 | munster/munster_000102_000019_gtFine_labelIds.png 460 | munster/munster_000160_000019_gtFine_labelIds.png 461 | munster/munster_000107_000019_gtFine_labelIds.png 462 | munster/munster_000095_000019_gtFine_labelIds.png 463 | munster/munster_000106_000019_gtFine_labelIds.png 464 | munster/munster_000034_000019_gtFine_labelIds.png 465 | munster/munster_000143_000019_gtFine_labelIds.png 466 | munster/munster_000017_000019_gtFine_labelIds.png 467 | munster/munster_000040_000019_gtFine_labelIds.png 468 | munster/munster_000152_000019_gtFine_labelIds.png 469 | munster/munster_000154_000019_gtFine_labelIds.png 470 | munster/munster_000100_000019_gtFine_labelIds.png 471 | munster/munster_000004_000019_gtFine_labelIds.png 472 | munster/munster_000141_000019_gtFine_labelIds.png 473 | munster/munster_000011_000019_gtFine_labelIds.png 474 | munster/munster_000055_000019_gtFine_labelIds.png 475 | munster/munster_000134_000019_gtFine_labelIds.png 476 | munster/munster_000054_000019_gtFine_labelIds.png 477 | munster/munster_000064_000019_gtFine_labelIds.png 478 | munster/munster_000039_000019_gtFine_labelIds.png 479 | munster/munster_000103_000019_gtFine_labelIds.png 480 | munster/munster_000092_000019_gtFine_labelIds.png 481 | munster/munster_000172_000019_gtFine_labelIds.png 482 | munster/munster_000042_000019_gtFine_labelIds.png 483 | munster/munster_000124_000019_gtFine_labelIds.png 484 | munster/munster_000069_000019_gtFine_labelIds.png 485 | munster/munster_000026_000019_gtFine_labelIds.png 486 | munster/munster_000120_000019_gtFine_labelIds.png 487 | munster/munster_000031_000019_gtFine_labelIds.png 488 | munster/munster_000162_000019_gtFine_labelIds.png 489 | munster/munster_000056_000019_gtFine_labelIds.png 490 | munster/munster_000081_000019_gtFine_labelIds.png 491 | 
munster/munster_000123_000019_gtFine_labelIds.png 492 | munster/munster_000125_000019_gtFine_labelIds.png 493 | munster/munster_000082_000019_gtFine_labelIds.png 494 | munster/munster_000133_000019_gtFine_labelIds.png 495 | munster/munster_000126_000019_gtFine_labelIds.png 496 | munster/munster_000063_000019_gtFine_labelIds.png 497 | munster/munster_000008_000019_gtFine_labelIds.png 498 | munster/munster_000149_000019_gtFine_labelIds.png 499 | munster/munster_000076_000019_gtFine_labelIds.png 500 | munster/munster_000091_000019_gtFine_labelIds.png 501 | -------------------------------------------------------------------------------- /dataset/cityscapes_list/val.txt: -------------------------------------------------------------------------------- 1 | frankfurt/frankfurt_000001_007973_leftImg8bit.png 2 | frankfurt/frankfurt_000001_025921_leftImg8bit.png 3 | frankfurt/frankfurt_000001_062016_leftImg8bit.png 4 | frankfurt/frankfurt_000001_049078_leftImg8bit.png 5 | frankfurt/frankfurt_000000_009561_leftImg8bit.png 6 | frankfurt/frankfurt_000001_013710_leftImg8bit.png 7 | frankfurt/frankfurt_000001_041664_leftImg8bit.png 8 | frankfurt/frankfurt_000000_013240_leftImg8bit.png 9 | frankfurt/frankfurt_000001_044787_leftImg8bit.png 10 | frankfurt/frankfurt_000001_015328_leftImg8bit.png 11 | frankfurt/frankfurt_000001_073243_leftImg8bit.png 12 | frankfurt/frankfurt_000001_034816_leftImg8bit.png 13 | frankfurt/frankfurt_000001_041074_leftImg8bit.png 14 | frankfurt/frankfurt_000001_005898_leftImg8bit.png 15 | frankfurt/frankfurt_000000_022254_leftImg8bit.png 16 | frankfurt/frankfurt_000001_044658_leftImg8bit.png 17 | frankfurt/frankfurt_000001_009504_leftImg8bit.png 18 | frankfurt/frankfurt_000001_024927_leftImg8bit.png 19 | frankfurt/frankfurt_000001_017842_leftImg8bit.png 20 | frankfurt/frankfurt_000001_068208_leftImg8bit.png 21 | frankfurt/frankfurt_000001_013016_leftImg8bit.png 22 | frankfurt/frankfurt_000001_010156_leftImg8bit.png 23 | frankfurt/frankfurt_000000_002963_leftImg8bit.png 24 | frankfurt/frankfurt_000001_020693_leftImg8bit.png 25 | frankfurt/frankfurt_000001_078803_leftImg8bit.png 26 | frankfurt/frankfurt_000001_025713_leftImg8bit.png 27 | frankfurt/frankfurt_000001_007285_leftImg8bit.png 28 | frankfurt/frankfurt_000001_070099_leftImg8bit.png 29 | frankfurt/frankfurt_000000_009291_leftImg8bit.png 30 | frankfurt/frankfurt_000000_019607_leftImg8bit.png 31 | frankfurt/frankfurt_000001_068063_leftImg8bit.png 32 | frankfurt/frankfurt_000000_003920_leftImg8bit.png 33 | frankfurt/frankfurt_000001_077233_leftImg8bit.png 34 | frankfurt/frankfurt_000001_029086_leftImg8bit.png 35 | frankfurt/frankfurt_000001_060545_leftImg8bit.png 36 | frankfurt/frankfurt_000001_001464_leftImg8bit.png 37 | frankfurt/frankfurt_000001_028590_leftImg8bit.png 38 | frankfurt/frankfurt_000001_016462_leftImg8bit.png 39 | frankfurt/frankfurt_000001_060422_leftImg8bit.png 40 | frankfurt/frankfurt_000001_009058_leftImg8bit.png 41 | frankfurt/frankfurt_000001_080830_leftImg8bit.png 42 | frankfurt/frankfurt_000001_012870_leftImg8bit.png 43 | frankfurt/frankfurt_000001_077434_leftImg8bit.png 44 | frankfurt/frankfurt_000001_033655_leftImg8bit.png 45 | frankfurt/frankfurt_000001_051516_leftImg8bit.png 46 | frankfurt/frankfurt_000001_044413_leftImg8bit.png 47 | frankfurt/frankfurt_000001_055172_leftImg8bit.png 48 | frankfurt/frankfurt_000001_040575_leftImg8bit.png 49 | frankfurt/frankfurt_000000_020215_leftImg8bit.png 50 | frankfurt/frankfurt_000000_017228_leftImg8bit.png 51 | frankfurt/frankfurt_000001_041354_leftImg8bit.png 52 
| frankfurt/frankfurt_000000_008206_leftImg8bit.png 53 | frankfurt/frankfurt_000001_043564_leftImg8bit.png 54 | frankfurt/frankfurt_000001_032711_leftImg8bit.png 55 | frankfurt/frankfurt_000001_064130_leftImg8bit.png 56 | frankfurt/frankfurt_000001_053102_leftImg8bit.png 57 | frankfurt/frankfurt_000001_082087_leftImg8bit.png 58 | frankfurt/frankfurt_000001_057478_leftImg8bit.png 59 | frankfurt/frankfurt_000001_007407_leftImg8bit.png 60 | frankfurt/frankfurt_000001_008200_leftImg8bit.png 61 | frankfurt/frankfurt_000001_038844_leftImg8bit.png 62 | frankfurt/frankfurt_000001_016029_leftImg8bit.png 63 | frankfurt/frankfurt_000001_058176_leftImg8bit.png 64 | frankfurt/frankfurt_000001_057181_leftImg8bit.png 65 | frankfurt/frankfurt_000001_039895_leftImg8bit.png 66 | frankfurt/frankfurt_000000_000294_leftImg8bit.png 67 | frankfurt/frankfurt_000001_055062_leftImg8bit.png 68 | frankfurt/frankfurt_000001_083029_leftImg8bit.png 69 | frankfurt/frankfurt_000001_010444_leftImg8bit.png 70 | frankfurt/frankfurt_000001_041517_leftImg8bit.png 71 | frankfurt/frankfurt_000001_069633_leftImg8bit.png 72 | frankfurt/frankfurt_000001_020287_leftImg8bit.png 73 | frankfurt/frankfurt_000001_012038_leftImg8bit.png 74 | frankfurt/frankfurt_000001_046504_leftImg8bit.png 75 | frankfurt/frankfurt_000001_032556_leftImg8bit.png 76 | frankfurt/frankfurt_000000_001751_leftImg8bit.png 77 | frankfurt/frankfurt_000001_000538_leftImg8bit.png 78 | frankfurt/frankfurt_000001_083852_leftImg8bit.png 79 | frankfurt/frankfurt_000001_077092_leftImg8bit.png 80 | frankfurt/frankfurt_000001_017101_leftImg8bit.png 81 | frankfurt/frankfurt_000001_044525_leftImg8bit.png 82 | frankfurt/frankfurt_000001_005703_leftImg8bit.png 83 | frankfurt/frankfurt_000001_080391_leftImg8bit.png 84 | frankfurt/frankfurt_000001_038418_leftImg8bit.png 85 | frankfurt/frankfurt_000001_066832_leftImg8bit.png 86 | frankfurt/frankfurt_000000_003357_leftImg8bit.png 87 | frankfurt/frankfurt_000000_020880_leftImg8bit.png 88 | frankfurt/frankfurt_000001_062396_leftImg8bit.png 89 | frankfurt/frankfurt_000001_046272_leftImg8bit.png 90 | frankfurt/frankfurt_000001_062509_leftImg8bit.png 91 | frankfurt/frankfurt_000001_054415_leftImg8bit.png 92 | frankfurt/frankfurt_000001_021406_leftImg8bit.png 93 | frankfurt/frankfurt_000001_030310_leftImg8bit.png 94 | frankfurt/frankfurt_000000_014480_leftImg8bit.png 95 | frankfurt/frankfurt_000001_005410_leftImg8bit.png 96 | frankfurt/frankfurt_000000_022797_leftImg8bit.png 97 | frankfurt/frankfurt_000001_035144_leftImg8bit.png 98 | frankfurt/frankfurt_000001_014565_leftImg8bit.png 99 | frankfurt/frankfurt_000001_065850_leftImg8bit.png 100 | frankfurt/frankfurt_000000_000576_leftImg8bit.png 101 | frankfurt/frankfurt_000001_065617_leftImg8bit.png 102 | frankfurt/frankfurt_000000_005543_leftImg8bit.png 103 | frankfurt/frankfurt_000001_055709_leftImg8bit.png 104 | frankfurt/frankfurt_000001_027325_leftImg8bit.png 105 | frankfurt/frankfurt_000001_011835_leftImg8bit.png 106 | frankfurt/frankfurt_000001_046779_leftImg8bit.png 107 | frankfurt/frankfurt_000001_064305_leftImg8bit.png 108 | frankfurt/frankfurt_000001_012738_leftImg8bit.png 109 | frankfurt/frankfurt_000001_048355_leftImg8bit.png 110 | frankfurt/frankfurt_000001_019969_leftImg8bit.png 111 | frankfurt/frankfurt_000001_080091_leftImg8bit.png 112 | frankfurt/frankfurt_000000_011007_leftImg8bit.png 113 | frankfurt/frankfurt_000000_015676_leftImg8bit.png 114 | frankfurt/frankfurt_000001_044227_leftImg8bit.png 115 | frankfurt/frankfurt_000001_055387_leftImg8bit.png 116 | 
frankfurt/frankfurt_000001_038245_leftImg8bit.png 117 | frankfurt/frankfurt_000001_059642_leftImg8bit.png 118 | frankfurt/frankfurt_000001_030669_leftImg8bit.png 119 | frankfurt/frankfurt_000001_068772_leftImg8bit.png 120 | frankfurt/frankfurt_000001_079206_leftImg8bit.png 121 | frankfurt/frankfurt_000001_055306_leftImg8bit.png 122 | frankfurt/frankfurt_000001_012699_leftImg8bit.png 123 | frankfurt/frankfurt_000001_042384_leftImg8bit.png 124 | frankfurt/frankfurt_000001_054077_leftImg8bit.png 125 | frankfurt/frankfurt_000001_010830_leftImg8bit.png 126 | frankfurt/frankfurt_000001_052120_leftImg8bit.png 127 | frankfurt/frankfurt_000001_032018_leftImg8bit.png 128 | frankfurt/frankfurt_000001_051737_leftImg8bit.png 129 | frankfurt/frankfurt_000001_028335_leftImg8bit.png 130 | frankfurt/frankfurt_000001_049770_leftImg8bit.png 131 | frankfurt/frankfurt_000001_054884_leftImg8bit.png 132 | frankfurt/frankfurt_000001_019698_leftImg8bit.png 133 | frankfurt/frankfurt_000000_011461_leftImg8bit.png 134 | frankfurt/frankfurt_000000_001016_leftImg8bit.png 135 | frankfurt/frankfurt_000001_062250_leftImg8bit.png 136 | frankfurt/frankfurt_000001_004736_leftImg8bit.png 137 | frankfurt/frankfurt_000001_068682_leftImg8bit.png 138 | frankfurt/frankfurt_000000_006589_leftImg8bit.png 139 | frankfurt/frankfurt_000000_011810_leftImg8bit.png 140 | frankfurt/frankfurt_000001_066574_leftImg8bit.png 141 | frankfurt/frankfurt_000001_048654_leftImg8bit.png 142 | frankfurt/frankfurt_000001_049209_leftImg8bit.png 143 | frankfurt/frankfurt_000001_042098_leftImg8bit.png 144 | frankfurt/frankfurt_000001_031416_leftImg8bit.png 145 | frankfurt/frankfurt_000000_009969_leftImg8bit.png 146 | frankfurt/frankfurt_000001_038645_leftImg8bit.png 147 | frankfurt/frankfurt_000001_020046_leftImg8bit.png 148 | frankfurt/frankfurt_000001_054219_leftImg8bit.png 149 | frankfurt/frankfurt_000001_002759_leftImg8bit.png 150 | frankfurt/frankfurt_000001_066438_leftImg8bit.png 151 | frankfurt/frankfurt_000000_020321_leftImg8bit.png 152 | frankfurt/frankfurt_000001_002646_leftImg8bit.png 153 | frankfurt/frankfurt_000001_046126_leftImg8bit.png 154 | frankfurt/frankfurt_000000_002196_leftImg8bit.png 155 | frankfurt/frankfurt_000001_057954_leftImg8bit.png 156 | frankfurt/frankfurt_000001_011715_leftImg8bit.png 157 | frankfurt/frankfurt_000000_021879_leftImg8bit.png 158 | frankfurt/frankfurt_000001_082466_leftImg8bit.png 159 | frankfurt/frankfurt_000000_003025_leftImg8bit.png 160 | frankfurt/frankfurt_000001_023369_leftImg8bit.png 161 | frankfurt/frankfurt_000001_061682_leftImg8bit.png 162 | frankfurt/frankfurt_000001_017459_leftImg8bit.png 163 | frankfurt/frankfurt_000001_059789_leftImg8bit.png 164 | frankfurt/frankfurt_000001_073464_leftImg8bit.png 165 | frankfurt/frankfurt_000001_063045_leftImg8bit.png 166 | frankfurt/frankfurt_000001_064651_leftImg8bit.png 167 | frankfurt/frankfurt_000000_013382_leftImg8bit.png 168 | frankfurt/frankfurt_000001_002512_leftImg8bit.png 169 | frankfurt/frankfurt_000001_032942_leftImg8bit.png 170 | frankfurt/frankfurt_000001_010600_leftImg8bit.png 171 | frankfurt/frankfurt_000001_030067_leftImg8bit.png 172 | frankfurt/frankfurt_000001_014741_leftImg8bit.png 173 | frankfurt/frankfurt_000000_021667_leftImg8bit.png 174 | frankfurt/frankfurt_000001_051807_leftImg8bit.png 175 | frankfurt/frankfurt_000001_019854_leftImg8bit.png 176 | frankfurt/frankfurt_000001_015768_leftImg8bit.png 177 | frankfurt/frankfurt_000001_007857_leftImg8bit.png 178 | frankfurt/frankfurt_000001_058914_leftImg8bit.png 179 | 
frankfurt/frankfurt_000000_012868_leftImg8bit.png 180 | frankfurt/frankfurt_000000_013942_leftImg8bit.png 181 | frankfurt/frankfurt_000001_014406_leftImg8bit.png 182 | frankfurt/frankfurt_000001_049298_leftImg8bit.png 183 | frankfurt/frankfurt_000001_023769_leftImg8bit.png 184 | frankfurt/frankfurt_000001_012519_leftImg8bit.png 185 | frankfurt/frankfurt_000001_064925_leftImg8bit.png 186 | frankfurt/frankfurt_000001_072295_leftImg8bit.png 187 | frankfurt/frankfurt_000001_058504_leftImg8bit.png 188 | frankfurt/frankfurt_000001_059119_leftImg8bit.png 189 | frankfurt/frankfurt_000001_015091_leftImg8bit.png 190 | frankfurt/frankfurt_000001_058057_leftImg8bit.png 191 | frankfurt/frankfurt_000001_003056_leftImg8bit.png 192 | frankfurt/frankfurt_000001_007622_leftImg8bit.png 193 | frankfurt/frankfurt_000001_016273_leftImg8bit.png 194 | frankfurt/frankfurt_000001_035864_leftImg8bit.png 195 | frankfurt/frankfurt_000001_067092_leftImg8bit.png 196 | frankfurt/frankfurt_000000_013067_leftImg8bit.png 197 | frankfurt/frankfurt_000001_067474_leftImg8bit.png 198 | frankfurt/frankfurt_000001_060135_leftImg8bit.png 199 | frankfurt/frankfurt_000000_018797_leftImg8bit.png 200 | frankfurt/frankfurt_000000_005898_leftImg8bit.png 201 | frankfurt/frankfurt_000001_055603_leftImg8bit.png 202 | frankfurt/frankfurt_000001_060906_leftImg8bit.png 203 | frankfurt/frankfurt_000001_062653_leftImg8bit.png 204 | frankfurt/frankfurt_000000_004617_leftImg8bit.png 205 | frankfurt/frankfurt_000001_055538_leftImg8bit.png 206 | frankfurt/frankfurt_000000_008451_leftImg8bit.png 207 | frankfurt/frankfurt_000001_052594_leftImg8bit.png 208 | frankfurt/frankfurt_000001_004327_leftImg8bit.png 209 | frankfurt/frankfurt_000001_075296_leftImg8bit.png 210 | frankfurt/frankfurt_000001_073088_leftImg8bit.png 211 | frankfurt/frankfurt_000001_005184_leftImg8bit.png 212 | frankfurt/frankfurt_000000_016286_leftImg8bit.png 213 | frankfurt/frankfurt_000001_008688_leftImg8bit.png 214 | frankfurt/frankfurt_000000_011074_leftImg8bit.png 215 | frankfurt/frankfurt_000001_056580_leftImg8bit.png 216 | frankfurt/frankfurt_000001_067735_leftImg8bit.png 217 | frankfurt/frankfurt_000001_034047_leftImg8bit.png 218 | frankfurt/frankfurt_000001_076502_leftImg8bit.png 219 | frankfurt/frankfurt_000001_071288_leftImg8bit.png 220 | frankfurt/frankfurt_000001_067295_leftImg8bit.png 221 | frankfurt/frankfurt_000001_071781_leftImg8bit.png 222 | frankfurt/frankfurt_000000_012121_leftImg8bit.png 223 | frankfurt/frankfurt_000001_004859_leftImg8bit.png 224 | frankfurt/frankfurt_000001_073911_leftImg8bit.png 225 | frankfurt/frankfurt_000001_047552_leftImg8bit.png 226 | frankfurt/frankfurt_000001_037705_leftImg8bit.png 227 | frankfurt/frankfurt_000001_025512_leftImg8bit.png 228 | frankfurt/frankfurt_000001_047178_leftImg8bit.png 229 | frankfurt/frankfurt_000001_014221_leftImg8bit.png 230 | frankfurt/frankfurt_000000_007365_leftImg8bit.png 231 | frankfurt/frankfurt_000001_049698_leftImg8bit.png 232 | frankfurt/frankfurt_000001_065160_leftImg8bit.png 233 | frankfurt/frankfurt_000001_061763_leftImg8bit.png 234 | frankfurt/frankfurt_000000_010351_leftImg8bit.png 235 | frankfurt/frankfurt_000001_072155_leftImg8bit.png 236 | frankfurt/frankfurt_000001_023235_leftImg8bit.png 237 | frankfurt/frankfurt_000000_015389_leftImg8bit.png 238 | frankfurt/frankfurt_000000_009688_leftImg8bit.png 239 | frankfurt/frankfurt_000000_016005_leftImg8bit.png 240 | frankfurt/frankfurt_000001_054640_leftImg8bit.png 241 | frankfurt/frankfurt_000001_029600_leftImg8bit.png 242 | 
frankfurt/frankfurt_000001_028232_leftImg8bit.png 243 | frankfurt/frankfurt_000001_050686_leftImg8bit.png 244 | frankfurt/frankfurt_000001_013496_leftImg8bit.png 245 | frankfurt/frankfurt_000001_066092_leftImg8bit.png 246 | frankfurt/frankfurt_000001_009854_leftImg8bit.png 247 | frankfurt/frankfurt_000001_067178_leftImg8bit.png 248 | frankfurt/frankfurt_000001_028854_leftImg8bit.png 249 | frankfurt/frankfurt_000001_083199_leftImg8bit.png 250 | frankfurt/frankfurt_000001_064798_leftImg8bit.png 251 | frankfurt/frankfurt_000001_018113_leftImg8bit.png 252 | frankfurt/frankfurt_000001_050149_leftImg8bit.png 253 | frankfurt/frankfurt_000001_048196_leftImg8bit.png 254 | frankfurt/frankfurt_000000_001236_leftImg8bit.png 255 | frankfurt/frankfurt_000000_017476_leftImg8bit.png 256 | frankfurt/frankfurt_000001_003588_leftImg8bit.png 257 | frankfurt/frankfurt_000001_021825_leftImg8bit.png 258 | frankfurt/frankfurt_000000_010763_leftImg8bit.png 259 | frankfurt/frankfurt_000001_062793_leftImg8bit.png 260 | frankfurt/frankfurt_000001_029236_leftImg8bit.png 261 | frankfurt/frankfurt_000001_075984_leftImg8bit.png 262 | frankfurt/frankfurt_000001_031266_leftImg8bit.png 263 | frankfurt/frankfurt_000001_043395_leftImg8bit.png 264 | frankfurt/frankfurt_000001_040732_leftImg8bit.png 265 | frankfurt/frankfurt_000001_011162_leftImg8bit.png 266 | frankfurt/frankfurt_000000_012009_leftImg8bit.png 267 | frankfurt/frankfurt_000001_042733_leftImg8bit.png 268 | lindau/lindau_000052_000019_leftImg8bit.png 269 | lindau/lindau_000009_000019_leftImg8bit.png 270 | lindau/lindau_000037_000019_leftImg8bit.png 271 | lindau/lindau_000047_000019_leftImg8bit.png 272 | lindau/lindau_000015_000019_leftImg8bit.png 273 | lindau/lindau_000030_000019_leftImg8bit.png 274 | lindau/lindau_000012_000019_leftImg8bit.png 275 | lindau/lindau_000032_000019_leftImg8bit.png 276 | lindau/lindau_000046_000019_leftImg8bit.png 277 | lindau/lindau_000000_000019_leftImg8bit.png 278 | lindau/lindau_000031_000019_leftImg8bit.png 279 | lindau/lindau_000011_000019_leftImg8bit.png 280 | lindau/lindau_000027_000019_leftImg8bit.png 281 | lindau/lindau_000054_000019_leftImg8bit.png 282 | lindau/lindau_000026_000019_leftImg8bit.png 283 | lindau/lindau_000017_000019_leftImg8bit.png 284 | lindau/lindau_000023_000019_leftImg8bit.png 285 | lindau/lindau_000005_000019_leftImg8bit.png 286 | lindau/lindau_000056_000019_leftImg8bit.png 287 | lindau/lindau_000025_000019_leftImg8bit.png 288 | lindau/lindau_000045_000019_leftImg8bit.png 289 | lindau/lindau_000014_000019_leftImg8bit.png 290 | lindau/lindau_000004_000019_leftImg8bit.png 291 | lindau/lindau_000021_000019_leftImg8bit.png 292 | lindau/lindau_000049_000019_leftImg8bit.png 293 | lindau/lindau_000033_000019_leftImg8bit.png 294 | lindau/lindau_000042_000019_leftImg8bit.png 295 | lindau/lindau_000013_000019_leftImg8bit.png 296 | lindau/lindau_000024_000019_leftImg8bit.png 297 | lindau/lindau_000002_000019_leftImg8bit.png 298 | lindau/lindau_000043_000019_leftImg8bit.png 299 | lindau/lindau_000016_000019_leftImg8bit.png 300 | lindau/lindau_000050_000019_leftImg8bit.png 301 | lindau/lindau_000018_000019_leftImg8bit.png 302 | lindau/lindau_000007_000019_leftImg8bit.png 303 | lindau/lindau_000048_000019_leftImg8bit.png 304 | lindau/lindau_000022_000019_leftImg8bit.png 305 | lindau/lindau_000053_000019_leftImg8bit.png 306 | lindau/lindau_000038_000019_leftImg8bit.png 307 | lindau/lindau_000001_000019_leftImg8bit.png 308 | lindau/lindau_000036_000019_leftImg8bit.png 309 | lindau/lindau_000035_000019_leftImg8bit.png 310 
| lindau/lindau_000003_000019_leftImg8bit.png 311 | lindau/lindau_000034_000019_leftImg8bit.png 312 | lindau/lindau_000010_000019_leftImg8bit.png 313 | lindau/lindau_000055_000019_leftImg8bit.png 314 | lindau/lindau_000006_000019_leftImg8bit.png 315 | lindau/lindau_000019_000019_leftImg8bit.png 316 | lindau/lindau_000029_000019_leftImg8bit.png 317 | lindau/lindau_000039_000019_leftImg8bit.png 318 | lindau/lindau_000051_000019_leftImg8bit.png 319 | lindau/lindau_000020_000019_leftImg8bit.png 320 | lindau/lindau_000057_000019_leftImg8bit.png 321 | lindau/lindau_000041_000019_leftImg8bit.png 322 | lindau/lindau_000040_000019_leftImg8bit.png 323 | lindau/lindau_000044_000019_leftImg8bit.png 324 | lindau/lindau_000028_000019_leftImg8bit.png 325 | lindau/lindau_000058_000019_leftImg8bit.png 326 | lindau/lindau_000008_000019_leftImg8bit.png 327 | munster/munster_000000_000019_leftImg8bit.png 328 | munster/munster_000012_000019_leftImg8bit.png 329 | munster/munster_000032_000019_leftImg8bit.png 330 | munster/munster_000068_000019_leftImg8bit.png 331 | munster/munster_000101_000019_leftImg8bit.png 332 | munster/munster_000153_000019_leftImg8bit.png 333 | munster/munster_000115_000019_leftImg8bit.png 334 | munster/munster_000029_000019_leftImg8bit.png 335 | munster/munster_000019_000019_leftImg8bit.png 336 | munster/munster_000156_000019_leftImg8bit.png 337 | munster/munster_000129_000019_leftImg8bit.png 338 | munster/munster_000169_000019_leftImg8bit.png 339 | munster/munster_000150_000019_leftImg8bit.png 340 | munster/munster_000165_000019_leftImg8bit.png 341 | munster/munster_000050_000019_leftImg8bit.png 342 | munster/munster_000025_000019_leftImg8bit.png 343 | munster/munster_000116_000019_leftImg8bit.png 344 | munster/munster_000132_000019_leftImg8bit.png 345 | munster/munster_000066_000019_leftImg8bit.png 346 | munster/munster_000096_000019_leftImg8bit.png 347 | munster/munster_000030_000019_leftImg8bit.png 348 | munster/munster_000146_000019_leftImg8bit.png 349 | munster/munster_000098_000019_leftImg8bit.png 350 | munster/munster_000059_000019_leftImg8bit.png 351 | munster/munster_000093_000019_leftImg8bit.png 352 | munster/munster_000122_000019_leftImg8bit.png 353 | munster/munster_000024_000019_leftImg8bit.png 354 | munster/munster_000036_000019_leftImg8bit.png 355 | munster/munster_000086_000019_leftImg8bit.png 356 | munster/munster_000163_000019_leftImg8bit.png 357 | munster/munster_000001_000019_leftImg8bit.png 358 | munster/munster_000053_000019_leftImg8bit.png 359 | munster/munster_000071_000019_leftImg8bit.png 360 | munster/munster_000079_000019_leftImg8bit.png 361 | munster/munster_000159_000019_leftImg8bit.png 362 | munster/munster_000038_000019_leftImg8bit.png 363 | munster/munster_000138_000019_leftImg8bit.png 364 | munster/munster_000135_000019_leftImg8bit.png 365 | munster/munster_000065_000019_leftImg8bit.png 366 | munster/munster_000139_000019_leftImg8bit.png 367 | munster/munster_000108_000019_leftImg8bit.png 368 | munster/munster_000020_000019_leftImg8bit.png 369 | munster/munster_000074_000019_leftImg8bit.png 370 | munster/munster_000035_000019_leftImg8bit.png 371 | munster/munster_000067_000019_leftImg8bit.png 372 | munster/munster_000151_000019_leftImg8bit.png 373 | munster/munster_000083_000019_leftImg8bit.png 374 | munster/munster_000118_000019_leftImg8bit.png 375 | munster/munster_000046_000019_leftImg8bit.png 376 | munster/munster_000147_000019_leftImg8bit.png 377 | munster/munster_000047_000019_leftImg8bit.png 378 | munster/munster_000043_000019_leftImg8bit.png 379 
| munster/munster_000168_000019_leftImg8bit.png 380 | munster/munster_000167_000019_leftImg8bit.png 381 | munster/munster_000021_000019_leftImg8bit.png 382 | munster/munster_000073_000019_leftImg8bit.png 383 | munster/munster_000089_000019_leftImg8bit.png 384 | munster/munster_000060_000019_leftImg8bit.png 385 | munster/munster_000155_000019_leftImg8bit.png 386 | munster/munster_000140_000019_leftImg8bit.png 387 | munster/munster_000145_000019_leftImg8bit.png 388 | munster/munster_000077_000019_leftImg8bit.png 389 | munster/munster_000018_000019_leftImg8bit.png 390 | munster/munster_000045_000019_leftImg8bit.png 391 | munster/munster_000166_000019_leftImg8bit.png 392 | munster/munster_000037_000019_leftImg8bit.png 393 | munster/munster_000112_000019_leftImg8bit.png 394 | munster/munster_000080_000019_leftImg8bit.png 395 | munster/munster_000144_000019_leftImg8bit.png 396 | munster/munster_000142_000019_leftImg8bit.png 397 | munster/munster_000070_000019_leftImg8bit.png 398 | munster/munster_000044_000019_leftImg8bit.png 399 | munster/munster_000137_000019_leftImg8bit.png 400 | munster/munster_000041_000019_leftImg8bit.png 401 | munster/munster_000113_000019_leftImg8bit.png 402 | munster/munster_000075_000019_leftImg8bit.png 403 | munster/munster_000157_000019_leftImg8bit.png 404 | munster/munster_000158_000019_leftImg8bit.png 405 | munster/munster_000109_000019_leftImg8bit.png 406 | munster/munster_000033_000019_leftImg8bit.png 407 | munster/munster_000088_000019_leftImg8bit.png 408 | munster/munster_000090_000019_leftImg8bit.png 409 | munster/munster_000114_000019_leftImg8bit.png 410 | munster/munster_000171_000019_leftImg8bit.png 411 | munster/munster_000013_000019_leftImg8bit.png 412 | munster/munster_000130_000019_leftImg8bit.png 413 | munster/munster_000016_000019_leftImg8bit.png 414 | munster/munster_000136_000019_leftImg8bit.png 415 | munster/munster_000007_000019_leftImg8bit.png 416 | munster/munster_000014_000019_leftImg8bit.png 417 | munster/munster_000052_000019_leftImg8bit.png 418 | munster/munster_000104_000019_leftImg8bit.png 419 | munster/munster_000173_000019_leftImg8bit.png 420 | munster/munster_000057_000019_leftImg8bit.png 421 | munster/munster_000072_000019_leftImg8bit.png 422 | munster/munster_000003_000019_leftImg8bit.png 423 | munster/munster_000161_000019_leftImg8bit.png 424 | munster/munster_000002_000019_leftImg8bit.png 425 | munster/munster_000028_000019_leftImg8bit.png 426 | munster/munster_000051_000019_leftImg8bit.png 427 | munster/munster_000105_000019_leftImg8bit.png 428 | munster/munster_000061_000019_leftImg8bit.png 429 | munster/munster_000058_000019_leftImg8bit.png 430 | munster/munster_000094_000019_leftImg8bit.png 431 | munster/munster_000027_000019_leftImg8bit.png 432 | munster/munster_000062_000019_leftImg8bit.png 433 | munster/munster_000127_000019_leftImg8bit.png 434 | munster/munster_000110_000019_leftImg8bit.png 435 | munster/munster_000170_000019_leftImg8bit.png 436 | munster/munster_000023_000019_leftImg8bit.png 437 | munster/munster_000084_000019_leftImg8bit.png 438 | munster/munster_000121_000019_leftImg8bit.png 439 | munster/munster_000087_000019_leftImg8bit.png 440 | munster/munster_000097_000019_leftImg8bit.png 441 | munster/munster_000119_000019_leftImg8bit.png 442 | munster/munster_000128_000019_leftImg8bit.png 443 | munster/munster_000078_000019_leftImg8bit.png 444 | munster/munster_000010_000019_leftImg8bit.png 445 | munster/munster_000015_000019_leftImg8bit.png 446 | munster/munster_000048_000019_leftImg8bit.png 447 | 
munster/munster_000085_000019_leftImg8bit.png 448 | munster/munster_000164_000019_leftImg8bit.png 449 | munster/munster_000111_000019_leftImg8bit.png 450 | munster/munster_000099_000019_leftImg8bit.png 451 | munster/munster_000117_000019_leftImg8bit.png 452 | munster/munster_000009_000019_leftImg8bit.png 453 | munster/munster_000049_000019_leftImg8bit.png 454 | munster/munster_000148_000019_leftImg8bit.png 455 | munster/munster_000022_000019_leftImg8bit.png 456 | munster/munster_000131_000019_leftImg8bit.png 457 | munster/munster_000006_000019_leftImg8bit.png 458 | munster/munster_000005_000019_leftImg8bit.png 459 | munster/munster_000102_000019_leftImg8bit.png 460 | munster/munster_000160_000019_leftImg8bit.png 461 | munster/munster_000107_000019_leftImg8bit.png 462 | munster/munster_000095_000019_leftImg8bit.png 463 | munster/munster_000106_000019_leftImg8bit.png 464 | munster/munster_000034_000019_leftImg8bit.png 465 | munster/munster_000143_000019_leftImg8bit.png 466 | munster/munster_000017_000019_leftImg8bit.png 467 | munster/munster_000040_000019_leftImg8bit.png 468 | munster/munster_000152_000019_leftImg8bit.png 469 | munster/munster_000154_000019_leftImg8bit.png 470 | munster/munster_000100_000019_leftImg8bit.png 471 | munster/munster_000004_000019_leftImg8bit.png 472 | munster/munster_000141_000019_leftImg8bit.png 473 | munster/munster_000011_000019_leftImg8bit.png 474 | munster/munster_000055_000019_leftImg8bit.png 475 | munster/munster_000134_000019_leftImg8bit.png 476 | munster/munster_000054_000019_leftImg8bit.png 477 | munster/munster_000064_000019_leftImg8bit.png 478 | munster/munster_000039_000019_leftImg8bit.png 479 | munster/munster_000103_000019_leftImg8bit.png 480 | munster/munster_000092_000019_leftImg8bit.png 481 | munster/munster_000172_000019_leftImg8bit.png 482 | munster/munster_000042_000019_leftImg8bit.png 483 | munster/munster_000124_000019_leftImg8bit.png 484 | munster/munster_000069_000019_leftImg8bit.png 485 | munster/munster_000026_000019_leftImg8bit.png 486 | munster/munster_000120_000019_leftImg8bit.png 487 | munster/munster_000031_000019_leftImg8bit.png 488 | munster/munster_000162_000019_leftImg8bit.png 489 | munster/munster_000056_000019_leftImg8bit.png 490 | munster/munster_000081_000019_leftImg8bit.png 491 | munster/munster_000123_000019_leftImg8bit.png 492 | munster/munster_000125_000019_leftImg8bit.png 493 | munster/munster_000082_000019_leftImg8bit.png 494 | munster/munster_000133_000019_leftImg8bit.png 495 | munster/munster_000126_000019_leftImg8bit.png 496 | munster/munster_000063_000019_leftImg8bit.png 497 | munster/munster_000008_000019_leftImg8bit.png 498 | munster/munster_000149_000019_leftImg8bit.png 499 | munster/munster_000076_000019_leftImg8bit.png 500 | munster/munster_000091_000019_leftImg8bit.png 501 | -------------------------------------------------------------------------------- /dataset/gta5_dataset.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import numpy as np 4 | import random 5 | import matplotlib.pyplot as plt 6 | import collections 7 | import torch 8 | import torchvision 9 | from torch.utils import data 10 | from PIL import Image 11 | 12 | 13 | class GTA5DataSet(data.Dataset): 14 | def __init__(self, root, list_path, max_iters=None, crop_size=(321, 321), mean=(128, 128, 128), scale=True, mirror=True, ignore_label=255): 15 | self.root = root 16 | self.list_path = list_path 17 | self.crop_size = crop_size 18 | self.scale = scale 19 | 
self.ignore_label = ignore_label 20 | self.mean = mean 21 | self.is_mirror = mirror 22 | # self.mean_bgr = np.array([104.00698793, 116.66876762, 122.67891434]) 23 | self.img_ids = [i_id.strip() for i_id in open(list_path)] 24 | if max_iters is not None: 25 | self.img_ids = self.img_ids * int(np.ceil(float(max_iters) / len(self.img_ids))) 26 | self.files = [] 27 | 28 | self.id_to_trainid = {7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 29 | 19: 6, 20: 7, 21: 8, 22: 9, 23: 10, 24: 11, 25: 12, 30 | 26: 13, 27: 14, 28: 15, 31: 16, 32: 17, 33: 18} 31 | 32 | # for split in ["train", "trainval", "val"]: 33 | for name in self.img_ids: 34 | img_file = osp.join(self.root, "images/%s" % name) 35 | label_file = osp.join(self.root, "labels/%s" % name) 36 | self.files.append({ 37 | "img": img_file, 38 | "label": label_file, 39 | "name": name 40 | }) 41 | 42 | def __len__(self): 43 | return len(self.files) 44 | 45 | 46 | def __getitem__(self, index): 47 | datafiles = self.files[index] 48 | 49 | image = Image.open(datafiles["img"]).convert('RGB') 50 | label = Image.open(datafiles["label"]) 51 | name = datafiles["name"] 52 | 53 | # resize 54 | image = image.resize(self.crop_size, Image.BICUBIC) 55 | label = label.resize(self.crop_size, Image.NEAREST) 56 | 57 | image = np.asarray(image, np.float32) 58 | label = np.asarray(label, np.float32) 59 | 60 | # re-assign labels to match the format of Cityscapes 61 | label_copy = 255 * np.ones(label.shape, dtype=np.float32) 62 | for k, v in self.id_to_trainid.items(): 63 | label_copy[label == k] = v 64 | 65 | size = image.shape 66 | image = image[:, :, ::-1] # change to BGR 67 | image -= self.mean 68 | image = image.transpose((2, 0, 1)) 69 | 70 | return image.copy(), label_copy.copy(), np.array(size), name 71 | 72 | 73 | if __name__ == '__main__': 74 | dst = GTA5DataSet("./data", "./dataset/gta5_list/train.txt") # list file path assumed from the repo layout 75 | trainloader = data.DataLoader(dst, batch_size=4) 76 | for i, data in enumerate(trainloader): 77 | imgs, labels, _, _ = data # __getitem__ also returns image size and name 78 | if i == 0: 79 | img = torchvision.utils.make_grid(imgs).numpy() 80 | img = np.transpose(img, (1, 2, 0)) 81 | img = img[:, :, ::-1] 82 | plt.imshow(img) 83 | plt.show() 84 | -------------------------------------------------------------------------------- /model/deeplab.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import math 3 | import torch.utils.model_zoo as model_zoo 4 | import torch 5 | import numpy as np 6 | affine_par = True 7 | 8 | 9 | def outS(i): 10 | i = int(i) 11 | i = (i+1)/2 12 | i = int(np.ceil((i+1)/2.0)) 13 | i = (i+1)/2 14 | return i 15 | 16 | def conv3x3(in_planes, out_planes, stride=1): 17 | "3x3 convolution with padding" 18 | return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, 19 | padding=1, bias=False) 20 | 21 | 22 | class BasicBlock(nn.Module): 23 | expansion = 1 24 | 25 | def __init__(self, inplanes, planes, stride=1, downsample=None): 26 | super(BasicBlock, self).__init__() 27 | self.conv1 = conv3x3(inplanes, planes, stride) 28 | self.bn1 = nn.BatchNorm2d(planes, affine = affine_par) 29 | self.relu = nn.ReLU(inplace=True) 30 | self.conv2 = conv3x3(planes, planes) 31 | self.bn2 = nn.BatchNorm2d(planes, affine = affine_par) 32 | self.downsample = downsample 33 | self.stride = stride 34 | 35 | def forward(self, x): 36 | residual = x 37 | 38 | out = self.conv1(x) 39 | out = self.bn1(out) 40 | out = self.relu(out) 41 | 42 | out = self.conv2(out) 43 | out = self.bn2(out) 44 | 45 | if self.downsample is not None: 46 | residual = self.downsample(x) 47 |
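        # residual shortcut of BasicBlock: the input (projected by `downsample` when shapes differ) is added back onto the branch output before the final ReLU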
48 | out += residual 49 | out = self.relu(out) 50 | 51 | return out 52 | 53 | 54 | class Bottleneck(nn.Module): 55 | expansion = 4 56 | 57 | def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None): 58 | super(Bottleneck, self).__init__() 59 | self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, stride=stride, bias=False) # change 60 | self.bn1 = nn.BatchNorm2d(planes,affine = affine_par) 61 | for i in self.bn1.parameters(): 62 | i.requires_grad = False 63 | 64 | padding = dilation 65 | self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, # change 66 | padding=padding, bias=False, dilation = dilation) 67 | self.bn2 = nn.BatchNorm2d(planes,affine = affine_par) 68 | for i in self.bn2.parameters(): 69 | i.requires_grad = False 70 | self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False) 71 | self.bn3 = nn.BatchNorm2d(planes * 4, affine = affine_par) 72 | for i in self.bn3.parameters(): 73 | i.requires_grad = False 74 | self.relu = nn.ReLU(inplace=True) 75 | self.downsample = downsample 76 | self.stride = stride 77 | 78 | 79 | def forward(self, x): 80 | residual = x 81 | 82 | out = self.conv1(x) 83 | out = self.bn1(out) 84 | out = self.relu(out) 85 | 86 | out = self.conv2(out) 87 | out = self.bn2(out) 88 | out = self.relu(out) 89 | 90 | out = self.conv3(out) 91 | out = self.bn3(out) 92 | 93 | if self.downsample is not None: 94 | residual = self.downsample(x) 95 | 96 | out += residual 97 | out = self.relu(out) 98 | 99 | return out 100 | 101 | class Classifier_Module(nn.Module): 102 | 103 | def __init__(self, dilation_series, padding_series, num_classes): 104 | super(Classifier_Module, self).__init__() 105 | self.conv2d_list = nn.ModuleList() 106 | for dilation, padding in zip(dilation_series, padding_series): 107 | self.conv2d_list.append(nn.Conv2d(2048, num_classes, kernel_size=3, stride=1, padding=padding, dilation=dilation, bias = True)) 108 | 109 | for m in self.conv2d_list: 110 | m.weight.data.normal_(0, 0.01) 111 | 112 | def forward(self, x): 113 | out = self.conv2d_list[0](x) 114 | for i in range(len(self.conv2d_list)-1): 115 | out += self.conv2d_list[i+1](x) 116 | return out 117 | 118 | 119 | 120 | class ResNet(nn.Module): 121 | def __init__(self, block, layers, num_classes): 122 | self.inplanes = 64 123 | super(ResNet, self).__init__() 124 | self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, 125 | bias=False) 126 | self.bn1 = nn.BatchNorm2d(64, affine = affine_par) 127 | for i in self.bn1.parameters(): 128 | i.requires_grad = False 129 | self.relu = nn.ReLU(inplace=True) 130 | self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1, ceil_mode=True) # change 131 | self.layer1 = self._make_layer(block, 64, layers[0]) 132 | self.layer2 = self._make_layer(block, 128, layers[1], stride=2) 133 | self.layer3 = self._make_layer(block, 256, layers[2], stride=1, dilation=2) 134 | self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=4) 135 | self.layer5 = self._make_pred_layer(Classifier_Module, [6,12,18,24],[6,12,18,24],num_classes) 136 | 137 | for m in self.modules(): 138 | if isinstance(m, nn.Conv2d): 139 | n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels 140 | m.weight.data.normal_(0, 0.01) 141 | elif isinstance(m, nn.BatchNorm2d): 142 | m.weight.data.fill_(1) 143 | m.bias.data.zero_() 144 | # for i in m.parameters(): 145 | # i.requires_grad = False 146 | 147 | def _make_layer(self, block, planes, blocks, stride=1, dilation=1): 148 | downsample = None 149 | if stride != 1 or self.inplanes != 
planes * block.expansion or dilation == 2 or dilation == 4: 150 | downsample = nn.Sequential( 151 | nn.Conv2d(self.inplanes, planes * block.expansion, 152 | kernel_size=1, stride=stride, bias=False), 153 | nn.BatchNorm2d(planes * block.expansion,affine = affine_par)) 154 | for i in downsample._modules['1'].parameters(): 155 | i.requires_grad = False 156 | layers = [] 157 | layers.append(block(self.inplanes, planes, stride,dilation=dilation, downsample=downsample)) 158 | self.inplanes = planes * block.expansion 159 | for i in range(1, blocks): 160 | layers.append(block(self.inplanes, planes, dilation=dilation)) 161 | 162 | return nn.Sequential(*layers) 163 | def _make_pred_layer(self,block, dilation_series, padding_series,num_classes): 164 | return block(dilation_series,padding_series,num_classes) 165 | 166 | def forward(self, x): 167 | x = self.conv1(x) 168 | x = self.bn1(x) 169 | x = self.relu(x) 170 | x = self.maxpool(x) 171 | x = self.layer1(x) 172 | x = self.layer2(x) 173 | x = self.layer3(x) 174 | x = self.layer4(x) 175 | x = self.layer5(x) 176 | 177 | return x, x 178 | 179 | def get_1x_lr_params_NOscale(self): 180 | """ 181 | This generator returns all the parameters of the net except for 182 | the last classification layer. Note that for each batchnorm layer, 183 | requires_grad is set to False in deeplab_resnet.py, therefore this function does not return 184 | any batchnorm parameter 185 | """ 186 | b = [] 187 | 188 | b.append(self.conv1) 189 | b.append(self.bn1) 190 | b.append(self.layer1) 191 | b.append(self.layer2) 192 | b.append(self.layer3) 193 | b.append(self.layer4) 194 | 195 | 196 | for i in range(len(b)): 197 | for j in b[i].modules(): 198 | jj = 0 199 | for k in j.parameters(): 200 | jj+=1 201 | if k.requires_grad: 202 | yield k 203 | 204 | def get_10x_lr_params(self): 205 | """ 206 | This generator returns all the parameters for the last layer of the net, 207 | which does the classification of pixel into classes 208 | """ 209 | b = [] 210 | b.append(self.layer5.parameters()) 211 | 212 | for j in range(len(b)): 213 | for i in b[j]: 214 | yield i 215 | 216 | 217 | 218 | def optim_parameters(self, args): 219 | return [{'params': self.get_1x_lr_params_NOscale(), 'lr': args.lr}, 220 | {'params': self.get_10x_lr_params(), 'lr': 10*args.lr}] 221 | 222 | 223 | def Res_Deeplab(num_classes=21, pretrained=False): 224 | model = ResNet(Bottleneck,[3, 4, 23, 3], num_classes) 225 | if pretrained: 226 | # restore_from = '/data1/val/jogendra/amit/machine@54/amit_54/sdd1/amit/DLCV_project/code/phase2/pretrained_model/DeepLab_resnet_pretrained_init-f81d91e8.pth' 227 | restore_from = 'checkpoints/DeepLab_init.pth' 228 | saved_state_dict = torch.load(restore_from) 229 | 230 | new_params = model.state_dict().copy() 231 | for i in saved_state_dict: 232 | i_parts = i.split('.') 233 | if not i_parts[1] == 'layer5': 234 | new_params['.'.join(i_parts[1:])] = saved_state_dict[i] 235 | model.load_state_dict(new_params) 236 | print('ImageNet pretrained weights loaded') 237 | 238 | return model -------------------------------------------------------------------------------- /model/deeplab_multi.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import math 3 | import torch.utils.model_zoo as model_zoo 4 | import torch 5 | import numpy as np 6 | import torch.nn.functional as F 7 | 8 | affine_par = True 9 | 10 | 11 | def outS(i): 12 | i = int(i) 13 | i = (i + 1) / 2 14 | i = int(np.ceil((i + 1) / 2.0)) 15 | i = (i + 1) / 2 16 | return i 17 
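# outS mirrors the three stride-2 reductions at the network input: e.g. outS(321) == 41, so a 321-pixel crop yields 41 output positions (under Python 3 division the value comes back as the float 41.0).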
| 18 | 19 | def conv3x3(in_planes, out_planes, stride=1): 20 | "3x3 convolution with padding" 21 | return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, 22 | padding=1, bias=False) 23 | 24 | 25 | class BasicBlock(nn.Module): 26 | expansion = 1 27 | 28 | def __init__(self, inplanes, planes, stride=1, downsample=None): 29 | super(BasicBlock, self).__init__() 30 | self.conv1 = conv3x3(inplanes, planes, stride) 31 | self.bn1 = nn.BatchNorm2d(planes, affine=affine_par) 32 | self.relu = nn.ReLU(inplace=True) 33 | self.conv2 = conv3x3(planes, planes) 34 | self.bn2 = nn.BatchNorm2d(planes, affine=affine_par) 35 | self.downsample = downsample 36 | self.stride = stride 37 | 38 | def forward(self, x): 39 | residual = x 40 | 41 | out = self.conv1(x) 42 | out = self.bn1(out) 43 | out = self.relu(out) 44 | 45 | out = self.conv2(out) 46 | out = self.bn2(out) 47 | 48 | if self.downsample is not None: 49 | residual = self.downsample(x) 50 | 51 | out += residual 52 | out = self.relu(out) 53 | 54 | return out 55 | 56 | 57 | class Bottleneck(nn.Module): 58 | expansion = 4 59 | 60 | def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None): 61 | super(Bottleneck, self).__init__() 62 | self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, stride=stride, bias=False) # change 63 | self.bn1 = nn.BatchNorm2d(planes, affine=affine_par) 64 | for i in self.bn1.parameters(): 65 | i.requires_grad = False 66 | 67 | padding = dilation 68 | self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, # change 69 | padding=padding, bias=False, dilation=dilation) 70 | self.bn2 = nn.BatchNorm2d(planes, affine=affine_par) 71 | for i in self.bn2.parameters(): 72 | i.requires_grad = False 73 | self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False) 74 | self.bn3 = nn.BatchNorm2d(planes * 4, affine=affine_par) 75 | for i in self.bn3.parameters(): 76 | i.requires_grad = False 77 | self.relu = nn.ReLU(inplace=True) 78 | self.downsample = downsample 79 | self.stride = stride 80 | 81 | def forward(self, x): 82 | residual = x 83 | 84 | out = self.conv1(x) 85 | out = self.bn1(out) 86 | out = self.relu(out) 87 | 88 | out = self.conv2(out) 89 | out = self.bn2(out) 90 | out = self.relu(out) 91 | 92 | out = self.conv3(out) 93 | out = self.bn3(out) 94 | 95 | if self.downsample is not None: 96 | residual = self.downsample(x) 97 | 98 | out += residual 99 | out = self.relu(out) 100 | 101 | return out 102 | 103 | 104 | class Classifier_Module(nn.Module): 105 | def __init__(self, inplanes, dilation_series, padding_series, num_classes): 106 | super(Classifier_Module, self).__init__() 107 | self.conv2d_list = nn.ModuleList() 108 | for dilation, padding in zip(dilation_series, padding_series): 109 | self.conv2d_list.append( 110 | nn.Conv2d(inplanes, num_classes, kernel_size=3, stride=1, padding=padding, dilation=dilation, bias=True)) 111 | 112 | for m in self.conv2d_list: 113 | m.weight.data.normal_(0, 0.01) 114 | 115 | def forward(self, x): 116 | out = self.conv2d_list[0](x) 117 | for i in range(len(self.conv2d_list) - 1): 118 | out += self.conv2d_list[i + 1](x) 119 | return out 120 | 121 | 122 | class ResNetMulti(nn.Module): 123 | def __init__(self, block, layers, num_classes, open_classes=0, openset=False): 124 | self.inplanes = 64 125 | self.openset = openset 126 | super(ResNetMulti, self).__init__() 127 | self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, 128 | bias=False) 129 | self.bn1 = nn.BatchNorm2d(64, affine=affine_par) 130 | for i in self.bn1.parameters(): 131 | 
i.requires_grad = False 132 | self.relu = nn.ReLU(inplace=True) 133 | self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1, ceil_mode=True) # change 134 | self.layer1 = self._make_layer(block, 64, layers[0]) 135 | self.layer2 = self._make_layer(block, 128, layers[1], stride=2) 136 | self.layer3 = self._make_layer(block, 256, layers[2], stride=1, dilation=2) 137 | self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=4) 138 | self.layer5 = self._make_pred_layer(Classifier_Module, 1024, [6, 12, 18, 24], [6, 12, 18, 24], num_classes) 139 | self.layer6 = self._make_pred_layer(Classifier_Module, 2048, [6, 12, 18, 24], [6, 12, 18, 24], num_classes) 140 | if self.openset: 141 | self.layer5_1 = self._make_pred_layer(Classifier_Module, 1024, [6, 12, 18, 24], [6, 12, 18, 24], open_classes) 142 | self.layer6_1 = self._make_pred_layer(Classifier_Module, 2048, [6, 12, 18, 24], [6, 12, 18, 24], open_classes) 143 | 144 | for m in self.modules(): 145 | if isinstance(m, nn.Conv2d): 146 | n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels 147 | m.weight.data.normal_(0, 0.01) 148 | elif isinstance(m, nn.BatchNorm2d): 149 | m.weight.data.fill_(1) 150 | m.bias.data.zero_() 151 | 152 | def _make_layer(self, block, planes, blocks, stride=1, dilation=1): 153 | downsample = None 154 | if stride != 1 or self.inplanes != planes * block.expansion or dilation == 2 or dilation == 4: 155 | downsample = nn.Sequential( 156 | nn.Conv2d(self.inplanes, planes * block.expansion, 157 | kernel_size=1, stride=stride, bias=False), 158 | nn.BatchNorm2d(planes * block.expansion, affine=affine_par)) 159 | for i in downsample._modules['1'].parameters(): 160 | i.requires_grad = False 161 | layers = [] 162 | layers.append(block(self.inplanes, planes, stride, dilation=dilation, downsample=downsample)) 163 | self.inplanes = planes * block.expansion 164 | for i in range(1, blocks): 165 | layers.append(block(self.inplanes, planes, dilation=dilation)) 166 | 167 | return nn.Sequential(*layers) 168 | 169 | def _make_pred_layer(self, block, inplanes, dilation_series, padding_series, num_classes): 170 | return block(inplanes, dilation_series, padding_series, num_classes) 171 | 172 | def forward(self, x): 173 | x = self.conv1(x) 174 | x = self.bn1(x) 175 | x = self.relu(x) 176 | x = self.maxpool(x) 177 | x = self.layer1(x) 178 | x = self.layer2(x) 179 | 180 | x = self.layer3(x) 181 | x1 = self.layer5(x) 182 | if self.openset: 183 | x1_1 = self.layer5_1(x) 184 | x1 = torch.cat([x1, x1_1], dim=1) 185 | 186 | x = self.layer4(x) 187 | x2 = self.layer6(x) 188 | if self.openset: 189 | x2_1 = self.layer6_1(x) 190 | x2 = torch.cat([x2, x2_1], dim=1) 191 | 192 | return x1, x2 193 | 194 | def get_1x_lr_params_NOscale(self, warmup=False): 195 | """ 196 | This generator returns all the parameters of the net except for 197 | the last classification layer. 
Note that for each batchnorm layer, 198 | requires_grad is set to False in deeplab_resnet.py, therefore this function does not return 199 | any batchnorm parameter 200 | """ 201 | b = [] 202 | 203 | if warmup: 204 | b.append(self.conv1) 205 | b.append(self.bn1) 206 | b.append(self.layer1) 207 | b.append(self.layer2) 208 | b.append(self.layer3) 209 | b.append(self.layer4) 210 | 211 | for i in range(len(b)): 212 | for j in b[i].modules(): 213 | jj = 0 214 | for k in j.parameters(): 215 | jj += 1 216 | # if k.requires_grad: 217 | yield k 218 | 219 | def get_10x_lr_params(self): 220 | """ 221 | This generator returns all the parameters for the last layer of the net, 222 | which does the classification of pixel into classes 223 | """ 224 | b = [] 225 | b.append(self.layer5.parameters()) 226 | b.append(self.layer6.parameters()) 227 | if self.openset: 228 | b.append(self.layer5_1.parameters()) 229 | b.append(self.layer6_1.parameters()) 230 | 231 | for j in range(len(b)): 232 | for i in b[j]: 233 | yield i 234 | 235 | def optim_parameters(self, args, warmup=False): 236 | return [{'params': self.get_1x_lr_params_NOscale(warmup), 'lr': args.learning_rate}, 237 | {'params': self.get_10x_lr_params(), 'lr': 10 * args.learning_rate}] 238 | 239 | 240 | def DeeplabMulti(num_classes=21, open_classes=0, openset=False): 241 | model = ResNetMulti(Bottleneck, [3, 4, 23, 3], num_classes, open_classes, openset) 242 | return model 243 | 244 | class sig_NTM(nn.Module): 245 | def __init__(self, num_classes, open_classes=0, init = None): 246 | super(sig_NTM, self).__init__() 247 | 248 | T = torch.ones(num_classes+open_classes, num_classes) 249 | self.register_parameter(name='NTM', param=nn.parameter.Parameter(torch.FloatTensor(T))) 250 | self.NTM 251 | 252 | nn.init.kaiming_normal_(self.NTM, mode='fan_out', nonlinearity='relu') 253 | 254 | self.Identity_prior = torch.cat([torch.eye(num_classes, num_classes), torch.zeros(open_classes, num_classes)], 0) 255 | Class_dist = np.load('../ClassDist/ClassDist_bapa.npy') 256 | # Class_dist = Class_dist / Class_dist.max() 257 | self.Class_dist = torch.FloatTensor(np.tile(Class_dist, (num_classes + open_classes, 1))) 258 | 259 | def forward(self): 260 | T = torch.sigmoid(self.NTM).cuda() 261 | T = T.mul(self.Class_dist.cuda().detach()) + self.Identity_prior.cuda().detach() 262 | T = F.normalize(T, p=1, dim=1) 263 | return T 264 | 265 | class sig_W(nn.Module): 266 | def __init__(self, num_classes, open_classes=0): 267 | super(sig_W, self).__init__() 268 | 269 | self.classes = num_classes+open_classes 270 | init = 1./(self.classes-1.) 271 | 272 | self.register_parameter(name='weight', param=nn.parameter.Parameter(init*torch.ones(self.classes, self.classes))) 273 | 274 | self.weight 275 | 276 | self.identity = torch.zeros(self.classes, self.classes) - torch.eye(self.classes) 277 | 278 | def forward(self): 279 | ind = np.diag_indices(self.classes) 280 | with torch.no_grad(): 281 | self.weight[ind[0], ind[1]] = -10000. 
* torch.ones(self.classes).detach() 282 | 283 | w = torch.softmax(self.weight, dim = 1).cuda() 284 | 285 | weight = self.identity.detach().cuda() + w 286 | return weight -------------------------------------------------------------------------------- /model/deeplab_vgg.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | from torch import nn 4 | from torchvision import models 5 | 6 | class Classifier_Module(nn.Module): 7 | 8 | def __init__(self, dims_in, dilation_series, padding_series, num_classes): 9 | super(Classifier_Module, self).__init__() 10 | self.conv2d_list = nn.ModuleList() 11 | for dilation, padding in zip(dilation_series, padding_series): 12 | self.conv2d_list.append(nn.Conv2d(dims_in, num_classes, kernel_size=3, stride=1, padding=padding, dilation=dilation, bias = True)) 13 | 14 | for m in self.conv2d_list: 15 | m.weight.data.normal_(0, 0.01) 16 | 17 | def forward(self, x): 18 | out = self.conv2d_list[0](x) 19 | for i in range(len(self.conv2d_list)-1): 20 | out += self.conv2d_list[i+1](x) 21 | return out 22 | 23 | 24 | class DeeplabVGG(nn.Module): 25 | def __init__(self, num_classes, vgg16_caffe_path=None, pretrained=False): 26 | super(DeeplabVGG, self).__init__() 27 | vgg = models.vgg16() 28 | if pretrained: 29 | vgg.load_state_dict(torch.load(vgg16_caffe_path)) 30 | 31 | features, classifier = list(vgg.features.children()), list(vgg.classifier.children()) 32 | 33 | #remove pool4/pool5 34 | features = nn.Sequential(*(features[i] for i in list(range(23)) + list(range(24, 30)))) # list() needed: Python 3 range objects cannot be concatenated with + 35 | 36 | for i in [23,25,27]: 37 | features[i].dilation = (2,2) 38 | features[i].padding = (2,2) 39 | 40 | fc6 = nn.Conv2d(512, 1024, kernel_size=3, padding=4, dilation=4) 41 | fc7 = nn.Conv2d(1024, 1024, kernel_size=3, padding=4, dilation=4) 42 | 43 | self.features = nn.Sequential(*([features[i] for i in range(len(features))] + [ fc6, nn.ReLU(inplace=True), fc7, nn.ReLU(inplace=True)])) 44 | 45 | self.classifier = Classifier_Module(1024, [6,12,18,24],[6,12,18,24],num_classes) 46 | 47 | 48 | def forward(self, x): 49 | x = self.features(x) 50 | x = self.classifier(x) 51 | return x 52 | 53 | def optim_parameters(self, args): 54 | return self.parameters() 55 | -------------------------------------------------------------------------------- /model/deeplabv3.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from torchvision import models 4 | import torch.nn.functional as F 5 | import math 6 | import torch.utils.model_zoo as model_zoo 7 | import numpy as np 8 | 9 | class ResNet_50(nn.Module): 10 | def __init__(self, in_channels = 3, conv1_out = 64): 11 | super(ResNet_50,self).__init__() 12 | self.resnet_50 = models.resnet50(pretrained = True) 13 | self.relu = nn.ReLU(inplace=True) 14 | 15 | def forward(self,x): 16 | x = self.relu(self.resnet_50.bn1(self.resnet_50.conv1(x))) 17 | x = self.resnet_50.maxpool(x) 18 | x = self.resnet_50.layer1(x) 19 | x = self.resnet_50.layer2(x) 20 | x = self.resnet_50.layer3(x) 21 | return x 22 | 23 | class ASSP(nn.Module): 24 | def __init__(self,in_channels,out_channels = 256): 25 | super(ASSP,self).__init__() 26 | self.relu = nn.ReLU(inplace=True) 27 | self.conv1 = nn.Conv2d(in_channels = in_channels, 28 | out_channels = out_channels, 29 | kernel_size = 1, 30 | padding = 0, 31 | dilation=1, 32 | bias=False) 33 | self.bn1 = nn.BatchNorm2d(out_channels) 34 | 35 | self.conv2 = nn.Conv2d(in_channels = in_channels, 36 | out_channels =
out_channels, 37 | kernel_size = 3, 38 | stride=1, 39 | padding = 6, 40 | dilation = 6, 41 | bias=False) 42 | self.bn2 = nn.BatchNorm2d(out_channels) 43 | 44 | self.conv3 = nn.Conv2d(in_channels = in_channels, 45 | out_channels = out_channels, 46 | kernel_size = 3, 47 | stride=1, 48 | padding = 12, 49 | dilation = 12, 50 | bias=False) 51 | self.bn3 = nn.BatchNorm2d(out_channels) 52 | 53 | self.conv4 = nn.Conv2d(in_channels = in_channels, 54 | out_channels = out_channels, 55 | kernel_size = 3, 56 | stride=1, 57 | padding = 18, 58 | dilation = 18, 59 | bias=False) 60 | self.bn4 = nn.BatchNorm2d(out_channels) 61 | 62 | self.conv5 = nn.Conv2d(in_channels = in_channels, 63 | out_channels = out_channels, 64 | kernel_size = 1, 65 | stride=1, 66 | padding = 0, 67 | dilation=1, 68 | bias=False) 69 | self.bn5 = nn.BatchNorm2d(out_channels) 70 | 71 | self.convf = nn.Conv2d(in_channels = out_channels * 5, 72 | out_channels = out_channels, 73 | kernel_size = 1, 74 | stride=1, 75 | padding = 0, 76 | dilation=1, 77 | bias=False) 78 | self.bnf = nn.BatchNorm2d(out_channels) 79 | self.adapool = nn.AdaptiveAvgPool2d(1) 80 | 81 | def forward(self,x): 82 | x1 = self.conv1(x) 83 | x1 = self.bn1(x1) 84 | x1 = self.relu(x1) 85 | 86 | x2 = self.conv2(x) 87 | x2 = self.bn2(x2) 88 | x2 = self.relu(x2) 89 | 90 | x3 = self.conv3(x) 91 | x3 = self.bn3(x3) 92 | x3 = self.relu(x3) 93 | 94 | x4 = self.conv4(x) 95 | x4 = self.bn4(x4) 96 | x4 = self.relu(x4) 97 | 98 | # x5 = self.adapool(x) 99 | x5 = self.conv5(x) 100 | x5 = self.bn5(x5) 101 | x5 = self.relu(x5) 102 | x5 = F.interpolate(x5, size = tuple(x4.shape[-2:]), mode='bilinear') 103 | 104 | x = torch.cat((x1,x2,x3,x4,x5), dim = 1) #channels first 105 | x = self.convf(x) 106 | x = self.bnf(x) 107 | x = self.relu(x) 108 | return x 109 | 110 | 111 | class DeepLabv3(nn.Module): 112 | def __init__(self, nc, openc=0, openset=False): 113 | super(DeepLabv3, self).__init__() 114 | 115 | self.nc = nc 116 | self.openc = openc 117 | self.openset = openset 118 | 119 | self.resnet = ResNet_50() 120 | 121 | self.assp = ASSP(in_channels = 1024) 122 | 123 | self.conv = nn.Conv2d(in_channels = 256, out_channels = self.nc, 124 | kernel_size = 1, stride=1, padding=0) 125 | if openset: 126 | self.conv_1 = nn.Conv2d(in_channels = 256, out_channels = self.openc, 127 | kernel_size = 1, stride=1, padding=0) 128 | 129 | def forward(self,x): 130 | _, _, h, w = x.shape 131 | x = self.resnet(x) 132 | x = self.assp(x) 133 | x1 = self.conv(x) 134 | if self.openset: 135 | x1_1 = self.conv_1(x) 136 | x1 = torch.cat([x1, x1_1], dim=1) 137 | x1 = F.interpolate(x1, size=(h, w), mode='bilinear') #scale_factor = 16, mode='bilinear') 138 | return x1 139 | 140 | def get_1x_lr_params_NOscale(self): 141 | b = [] 142 | b.append(self.resnet) 143 | 144 | for i in range(len(b)): 145 | for j in b[i].modules(): 146 | jj = 0 147 | for k in j.named_parameters(): 148 | jj += 1 149 | if 'resnet_50.layer3' in k[0] or 'resnet_50.layer4' in k[0] or 'resnet_50.fc' in k[0]: 150 | # if k.requires_grad: 151 | yield k[1] 152 | 153 | def get_10x_lr_params(self): 154 | b = [] 155 | b.append(self.assp.parameters()) 156 | b.append(self.conv.parameters()) 157 | if self.openset: 158 | b.append(self.conv_1.parameters()) 159 | 160 | for j in range(len(b)): 161 | for i in b[j]: 162 | yield i 163 | 164 | def optim_parameters(self, args): 165 | return [{'params': self.get_1x_lr_params_NOscale(), 'lr': args.learning_rate}, 166 | {'params': self.get_10x_lr_params(), 'lr': 10 * args.learning_rate}] 167 | 168 | class sig_NTM(nn.Module): 
169 | def __init__(self, num_classes, open_classes=0, init = None): 170 | super(sig_NTM, self).__init__() 171 | 172 | T = torch.ones(num_classes+open_classes, num_classes) 173 | self.register_parameter(name='NTM', param=nn.parameter.Parameter(torch.FloatTensor(T))) 174 | self.NTM 175 | 176 | nn.init.kaiming_normal_(self.NTM, mode='fan_out', nonlinearity='relu') 177 | 178 | self.Identity_prior = torch.cat([torch.eye(num_classes, num_classes), torch.zeros(open_classes, num_classes)], 0) 179 | Class_dist = np.load('../ClassDist/ClassDist_source.npy') 180 | # Class_dist = Class_dist / Class_dist.max() 181 | self.Class_dist = torch.FloatTensor(np.tile(Class_dist, (num_classes + open_classes, 1))) 182 | 183 | def forward(self): 184 | T = torch.sigmoid(self.NTM).cuda() 185 | T = T.mul(self.Class_dist.cuda().detach()) + self.Identity_prior.cuda().detach() 186 | T = F.normalize(T, p=1, dim=1) 187 | return T 188 | 189 | class sig_W(nn.Module): 190 | def __init__(self, num_classes, open_classes=0): 191 | super(sig_W, self).__init__() 192 | 193 | self.classes = num_classes+open_classes 194 | init = 1./(self.classes-1.) 195 | 196 | self.register_parameter(name='weight', param=nn.parameter.Parameter(init*torch.ones(self.classes, self.classes))) 197 | 198 | self.weight 199 | 200 | self.identity = torch.zeros(self.classes, self.classes) - torch.eye(self.classes) 201 | 202 | def forward(self): 203 | ind = np.diag_indices(self.classes) 204 | with torch.no_grad(): 205 | self.weight[ind[0], ind[1]] = -10000. * torch.ones(self.classes).detach() 206 | 207 | w = torch.softmax(self.weight, dim = 1).cuda() 208 | 209 | weight = self.identity.detach().cuda() + w 210 | return weight -------------------------------------------------------------------------------- /model/discriminator.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import torch.nn.functional as F 3 | 4 | 5 | class FCDiscriminator(nn.Module): 6 | 7 | def __init__(self, num_classes, ndf = 64): 8 | super(FCDiscriminator, self).__init__() 9 | 10 | self.conv1 = nn.Conv2d(num_classes, ndf, kernel_size=4, stride=2, padding=1) 11 | self.conv2 = nn.Conv2d(ndf, ndf*2, kernel_size=4, stride=2, padding=1) 12 | self.conv3 = nn.Conv2d(ndf*2, ndf*4, kernel_size=4, stride=2, padding=1) 13 | self.conv4 = nn.Conv2d(ndf*4, ndf*8, kernel_size=4, stride=2, padding=1) 14 | self.classifier = nn.Conv2d(ndf*8, 1, kernel_size=4, stride=2, padding=1) 15 | 16 | self.leaky_relu = nn.LeakyReLU(negative_slope=0.2, inplace=True) 17 | #self.up_sample = nn.Upsample(scale_factor=32, mode='bilinear') 18 | #self.sigmoid = nn.Sigmoid() 19 | 20 | 21 | def forward(self, x): 22 | x = self.conv1(x) 23 | x = self.leaky_relu(x) 24 | x = self.conv2(x) 25 | x = self.leaky_relu(x) 26 | x = self.conv3(x) 27 | x = self.leaky_relu(x) 28 | x = self.conv4(x) 29 | x = self.leaky_relu(x) 30 | x = self.classifier(x) 31 | #x = self.up_sample(x) 32 | #x = self.sigmoid(x) 33 | 34 | return x 35 | -------------------------------------------------------------------------------- /network.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityU-AIM-Group/SimT/0abe8bae984b1bb18b6fe5d32358d15f034db5a5/network.png -------------------------------------------------------------------------------- /network1.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/CityU-AIM-Group/SimT/0abe8bae984b1bb18b6fe5d32358d15f034db5a5/network1.png -------------------------------------------------------------------------------- /network2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityU-AIM-Group/SimT/0abe8bae984b1bb18b6fe5d32358d15f034db5a5/network2.png -------------------------------------------------------------------------------- /sh_simt.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #SBATCH -J Polyp 3 | #SBATCH -o SimT_BAPA1.out 4 | #SBATCH -e error.err 5 | #SBATCH --gres=gpu:1 6 | #SBATCH -w node2 7 | 8 | echo "Submitted from:"$SLURM_SUBMIT_DIR" on node:"$SLURM_SUBMIT_HOST 9 | echo "Running on node "$SLURM_JOB_NODELIST 10 | echo "Allocate Gpu Units:"$CUDA_VISIBLE_DEVICES 11 | 12 | source /home/xiaoqiguo2/.bashrc 13 | 14 | conda activate torch020 15 | 16 | cd /home/xiaoqiguo2/SimT/tools/ 17 | python -u ./trainV2_simt.py --open-classes 15 --learning-rate 6e-4 --learning-rate-T 6e-3 --Threshold-high 0.8 --Threshold-low 0.2 --lambda-Place 0.1 --lambda-Convex 0.1 --lambda-Volume 1.0 --lambda-Anchor 1.0 --restore-from '../snapshots/GTA5_BAPA_warmup_iter129000_mIoU57.44.pth' -------------------------------------------------------------------------------- /sh_warmup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #SBATCH -J Polyp 3 | #SBATCH -o Warmup.out 4 | #SBATCH -e error.err 5 | #SBATCH --gres=gpu:1 6 | #SBATCH -w node3 7 | 8 | echo "Submitted from:"$SLURM_SUBMIT_DIR" on node:"$SLURM_SUBMIT_HOST 9 | echo "Running on node "$SLURM_JOB_NODELIST 10 | echo "Allocate Gpu Units:"$CUDA_VISIBLE_DEVICES 11 | 12 | source /home/xiaoqiguo2/.bashrc 13 | 14 | conda activate torch020 15 | 16 | cd /home/xiaoqiguo2/SimT/tools/ 17 | python -u ./trainV1_warmup.py -------------------------------------------------------------------------------- /tools/__pycache__/_init_paths.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityU-AIM-Group/SimT/0abe8bae984b1bb18b6fe5d32358d15f034db5a5/tools/__pycache__/_init_paths.cpython-36.pyc -------------------------------------------------------------------------------- /tools/__pycache__/evaluate_cityscapes.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityU-AIM-Group/SimT/0abe8bae984b1bb18b6fe5d32358d15f034db5a5/tools/__pycache__/evaluate_cityscapes.cpython-36.pyc -------------------------------------------------------------------------------- /tools/_init_paths.py: -------------------------------------------------------------------------------- 1 | from __future__ import absolute_import 2 | from __future__ import division 3 | from __future__ import print_function 4 | 5 | import os.path as osp 6 | import sys 7 | 8 | def add_path(path): 9 | if path not in sys.path: 10 | sys.path.insert(0, path) 11 | 12 | this_dir = osp.dirname(__file__) 13 | 14 | lib_path = osp.join(this_dir, '/home/xiaoqiguo2/SimT/') 15 | add_path(lib_path) -------------------------------------------------------------------------------- /tools/compute_ClassDistribution.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import scipy 3 | from scipy import ndimage 4 | import numpy as np 5 | import sys 6 | import ttach as tta 7 | 
import torch 8 | from torch.autograd import Variable 9 | import torchvision.models as models 10 | import torch.nn.functional as F 11 | from torch.utils import data, model_zoo 12 | import _init_paths 13 | from model.deeplab_multi import DeeplabMulti 14 | from model.deeplab_vgg import DeeplabVGG 15 | from dataset.cityscapes_dataset import cityscapesDataSet 16 | from collections import OrderedDict 17 | import os 18 | import time 19 | from PIL import Image 20 | import json 21 | from os.path import join 22 | import torch.nn as nn 23 | 24 | import itertools 25 | import pandas as pd 26 | import matplotlib.pyplot as plt 27 | plt.switch_backend('agg') 28 | 29 | IMG_MEAN = np.array((104.00698793,116.66876762,122.67891434), dtype=np.float32) 30 | 31 | DATA_DIRECTORY = '/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes' 32 | DATA_LIST_PATH = '/home/xiaoqiguo2/MetaCorrection/datasets/cityscapes_list/val.txt' 33 | SAVE_PATH = '/home/xiaoqiguo2/scratch/MetaCorrection/result/cityscapes' 34 | 35 | IGNORE_LABEL = 255 36 | NUM_CLASSES = 19 37 | NUM_STEPS = 500 # Number of images in the validation set. 38 | RESTORE_FROM = 'http://vllab.ucmerced.edu/ytsai/CVPR18/GTA2Cityscapes_multi-ed35151c.pth' 39 | RESTORE_FROM_VGG = 'http://vllab.ucmerced.edu/ytsai/CVPR18/GTA2Cityscapes_vgg-ac4ac9f6.pth' 40 | RESTORE_FROM_ORC = 'http://vllab1.ucmerced.edu/~whung/adaptSeg/cityscapes_oracle-b7b9934.pth' 41 | SET = 'val' 42 | 43 | MODEL = 'DeeplabMulti' 44 | 45 | palette = [128, 64, 128, 244, 35, 232, 70, 70, 70, 102, 102, 156, 190, 153, 153, 153, 153, 153, 250, 170, 30, 46 | 220, 220, 0, 107, 142, 35, 152, 251, 152, 70, 130, 180, 220, 20, 60, 255, 0, 0, 0, 0, 142, 0, 0, 70, 47 | 0, 60, 100, 0, 80, 100, 0, 0, 230, 119, 11, 32] 48 | zero_pad = 256 * 3 - len(palette) 49 | for i in range(zero_pad): 50 | palette.append(0) 51 | 52 | def fast_hist(a, n): 53 | ka = (a >= 0) & (a < n) 54 | return np.bincount(a[ka], minlength=n) 55 | 56 | def per_class_iu(hist): 57 | return np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist)) 58 | 59 | def label_mapping(input, mapping): 60 | output = np.copy(input) 61 | for ind in range(len(mapping)): 62 | output[input == mapping[ind][0]] = mapping[ind][1] 63 | return np.array(output, dtype=np.int64) 64 | 65 | 66 | def compute_CD(gt_dir, pred_dir, devkit_dir='/home/xiaoqiguo2/MetaCorrection/datasets/cityscapes_list'): 67 | """ 68 | Compute the per-class pixel distribution of the predicted pseudo-label images 69 | """ 70 | with open(join(devkit_dir, 'info.json'), 'r') as fp: 71 | info = json.load(fp) 72 | num_classes = np.int(info['classes']) 73 | print('Num classes', num_classes) 74 | name_classes = np.array(info['label'], dtype=np.str) 75 | mapping = np.array(info['label2train'], dtype=np.int) 76 | 77 | image_path_list = join(devkit_dir, 'train.txt') 78 | label_path_list = join(devkit_dir, 'train_label.txt') 79 | pred_imgs = open(image_path_list, 'r').read().splitlines() 80 | pred_imgs = [join(pred_dir, x.split('/')[-1]) for x in pred_imgs] 81 | 82 | CM = np.zeros(19) 83 | for ind in range(len(pred_imgs)): 84 | pred = np.array(Image.open(pred_imgs[ind])) 85 | CM += fast_hist(pred.flatten(), 19) 86 | return CM 87 | 88 | if __name__ == '__main__': 89 | gt_dir ='/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes/train_label' 90 | pred_dir ='/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes/pseudo_bapa/' 91 | Class_dist = compute_CD(gt_dir, pred_dir) 92 | Class_dist_norm = Class_dist/(np.sum(Class_dist)+10e-10) 93 | np.save("../ClassDist/ClassDist_bapa.npy", Class_dist_norm) 94 | print(Class_dist, Class_dist_norm)
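The normalised distribution saved above is the class prior that sig_NTM in model/deeplab_multi.py loads back via np.load('../ClassDist/ClassDist_bapa.npy'). A minimal sketch of how that prior becomes the simplex noise transition matrix T and corrects a clean-class posterior: the 19/15 class split follows sh_simt.sh, the random tensors are stand-ins for the learned parameter and network outputs, and the matrix-product reading of the forward correction is an assumption rather than a verbatim excerpt from trainV2_simt.py.

import numpy as np
import torch
import torch.nn.functional as F

num_classes, open_classes = 19, 15                               # closed/open split as in sh_simt.sh
Class_dist = np.load('../ClassDist/ClassDist_bapa.npy')          # (19,) prior computed by this script
NTM = torch.randn(num_classes + open_classes, num_classes)       # stand-in for the learnable parameter
prior = torch.cat([torch.eye(num_classes),
                   torch.zeros(open_classes, num_classes)], 0)   # identity prior on the closed-set block
dist = torch.from_numpy(np.tile(Class_dist, (num_classes + open_classes, 1))).float()
T = F.normalize(torch.sigmoid(NTM) * dist + prior, p=1, dim=1)   # each row is a distribution over the 19 noisy labels
clean_post = torch.softmax(torch.randn(4, num_classes + open_classes), dim=1)  # toy clean posteriors
noisy_post = clean_post @ T                                      # (4, 19) modelled pseudo-label probabilities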
-------------------------------------------------------------------------------- /tools/compute_ConfusionMatrix.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import scipy 3 | from scipy import ndimage 4 | import numpy as np 5 | import sys 6 | import ttach as tta 7 | import torch 8 | from torch.autograd import Variable 9 | import torchvision.models as models 10 | import torch.nn.functional as F 11 | from torch.utils import data, model_zoo 12 | import _init_paths 13 | from model.deeplab_multi import DeeplabMulti 14 | from model.deeplab_vgg import DeeplabVGG 15 | # from model.meta_deeplab_multi import Res_Deeplab # module not shipped in this repo 16 | # from model.deeplab_dsp import Res_Deeplab_DSP # module not shipped in this repo 17 | from dataset.cityscapes_dataset import cityscapesDataSet 18 | from collections import OrderedDict 19 | import os 20 | import time 21 | from PIL import Image 22 | import json 23 | from os.path import join 24 | import torch.nn as nn 25 | 26 | import itertools 27 | import pandas as pd 28 | import matplotlib.pyplot as plt 29 | plt.switch_backend('agg') 30 | 31 | IMG_MEAN = np.array((104.00698793,116.66876762,122.67891434), dtype=np.float32) 32 | 33 | DATA_DIRECTORY = '/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes' 34 | DATA_LIST_PATH = '/home/xiaoqiguo2/MetaCorrection/datasets/cityscapes_list/val.txt' 35 | SAVE_PATH = '/home/xiaoqiguo2/scratch/MetaCorrection/result/cityscapes' 36 | 37 | IGNORE_LABEL = 255 38 | NUM_CLASSES = 19 39 | NUM_STEPS = 500 # Number of images in the validation set. 40 | RESTORE_FROM = 'http://vllab.ucmerced.edu/ytsai/CVPR18/GTA2Cityscapes_multi-ed35151c.pth' 41 | RESTORE_FROM_VGG = 'http://vllab.ucmerced.edu/ytsai/CVPR18/GTA2Cityscapes_vgg-ac4ac9f6.pth' 42 | RESTORE_FROM_ORC = 'http://vllab1.ucmerced.edu/~whung/adaptSeg/cityscapes_oracle-b7b9934.pth' 43 | SET = 'val' 44 | 45 | MODEL = 'DeeplabMulti' 46 | 47 | palette = [128, 64, 128, 244, 35, 232, 70, 70, 70, 102, 102, 156, 190, 153, 153, 153, 153, 153, 250, 170, 30, 48 | 220, 220, 0, 107, 142, 35, 152, 251, 152, 70, 130, 180, 220, 20, 60, 255, 0, 0, 0, 0, 142, 0, 0, 70, 49 | 0, 60, 100, 0, 80, 100, 0, 0, 230, 119, 11, 32] 50 | zero_pad = 256 * 3 - len(palette) 51 | for i in range(zero_pad): 52 | palette.append(0) 53 | 54 | def fast_hist(a, b, n33, n19): 55 | ka = (a >= 0) & (a < n33) 56 | return np.bincount(n19 * a[ka].astype(int) + b[ka], minlength=n33 * n19).reshape(n33, n19) 57 | 58 | def per_class_iu(hist): 59 | return np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist)) 60 | 61 | def label_mapping(input, mapping): 62 | output = np.copy(input) 63 | for ind in range(len(mapping)): 64 | output[input == mapping[ind][0]] = mapping[ind][1] 65 | return np.array(output, dtype=np.int64) 66 | 67 | 68 | def compute_CM(gt_dir, pred_dir, devkit_dir='/home/xiaoqiguo2/MetaCorrection/datasets/cityscapes_list'): 69 | """ 70 | Compute the 34x19 confusion matrix between ground-truth label IDs and predicted pseudo labels 71 | """ 72 | with open(join(devkit_dir, 'info.json'), 'r') as fp: 73 | info = json.load(fp) 74 | num_classes = np.int(info['classes']) 75 | print('Num classes', num_classes) 76 | name_classes = np.array(info['label'], dtype=np.str) 77 | mapping = np.array(info['label2train_1'], dtype=np.int) 78 | hist = np.zeros((num_classes, num_classes)) 79 | 80 | image_path_list = join(devkit_dir, 'train.txt') 81 | label_path_list = join(devkit_dir, 'train_label.txt') 82 | # image_path_list = join(devkit_dir, 'val.txt') 83 | # label_path_list = join(devkit_dir, 'label.txt') 84 | gt_imgs = open(label_path_list,
'r').read().splitlines() 85 | gt_imgs = [join(gt_dir, x) for x in gt_imgs] 86 | pred_imgs = open(image_path_list, 'r').read().splitlines() 87 | pred_imgs = [join(pred_dir, x.split('/')[-1]) for x in pred_imgs] 88 | 89 | CM = np.zeros((34,19)) 90 | for ind in range(len(gt_imgs)): 91 | pred = np.array(Image.open(pred_imgs[ind])) 92 | label = np.array(Image.open(gt_imgs[ind])) 93 | label = label_mapping(label, mapping) 94 | if len(label.flatten()) != len(pred.flatten()): 95 | print('Skipping: len(gt) = {:d}, len(pred) = {:d}, {:s}, {:s}'.format(len(label.flatten()), len(pred.flatten()), gt_imgs[ind], pred_imgs[ind])) 96 | continue 97 | CM += fast_hist(label.flatten(), pred.flatten(), 34, 19) 98 | return CM 99 | 100 | def plot_NTM(trans_mat, normalize=True, title='NTM1', cmap=plt.cm.Blues): 101 | plt.figure() 102 | plt.imshow(trans_mat, interpolation='nearest', cmap=cmap) 103 | plt.colorbar() 104 | 105 | thresh = trans_mat.max() / 2. 106 | for i, j in itertools.product(range(trans_mat.shape[0]), range(trans_mat.shape[1])): 107 | num = '{:.3f}'.format(trans_mat[i, j]) if normalize else int(trans_mat[i, j]) 108 | plt.text(j, i, num, 109 | fontsize=2, 110 | verticalalignment='center', 111 | horizontalalignment="center", 112 | color="white" if np.float(num) > thresh else "black") 113 | plt.savefig('/home/xiaoqiguo2/MetaCorrection/'+title+'.png', transparent=True, dpi=600) 114 | 115 | 116 | if __name__ == '__main__': 117 | gt_dir ='/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes/train_label' 118 | # gt_dir ='/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes/label' 119 | pred_dir ='/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes/pseudo_adaptsegnet' 120 | Confusion_matrix = compute_CM(gt_dir, pred_dir) 121 | Confusion_matrix_norm = Confusion_matrix/(np.sum(Confusion_matrix, axis=1, keepdims=True)+10e-6) 122 | np.save("../CM.npy", Confusion_matrix) 123 | np.save("../CM_norm.npy", Confusion_matrix_norm) 124 | plot_NTM(Confusion_matrix_norm, normalize=True, title='Training_CM_Adapt', cmap=plt.cm.Blues) 125 | # plot_NTM(Confusion_matrix_norm, normalize=True, title='Testing_CM', cmap=plt.cm.Blues) 126 | # print(Confusion_matrix) 127 | print(Confusion_matrix_norm) 128 | print(np.mean(np.diag(Confusion_matrix_norm[:19,:]))) -------------------------------------------------------------------------------- /tools/compute_iou.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import argparse 3 | import json 4 | from PIL import Image 5 | from os.path import join 6 | import _init_paths 7 | 8 | 9 | def fast_hist(a, b, n): 10 | k = (a >= 0) & (a < n) 11 | return np.bincount(n * a[k].astype(int) + b[k], minlength=n ** 2).reshape(n, n) 12 | 13 | 14 | def per_class_iu(hist): 15 | return np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist)) 16 | 17 | 18 | def label_mapping(input, mapping): 19 | output = np.copy(input) 20 | for ind in range(len(mapping)): 21 | output[input == mapping[ind][0]] = mapping[ind][1] 22 | return np.array(output, dtype=np.int64) 23 | 24 | 25 | def compute_mIoU(gt_dir, pred_dir, devkit_dir=''): 26 | """ 27 | Compute IoU given the predicted colorized images and the ground-truth labels 28 | """ 29 | with open(join(devkit_dir, 'info.json'), 'r') as fp: 30 | info = json.load(fp) 31 | num_classes = np.int(info['classes']) 32 | print('Num classes', num_classes) 33 | name_classes = np.array(info['label'], dtype=np.str) 34 | mapping = np.array(info['label2train'], dtype=np.int) 35 | hist = np.zeros((num_classes, num_classes)) 36 | 37 | image_path_list =
join(devkit_dir, 'val.txt') 38 | label_path_list = join(devkit_dir, 'label.txt') 39 | gt_imgs = open(label_path_list, 'r').read().splitlines() 40 | gt_imgs = [join(gt_dir, x) for x in gt_imgs] 41 | pred_imgs = open(image_path_list, 'r').read().splitlines() 42 | pred_imgs = [join(pred_dir, x.split('/')[-1]) for x in pred_imgs] 43 | 44 | for ind in range(len(gt_imgs)): 45 | pred = np.array(Image.open(pred_imgs[ind])) 46 | label = np.array(Image.open(gt_imgs[ind])) 47 | label = label_mapping(label, mapping) 48 | if len(label.flatten()) != len(pred.flatten()): 49 | print('Skipping: len(gt) = {:d}, len(pred) = {:d}, {:s}, {:s}'.format(len(label.flatten()), len(pred.flatten()), gt_imgs[ind], pred_imgs[ind])) 50 | continue 51 | hist += fast_hist(label.flatten(), pred.flatten(), num_classes) 52 | if ind > 0 and ind % 10 == 0: 53 | print('{:d} / {:d}: {:0.2f}'.format(ind, len(gt_imgs), 100*np.mean(per_class_iu(hist)))) 54 | 55 | mIoUs = per_class_iu(hist) 56 | for ind_class in range(num_classes): 57 | print('===>' + name_classes[ind_class] + ':\t' + str(round(mIoUs[ind_class] * 100, 2))) 58 | print('===> mIoU: ' + str(round(np.nanmean(mIoUs) * 100, 2))) 59 | return mIoUs 60 | 61 | 62 | def main(args): 63 | compute_mIoU(args.gt_dir, args.pred_dir, args.devkit_dir) 64 | 65 | 66 | if __name__ == "__main__": 67 | parser = argparse.ArgumentParser() 68 | parser.add_argument('gt_dir', type=str, help='directory which stores CityScapes val gt images') 69 | parser.add_argument('pred_dir', type=str, help='directory which stores CityScapes val pred images') 70 | parser.add_argument('--devkit_dir', default='dataset/cityscapes_list', help='base directory of cityscapes') 71 | args = parser.parse_args() 72 | main(args) 73 | -------------------------------------------------------------------------------- /tools/evaluate_cityscapes.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import scipy 3 | from scipy import ndimage 4 | import numpy as np 5 | import sys 6 | 7 | import torch 8 | from torch.autograd import Variable 9 | import torchvision.models as models 10 | import torch.nn.functional as F 11 | from torch.utils import data, model_zoo 12 | from model.deeplab import Res_Deeplab 13 | from model.deeplab_multi import DeeplabMulti 14 | from model.deeplab_vgg import DeeplabVGG 15 | from dataset.cityscapes_dataset import cityscapesDataSet 16 | from collections import OrderedDict 17 | import os 18 | from PIL import Image 19 | import json 20 | from os.path import join 21 | import matplotlib.pyplot as plt 22 | import torch.nn as nn 23 | 24 | IMG_MEAN = np.array((104.00698793,116.66876762,122.67891434), dtype=np.float32) 25 | 26 | DATA_DIRECTORY = '/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes' 27 | DATA_LIST_PATH = '../dataset/cityscapes_list/val.txt' 28 | SAVE_PATH = '../result/cityscapes' 29 | 30 | IGNORE_LABEL = 255 31 | NUM_CLASSES = 19 32 | NUM_STEPS = 500 # Number of images in the validation set. 
33 | RESTORE_FROM = 'http://vllab.ucmerced.edu/ytsai/CVPR18/GTA2Cityscapes_multi-ed35151c.pth' 34 | RESTORE_FROM_VGG = 'http://vllab.ucmerced.edu/ytsai/CVPR18/GTA2Cityscapes_vgg-ac4ac9f6.pth' 35 | RESTORE_FROM_ORC = 'http://vllab1.ucmerced.edu/~whung/adaptSeg/cityscapes_oracle-b7b9934.pth' 36 | SET = 'val' 37 | 38 | MODEL = 'DeeplabMulti' 39 | 40 | palette = [128, 64, 128, 244, 35, 232, 70, 70, 70, 102, 102, 156, 190, 153, 153, 153, 153, 153, 250, 170, 30, 41 | 220, 220, 0, 107, 142, 35, 152, 251, 152, 70, 130, 180, 220, 20, 60, 255, 0, 0, 0, 0, 142, 0, 0, 70, 42 | 0, 60, 100, 0, 80, 100, 0, 0, 230, 119, 11, 32, 255, 255, 255] 43 | zero_pad = 256 * 3 - len(palette) 44 | for i in range(zero_pad): 45 | palette.append(0) 46 | 47 | 48 | def colorize_mask(mask): 49 | # mask: numpy array of the mask 50 | new_mask = Image.fromarray(mask.astype(np.uint8)).convert('P') 51 | new_mask.putpalette(palette) 52 | 53 | return new_mask 54 | 55 | def get_arguments(): 56 | """Parse all the arguments provided from the CLI. 57 | Returns: 58 | A list of parsed arguments. 59 | """ 60 | parser = argparse.ArgumentParser(description="DeepLab-ResNet Network") 61 | parser.add_argument("--model", type=str, default=MODEL, 62 | help="Model Choice (DeeplabMulti/DeeplabVGG/Oracle).") 63 | parser.add_argument("--data-dir", type=str, default=DATA_DIRECTORY, 64 | help="Path to the directory containing the Cityscapes dataset.") 65 | parser.add_argument("--data-list", type=str, default=DATA_LIST_PATH, 66 | help="Path to the file listing the images in the dataset.") 67 | parser.add_argument("--ignore-label", type=int, default=IGNORE_LABEL, 68 | help="The index of the label to ignore during the training.") 69 | parser.add_argument("--num-classes", type=int, default=NUM_CLASSES, 70 | help="Number of classes to predict (including background).") 71 | parser.add_argument("--restore-from", type=str, default=RESTORE_FROM, 72 | help="Where restore model parameters from.") 73 | parser.add_argument("--gpu", type=int, default=0, 74 | help="choose gpu device.") 75 | parser.add_argument("--set", type=str, default=SET, 76 | help="choose evaluation set.") 77 | parser.add_argument("--save", type=str, default=SAVE_PATH, 78 | help="Path to save result.") 79 | return parser.parse_args() 80 | 81 | def fast_hist(a, b, n): 82 | k = (a >= 0) & (a < n) 83 | return np.bincount(n * a[k].astype(int) + b[k], minlength=n ** 2).reshape(n, n) 84 | 85 | 86 | def per_class_iu(hist): 87 | return np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist)) 88 | 89 | 90 | def label_mapping(input, mapping): 91 | output = np.copy(input) 92 | for ind in range(len(mapping)): 93 | output[input == mapping[ind][0]] = mapping[ind][1] 94 | return np.array(output, dtype=np.int64) 95 | 96 | def evaluate_simt(seg_model, pred_dir=None, devkit_dir='/home/xiaoqiguo2/SimT/dataset/cityscapes_list', post=False): 97 | """Create the model and start the evaluation process.""" 98 | 99 | # if not os.path.exists(pred_dir): 100 | # os.makedirs(pred_dir) 101 | device = torch.device("cuda") 102 | 103 | testloader = data.DataLoader(cityscapesDataSet(DATA_DIRECTORY, DATA_LIST_PATH, crop_size=(1024, 512), mean=IMG_MEAN, scale=False, mirror=False, set=SET), 104 | batch_size=1, shuffle=False, pin_memory=True) 105 | testloader_640 = data.DataLoader(cityscapesDataSet(DATA_DIRECTORY, DATA_LIST_PATH, crop_size=(1280, 640), mean=IMG_MEAN, scale=False, mirror=False, set=SET), 106 | batch_size=1, shuffle=False, pin_memory=True) 107 | 108 | interp = nn.Upsample(size=(1024, 2048), mode='bilinear', 
align_corners=True)
109 |     print('Evaluate for testing data')
110 | 
111 |     with open(join(devkit_dir, 'info.json'), 'r') as fp:
112 |         info = json.load(fp)
113 |     num_classes = np.int(info['classes'])
114 |     name_classes = np.array(info['label'], dtype=np.str)
115 |     mapping = np.array(info['label2train'], dtype=np.int)
116 |     hist = np.zeros((num_classes, num_classes))
117 | 
118 |     seg_model.eval()
119 |     with torch.no_grad():
120 |         for index, (batch, batch_640) in enumerate(zip(testloader, testloader_640)):
121 |             image, _, name = batch
122 |             image = image.to(device)
123 | 
124 |             image_640, _, name = batch_640
125 |             image_640 = image_640.to(device)
126 | 
127 |             output1, output2 = seg_model(image)
128 |             output = interp(output2[:,:num_classes,:,:]).cpu().data[0].numpy()
129 |             del output1
130 |             del output2
131 | 
132 |             output1, output2 = seg_model(image_640)
133 |             output += interp(output2[:,:num_classes,:,:]).cpu().data[0].numpy()
134 |             del output1
135 |             del output2
136 | 
137 |             output = output.transpose(1,2,0)
138 |             output = np.asarray(np.argmax(output, axis=2))
139 | 
140 |             gt_dir ='/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes/label'
141 |             gt_path = '%s/%s' % (gt_dir, name[0].split('leftImg8bit')[0]+'gtFine_labelIds.png')
142 | 
143 |             label = np.array(Image.open(gt_path))
144 |             label = label_mapping(label, mapping)
145 |             if len(label.flatten()) != len(output.flatten()):
146 |                 print('Skipping: len(gt) = {:d}, len(pred) = {:d}, {:s}'.format(len(label.flatten()), len(output.flatten()), gt_path))
147 |                 continue
148 |             hist += fast_hist(label.flatten(), output.flatten(), num_classes)
149 | 
150 |             # output_col = colorize_mask(np.uint8(output))
151 |             # output = Image.fromarray(np.uint8(output))
152 |             # name = name[0].split('/')[-1]
153 |             # # pred_dir = '/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes/pseudo_sfdaseg_so'
154 |             # pred_dir = '/home/xiaoqiguo2/SimT/result_SFDA'
155 |             # output.save('%s/%s' % (pred_dir, name))
156 |             # output_col.save('%s/%s_color.png' % (pred_dir, name.split('.')[0]))
157 | 
158 |     mIoUs = per_class_iu(hist)
159 |     for ind_class in range(num_classes):
160 |         print('===>' + name_classes[ind_class] + ':\t' + str(round(mIoUs[ind_class] * 100, 2)))
161 |     print('===> mIoU: ' + str(round(np.nanmean(mIoUs) * 100, 2)))
162 |     return round(np.nanmean(mIoUs) * 100, 2)
163 | 
164 | 
165 | def evaluate_warmup(seg_model, pred_dir=None, devkit_dir='/home/xiaoqiguo2/SimT/dataset/cityscapes_list', post=False):
166 |     """Create the model and start the evaluation process."""
167 | 
168 |     # if not os.path.exists(pred_dir):
169 |     #     os.makedirs(pred_dir)
170 |     device = torch.device("cuda")
171 | 
172 |     testloader = data.DataLoader(cityscapesDataSet(DATA_DIRECTORY, DATA_LIST_PATH, crop_size=(1024, 512), mean=IMG_MEAN, scale=False, mirror=False, set=SET),
173 |                                  batch_size=1, shuffle=False, pin_memory=True)
174 |     testloader_640 = data.DataLoader(cityscapesDataSet(DATA_DIRECTORY, DATA_LIST_PATH, crop_size=(1280, 640), mean=IMG_MEAN, scale=False, mirror=False, set=SET),
175 |                                      batch_size=1, shuffle=False, pin_memory=True)
176 | 
177 |     interp = nn.Upsample(size=(1024, 2048), mode='bilinear', align_corners=True)
178 |     print('Evaluate for testing data')
179 | 
180 |     with open(join(devkit_dir, 'info.json'), 'r') as fp:
181 |         info = json.load(fp)
182 |     num_classes = np.int(info['classes'])
183 |     name_classes = np.array(info['label'], dtype=np.str)
184 |     mapping = np.array(info['label2train'], dtype=np.int)
185 |     hist = np.zeros((num_classes, num_classes))
186 | 
187 |     seg_model.eval()
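    # Note (added for clarity): evaluate_simt() above ensembles two input
    # scales, summing the upsampled logits of the 1024x512 and 1280x640 crops
    # before taking the argmax. evaluate_warmup() below builds the same two
    # dataloaders but only runs inference on the 1024x512 input.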
188 |     with torch.no_grad():
189 |         for index, (batch, batch_640) in enumerate(zip(testloader, testloader_640)):
190 |             image, _, name = batch
191 |             image = image.to(device)
192 | 
193 |             image_640, _, name = batch_640
194 |             image_640 = image_640.to(device)
195 | 
196 |             output1, output2 = seg_model(image)
197 |             output = interp(output2).cpu().data[0].numpy()
198 |             del output1
199 |             del output2
200 | 
201 |             output = output.transpose(1,2,0)
202 |             output = np.asarray(np.argmax(output, axis=2))
203 | 
204 |             gt_dir ='/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes/label'
205 |             gt_path = '%s/%s' % (gt_dir, name[0].split('leftImg8bit')[0]+'gtFine_labelIds.png')
206 | 
207 |             label = np.array(Image.open(gt_path))
208 |             label = label_mapping(label, mapping)
209 |             if len(label.flatten()) != len(output.flatten()):
210 |                 print('Skipping: len(gt) = {:d}, len(pred) = {:d}, {:s}'.format(len(label.flatten()), len(output.flatten()), gt_path))
211 |                 continue
212 |             hist += fast_hist(label.flatten(), output.flatten(), num_classes)
213 | 
214 |             # output_col = colorize_mask(np.uint8(output))
215 |             # output = Image.fromarray(np.uint8(output))
216 |             # name = name[0].split('/')[-1]
217 |             # pred_dir = '/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes/pseudo_adapt_warmup'
218 |             # output.save('%s/%s' % (pred_dir, name))
219 |             # output_col.save('%s/%s_color.png' % (pred_dir, name.split('.')[0]))
220 | 
221 |     mIoUs = per_class_iu(hist)
222 |     for ind_class in range(num_classes):
223 |         print('===>' + name_classes[ind_class] + ':\t' + str(round(mIoUs[ind_class] * 100, 2)))
224 |     print('===> mIoU: ' + str(round(np.nanmean(mIoUs) * 100, 2)))
225 |     return round(np.nanmean(mIoUs) * 100, 2)
226 | 
--------------------------------------------------------------------------------
/tools/test.py:
--------------------------------------------------------------------------------
1 | import _init_paths
2 | import argparse
3 | import torch
4 | import torch.nn as nn
5 | from torch.utils import data, model_zoo
6 | import numpy as np
7 | import pickle
8 | from torch.autograd import Variable
9 | import torch.optim as optim
10 | import scipy.misc
11 | import torch.backends.cudnn as cudnn
12 | import torch.nn.functional as F
13 | import sys
14 | import os
15 | import os.path as osp
16 | import random
17 | 
18 | from model.deeplab_multi import DeeplabMulti, sig_NTM, sig_W
19 | # from model.deeplab import Res_Deeplab as DeeplabMulti
20 | from utils.loss import CrossEntropy2d, EntropyLoss
21 | from dataset.gta5_dataset import GTA5DataSet
22 | from dataset.cityscapes_dataset import cityscapesDataSet, cityscapesPseudo
23 | 
24 | import time
25 | import datetime
26 | import itertools
27 | import pandas as pd
28 | import _init_paths
29 | from evaluate_cityscapes import evaluate_simt, evaluate_warmup
30 | import matplotlib.pyplot as plt
31 | plt.switch_backend('agg')
32 | 
33 | 
34 | IMG_MEAN = np.array((104.00698793, 116.66876762, 122.67891434), dtype=np.float32)
35 | 
36 | MODEL = 'DeepLab'
37 | BATCH_SIZE = 1
38 | ITER_SIZE = 1
39 | NUM_WORKERS = 4
40 | DATA_DIRECTORY = '/home/xiaoqiguo2/scratch/UDA_Natural/GTA5'
41 | DATA_LIST_PATH = '../dataset/gta5_list/val.txt'
42 | IGNORE_LABEL = 255
43 | INPUT_SIZE = '1024,512'
44 | DATA_DIRECTORY_TARGET = '/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes'
45 | DATA_LIST_PATH_TARGET = '../dataset/cityscapes_list/pseudo_bapa.lst'
46 | INPUT_SIZE_TARGET = '1024,512'
47 | LEARNING_RATE = 2.5e-4
48 | LEARNING_RATE_T = 2.5e-3
49 | MOMENTUM = 0.9
50 | NUM_CLASSES = 19
51 | OPEN_CLASSES = 15
52 | NUM_STEPS = 250000
53 | 
NUM_STEPS_STOP = 40000 # early stopping 54 | POWER = 0.9 55 | RANDOM_SEED = 1234 56 | RESTORE_FROM = '../snapshots/AdaptSeg.pth' 57 | SAVE_PRED_EVERY = 1000 58 | SNAPSHOT_DIR = '../snapshots/AdaptSegNet/' 59 | WEIGHT_DECAY = 0.0005 60 | LOG_DIR = './log/' 61 | Threshold_high = 0.8 62 | Threshold_low = 0.2 63 | lambda_Place = 0.1 64 | lambda_Convex = 0.5 65 | lambda_Volume = 0.1 66 | lambda_Anchor = 0.5 67 | 68 | LAMBDA_SEG = 0.1 69 | TARGET = 'cityscapes' 70 | SET = 'val' 71 | 72 | def get_arguments(): 73 | """Parse all the arguments provided from the CLI. 74 | 75 | Returns: 76 | A list of parsed arguments. 77 | """ 78 | parser = argparse.ArgumentParser(description="DeepLab-ResNet Network") 79 | parser.add_argument("--model", type=str, default=MODEL, 80 | help="available options : DeepLab") 81 | parser.add_argument("--target", type=str, default=TARGET, 82 | help="available options : cityscapes") 83 | parser.add_argument("--batch-size", type=int, default=BATCH_SIZE, 84 | help="Number of images sent to the network in one step.") 85 | parser.add_argument("--iter-size", type=int, default=ITER_SIZE, 86 | help="Accumulate gradients for ITER_SIZE iterations.") 87 | parser.add_argument("--num-workers", type=int, default=NUM_WORKERS, 88 | help="number of workers for multithread dataloading.") 89 | parser.add_argument("--data-dir", type=str, default=DATA_DIRECTORY, 90 | help="Path to the directory containing the source dataset.") 91 | parser.add_argument("--data-list", type=str, default=DATA_LIST_PATH, 92 | help="Path to the file listing the images in the source dataset.") 93 | parser.add_argument("--ignore-label", type=int, default=IGNORE_LABEL, 94 | help="The index of the label to ignore during the training.") 95 | parser.add_argument("--input-size", type=str, default=INPUT_SIZE, 96 | help="Comma-separated string with height and width of source images.") 97 | parser.add_argument("--data-dir-target", type=str, default=DATA_DIRECTORY_TARGET, 98 | help="Path to the directory containing the target dataset.") 99 | parser.add_argument("--data-list-target", type=str, default=DATA_LIST_PATH_TARGET, 100 | help="Path to the file listing the images in the target dataset.") 101 | parser.add_argument("--input-size-target", type=str, default=INPUT_SIZE_TARGET, 102 | help="Comma-separated string with height and width of target images.") 103 | parser.add_argument("--is-training", action="store_true", 104 | help="Whether to updates the running means and variances during the training.") 105 | parser.add_argument("--learning-rate", type=float, default=LEARNING_RATE, 106 | help="Base learning rate for training with polynomial decay.") 107 | parser.add_argument("--learning-rate-T", type=float, default=LEARNING_RATE_T, 108 | help="Base learning rate for discriminator.") 109 | parser.add_argument("--lambda-seg", type=float, default=LAMBDA_SEG, 110 | help="lambda_seg.") 111 | parser.add_argument("--Threshold-high", type=float, default=Threshold_high, 112 | help="Threshold_high") 113 | parser.add_argument("--Threshold-low", type=float, default=Threshold_low, 114 | help="Threshold_low") 115 | parser.add_argument("--lambda-Place", type=float, default=lambda_Place, 116 | help="lambda_Place") 117 | parser.add_argument("--lambda-Convex", type=float, default=lambda_Convex, 118 | help="lambda_Convex") 119 | parser.add_argument("--lambda-Volume", type=float, default=lambda_Volume, 120 | help="lambda_Volume") 121 | parser.add_argument("--lambda-Anchor", type=float, default=lambda_Anchor, 122 | help="lambda_Anchor") 123 | 
parser.add_argument("--momentum", type=float, default=MOMENTUM,
124 |                         help="Momentum component of the optimiser.")
125 |     parser.add_argument("--not-restore-last", action="store_true",
126 |                         help="Whether to not restore last (FC) layers.")
127 |     parser.add_argument("--num-classes", type=int, default=NUM_CLASSES,
128 |                         help="Number of classes to predict (including background).")
129 |     parser.add_argument("--open-classes", type=int, default=OPEN_CLASSES,
130 |                         help="Number of open-set (unknown) classes.")
131 |     parser.add_argument("--num-steps", type=int, default=NUM_STEPS,
132 |                         help="Number of training steps.")
133 |     parser.add_argument("--num-steps-stop", type=int, default=NUM_STEPS_STOP,
134 |                         help="Number of training steps for early stopping.")
135 |     parser.add_argument("--power", type=float, default=POWER,
136 |                         help="Decay parameter to compute the learning rate.")
137 |     parser.add_argument("--random-mirror", action="store_true",
138 |                         help="Whether to randomly mirror the inputs during the training.")
139 |     parser.add_argument("--random-scale", action="store_true",
140 |                         help="Whether to randomly scale the inputs during the training.")
141 |     parser.add_argument("--random-seed", type=int, default=RANDOM_SEED,
142 |                         help="Random seed to have reproducible results.")
143 |     parser.add_argument("--restore-from", type=str, default=RESTORE_FROM,
144 |                         help="Where restore model parameters from.")
145 |     parser.add_argument("--save-pred-every", type=int, default=SAVE_PRED_EVERY,
146 |                         help="Save summaries and checkpoint every often.")
147 |     parser.add_argument("--snapshot-dir", type=str, default=SNAPSHOT_DIR,
148 |                         help="Where to save snapshots of the model.")
149 |     parser.add_argument("--weight-decay", type=float, default=WEIGHT_DECAY,
150 |                         help="Regularisation parameter for L2-loss.")
151 |     parser.add_argument("--gpu", type=int, default=0,
152 |                         help="choose gpu device.")
153 |     parser.add_argument("--set", type=str, default=SET,
154 |                         help="choose adaptation set.")
155 |     parser.add_argument("--log-dir", type=str, default=LOG_DIR,
156 |                         help="Path to the directory of log.")
157 |     return parser.parse_args()
158 | 
159 | 
160 | args = get_arguments()
161 | # if not os.path.exists(args.log_dir):
162 | #     os.makedirs(args.log_dir)
163 | print('Learning_rate: ', args.learning_rate)
164 | print('Learning_rate_T: ', args.learning_rate_T)
165 | print('Open-set class: ', args.open_classes)
166 | print('Threshold_high: ', args.Threshold_high)
167 | print('Threshold_low: ', args.Threshold_low)
168 | print('lambda_Place: ', args.lambda_Place)
169 | print('lambda_Convex: ', args.lambda_Convex)
170 | print('lambda_Volume: ', args.lambda_Volume)
171 | print('lambda_Anchor: ', args.lambda_Anchor)
172 | print('restore_from: ', args.restore_from)
173 | 
174 | def lr_poly(base_lr, iter, max_iter, power):
175 |     return base_lr * ((1 - float(iter) / max_iter) ** (power))
176 | 
177 | def adjust_learning_rate(optimizer, i_iter):
178 |     lr = lr_poly(args.learning_rate, i_iter, args.num_steps, args.power)
179 |     optimizer.param_groups[0]['lr'] = lr
180 |     if len(optimizer.param_groups) > 1:
181 |         optimizer.param_groups[1]['lr'] = lr * 10
182 | 
183 | def adjust_learning_rate_T(optimizer, i_iter):
184 |     lr = lr_poly(args.learning_rate_T, i_iter, args.num_steps, args.power)
185 |     optimizer.param_groups[0]['lr'] = lr
186 | 
187 | def plot_NTM(trans_mat, normalize=True, title='NTM1', cmap=plt.cm.Blues):
188 |     plt.figure()
189 |     plt.imshow(trans_mat, interpolation='nearest', cmap=cmap)
190 |     plt.colorbar()
191 | 
192 |     thresh = trans_mat.max() / 2.
193 |     for i, j in itertools.product(range(trans_mat.shape[0]), range(trans_mat.shape[1])):
194 |         num = '{:.2f}'.format(trans_mat[i, j]) if normalize else int(trans_mat[i, j])
195 |         plt.text(j, i, num,
196 |                  fontsize=2,
197 |                  verticalalignment='center',
198 |                  horizontalalignment="center",
199 |                  color="white" if float(num) > thresh else "black")
200 |     plt.savefig('../NTM_vis/'+title+'.png', transparent=True, dpi=600)
201 | 
202 | def Placeholder_loss(pred, num_classes, open_classes, thres=None):
203 |     seg_loss = torch.nn.CrossEntropyLoss(ignore_index=255)
204 |     #### del maximum elements in prediction ####
205 |     pseudo = torch.argmax(pred, dim=1).long()
206 |     pseudo_onehot = torch.eye(num_classes + open_classes)[pseudo].permute(0, 3, 1, 2).float().cuda()
207 |     zeros = torch.zeros_like(pseudo_onehot)
208 |     ones = torch.ones_like(pseudo_onehot)
209 |     predict = torch.where(pseudo_onehot > zeros, -100. * ones, pred)
210 | 
211 |     #### ignore pixels whose argmax falls in the open-set classes ####
212 |     ones = torch.ones_like(pseudo)
213 |     pseudo1 = torch.where(pseudo < num_classes * ones, pseudo, 255 * ones)
214 |     if thres is not None:
215 |         pred_max = torch.max(torch.softmax(pred.clone().detach(), dim=1), 1)[0]
216 |         pseudo1 = torch.where(pred_max > thres, pseudo1, 255 * ones)
217 |     loss_known = seg_loss(pred, pseudo1)
218 | 
219 |     #### find out the maximum logit within open set classes as the label ####
220 |     predict_open = torch.zeros_like(predict)
221 |     predict_open[:,args.num_classes:,:,:] = predict[:,args.num_classes:,:,:].clone().detach()
222 |     Placeholder_y = torch.argmax(predict_open, dim=1)
223 |     Placeholder_y = torch.where(pseudo1 == 255 * ones, 255 * ones, Placeholder_y)
224 | 
225 |     loss_unknown = seg_loss(predict, Placeholder_y)
226 |     return loss_known + args.lambda_Place * loss_unknown
227 | 
228 | def main():
229 |     cudnn.enabled = True
230 |     gpu = args.gpu
231 | 
232 |     # Create network
233 |     pretrained_dict = torch.load(args.restore_from)
234 |     model = DeeplabMulti(num_classes=args.num_classes, open_classes=args.open_classes, openset=True).cuda()
235 |     net_dict = model.state_dict()
236 |     pretrained_dict = {k: v for k, v in pretrained_dict.items() if (k in net_dict)}
237 |     net_dict.update(pretrained_dict)
238 |     model.load_state_dict(net_dict)
239 | 
240 |     now = datetime.datetime.now()
241 |     print(now.strftime("%Y-%m-%d %H:%M:%S"))
242 |     mIoU = evaluate_simt(model)
243 |     print('Finish Evaluation: '+time.asctime(time.localtime(time.time())))
244 | 
245 | if __name__ == '__main__':
246 |     main()
247 | 
--------------------------------------------------------------------------------
/tools/trainV1_warmup.py:
--------------------------------------------------------------------------------
1 | import _init_paths
2 | import argparse
3 | import torch
4 | import torch.nn as nn
5 | from torch.utils import data, model_zoo
6 | import numpy as np
7 | import pickle
8 | from torch.autograd import Variable
9 | import torch.optim as optim
10 | import scipy.misc
11 | import torch.backends.cudnn as cudnn
12 | import torch.nn.functional as F
13 | import sys
14 | import os
15 | import os.path as osp
16 | import random
17 | 
18 | from model.deeplab_multi import DeeplabMulti
19 | from utils.loss import CrossEntropy2d
20 | from dataset.gta5_dataset import GTA5DataSet
21 | from dataset.cityscapes_dataset import cityscapesDataSet, cityscapesPseudo
22 | 
23 | import time
24 | import datetime
25 | import itertools
26 | import pandas as pd
27 | import _init_paths
28 | from evaluate_cityscapes import 
evaluate_warmup 29 | import matplotlib.pyplot as plt 30 | plt.switch_backend('agg') 31 | 32 | 33 | IMG_MEAN = np.array((104.00698793, 116.66876762, 122.67891434), dtype=np.float32) 34 | 35 | MODEL = 'DeepLab' 36 | BATCH_SIZE = 1 37 | ITER_SIZE = 1 38 | NUM_WORKERS = 4 39 | DATA_DIRECTORY = '/home/xiaoqiguo2/scratch/UDA_Natural/GTA5' 40 | DATA_LIST_PATH = '../dataset/gta5_list/train.txt' 41 | IGNORE_LABEL = 255 42 | INPUT_SIZE = '1024,512' 43 | DATA_DIRECTORY_TARGET = '/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes' 44 | DATA_LIST_PATH_TARGET = '../dataset/cityscapes_list/pseudo_bapa.lst' 45 | INPUT_SIZE_TARGET = '1024,512' 46 | LEARNING_RATE = 6e-4 47 | LEARNING_RATE_T = 6e-3 48 | MOMENTUM = 0.9 49 | NUM_CLASSES = 19 50 | OPEN_CLASSES = 15 51 | NUM_STEPS = 250000 52 | NUM_STEPS_STOP = 150000 # early stopping 53 | POWER = 0.9 54 | RANDOM_SEED = 1234 55 | # RESTORE_FROM = '../snapshots/AdaptSeg.pth' 56 | RESTORE_FROM = '../snapshots/resnet_pretrain.pth' 57 | SAVE_PRED_EVERY = 1000 58 | SNAPSHOT_DIR = '../snapshots/' 59 | WEIGHT_DECAY = 0.0005 60 | LOG_DIR = './log/' 61 | 62 | LAMBDA_SEG = 0.1 63 | TARGET = 'cityscapes' 64 | SET = 'train' 65 | 66 | def get_arguments(): 67 | """Parse all the arguments provided from the CLI. 68 | 69 | Returns: 70 | A list of parsed arguments. 71 | """ 72 | parser = argparse.ArgumentParser(description="DeepLab-ResNet Network") 73 | parser.add_argument("--model", type=str, default=MODEL, 74 | help="available options : DeepLab") 75 | parser.add_argument("--target", type=str, default=TARGET, 76 | help="available options : cityscapes") 77 | parser.add_argument("--batch-size", type=int, default=BATCH_SIZE, 78 | help="Number of images sent to the network in one step.") 79 | parser.add_argument("--iter-size", type=int, default=ITER_SIZE, 80 | help="Accumulate gradients for ITER_SIZE iterations.") 81 | parser.add_argument("--num-workers", type=int, default=NUM_WORKERS, 82 | help="number of workers for multithread dataloading.") 83 | parser.add_argument("--data-dir", type=str, default=DATA_DIRECTORY, 84 | help="Path to the directory containing the source dataset.") 85 | parser.add_argument("--data-list", type=str, default=DATA_LIST_PATH, 86 | help="Path to the file listing the images in the source dataset.") 87 | parser.add_argument("--ignore-label", type=int, default=IGNORE_LABEL, 88 | help="The index of the label to ignore during the training.") 89 | parser.add_argument("--input-size", type=str, default=INPUT_SIZE, 90 | help="Comma-separated string with height and width of source images.") 91 | parser.add_argument("--data-dir-target", type=str, default=DATA_DIRECTORY_TARGET, 92 | help="Path to the directory containing the target dataset.") 93 | parser.add_argument("--data-list-target", type=str, default=DATA_LIST_PATH_TARGET, 94 | help="Path to the file listing the images in the target dataset.") 95 | parser.add_argument("--input-size-target", type=str, default=INPUT_SIZE_TARGET, 96 | help="Comma-separated string with height and width of target images.") 97 | parser.add_argument("--is-training", action="store_true", 98 | help="Whether to updates the running means and variances during the training.") 99 | parser.add_argument("--learning-rate", type=float, default=LEARNING_RATE, 100 | help="Base learning rate for training with polynomial decay.") 101 | parser.add_argument("--learning-rate-T", type=float, default=LEARNING_RATE_T, 102 | help="Base learning rate for discriminator.") 103 | parser.add_argument("--lambda-seg", type=float, default=LAMBDA_SEG, 104 | 
help="lambda_seg.") 105 | parser.add_argument("--momentum", type=float, default=MOMENTUM, 106 | help="Momentum component of the optimiser.") 107 | parser.add_argument("--not-restore-last", action="store_true", 108 | help="Whether to not restore last (FC) layers.") 109 | parser.add_argument("--num-classes", type=int, default=NUM_CLASSES, 110 | help="Number of classes to predict (including background).") 111 | parser.add_argument("--open-classes", type=int, default=OPEN_CLASSES, 112 | help="Number of classes to predict (including background).") 113 | parser.add_argument("--num-steps", type=int, default=NUM_STEPS, 114 | help="Number of training steps.") 115 | parser.add_argument("--num-steps-stop", type=int, default=NUM_STEPS_STOP, 116 | help="Number of training steps for early stopping.") 117 | parser.add_argument("--power", type=float, default=POWER, 118 | help="Decay parameter to compute the learning rate.") 119 | parser.add_argument("--random-mirror", action="store_true", 120 | help="Whether to randomly mirror the inputs during the training.") 121 | parser.add_argument("--random-scale", action="store_true", 122 | help="Whether to randomly scale the inputs during the training.") 123 | parser.add_argument("--random-seed", type=int, default=RANDOM_SEED, 124 | help="Random seed to have reproducible results.") 125 | parser.add_argument("--restore-from", type=str, default=RESTORE_FROM, 126 | help="Where restore model parameters from.") 127 | parser.add_argument("--save-pred-every", type=int, default=SAVE_PRED_EVERY, 128 | help="Save summaries and checkpoint every often.") 129 | parser.add_argument("--snapshot-dir", type=str, default=SNAPSHOT_DIR, 130 | help="Where to save snapshots of the model.") 131 | parser.add_argument("--weight-decay", type=float, default=WEIGHT_DECAY, 132 | help="Regularisation parameter for L2-loss.") 133 | parser.add_argument("--gpu", type=int, default=0, 134 | help="choose gpu device.") 135 | parser.add_argument("--set", type=str, default=SET, 136 | help="choose adaptation set.") 137 | parser.add_argument("--log-dir", type=str, default=LOG_DIR, 138 | help="Path to the directory of log.") 139 | return parser.parse_args() 140 | 141 | 142 | args = get_arguments() 143 | # if not os.path.exists(args.log_dir): 144 | # os.makedirs(args.log_dir) 145 | 146 | def lr_poly(base_lr, iter, max_iter, power): 147 | return base_lr * ((1 - float(iter) / max_iter) ** (power)) 148 | 149 | 150 | def adjust_learning_rate(optimizer, i_iter): 151 | lr = lr_poly(args.learning_rate, i_iter, args.num_steps, args.power) 152 | optimizer.param_groups[0]['lr'] = lr 153 | if len(optimizer.param_groups) > 1: 154 | optimizer.param_groups[1]['lr'] = lr * 10 155 | 156 | def main(): 157 | """Create the model and start the training.""" 158 | print('Start: '+time.asctime(time.localtime(time.time()))) 159 | best_iter = 0 160 | best_mIoU = 0 161 | mIoU = 0 162 | 163 | w, h = map(int, args.input_size.split(',')) 164 | input_size = (w, h) 165 | 166 | w, h = map(int, args.input_size_target.split(',')) 167 | input_size_target = (w, h) 168 | 169 | cudnn.enabled = True 170 | gpu = args.gpu 171 | 172 | pretrained_dict = torch.load(args.restore_from) 173 | # Create network 174 | model = DeeplabMulti(num_classes=args.num_classes).cuda() 175 | net_dict = model.state_dict() 176 | # pretrained_dict = {k: v for k, v in pretrained_dict.items() if (k in net_dict)} 177 | pretrained_dict = {k[6:]: v for k, v in pretrained_dict.items() if (k[6:] in net_dict) and (v.shape==net_dict[k[6:]].shape)} 178 | 
net_dict.update(pretrained_dict) 179 | model.load_state_dict(net_dict) 180 | model.train() 181 | 182 | cudnn.benchmark = True 183 | 184 | if not os.path.exists(args.snapshot_dir): 185 | os.makedirs(args.snapshot_dir) 186 | 187 | targetloader = data.DataLoader(cityscapesPseudo(args.data_dir_target, args.data_list_target, 188 | max_iters=args.num_steps * args.batch_size, 189 | crop_size=input_size_target, 190 | scale=False, mirror=args.random_mirror, mean=IMG_MEAN), 191 | batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers, 192 | pin_memory=True) 193 | 194 | targetloader_iter = enumerate(targetloader) 195 | 196 | optimizer = optim.SGD(model.optim_parameters(args, warmup=True), 197 | lr=args.learning_rate, momentum=args.momentum, weight_decay=args.weight_decay) 198 | optimizer.zero_grad() 199 | 200 | interp = nn.Upsample(size=(input_size[1], input_size[0]), mode='bilinear', align_corners=True) 201 | interp_target = nn.Upsample(size=(input_size_target[1], input_size_target[0]), mode='bilinear', align_corners=True) 202 | 203 | seg_loss = torch.nn.CrossEntropyLoss(ignore_index=255) 204 | for i_iter in range(args.num_steps): 205 | model.train() 206 | loss_seg_value1 = 0 207 | loss_seg_value2 = 0 208 | 209 | optimizer.zero_grad() 210 | adjust_learning_rate(optimizer, i_iter) 211 | 212 | for sub_i in range(args.iter_size): 213 | _, batch = targetloader_iter.__next__() 214 | image_target, label_target, _, name = batch 215 | image_target = image_target.cuda() 216 | label_target = label_target.long().cuda() 217 | 218 | pred1, pred2 = model(image_target) 219 | pred1 = interp_target(pred1) 220 | pred2 = interp_target(pred2) 221 | 222 | loss_seg1 = seg_loss(pred1, label_target) 223 | loss_seg2 = seg_loss(pred2, label_target) 224 | loss = loss_seg2 + args.lambda_seg * loss_seg1 225 | 226 | # proper normalization 227 | loss = loss / args.iter_size 228 | loss.backward() 229 | loss_seg_value1 += loss_seg1.data.cpu().numpy() / args.iter_size 230 | loss_seg_value2 += loss_seg2.data.cpu().numpy() / args.iter_size 231 | 232 | optimizer.step() 233 | 234 | if (i_iter) % 100 == 0: 235 | print( 236 | 'iter = {0:8d}/{1:8d}, loss_seg1 = {2:.3f} loss_seg2 = {3:.3f}'.format( 237 | i_iter, args.num_steps, loss_seg_value1, loss_seg_value2)) 238 | 239 | if i_iter >= args.num_steps_stop - 1: 240 | print('save model ...') 241 | torch.save(model.state_dict(), osp.join(args.snapshot_dir, 'GTA5_' + str(args.num_steps_stop) + '.pth')) 242 | break 243 | 244 | if i_iter % args.save_pred_every == 0 and i_iter != 0: 245 | now = datetime.datetime.now() 246 | print(now.strftime("%Y-%m-%d %H:%M:%S"), ' Begin evaluation on iter {0:8d}/{1:8d} '.format(i_iter, args.num_steps)) 247 | mIoU = evaluate_warmup(model) 248 | print('Finish Evaluation: '+time.asctime(time.localtime(time.time()))) 249 | if mIoU > best_mIoU: 250 | old_file = osp.join(args.snapshot_dir, 'GTA5_BAPA_warmup_iter' + str(best_iter) + '_mIoU' + str(best_mIoU) + '.pth') 251 | if os.path.exists(old_file) is True: 252 | os.remove(old_file) 253 | print('Saving model with mIoU: ', mIoU) 254 | torch.save(model.state_dict(), osp.join(args.snapshot_dir, 'GTA5_BAPA_warmup_iter' + str(i_iter) + '_mIoU' + str(mIoU) + '.pth')) 255 | best_mIoU = mIoU 256 | best_iter = i_iter 257 | 258 | 259 | if __name__ == '__main__': 260 | main() 261 | -------------------------------------------------------------------------------- /tools/trainV2_simt.py: -------------------------------------------------------------------------------- 1 | import _init_paths 2 | import argparse 3 | 
import torch 4 | import torch.nn as nn 5 | from torch.utils import data, model_zoo 6 | import numpy as np 7 | import pickle 8 | from torch.autograd import Variable 9 | import torch.optim as optim 10 | import scipy.misc 11 | import torch.backends.cudnn as cudnn 12 | import torch.nn.functional as F 13 | import sys 14 | import os 15 | import os.path as osp 16 | import random 17 | 18 | from model.deeplab_multi import DeeplabMulti, sig_NTM, sig_W 19 | # from model.discriminator import FCDiscriminator 20 | from utils.loss import CrossEntropy2d, EntropyLoss 21 | from dataset.gta5_dataset import GTA5DataSet 22 | from dataset.cityscapes_dataset import cityscapesDataSet, cityscapesPseudo 23 | 24 | import time 25 | import datetime 26 | import itertools 27 | import pandas as pd 28 | import _init_paths 29 | from evaluate_cityscapes import evaluate_simt 30 | import matplotlib.pyplot as plt 31 | plt.switch_backend('agg') 32 | 33 | 34 | IMG_MEAN = np.array((104.00698793, 116.66876762, 122.67891434), dtype=np.float32) 35 | 36 | MODEL = 'DeepLab' 37 | BATCH_SIZE = 1 38 | ITER_SIZE = 1 39 | NUM_WORKERS = 4 40 | DATA_DIRECTORY = '/home/xiaoqiguo2/scratch/UDA_Natural/GTA5' 41 | DATA_LIST_PATH = '../dataset/gta5_list/train.txt' 42 | IGNORE_LABEL = 255 43 | INPUT_SIZE = '1024,512' 44 | DATA_DIRECTORY_TARGET = '/home/xiaoqiguo2/scratch/UDA_Natural/Cityscapes' 45 | DATA_LIST_PATH_TARGET = '../dataset/cityscapes_list/pseudo_bapa.lst' 46 | INPUT_SIZE_TARGET = '1024,512' 47 | LEARNING_RATE = 2.5e-4 48 | LEARNING_RATE_T = 2.5e-4 49 | MOMENTUM = 0.9 50 | NUM_CLASSES = 19 51 | OPEN_CLASSES = 15 52 | NUM_STEPS = 250000 53 | NUM_STEPS_STOP = 40000 # early stopping 54 | POWER = 0.9 55 | RANDOM_SEED = 1234 56 | RESTORE_FROM = '../snapshots/resnet_pretrain.pth' 57 | SAVE_PRED_EVERY = 1000 58 | SNAPSHOT_DIR = '../snapshots/SimT/' 59 | WEIGHT_DECAY = 0.0005 60 | LOG_DIR = './log/' 61 | Threshold_high = 0.8 62 | Threshold_low = 0.2 63 | lambda_Place = 0.1 64 | lambda_Convex = 0.5 65 | lambda_Volume = 0.1 66 | lambda_Anchor = 0.5 67 | 68 | LAMBDA_SEG = 0.1 69 | TARGET = 'cityscapes' 70 | SET = 'train' 71 | 72 | def get_arguments(): 73 | """Parse all the arguments provided from the CLI. 74 | 75 | Returns: 76 | A list of parsed arguments. 
77 | """ 78 | parser = argparse.ArgumentParser(description="DeepLab-ResNet Network") 79 | parser.add_argument("--model", type=str, default=MODEL, 80 | help="available options : DeepLab") 81 | parser.add_argument("--target", type=str, default=TARGET, 82 | help="available options : cityscapes") 83 | parser.add_argument("--batch-size", type=int, default=BATCH_SIZE, 84 | help="Number of images sent to the network in one step.") 85 | parser.add_argument("--iter-size", type=int, default=ITER_SIZE, 86 | help="Accumulate gradients for ITER_SIZE iterations.") 87 | parser.add_argument("--num-workers", type=int, default=NUM_WORKERS, 88 | help="number of workers for multithread dataloading.") 89 | parser.add_argument("--data-dir", type=str, default=DATA_DIRECTORY, 90 | help="Path to the directory containing the source dataset.") 91 | parser.add_argument("--data-list", type=str, default=DATA_LIST_PATH, 92 | help="Path to the file listing the images in the source dataset.") 93 | parser.add_argument("--ignore-label", type=int, default=IGNORE_LABEL, 94 | help="The index of the label to ignore during the training.") 95 | parser.add_argument("--input-size", type=str, default=INPUT_SIZE, 96 | help="Comma-separated string with height and width of source images.") 97 | parser.add_argument("--data-dir-target", type=str, default=DATA_DIRECTORY_TARGET, 98 | help="Path to the directory containing the target dataset.") 99 | parser.add_argument("--data-list-target", type=str, default=DATA_LIST_PATH_TARGET, 100 | help="Path to the file listing the images in the target dataset.") 101 | parser.add_argument("--input-size-target", type=str, default=INPUT_SIZE_TARGET, 102 | help="Comma-separated string with height and width of target images.") 103 | parser.add_argument("--is-training", action="store_true", 104 | help="Whether to updates the running means and variances during the training.") 105 | parser.add_argument("--learning-rate", type=float, default=LEARNING_RATE, 106 | help="Base learning rate for training with polynomial decay.") 107 | parser.add_argument("--learning-rate-T", type=float, default=LEARNING_RATE_T, 108 | help="Base learning rate for discriminator.") 109 | parser.add_argument("--lambda-seg", type=float, default=LAMBDA_SEG, 110 | help="lambda_seg.") 111 | parser.add_argument("--Threshold-high", type=float, default=Threshold_high, 112 | help="Threshold_high") 113 | parser.add_argument("--Threshold-low", type=float, default=Threshold_low, 114 | help="Threshold_low") 115 | parser.add_argument("--lambda-Place", type=float, default=lambda_Place, 116 | help="lambda_Place") 117 | parser.add_argument("--lambda-Convex", type=float, default=lambda_Convex, 118 | help="lambda_Convex") 119 | parser.add_argument("--lambda-Volume", type=float, default=lambda_Volume, 120 | help="lambda_Volume") 121 | parser.add_argument("--lambda-Anchor", type=float, default=lambda_Anchor, 122 | help="lambda_Anchor") 123 | parser.add_argument("--momentum", type=float, default=MOMENTUM, 124 | help="Momentum component of the optimiser.") 125 | parser.add_argument("--not-restore-last", action="store_true", 126 | help="Whether to not restore last (FC) layers.") 127 | parser.add_argument("--num-classes", type=int, default=NUM_CLASSES, 128 | help="Number of classes to predict (including background).") 129 | parser.add_argument("--open-classes", type=int, default=OPEN_CLASSES, 130 | help="Number of classes to predict (including background).") 131 | parser.add_argument("--num-steps", type=int, default=NUM_STEPS, 132 | help="Number of training 
steps.") 133 | parser.add_argument("--num-steps-stop", type=int, default=NUM_STEPS_STOP, 134 | help="Number of training steps for early stopping.") 135 | parser.add_argument("--power", type=float, default=POWER, 136 | help="Decay parameter to compute the learning rate.") 137 | parser.add_argument("--random-mirror", action="store_true", 138 | help="Whether to randomly mirror the inputs during the training.") 139 | parser.add_argument("--random-scale", action="store_true", 140 | help="Whether to randomly scale the inputs during the training.") 141 | parser.add_argument("--random-seed", type=int, default=RANDOM_SEED, 142 | help="Random seed to have reproducible results.") 143 | parser.add_argument("--restore-from", type=str, default=RESTORE_FROM, 144 | help="Where restore model parameters from.") 145 | parser.add_argument("--save-pred-every", type=int, default=SAVE_PRED_EVERY, 146 | help="Save summaries and checkpoint every often.") 147 | parser.add_argument("--snapshot-dir", type=str, default=SNAPSHOT_DIR, 148 | help="Where to save snapshots of the model.") 149 | parser.add_argument("--weight-decay", type=float, default=WEIGHT_DECAY, 150 | help="Regularisation parameter for L2-loss.") 151 | parser.add_argument("--gpu", type=int, default=0, 152 | help="choose gpu device.") 153 | parser.add_argument("--set", type=str, default=SET, 154 | help="choose adaptation set.") 155 | parser.add_argument("--log-dir", type=str, default=LOG_DIR, 156 | help="Path to the directory of log.") 157 | return parser.parse_args() 158 | 159 | 160 | args = get_arguments() 161 | # if not os.path.exists(args.log_dir): 162 | # os.makedirs(args.log_dir) 163 | print('Leanring_rate: ', args.learning_rate) 164 | print('Leanring_rate_T: ', args.learning_rate_T) 165 | print('Open-set class: ', args.open_classes) 166 | print('Threshold_high: ', args.Threshold_high) 167 | print('Threshold_low: ', args.Threshold_low) 168 | print('lambda_Place: ', args.lambda_Place) 169 | print('lambda_Convex: ', args.lambda_Convex) 170 | print('lambda_Volume: ', args.lambda_Volume) 171 | print('lambda_Anchor: ', args.lambda_Anchor) 172 | print('restore_from: ', args.restore_from) 173 | 174 | def lr_poly(base_lr, iter, max_iter, power): 175 | return base_lr * ((1 - float(iter) / max_iter) ** (power)) 176 | 177 | def adjust_learning_rate(optimizer, i_iter): 178 | lr = lr_poly(args.learning_rate, i_iter, args.num_steps, args.power) 179 | optimizer.param_groups[0]['lr'] = lr 180 | if len(optimizer.param_groups) > 1: 181 | optimizer.param_groups[1]['lr'] = lr * 10 182 | 183 | def adjust_learning_rate_T(optimizer, i_iter): 184 | lr = lr_poly(args.learning_rate_T, i_iter, args.num_steps, args.power) 185 | optimizer.param_groups[0]['lr'] = lr 186 | 187 | def plot_NTM(trans_mat, normalize=True, title='NTM1', cmap=plt.cm.Blues): 188 | plt.figure() 189 | plt.imshow(trans_mat, interpolation='nearest', cmap=cmap) 190 | plt.colorbar() 191 | 192 | thresh = trans_mat.max() / 2. 
193 |     for i, j in itertools.product(range(trans_mat.shape[0]), range(trans_mat.shape[1])):
194 |         num = '{:.2f}'.format(trans_mat[i, j]) if normalize else int(trans_mat[i, j])
195 |         plt.text(j, i, num,
196 |                  fontsize=2,
197 |                  verticalalignment='center',
198 |                  horizontalalignment="center",
199 |                  color="white" if float(num) > thresh else "black")
200 |     plt.savefig('../NTM_vis/'+title+'.png', transparent=True, dpi=600)
201 | 
202 | def Placeholder_loss(pred, num_classes, open_classes, thres=None):
203 |     seg_loss = torch.nn.CrossEntropyLoss(ignore_index=255)
204 |     #### del maximum elements in prediction ####
205 |     pseudo = torch.argmax(pred, dim=1).long()
206 |     pseudo_onehot = torch.eye(num_classes + open_classes)[pseudo].permute(0, 3, 1, 2).float().cuda()
207 |     zeros = torch.zeros_like(pseudo_onehot)
208 |     ones = torch.ones_like(pseudo_onehot)
209 |     predict = torch.where(pseudo_onehot > zeros, -1000. * ones, pred)
210 | 
211 |     #### ignore pixels whose argmax falls in the open-set classes ####
212 |     ones = torch.ones_like(pseudo)
213 |     pseudo1 = torch.where(pseudo < num_classes * ones, pseudo, 255 * ones)
214 |     if thres is not None:
215 |         pred_max = torch.max(torch.softmax(pred.clone().detach(), dim=1), 1)[0]
216 |         pseudo1 = torch.where(pred_max > thres, pseudo1, 255 * ones)
217 |     loss_known = seg_loss(pred, pseudo1)
218 | 
219 |     #### find out the maximum logit within open set classes as the label ####
220 |     predict_open = torch.zeros_like(predict)
221 |     predict_open[:,args.num_classes:,:,:] = predict[:,args.num_classes:,:,:].clone().detach()
222 |     Placeholder_y = torch.argmax(predict_open, dim=1)
223 |     Placeholder_y = torch.where(pseudo1 == 255 * ones, 255 * ones, Placeholder_y)
224 | 
225 |     # yy = torch.where(pseudo1 == 255 * ones, (num_classes + open_classes) * ones, Placeholder_y)
226 |     # Placeholder_y_onehot = torch.eye(num_classes + open_classes + 1)[yy].permute(0, 3, 1, 2).float().cuda()[:,:(num_classes + open_classes),:,:]
227 |     # predict[:,args.num_classes:,:,:] = torch.where(Placeholder_y_onehot > zeros, predict, -1000. 
* ones)[:,args.num_classes:,:,:] 228 | 229 | loss_unknown = seg_loss(predict, Placeholder_y) 230 | return loss_known + args.lambda_Place * loss_unknown 231 | 232 | def main(): 233 | """Create the model and start the training.""" 234 | print('Start: '+time.asctime(time.localtime(time.time()))) 235 | best_iter = 0 236 | best_mIoU = 0 237 | mIoU = 0 238 | 239 | w, h = map(int, args.input_size.split(',')) 240 | input_size = (w, h) 241 | 242 | w, h = map(int, args.input_size_target.split(',')) 243 | input_size_target = (w, h) 244 | 245 | cudnn.enabled = True 246 | gpu = args.gpu 247 | 248 | pretrained_dict = torch.load(args.restore_from) 249 | # Create network 250 | model = DeeplabMulti(num_classes=args.num_classes, open_classes=args.open_classes, openset=True).cuda() 251 | net_dict = model.state_dict() 252 | pretrained_dict = {k: v for k, v in pretrained_dict.items() if (k in net_dict)} 253 | # pretrained_dict = {k[6:]: v for k, v in pretrained_dict.items() if (k[6:] in net_dict) and (v.shape==net_dict[k[6:]].shape)} 254 | net_dict.update(pretrained_dict) 255 | model.load_state_dict(net_dict) 256 | model.train() 257 | 258 | # Create fixed network 259 | pretrained_dict = torch.load(args.restore_from) 260 | fixed_model = DeeplabMulti(num_classes=args.num_classes).cuda() 261 | net_dict = fixed_model.state_dict() 262 | pretrained_dict = {k: v for k, v in pretrained_dict.items() if (k in net_dict)} 263 | net_dict.update(pretrained_dict) 264 | fixed_model.load_state_dict(net_dict) 265 | fixed_model.eval() 266 | for param in fixed_model.parameters(): 267 | param.requires_grad = False 268 | 269 | # Create NTM 270 | NTM1 = sig_NTM(args.num_classes, args.open_classes) 271 | optimizer_t1 = optim.Adam(NTM1.parameters(), lr=args.learning_rate_T, weight_decay=0) 272 | 273 | NTM2 = sig_NTM(args.num_classes, args.open_classes) 274 | optimizer_t2 = optim.Adam(NTM2.parameters(), lr=args.learning_rate_T, weight_decay=0) 275 | 276 | NTM_W1 = sig_W(args.num_classes, args.open_classes) 277 | optimizer_w1 = optim.Adam(NTM_W1.parameters(), lr=args.learning_rate_T, weight_decay=0) 278 | 279 | NTM_W2 = sig_W(args.num_classes, args.open_classes) 280 | optimizer_w2 = optim.Adam(NTM_W2.parameters(), lr=args.learning_rate_T, weight_decay=0) 281 | 282 | cudnn.benchmark = True 283 | 284 | if not os.path.exists(args.snapshot_dir): 285 | os.makedirs(args.snapshot_dir) 286 | 287 | targetloader = data.DataLoader(cityscapesPseudo(args.data_dir_target, args.data_list_target, 288 | max_iters=args.num_steps * args.batch_size, 289 | crop_size=input_size_target, 290 | scale=False, mirror=args.random_mirror, mean=IMG_MEAN), 291 | batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers, 292 | pin_memory=True) 293 | 294 | targetloader_iter = enumerate(targetloader) 295 | 296 | optimizer = optim.SGD(model.optim_parameters(args), 297 | lr=args.learning_rate, momentum=args.momentum, weight_decay=args.weight_decay) 298 | optimizer.zero_grad() 299 | 300 | interp = nn.Upsample(size=(input_size[1], input_size[0]), mode='bilinear', align_corners=True) 301 | interp_target = nn.Upsample(size=(input_size_target[1], input_size_target[0]), mode='bilinear', align_corners=True) 302 | 303 | seg_loss = torch.nn.CrossEntropyLoss(ignore_index=255) 304 | Tseg_loss = CrossEntropy2d(is_softmax=False).cuda() 305 | loss_mse = torch.nn.MSELoss(reduction='sum').cuda() 306 | Info_loss = EntropyLoss().cuda() 307 | for i_iter in range(args.num_steps): 308 | model.train() 309 | loss_seg_p1 = 0 310 | loss_seg_p2 = 0 311 | loss_seg_y1 = 0 312 | 
loss_seg_y2 = 0 313 | 314 | optimizer.zero_grad() 315 | adjust_learning_rate(optimizer, i_iter) 316 | 317 | optimizer_t1.zero_grad() 318 | optimizer_t2.zero_grad() 319 | optimizer_w1.zero_grad() 320 | optimizer_w2.zero_grad() 321 | adjust_learning_rate_T(optimizer_t1, i_iter) 322 | adjust_learning_rate_T(optimizer_t2, i_iter) 323 | adjust_learning_rate_T(optimizer_w1, i_iter) 324 | adjust_learning_rate_T(optimizer_w2, i_iter) 325 | 326 | zeros = torch.zeros(args.num_classes+args.open_classes, args.num_classes).cuda() 327 | for iter in range(10): 328 | ## optimize weight ### 329 | T1 = NTM1() 330 | T2 = NTM2() 331 | W1 = NTM_W1() 332 | W2 = NTM_W2() 333 | optimizer_w1.zero_grad() 334 | optimizer_w2.zero_grad() 335 | 336 | NTM_loss = (loss_mse(W1.mm(T1), zeros) + loss_mse(W2.mm(T2), zeros)) 337 | NTM_loss.backward(retain_graph=True) 338 | optimizer_w1.step() 339 | optimizer_w2.step() 340 | 341 | for sub_i in range(args.iter_size): 342 | T1 = NTM1() 343 | T2 = NTM2() 344 | 345 | _, batch = targetloader_iter.__next__() 346 | image_target, label_target, _, name = batch 347 | image_target = image_target.cuda() 348 | label_target = label_target.long().cuda() 349 | 350 | ##### Generate pseudo label ##### 351 | with torch.no_grad(): 352 | fixed_model.load_state_dict(net_dict) 353 | output1, output2 = fixed_model(image_target) 354 | labelC = interp_target(torch.softmax(output2.clone(), dim=1)) 355 | labelC_max = torch.max(labelC, 1) 356 | labelC_argmax = torch.argmax(labelC, dim=1).float() 357 | labelC_flat = labelC.permute(0,2,3,1).view(-1, args.num_classes) 358 | thres = args.Threshold_high 359 | labelC = torch.where(labelC_max[0] > thres, labelC_argmax, 255. * torch.ones_like(labelC_argmax)) 360 | thres = args.Threshold_low 361 | labelC = torch.where(labelC_max[0] < thres, args.num_classes * torch.ones_like(labelC_argmax), labelC) 362 | Conf_label_target = torch.from_numpy(labelC.detach().clone().cpu().numpy()).long().cuda() 363 | del labelC 364 | del output1 365 | del output2 366 | 367 | ############################### 368 | ##### Train target images ##### 369 | ############################### 370 | pred1, pred2 = model(image_target) 371 | pred1 = interp_target(pred1) 372 | pred2 = interp_target(pred2) 373 | 374 | ######## Anchor loss ######## 375 | pseudo_flat1 = pred1.clone().permute(0,2,3,1).view(-1, args.num_classes+args.open_classes).detach() 376 | Anchor_index = torch.argmax(pseudo_flat1, dim=0) 377 | Exist_label = torch.unique(torch.argmax(pseudo_flat1, dim=1)) 378 | Anchor1 = labelC_flat[Anchor_index] 379 | NTM_Anchor_loss = loss_mse(T1[Exist_label], Anchor1[Exist_label]) 380 | pseudo_flat2 = pred2.clone().permute(0,2,3,1).view(-1, args.num_classes+args.open_classes).detach() 381 | Anchor_index = torch.argmax(pseudo_flat2, dim=0) 382 | Exist_label = torch.unique(torch.argmax(pseudo_flat2, dim=1)) 383 | Anchor2 = labelC_flat[Anchor_index] 384 | NTM_Anchor_loss += loss_mse(T2[Exist_label], Anchor2[Exist_label]) 385 | 386 | ######## Class posterior constraint ######## 387 | pseudo = torch.argmax(pred2.clone(), dim=1).detach() 388 | ones = torch.ones_like(Conf_label_target) 389 | zeros = torch.zeros_like(Conf_label_target) 390 | mask = torch.where(Conf_label_target == args.num_classes * ones, ones, zeros) 391 | pseudo1 = mask * pseudo 392 | pseudo1 = torch.where(pseudo1 >= args.num_classes * ones, pseudo1, 255 * ones) 393 | Conf_label_target = torch.where(Conf_label_target == args.num_classes * ones, pseudo1, Conf_label_target) 394 | loss_p1 = seg_loss(pred1, Conf_label_target) 395 | loss_p2 
= seg_loss(pred2, Conf_label_target) 396 | 397 | ######## Placeholder loss ######## 398 | Place_loss = args.lambda_seg * Placeholder_loss(pred1, args.num_classes, args.open_classes, thres=args.Threshold_high) 399 | Place_loss += Placeholder_loss(pred2, args.num_classes, args.open_classes, thres=args.Threshold_high) 400 | 401 | ######## Noise class posterior constraint ######## 402 | pred1 = torch.softmax(interp_target(pred1), dim=1).permute(0, 2, 3, 1).contiguous().view(-1, args.num_classes + args.open_classes) 403 | pred1 = torch.mm(pred1, T1).view(args.batch_size, h, w, args.num_classes).permute(0, 3, 1, 2) 404 | 405 | pred2 = torch.softmax(interp_target(pred2), dim=1).permute(0, 2, 3, 1).contiguous().view(-1, args.num_classes + args.open_classes) 406 | pred2 = torch.mm(pred2, T2).view(args.batch_size, h, w, args.num_classes).permute(0, 3, 1, 2) 407 | 408 | loss_y1 = Tseg_loss(pred1, label_target) 409 | loss_y2 = Tseg_loss(pred2, label_target) 410 | 411 | ## optimze NTM ### 412 | W1 = NTM_W1() 413 | W2 = NTM_W2() 414 | zeros = torch.zeros(args.num_classes+args.open_classes, args.num_classes).cuda() 415 | NTM_Convex_loss = 0. - (loss_mse(W1.mm(T1), zeros) + loss_mse(W2.mm(T2), zeros)) 416 | 417 | NTM_Volume_loss = torch.log(torch.sqrt(torch.abs(torch.linalg.det( T1.transpose(1,0).mm(T1) )))) 418 | NTM_Volume_loss += torch.log(torch.sqrt(torch.abs(torch.linalg.det( T2.transpose(1,0).mm(T2) )))) 419 | 420 | if torch.isinf(NTM_Volume_loss) or torch.isnan(NTM_Volume_loss): 421 | NTM_Volume_loss = 0. 422 | 423 | loss_target = loss_p2 + loss_y2 + args.lambda_seg * loss_p1 + args.lambda_seg * loss_y1 424 | loss = Place_loss + loss_target + args.lambda_Convex * NTM_Convex_loss + args.lambda_Volume * NTM_Volume_loss + args.lambda_Anchor * NTM_Anchor_loss 425 | 426 | # proper normalization 427 | loss = loss / args.iter_size 428 | loss.backward() 429 | loss_seg_p1 += loss_p1.data.cpu().numpy() / args.iter_size 430 | loss_seg_p2 += loss_p2.data.cpu().numpy() / args.iter_size 431 | loss_seg_y1 += loss_y1.data.cpu().numpy() / args.iter_size 432 | loss_seg_y2 += loss_y2.data.cpu().numpy() / args.iter_size 433 | 434 | optimizer.step() 435 | optimizer_t1.step() 436 | optimizer_t2.step() 437 | 438 | if (i_iter) % 100 == 0: 439 | print( 440 | 'iter = {0:8d}/{1:8d}, loss_seg_p = {2:.3f} loss_seg_y = {3:.3f} Convex = {4:.3f} Volume = {5:.3f} Anchor = {6:.3f} Place_loss = {7:.3f}'.format( 441 | i_iter, args.num_steps, loss_seg_p1 + loss_seg_p2, loss_seg_y1 + loss_seg_y2, NTM_Convex_loss, NTM_Volume_loss, NTM_Anchor_loss, Place_loss)) 442 | 443 | # if (i_iter) % 5000 == 0: 444 | # plot_NTM(NTM1().detach().cpu().numpy(), normalize=True, title='NTM1_'+str(i_iter), cmap=plt.cm.Blues) 445 | # plot_NTM(NTM2().detach().cpu().numpy(), normalize=True, title='NTM2_'+str(i_iter), cmap=plt.cm.Blues) 446 | 447 | if i_iter >= args.num_steps_stop - 1: 448 | print('save model ...') 449 | torch.save(model.state_dict(), osp.join(args.snapshot_dir, 'GTA5_' + str(args.num_steps_stop) + '.pth')) 450 | break 451 | 452 | if i_iter % args.save_pred_every == 0 and i_iter != 0: 453 | now = datetime.datetime.now() 454 | print(now.strftime("%Y-%m-%d %H:%M:%S"), ' Begin evaluation on iter {0:8d}/{1:8d} '.format(i_iter, args.num_steps)) 455 | mIoU = evaluate_simt(model) 456 | print('Finish Evaluation: '+time.asctime(time.localtime(time.time()))) 457 | if mIoU > best_mIoU: 458 | old_file = osp.join(args.snapshot_dir, 'GTA5_iter' + str(best_iter) + '_mIoU' + str(best_mIoU) + '.pth') 459 | if os.path.exists(old_file) is True: 460 | 
os.remove(old_file)
461 |                 print('Saving model with mIoU: ', mIoU)
462 |                 torch.save(model.state_dict(), osp.join(args.snapshot_dir, 'GTA5_iter' + str(i_iter) + '_mIoU' + str(mIoU) + '.pth'))
463 |                 best_mIoU = mIoU
464 |                 best_iter = i_iter
465 | 
466 | 
467 | if __name__ == '__main__':
468 |     main()
469 | 
--------------------------------------------------------------------------------
/utils/loss.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn.functional as F
3 | import torch.nn as nn
4 | from torch.autograd import Variable
5 | 
6 | class CrossEntropy2d(nn.Module):
7 | 
8 |     def __init__(self, size_average=True, ignore_label=255, is_softmax=True):
9 |         super(CrossEntropy2d, self).__init__()
10 |         self.size_average = size_average
11 |         self.ignore_label = ignore_label
12 |         self.is_softmax = is_softmax
13 | 
14 |     def forward(self, predict, target, weight=None):
15 |         """
16 |             Args:
17 |                 predict:(n, c, h, w)
18 |                 target:(n, h, w)
19 |                 weight (Tensor, optional): a manual rescaling weight given to each class.
20 |                     If given, has to be a Tensor of size "nclasses"
21 |         """
22 |         assert not target.requires_grad
23 |         assert predict.dim() == 4
24 |         assert target.dim() == 3
25 |         assert predict.size(0) == target.size(0), "{0} vs {1} ".format(predict.size(0), target.size(0))
26 |         assert predict.size(2) == target.size(1), "{0} vs {1} ".format(predict.size(2), target.size(1))
27 |         assert predict.size(3) == target.size(2), "{0} vs {1} ".format(predict.size(3), target.size(2))
28 |         n, c, h, w = predict.size()
29 |         target_mask = (target >= 0) * (target != self.ignore_label)
30 |         target = target[target_mask]
31 |         if not target.data.dim():
32 |             return Variable(torch.zeros(1))
33 |         predict = predict.transpose(1, 2).transpose(2, 3).contiguous()
34 |         predict = predict[target_mask.view(n, h, w, 1).repeat(1, 1, 1, c)].view(-1, c)
35 |         if self.is_softmax:
36 |             loss = F.cross_entropy(predict, target, weight=weight, reduction='mean')
37 |         else:
38 |             log_out = torch.log(predict)
39 |             loss = F.nll_loss(log_out, target, weight=weight, reduction='mean')
40 |         return loss
41 | 
42 | class EntropyLoss(nn.Module):
43 |     def __init__(self):
44 |         super(EntropyLoss, self).__init__()
45 | 
46 |     def forward(self, x):
47 |         b = F.softmax(x, dim=1) * F.log_softmax(x, dim=1)
48 |         b = -1.0 * b.sum(1)
49 |         return b.mean()
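# --- Usage sketch (illustrative only; the tensor names and shapes below are
# --- assumptions for exposition, not part of this file's API) ---
# trainV2_simt.py pairs these losses with the SimT transition matrix T roughly
# as follows: the (num_classes + open_classes)-way softmax output is multiplied
# by T to map clean-class posteriors into the noisy-label space, and
# CrossEntropy2d(is_softmax=False) then applies NLL to the log of those
# probabilities instead of re-softmaxing logits:
#   probs = torch.softmax(pred, dim=1)                          # (n, c+o, h, w)
#   flat = probs.permute(0, 2, 3, 1).contiguous().view(-1, c + o)
#   noisy = flat.mm(T).view(n, h, w, c).permute(0, 3, 1, 2)     # (n, c, h, w)
#   loss = CrossEntropy2d(is_softmax=False)(noisy, noisy_label)
# EntropyLoss returns the mean per-pixel entropy of a logit map and can serve
# as an unsupervised regularizer on unlabeled target pixels.
--------------------------------------------------------------------------------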