├── .gitignore ├── Adapt_Road_Scene ├── README.md ├── data │ ├── Cityscapes.txt │ ├── Rio.txt │ ├── Rio_val.txt │ ├── Roma.txt │ ├── Roma_val.txt │ ├── Taipei.txt │ ├── Taipei_val.txt │ ├── Tokyo.txt │ ├── Tokyo_val.txt │ ├── cscape_val.txt │ ├── synthia_train.txt │ ├── synthia_train_full.txt │ └── synthia_val.txt ├── models │ ├── model.py │ ├── model_static_normalized.py │ └── zero_gradient.py ├── scripts │ ├── cmd_for_DL.sh │ ├── download_demo.sh │ ├── download_src.sh │ ├── infer_city2ours.sh │ ├── infer_syn2city.sh │ ├── train_city2ours.sh │ └── train_syn2city.sh └── tools │ ├── cityscapesscripts │ ├── __init__.py │ ├── annotation │ │ ├── cityscapesLabelTool.py │ │ └── icons │ │ │ ├── back.png │ │ │ ├── checked6.png │ │ │ ├── checked6_red.png │ │ │ ├── clearpolygon.png │ │ │ ├── deleteobject.png │ │ │ ├── exit.png │ │ │ ├── filepath.png │ │ │ ├── help19.png │ │ │ ├── highlight.png │ │ │ ├── layerdown.png │ │ │ ├── layerup.png │ │ │ ├── minus.png │ │ │ ├── modify.png │ │ │ ├── newobject.png │ │ │ ├── next.png │ │ │ ├── open.png │ │ │ ├── play.png │ │ │ ├── plus.png │ │ │ ├── save.png │ │ │ ├── screenshot.png │ │ │ ├── screenshotToggle.png │ │ │ ├── shuffle.png │ │ │ ├── undo.png │ │ │ └── zoom.png │ ├── evaluation │ │ ├── __init__.py │ │ ├── addToConfusionMatrix.pyx │ │ ├── addToConfusionMatrix_impl.c │ │ ├── evalInstanceLevelSemanticLabeling.py │ │ ├── evalPixelLevelSemanticLabeling.py │ │ ├── evalPixelLevelSemanticLabeling.py~ │ │ ├── instance.py │ │ ├── instances2dict.py │ │ └── setup.py │ ├── helpers │ │ ├── __init__.py │ │ ├── annotation.py │ │ ├── csHelpers.py │ │ ├── csHelpers.py~ │ │ └── labels.py │ ├── preparation │ │ ├── __init__.py │ │ ├── createTrainIdInstanceImgs.py │ │ ├── createTrainIdLabelImgs.py │ │ ├── json2instanceImg.py │ │ └── json2labelImg.py │ └── viewer │ │ ├── __init__.py │ │ ├── cityscapesViewer.py │ │ └── icons │ │ ├── back.png │ │ ├── disp.png │ │ ├── exit.png │ │ ├── filepath.png │ │ ├── help19.png │ │ ├── minus.png │ │ ├── next.png │ │ ├── open.png │ │ ├── play.png │ │ ├── plus.png │ │ ├── shuffle.png │ │ └── zoom.png │ ├── data_reader.py │ ├── data_reader_static.py │ ├── evaluationResults │ └── resultPixelLevelSemanticLabeling.json │ ├── infer.py │ └── train_adv.py ├── Adapt_Structured_Output ├── README.md ├── compute_iou.py ├── compute_synthia_iou.py ├── dataset │ ├── Synthia_dataset.py │ ├── Synthia_evaluate_dataset.py │ ├── Synthia_list │ │ ├── evaluate.txt │ │ ├── info.json │ │ ├── label.txt │ │ ├── train.txt │ │ └── val.txt │ ├── __init__.py │ ├── cityscapes_dataset.py │ ├── cityscapes_evaluate_dataset.py │ ├── cityscapes_list │ │ ├── .DS_Store │ │ ├── info.json │ │ ├── label.txt │ │ ├── pretrain.txt │ │ ├── train.txt │ │ ├── val.txt │ │ └── val_new.txt │ ├── gta5_dataset.py │ ├── gta5_list │ │ └── train.txt │ └── ours_dataset.py ├── evaluate_cityscapes.py ├── evaluate_synthia.py ├── model │ ├── __init__.py │ ├── deeplab_multi.py │ └── discriminator.py ├── train_cityscapes_multi.py ├── train_gta2cityscapes_multi.py ├── train_synthia2cityscapes_multi.py ├── train_synthia_multi.py └── utils │ ├── __init__.py │ └── loss.py ├── FCNs_Wild ├── CMakeLists.txt ├── README.md ├── cmake │ ├── FindEigen3.cmake │ └── FindNumPy.cmake ├── data │ ├── Cityscapes.txt │ ├── Rio.txt │ ├── Rio_val.txt │ ├── Roma.txt │ ├── Roma_val.txt │ ├── Taipei.txt │ ├── Taipei_val.txt │ ├── Tokyo.txt │ ├── Tokyo_val.txt │ ├── cscape_val.txt │ ├── synthia_train.txt │ ├── synthia_train_full.txt │ └── synthia_val.txt ├── doc │ └── LICENSE ├── example_image │ ├── Taipei_test.txt │ ├── 
labels │ │ ├── temp.txt │ │ └── val │ │ │ ├── Taipei │ │ │ └── pano_00506_0_0_eval.png │ │ │ └── temp.txt │ └── pano_00506_0_0.png ├── lib │ ├── CMakeLists.txt │ ├── constraintloss │ │ ├── CMakeLists.txt │ │ ├── constraintsoftmax.cpp │ │ └── constraintsoftmax.h │ ├── optimization │ │ ├── CMakeLists.txt │ │ ├── fista.cpp │ │ └── fista.h │ ├── python │ │ ├── CMakeLists.txt │ │ ├── boost.cpp │ │ ├── boost.h │ │ ├── ccnn.cpp │ │ ├── ccnn.h │ │ ├── constraintloss.cpp │ │ ├── constraintloss.h │ │ ├── util.cpp │ │ └── util.h │ └── util │ │ ├── CMakeLists.txt │ │ ├── eigen.cpp │ │ ├── eigen.h │ │ ├── win_util.cpp │ │ └── win_util.h ├── models │ ├── model.py │ └── zero_gradient.py ├── scripts │ ├── cmd_for_DL.sh │ ├── create_img_list.py │ ├── data_path.sh │ ├── download_demo.sh │ ├── download_src.sh │ ├── infer_city2NMD.sh │ ├── infer_syn2city.sh │ ├── train_city2NMD.sh │ └── train_syn2city.sh └── src │ ├── __init__.py │ ├── ccnn.py │ ├── cityscapesscripts │ ├── __init__.py │ ├── annotation │ │ ├── cityscapesLabelTool.py │ │ └── icons │ │ │ ├── back.png │ │ │ ├── checked6.png │ │ │ ├── checked6_red.png │ │ │ ├── clearpolygon.png │ │ │ ├── deleteobject.png │ │ │ ├── exit.png │ │ │ ├── filepath.png │ │ │ ├── help19.png │ │ │ ├── highlight.png │ │ │ ├── layerdown.png │ │ │ ├── layerup.png │ │ │ ├── minus.png │ │ │ ├── modify.png │ │ │ ├── newobject.png │ │ │ ├── next.png │ │ │ ├── open.png │ │ │ ├── play.png │ │ │ ├── plus.png │ │ │ ├── save.png │ │ │ ├── screenshot.png │ │ │ ├── screenshotToggle.png │ │ │ ├── shuffle.png │ │ │ ├── undo.png │ │ │ └── zoom.png │ ├── evaluation │ │ ├── __init__.py │ │ ├── addToConfusionMatrix.pyx │ │ ├── addToConfusionMatrix_impl.c │ │ ├── evalInstanceLevelSemanticLabeling.py │ │ ├── evalPixelLevelSemanticLabeling.py │ │ ├── evalPixelLevelSemanticLabeling.py~ │ │ ├── instance.py │ │ ├── instances2dict.py │ │ └── setup.py │ ├── helpers │ │ ├── __init__.py │ │ ├── annotation.py │ │ ├── csHelpers.py │ │ ├── csHelpers.py~ │ │ └── labels.py │ ├── preparation │ │ ├── __init__.py │ │ ├── createTrainIdInstanceImgs.py │ │ ├── createTrainIdLabelImgs.py │ │ ├── json2instanceImg.py │ │ └── json2labelImg.py │ └── viewer │ │ ├── __init__.py │ │ ├── cityscapesViewer.py │ │ └── icons │ │ ├── back.png │ │ ├── disp.png │ │ ├── exit.png │ │ ├── filepath.png │ │ ├── help19.png │ │ ├── minus.png │ │ ├── next.png │ │ ├── open.png │ │ ├── play.png │ │ ├── plus.png │ │ ├── shuffle.png │ │ └── zoom.png │ ├── custom_grad.py │ ├── data_reader.py │ ├── evaluationResults │ └── resultPixelLevelSemanticLabeling.json │ ├── infer.py │ └── train_adv.py ├── MCD_DA_seg ├── .gitignore ├── README.md ├── adapt_tester.py ├── adapt_trainer.py ├── adapt_trainer_onestep.py ├── argmyparse.py ├── backup │ ├── eval.py │ ├── run_eval.sh │ └── run_src.sh ├── cityscapesscripts │ ├── __init__.py │ ├── annotation │ │ ├── cityscapesLabelTool.py │ │ └── icons │ │ │ ├── back.png │ │ │ ├── checked6.png │ │ │ ├── checked6_red.png │ │ │ ├── clearpolygon.png │ │ │ ├── deleteobject.png │ │ │ ├── exit.png │ │ │ ├── filepath.png │ │ │ ├── help19.png │ │ │ ├── highlight.png │ │ │ ├── layerdown.png │ │ │ ├── layerup.png │ │ │ ├── minus.png │ │ │ ├── modify.png │ │ │ ├── newobject.png │ │ │ ├── next.png │ │ │ ├── open.png │ │ │ ├── play.png │ │ │ ├── plus.png │ │ │ ├── save.png │ │ │ ├── screenshot.png │ │ │ ├── screenshotToggle.png │ │ │ ├── shuffle.png │ │ │ ├── undo.png │ │ │ └── zoom.png │ ├── evaluation │ │ ├── __init__.py │ │ ├── addToConfusionMatrix.pyx │ │ ├── addToConfusionMatrix_impl.c │ │ ├── 
evalInstanceLevelSemanticLabeling.py │ │ ├── evalPixelLevelSemanticLabeling.py │ │ ├── evalPixelLevelSemanticLabeling.py~ │ │ ├── instance.py │ │ ├── instances2dict.py │ │ └── setup.py │ ├── helpers │ │ ├── __init__.py │ │ ├── annotation.py │ │ ├── csHelpers.py │ │ ├── csHelpers.py~ │ │ └── labels.py │ ├── preparation │ │ ├── __init__.py │ │ ├── createTrainIdInstanceImgs.py │ │ ├── createTrainIdLabelImgs.py │ │ ├── json2instanceImg.py │ │ └── json2labelImg.py │ └── viewer │ │ ├── __init__.py │ │ ├── cityscapesViewer.py │ │ └── icons │ │ ├── back.png │ │ ├── disp.png │ │ ├── exit.png │ │ ├── filepath.png │ │ ├── help19.png │ │ ├── minus.png │ │ ├── next.png │ │ ├── open.png │ │ ├── play.png │ │ ├── plus.png │ │ ├── shuffle.png │ │ └── zoom.png ├── data_path │ ├── countries │ │ └── imgs │ │ │ ├── train │ │ │ ├── Rio.txt │ │ │ ├── Roma.txt │ │ │ ├── Taipei.txt │ │ │ └── Tokyo.txt │ │ │ └── val │ │ │ ├── Rio.txt │ │ │ ├── Roma.txt │ │ │ ├── Taipei.txt │ │ │ └── Tokyo.txt │ └── cscapes │ │ ├── gtFine_labelTrainIds │ │ ├── train.txt │ │ └── val.txt │ │ └── leftImg8bit │ │ ├── train.txt │ │ └── val.txt ├── dataset │ ├── city_info.json │ ├── convert_label.py │ ├── gt_coloring.py │ ├── split_gta.py │ └── synthia2cityscapes_info.json ├── datasets.py ├── docs │ ├── LICENSE │ ├── README.md │ ├── _config.yml │ ├── overview.png │ └── result_seg.png ├── evaluationResults │ └── resultPixelLevelSemanticLabeling.json ├── loss.py ├── models │ ├── __init__.py │ ├── dilated_fcn.py │ ├── dilated_resnet.py │ ├── drn.py │ ├── extended_resnet.py │ ├── fcn.py │ ├── grad_reversal.py │ ├── model_util.py │ ├── resnet.py │ └── vgg_fcn.py ├── requirements.txt.py ├── scripts │ ├── cmd_for_DL.sh │ ├── download_demo.sh │ ├── run_eval.sh │ ├── run_test.sh │ ├── run_train_city2Tokyo.sh │ └── run_train_syn2city.sh ├── source_tester.py ├── source_trainer.py ├── test │ └── test_loss.py ├── tools │ ├── __init__.py │ ├── compare_predicted_png.py │ ├── concat_rgb_gt_pred_img.py │ ├── crf.py │ └── visualize_result.py ├── transform.py ├── util.py └── visualize.py └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | #lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | MANIFEST 27 | 28 | # PyInstaller 29 | # Usually these files are written by a python script from a template 30 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
31 | *.manifest 32 | *.spec 33 | 34 | # Installer logs 35 | pip-log.txt 36 | pip-delete-this-directory.txt 37 | 38 | # Unit test / coverage reports 39 | htmlcov/ 40 | .tox/ 41 | .coverage 42 | .coverage.* 43 | .cache 44 | nosetests.xml 45 | coverage.xml 46 | *.cover 47 | .hypothesis/ 48 | .pytest_cache/ 49 | 50 | # Translations 51 | *.mo 52 | *.pot 53 | 54 | # Django stuff: 55 | *.log 56 | local_settings.py 57 | db.sqlite3 58 | 59 | # Flask stuff: 60 | instance/ 61 | .webassets-cache 62 | 63 | # Scrapy stuff: 64 | .scrapy 65 | 66 | # Sphinx documentation 67 | docs/_build/ 68 | 69 | # PyBuilder 70 | target/ 71 | 72 | # Jupyter Notebook 73 | .ipynb_checkpoints 74 | 75 | # pyenv 76 | .python-version 77 | 78 | # celery beat schedule file 79 | celerybeat-schedule 80 | 81 | # SageMath parsed files 82 | *.sage.py 83 | 84 | # Environments 85 | .env 86 | .venv 87 | env/ 88 | venv/ 89 | ENV/ 90 | env.bak/ 91 | venv.bak/ 92 | 93 | # Spyder project settings 94 | .spyderproject 95 | .spyproject 96 | 97 | # Rope project settings 98 | .ropeproject 99 | 100 | # mkdocs documentation 101 | /site 102 | 103 | # mypy 104 | .mypy_cache/ 105 | -------------------------------------------------------------------------------- /Adapt_Road_Scene/README.md: -------------------------------------------------------------------------------- 1 | # No More Discrimination: Cross City Adaptation of Road Scene Segmenters, Implemented in Tensorflow 2 | Paper link: [https://arxiv.org/abs/1704.08509](https://arxiv.org/abs/1704.08509) 3 | 4 | *** 5 | 6 | ## Intro 7 | Tensorflow implementation of the paper for adapting semantic segmentation (A) from the Synthia dataset to the Cityscapes dataset and (B) from the Cityscapes dataset to our dataset. 8 | 9 | ## Installation 10 | * Use Tensorflow 1.1.0 with Python 2 11 | 12 | ## Dataset 13 | 14 | * Download [Cityscapes Dataset](https://www.cityscapes-dataset.com/) 15 | * Download [Synthia Dataset](http://synthia-dataset.com/download-2/) 16 | * Download the subset "SYNTHIA-RAND-CITYSCAPES" 17 | * Download [Our Dataset](https://yihsinchen.github.io/segmentation_adaptation/#Dataset) 18 | * contains four subsets --- Taipei, Tokyo, Roma, Rio --- used as the target domain (only the testing data has annotations) 19 | * Change the data paths in the files under the "./data" folder 20 | 21 | ## Testing 22 | 23 | * Download and test the trained model 24 | 25 | ``` 26 | cd Adapt_Road_Scene 27 | sh scripts/download_demo.sh 28 | sh scripts/infer_city2ours.sh 29 | ``` 30 | 31 | The demo model is Cityscapes-to-Taipei (a subset of our dataset); results will be saved in the `./train_results/` folder, and the evaluated performance is reported (the evaluation code is provided by the Cityscapes dataset). 32 | 33 | 34 | ## Training Examples 35 | * Download the pretrained weights (model trained on the source domain) 36 | ``` 37 | sh scripts/download_src.sh 38 | ``` 39 | * Train the Cityscapes-to-Ours{city} model 40 | 41 | ``` 42 | python ./tools/train_adv.py \ 43 | --weight_path ./pretrained/train_cscape_frontend.npy \ 44 | --city {city_name} \ 45 | --src_data_path ./data/Cityscapes.txt \ 46 | --tgt_data_path ./data/{city_name}.txt \ 47 | --method FullMethod 48 | ``` 49 | 50 | The training scripts for adapting (A) from the Synthia dataset to the Cityscapes dataset and (B) from the Cityscapes dataset to our dataset are provided in "scripts/train_syn2city.sh" and "scripts/train_city2ours.sh".
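Note that the list files under `./data` must point at your local copies of the datasets before training or testing. A minimal sketch for regenerating such a list is shown below; it assumes one image path per line, which may not match every list's exact format (some lists may pair image and label paths), so verify against the existing `.txt` files. The folder path and `write_list` helper are illustrative, not part of the repo.

```
# Hypothetical helper: rebuild a ./data list file from a local image folder.
import glob
import os

def write_list(image_dir, out_file, ext="png"):
    # Collect all images under image_dir and write one path per line.
    paths = sorted(glob.glob(os.path.join(image_dir, "*." + ext)))
    with open(out_file, "w") as f:
        for p in paths:
            f.write(p + "\n")

write_list("/path/to/NTHU_512/Taipei/images", "./data/Taipei.txt")
```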
51 | 52 | 53 | ## Reference code 54 | [https://github.com/MarvinTeichmann/tensorflow-fcn](https://github.com/MarvinTeichmann/tensorflow-fcn) 55 | -------------------------------------------------------------------------------- /Adapt_Road_Scene/models/zero_gradient.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | from tensorflow.python.framework import ops 3 | 4 | # Identity in the forward pass, zero gradient in the backward pass (used to cut gradient flow during adversarial training). 5 | class ZeroGradientBuilder(object): 6 | def __init__(self): 7 | self.num_calls = 0 8 | 9 | def __call__(self, x): 10 | grad_name = "ZeroGradient%d" % self.num_calls 11 | @ops.RegisterGradient(grad_name) 12 | def _zero_gradients(op, grad): 13 | return [grad * 0] 14 | 15 | g = tf.get_default_graph() 16 | with g.gradient_override_map({"Identity": grad_name}): 17 | y = tf.identity(x) 18 | 19 | self.num_calls += 1 20 | return y 21 | 22 | zero_gradient = ZeroGradientBuilder() 23 | -------------------------------------------------------------------------------- /Adapt_Road_Scene/scripts/cmd_for_DL.sh: -------------------------------------------------------------------------------- 1 | CONFIRM=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate "https://docs.google.com/uc?export=download&id=$1" -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p') 2 | wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$CONFIRM&id=$1" -O $2 3 | rm -rf /tmp/cookies.txt 4 | 5 | -------------------------------------------------------------------------------- /Adapt_Road_Scene/scripts/download_demo.sh: -------------------------------------------------------------------------------- 1 | mkdir trained_weights 2 | mkdir trained_weights/FullMethod 3 | mkdir trained_weights/FullMethod/Taipei 4 | sh scripts/cmd_for_DL.sh 1McsS8y-igDvdFyrvyURS7eREypWTf8Vv trained_weights/FullMethod/Taipei/model-1400.zip 5 | unzip trained_weights/FullMethod/Taipei/model-1400.zip -d trained_weights/FullMethod/Taipei -------------------------------------------------------------------------------- /Adapt_Road_Scene/scripts/download_src.sh: -------------------------------------------------------------------------------- 1 | mkdir pretrained 2 | sh scripts/cmd_for_DL.sh 1HkAewAjxyQXF8jrI-7lfNdMTA5AV6NVL pretrained/pretrained_vgg.npy 3 | sh scripts/cmd_for_DL.sh 1euYkvtI0Op99mjgv2Nl4eDKk6t2_zLu2 pretrained/train_cscape_frontend.npy 4 | sh scripts/cmd_for_DL.sh 1noylQcOyXGB_QPCC0T994QQP-8I4li4z pretrained/train_synthia_frontend.zip 5 | unzip pretrained/train_synthia_frontend.zip -d pretrained -------------------------------------------------------------------------------- /Adapt_Road_Scene/scripts/infer_city2ours.sh: -------------------------------------------------------------------------------- 1 | python ./tools/infer.py \ 2 | --img_path_file ./data/Taipei_val.txt \ 3 | --eval_script ./tools/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py \ 4 | --city Taipei \ 5 | --method GA \ 6 | --pretrained_weight ./pretrained/train_cscape_frontend.npy \ 7 | --gt_dir XXX/NTHU_512/labels \ 8 | --output_dir ./train_results/ \ 9 | --weights_dir ./trained_weights/ \ 10 | --_format 'model' \ 11 | --gpu 2 \ 12 | --iter_lower 1600 \ 13 | --iter_upper 2000 14 | 15 | 16 | 17 | -------------------------------------------------------------------------------- /Adapt_Road_Scene/scripts/infer_syn2city.sh: -------------------------------------------------------------------------------- 1 | python ./tools/infer.py \ 2 | --img_path_file 
./data/cscape_val.txt \ 3 | --eval_script ./tools/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py \ 4 | --city syn2real \ 5 | --method GACA \ 6 | --pretrained_weight ./pretrained/pretrained_vgg.npy \ 7 | --gt_dir xxx/gtFine/val/ \ 8 | --output_dir ./train_results/ \ 9 | --weights_dir ./trained_weights/ \ 10 | --_format 'model' \ 11 | --gpu 0 \ 12 | --iter_lower 6000 \ 13 | --iter_upper 10000 14 | -------------------------------------------------------------------------------- /Adapt_Road_Scene/scripts/train_city2ours.sh: -------------------------------------------------------------------------------- 1 | python -u ./tools/train_adv.py \ 2 | --weight_path ./pretrained/train_cscape_frontend.npy \ 3 | --city Taipei \ 4 | --src_data_path ./data/Cityscapes.txt \ 5 | --tgt_data_path ./data/Taipei.txt \ 6 | --method FullMethod \ 7 | --batch_size 8 \ 8 | --iter_size 4 \ 9 | --start_step 0 \ 10 | --max_step 3000 \ 11 | --save_step 400 \ 12 | --train_dir ./trained_weights/ \ 13 | --gpu 5,6 \ 14 | 2>&1 | tee ./logfiles/Taipei_FullMethod.log 15 | -------------------------------------------------------------------------------- /Adapt_Road_Scene/scripts/train_syn2city.sh: -------------------------------------------------------------------------------- 1 | python -u ./tools/train_adv.py \ 2 | --weight_path ./pretrained/pretrained_vgg.npy \ 3 | --restore_path './pretrained/train_synthia_frontend' \ 4 | --city syn2real \ 5 | --src_data_path ./data/synthia_train.txt \ 6 | --tgt_data_path ./data/Cityscapes.txt \ 7 | --method GACA \ 8 | --batch_size 8 \ 9 | --iter_size 4 \ 10 | --start_step 0 \ 11 | --max_step 10000 \ 12 | --save_step 400 \ 13 | --train_dir ./trained_weights/ \ 14 | --gpu 5,6 \ 15 | 2>&1 | tee ./logfiles/syn2real_GACA.log 16 | -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/__init__.py -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/back.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/back.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/checked6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/checked6.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/checked6_red.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/checked6_red.png -------------------------------------------------------------------------------- 
/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/clearpolygon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/clearpolygon.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/deleteobject.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/deleteobject.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/exit.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/exit.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/filepath.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/filepath.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/help19.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/help19.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/highlight.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/highlight.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/layerdown.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/layerdown.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/layerup.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/layerup.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/minus.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/minus.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/modify.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/modify.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/newobject.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/newobject.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/next.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/next.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/open.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/open.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/play.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/play.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/plus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/plus.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/save.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/save.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/screenshot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/screenshot.png 
-------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/screenshotToggle.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/screenshotToggle.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/shuffle.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/shuffle.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/undo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/undo.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/zoom.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/annotation/icons/zoom.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/evaluation/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/evaluation/__init__.py -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/evaluation/addToConfusionMatrix.pyx: -------------------------------------------------------------------------------- 1 | # cython methods to speed-up evaluation 2 | 3 | import numpy as np 4 | cimport cython 5 | cimport numpy as np 6 | import ctypes 7 | 8 | np.import_array() 9 | 10 | cdef extern from "addToConfusionMatrix_impl.c": 11 | void addToConfusionMatrix( const unsigned char* f_prediction_p , 12 | const unsigned char* f_groundTruth_p , 13 | const unsigned int f_width_i , 14 | const unsigned int f_height_i , 15 | unsigned long long* f_confMatrix_p , 16 | const unsigned int f_confMatDim_i ) 17 | 18 | 19 | cdef tonumpyarray(unsigned long long* data, unsigned long long size): 20 | if not (data and size >= 0): raise ValueError 21 | return np.PyArray_SimpleNewFromData(2, [size, size], np.NPY_UINT64, data) 22 | 23 | @cython.boundscheck(False) 24 | def cEvaluatePair( np.ndarray[np.uint8_t , ndim=2] predictionArr , 25 | np.ndarray[np.uint8_t , ndim=2] groundTruthArr , 26 | np.ndarray[np.uint64_t, ndim=2] confMatrix , 27 | evalLabels ): 28 | cdef np.ndarray[np.uint8_t , ndim=2, mode="c"] predictionArr_c 29 | cdef np.ndarray[np.uint8_t , ndim=2, mode="c"] groundTruthArr_c 30 | cdef np.ndarray[np.ulonglong_t, ndim=2, mode="c"] confMatrix_c 31 | 32 | predictionArr_c = np.ascontiguousarray(predictionArr , 
dtype=np.uint8 ) 33 | groundTruthArr_c = np.ascontiguousarray(groundTruthArr, dtype=np.uint8 ) 34 | confMatrix_c = np.ascontiguousarray(confMatrix , dtype=np.ulonglong) 35 | 36 | cdef np.uint32_t height_ui = predictionArr.shape[1] 37 | cdef np.uint32_t width_ui = predictionArr.shape[0] 38 | cdef np.uint32_t confMatDim_ui = confMatrix.shape[0] 39 | 40 | addToConfusionMatrix(&predictionArr_c[0,0], &groundTruthArr_c[0,0], height_ui, width_ui, &confMatrix_c[0,0], confMatDim_ui) 41 | 42 | confMatrix = np.ascontiguousarray(tonumpyarray(&confMatrix_c[0,0], confMatDim_ui)) 43 | 44 | return np.copy(confMatrix) -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/evaluation/addToConfusionMatrix_impl.c: -------------------------------------------------------------------------------- 1 | // cython methods to speed-up evaluation 2 | 3 | void addToConfusionMatrix( const unsigned char* f_prediction_p , 4 | const unsigned char* f_groundTruth_p , 5 | const unsigned int f_width_i , 6 | const unsigned int f_height_i , 7 | unsigned long long* f_confMatrix_p , 8 | const unsigned int f_confMatDim_i ) 9 | { 10 | const unsigned int size_ui = f_height_i * f_width_i; 11 | for (unsigned int i = 0; i < size_ui; ++i) 12 | { 13 | const unsigned char predPx = f_prediction_p [i]; 14 | const unsigned char gtPx = f_groundTruth_p[i]; 15 | f_confMatrix_p[f_confMatDim_i*gtPx + predPx] += 1u; 16 | } 17 | } -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/evaluation/instance.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Instance class 4 | # 5 | import json # used by toJSON below 6 | class Instance(object): 7 | instID = 0 8 | labelID = 0 9 | pixelCount = 0 10 | medDist = -1 11 | distConf = 0.0 12 | 13 | def __init__(self, imgNp, instID): 14 | if (instID == -1): 15 | return 16 | self.instID = int(instID) 17 | self.labelID = int(self.getLabelID(instID)) 18 | self.pixelCount = int(self.getInstancePixels(imgNp, instID)) 19 | 20 | def getLabelID(self, instID): 21 | if (instID < 1000): 22 | return instID 23 | else: 24 | return int(instID / 1000) 25 | 26 | def getInstancePixels(self, imgNp, instLabel): 27 | return (imgNp == instLabel).sum() 28 | 29 | def toJSON(self): 30 | return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True, indent=4) 31 | 32 | def toDict(self): 33 | buildDict = {} 34 | buildDict["instID"] = self.instID 35 | buildDict["labelID"] = self.labelID 36 | buildDict["pixelCount"] = self.pixelCount 37 | buildDict["medDist"] = self.medDist 38 | buildDict["distConf"] = self.distConf 39 | return buildDict 40 | 41 | def fromJSON(self, data): 42 | self.instID = int(data["instID"]) 43 | self.labelID = int(data["labelID"]) 44 | self.pixelCount = int(data["pixelCount"]) 45 | if ("medDist" in data): 46 | self.medDist = float(data["medDist"]) 47 | self.distConf = float(data["distConf"]) 48 | 49 | def __str__(self): 50 | return "("+str(self.instID)+")" -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/evaluation/instances2dict.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Convert instances from png files to a dictionary 4 | # 5 | 6 | from __future__ import print_function 7 | import os, sys 8 | 9 | # Cityscapes imports 10 | from instance import * 11 | sys.path.append( os.path.normpath( os.path.join( 
os.path.dirname( __file__ ) , '..' , 'helpers' ) ) ) 12 | from csHelpers import * 13 | 14 | def instances2dict(imageFileList, verbose=False): 15 | imgCount = 0 16 | instanceDict = {} 17 | 18 | if not isinstance(imageFileList, list): 19 | imageFileList = [imageFileList] 20 | 21 | if verbose: 22 | print("Processing {} images...".format(len(imageFileList))) 23 | 24 | for imageFileName in imageFileList: 25 | # Load image 26 | img = Image.open(imageFileName) 27 | 28 | # Image as numpy array 29 | imgNp = np.array(img) 30 | 31 | # Initialize label categories 32 | instances = {} 33 | for label in labels: 34 | instances[label.name] = [] 35 | 36 | # Loop through all instance ids in instance image 37 | for instanceId in np.unique(imgNp): 38 | instanceObj = Instance(imgNp, instanceId) 39 | 40 | instances[id2label[instanceObj.labelID].name].append(instanceObj.toDict()) 41 | 42 | imgKey = os.path.abspath(imageFileName) 43 | instanceDict[imgKey] = instances 44 | imgCount += 1 45 | 46 | if verbose: 47 | print("\rImages Processed: {}".format(imgCount), end=' ') 48 | sys.stdout.flush() 49 | 50 | if verbose: 51 | print("") 52 | 53 | return instanceDict 54 | 55 | def main(argv): 56 | fileList = [] 57 | if (len(argv) > 2): 58 | for arg in argv: 59 | if ("png" in arg): 60 | fileList.append(arg) 61 | instances2dict(fileList, True) 62 | 63 | if __name__ == "__main__": 64 | main(sys.argv[1:]) 65 | -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/evaluation/setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Enable cython support for eval scripts 4 | # Run as 5 | # setup.py build_ext --inplace 6 | # 7 | # WARNING: Only tested for Ubuntu 64bit OS. 8 | 9 | try: 10 | from distutils.core import setup 11 | from Cython.Build import cythonize 12 | except: 13 | print("Unable to setup. Please use pip to install: cython") 14 | print("sudo pip install cython") 15 | import os 16 | import numpy 17 | 18 | os.environ["CC"] = "g++" 19 | os.environ["CXX"] = "g++" 20 | 21 | setup(ext_modules = cythonize("addToConfusionMatrix.pyx"),include_dirs=[numpy.get_include()]) 22 | -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/helpers/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/helpers/__init__.py -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/preparation/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/preparation/__init__.py -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/preparation/createTrainIdInstanceImgs.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Converts the polygonal annotations of the Cityscapes dataset 4 | # to images, where pixel values encode the ground truth classes and the 5 | # individual instances of those classes. 
6 | # 7 | # The Cityscapes downloads already include such images 8 | # a) *color.png : the class is encoded by its color 9 | # b) *labelIds.png : the class is encoded by its ID 10 | # c) *instanceIds.png : the class and the instance are encoded by an instance ID 11 | # 12 | # With this tool, you can generate option 13 | # d) *instanceTrainIds.png : the class and the instance are encoded by an instance training ID 14 | # This encoding might come in handy for training purposes. You can use 15 | # the file labels.py to define the training IDs that suit your needs. 16 | # Note, however, that once you submit or evaluate results, the regular 17 | # IDs are needed. 18 | # 19 | # Please refer to 'json2instanceImg.py' for an explanation of instance IDs. 20 | # 21 | # Uses the converter tool in 'json2instanceImg.py' 22 | # Uses the mapping defined in 'labels.py' 23 | # 24 | 25 | # python imports 26 | from __future__ import print_function 27 | import os, glob, sys 28 | 29 | # cityscapes imports 30 | sys.path.append( os.path.normpath( os.path.join( os.path.dirname( __file__ ) , '..' , 'helpers' ) ) ) 31 | from csHelpers import printError 32 | from json2instanceImg import json2instanceImg 33 | 34 | 35 | # The main method 36 | def main(): 37 | # Where to look for Cityscapes 38 | if 'CITYSCAPES_DATASET' in os.environ: 39 | cityscapesPath = os.environ['CITYSCAPES_DATASET'] 40 | else: 41 | cityscapesPath = os.path.join(os.path.dirname(os.path.realpath(__file__)),'..','..') 42 | # how to search for all ground truth 43 | searchFine = os.path.join( cityscapesPath , "gtFine" , "*" , "*" , "*_gt*_polygons.json" ) 44 | searchCoarse = os.path.join( cityscapesPath , "gtCoarse" , "*" , "*" , "*_gt*_polygons.json" ) 45 | 46 | # search files 47 | filesFine = glob.glob( searchFine ) 48 | filesFine.sort() 49 | filesCoarse = glob.glob( searchCoarse ) 50 | filesCoarse.sort() 51 | 52 | # concatenate fine and coarse 53 | files = filesFine + filesCoarse 54 | # files = filesFine # use this line if fine is enough for now. 55 | 56 | # quit if we did not find anything 57 | if not files: 58 | printError( "Did not find any files. Please consult the README." ) 59 | 60 | # a bit verbose 61 | print("Processing {} annotation files".format(len(files))) 62 | 63 | # iterate through files 64 | progress = 0 65 | print("Progress: {:>3} %".format( progress * 100 / len(files) ), end=' ') 66 | for f in files: 67 | # create the output filename 68 | dst = f.replace( "_polygons.json" , "_instanceTrainIds.png" ) 69 | 70 | # do the conversion 71 | try: 72 | json2instanceImg( f , dst , "trainIds" ) 73 | except: 74 | print("Failed to convert: {}".format(f)) 75 | raise 76 | 77 | # status 78 | progress += 1 79 | print("\rProgress: {:>3} %".format( progress * 100 / len(files) ), end=' ') 80 | sys.stdout.flush() 81 | 82 | 83 | # call the main 84 | if __name__ == "__main__": 85 | main() 86 | -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/preparation/createTrainIdLabelImgs.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Converts the polygonal annotations of the Cityscapes dataset 4 | # to images, where pixel values encode ground truth classes. 
5 | # 6 | # The Cityscapes downloads already include such images 7 | # a) *color.png : the class is encoded by its color 8 | # b) *labelIds.png : the class is encoded by its ID 9 | # c) *instanceIds.png : the class and the instance are encoded by an instance ID 10 | # 11 | # With this tool, you can generate option 12 | # d) *labelTrainIds.png : the class is encoded by its training ID 13 | # This encoding might come in handy for training purposes. You can use 14 | # the file labels.py to define the training IDs that suit your needs. 15 | # Note, however, that once you submit or evaluate results, the regular 16 | # IDs are needed. 17 | # 18 | # Uses the converter tool in 'json2labelImg.py' 19 | # Uses the mapping defined in 'labels.py' 20 | # 21 | 22 | # python imports 23 | from __future__ import print_function 24 | import os, glob, sys 25 | import pdb 26 | # cityscapes imports 27 | sys.path.append( os.path.normpath( os.path.join( os.path.dirname( __file__ ) , '..' , 'helpers' ) ) ) 28 | from csHelpers import printError 29 | from json2labelImg import json2labelImg 30 | 31 | # The main method 32 | def main(): 33 | # Where to look for Cityscapes 34 | if 'CITYSCAPES_DATASET' in os.environ: 35 | cityscapesPath = os.environ['CITYSCAPES_DATASET'] 36 | else: 37 | cityscapesPath = os.path.join(os.path.dirname(os.path.realpath(__file__)),'..','..') 38 | # how to search for all ground truth 39 | #searchFine = os.path.join( cityscapesPath , "gtFine" , "*" , "*" , "*_gt*_polygons.json" ) 40 | searchFine = os.path.join( '/home/bctsai/gtFine/val' ,"*", "*_gt*_polygons.json" ) ###CVPR_eval 41 | searchCoarse = os.path.join( cityscapesPath , "gtCoarse" , "*" , "*" , "*_gt*_polygons.json" ) 42 | 43 | # search files 44 | #pdb.set_trace() 45 | filesFine = glob.glob( searchFine ) 46 | filesFine.sort() 47 | filesCoarse = glob.glob( searchCoarse ) 48 | filesCoarse.sort() 49 | 50 | # concatenate fine and coarse 51 | files = filesFine + filesCoarse 52 | # files = filesFine # use this line if fine is enough for now. 53 | 54 | # quit if we did not find anything 55 | if not files: 56 | printError( "Did not find any files. Please consult the README." 
) 57 | 58 | # a bit verbose 59 | print("Processing {} annotation files".format(len(files))) 60 | 61 | # iterate through files 62 | progress = 0 63 | print("Progress: {:>3} %".format( progress * 100 / len(files) ), end=' ') 64 | for f in files: 65 | # create the output filename 66 | dst = f.replace( "_polygons.json" , "_CVPR_eval.png" ) 67 | 68 | # do the conversion 69 | try: 70 | json2labelImg( f , dst , "trainIds" ) 71 | except: 72 | print("Failed to convert: {}".format(f)) 73 | raise 74 | 75 | # status 76 | progress += 1 77 | print("\rProgress: {:>3} %".format( progress * 100 / len(files) ), end=' ') 78 | sys.stdout.flush() 79 | 80 | 81 | # call the main 82 | if __name__ == "__main__": 83 | main() 84 | -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/__init__.py -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/back.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/back.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/disp.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/disp.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/exit.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/exit.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/filepath.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/filepath.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/help19.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/help19.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/minus.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/minus.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/next.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/next.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/open.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/open.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/play.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/play.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/plus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/plus.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/shuffle.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/shuffle.png -------------------------------------------------------------------------------- /Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/zoom.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Road_Scene/tools/cityscapesscripts/viewer/icons/zoom.png -------------------------------------------------------------------------------- /Adapt_Structured_Output/README.md: -------------------------------------------------------------------------------- 1 | # Learning to Adapt Structured Output Space for Semantic Segmentation 2 | We use the code provided by [https://github.com/wasidennis/AdaptSegNet](https://github.com/wasidennis/AdaptSegNet) and additionally create data-loaders for our dataset. 
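The added loaders follow the same preprocessing as `dataset/cityscapes_dataset.py` below (resize, RGB-to-BGR, mean subtraction, HWC-to-CHW). A minimal sketch of such an unlabeled target-domain loader is shown here; the class name and the one-image-path-per-line list format are illustrative, not the repo's actual `ours_dataset.py`:

```
import os.path as osp
import numpy as np
from PIL import Image
from torch.utils import data

class ToyTargetDataSet(data.Dataset):
    """Illustrative unlabeled target-domain loader (names are hypothetical)."""
    def __init__(self, root, list_path, crop_size=(1024, 512), mean=(128, 128, 128)):
        self.root = root
        self.crop_size = crop_size
        self.mean = mean
        self.img_ids = [line.strip() for line in open(list_path)]

    def __len__(self):
        return len(self.img_ids)

    def __getitem__(self, index):
        name = self.img_ids[index]
        image = Image.open(osp.join(self.root, name)).convert('RGB')
        image = image.resize(self.crop_size, Image.BICUBIC)
        image = np.asarray(image, np.float32)
        size = image.shape
        image = image[:, :, ::-1]           # RGB -> BGR
        image -= self.mean                  # subtract per-channel mean
        image = image.transpose((2, 0, 1))  # HWC -> CHW
        return image.copy(), np.array(size), name
```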
3 | 4 | ---------- 5 | 6 | 7 | ## Installation 8 | * Use Python 2 9 | 10 | ## Dataset 11 | Download the following datasets and put them in the `data` folder 12 | 13 | * Download [Cityscapes Dataset](https://www.cityscapes-dataset.com/) 14 | * Download [Synthia Dataset](http://synthia-dataset.com/download-2/) 15 | * Download [Our Dataset](https://yihsinchen.github.io/segmentation_adaptation/#Dataset) 16 | * contains four subsets --- Taipei, Tokyo, Roma, Rio --- used as the target domain (only the testing data has annotations) 17 | 18 | ## Testing 19 | * Download [the trained model](https://drive.google.com/uc?export=download&confirm=z1cS&id=1MKnzjzl0aovlUH1NDK_6qw8LRB1AoZFa) and put it in the `model` folder 20 | 21 | * Test the model; the results will be saved in the `result` folder 22 | 23 | ``` 24 | python evaluate_cityscapes.py --restore-from ./model/Synthia2cityscapes.pth 25 | ``` 26 | 27 | * Compute the IoU on Cityscapes (thanks to the code from [VisDA Challenge](http://ai.bu.edu/visda-2017/)) 28 | 29 | ``` 30 | python compute_iou.py ./data/Cityscapes/data/gtFine/val result/cityscapes 31 | ``` 32 | 33 | The demo model is Synthia-to-Cityscapes; results will be saved in the `./result/` folder, and the evaluated performance is reported (the evaluation code is provided by the Cityscapes dataset). 34 | 35 | ## Training Examples 36 | 37 | * Train the Synthia-to-Cityscapes model (multi-level) 38 | 39 | ``` 40 | python train_synthia2cityscapes_multi.py \ 41 | --snapshot-dir ./snapshots/Synthia2Cityscapes_multi \ 42 | --lambda-seg 0.1 \ 43 | --lambda-adv-target1 0.0002 \ 44 | --lambda-adv-target2 0.001 45 | ``` 46 | 47 | 48 | ## Reference code 49 | [https://github.com/wasidennis/AdaptSegNet](https://github.com/wasidennis/AdaptSegNet) 50 | -------------------------------------------------------------------------------- /Adapt_Structured_Output/compute_iou.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import argparse 3 | import json 4 | from PIL import Image 5 | from os.path import join 6 | import pdb 7 | 8 | def fast_hist(a, b, n): 9 | k = (a >= 0) & (a < n) 10 | return np.bincount(n * a[k].astype(int) + b[k], minlength=n ** 2).reshape(n, n) 11 | 12 | 13 | def per_class_iu(hist): 14 | return np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist)) 15 | 16 | 17 | def label_mapping(input, mapping): 18 | output = np.copy(input) 19 | for ind in range(len(mapping)): 20 | output[input == mapping[ind][0]] = mapping[ind][1] 21 | return np.array(output, dtype=np.int64) 22 | 23 | 24 | def compute_mIoU(gt_dir, pred_dir, devkit_dir=''): 25 | """ 26 | Compute IoU given the predicted colorized images and the ground-truth labels. 27 | """ 28 | with open(join(devkit_dir, 'info.json'), 'r') as fp: 29 | info = json.load(fp) 30 | num_classes = np.int(info['classes']) 31 | print('Num classes', num_classes) 32 | name_classes = np.array(info['label'], dtype=np.str) 33 | mapping = np.array(info['label2train'], dtype=np.int) 34 | hist = np.zeros((num_classes, num_classes)) 35 | 36 | image_path_list = join(devkit_dir, 'val.txt') 37 | label_path_list = join(devkit_dir, 'label.txt') 38 | gt_imgs = open(label_path_list, 'r').read().splitlines() 39 | gt_imgs = [join(gt_dir, x) for x in gt_imgs] 40 | pred_imgs = open(image_path_list, 'r').read().splitlines() 41 | pred_imgs = [join(pred_dir, x.split('/')[-1]) for x in pred_imgs] 42 | 43 | for ind in range(len(gt_imgs)): 44 | pred = np.array(Image.open(pred_imgs[ind])) 45 | label = np.array(Image.open(gt_imgs[ind])) 46 | label = 
label_mapping(label, mapping) 47 | #pdb.set_trace() 48 | if len(label.flatten()) != len(pred.flatten()): 49 | #pdb.set_trace() 50 | print('Skipping: len(gt) = {:d}, len(pred) = {:d}, {:s}, {:s}'.format(len(label.flatten()), len(pred.flatten()), gt_imgs[ind], pred_imgs[ind])) 51 | continue 52 | hist += fast_hist(label.flatten(), pred.flatten(), num_classes) 53 | if ind > 0 and ind % 10 == 0: 54 | print('{:d} / {:d}: {:0.2f}'.format(ind, len(gt_imgs), 100*np.mean(per_class_iu(hist)))) 55 | #pdb.set_trace() 56 | 57 | mIoUs = per_class_iu(hist) 58 | for ind_class in range(num_classes): 59 | print('===>' + name_classes[ind_class] + ':\t' + str(round(mIoUs[ind_class] * 100, 2))) 60 | print('===> mIoU: ' + str(round(np.nanmean(mIoUs) * 100, 2))) 61 | return mIoUs 62 | 63 | 64 | def main(args): 65 | compute_mIoU(args.gt_dir, args.pred_dir, args.devkit_dir) 66 | 67 | 68 | if __name__ == "__main__": 69 | parser = argparse.ArgumentParser() 70 | parser.add_argument('gt_dir', type=str, help='directory which stores CityScapes val gt images') 71 | parser.add_argument('pred_dir', type=str, help='directory which stores CityScapes val pred images') 72 | parser.add_argument('--devkit_dir', default='dataset/cityscapes_list', help='base directory of cityscapes') 73 | args = parser.parse_args() 74 | main(args) 75 | -------------------------------------------------------------------------------- /Adapt_Structured_Output/compute_synthia_iou.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import argparse 3 | import json 4 | from PIL import Image 5 | from os.path import join 6 | import pdb 7 | 8 | def fast_hist(a, b, n): 9 | k = (a >= 0) & (a < n) 10 | return np.bincount(n * a[k].astype(int) + b[k], minlength=n ** 2).reshape(n, n) 11 | 12 | 13 | def per_class_iu(hist): 14 | return np.diag(hist) / (hist.sum(1) + 1e-8 + hist.sum(0) - np.diag(hist)) 15 | 16 | 17 | def label_mapping(input, mapping): 18 | output = np.copy(input) 19 | for ind in range(len(mapping)): 20 | output[input == mapping[ind][0]] = mapping[ind][1] 21 | return np.array(output, dtype=np.int64) 22 | 23 | 24 | def compute_mIoU(gt_dir, pred_dir, devkit_dir=''): 25 | """ 26 | Compute IoU given the predicted colorized images and the ground-truth labels. 27 | """ 28 | with open(join(devkit_dir, 'info.json'), 'r') as fp: 29 | info = json.load(fp) 30 | num_classes = np.int(info['classes']) 31 | print('Num classes', num_classes) 32 | name_classes = np.array(info['label'], dtype=np.str) 33 | mapping = np.array(info['label2train'], dtype=np.int) 34 | hist = np.zeros((num_classes, num_classes)) 35 | 36 | image_path_list = join(devkit_dir, 'val.txt') 37 | label_path_list = join(devkit_dir, 'label.txt') 38 | gt_imgs = open(label_path_list, 'r').read().splitlines() 39 | gt_imgs = [join(gt_dir, x) for x in gt_imgs] 40 | pred_imgs = open(image_path_list, 'r').read().splitlines() 41 | pred_imgs = [join(pred_dir, x.split('/')[-1]) for x in pred_imgs] 42 | 43 | for ind in range(len(gt_imgs)): 44 | pred = np.array(Image.open(pred_imgs[ind])) 45 | label = np.array(Image.open(gt_imgs[ind])) 46 | label = label_mapping(label, mapping) 47 | if len(label.flatten()) != len(pred.flatten()): 48 | print('Skipping: len(gt) = {:d}, len(pred) = {:d}, {:s}, {:s}'.format(len(label.flatten()), len(pred.flatten()), gt_imgs[ind], pred_imgs[ind])) 49 | pdb.set_trace() 50 | continue 51 | hist += fast_hist(label.flatten(), pred.flatten(), num_classes) 52 | if ind > 0 and ind % 10 == 0: 53 | print('{:d} / {:d}: {:0.2f}'.format(ind, len(gt_imgs), 
100*np.mean(per_class_iu(hist)))) 54 | 55 | mIoUs = per_class_iu(hist) 56 | for ind_class in range(num_classes): 57 | print('===>' + name_classes[ind_class] + ':\t' + str(round(mIoUs[ind_class] * 100, 2))) 58 | print('===> mIoU: ' + str(round(np.nanmean(mIoUs) * 100, 2))) 59 | return mIoUs 60 | 61 | 62 | def main(args): 63 | compute_mIoU(args.gt_dir, args.pred_dir, args.devkit_dir) 64 | 65 | 66 | if __name__ == "__main__": 67 | parser = argparse.ArgumentParser() 68 | parser.add_argument('gt_dir', type=str, help='directory which stores CityScapes val gt images') 69 | parser.add_argument('pred_dir', type=str, help='directory which stores CityScapes val pred images') 70 | parser.add_argument('--devkit_dir', default='dataset/Synthia_list', help='base directory of cityscapes') 71 | args = parser.parse_args() 72 | main(args) 73 | -------------------------------------------------------------------------------- /Adapt_Structured_Output/dataset/Synthia_list/info.json: -------------------------------------------------------------------------------- 1 | { 2 | "classes":19, 3 | "label2train":[ 4 | [0, 255], 5 | [1, 255], 6 | [2, 255], 7 | [3, 255], 8 | [4, 255], 9 | [5, 255], 10 | [6, 255], 11 | [7, 0], 12 | [8, 1], 13 | [9, 255], 14 | [10, 255], 15 | [11, 2], 16 | [12, 3], 17 | [13, 4], 18 | [14, 255], 19 | [15, 255], 20 | [16, 255], 21 | [17, 5], 22 | [18, 255], 23 | [19, 6], 24 | [20, 7], 25 | [21, 8], 26 | [22, 9], 27 | [23, 10], 28 | [24, 11], 29 | [25, 12], 30 | [26, 13], 31 | [27, 14], 32 | [28, 15], 33 | [29, 255], 34 | [30, 255], 35 | [31, 16], 36 | [32, 17], 37 | [33, 18], 38 | [-1, 255]], 39 | "label":[ 40 | "road", 41 | "sidewalk", 42 | "building", 43 | "wall", 44 | "fence", 45 | "pole", 46 | "light", 47 | "sign", 48 | "vegetation", 49 | "terrain", 50 | "sky", 51 | "person", 52 | "rider", 53 | "car", 54 | "truck", 55 | "bus", 56 | "train", 57 | "motocycle", 58 | "bicycle"], 59 | "palette":[ 60 | [128,64,128], 61 | [244,35,232], 62 | [70,70,70], 63 | [102,102,156], 64 | [190,153,153], 65 | [153,153,153], 66 | [250,170,30], 67 | [220,220,0], 68 | [107,142,35], 69 | [152,251,152], 70 | [70,130,180], 71 | [220,20,60], 72 | [255,0,0], 73 | [0,0,142], 74 | [0,0,70], 75 | [0,60,100], 76 | [0,80,100], 77 | [0,0,230], 78 | [119,11,32], 79 | [0,0,0]], 80 | "mean":[ 81 | 73.158359210711552, 82 | 82.908917542625858, 83 | 72.392398761941593], 84 | "std":[ 85 | 47.675755341814678, 86 | 48.494214368814916, 87 | 47.736546325441594] 88 | } 89 | -------------------------------------------------------------------------------- /Adapt_Structured_Output/dataset/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Structured_Output/dataset/__init__.py -------------------------------------------------------------------------------- /Adapt_Structured_Output/dataset/cityscapes_dataset.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import numpy as np 4 | import random 5 | import matplotlib.pyplot as plt 6 | import collections 7 | import torch 8 | import torchvision 9 | from torch.utils import data 10 | from PIL import Image 11 | 12 | class cityscapesDataSet(data.Dataset): 13 | def __init__(self, root, list_path, max_iters=None, crop_size=(321, 321), mean=(128, 128, 128), scale=True, mirror=True, ignore_label=255, set='val'): 14 | self.root = root 15 | self.list_path = 
list_path 16 | self.crop_size = crop_size 17 | self.scale = scale 18 | self.ignore_label = ignore_label 19 | self.mean = mean 20 | self.is_mirror = mirror 21 | # self.mean_bgr = np.array([104.00698793, 116.66876762, 122.67891434]) 22 | self.img_ids = [i_id.strip() for i_id in open(list_path)] 23 | if not max_iters==None: 24 | self.img_ids = self.img_ids * int(np.ceil(float(max_iters) / len(self.img_ids))) 25 | self.files = [] 26 | self.id_to_trainid = {7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 27 | 19: 6, 20: 7, 21: 8, 22: 9, 23: 10, 24: 11, 25: 12, 28 | 26: 13, 27: 14, 28: 15, 31: 16, 32: 17, 33: 18} 29 | self.set = set 30 | # for split in ["train", "trainval", "val"]: 31 | for name in self.img_ids: 32 | img_file = osp.join(self.root, "%s/%s_leftImg8bit.png" % (self.set, name)) 33 | label_file = osp.join(self.root, "../re_gtFine512/%s/%s_gtFine_labelTrainIds.png" % (self.set, name)) 34 | self.files.append({ 35 | "img": img_file, 36 | "label": label_file, 37 | "name": name 38 | }) 39 | 40 | def __len__(self): 41 | return len(self.files) 42 | 43 | def __getitem__(self, index): 44 | datafiles = self.files[index] 45 | 46 | image = Image.open(datafiles["img"]).convert('RGB') 47 | label = Image.open(datafiles["label"]) 48 | name = datafiles["name"] 49 | 50 | # resize 51 | image = image.resize(self.crop_size, Image.BICUBIC) 52 | label = label.resize(self.crop_size, Image.NEAREST) 53 | 54 | image = np.asarray(image, np.float32) 55 | label = np.asarray(label, np.float32) 56 | 57 | # 58 | label_copy = 255 * np.ones(label.shape, dtype=np.float32) 59 | #for k, v in self.id_to_trainid.items(): 60 | #label_copy[label == k] = v 61 | # 62 | label_copy = label 63 | 64 | size = image.shape 65 | image = image[:, :, ::-1] # change to BGR 66 | image -= self.mean 67 | image = image.transpose((2, 0, 1)) 68 | 69 | return image.copy(), label_copy.copy(), np.array(size), name 70 | 71 | 72 | if __name__ == '__main__': 73 | dst = cityscapesDataSet("../../../../VSlab2/nitahaha/re_leftImg8bit512/", is_transform=True) 74 | trainloader = data.DataLoader(dst, batch_size=4) 75 | for i, data in enumerate(trainloader): 76 | imgs, labels, size, names = data 77 | if i == 0: 78 | img = torchvision.utils.make_grid(imgs).numpy() 79 | img = np.transpose(img, (1, 2, 0)) 80 | img = img[:, :, ::-1] 81 | plt.imshow(img) 82 | plt.show() 83 | -------------------------------------------------------------------------------- /Adapt_Structured_Output/dataset/cityscapes_evaluate_dataset.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import numpy as np 4 | import random 5 | import matplotlib.pyplot as plt 6 | import collections 7 | import torch 8 | import torchvision 9 | from torch.utils import data 10 | from PIL import Image 11 | 12 | class cityscapesDataSet(data.Dataset): 13 | def __init__(self, root, list_path, max_iters=None, crop_size=(321, 321), mean=(128, 128, 128), scale=True, mirror=True, ignore_label=255, set='val'): 14 | self.root = root 15 | self.list_path = list_path 16 | self.crop_size = crop_size 17 | self.scale = scale 18 | self.ignore_label = ignore_label 19 | self.mean = mean 20 | self.is_mirror = mirror 21 | # self.mean_bgr = np.array([104.00698793, 116.66876762, 122.67891434]) 22 | self.img_ids = [i_id.strip() for i_id in open(list_path)] 23 | if not max_iters==None: 24 | self.img_ids = self.img_ids * int(np.ceil(float(max_iters) / len(self.img_ids))) 25 | self.files = [] 26 | self.set = set 27 | # for split in ["train", "trainval", "val"]: 28 
| for name in self.img_ids: 29 | img_file = osp.join(self.root, "%s/%s" % (self.set, name)) 30 | self.files.append({ 31 | "img": img_file, 32 | "name": name 33 | }) 34 | 35 | def __len__(self): 36 | return len(self.files) 37 | 38 | def __getitem__(self, index): 39 | datafiles = self.files[index] 40 | 41 | image = Image.open(datafiles["img"]).convert('RGB') 42 | name = datafiles["name"] 43 | 44 | # resize 45 | image = image.resize(self.crop_size, Image.BICUBIC) 46 | 47 | image = np.asarray(image, np.float32) 48 | 49 | size = image.shape 50 | image = image[:, :, ::-1] # change to BGR 51 | image -= self.mean 52 | image = image.transpose((2, 0, 1)) 53 | 54 | return image.copy(), np.array(size), name 55 | 56 | 57 | if __name__ == '__main__': 58 | dst = GTA5DataSet("./data", is_transform=True) 59 | trainloader = data.DataLoader(dst, batch_size=4) 60 | for i, data in enumerate(trainloader): 61 | imgs, labels = data 62 | if i == 0: 63 | img = torchvision.utils.make_grid(imgs).numpy() 64 | img = np.transpose(img, (1, 2, 0)) 65 | img = img[:, :, ::-1] 66 | plt.imshow(img) 67 | plt.show() 68 | -------------------------------------------------------------------------------- /Adapt_Structured_Output/dataset/cityscapes_list/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Structured_Output/dataset/cityscapes_list/.DS_Store -------------------------------------------------------------------------------- /Adapt_Structured_Output/dataset/cityscapes_list/info.json: -------------------------------------------------------------------------------- 1 | { 2 | "classes":19, 3 | "label2train":[ 4 | [0, 255], 5 | [1, 255], 6 | [2, 255], 7 | [3, 255], 8 | [4, 255], 9 | [5, 255], 10 | [6, 255], 11 | [7, 0], 12 | [8, 1], 13 | [9, 255], 14 | [10, 255], 15 | [11, 2], 16 | [12, 3], 17 | [13, 4], 18 | [14, 255], 19 | [15, 255], 20 | [16, 255], 21 | [17, 5], 22 | [18, 255], 23 | [19, 6], 24 | [20, 7], 25 | [21, 8], 26 | [22, 9], 27 | [23, 10], 28 | [24, 11], 29 | [25, 12], 30 | [26, 13], 31 | [27, 14], 32 | [28, 15], 33 | [29, 255], 34 | [30, 255], 35 | [31, 16], 36 | [32, 17], 37 | [33, 18], 38 | [-1, 255]], 39 | "label":[ 40 | "road", 41 | "sidewalk", 42 | "building", 43 | "wall", 44 | "fence", 45 | "pole", 46 | "light", 47 | "sign", 48 | "vegetation", 49 | "terrain", 50 | "sky", 51 | "person", 52 | "rider", 53 | "car", 54 | "truck", 55 | "bus", 56 | "train", 57 | "motocycle", 58 | "bicycle"], 59 | "palette":[ 60 | [128,64,128], 61 | [244,35,232], 62 | [70,70,70], 63 | [102,102,156], 64 | [190,153,153], 65 | [153,153,153], 66 | [250,170,30], 67 | [220,220,0], 68 | [107,142,35], 69 | [152,251,152], 70 | [70,130,180], 71 | [220,20,60], 72 | [255,0,0], 73 | [0,0,142], 74 | [0,0,70], 75 | [0,60,100], 76 | [0,80,100], 77 | [0,0,230], 78 | [119,11,32], 79 | [0,0,0]], 80 | "mean":[ 81 | 73.158359210711552, 82 | 82.908917542625858, 83 | 72.392398761941593], 84 | "std":[ 85 | 47.675755341814678, 86 | 48.494214368814916, 87 | 47.736546325441594] 88 | } 89 | -------------------------------------------------------------------------------- /Adapt_Structured_Output/dataset/gta5_dataset.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import numpy as np 4 | import random 5 | import matplotlib.pyplot as plt 6 | import collections 7 | import torch 8 | import torchvision 9 | from torch.utils 
import data 10 | from PIL import Image 11 | 12 | 13 | class GTA5DataSet(data.Dataset): 14 | def __init__(self, root, list_path, max_iters=None, crop_size=(321, 321), mean=(128, 128, 128), scale=True, mirror=True, ignore_label=255): 15 | self.root = root 16 | self.list_path = list_path 17 | self.crop_size = crop_size 18 | self.scale = scale 19 | self.ignore_label = ignore_label 20 | self.mean = mean 21 | self.is_mirror = mirror 22 | # self.mean_bgr = np.array([104.00698793, 116.66876762, 122.67891434]) 23 | self.img_ids = [i_id.strip() for i_id in open(list_path)] 24 | if not max_iters==None: 25 | self.img_ids = self.img_ids * int(np.ceil(float(max_iters) / len(self.img_ids))) 26 | self.files = [] 27 | 28 | self.id_to_trainid = {7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 29 | 19: 6, 20: 7, 21: 8, 22: 9, 23: 10, 24: 11, 25: 12, 30 | 26: 13, 27: 14, 28: 15, 31: 16, 32: 17, 33: 18} 31 | 32 | # for split in ["train", "trainval", "val"]: 33 | for name in self.img_ids: 34 | img_file = osp.join(self.root, "images/%s" % name) 35 | label_file = osp.join(self.root, "labels/%s" % name) 36 | self.files.append({ 37 | "img": img_file, 38 | "label": label_file, 39 | "name": name 40 | }) 41 | 42 | def __len__(self): 43 | return len(self.files) 44 | 45 | 46 | def __getitem__(self, index): 47 | datafiles = self.files[index] 48 | 49 | image = Image.open(datafiles["img"]).convert('RGB') 50 | label = Image.open(datafiles["label"]) 51 | name = datafiles["name"] 52 | 53 | # resize 54 | image = image.resize(self.crop_size, Image.BICUBIC) 55 | label = label.resize(self.crop_size, Image.NEAREST) 56 | 57 | image = np.asarray(image, np.float32) 58 | label = np.asarray(label, np.float32) 59 | 60 | # re-assign labels to match the format of Cityscapes 61 | label_copy = 255 * np.ones(label.shape, dtype=np.float32) 62 | for k, v in self.id_to_trainid.items(): 63 | label_copy[label == k] = v 64 | 65 | size = image.shape 66 | image = image[:, :, ::-1] # change to BGR 67 | image -= self.mean 68 | image = image.transpose((2, 0, 1)) 69 | 70 | return image.copy(), label_copy.copy(), np.array(size), name 71 | 72 | 73 | if __name__ == '__main__': 74 | dst = GTA5DataSet("../../../../VSlab/stu92054/GTA5/", is_transform=True) 75 | trainloader = data.DataLoader(dst, batch_size=4) 76 | for i, data in enumerate(trainloader): 77 | imgs, labels = data 78 | if i == 0: 79 | img = torchvision.utils.make_grid(imgs).numpy() 80 | img = np.transpose(img, (1, 2, 0)) 81 | img = img[:, :, ::-1] 82 | plt.imshow(img) 83 | plt.show() 84 | -------------------------------------------------------------------------------- /Adapt_Structured_Output/dataset/ours_dataset.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import numpy as np 4 | import random 5 | import matplotlib.pyplot as plt 6 | import collections 7 | import torch 8 | import torchvision 9 | from torch.utils import data 10 | from PIL import Image 11 | import pdb 12 | 13 | 14 | class OursDataSet(data.Dataset): 15 | def __init__(self, root, list_path, max_iters=None, crop_size=(321, 321), mean=(128, 128, 128), scale=True, mirror=True, ignore_label=255): 16 | self.root = root 17 | self.list_path = list_path 18 | self.crop_size = crop_size 19 | self.scale = scale 20 | self.ignore_label = ignore_label 21 | self.mean = mean 22 | self.is_mirror = mirror 23 | self.img_ids = [i_id.strip() for i_id in open(list_path)] 24 | if not max_iters==None: 25 | self.img_ids = self.img_ids * int(np.ceil(float(max_iters) / 
len(self.img_ids))) 26 | self.files = [] 27 | 28 | for name in self.img_ids: 29 | img_file = osp.join(self.root, "%s/%s" % (self.set, name)) 30 | self.files.append({ 31 | "img": img_file, 32 | "name": name 33 | }) 34 | 35 | def __len__(self): 36 | return len(self.files) 37 | 38 | def __getitem__(self, index): 39 | datafiles = self.files[index] 40 | 41 | image = Image.open(datafiles["img"]).convert('RGB') 42 | name = datafiles["name"] 43 | 44 | # resize 45 | image = image.resize(self.crop_size, Image.BICUBIC) 46 | 47 | image = np.asarray(image, np.float32) 48 | 49 | size = image.shape 50 | image = image[:, :, ::-1] # change to BGR 51 | image -= self.mean 52 | image = image.transpose((2, 0, 1)) 53 | 54 | return image.copy(), np.array(size), name 55 | 56 | if __name__ == '__main__': 57 | dst = OursDataSet("../../../../VSlab2/BCTsai/Lab/datasets/NTHU_512/", is_transform=True) 58 | trainloader = data.DataLoader(dst, batch_size=4) 59 | for i, data in enumerate(trainloader): 60 | imgs, labels = data 61 | if i == 0: 62 | img = torchvision.utils.make_grid(imgs).numpy() 63 | img = np.transpose(img, (1, 2, 0)) 64 | img = img[:, :, ::-1] 65 | plt.imshow(img) 66 | plt.show() 67 | -------------------------------------------------------------------------------- /Adapt_Structured_Output/model/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Structured_Output/model/__init__.py -------------------------------------------------------------------------------- /Adapt_Structured_Output/model/discriminator.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import torch.nn.functional as F 3 | 4 | 5 | class FCDiscriminator(nn.Module): 6 | 7 | def __init__(self, num_classes, ndf = 64): 8 | super(FCDiscriminator, self).__init__() 9 | 10 | self.conv1 = nn.Conv2d(num_classes, ndf, kernel_size=4, stride=2, padding=1) 11 | self.conv2 = nn.Conv2d(ndf, ndf*2, kernel_size=4, stride=2, padding=1) 12 | self.conv3 = nn.Conv2d(ndf*2, ndf*4, kernel_size=4, stride=2, padding=1) 13 | self.conv4 = nn.Conv2d(ndf*4, ndf*8, kernel_size=4, stride=2, padding=1) 14 | self.classifier = nn.Conv2d(ndf*8, 1, kernel_size=4, stride=2, padding=1) 15 | 16 | self.leaky_relu = nn.LeakyReLU(negative_slope=0.2, inplace=True) 17 | #self.up_sample = nn.Upsample(scale_factor=32, mode='bilinear') 18 | #self.sigmoid = nn.Sigmoid() 19 | 20 | 21 | def forward(self, x): 22 | x = self.conv1(x) 23 | x = self.leaky_relu(x) 24 | x = self.conv2(x) 25 | x = self.leaky_relu(x) 26 | x = self.conv3(x) 27 | x = self.leaky_relu(x) 28 | x = self.conv4(x) 29 | x = self.leaky_relu(x) 30 | x = self.classifier(x) 31 | #x = self.up_sample(x) 32 | #x = self.sigmoid(x) 33 | 34 | return x 35 | -------------------------------------------------------------------------------- /Adapt_Structured_Output/utils/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/Adapt_Structured_Output/utils/__init__.py -------------------------------------------------------------------------------- /Adapt_Structured_Output/utils/loss.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn.functional as F 3 | import torch.nn as nn 4 | from 
torch.autograd import Variable 5 | 6 | 7 | class CrossEntropy2d(nn.Module): 8 | 9 | def __init__(self, size_average=True, ignore_label=255): 10 | super(CrossEntropy2d, self).__init__() 11 | self.size_average = size_average 12 | self.ignore_label = ignore_label 13 | 14 | def forward(self, predict, target, weight=None): 15 | """ 16 | Args: 17 | predict:(n, c, h, w) 18 | target:(n, h, w) 19 | weight (Tensor, optional): a manual rescaling weight given to each class. 20 | If given, has to be a Tensor of size "nclasses" 21 | """ 22 | assert not target.requires_grad 23 | assert predict.dim() == 4 24 | assert target.dim() == 3 25 | assert predict.size(0) == target.size(0), "{0} vs {1} ".format(predict.size(0), target.size(0)) 26 | assert predict.size(2) == target.size(1), "{0} vs {1} ".format(predict.size(2), target.size(1)) 27 | assert predict.size(3) == target.size(2), "{0} vs {1} ".format(predict.size(3), target.size(2)) 28 | n, c, h, w = predict.size() 29 | target_mask = (target >= 0) * (target != self.ignore_label) 30 | target = target[target_mask] 31 | if not target.data.dim(): 32 | return Variable(torch.zeros(1)) 33 | predict = predict.transpose(1, 2).transpose(2, 3).contiguous() 34 | predict = predict[target_mask.view(n, h, w, 1).repeat(1, 1, 1, c)].view(-1, c) 35 | loss = F.cross_entropy(predict, target, weight=weight, size_average=self.size_average) 36 | return loss 37 | -------------------------------------------------------------------------------- /FCNs_Wild/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | project( ccnn ) 2 | cmake_minimum_required(VERSION 2.8) 3 | add_definitions( -DLBFGS_FLOAT=32 ) 4 | set( CMAKE_POSITION_INDEPENDENT_CODE True ) 5 | 6 | list(APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/cmake ) 7 | add_subdirectory( lib ) -------------------------------------------------------------------------------- /FCNs_Wild/README.md: -------------------------------------------------------------------------------- 1 | # FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation Implemented in TensorFlow 2 | Paper link: [https://arxiv.org/abs/1612.02649](https://arxiv.org/abs/1612.02649) 3 | 4 | 5 | ## Intro 6 | TensorFlow implementation of the paper, adapting semantic segmentation from (A) the Synthia dataset to the Cityscapes dataset and (B) the Cityscapes dataset to our dataset. 7 | 8 | ## Installation 9 | * Use TensorFlow 1.1.0 with Python 2 10 | * Build ccnn 11 | 12 | ``` 13 | cd fcns-wild 14 | mkdir build 15 | cd build 16 | cmake .. 17 | make -j8 18 | ``` 19 | 20 | ## Dataset 21 | 22 | * Download [Cityscapes Dataset](https://www.cityscapes-dataset.com/) 23 | * Download [Synthia Dataset](http://synthia-dataset.com/download-2/) 24 | * download the subset "SYNTHIA-RAND-CITYSCAPES" 25 | * Download [NMD Dataset](https://yihsinchen.github.io/segmentation_adaptation/#Dataset) 26 | * contains four subsets --- Taipei, Tokyo, Roma, Rio --- used as target domain (only testing data has annotations) 27 | * Change the data paths in the files under the "./data" folder 28 | ## Testing 29 | * Download and test the trained model 30 | 31 | ``` 32 | cd fcns-wild 33 | sh scripts/download_demo.sh 34 | sh scripts/infer_city2NMD.sh # this script uses Taipei as the NMD target 35 | ``` 36 | 37 | The demo model adapts Cityscapes to Taipei; results are saved in the `./train_results/` folder and the evaluated performance is printed
(the evaluation code is provided by the Cityscapes dataset). 38 | 39 | 40 | ## Training Examples 41 | * Download the pretrained weights (model trained on the source domain) 42 | ``` 43 | sh scripts/download_src.sh 44 | ``` 45 | * Train the Cityscapes-to-Ours{subset} model 46 | 47 | ``` 48 | python ./src/train_adv.py \ 49 | --weight_path ./pretrained/train_cscape.npy \ 50 | --city {city_name} \ 51 | --src_data_path ./data/Cityscapes.txt \ 52 | --tgt_data_path ./data/{city_name}.txt \ 53 | --method GACA 54 | ``` 55 | 56 | 57 | The training scripts for adapting (A) the Synthia dataset to the Cityscapes dataset and (B) the Cityscapes dataset to our dataset are provided in "scripts/train_syn2city.sh" and "scripts/train_city2NMD.sh". 58 | 59 | ## Reference code 60 | [https://github.com/pathak22/ccnn](https://github.com/pathak22/ccnn) 61 | 62 | [https://github.com/MarvinTeichmann/tensorflow-fcn](https://github.com/MarvinTeichmann/tensorflow-fcn) 63 | 64 | 65 | -------------------------------------------------------------------------------- /FCNs_Wild/cmake/FindEigen3.cmake: -------------------------------------------------------------------------------- 1 | # - Try to find Eigen3 lib 2 | # 3 | # This module supports requiring a minimum version, e.g. you can do 4 | # find_package(Eigen3 3.1.2) 5 | # to require version 3.1.2 or newer of Eigen3. 6 | # 7 | # Once done this will define 8 | # 9 | # EIGEN3_FOUND - system has eigen lib with correct version 10 | # EIGEN3_INCLUDE_DIR - the eigen include directory 11 | # EIGEN3_VERSION - eigen version 12 | 13 | # Copyright (c) 2006, 2007 Montel Laurent, 14 | # Copyright (c) 2008, 2009 Gael Guennebaud, 15 | # Copyright (c) 2009 Benoit Jacob 16 | # Redistribution and use is allowed according to the terms of the 2-clause BSD license.
17 | 18 | if(NOT Eigen3_FIND_VERSION) 19 | if(NOT Eigen3_FIND_VERSION_MAJOR) 20 | set(Eigen3_FIND_VERSION_MAJOR 2) 21 | endif(NOT Eigen3_FIND_VERSION_MAJOR) 22 | if(NOT Eigen3_FIND_VERSION_MINOR) 23 | set(Eigen3_FIND_VERSION_MINOR 91) 24 | endif(NOT Eigen3_FIND_VERSION_MINOR) 25 | if(NOT Eigen3_FIND_VERSION_PATCH) 26 | set(Eigen3_FIND_VERSION_PATCH 0) 27 | endif(NOT Eigen3_FIND_VERSION_PATCH) 28 | 29 | set(Eigen3_FIND_VERSION "${Eigen3_FIND_VERSION_MAJOR}.${Eigen3_FIND_VERSION_MINOR}.${Eigen3_FIND_VERSION_PATCH}") 30 | endif(NOT Eigen3_FIND_VERSION) 31 | 32 | macro(_eigen3_check_version) 33 | file(READ "${EIGEN3_INCLUDE_DIR}/Eigen/src/Core/util/Macros.h" _eigen3_version_header) 34 | 35 | string(REGEX MATCH "define[ \t]+EIGEN_WORLD_VERSION[ \t]+([0-9]+)" _eigen3_world_version_match "${_eigen3_version_header}") 36 | set(EIGEN3_WORLD_VERSION "${CMAKE_MATCH_1}") 37 | string(REGEX MATCH "define[ \t]+EIGEN_MAJOR_VERSION[ \t]+([0-9]+)" _eigen3_major_version_match "${_eigen3_version_header}") 38 | set(EIGEN3_MAJOR_VERSION "${CMAKE_MATCH_1}") 39 | string(REGEX MATCH "define[ \t]+EIGEN_MINOR_VERSION[ \t]+([0-9]+)" _eigen3_minor_version_match "${_eigen3_version_header}") 40 | set(EIGEN3_MINOR_VERSION "${CMAKE_MATCH_1}") 41 | 42 | set(EIGEN3_VERSION ${EIGEN3_WORLD_VERSION}.${EIGEN3_MAJOR_VERSION}.${EIGEN3_MINOR_VERSION}) 43 | if(${EIGEN3_VERSION} VERSION_LESS ${Eigen3_FIND_VERSION}) 44 | set(EIGEN3_VERSION_OK FALSE) 45 | else(${EIGEN3_VERSION} VERSION_LESS ${Eigen3_FIND_VERSION}) 46 | set(EIGEN3_VERSION_OK TRUE) 47 | endif(${EIGEN3_VERSION} VERSION_LESS ${Eigen3_FIND_VERSION}) 48 | 49 | if(NOT EIGEN3_VERSION_OK) 50 | 51 | message(STATUS "Eigen3 version ${EIGEN3_VERSION} found in ${EIGEN3_INCLUDE_DIR}, " 52 | "but at least version ${Eigen3_FIND_VERSION} is required") 53 | endif(NOT EIGEN3_VERSION_OK) 54 | endmacro(_eigen3_check_version) 55 | 56 | if (EIGEN3_INCLUDE_DIR) 57 | 58 | # in cache already 59 | _eigen3_check_version() 60 | set(EIGEN3_FOUND ${EIGEN3_VERSION_OK}) 61 | 62 | else (EIGEN3_INCLUDE_DIR) 63 | 64 | find_path(EIGEN3_INCLUDE_DIR NAMES signature_of_eigen3_matrix_library 65 | PATHS 66 | ${CMAKE_INSTALL_PREFIX}/include 67 | ${KDE4_INCLUDE_DIR} 68 | PATH_SUFFIXES eigen3 eigen 69 | ) 70 | 71 | if(EIGEN3_INCLUDE_DIR) 72 | _eigen3_check_version() 73 | endif(EIGEN3_INCLUDE_DIR) 74 | 75 | include(FindPackageHandleStandardArgs) 76 | find_package_handle_standard_args(Eigen3 DEFAULT_MSG EIGEN3_INCLUDE_DIR EIGEN3_VERSION_OK) 77 | 78 | mark_as_advanced(EIGEN3_INCLUDE_DIR) 79 | 80 | endif(EIGEN3_INCLUDE_DIR) 81 | 82 | -------------------------------------------------------------------------------- /FCNs_Wild/cmake/FindNumPy.cmake: -------------------------------------------------------------------------------- 1 | # - Find the NumPy libraries 2 | # This module finds if NumPy is installed, and sets the following variables 3 | # indicating where it is. 4 | # 5 | # TODO: Update to provide the libraries and paths for linking npymath lib. 6 | # 7 | # NUMPY_FOUND - was NumPy found 8 | # NUMPY_VERSION - the version of NumPy found as a string 9 | # NUMPY_VERSION_MAJOR - the major version number of NumPy 10 | # NUMPY_VERSION_MINOR - the minor version number of NumPy 11 | # NUMPY_VERSION_PATCH - the patch version number of NumPy 12 | # NUMPY_VERSION_DECIMAL - e.g. version 1.6.1 is 10601 13 | # NUMPY_INCLUDE_DIRS - path to the NumPy include files 14 | 15 | #============================================================================ 16 | # Copyright 2012 Continuum Analytics, Inc. 
17 | # 18 | # MIT License 19 | # 20 | # Permission is hereby granted, free of charge, to any person obtaining 21 | # a copy of this software and associated documentation files 22 | # (the "Software"), to deal in the Software without restriction, including 23 | # without limitation the rights to use, copy, modify, merge, publish, 24 | # distribute, sublicense, and/or sell copies of the Software, and to permit 25 | # persons to whom the Software is furnished to do so, subject to 26 | # the following conditions: 27 | # 28 | # The above copyright notice and this permission notice shall be included 29 | # in all copies or substantial portions of the Software. 30 | # 31 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS 32 | # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 33 | # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 34 | # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR 35 | # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 36 | # ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 37 | # OTHER DEALINGS IN THE SOFTWARE. 38 | # 39 | #============================================================================ 40 | 41 | # Finding NumPy involves calling the Python interpreter 42 | if(NumPy_FIND_REQUIRED) 43 | find_package(PythonInterp REQUIRED) 44 | else() 45 | find_package(PythonInterp) 46 | endif() 47 | 48 | if(NOT PYTHONINTERP_FOUND) 49 | set(NUMPY_FOUND FALSE) 50 | endif() 51 | 52 | execute_process(COMMAND "${PYTHON_EXECUTABLE}" "-c" 53 | "import numpy as n; print(n.__version__); print(n.get_include());" 54 | RESULT_VARIABLE _NUMPY_SEARCH_SUCCESS 55 | OUTPUT_VARIABLE _NUMPY_VALUES 56 | ERROR_VARIABLE _NUMPY_ERROR_VALUE 57 | OUTPUT_STRIP_TRAILING_WHITESPACE) 58 | 59 | if(NOT _NUMPY_SEARCH_SUCCESS MATCHES 0) 60 | if(NumPy_FIND_REQUIRED) 61 | message(FATAL_ERROR 62 | "NumPy import failure:\n${_NUMPY_ERROR_VALUE}") 63 | endif() 64 | set(NUMPY_FOUND FALSE) 65 | endif() 66 | 67 | # Convert the process output into a list 68 | string(REGEX REPLACE ";" "\\\\;" _NUMPY_VALUES ${_NUMPY_VALUES}) 69 | string(REGEX REPLACE "\n" ";" _NUMPY_VALUES ${_NUMPY_VALUES}) 70 | list(GET _NUMPY_VALUES 0 NUMPY_VERSION) 71 | list(GET _NUMPY_VALUES 1 NUMPY_INCLUDE_DIRS) 72 | 73 | # Make sure all directory separators are '/' 74 | string(REGEX REPLACE "\\\\" "/" NUMPY_INCLUDE_DIRS ${NUMPY_INCLUDE_DIRS}) 75 | 76 | # Get the major and minor version numbers 77 | string(REGEX REPLACE "\\." ";" _NUMPY_VERSION_LIST ${NUMPY_VERSION}) 78 | list(GET _NUMPY_VERSION_LIST 0 NUMPY_VERSION_MAJOR) 79 | list(GET _NUMPY_VERSION_LIST 1 NUMPY_VERSION_MINOR) 80 | list(GET _NUMPY_VERSION_LIST 2 NUMPY_VERSION_PATCH) 81 | string(REGEX MATCH "[0-9]*" NUMPY_VERSION_PATCH ${NUMPY_VERSION_PATCH}) 82 | math(EXPR NUMPY_VERSION_DECIMAL 83 | "(${NUMPY_VERSION_MAJOR} * 10000) + (${NUMPY_VERSION_MINOR} * 100) + ${NUMPY_VERSION_PATCH}") 84 | 85 | find_package_message(NUMPY 86 | "Found NumPy: version \"${NUMPY_VERSION}\" ${NUMPY_INCLUDE_DIRS}" 87 | "${NUMPY_INCLUDE_DIRS}${NUMPY_VERSION}") 88 | 89 | set(NUMPY_FOUND TRUE) 90 | 91 | -------------------------------------------------------------------------------- /FCNs_Wild/doc/LICENSE: -------------------------------------------------------------------------------- 1 | UC Berkeley's Standard Copyright and Disclaimer Notice: 2 | 3 | Copyright (c) 2015, Deepak Pathak, Philipp Krähenbühl 4 | and The Regents of the University of California (Regents). 
5 | All Rights Reserved. 6 | 7 | Permission to use, copy, modify, and distribute this software and its 8 | documentation for educational, research, and not-for-profit purposes, without 9 | fee and without a signed licensing agreement, is hereby granted, provided that 10 | the above copyright notice, this paragraph and the following two paragraphs appear 11 | in all copies, modifications, and distributions. Contact The Office of Technology 12 | Licensing, UC Berkeley, 2150 Shattuck Avenue, Suite 510, Berkeley, CA 94720-1620, 13 | (510) 643-7201, for commercial licensing opportunities. 14 | 15 | IN NO EVENT SHALL REGENTS BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, 16 | INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE 17 | USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF REGENTS HAS BEEN ADVISED OF THE 18 | POSSIBILITY OF SUCH DAMAGE. REGENTS SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, 19 | BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 20 | PURPOSE. THE SOFTWARE AND ACCOMPANYING DOCUMENTATION, IF ANY, PROVIDED HEREUNDER IS 21 | PROVIDED "AS IS". REGENTS HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, 22 | ENHANCEMENTS, OR MODIFICATIONS. 23 | -------------------------------------------------------------------------------- /FCNs_Wild/example_image/Taipei_test.txt: -------------------------------------------------------------------------------- 1 | /media/disk3/wenyen/project/Domain-adaptation-on-segmentation/FCNs_Wild/example_image/pano_00506_0_0.png 2 | -------------------------------------------------------------------------------- /FCNs_Wild/example_image/labels/temp.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/example_image/labels/temp.txt -------------------------------------------------------------------------------- /FCNs_Wild/example_image/labels/val/Taipei/pano_00506_0_0_eval.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/example_image/labels/val/Taipei/pano_00506_0_0_eval.png -------------------------------------------------------------------------------- /FCNs_Wild/example_image/labels/val/temp.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/example_image/labels/val/temp.txt -------------------------------------------------------------------------------- /FCNs_Wild/example_image/pano_00506_0_0.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/example_image/pano_00506_0_0.png -------------------------------------------------------------------------------- /FCNs_Wild/lib/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | find_package( Eigen3 3.2.0 REQUIRED ) 2 | set(python_version "2" CACHE STRING "Specify which python version to use") 3 | MESSAGE(${python_version}) 4 | if(python_version VERSION_LESS 3.0.0) 5 | find_package(PythonInterp 2.7 REQUIRED) 6 | find_package(PythonLibs 
2.7 REQUIRED) 7 | find_package(NumPy REQUIRED) 8 | find_package(Boost COMPONENTS python REQUIRED) 9 | else() 10 | find_package(PythonInterp 3.3 REQUIRED) 11 | find_package(PythonLibs 3.3 REQUIRED) 12 | find_package(NumPy REQUIRED) 13 | find_package(Boost COMPONENTS python-py34) 14 | if(NOT Boost_FOUND) 15 | find_package(Boost COMPONENTS python-py33) 16 | endif() 17 | if(NOT Boost_FOUND) 18 | find_package(Boost COMPONENTS python3 REQUIRED) 19 | endif() 20 | endif() 21 | find_package(OpenMP) 22 | 23 | set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fPIC -std=c++11 -Wall ${OpenMP_CXX_FLAGS}" ) # set global flags 24 | include_directories( ${CMAKE_CURRENT_SOURCE_DIR} ${EIGEN3_INCLUDE_DIR} ${Boost_INCLUDE_DIRS}) 25 | 26 | add_subdirectory( constraintloss ) 27 | add_subdirectory( optimization ) 28 | add_subdirectory( python ) 29 | add_subdirectory( util ) 30 | -------------------------------------------------------------------------------- /FCNs_Wild/lib/constraintloss/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | add_library( constraintloss constraintsoftmax.cpp ) 2 | target_link_libraries( constraintloss util optimization ) -------------------------------------------------------------------------------- /FCNs_Wild/lib/constraintloss/constraintsoftmax.h: -------------------------------------------------------------------------------- 1 | // -------------------------------------------------------- 2 | // CCNN 3 | // Copyright (c) 2015 [See LICENSE file for details] 4 | // Written by Deepak Pathak, Philipp Krähenbühl 5 | // -------------------------------------------------------- 6 | 7 | #pragma once 8 | #include "util/eigen.h" 9 | 10 | struct LinearConstraint { 11 | LinearConstraint( const VectorXf & a, float b, float slack=1e10 ); 12 | // A constraint \sum_i a*x_i >= b - slack 13 | VectorXf a; 14 | float b,slack; 15 | float eval( const RMatrixXf & x ) const; 16 | }; 17 | 18 | class ConstraintSoftmax { 19 | protected: 20 | float scale_; 21 | std::vector<LinearConstraint> linear_constraints_; 22 | VectorXb zero_constraints_; 23 | public: 24 | ConstraintSoftmax( float scale=1.0 ); 25 | // A constraint \sum_i a*x_i >= b 26 | void addLinearConstraint( const VectorXf & a, float b, float slack=1e10 ); 27 | // A constraint \sum_i a*x_i == 0 where a >= 0 28 | void addZeroConstraint( const VectorXf & a ); 29 | RMatrixXf compute( const RMatrixXf & f ) const; 30 | RMatrixXf computeLog( const RMatrixXf & f ) const; 31 | }; 32 | -------------------------------------------------------------------------------- /FCNs_Wild/lib/optimization/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | add_library( optimization fista.cpp ) 2 | target_link_libraries( optimization util ) 3 | -------------------------------------------------------------------------------- /FCNs_Wild/lib/optimization/fista.cpp: -------------------------------------------------------------------------------- 1 | // -------------------------------------------------------- 2 | // CCNN 3 | // Copyright (c) 2015 [See LICENSE file for details] 4 | // Written by Deepak Pathak, Philipp Krähenbühl 5 | // -------------------------------------------------------- 6 | 7 | #include "fista.h" 8 | #include <iostream> 9 | 10 | VectorXf identity(const VectorXf & x ) { return x; } 11 | 12 | VectorXf fista( VectorXf x0, function_t f, projection_t p, bool verbose ) { 13 | const int N_ITER = 3000; 14 | const float beta = 0.5; 15 | float alpha = 1e-1; 16 | 17 | VectorXf r = x0; 18 | float best_e = 
1e10; 19 | VectorXf x1 = x0, g = 0*x0; 20 | for( int k=1; k<=N_ITER && alpha>1e-5; k++ ) { 21 | // Strictly speaking this is not "legal" FISTA, but it seems to work well in practice 22 | alpha *= 1.05; 23 | 24 | // Compute y 25 | VectorXf y = x1 + (k-2.) / (k+1.)*(x1 - x0); 26 | // Evaluate the gradient at y 27 | float fy = f(y,&g), fx = 1e10; 28 | // Update the old x 29 | x0 = x1; 30 | // Update x 31 | x1 = p( y - alpha*g ); 32 | while( alpha >= 1e-5 && (fx=f(x1,NULL)) > fy + g.dot(x1-y)+1./(2.*alpha)*(x1-y).dot(x1-y) ) { 33 | alpha *= beta; 34 | x1 = p( y - alpha*g ); 35 | } 36 | if ( fx < best_e ) { 37 | best_e = fx; 38 | r = x0; 39 | } 40 | if (verbose){ 41 | printf("it = %d df = %f alpha = %f\n", k, (x0-x1).array().abs().maxCoeff(), alpha ); 42 | std::cout<1e-8; k++ ) { 66 | VectorXf ng; 67 | float fx = f(p(x0-alpha*g),&ng); 68 | if( fx < prev_fx ) { 69 | x0 = p(x0-alpha*g); 70 | g = ng; 71 | prev_fx = fx; 72 | alpha *= 1.1; 73 | } 74 | else 75 | alpha *= beta; 76 | } 77 | 78 | // Debugging 79 | // if (k>N_ITER){ 80 | // std::cout<<"PGD didn't converge\n"; 81 | // std::cout<<"K="< 10 | 11 | typedef std::function function_t; 12 | typedef std::function projection_t; 13 | 14 | VectorXf identity(const VectorXf & x ); 15 | VectorXf fista( VectorXf x0, function_t f, projection_t p = identity, bool verbose=false ); 16 | VectorXf pgd( VectorXf x0, function_t f, projection_t p = identity, bool verbose=false, bool * converged=NULL ); 17 | -------------------------------------------------------------------------------- /FCNs_Wild/lib/python/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | include_directories( ${Boost_INCLUDE_DIRS} ${PYTHON_INCLUDE_DIRS} ${NUMPY_INCLUDE_DIRS} ${PYTHON_INCLUDE_DIRS}) 2 | link_directories( ${Boost_LIBRARY_DIR} ) 3 | file(WRITE "${CMAKE_CURRENT_BINARY_DIR}/__init__.py" "" ) 4 | 5 | add_library( ccnn SHARED boost.cpp ccnn.cpp util.cpp constraintloss.cpp ) 6 | target_link_libraries( ccnn ${Boost_LIBRARIES} ${PYTHON_LIBRARIES} constraintloss util) 7 | 8 | set_target_properties( ccnn PROPERTIES PREFIX "") 9 | if(APPLE) 10 | set_target_properties( ccnn PROPERTIES SUFFIX ".so" ) 11 | endif() 12 | -------------------------------------------------------------------------------- /FCNs_Wild/lib/python/boost.cpp: -------------------------------------------------------------------------------- 1 | /* 2 | Copyright (c) 2014, Philipp Krähenbühl 3 | All rights reserved. 4 | 5 | Redistribution and use in source and binary forms, with or without 6 | modification, are permitted provided that the following conditions are met: 7 | * Redistributions of source code must retain the above copyright 8 | notice, this list of conditions and the following disclaimer. 9 | * Redistributions in binary form must reproduce the above copyright 10 | notice, this list of conditions and the following disclaimer in the 11 | documentation and/or other materials provided with the distribution. 12 | * Neither the name of the Stanford University nor the 13 | names of its contributors may be used to endorse or promote products 14 | derived from this software without specific prior written permission. 15 | 16 | THIS SOFTWARE IS PROVIDED BY Philipp Krähenbühl ''AS IS'' AND ANY 17 | EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 18 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 19 | DISCLAIMED. 
IN NO EVENT SHALL Philipp Krähenbühl BE LIABLE FOR ANY 20 | DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 21 | (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 22 | LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 23 | ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 24 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 25 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 26 | */ 27 | #include "boost.h" 28 | -------------------------------------------------------------------------------- /FCNs_Wild/lib/python/boost.h: -------------------------------------------------------------------------------- 1 | /* 2 | Copyright (c) 2014, Philipp Krähenbühl 3 | All rights reserved. 4 | 5 | Redistribution and use in source and binary forms, with or without 6 | modification, are permitted provided that the following conditions are met: 7 | * Redistributions of source code must retain the above copyright 8 | notice, this list of conditions and the following disclaimer. 9 | * Redistributions in binary form must reproduce the above copyright 10 | notice, this list of conditions and the following disclaimer in the 11 | documentation and/or other materials provided with the distribution. 12 | * Neither the name of the Stanford University nor the 13 | names of its contributors may be used to endorse or promote products 14 | derived from this software without specific prior written permission. 15 | 16 | THIS SOFTWARE IS PROVIDED BY Philipp Krähenbühl ''AS IS'' AND ANY 17 | EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 18 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 19 | DISCLAIMED. IN NO EVENT SHALL Philipp Krähenbühl BE LIABLE FOR ANY 20 | DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 21 | (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 22 | LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 23 | ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 24 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 25 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
26 | */ 27 | #pragma once 28 | #include <boost/python.hpp> 29 | #include 30 | #include 31 | using namespace boost::python; 32 | -------------------------------------------------------------------------------- /FCNs_Wild/lib/python/ccnn.cpp: -------------------------------------------------------------------------------- 1 | // -------------------------------------------------------- 2 | // CCNN 3 | // Copyright (c) 2015 [See LICENSE file for details] 4 | // Written by Deepak Pathak, Philipp Krähenbühl 5 | // -------------------------------------------------------- 6 | 7 | #include "util.h" 8 | #include "constraintloss.h" 9 | #include "ccnn.h" 10 | 11 | BOOST_PYTHON_MODULE(ccnn) 12 | { 13 | import_array1(); 14 | 15 | defineUtil(); 16 | defineConstraintloss(); 17 | } 18 | -------------------------------------------------------------------------------- /FCNs_Wild/lib/python/ccnn.h: -------------------------------------------------------------------------------- 1 | // -------------------------------------------------------- 2 | // CCNN 3 | // Copyright (c) 2015 [See LICENSE file for details] 4 | // Written by Deepak Pathak, Philipp Krähenbühl 5 | // -------------------------------------------------------- 6 | 7 | #pragma once 8 | #include "boost.h" 9 | #include 10 | #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION 11 | #include <numpy/arrayobject.h> 12 | 13 | #define ADD_MODULE( name ) object name ## Module(handle<>(borrowed(PyImport_AddModule(((std::string)"ccnn."+# name).c_str()))));\ 14 | scope().attr(# name) = name ## Module;\ 15 | scope name ## _scope = name ## Module; 16 | -------------------------------------------------------------------------------- /FCNs_Wild/lib/python/constraintloss.cpp: -------------------------------------------------------------------------------- 1 | // -------------------------------------------------------- 2 | // CCNN 3 | // Copyright (c) 2015 [See LICENSE file for details] 4 | // Written by Deepak Pathak, Philipp Krähenbühl 5 | // -------------------------------------------------------- 6 | 7 | #include "constraintloss.h" 8 | #include "ccnn.h" 9 | #include "constraintloss/constraintsoftmax.h" 10 | 11 | BOOST_PYTHON_MEMBER_FUNCTION_OVERLOADS(ConstraintSoftmax_addLinearConstraint_o, ConstraintSoftmax::addLinearConstraint, 2, 3 ); 12 | 13 | void defineConstraintloss() { 14 | ADD_MODULE( constraintloss ); 15 | 16 | class_<ConstraintSoftmax>("ConstraintSoftmax",init<>()) 17 | .def(init<float>()) 18 | .def( "addLinearConstraint", &ConstraintSoftmax::addLinearConstraint, ConstraintSoftmax_addLinearConstraint_o() ) 19 | .def( "addZeroConstraint", &ConstraintSoftmax::addZeroConstraint ) 20 | .def( "compute", &ConstraintSoftmax::compute ) 21 | .def( "computeLog", &ConstraintSoftmax::computeLog ); 22 | } 23 | -------------------------------------------------------------------------------- /FCNs_Wild/lib/python/constraintloss.h: -------------------------------------------------------------------------------- 1 | // -------------------------------------------------------- 2 | // CCNN 3 | // Copyright (c) 2015 [See LICENSE file for details] 4 | // Written by Deepak Pathak, Philipp Krähenbühl 5 | // -------------------------------------------------------- 6 | 7 | #pragma once 8 | 9 | void defineConstraintloss(); 10 | -------------------------------------------------------------------------------- /FCNs_Wild/lib/util/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | add_library( util eigen.cpp) 2 | target_link_libraries( util ) 3 | 
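The `BOOST_PYTHON_MODULE(ccnn)` glue above exposes `ConstraintSoftmax` to Python as `ccnn.constraintloss.ConstraintSoftmax` (via the `ADD_MODULE` macro), and `src/ccnn.py` further down in this repo locates the built `ccnn.so` and re-exports it. As a rough orientation only, here is a minimal usage sketch; it assumes the Eigen converters registered by `defineUtil()` accept float32 numpy arrays for `VectorXf`/`RMatrixXf`, which is an assumption inferred from the binding signatures in `constraintsoftmax.h`, not documented API:

```
import numpy as np
import ccnn  # src/ccnn.py: finds build/lib/python/ccnn.so and does `from python.ccnn import *`

# Scores f for 4 pixels x 3 classes (rows = pixels, columns = classes).
f = np.random.randn(4, 3).astype(np.float32)

s = ccnn.constraintloss.ConstraintSoftmax(1.0)  # scale argument, from init<float>
# Linear constraint \sum_i a . x_i >= b: ask for at least 30% average mass on class 0.
a = np.array([1, 0, 0], dtype=np.float32)
s.addLinearConstraint(a, 0.3 * f.shape[0])      # optional third argument is the slack
p = s.compute(f)                                # constrained softmax, same shape as f
log_p = s.computeLog(f)                         # log-probabilities, used for the loss/gradient
```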
-------------------------------------------------------------------------------- /FCNs_Wild/lib/util/eigen.cpp: -------------------------------------------------------------------------------- 1 | // -------------------------------------------------------- 2 | // CCNN 3 | // 2015. Modified by Deepak Pathak, Philipp Krähenbühl 4 | // -------------------------------------------------------- 5 | 6 | /* 7 | Copyright (c) 2014, Philipp Krähenbühl 8 | All rights reserved. 9 | 10 | Redistribution and use in source and binary forms, with or without 11 | modification, are permitted provided that the following conditions are met: 12 | * Redistributions of source code must retain the above copyright 13 | notice, this list of conditions and the following disclaimer. 14 | * Redistributions in binary form must reproduce the above copyright 15 | notice, this list of conditions and the following disclaimer in the 16 | documentation and/or other materials provided with the distribution. 17 | * Neither the name of the Stanford University nor the 18 | names of its contributors may be used to endorse or promote products 19 | derived from this software without specific prior written permission. 20 | 21 | THIS SOFTWARE IS PROVIDED BY Philipp Krähenbühl ''AS IS'' AND ANY 22 | EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 23 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 24 | DISCLAIMED. IN NO EVENT SHALL Philipp Krähenbühl BE LIABLE FOR ANY 25 | DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 26 | (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 27 | LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 28 | ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 29 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 30 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
31 | */ 32 | #include "eigen.h" 33 | 34 | VectorXi range( int end ) { 35 | return range( 0, end ); 36 | } 37 | VectorXi range( int start, int end ) { 38 | VectorXi r(end-start); 39 | for( int i=0; i 38 | -------------------------------------------------------------------------------- /FCNs_Wild/models/zero_gradient.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | from tensorflow.python.framework import ops 3 | 4 | 5 | class ZeroGradientBuilder(object): 6 | def __init__(self): 7 | self.num_calls = 0 8 | 9 | def __call__(self, x): 10 | grad_name = "ZeroGradient%d" % self.num_calls 11 | @ops.RegisterGradient(grad_name) 12 | def _zero_gradients(op, grad): 13 | return [grad * 0] 14 | 15 | g = tf.get_default_graph() 16 | with g.gradient_override_map({"Identity": grad_name}): 17 | y = tf.identity(x) 18 | 19 | self.num_calls += 1 20 | return y 21 | 22 | zero_gradient = ZeroGradientBuilder() 23 | -------------------------------------------------------------------------------- /FCNs_Wild/scripts/cmd_for_DL.sh: -------------------------------------------------------------------------------- 1 | CONFIRM=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate "https://docs.google.com/uc?export=download&id=$1" -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p') 2 | wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$CONFIRM&id=$1" -O $2 3 | rm -rf /tmp/cookies.txt 4 | 5 | -------------------------------------------------------------------------------- /FCNs_Wild/scripts/create_img_list.py: -------------------------------------------------------------------------------- 1 | import os 2 | import pdb 3 | 4 | fo = open("../data/val_Taipei.txt", "w") 5 | path = '/media/VSlab2/BCTsai/Lab/datasets/NTHU_512/imgs/val/Taipei' 6 | include_label = False 7 | 8 | if include_label: 9 | for file in os.listdir(path): 10 | fo.write(os.path.join(path,file)+' '+os.path.join(path.replace('imgs','labels'),file.replace('.png','_eval.png'))+'\n') 11 | else: 12 | for file in os.listdir(path): 13 | fo.write(os.path.join(path,file)+' '+'*\n') 14 | 15 | fo.close() 16 | -------------------------------------------------------------------------------- /FCNs_Wild/scripts/data_path.sh: -------------------------------------------------------------------------------- 1 | # You need to Use this shell in ./Domain-adpatation-on-segmentation/FCNs_Wild 2 | # sh data_path.sh 3 | # example: sh data_path.sh ./data/Taipei.txt /media/dataset/NMD/... 
(path up to the images folder) 4 | cp $1 $1.backup 5 | data_path=$2 6 | # pass the path into awk with -v instead of splicing it into the quoted program 7 | awk -F'/' -v data_path="$data_path" '{print data_path $11}' "$1.backup" > "$1" 8 | -------------------------------------------------------------------------------- /FCNs_Wild/scripts/download_demo.sh: -------------------------------------------------------------------------------- 1 | mkdir trained_weights 2 | mkdir trained_weights/GACA 3 | mkdir trained_weights/GACA/Taipei 4 | sh scripts/cmd_for_DL.sh 1plA4NWl4e5__eVYC7TLOF51TfKRvPgh0 trained_weights/GACA/Taipei/model-800.zip 5 | unzip trained_weights/GACA/Taipei/model-800.zip -d trained_weights/GACA/Taipei -------------------------------------------------------------------------------- /FCNs_Wild/scripts/download_src.sh: -------------------------------------------------------------------------------- 1 | mkdir pretrained 2 | sh scripts/cmd_for_DL.sh 1HkAewAjxyQXF8jrI-7lfNdMTA5AV6NVL pretrained/pretrained_vgg.npy 3 | sh scripts/cmd_for_DL.sh 1euYkvtI0Op99mjgv2Nl4eDKk6t2_zLu2 pretrained/train_cscape_frontend.npy 4 | sh scripts/cmd_for_DL.sh 1noylQcOyXGB_QPCC0T994QQP-8I4li4z pretrained/train_synthia_frontend.zip 5 | unzip pretrained/train_synthia_frontend.zip -d pretrained -------------------------------------------------------------------------------- /FCNs_Wild/scripts/infer_city2NMD.sh: -------------------------------------------------------------------------------- 1 | python ./src/infer.py \ 2 | --img_path_file ./example_image/Taipei_test.txt \ 3 | --eval_script ./src/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py \ 4 | --city Taipei \ 5 | --pretrained_weight ./pretrained/train_cscape_frontend.npy \ 6 | --method GACA \ 7 | --gt_dir ./example_image/labels/val/ \ 8 | --weights_dir ./trained_weights/ \ 9 | --output_dir ./train_results/ \ 10 | --_format 'model' \ 11 | --gpu 7 \ 12 | --iter_lower 400 \ 13 | --iter_upper 800 14 | 15 | 16 | 17 | -------------------------------------------------------------------------------- /FCNs_Wild/scripts/infer_syn2city.sh: -------------------------------------------------------------------------------- 1 | python ./src/infer.py \ 2 | --img_path_file ./data/cscape_val.txt \ 3 | --eval_script ./src/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py \ 4 | --city syn2real \ 5 | --pretrained_weight ./pretrained/train_cscape_frontend.npy \ 6 | --method GACA \ 7 | --gt_dir XXX/gtFine/val/ \ 8 | --weights_dir ./trained_weights/ \ 9 | --output_dir ./train_results/ \ 10 | --_format 'model' \ 11 | --gpu 7 \ 12 | --iter_lower 800 \ 13 | --iter_upper 1200 14 | 15 | 16 | -------------------------------------------------------------------------------- /FCNs_Wild/scripts/train_city2NMD.sh: -------------------------------------------------------------------------------- 1 | python -u ./src/train_adv.py \ 2 | --weight_path ./pretrained/train_cscape_frontend.npy \ 3 | --city Roma \ 4 | --src_data_path ./data/Cityscapes.txt \ 5 | --tgt_data_path ./data/Roma.txt \ 6 | --method GACA \ 7 | --batch_size 4 \ 8 | --iter_size 4 \ 9 | --max_step 2000 \ 10 | --save_step 200 \ 11 | --train_dir ./trained_weights/ \ 12 | --gpu 5,6 \ 13 | 2>&1 | tee ./logfiles/Roma_GACA.log 14 | -------------------------------------------------------------------------------- /FCNs_Wild/scripts/train_syn2city.sh: -------------------------------------------------------------------------------- 1 | python -u ./src/train_adv.py \ 2 | --weight_path ./pretrained/pretrained_vgg.npy \ 3 | --restore_path ./pretrained/train_synthia_frontend \ 4 | --city syn2real \ 5 |
--src_data_path ./data/synthia_train.txt \ 6 | --tgt_data_path ./data/Cityscapes.txt \ 7 | --method GACA \ 8 | --batch_size 8 \ 9 | --iter_size 2 \ 10 | --max_step 2000 \ 11 | --save_step 200 \ 12 | --train_dir ./trained_weights/ \ 13 | --gpu 3,4 \ 14 | 2>&1 | tee ./logfiles/syn2real_GACA.log 15 | 16 | -------------------------------------------------------------------------------- /FCNs_Wild/src/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/__init__.py -------------------------------------------------------------------------------- /FCNs_Wild/src/ccnn.py: -------------------------------------------------------------------------------- 1 | # -------------------------------------------------------- 2 | # CCNN 3 | # Copyright (c) 2015 [See LICENSE file for details] 4 | # Written by Deepak Pathak, Philipp Krahenbuhl 5 | # -------------------------------------------------------- 6 | 7 | def __setup_path(): 8 | import os, sys, inspect, numpy as np 9 | paths = ['.','..','../build/','../build/release','../build/debug'] 10 | current_path = os.path.split(inspect.getfile( inspect.currentframe() ))[0] 11 | paths = [os.path.realpath(os.path.abspath(os.path.join(current_path,x))) for x in paths] 12 | paths = list( filter( lambda x: os.path.exists(x+'/lib/python/ccnn.so'), paths ) ) 13 | ptime = [os.path.getmtime(x+'/lib/python/ccnn.so') for x in paths] 14 | if len( ptime ): 15 | path = paths[ np.argmax( ptime ) ] 16 | sys.path.insert(0, path+'/lib') 17 | __setup_path() 18 | del __setup_path 19 | from python.ccnn import * 20 | -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/__init__.py -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/back.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/back.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/checked6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/checked6.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/checked6_red.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/checked6_red.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/clearpolygon.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/clearpolygon.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/deleteobject.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/deleteobject.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/exit.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/exit.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/filepath.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/filepath.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/help19.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/help19.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/highlight.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/highlight.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/layerdown.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/layerdown.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/layerup.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/layerup.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/minus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/minus.png -------------------------------------------------------------------------------- 
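The `__setup_path` helper in `FCNs_Wild/src/ccnn.py` above searches a few candidate build directories for a compiled `ccnn.so`, keeps only the ones that actually contain it, and prepends the `lib` folder of the most recently built one to `sys.path` before importing. A minimal standalone sketch of that pattern, with illustrative directory names that are not part of this repository:

```python
import os
import sys

def newest_build_dir(candidates, relpath="lib/python/ccnn.so"):
    # Keep only the candidate directories that actually contain the built extension.
    existing = [d for d in candidates if os.path.exists(os.path.join(d, relpath))]
    if not existing:
        return None
    # Pick the directory whose build artifact has the most recent mtime.
    return max(existing, key=lambda d: os.path.getmtime(os.path.join(d, relpath)))

build = newest_build_dir(["build/release", "build/debug", "build"])
if build is not None:
    # Makes `from python.ccnn import *` resolve against the freshest build.
    sys.path.insert(0, os.path.join(build, "lib"))
```

This mirrors the original's use of `os.path.getmtime` to prefer the freshest of several possible release/debug builds.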
/FCNs_Wild/src/cityscapesscripts/annotation/icons/modify.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/modify.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/newobject.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/newobject.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/next.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/next.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/open.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/open.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/play.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/play.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/plus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/plus.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/save.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/save.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/screenshot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/screenshot.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/screenshotToggle.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/screenshotToggle.png 
-------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/shuffle.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/shuffle.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/undo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/undo.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/annotation/icons/zoom.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/annotation/icons/zoom.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/evaluation/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/evaluation/__init__.py -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/evaluation/addToConfusionMatrix.pyx: -------------------------------------------------------------------------------- 1 | # cython methods to speed-up evaluation 2 | 3 | import numpy as np 4 | cimport cython 5 | cimport numpy as np 6 | import ctypes 7 | 8 | np.import_array() 9 | 10 | cdef extern from "addToConfusionMatrix_impl.c": 11 | void addToConfusionMatrix( const unsigned char* f_prediction_p , 12 | const unsigned char* f_groundTruth_p , 13 | const unsigned int f_width_i , 14 | const unsigned int f_height_i , 15 | unsigned long long* f_confMatrix_p , 16 | const unsigned int f_confMatDim_i ) 17 | 18 | 19 | cdef tonumpyarray(unsigned long long* data, unsigned long long size): 20 | if not (data and size >= 0): raise ValueError 21 | return np.PyArray_SimpleNewFromData(2, [size, size], np.NPY_UINT64, data) 22 | 23 | @cython.boundscheck(False) 24 | def cEvaluatePair( np.ndarray[np.uint8_t , ndim=2] predictionArr , 25 | np.ndarray[np.uint8_t , ndim=2] groundTruthArr , 26 | np.ndarray[np.uint64_t, ndim=2] confMatrix , 27 | evalLabels ): 28 | cdef np.ndarray[np.uint8_t , ndim=2, mode="c"] predictionArr_c 29 | cdef np.ndarray[np.uint8_t , ndim=2, mode="c"] groundTruthArr_c 30 | cdef np.ndarray[np.ulonglong_t, ndim=2, mode="c"] confMatrix_c 31 | 32 | predictionArr_c = np.ascontiguousarray(predictionArr , dtype=np.uint8 ) 33 | groundTruthArr_c = np.ascontiguousarray(groundTruthArr, dtype=np.uint8 ) 34 | confMatrix_c = np.ascontiguousarray(confMatrix , dtype=np.ulonglong) 35 | 36 | cdef np.uint32_t height_ui = predictionArr.shape[1] 37 | cdef np.uint32_t width_ui = predictionArr.shape[0] 38 | cdef np.uint32_t confMatDim_ui = confMatrix.shape[0] 39 | 40 | addToConfusionMatrix(&predictionArr_c[0,0], &groundTruthArr_c[0,0], height_ui, width_ui, &confMatrix_c[0,0], confMatDim_ui) 41 | 42 | confMatrix = 
np.ascontiguousarray(tonumpyarray(&confMatrix_c[0,0], confMatDim_ui)) 43 | 44 | return np.copy(confMatrix) -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/evaluation/addToConfusionMatrix_impl.c: -------------------------------------------------------------------------------- 1 | // cython methods to speed-up evaluation 2 | 3 | void addToConfusionMatrix( const unsigned char* f_prediction_p , 4 | const unsigned char* f_groundTruth_p , 5 | const unsigned int f_width_i , 6 | const unsigned int f_height_i , 7 | unsigned long long* f_confMatrix_p , 8 | const unsigned int f_confMatDim_i ) 9 | { 10 | const unsigned int size_ui = f_height_i * f_width_i; 11 | for (unsigned int i = 0; i < size_ui; ++i) 12 | { 13 | const unsigned char predPx = f_prediction_p [i]; 14 | const unsigned char gtPx = f_groundTruth_p[i]; 15 | f_confMatrix_p[f_confMatDim_i*gtPx + predPx] += 1u; 16 | } 17 | } -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/evaluation/instance.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Instance class 4 | # 5 | import json # needed by toJSON() below 6 | class Instance(object): 7 | instID = 0 8 | labelID = 0 9 | pixelCount = 0 10 | medDist = -1 11 | distConf = 0.0 12 | 13 | def __init__(self, imgNp, instID): 14 | if (instID == -1): 15 | return 16 | self.instID = int(instID) 17 | self.labelID = int(self.getLabelID(instID)) 18 | self.pixelCount = int(self.getInstancePixels(imgNp, instID)) 19 | 20 | def getLabelID(self, instID): 21 | if (instID < 1000): 22 | return instID 23 | else: 24 | return int(instID / 1000) 25 | 26 | def getInstancePixels(self, imgNp, instLabel): 27 | return (imgNp == instLabel).sum() 28 | 29 | def toJSON(self): 30 | return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True, indent=4) 31 | 32 | def toDict(self): 33 | buildDict = {} 34 | buildDict["instID"] = self.instID 35 | buildDict["labelID"] = self.labelID 36 | buildDict["pixelCount"] = self.pixelCount 37 | buildDict["medDist"] = self.medDist 38 | buildDict["distConf"] = self.distConf 39 | return buildDict 40 | 41 | def fromJSON(self, data): 42 | self.instID = int(data["instID"]) 43 | self.labelID = int(data["labelID"]) 44 | self.pixelCount = int(data["pixelCount"]) 45 | if ("medDist" in data): 46 | self.medDist = float(data["medDist"]) 47 | self.distConf = float(data["distConf"]) 48 | 49 | def __str__(self): 50 | return "("+str(self.instID)+")" -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/evaluation/instances2dict.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Convert instances from png files to a dictionary 4 | # 5 | 6 | from __future__ import print_function 7 | import os, sys 8 | 9 | # Cityscapes imports 10 | from instance import * 11 | sys.path.append( os.path.normpath( os.path.join( os.path.dirname( __file__ ) , '..'
, 'helpers' ) ) ) 12 | from csHelpers import * 13 | 14 | def instances2dict(imageFileList, verbose=False): 15 | imgCount = 0 16 | instanceDict = {} 17 | 18 | if not isinstance(imageFileList, list): 19 | imageFileList = [imageFileList] 20 | 21 | if verbose: 22 | print("Processing {} images...".format(len(imageFileList))) 23 | 24 | for imageFileName in imageFileList: 25 | # Load image 26 | img = Image.open(imageFileName) 27 | 28 | # Image as numpy array 29 | imgNp = np.array(img) 30 | 31 | # Initialize label categories 32 | instances = {} 33 | for label in labels: 34 | instances[label.name] = [] 35 | 36 | # Loop through all instance ids in instance image 37 | for instanceId in np.unique(imgNp): 38 | instanceObj = Instance(imgNp, instanceId) 39 | 40 | instances[id2label[instanceObj.labelID].name].append(instanceObj.toDict()) 41 | 42 | imgKey = os.path.abspath(imageFileName) 43 | instanceDict[imgKey] = instances 44 | imgCount += 1 45 | 46 | if verbose: 47 | print("\rImages Processed: {}".format(imgCount), end=' ') 48 | sys.stdout.flush() 49 | 50 | if verbose: 51 | print("") 52 | 53 | return instanceDict 54 | 55 | def main(argv): 56 | fileList = [] 57 | if (len(argv) > 2): 58 | for arg in argv: 59 | if ("png" in arg): 60 | fileList.append(arg) 61 | instances2dict(fileList, True) 62 | 63 | if __name__ == "__main__": 64 | main(sys.argv[1:]) 65 | -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/evaluation/setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Enable cython support for eval scripts 4 | # Run as 5 | # setup.py build_ext --inplace 6 | # 7 | # WARNING: Only tested for Ubuntu 64bit OS. 8 | 9 | try: 10 | from distutils.core import setup 11 | from Cython.Build import cythonize 12 | except: 13 | print("Unable to setup. Please use pip to install: cython") 14 | print("sudo pip install cython") 15 | import os 16 | import numpy 17 | 18 | os.environ["CC"] = "g++" 19 | os.environ["CXX"] = "g++" 20 | 21 | setup(ext_modules = cythonize("addToConfusionMatrix.pyx"),include_dirs=[numpy.get_include()]) 22 | -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/helpers/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/helpers/__init__.py -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/preparation/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/preparation/__init__.py -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/preparation/createTrainIdInstanceImgs.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Converts the polygonal annotations of the Cityscapes dataset 4 | # to images, where pixel values encode the ground truth classes and the 5 | # individual instance of each class.
6 | # 7 | # The Cityscapes downloads already include such images 8 | # a) *color.png : the class is encoded by its color 9 | # b) *labelIds.png : the class is encoded by its ID 10 | # c) *instanceIds.png : the class and the instance are encoded by an instance ID 11 | # 12 | # With this tool, you can generate option 13 | # d) *instanceTrainIds.png : the class and the instance are encoded by an instance training ID 14 | # This encoding might come in handy for training purposes. You can use 15 | # the file labels.py to define the training IDs that suit your needs. 16 | # Note, however, that once you submit or evaluate results, the regular 17 | # IDs are needed. 18 | # 19 | # Please refer to 'json2instanceImg.py' for an explanation of instance IDs. 20 | # 21 | # Uses the converter tool in 'json2instanceImg.py' 22 | # Uses the mapping defined in 'labels.py' 23 | # 24 | 25 | # python imports 26 | from __future__ import print_function 27 | import os, glob, sys 28 | 29 | # cityscapes imports 30 | sys.path.append( os.path.normpath( os.path.join( os.path.dirname( __file__ ) , '..' , 'helpers' ) ) ) 31 | from csHelpers import printError 32 | from json2instanceImg import json2instanceImg 33 | 34 | 35 | # The main method 36 | def main(): 37 | # Where to look for Cityscapes 38 | if 'CITYSCAPES_DATASET' in os.environ: 39 | cityscapesPath = os.environ['CITYSCAPES_DATASET'] 40 | else: 41 | cityscapesPath = os.path.join(os.path.dirname(os.path.realpath(__file__)),'..','..') 42 | # how to search for all ground truth 43 | searchFine = os.path.join( cityscapesPath , "gtFine" , "*" , "*" , "*_gt*_polygons.json" ) 44 | searchCoarse = os.path.join( cityscapesPath , "gtCoarse" , "*" , "*" , "*_gt*_polygons.json" ) 45 | 46 | # search files 47 | filesFine = glob.glob( searchFine ) 48 | filesFine.sort() 49 | filesCoarse = glob.glob( searchCoarse ) 50 | filesCoarse.sort() 51 | 52 | # concatenate fine and coarse 53 | files = filesFine + filesCoarse 54 | # files = filesFine # use this line if fine is enough for now. 55 | 56 | # quit if we did not find anything 57 | if not files: 58 | printError( "Did not find any files. Please consult the README." ) 59 | 60 | # a bit verbose 61 | print("Processing {} annotation files".format(len(files))) 62 | 63 | # iterate through files 64 | progress = 0 65 | print("Progress: {:>3} %".format( progress * 100 / len(files) ), end=' ') 66 | for f in files: 67 | # create the output filename 68 | dst = f.replace( "_polygons.json" , "_instanceTrainIds.png" ) 69 | 70 | # do the conversion 71 | try: 72 | json2instanceImg( f , dst , "trainIds" ) 73 | except: 74 | print("Failed to convert: {}".format(f)) 75 | raise 76 | 77 | # status 78 | progress += 1 79 | print("\rProgress: {:>3} %".format( progress * 100 / len(files) ), end=' ') 80 | sys.stdout.flush() 81 | 82 | 83 | # call the main 84 | if __name__ == "__main__": 85 | main() 86 | -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/preparation/createTrainIdLabelImgs.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Converts the polygonal annotations of the Cityscapes dataset 4 | # to images, where pixel values encode ground truth classes.
5 | # 6 | # The Cityscapes downloads already include such images 7 | # a) *color.png : the class is encoded by its color 8 | # b) *labelIds.png : the class is encoded by its ID 9 | # c) *instanceIds.png : the class and the instance are encoded by an instance ID 10 | # 11 | # With this tool, you can generate option 12 | # d) *labelTrainIds.png : the class is encoded by its training ID 13 | # This encoding might come in handy for training purposes. You can use 14 | # the file labels.py to define the training IDs that suit your needs. 15 | # Note, however, that once you submit or evaluate results, the regular 16 | # IDs are needed. 17 | # 18 | # Uses the converter tool in 'json2labelImg.py' 19 | # Uses the mapping defined in 'labels.py' 20 | # 21 | 22 | # python imports 23 | from __future__ import print_function 24 | import os, glob, sys 25 | import pdb 26 | # cityscapes imports 27 | sys.path.append( os.path.normpath( os.path.join( os.path.dirname( __file__ ) , '..' , 'helpers' ) ) ) 28 | from csHelpers import printError 29 | from json2labelImg import json2labelImg 30 | 31 | # The main method 32 | def main(): 33 | # Where to look for Cityscapes 34 | if 'CITYSCAPES_DATASET' in os.environ: 35 | cityscapesPath = os.environ['CITYSCAPES_DATASET'] 36 | else: 37 | cityscapesPath = os.path.join(os.path.dirname(os.path.realpath(__file__)),'..','..') 38 | # how to search for all ground truth 39 | #searchFine = os.path.join( cityscapesPath , "gtFine" , "*" , "*" , "*_gt*_polygons.json" ) 40 | searchFine = os.path.join( '/home/bctsai/gtFine/val' ,"*", "*_gt*_polygons.json" ) ###CVPR_eval 41 | searchCoarse = os.path.join( cityscapesPath , "gtCoarse" , "*" , "*" , "*_gt*_polygons.json" ) 42 | 43 | # search files 44 | #pdb.set_trace() 45 | filesFine = glob.glob( searchFine ) 46 | filesFine.sort() 47 | filesCoarse = glob.glob( searchCoarse ) 48 | filesCoarse.sort() 49 | 50 | # concatenate fine and coarse 51 | files = filesFine + filesCoarse 52 | # files = filesFine # use this line if fine is enough for now. 53 | 54 | # quit if we did not find anything 55 | if not files: 56 | printError( "Did not find any files. Please consult the README."
) 57 | 58 | # a bit verbose 59 | print("Processing {} annotation files".format(len(files))) 60 | 61 | # iterate through files 62 | progress = 0 63 | print("Progress: {:>3} %".format( progress * 100 / len(files) ), end=' ') 64 | for f in files: 65 | # create the output filename 66 | dst = f.replace( "_polygons.json" , "_CVPR_eval.png" ) 67 | 68 | # do the conversion 69 | try: 70 | json2labelImg( f , dst , "trainIds" ) 71 | except: 72 | print("Failed to convert: {}".format(f)) 73 | raise 74 | 75 | # status 76 | progress += 1 77 | print("\rProgress: {:>3} %".format( progress * 100 / len(files) ), end=' ') 78 | sys.stdout.flush() 79 | 80 | 81 | # call the main 82 | if __name__ == "__main__": 83 | main() 84 | -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/viewer/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/__init__.py -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/viewer/icons/back.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/icons/back.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/viewer/icons/disp.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/icons/disp.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/viewer/icons/exit.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/icons/exit.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/viewer/icons/filepath.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/icons/filepath.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/viewer/icons/help19.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/icons/help19.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/viewer/icons/minus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/icons/minus.png -------------------------------------------------------------------------------- 
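The `addToConfusionMatrix` kernel shown in the evaluation files above only accumulates a (ground truth, prediction) pixel-count matrix; the per-class IoU that `evalPixelLevelSemanticLabeling.py` reports is then derived from that matrix. A short NumPy sketch of the standard derivation (a simplification, not the repo's exact code, which additionally skips labels excluded from evaluation):

```python
import numpy as np

def per_class_iou(conf_matrix):
    # conf_matrix[g, p] counts pixels with ground-truth class g predicted as class p.
    conf = conf_matrix.astype(np.float64)
    tp = np.diag(conf)          # correctly classified pixels per class
    fn = conf.sum(axis=1) - tp  # ground-truth pixels of the class that were missed
    fp = conf.sum(axis=0) - tp  # pixels of other classes assigned to this class
    denom = tp + fp + fn
    # Undefined IoU (class absent from both prediction and ground truth) becomes NaN.
    return np.where(denom > 0, tp / np.where(denom > 0, denom, 1.0), np.nan)

# Example: class 1 is half-confused with class 0.
conf = np.array([[80, 0], [10, 10]], dtype=np.uint64)
print(per_class_iou(conf))  # -> [0.888..., 0.5]
```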
/FCNs_Wild/src/cityscapesscripts/viewer/icons/next.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/icons/next.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/viewer/icons/open.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/icons/open.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/viewer/icons/play.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/icons/play.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/viewer/icons/plus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/icons/plus.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/viewer/icons/shuffle.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/icons/shuffle.png -------------------------------------------------------------------------------- /FCNs_Wild/src/cityscapesscripts/viewer/icons/zoom.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/FCNs_Wild/src/cityscapesscripts/viewer/icons/zoom.png -------------------------------------------------------------------------------- /MCD_DA_seg/.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | 28 | # PyInstaller 29 | # Usually these files are written by a python script from a template 30 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
31 | *.manifest 32 | *.spec 33 | 34 | # Installer logs 35 | pip-log.txt 36 | pip-delete-this-directory.txt 37 | 38 | # Unit test / coverage reports 39 | htmlcov/ 40 | .tox/ 41 | .coverage 42 | .coverage.* 43 | .cache 44 | nosetests.xml 45 | coverage.xml 46 | *.cover 47 | .hypothesis/ 48 | 49 | # Translations 50 | *.mo 51 | *.pot 52 | 53 | # Django stuff: 54 | *.log 55 | local_settings.py 56 | 57 | # Flask stuff: 58 | instance/ 59 | .webassets-cache 60 | 61 | # Scrapy stuff: 62 | .scrapy 63 | 64 | # Sphinx documentation 65 | docs/_build/ 66 | 67 | # PyBuilder 68 | target/ 69 | 70 | # Jupyter Notebook 71 | .ipynb_checkpoints 72 | 73 | # pyenv 74 | .python-version 75 | 76 | # celery beat schedule file 77 | celerybeat-schedule 78 | 79 | # SageMath parsed files 80 | *.sage.py 81 | 82 | # dotenv 83 | .env 84 | 85 | # virtualenv 86 | .venv 87 | venv/ 88 | ENV/ 89 | 90 | # Spyder project settings 91 | .spyderproject 92 | .spyproject 93 | 94 | # Rope project settings 95 | .ropeproject 96 | 97 | # mkdocs documentation 98 | /site 99 | 100 | # mypy 101 | .mypy_cache/ 102 | 103 | # 104 | *.xml 105 | *.iml -------------------------------------------------------------------------------- /MCD_DA_seg/README.md: -------------------------------------------------------------------------------- 1 | # Maximum Classifier Discrepancy for Domain Adaptation with Semantic Segmentation Implemented by PyTorch 2 | 3 | We use the code provided by [https://github.com/mil-tokyo/MCD_DA](https://github.com/mil-tokyo/MCD_DA) and additionally create data-loaders for our dataset. 4 | 5 | *** 6 | ## Installation 7 | Use **Python 2.x** 8 | 9 | First, you need to install PyTorch following [the official site instruction](http://pytorch.org/). 10 | 11 | Next, please install the required libraries as follows; 12 | ``` 13 | pip install -r requirements.txt 14 | ``` 15 | 16 | ## Usage 17 | 18 | ### Dataset 19 | 20 | * Download [Cityscapes Dataset](https://www.cityscapes-dataset.com/) 21 | * Download [Synthia Dataset](http://synthia-dataset.com/download-2/) 22 | * download the subset "SYNTHIA-RAND-CITYSCAPES" 23 | * Download [Our Dataset](https://yihsinchen.github.io/segmentation_adaptation/#Dataset) 24 | * contains four subsets --- Taipei, Tokyo, Roma, Rio --- used as target domain (only testing data has annotations) 25 | * Check data-root-path in datasets.py 26 | 27 | ### Testing 28 | Download the trained model (synthia-to-cityscapes w. dilated-ResNet @60-epoch): 29 | 30 | ``` 31 | $ cd MCD_DA_seg 32 | $ sh download_demo.sh 33 | ``` 34 | 35 | ##### Infer the model: 36 | 37 | ``` 38 | python adapt_tester.py city ./train_output/synthia-train2city-train_3ch/pth/MCD-normal-drn_d_105-res50-60.pth.tar 39 | ``` 40 | 41 | (you can use the script "run_test.sh") 42 | 43 | Results will be saved under "./test_output/synthia-train2city-train_3ch---city-val/MCD-normal-drn_d_105-res50-60.tar" 44 | 45 | ##### Evaluation: 46 | We replace the original evaluation code with the script provided by Cityscapes-Dataset. 
(use "run_eval.sh") 47 | 48 | ``` 49 | python ./cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py 50 | --gt {GroundTruth File for evalation} 51 | --pd ./test_output/synthia-train2city-train_3ch---city-val/MCD-normal-drn_d_105-60.tar/label/ 52 | ``` 53 | 54 | ### Training Examples 55 | - Dataset 56 | - Source: Synthia (synthia), Target: Cityscapes (city) 57 | - Network 58 | - Dilated Residual Network (drn_d_105) 59 | 60 | We train the model following the assumptions above; 61 | ``` 62 | python adapt_trainer.py synthia city --net drn_d_105 63 | ``` 64 | 65 | Trained models will be saved as "./train_output/synthia-train2city-train_3ch/pth/MCD-normal-drn_d_105-res50-EPOCH.pth.tar" 66 | 67 | The training scripts for adapt from (A) Synthia dataset to Cityscapes dataset and (B) Cityscapes dataset to Our dataset are prepared in "run_train_syn2city.sh" and "run_train_city2ours.sh". 68 | 69 | For using our dataset "CrossCountries" as the Target domain, in above scripts, replace Dataset-Name with the subset name {Taipei, Tokyo, Roma, Rio} 70 | 71 | ## Reference codes 72 | - [https://github.com/mil-tokyo/MCD_DA](https://github.com/mil-tokyo/MCD_DA) 73 | -------------------------------------------------------------------------------- /MCD_DA_seg/backup/run_eval.sh: -------------------------------------------------------------------------------- 1 | python eval.py city \ 2 | ./test_output/synthia-train2city-train_3ch---city-val/MCD-normal-drn_d_105-9.tar/label 3 | -------------------------------------------------------------------------------- /MCD_DA_seg/backup/run_src.sh: -------------------------------------------------------------------------------- 1 | export CUDA_VISIBLE_DEVICES="3"_ 2 | python source_trainer.py city --net drn_d_105 \ 3 | --train_img_shape 512 256 \ 4 | --batch_size 16 \ 5 | --epochs 40 6 | #--resume ./train_output/city-train2Taipei-train_3ch/pth/MCD-normal-drn_d_105-17.pth.tar 7 | 8 | -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/__init__.py -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/back.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/back.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/checked6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/checked6.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/checked6_red.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/checked6_red.png 
-------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/clearpolygon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/clearpolygon.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/deleteobject.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/deleteobject.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/exit.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/exit.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/filepath.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/filepath.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/help19.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/help19.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/highlight.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/highlight.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/layerdown.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/layerdown.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/layerup.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/layerup.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/minus.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/minus.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/modify.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/modify.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/newobject.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/newobject.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/next.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/next.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/open.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/open.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/play.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/play.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/plus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/plus.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/save.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/save.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/screenshot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/screenshot.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/screenshotToggle.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/screenshotToggle.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/shuffle.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/shuffle.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/undo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/undo.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/annotation/icons/zoom.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/annotation/icons/zoom.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/evaluation/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/evaluation/__init__.py -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/evaluation/addToConfusionMatrix.pyx: -------------------------------------------------------------------------------- 1 | # cython methods to speed-up evaluation 2 | 3 | import numpy as np 4 | cimport cython 5 | cimport numpy as np 6 | import ctypes 7 | 8 | np.import_array() 9 | 10 | cdef extern from "addToConfusionMatrix_impl.c": 11 | void addToConfusionMatrix( const unsigned char* f_prediction_p , 12 | const unsigned char* f_groundTruth_p , 13 | const unsigned int f_width_i , 14 | const unsigned int f_height_i , 15 | unsigned long long* f_confMatrix_p , 16 | const unsigned int f_confMatDim_i ) 17 | 18 | 19 | cdef tonumpyarray(unsigned long long* data, unsigned long long size): 20 | if not (data and size >= 0): raise ValueError 21 | return np.PyArray_SimpleNewFromData(2, [size, size], np.NPY_UINT64, data) 22 | 23 | @cython.boundscheck(False) 24 | def cEvaluatePair( np.ndarray[np.uint8_t , ndim=2] predictionArr , 25 | np.ndarray[np.uint8_t , ndim=2] groundTruthArr , 26 | np.ndarray[np.uint64_t, ndim=2] confMatrix , 27 | evalLabels ): 28 | cdef np.ndarray[np.uint8_t , ndim=2, mode="c"] predictionArr_c 29 | cdef np.ndarray[np.uint8_t , ndim=2, mode="c"] groundTruthArr_c 30 | cdef np.ndarray[np.ulonglong_t, ndim=2, mode="c"] confMatrix_c 31 | 32 | predictionArr_c = np.ascontiguousarray(predictionArr , dtype=np.uint8 ) 33 | groundTruthArr_c = np.ascontiguousarray(groundTruthArr, dtype=np.uint8 ) 34 | confMatrix_c = np.ascontiguousarray(confMatrix , dtype=np.ulonglong) 35 | 36 | cdef np.uint32_t height_ui = predictionArr.shape[1] 37 | cdef np.uint32_t width_ui = predictionArr.shape[0] 38 
| cdef np.uint32_t confMatDim_ui = confMatrix.shape[0] 39 | 40 | addToConfusionMatrix(&predictionArr_c[0,0], &groundTruthArr_c[0,0], height_ui, width_ui, &confMatrix_c[0,0], confMatDim_ui) 41 | 42 | confMatrix = np.ascontiguousarray(tonumpyarray(&confMatrix_c[0,0], confMatDim_ui)) 43 | 44 | return np.copy(confMatrix) -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/evaluation/addToConfusionMatrix_impl.c: -------------------------------------------------------------------------------- 1 | // cython methods to speed-up evaluation 2 | 3 | void addToConfusionMatrix( const unsigned char* f_prediction_p , 4 | const unsigned char* f_groundTruth_p , 5 | const unsigned int f_width_i , 6 | const unsigned int f_height_i , 7 | unsigned long long* f_confMatrix_p , 8 | const unsigned int f_confMatDim_i ) 9 | { 10 | const unsigned int size_ui = f_height_i * f_width_i; 11 | for (unsigned int i = 0; i < size_ui; ++i) 12 | { 13 | const unsigned char predPx = f_prediction_p [i]; 14 | const unsigned char gtPx = f_groundTruth_p[i]; 15 | f_confMatrix_p[f_confMatDim_i*gtPx + predPx] += 1u; 16 | } 17 | } -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/evaluation/instance.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Instance class 4 | # 5 | import json # needed by toJSON() below 6 | class Instance(object): 7 | instID = 0 8 | labelID = 0 9 | pixelCount = 0 10 | medDist = -1 11 | distConf = 0.0 12 | 13 | def __init__(self, imgNp, instID): 14 | if (instID == -1): 15 | return 16 | self.instID = int(instID) 17 | self.labelID = int(self.getLabelID(instID)) 18 | self.pixelCount = int(self.getInstancePixels(imgNp, instID)) 19 | 20 | def getLabelID(self, instID): 21 | if (instID < 1000): 22 | return instID 23 | else: 24 | return int(instID / 1000) 25 | 26 | def getInstancePixels(self, imgNp, instLabel): 27 | return (imgNp == instLabel).sum() 28 | 29 | def toJSON(self): 30 | return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True, indent=4) 31 | 32 | def toDict(self): 33 | buildDict = {} 34 | buildDict["instID"] = self.instID 35 | buildDict["labelID"] = self.labelID 36 | buildDict["pixelCount"] = self.pixelCount 37 | buildDict["medDist"] = self.medDist 38 | buildDict["distConf"] = self.distConf 39 | return buildDict 40 | 41 | def fromJSON(self, data): 42 | self.instID = int(data["instID"]) 43 | self.labelID = int(data["labelID"]) 44 | self.pixelCount = int(data["pixelCount"]) 45 | if ("medDist" in data): 46 | self.medDist = float(data["medDist"]) 47 | self.distConf = float(data["distConf"]) 48 | 49 | def __str__(self): 50 | return "("+str(self.instID)+")" -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/evaluation/instances2dict.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Convert instances from png files to a dictionary 4 | # 5 | 6 | from __future__ import print_function 7 | import os, sys 8 | 9 | # Cityscapes imports 10 | from instance import * 11 | sys.path.append( os.path.normpath( os.path.join( os.path.dirname( __file__ ) , '..'
, 'helpers' ) ) ) 12 | from csHelpers import * 13 | 14 | def instances2dict(imageFileList, verbose=False): 15 | imgCount = 0 16 | instanceDict = {} 17 | 18 | if not isinstance(imageFileList, list): 19 | imageFileList = [imageFileList] 20 | 21 | if verbose: 22 | print("Processing {} images...".format(len(imageFileList))) 23 | 24 | for imageFileName in imageFileList: 25 | # Load image 26 | img = Image.open(imageFileName) 27 | 28 | # Image as numpy array 29 | imgNp = np.array(img) 30 | 31 | # Initialize label categories 32 | instances = {} 33 | for label in labels: 34 | instances[label.name] = [] 35 | 36 | # Loop through all instance ids in instance image 37 | for instanceId in np.unique(imgNp): 38 | instanceObj = Instance(imgNp, instanceId) 39 | 40 | instances[id2label[instanceObj.labelID].name].append(instanceObj.toDict()) 41 | 42 | imgKey = os.path.abspath(imageFileName) 43 | instanceDict[imgKey] = instances 44 | imgCount += 1 45 | 46 | if verbose: 47 | print("\rImages Processed: {}".format(imgCount), end=' ') 48 | sys.stdout.flush() 49 | 50 | if verbose: 51 | print("") 52 | 53 | return instanceDict 54 | 55 | def main(argv): 56 | fileList = [] 57 | if (len(argv) > 2): 58 | for arg in argv: 59 | if ("png" in arg): 60 | fileList.append(arg) 61 | instances2dict(fileList, True) 62 | 63 | if __name__ == "__main__": 64 | main(sys.argv[1:]) 65 | -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/evaluation/setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Enable cython support for eval scripts 4 | # Run as 5 | # setup.py build_ext --inplace 6 | # 7 | # WARNING: Only tested for Ubuntu 64bit OS. 8 | 9 | try: 10 | from distutils.core import setup 11 | from Cython.Build import cythonize 12 | except: 13 | print("Unable to setup. Please use pip to install: cython") 14 | print("sudo pip install cython") 15 | import os 16 | import numpy 17 | 18 | os.environ["CC"] = "g++" 19 | os.environ["CXX"] = "g++" 20 | 21 | setup(ext_modules = cythonize("addToConfusionMatrix.pyx"),include_dirs=[numpy.get_include()]) 22 | -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/helpers/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/helpers/__init__.py -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/preparation/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/preparation/__init__.py -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/preparation/createTrainIdInstanceImgs.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Converts the polygonal annotations of the Cityscapes dataset 4 | # to images, where pixel values encode the ground truth classes and the 5 | # individual instance of each class.
6 | # 7 | # The Cityscapes downloads already include such images 8 | # a) *color.png : the class is encoded by its color 9 | # b) *labelIds.png : the class is encoded by its ID 10 | # c) *instanceIds.png : the class and the instance are encoded by an instance ID 11 | # 12 | # With this tool, you can generate option 13 | # d) *instanceTrainIds.png : the class and the instance are encoded by an instance training ID 14 | # This encoding might come in handy for training purposes. You can use 15 | # the file labels.py to define the training IDs that suit your needs. 16 | # Note, however, that once you submit or evaluate results, the regular 17 | # IDs are needed. 18 | # 19 | # Please refer to 'json2instanceImg.py' for an explanation of instance IDs. 20 | # 21 | # Uses the converter tool in 'json2instanceImg.py' 22 | # Uses the mapping defined in 'labels.py' 23 | # 24 | 25 | # python imports 26 | from __future__ import print_function 27 | import os, glob, sys 28 | 29 | # cityscapes imports 30 | sys.path.append( os.path.normpath( os.path.join( os.path.dirname( __file__ ) , '..' , 'helpers' ) ) ) 31 | from csHelpers import printError 32 | from json2instanceImg import json2instanceImg 33 | 34 | 35 | # The main method 36 | def main(): 37 | # Where to look for Cityscapes 38 | if 'CITYSCAPES_DATASET' in os.environ: 39 | cityscapesPath = os.environ['CITYSCAPES_DATASET'] 40 | else: 41 | cityscapesPath = os.path.join(os.path.dirname(os.path.realpath(__file__)),'..','..') 42 | # how to search for all ground truth 43 | searchFine = os.path.join( cityscapesPath , "gtFine" , "*" , "*" , "*_gt*_polygons.json" ) 44 | searchCoarse = os.path.join( cityscapesPath , "gtCoarse" , "*" , "*" , "*_gt*_polygons.json" ) 45 | 46 | # search files 47 | filesFine = glob.glob( searchFine ) 48 | filesFine.sort() 49 | filesCoarse = glob.glob( searchCoarse ) 50 | filesCoarse.sort() 51 | 52 | # concatenate fine and coarse 53 | files = filesFine + filesCoarse 54 | # files = filesFine # use this line if fine is enough for now. 55 | 56 | # quit if we did not find anything 57 | if not files: 58 | printError( "Did not find any files. Please consult the README." ) 59 | 60 | # a bit verbose 61 | print("Processing {} annotation files".format(len(files))) 62 | 63 | # iterate through files 64 | progress = 0 65 | print("Progress: {:>3} %".format( progress * 100 / len(files) ), end=' ') 66 | for f in files: 67 | # create the output filename 68 | dst = f.replace( "_polygons.json" , "_instanceTrainIds.png" ) 69 | 70 | # do the conversion 71 | try: 72 | json2instanceImg( f , dst , "trainIds" ) 73 | except: 74 | print("Failed to convert: {}".format(f)) 75 | raise 76 | 77 | # status 78 | progress += 1 79 | print("\rProgress: {:>3} %".format( progress * 100 / len(files) ), end=' ') 80 | sys.stdout.flush() 81 | 82 | 83 | # call the main 84 | if __name__ == "__main__": 85 | main() 86 | -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/preparation/createTrainIdLabelImgs.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Converts the polygonal annotations of the Cityscapes dataset 4 | # to images, where pixel values encode ground truth classes.
5 | # 6 | # The Cityscapes downloads already include such images 7 | # a) *color.png : the class is encoded by its color 8 | # b) *labelIds.png : the class is encoded by its ID 9 | # c) *instanceIds.png : the class and the instance are encoded by an instance ID 10 | # 11 | # With this tool, you can generate option 12 | # d) *labelTrainIds.png : the class is encoded by its training ID 13 | # This encoding might come in handy for training purposes. You can use 14 | # the file labels.py to define the training IDs that suit your needs. 15 | # Note, however, that once you submit or evaluate results, the regular 16 | # IDs are needed. 17 | # 18 | # Uses the converter tool in 'json2labelImg.py' 19 | # Uses the mapping defined in 'labels.py' 20 | # 21 | 22 | # python imports 23 | from __future__ import print_function 24 | import os, glob, sys 25 | import pdb 26 | # cityscapes imports 27 | sys.path.append( os.path.normpath( os.path.join( os.path.dirname( __file__ ) , '..' , 'helpers' ) ) ) 28 | from csHelpers import printError 29 | from json2labelImg import json2labelImg 30 | 31 | # The main method 32 | def main(): 33 | # Where to look for Cityscapes 34 | if 'CITYSCAPES_DATASET' in os.environ: 35 | cityscapesPath = os.environ['CITYSCAPES_DATASET'] 36 | else: 37 | cityscapesPath = os.path.join(os.path.dirname(os.path.realpath(__file__)),'..','..') 38 | # how to search for all ground truth 39 | #searchFine = os.path.join( cityscapesPath , "gtFine" , "*" , "*" , "*_gt*_polygons.json" ) 40 | searchFine = os.path.join( '/home/bctsai/gtFine/val' ,"*", "*_gt*_polygons.json" ) ###CVPR_eval 41 | searchCoarse = os.path.join( cityscapesPath , "gtCoarse" , "*" , "*" , "*_gt*_polygons.json" ) 42 | 43 | # search files 44 | #pdb.set_trace() 45 | filesFine = glob.glob( searchFine ) 46 | filesFine.sort() 47 | filesCoarse = glob.glob( searchCoarse ) 48 | filesCoarse.sort() 49 | 50 | # concatenate fine and coarse 51 | files = filesFine + filesCoarse 52 | # files = filesFine # use this line if fine is enough for now. 53 | 54 | # quit if we did not find anything 55 | if not files: 56 | printError( "Did not find any files. Please consult the README."
) 57 | 58 | # a bit verbose 59 | print("Processing {} annotation files".format(len(files))) 60 | 61 | # iterate through files 62 | progress = 0 63 | print("Progress: {:>3} %".format( progress * 100 / len(files) ), end=' ') 64 | for f in files: 65 | # create the output filename 66 | dst = f.replace( "_polygons.json" , "_CVPR_eval.png" ) 67 | 68 | # do the conversion 69 | try: 70 | json2labelImg( f , dst , "trainIds" ) 71 | except: 72 | print("Failed to convert: {}".format(f)) 73 | raise 74 | 75 | # status 76 | progress += 1 77 | print("\rProgress: {:>3} %".format( progress * 100 / len(files) ), end=' ') 78 | sys.stdout.flush() 79 | 80 | 81 | # call the main 82 | if __name__ == "__main__": 83 | main() 84 | -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/__init__.py -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/icons/back.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/icons/back.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/icons/disp.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/icons/disp.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/icons/exit.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/icons/exit.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/icons/filepath.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/icons/filepath.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/icons/help19.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/icons/help19.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/icons/minus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/icons/minus.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/icons/next.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/icons/next.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/icons/open.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/icons/open.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/icons/play.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/icons/play.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/icons/plus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/icons/plus.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/icons/shuffle.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/icons/shuffle.png -------------------------------------------------------------------------------- /MCD_DA_seg/cityscapesscripts/viewer/icons/zoom.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/cityscapesscripts/viewer/icons/zoom.png -------------------------------------------------------------------------------- /MCD_DA_seg/data_path/countries/imgs/val/Rio.txt: -------------------------------------------------------------------------------- 1 | val/Rio/pano_00002_2_180.png 2 | val/Rio/pano_00003_1_180.png 3 | val/Rio/pano_00004_1_180.png 4 | val/Rio/pano_00009_3_180.png 5 | val/Rio/pano_00010_3_0.png 6 | val/Rio/pano_00016_3_0.png 7 | val/Rio/pano_00019_0_180.png 8 | val/Rio/pano_00020_0_0.png 9 | val/Rio/pano_00021_5_180.png 10 | val/Rio/pano_00025_0_0.png 11 | val/Rio/pano_00031_1_180.png 12 | val/Rio/pano_00033_1_0.png 13 | val/Rio/pano_00045_6_180.png 14 | val/Rio/pano_00047_0_180.png 15 | val/Rio/pano_00048_1_0.png 16 | val/Rio/pano_00048_3_180.png 17 | val/Rio/pano_00053_4_0.png 18 | val/Rio/pano_00056_3_180.png 19 | val/Rio/pano_00059_0_0.png 20 | val/Rio/pano_00061_4_0.png 21 | val/Rio/pano_00064_0_0.png 22 | val/Rio/pano_00068_0_0.png 23 | val/Rio/pano_00074_0_180.png 24 | val/Rio/pano_00083_0_180.png 25 | val/Rio/pano_00092_6_0.png 26 | val/Rio/pano_00094_2_180.png 27 | val/Rio/pano_00095_5_180.png 28 | val/Rio/pano_00096_0_180.png 29 | val/Rio/pano_00105_1_180.png 30 | val/Rio/pano_00109_2_0.png 31 | val/Rio/pano_00112_3_0.png 32 | val/Rio/pano_00116_0_0.png 33 | val/Rio/pano_00126_1_180.png 34 | val/Rio/pano_00128_0_0.png 35 | val/Rio/pano_00129_4_180.png 36 | 
val/Rio/pano_00139_1_180.png 37 | val/Rio/pano_00155_1_180.png 38 | val/Rio/pano_00160_0_180.png 39 | val/Rio/pano_00164_1_0.png 40 | val/Rio/pano_00166_1_0.png 41 | val/Rio/pano_00172_1_0.png 42 | val/Rio/pano_00176_1_180.png 43 | val/Rio/pano_00179_0_180.png 44 | val/Rio/pano_00183_0_180.png 45 | val/Rio/pano_00185_0_0.png 46 | val/Rio/pano_00188_1_180.png 47 | val/Rio/pano_00189_2_0.png 48 | val/Rio/pano_00197_3_0.png 49 | val/Rio/pano_00199_2_0.png 50 | val/Rio/pano_00202_1_0.png 51 | val/Rio/pano_00202_2_180.png 52 | val/Rio/pano_00204_0_180.png 53 | val/Rio/pano_00205_3_0.png 54 | val/Rio/pano_00205_3_180.png 55 | val/Rio/pano_00207_1_0.png 56 | val/Rio/pano_00207_1_180.png 57 | val/Rio/pano_00207_7_180.png 58 | val/Rio/pano_00217_2_0.png 59 | val/Rio/pano_00220_0_180.png 60 | val/Rio/pano_00221_3_0.png 61 | val/Rio/pano_00221_4_180.png 62 | val/Rio/pano_00222_2_180.png 63 | val/Rio/pano_00226_0_0.png 64 | val/Rio/pano_00226_1_180.png 65 | val/Rio/pano_00232_2_180.png 66 | val/Rio/pano_00232_5_0.png 67 | val/Rio/pano_00235_5_180.png 68 | val/Rio/pano_00235_7_0.png 69 | val/Rio/pano_00236_3_180.png 70 | val/Rio/pano_00238_2_0.png 71 | val/Rio/pano_00238_5_180.png 72 | val/Rio/pano_00239_2_0.png 73 | val/Rio/pano_00358_5_180.png 74 | val/Rio/pano_00370_0_180.png 75 | val/Rio/pano_00373_0_180.png 76 | val/Rio/pano_00376_3_0.png 77 | val/Rio/pano_00379_1_180.png 78 | val/Rio/pano_00386_0_0.png 79 | val/Rio/pano_00396_1_180.png 80 | val/Rio/pano_00399_0_180.png 81 | val/Rio/pano_00401_1_180.png 82 | val/Rio/pano_00403_3_180.png 83 | val/Rio/pano_00427_0_0.png 84 | val/Rio/pano_00430_3_0.png 85 | val/Rio/pano_00443_0_180.png 86 | val/Rio/pano_00459_0_180.png 87 | val/Rio/pano_00460_6_0.png 88 | val/Rio/pano_00531_1_0.png 89 | val/Rio/pano_00540_3_0.png 90 | val/Rio/pano_00541_0_180.png 91 | val/Rio/pano_00554_0_180.png 92 | val/Rio/pano_00558_1_0.png 93 | val/Rio/pano_00584_2_0.png 94 | val/Rio/pano_00587_1_0.png 95 | val/Rio/pano_00619_1_180.png 96 | val/Rio/pano_00631_0_0.png 97 | val/Rio/pano_00651_0_0.png 98 | val/Rio/pano_01045_1_0.png 99 | val/Rio/pano_01052_5_0.png 100 | val/Rio/pano_01053_2_0.png 101 | val/Rio/pano_01125_2_180.png 102 | val/Rio/pano_01205_0_0.png 103 | val/Rio/pano_01470_1_180.png 104 | -------------------------------------------------------------------------------- /MCD_DA_seg/data_path/countries/imgs/val/Roma.txt: -------------------------------------------------------------------------------- 1 | val/Roma/pano_00368_0_0.png 2 | val/Roma/pano_00368_0_180.png 3 | val/Roma/pano_00390_1_0.png 4 | val/Roma/pano_00412_0_0.png 5 | val/Roma/pano_00412_4_180.png 6 | val/Roma/pano_00431_1_0.png 7 | val/Roma/pano_00443_1_180.png 8 | val/Roma/pano_00474_0_0.png 9 | val/Roma/pano_00474_1_0.png 10 | val/Roma/pano_00494_0_0.png 11 | val/Roma/pano_00494_0_180.png 12 | val/Roma/pano_00498_0_0.png 13 | val/Roma/pano_00498_1_180.png 14 | val/Roma/pano_00500_0_180.png 15 | val/Roma/pano_00500_2_0.png 16 | val/Roma/pano_00507_2_180.png 17 | val/Roma/pano_00513_0_0.png 18 | val/Roma/pano_00513_1_180.png 19 | val/Roma/pano_00522_0_180.png 20 | val/Roma/pano_00537_0_180.png 21 | val/Roma/pano_00537_2_180.png 22 | val/Roma/pano_00544_1_180.png 23 | val/Roma/pano_00544_2_0.png 24 | val/Roma/pano_00545_0_0.png 25 | val/Roma/pano_00545_0_180.png 26 | val/Roma/pano_00561_0_0.png 27 | val/Roma/pano_00561_2_180.png 28 | val/Roma/pano_00562_0_0.png 29 | val/Roma/pano_00562_0_180.png 30 | val/Roma/pano_00563_0_0.png 31 | val/Roma/pano_00563_0_180.png 32 | val/Roma/pano_00564_0_0.png 33 | 
val/Roma/pano_00565_1_0.png 34 | val/Roma/pano_00565_3_180.png 35 | val/Roma/pano_00566_0_180.png 36 | val/Roma/pano_00566_2_0.png 37 | val/Roma/pano_00567_0_0.png 38 | val/Roma/pano_00567_2_180.png 39 | val/Roma/pano_00568_0_0.png 40 | val/Roma/pano_00568_2_180.png 41 | val/Roma/pano_00571_0_0.png 42 | val/Roma/pano_00571_2_180.png 43 | val/Roma/pano_00586_2_0.png 44 | val/Roma/pano_00586_2_180.png 45 | val/Roma/pano_00595_1_0.png 46 | val/Roma/pano_00606_1_180.png 47 | val/Roma/pano_00606_2_0.png 48 | val/Roma/pano_00608_1_0.png 49 | val/Roma/pano_00610_0_0.png 50 | val/Roma/pano_00610_2_180.png 51 | val/Roma/pano_00622_1_0.png 52 | val/Roma/pano_00651_0_0.png 53 | val/Roma/pano_00922_1_0.png 54 | val/Roma/pano_00922_1_180.png 55 | val/Roma/pano_00933_0_0.png 56 | val/Roma/pano_00937_0_0.png 57 | val/Roma/pano_00937_0_180.png 58 | val/Roma/pano_00959_0_0.png 59 | val/Roma/pano_00967_0_180.png 60 | val/Roma/pano_00967_4_0.png 61 | val/Roma/pano_00986_1_180.png 62 | val/Roma/pano_00996_0_180.png 63 | val/Roma/pano_00998_0_0.png 64 | val/Roma/pano_01001_0_0.png 65 | val/Roma/pano_01005_0_0.png 66 | val/Roma/pano_01006_4_180.png 67 | val/Roma/pano_01025_1_0.png 68 | val/Roma/pano_01027_2_180.png 69 | val/Roma/pano_01027_5_0.png 70 | val/Roma/pano_01034_0_0.png 71 | val/Roma/pano_01045_1_0.png 72 | val/Roma/pano_01052_5_0.png 73 | val/Roma/pano_01053_2_0.png 74 | val/Roma/pano_01057_0_180.png 75 | val/Roma/pano_01067_0_0.png 76 | val/Roma/pano_01113_1_0.png 77 | val/Roma/pano_01121_1_0.png 78 | val/Roma/pano_01125_2_180.png 79 | val/Roma/pano_01140_3_180.png 80 | val/Roma/pano_01195_0_180.png 81 | val/Roma/pano_01203_1_180.png 82 | val/Roma/pano_01205_0_0.png 83 | val/Roma/pano_01211_0_180.png 84 | val/Roma/pano_01223_1_0.png 85 | val/Roma/pano_01228_0_0.png 86 | val/Roma/pano_01228_3_180.png 87 | val/Roma/pano_01236_2_180.png 88 | val/Roma/pano_01260_1_0.png 89 | val/Roma/pano_01283_1_180.png 90 | val/Roma/pano_01295_0_0.png 91 | val/Roma/pano_01295_1_180.png 92 | val/Roma/pano_01311_2_0.png 93 | val/Roma/pano_01337_2_0.png 94 | val/Roma/pano_01361_1_0.png 95 | val/Roma/pano_01412_0_180.png 96 | val/Roma/pano_01444_1_0.png 97 | val/Roma/pano_01444_4_180.png 98 | val/Roma/pano_01470_1_180.png 99 | val/Roma/pano_01486_2_0.png 100 | val/Roma/pano_01507_1_0.png 101 | -------------------------------------------------------------------------------- /MCD_DA_seg/data_path/countries/imgs/val/Taipei.txt: -------------------------------------------------------------------------------- 1 | val/Taipei/pano_00506_0_0.png 2 | val/Taipei/pano_00212_0_180.png 3 | val/Taipei/pano_00040_0_0.png 4 | val/Taipei/pano_00012_0_180.png 5 | val/Taipei/pano_00316_0_0.png 6 | val/Taipei/pano_00463_0_0.png 7 | val/Taipei/pano_00246_0_0.png 8 | val/Taipei/pano_00500_0_0.png 9 | val/Taipei/pano_00260_0_0.png 10 | val/Taipei/pano_00509_0_0.png 11 | val/Taipei/pano_00509_0_180.png 12 | val/Taipei/pano_00108_0_0.png 13 | val/Taipei/pano_00272_0_0.png 14 | val/Taipei/pano_00174_0_180.png 15 | val/Taipei/pano_00496_0_180.png 16 | val/Taipei/pano_00024_0_0.png 17 | val/Taipei/pano_00466_0_180.png 18 | val/Taipei/pano_00310_0_0.png 19 | val/Taipei/pano_00230_0_0.png 20 | val/Taipei/pano_00504_0_0.png 21 | val/Taipei/pano_00116_0_0.png 22 | val/Taipei/pano_00043_0_0.png 23 | val/Taipei/pano_00323_0_0.png 24 | val/Taipei/pano_00035_0_180.png 25 | val/Taipei/pano_00271_0_0.png 26 | val/Taipei/pano_00493_0_180.png 27 | val/Taipei/pano_00335_0_0.png 28 | val/Taipei/pano_00229_0_0.png 29 | val/Taipei/pano_00337_0_180.png 30 | 
val/Taipei/pano_00334_0_0.png 31 | val/Taipei/pano_00271_0_180.png 32 | val/Taipei/pano_00142_0_0.png 33 | val/Taipei/pano_00205_2_0.png 34 | val/Taipei/pano_00334_0_180.png 35 | val/Taipei/pano_00215_2_0.png 36 | val/Taipei/pano_00109_0_180.png 37 | val/Taipei/pano_00450_0_0.png 38 | val/Taipei/pano_00317_0_0.png 39 | val/Taipei/pano_00213_0_180.png 40 | val/Taipei/pano_00183_0_180.png 41 | val/Taipei/pano_00498_0_0.png 42 | val/Taipei/pano_00318_0_0.png 43 | val/Taipei/pano_00010_0_0.png 44 | val/Taipei/pano_00240_0_0.png 45 | val/Taipei/pano_00315_0_0.png 46 | val/Taipei/pano_00237_0_0.png 47 | val/Taipei/pano_00012_0_0.png 48 | val/Taipei/pano_01726_0_0.png 49 | val/Taipei/pano_00260_0_180.png 50 | val/Taipei/pano_00184_0_180.png 51 | val/Taipei/pano_00238_0_0.png 52 | val/Taipei/pano_00002_0_180.png 53 | val/Taipei/pano_01718_0_0.png 54 | val/Taipei/pano_00223_0_0.png 55 | val/Taipei/pano_01726_0_180.png 56 | val/Taipei/pano_00224_0_180.png 57 | val/Taipei/pano_00001_0_0.png 58 | val/Taipei/pano_00327_0_0.png 59 | val/Taipei/pano_00221_0_180.png 60 | val/Taipei/pano_00203_0_180.png 61 | val/Taipei/pano_00481_0_0.png 62 | val/Taipei/pano_00173_0_180.png 63 | val/Taipei/pano_01728_0_180.png 64 | val/Taipei/pano_00308_0_0.png 65 | val/Taipei/pano_00267_0_0.png 66 | val/Taipei/pano_00247_0_180.png 67 | val/Taipei/pano_00143_0_180.png 68 | val/Taipei/pano_00327_0_180.png 69 | val/Taipei/pano_01727_0_0.png 70 | val/Taipei/pano_01135_0_0.png 71 | val/Taipei/pano_00233_0_0.png 72 | val/Taipei/pano_00325_0_0.png 73 | val/Taipei/pano_00114_0_0.png 74 | val/Taipei/pano_00176_0_180.png 75 | val/Taipei/pano_00223_0_180.png 76 | val/Taipei/pano_00439_0_180.png 77 | val/Taipei/pano_00487_0_0.png 78 | val/Taipei/pano_00499_0_0.png 79 | val/Taipei/pano_00011_0_0.png 80 | val/Taipei/pano_00173_0_0.png 81 | val/Taipei/pano_00074_0_0.png 82 | val/Taipei/pano_00315_0_180.png 83 | val/Taipei/pano_00222_0_180.png 84 | val/Taipei/pano_00218_0_0.png 85 | val/Taipei/pano_00325_0_180.png 86 | val/Taipei/pano_00230_0_180.png 87 | val/Taipei/pano_00270_0_0.png 88 | val/Taipei/pano_00000_0_0.png 89 | val/Taipei/pano_00225_0_180.png 90 | val/Taipei/pano_00113_0_0.png 91 | val/Taipei/pano_00110_0_0.png 92 | val/Taipei/pano_00245_0_0.png 93 | val/Taipei/pano_00035_0_0.png 94 | val/Taipei/pano_00493_0_0.png 95 | val/Taipei/pano_00037_0_180.png 96 | val/Taipei/pano_00028_0_180.png 97 | val/Taipei/pano_00232_0_0.png 98 | val/Taipei/pano_00261_0_180.png 99 | val/Taipei/pano_00525_0_180.png 100 | val/Taipei/pano_00516_0_180.png 101 | -------------------------------------------------------------------------------- /MCD_DA_seg/data_path/countries/imgs/val/Tokyo.txt: -------------------------------------------------------------------------------- 1 | val/Tokyo/pano_00002_2_0.jpg 2 | val/Tokyo/pano_00022_2_0.jpg 3 | val/Tokyo/pano_00027_0_0.jpg 4 | val/Tokyo/pano_00037_0_0.jpg 5 | val/Tokyo/pano_00042_1_180.jpg 6 | val/Tokyo/pano_00062_4_0.jpg 7 | val/Tokyo/pano_00068_1_180.jpg 8 | val/Tokyo/pano_00076_3_0.jpg 9 | val/Tokyo/pano_00087_0_0.jpg 10 | val/Tokyo/pano_00093_3_180.jpg 11 | val/Tokyo/pano_00111_0_180.jpg 12 | val/Tokyo/pano_00111_4_180.jpg 13 | val/Tokyo/pano_00117_3_180.jpg 14 | val/Tokyo/pano_00176_2_0.jpg 15 | val/Tokyo/pano_00188_1_0.jpg 16 | val/Tokyo/pano_00204_1_0.jpg 17 | val/Tokyo/pano_00236_2_0.jpg 18 | val/Tokyo/pano_00270_2_0.jpg 19 | val/Tokyo/pano_00272_0_180.jpg 20 | val/Tokyo/pano_00320_0_180.jpg 21 | val/Tokyo/pano_00328_2_180.jpg 22 | val/Tokyo/pano_00335_3_180.jpg 23 | val/Tokyo/pano_00340_2_0.jpg 
24 | val/Tokyo/pano_00374_3_0.jpg 25 | val/Tokyo/pano_00391_4_0.jpg 26 | val/Tokyo/pano_00405_6_0.jpg 27 | val/Tokyo/pano_00566_3_0.jpg 28 | val/Tokyo/pano_00574_0_180.jpg 29 | val/Tokyo/pano_00585_2_180.jpg 30 | val/Tokyo/pano_00588_0_180.jpg 31 | val/Tokyo/pano_00590_1_0.jpg 32 | val/Tokyo/pano_00595_0_180.jpg 33 | val/Tokyo/pano_00601_1_180.jpg 34 | val/Tokyo/pano_00607_3_180.jpg 35 | val/Tokyo/pano_00623_1_0.jpg 36 | val/Tokyo/pano_00635_3_180.jpg 37 | val/Tokyo/pano_00645_3_0.jpg 38 | val/Tokyo/pano_00647_1_180.jpg 39 | val/Tokyo/pano_00650_3_180.jpg 40 | val/Tokyo/pano_00654_3_180.jpg 41 | val/Tokyo/pano_00665_3_0.jpg 42 | val/Tokyo/pano_00682_2_180.jpg 43 | val/Tokyo/pano_00683_0_180.jpg 44 | val/Tokyo/pano_00695_1_180.jpg 45 | val/Tokyo/pano_00709_1_0.jpg 46 | val/Tokyo/pano_00720_4_0.jpg 47 | val/Tokyo/pano_00723_2_180.jpg 48 | val/Tokyo/pano_00749_0_180.jpg 49 | val/Tokyo/pano_00750_6_180.jpg 50 | val/Tokyo/pano_00760_1_180.jpg 51 | val/Tokyo/pano_00760_4_0.jpg 52 | val/Tokyo/pano_00780_2_180.jpg 53 | val/Tokyo/pano_00792_1_180.jpg 54 | val/Tokyo/pano_00810_2_180.jpg 55 | val/Tokyo/pano_00821_1_0.jpg 56 | val/Tokyo/pano_00826_0_180.jpg 57 | val/Tokyo/pano_00870_3_0.jpg 58 | val/Tokyo/pano_00883_1_0.jpg 59 | val/Tokyo/pano_00894_1_180.jpg 60 | val/Tokyo/pano_00914_1_0.jpg 61 | val/Tokyo/pano_00915_2_180.jpg 62 | val/Tokyo/pano_00926_1_180.jpg 63 | val/Tokyo/pano_00940_2_0.jpg 64 | val/Tokyo/pano_00997_1_0.jpg 65 | val/Tokyo/pano_01022_3_0.jpg 66 | val/Tokyo/pano_01033_1_0.jpg 67 | val/Tokyo/pano_01048_2_0.jpg 68 | val/Tokyo/pano_01081_0_0.jpg 69 | val/Tokyo/pano_01098_3_180.jpg 70 | val/Tokyo/pano_01133_3_0.jpg 71 | val/Tokyo/pano_01143_1_0.jpg 72 | val/Tokyo/pano_01206_0_0.jpg 73 | val/Tokyo/pano_01208_3_180.jpg 74 | val/Tokyo/pano_01219_2_0.jpg 75 | val/Tokyo/pano_01222_2_0.jpg 76 | val/Tokyo/pano_01347_1_180.jpg 77 | val/Tokyo/pano_01349_1_0.jpg 78 | val/Tokyo/pano_01352_0_0.jpg 79 | val/Tokyo/pano_01360_0_180.jpg 80 | val/Tokyo/pano_01364_2_0.jpg 81 | val/Tokyo/pano_01375_0_180.jpg 82 | val/Tokyo/pano_01378_1_0.jpg 83 | val/Tokyo/pano_01379_0_180.jpg 84 | val/Tokyo/pano_01390_0_0.jpg 85 | val/Tokyo/pano_01404_3_0.jpg 86 | val/Tokyo/pano_01405_2_0.jpg 87 | val/Tokyo/pano_01410_0_0.jpg 88 | val/Tokyo/pano_01413_1_0.jpg 89 | val/Tokyo/pano_01418_4_180.jpg 90 | val/Tokyo/pano_01428_1_0.jpg 91 | val/Tokyo/pano_01433_1_180.jpg 92 | val/Tokyo/pano_01437_2_180.jpg 93 | val/Tokyo/pano_01446_2_0.jpg 94 | val/Tokyo/pano_01516_1_0.jpg 95 | val/Tokyo/pano_01520_1_0.jpg 96 | val/Tokyo/pano_01521_0_0.jpg 97 | val/Tokyo/pano_01528_0_180.jpg 98 | val/Tokyo/pano_01535_1_0.jpg 99 | val/Tokyo/pano_01536_3_0.jpg 100 | val/Tokyo/pano_01548_0_180.jpg 101 | -------------------------------------------------------------------------------- /MCD_DA_seg/data_path/cscapes/gtFine_labelTrainIds/val.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/data_path/cscapes/gtFine_labelTrainIds/val.txt -------------------------------------------------------------------------------- /MCD_DA_seg/dataset/city_info.json: -------------------------------------------------------------------------------- 1 | { 2 | "classes":19, 3 | "label2train":[ 4 | [0, 255], 5 | [1, 255], 6 | [2, 255], 7 | [3, 255], 8 | [4, 255], 9 | [5, 255], 10 | [6, 255], 11 | [7, 0], 12 | [8, 1], 13 | [9, 255], 14 | [10, 255], 15 | [11, 2], 16 | [12, 3], 17 | [13, 4], 18 | [14, 255], 19 | 
[15, 255], 20 | [16, 255], 21 | [17, 5], 22 | [18, 255], 23 | [19, 6], 24 | [20, 7], 25 | [21, 8], 26 | [22, 9], 27 | [23, 10], 28 | [24, 11], 29 | [25, 12], 30 | [26, 13], 31 | [27, 14], 32 | [28, 15], 33 | [29, 255], 34 | [30, 255], 35 | [31, 16], 36 | [32, 17], 37 | [33, 18], 38 | [-1, 255]], 39 | "label":[ 40 | "road", 41 | "sidewalk", 42 | "building", 43 | "wall", 44 | "fence", 45 | "pole", 46 | "light", 47 | "sign", 48 | "vegetation", 49 | "terrain", 50 | "sky", 51 | "person", 52 | "rider", 53 | "car", 54 | "truck", 55 | "bus", 56 | "train", 57 | "motocycle", 58 | "bicycle"], 59 | "palette":[ 60 | [128,64,128], 61 | [244,35,232], 62 | [70,70,70], 63 | [102,102,156], 64 | [190,153,153], 65 | [153,153,153], 66 | [250,170,30], 67 | [220,220,0], 68 | [107,142,35], 69 | [152,251,152], 70 | [70,130,180], 71 | [220,20,60], 72 | [255,0,0], 73 | [0,0,142], 74 | [0,0,70], 75 | [0,60,100], 76 | [0,80,100], 77 | [0,0,230], 78 | [119,11,32], 79 | [0,0,0]], 80 | "mean":[ 81 | 73.158359210711552, 82 | 82.908917542625858, 83 | 72.392398761941593], 84 | "std":[ 85 | 47.675755341814678, 86 | 48.494214368814916, 87 | 47.736546325441594] 88 | } -------------------------------------------------------------------------------- /MCD_DA_seg/dataset/convert_label.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import json 3 | import os 4 | 5 | import numpy as np 6 | from PIL import Image 7 | from tqdm import tqdm 8 | 9 | def swap_labels(np_original_gt_im, class_convert_mat): 10 | np_processed_gt_im = np.zeros(np_original_gt_im.shape) 11 | for swap in class_convert_mat: 12 | ind_swap = np.where(np_original_gt_im == swap[0]) 13 | np_processed_gt_im[ind_swap] = swap[1] 14 | processed_gt_im = Image.fromarray(np.uint8(np_processed_gt_im)) 15 | return processed_gt_im 16 | 17 | 18 | def convert_citylabelTo16label(): 19 | with open('./synthia2cityscapes_info.json', 'r') as f: 20 | paramdic = json.load(f) 21 | 22 | class_ind = paramdic['city2common'] 23 | 24 | city_gt_dir = "/data/ugui0/ksaito/D_A/image_citiscape/www.cityscapes-dataset.com/file-handling/gtFine" 25 | split_list = ["train", "test", "val"] 26 | 27 | original_suffix = "labelIds" 28 | processed_suffix = "label16IDs" 29 | 30 | for split in tqdm(split_list): 31 | base_dir = os.path.join(city_gt_dir, split) 32 | place_list = os.listdir(base_dir) 33 | for place in tqdm(place_list): 34 | target_dir = os.path.join(base_dir, place) 35 | pngfn_list = os.listdir(target_dir) 36 | original_pngfn_list = [x for x in pngfn_list if original_suffix in x] 37 | 38 | for pngfn in tqdm(original_pngfn_list): 39 | gt_fn = os.path.join(target_dir, pngfn) 40 | original_gt_im = Image.open(gt_fn) 41 | 42 | processed_gt_im = swap_labels(np.array(original_gt_im), class_ind) 43 | outfn = gt_fn.replace(original_suffix, processed_suffix) 44 | processed_gt_im.save(outfn, 'PNG') 45 | 46 | 47 | def convert_synthialabelTo16label(): 48 | with open('./synthia2cityscapes_info.json', 'r') as f: 49 | paramdic = json.load(f) 50 | 51 | class_ind = np.array(paramdic['synthia2common']) 52 | 53 | synthia_gt_dir = "/data/ugui0/dataset/adaptation/synthia/RAND_CITYSCAPES/GT" 54 | 55 | # original_dir = os.path.join(synthia_gt_dir, "LABELS") # Original dir but this contains strange files 56 | original_dir = "/data/ugui0/dataset/adaptation/synthia/new_synthia/segmentation_annotation/SYNTHIA/GT/parsed_LABELS" # Not original. 
Downloaded from http://crcv.ucf.edu/data/adaptationseg/ICCV_dataset.zip
57 |     processed_dir = os.path.join(synthia_gt_dir, "LABELS16")
58 |
59 |     original_pngfn_list = os.listdir(original_dir)
60 |
61 |     for pngfn in tqdm(original_pngfn_list):
62 |         gt_fn = os.path.join(original_dir, pngfn)
63 |         original_gt_im = Image.open(gt_fn)
64 |         processed_gt_im = swap_labels(np.array(original_gt_im), class_ind)
65 |         outfn = os.path.join(processed_dir, pngfn)
66 |         processed_gt_im.save(outfn, 'PNG')
67 |
68 |
69 | if __name__ == '__main__':
70 |     parser = argparse.ArgumentParser(description='Convert Label Ids')
71 |     parser.add_argument('dataset', type=str, choices=["city", "synthia"])
72 |     args = parser.parse_args()
73 |     if args.dataset == "city":
74 |         convert_citylabelTo16label()
75 |     else:
76 |         convert_synthialabelTo16label()
77 |
--------------------------------------------------------------------------------
/MCD_DA_seg/dataset/gt_coloring.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 |
5 | import numpy as np
6 | from PIL import Image
7 | from tqdm import tqdm
8 | from util import mkdir_if_not_exist
9 |
10 | parser = argparse.ArgumentParser(description='GT Coloring')
11 | parser.add_argument('dataset', choices=["gta", "city", "test", "ir", "city16"])
12 | parser.add_argument('--gt_dir', type=str,
13 |                     default='/data/unagi0/watanabe/DomainAdaptation/Segmentation/VisDA2017/cityscapes_gt/val')
14 | parser.add_argument('--vis_outdir', type=str,
15 |                     default='/data/unagi0/watanabe/DomainAdaptation/Segmentation/VisDA2017/cityscapes_vis_gt/val')
16 |
17 | args = parser.parse_args()
18 |
19 | if args.dataset in ["city16", "synthia"]:
20 |     info_json_fn = "./dataset/synthia2cityscapes_info.json"
21 | else:
22 |     info_json_fn = "./dataset/city_info.json"
23 |
24 | # Save visualized predicted pixel labels (pngs)
25 | with open(info_json_fn) as f:
26 |     info_dic = json.load(f)
27 |
28 | gtfn_list = os.listdir(args.gt_dir)
29 |
30 | for gtfn in tqdm(gtfn_list):
31 |     full_gtfn = os.path.join(args.gt_dir, gtfn)
32 |     img = Image.open(full_gtfn)
33 |     palette = np.array(info_dic['palette'], dtype=np.uint8)
34 |     img.putpalette(palette.flatten())
35 |     mkdir_if_not_exist(args.vis_outdir)
36 |     vis_fn = os.path.join(args.vis_outdir, gtfn)
37 |     img.save(vis_fn)
38 |
--------------------------------------------------------------------------------
/MCD_DA_seg/dataset/split_gta.py:
--------------------------------------------------------------------------------
1 | import os
2 | import random
3 |
4 | gta_dir = "/data/ugui0/dataset/adaptation/taskcv-2017-public/segmentation/data/"
5 |
6 | all_imglist_fn = os.path.join(gta_dir, "images.txt")
7 | test_imglist_fn = os.path.join(gta_dir, "test.txt")
8 | seed = 42
9 | n_test_set = 500
10 | random.seed(seed)  # the seed was previously defined but never applied, making the split non-reproducible
11 |
12 | with open(all_imglist_fn) as f:
13 |     img_list = f.readlines()
14 |
15 | test_img_list = random.sample(img_list, n_test_set)
16 |
17 | print (len(test_img_list))
18 | with open(test_imglist_fn, 'w') as f:
19 |     f.writelines(test_img_list)
20 |
--------------------------------------------------------------------------------
/MCD_DA_seg/dataset/synthia2cityscapes_info.json:
--------------------------------------------------------------------------------
1 |
2 | {
3 | "classes":16,
4 | "city2common":[
5 | [0, 255],
6 | [1, 255],
7 | [2, 255],
8 | [3, 255],
9 | [4, 255],
10 | [5, 255],
11 | [6, 255],
12 | [7, 0],
13 | [8, 1],
14 | [9, 255],
15 | [10, 255],
16 | [11, 2],
17 | [12, 3],
18 | [13, 4],
19 | [14, 255],
20
| [15, 255], 21 | [16, 255], 22 | [17, 5], 23 | [18, 255], 24 | [19, 6], 25 | [20, 7], 26 | [21, 8], 27 | [22, 255], 28 | [23, 9], 29 | [24, 10], 30 | [25, 11], 31 | [26, 12], 32 | [27, 255], 33 | [28, 13], 34 | [29, 255], 35 | [30, 255], 36 | [31, 255], 37 | [32, 14], 38 | [33, 15], 39 | [-1, 255] 40 | ], 41 | "synthia2common":[ 42 | [0, 255], 43 | [1, 9], 44 | [2, 2], 45 | [3, 0], 46 | [4, 1], 47 | [5, 4], 48 | [6, 8], 49 | [7, 5], 50 | [8, 12], 51 | [9, 7], 52 | [10, 10], 53 | [11, 15], 54 | [12, 14], 55 | [13, 255], 56 | [14, 255], 57 | [15, 6], 58 | [16, 255], 59 | [17, 11], 60 | [18, 255], 61 | [19, 13], 62 | [20, 255], 63 | [21, 3], 64 | [22, 255] 65 | ], 66 | "city_label": [ 67 | "unlabeled", 68 | "ego vehicle", 69 | "rectification border", 70 | "out of roi", 71 | "static", 72 | "dynamic", 73 | "ground", 74 | "road", 75 | "sidewalk", 76 | "parking", 77 | "rail track", 78 | "building", 79 | "wall", 80 | "fence", 81 | "guard rail", 82 | "bridge", 83 | "tunnel", 84 | "pole", 85 | "polegroup", 86 | "traffic light", 87 | "traffic sign", 88 | "vegetation", 89 | "terrain", 90 | "sky", 91 | "person", 92 | "rider", 93 | "car", 94 | "truck", 95 | "bus", 96 | "caravan", 97 | "trailer", 98 | "train", 99 | "motorcycle", 100 | "bicycle", 101 | "license plate" 102 | ], 103 | "synthia_label": [ 104 | "void", 105 | "sky", 106 | "Building", 107 | "Road", 108 | "Sidewalk", 109 | "Fence", 110 | "Vegetation", 111 | "Pole", 112 | "Car", 113 | "Traffic sign", 114 | "Pedestrian", 115 | "Bicycle", 116 | "Motorcycle", 117 | "Parking-slot", 118 | "Road-work", 119 | "Traffic light", 120 | "Terrain", 121 | "Rider", 122 | "Truck", 123 | "Bus", 124 | "Train", 125 | "Wall", 126 | "Lanemarking" 127 | ], 128 | "common_label":[ 129 | "road", 130 | "sidewalk", 131 | "building", 132 | "wall", 133 | "fence", 134 | "pole", 135 | "light", 136 | "sign", 137 | "vegetation", 138 | "sky", 139 | "person", 140 | "rider", 141 | "car", 142 | "bus", 143 | "motocycle", 144 | "bicycle" 145 | ], 146 | "palette":[ 147 | [128,64,128], 148 | [244,35,232], 149 | [70,70,70], 150 | [102,102,156], 151 | [190,153,153], 152 | [153,153,153], 153 | [250,170,30], 154 | [220,220,0], 155 | [107,142,35], 156 | [70,130,180], 157 | [220,20,60], 158 | [255,0,0], 159 | [0,0,142], 160 | [0,60,100], 161 | [0,0,230], 162 | [119,11,32], 163 | [0,0,0]], 164 | "mean":[ 165 | 73.158359210711552, 166 | 82.908917542625858, 167 | 72.392398761941593], 168 | "std":[ 169 | 47.675755341814678, 170 | 48.494214368814916, 171 | 47.736546325441594] 172 | } -------------------------------------------------------------------------------- /MCD_DA_seg/docs/LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Machine Intelligence Laboratory (The University of Tokyo) 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 
14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /MCD_DA_seg/docs/README.md: -------------------------------------------------------------------------------- 1 | ## Maximum Classifier Discrepancy for Domain Adaptation 2 | 3 | ![](docs/overview.png) 4 |
5 |
6 | This is the PyTorch implementation of Maximum Classifier Discrepancy for digit classification and semantic segmentation.
7 | The code is written by Kuniaki Saito. The work was accepted to CVPR 2018 as an oral presentation.
8 |
9 | #### Maximum Classifier Discrepancy for Domain Adaptation: [[Project]](https://mil-tokyo.github.io/MCD_DA/) [[Paper (arXiv)]](https://arxiv.org/abs/1712.02560)
10 |
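The core training idea, as a rough sketch rather than this repository's exact training loop: two classifiers F1 and F2 are trained on labeled source data, the classifiers are then updated to maximize their disagreement on target samples, and the feature generator G is updated to minimize it. The snippet below mirrors the `diff` criterion (`Diff2d`) defined in `loss.py`; the tensor shapes are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def discrepancy(logits1, logits2):
    # Mean absolute difference between the two classifiers'
    # per-pixel class probabilities (the "diff" criterion in loss.py).
    return torch.mean(torch.abs(F.softmax(logits1, dim=1) -
                                F.softmax(logits2, dim=1)))

# Hypothetical target-domain outputs: batch of 2, 19 classes, 16x32 score maps.
out_f1 = torch.randn(2, 19, 16, 32)
out_f2 = torch.randn(2, 19, 16, 32)
loss = discrepancy(out_f1, out_f2)  # maximized w.r.t. F1/F2, minimized w.r.t. G
print(loss)
```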
11 |
12 | ![](docs/result_seg.png)
13 |
14 |
15 | ## Getting Started
16 | Go to the classification or segmentation folder and see the instructions for each task.
17 | ## Citation
18 | If you use this code for your research, please cite our paper (this will be updated when the CVPR paper is published).
19 | ```
20 | @article{saito2017maximum,
21 |   title={Maximum Classifier Discrepancy for Unsupervised Domain Adaptation},
22 |   author={Saito, Kuniaki and Watanabe, Kohei and Ushiku, Yoshitaka and Harada, Tatsuya},
23 |   journal={arXiv preprint arXiv:1712.02560},
24 |   year={2017}
25 | }
26 | ```
27 |
--------------------------------------------------------------------------------
/MCD_DA_seg/docs/_config.yml:
--------------------------------------------------------------------------------
1 | theme: jekyll-theme-cayman
2 | title: Maximum Classifier Discrepancy for Unsupervised Domain Adaptation
3 |
--------------------------------------------------------------------------------
/MCD_DA_seg/docs/overview.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/docs/overview.png
--------------------------------------------------------------------------------
/MCD_DA_seg/docs/result_seg.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/docs/result_seg.png
--------------------------------------------------------------------------------
/MCD_DA_seg/loss.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 |
5 |
6 | # Recommended
7 | class CrossEntropyLoss2d(nn.Module):
8 |     def __init__(self, weight=None, size_average=True):
9 |         super(CrossEntropyLoss2d, self).__init__()
10 |         self.nll_loss = nn.NLLLoss2d(weight, size_average)
11 |
12 |     def forward(self, inputs, targets):
13 |         return self.nll_loss(F.log_softmax(inputs), targets)
14 |
15 |
16 | class BalanceLoss2d(nn.Module):
17 |     def __init__(self, weight=None, size_average=True):
18 |         super(BalanceLoss2d, self).__init__()
19 |         self.weight = weight
20 |
21 |     def forward(self, inputs1, inputs2):
22 |         prob1 = F.softmax(inputs1)[0, :19]
23 |         prob2 = F.softmax(inputs2)[0, :19]
24 |         print (prob1)  # debug output
25 |         prob1 = torch.mean(prob1, 0)
26 |         prob2 = torch.mean(prob2, 0)
27 |         print (prob1)  # debug output
28 |         entropy_loss = - torch.mean(torch.log(prob1 + 1e-6))
29 |         entropy_loss -= torch.mean(torch.log(prob2 + 1e-6))
30 |         return entropy_loss
31 |
32 |
33 | class Entropy(nn.Module):
34 |     def __init__(self, weight=None, size_average=True):
35 |         super(Entropy, self).__init__()
36 |         self.weight = weight
37 |
38 |     def forward(self, inputs1):
39 |         prob1 = F.softmax(inputs1[0, :19])
40 |         entropy_loss = torch.mean(torch.log(prob1))  # mean log-probability over classes and pixels
41 |         return entropy_loss
42 |
43 | class Diff2d(nn.Module):
44 |     def __init__(self, weight=None, size_average=True):
45 |         super(Diff2d, self).__init__()
46 |         self.weight = weight
47 |
48 |     def forward(self, inputs1, inputs2):
49 |         return torch.mean(torch.abs(F.softmax(inputs1) - F.softmax(inputs2)))
50 |
51 | class Symkl2d(nn.Module):
52 |     def __init__(self, weight=None, n_target_ch=21, size_average=True):
53 |         super(Symkl2d, self).__init__()
54 |         self.weight = weight
55 |         self.size_average = size_average
56 |         self.n_target_ch = n_target_ch  # previously hard-coded to 20, which ignored the constructor argument
57 |     def forward(self, inputs1, inputs2):
58 |         self.prob1 = F.softmax(inputs1)
59 |         self.prob2 = F.softmax(inputs2)
60 |         self.log_prob1 = F.log_softmax(inputs1)  # log-probabilities of the logits; applying log_softmax to prob1 would normalize twice
61 |         self.log_prob2 = F.log_softmax(inputs2)
62 |
63 |         loss = 0.5 * (F.kl_div(self.log_prob1, self.prob2, size_average=self.size_average)
64 |                       + F.kl_div(self.log_prob2, self.prob1, size_average=self.size_average))
65 |
66 |         return loss
67 |
68 |
69 | # This may be unstable sometimes. Note the size_average setting.
70 | def CrossEntropy2d(input, target, weight=None, size_average=False):
71 |     # input:(n, c, h, w) target:(n, h, w)
72 |     n, c, h, w = input.size()
73 |
74 |     input = input.transpose(1, 2).transpose(2, 3).contiguous()
75 |     input = input[target.view(n, h, w, 1).repeat(1, 1, 1, c) >= 0].view(-1, c)
76 |
77 |     target_mask = target >= 0
78 |     target = target[target_mask]
79 |     loss = F.cross_entropy(input, target, weight=weight, size_average=False)
80 |     if size_average:
81 |         loss /= target_mask.sum().data[0]
82 |
83 |     return loss
84 |
85 |
86 | def get_prob_distance_criterion(criterion_name, n_class=None):
87 |     if criterion_name == 'diff':
88 |         criterion = Diff2d()
89 |     elif criterion_name == "symkl":
90 |         criterion = Symkl2d(n_target_ch=n_class)
91 |     elif criterion_name == "nmlsymkl":
92 |         criterion = Symkl2d(n_target_ch=n_class, size_average=True)
93 |     else:
94 |         raise NotImplementedError()
95 |
96 |     return criterion
97 |
--------------------------------------------------------------------------------
/MCD_DA_seg/models/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/models/__init__.py
--------------------------------------------------------------------------------
/MCD_DA_seg/models/grad_reversal.py:
--------------------------------------------------------------------------------
1 | from torch.autograd import Function
2 |
3 |
4 | class GradReverse(Function):  # gradient reversal layer, old-style (pre-0.4) autograd Function API
5 |     def __init__(self, lambd):
6 |         self.lambd = lambd
7 |     def forward(self, x):
8 |         return x.view_as(x)
9 |     def backward(self, grad_output):
10 |         return (grad_output * -self.lambd)
11 | def grad_reverse(x, lambd=1.0):
12 |     return GradReverse(lambd)(x)
--------------------------------------------------------------------------------
/MCD_DA_seg/models/model_util.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 |
4 | def get_full_model(net, res, n_class, input_ch):
5 |     if net == "fcn":
6 |         from models.fcn import ResFCN
7 |         return torch.nn.DataParallel(ResFCN(n_class, res, input_ch))
8 |     elif net == "fcnvgg":
9 |         from models.vgg_fcn import FCN8s
10 |         return torch.nn.DataParallel(FCN8s(n_class))
11 |
12 |     elif "drn" in net:
13 |         from models.dilated_fcn import DRNSeg
14 |         assert net in ["drn_c_26", "drn_c_42", "drn_c_58", "drn_d_22", "drn_d_38", "drn_d_54", "drn_d_105"]
15 |         return torch.nn.DataParallel(DRNSeg(net, n_class, input_ch=input_ch))
16 |     else:
17 |         raise NotImplementedError("Only FCN, DRN are supported!")
18 |
19 |
20 | def get_models(net_name, input_ch, n_class, res="50", method="MCD", uses_one_classifier=False, use_ae=False,
21 |                is_data_parallel=False):
22 |     def get_MCD_model_list():
23 |         if net_name == "fcn":
24 |             from models.fcn import ResBase, ResClassifier
25 |             model_g = ResBase(n_class, layer=res, input_ch=input_ch)
26 |             model_f1 = ResClassifier(n_class)
27 |             model_f2 = ResClassifier(n_class)
28 |         elif net_name == "fcnvgg":
29 |             from models.vgg_fcn import FCN8sBase, FCN8sClassifier
30 |             model_g = FCN8sBase(n_class)
31 |             model_f1 = FCN8sClassifier(n_class)
32 |             model_f2 = FCN8sClassifier(n_class)
33 |         elif "drn" in net_name:
34 |             from models.dilated_fcn import DRNSegBase, DRNSegPixelClassifier_ADR
35 |             if uses_one_classifier:
36 |                 model_g = DRNSegBase(model_name=net_name, n_class=n_class, input_ch=input_ch)
37 |                 model_f1 = DRNSegPixelClassifier_ADR(n_class=n_class)
38 |                 model_f2 = DRNSegPixelClassifier_ADR(n_class=n_class)
39 |             else:
40 |                 from models.dilated_fcn import DRNSegBase, DRNSegPixelClassifier
41 |                 model_g = DRNSegBase(model_name=net_name, n_class=n_class, input_ch=input_ch)
42 |                 model_f1 = DRNSegPixelClassifier(n_class=n_class)
43 |                 model_f2 = DRNSegPixelClassifier(n_class=n_class)
44 |
45 |         else:
46 |             raise NotImplementedError("Only FCN (including dilated FCN), SegNet, PSPNet are supported!")
47 |
48 |         return model_g, model_f1, model_f2
49 |
50 |     if method == "MCD":
51 |         model_list = get_MCD_model_list()
52 |     else:
53 |         raise NotImplementedError("Sorry... Only MCD is supported!")  # was `return NotImplementedError(...)`, which returned the exception instead of raising it
54 |
55 |     if is_data_parallel:
56 |         return [torch.nn.DataParallel(x) for x in model_list]
57 |     else:
58 |         return model_list
59 |
60 |
61 | def get_optimizer(model_parameters, opt, lr, momentum, weight_decay):
62 |     if opt == "sgd":
63 |         return torch.optim.SGD(filter(lambda p: p.requires_grad, model_parameters), lr=lr, momentum=momentum,
64 |                                weight_decay=weight_decay)
65 |
66 |     elif opt == "adadelta":
67 |         return torch.optim.Adadelta(filter(lambda p: p.requires_grad, model_parameters), lr=lr,
68 |                                     weight_decay=weight_decay)
69 |
70 |     elif opt == "adam":
71 |         return torch.optim.Adam(filter(lambda p: p.requires_grad, model_parameters), lr=lr, betas=[0.5, 0.999],
72 |                                 weight_decay=weight_decay)
73 |     else:
74 |         raise NotImplementedError("Only (Momentum) SGD, Adadelta, Adam are supported!")
75 |
76 |
77 |
78 | def check_training(model):
79 |     print (type(model))
80 |     print (model.training)
81 |     for module in model.children():
82 |         check_training(module)
83 |
--------------------------------------------------------------------------------
/MCD_DA_seg/requirements.txt.py:
--------------------------------------------------------------------------------
1 | git+https://github.com/lucasb-eyer/pydensecrf.git
2 | visdom
3 | tqdm
4 | easydict
5 | tensorboard_logger
--------------------------------------------------------------------------------
/MCD_DA_seg/scripts/cmd_for_DL.sh:
--------------------------------------------------------------------------------
1 | CONFIRM=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate "https://docs.google.com/uc?export=download&id=$1" -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')
2 | wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$CONFIRM&id=$1" -O $2
3 | rm -rf /tmp/cookies.txt
4 |
5 |
--------------------------------------------------------------------------------
/MCD_DA_seg/scripts/download_demo.sh:
--------------------------------------------------------------------------------
1 | mkdir train_output
2 | mkdir train_output/synthia-train2city-train_3ch
3 | mkdir train_output/synthia-train2city-train_3ch/pth
4 | sh scripts/cmd_for_DL.sh 1mmpYvxe7sgjbWBpsFwy2hDDVjnu15lgC \
5 |     train_output/synthia-train2city-train_3ch/pth/MCD-normal-drn_d_105-60.pth.tar
--------------------------------------------------------------------------------
/MCD_DA_seg/scripts/run_eval.sh:
--------------------------------------------------------------------------------
1 | python ./cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py \
2 |     --gt /media/VSlab2/nitahaha/re_gtFine512/val/ \
3 |     --pd ./test_output/synthia-train2city-train_3ch---city-val/MCD-normal-drn_d_105-60.tar/label/
4 |
5 |
--------------------------------------------------------------------------------
/MCD_DA_seg/scripts/run_test.sh:
--------------------------------------------------------------------------------
1 | python adapt_tester.py city \
2 |     ./train_output/synthia-train2city-train_3ch/pth/MCD-normal-drn_d_105-60.pth.tar \
3 |     --test_img_shape 512 256
4 |
--------------------------------------------------------------------------------
/MCD_DA_seg/scripts/run_train_city2Tokyo.sh:
--------------------------------------------------------------------------------
1 | export CUDA_VISIBLE_DEVICES="3"
2 | python adapt_trainer.py city Tokyo --net drn_d_105 \
3 |     --train_img_shape 512 256 \
4 |     --batch_size 8
5 |
6 |
--------------------------------------------------------------------------------
/MCD_DA_seg/scripts/run_train_syn2city.sh:
--------------------------------------------------------------------------------
1 | python adapt_trainer.py synthia city --net drn_d_105 \
2 |     --train_img_shape 512 256 \
3 |     --batch_size 8
4 |
5 |
--------------------------------------------------------------------------------
/MCD_DA_seg/test/test_loss.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 | import sys
3 | import unittest
4 |
5 | import numpy as np
6 | import torch
7 | from torch.autograd import Variable
8 |
9 | sys.path.append("../")
10 | from loss import MySymkl2d, Symkl2d
11 |
12 |
13 | def kld(p, q):
14 |     """Calculates the Kullback–Leibler divergence"""
15 |     p = np.array(p)
16 |     q = np.array(q)
17 |     return np.sum(p * np.log(p / q), axis=(p.ndim - 1))
18 |
19 |
20 | def jsd(p, q):
21 |     """Calculates the Jensen-Shannon divergence"""
22 |     p = np.array(p)
23 |     q = np.array(q)
24 |     m = 0.5 * (p + q)
25 |     return 0.5 * kld(p, m) + 0.5 * kld(q, m)
26 |
27 |
28 | def softmax(x):
29 |     """Compute softmax values for each set of scores in x."""
30 |     e_x = np.exp(x - np.max(x))
31 |     return e_x / e_x.sum()
32 |
33 |
34 | class TestKLD(unittest.TestCase):
35 |     """test class for loss.py
36 |     """
37 |
38 |     def test_one_d(self):
39 |         np1_one_dim = np.array([0.7, 0.2, 0.1])
40 |         np2_one_dim = np.array([0.1, 0.8, 0.1])
41 |
42 |         t1_one_dim = Variable(torch.FloatTensor(np1_one_dim))
43 |         t2_one_dim = Variable(torch.FloatTensor(np2_one_dim))
44 |
45 |         in1 = softmax(np1_one_dim)
46 |         in2 = softmax(np2_one_dim)
47 |         actual = 0.5 * (kld(in2, in1) + kld(in1, in2))
48 |
49 |         mysymkl = MySymkl2d()
50 |         nmlsymkl = Symkl2d(size_average=False)
51 |         averaged_symkl = Symkl2d(size_average=True)
52 |
53 |         pred_mysymkl = mysymkl(t1_one_dim, t2_one_dim).data[0]
54 |         pred_nmlsymkl = nmlsymkl(t1_one_dim, t2_one_dim).data[0]
55 |         pred_averaged_symkl = averaged_symkl(t1_one_dim, t2_one_dim).data[0]
56 |
57 |         self.assertAlmostEqual(pred_mysymkl * len(np1_one_dim), actual)
58 |         self.assertAlmostEqual(pred_nmlsymkl, actual)
59 |         self.assertAlmostEqual(pred_averaged_symkl, pred_mysymkl)
60 |
61 |     def test_four_d(self):
62 |         batch_size, n_ch, w, h = 16, 3, 4, 5
63 |
64 |         np1_four_dim = np.random.random([batch_size, n_ch, w, h])
65 |         np2_four_dim = np.random.random([batch_size, n_ch, w, h])
66 |
67 |         t1_one_dim = Variable(torch.FloatTensor(np1_four_dim))
68 |         t2_one_dim = Variable(torch.FloatTensor(np2_four_dim))
69 |
70 |         in1 = softmax(np1_four_dim)  # TODO: Need to be fixed
71 |         in2 = softmax(np2_four_dim)  # TODO: Need to be fixed
72 |         actual = 0.5 * (kld(in2, in1) + kld(in1, in2))  # TODO: Need to be fixed
73 |
74 |         mysymkl = MySymkl2d()
75 |         nmlsymkl = Symkl2d(size_average=False)
76 |         averaged_symkl = Symkl2d(size_average=True)
77 |
78 |         pred_mysymkl = mysymkl(t1_one_dim, t2_one_dim).data[0]
79 |         pred_nmlsymkl = nmlsymkl(t1_one_dim, t2_one_dim).data[0]
80 |         pred_symkl = averaged_symkl(t1_one_dim, t2_one_dim).data[0]
81 |
82 |         # self.assertAlmostEqual(pred_mysymkl * 3, actual) # TODO: Need to be fixed
83 |         # self.assertAlmostEqual(pred_nmlsymkl, actual) # TODO: Need to be fixed
84 |         self.assertAlmostEqual(pred_symkl, pred_mysymkl)
85 |         self.assertAlmostEqual(pred_symkl, pred_nmlsymkl / (batch_size * n_ch * w * h))
86 |
87 |
88 | if __name__ == "__main__":
89 |     unittest.main()
90 |
--------------------------------------------------------------------------------
/MCD_DA_seg/tools/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stu92054/Domain-adaptation-on-segmentation/f9d747c432c9ddddbe3005759b4c193120e5ad85/MCD_DA_seg/tools/__init__.py
--------------------------------------------------------------------------------
/MCD_DA_seg/tools/compare_predicted_png.py:
--------------------------------------------------------------------------------
1 | """
2 | Compare predicted visualization pngs.
3 |
4 | Create a merged png image.
5 | """
6 |
7 | from PIL import Image
8 | import sys
9 | import os
10 |
11 |
12 | def merge_four_images(files, outimg):
13 |     """
14 |     Create merged_img.
15 |
16 |     files : list of 4 image file paths
17 |     merged_img(PIL Image)
18 |     - topleft: 1st
19 |     - bottomleft: 2nd
20 |     - topright: 3rd
21 |     - bottomright: 4th
22 |     """
23 |     assert len(files) == 4
24 |
25 |     img = [Image.open(file_) for file_ in files]
26 |
27 |     img_size = img[0].size
28 |     merged_img = Image.new('RGB', (img_size[0]*2, img_size[1]*2))
29 |     for row in range(2):
30 |         for col in range(2):
31 |             merged_img.paste(img[row*2+col], (img_size[0]*row, img_size[1]*col))
32 |
33 |     merged_img.save(outimg)
34 |
35 |
36 | def main(vis_dirs, outdir):
37 |     """Output merged_imgs from 4 directories (one directory is the gt directory)."""
38 |     assert len(vis_dirs) == 4
39 |
40 |     if not os.path.exists(outdir):
41 |         os.mkdir(outdir)
42 |
43 |     for i, filename in enumerate(os.listdir(vis_dirs[0])):
44 |         if i%100 == 0:
45 |             print(i)
46 |         try:
47 |             files = [os.path.join(vis_dir, filename) for vis_dir in vis_dirs]
48 |             outimg = os.path.join(outdir, filename)
49 |             merge_four_images(files, outimg)
50 |         except:
51 |             print(filename)  # an image may be missing from one of the directories
52 |
53 | if __name__ == '__main__':
54 |     args = sys.argv
55 |
56 |     # The number of prediction-directory arguments needs to be 3.
57 |
58 |     # merge_four_images(args[1:], 'sample_merged.png')
59 |     vis_dirs = ['/data/ugui0/dataset/adaptation/segmentation_test'] + args[1:]
60 |
61 |     for i in range(20):
62 |         outdir = 'merged_imgs/merged_imgs_{0}'.format(i)
63 |         if os.path.exists(outdir):
64 |             continue
65 |         else:
66 |             break
67 |
68 |     main(vis_dirs, outdir)
69 |
--------------------------------------------------------------------------------
/MCD_DA_seg/tools/concat_rgb_gt_pred_img.py:
--------------------------------------------------------------------------------
1 | """
2 | Compare predicted visualization pngs.
3 |
4 | Create a merged png image of randomly selected predictions together with the original RGB image and the GT.
5 | """ 6 | 7 | import argparse 8 | import os 9 | import random 10 | 11 | import numpy as np 12 | from PIL import Image 13 | 14 | from util import mkdir_if_not_exist 15 | 16 | VIS_GT_DIR_DIC = { 17 | "city": "/data/unagi0/watanabe/DomainAdaptation/Segmentation/VisDA2017/cityscapes_vis_gt/val", 18 | "city16": "/data/unagi0/watanabe/DomainAdaptation/Segmentation/VisDA2017/cityscapes16_vis_gt/val" 19 | } 20 | RGB_IMG_DIR_DIC = { 21 | "city": "/data/unagi0/watanabe/DomainAdaptation/Segmentation/VisDA2017/cityscapes_val_imgs", 22 | "city16": "/data/unagi0/watanabe/DomainAdaptation/Segmentation/VisDA2017/cityscapes_val_imgs" 23 | } 24 | 25 | parser = argparse.ArgumentParser(description='Visualize Some Results') 26 | parser.add_argument('dataset', choices=["gta", "city", "test", "ir", "city16"]) 27 | parser.add_argument('--n_img', type=int, default=5) 28 | parser.add_argument('--pred_vis_dirs', type=str, nargs='+', 29 | help='result directory that visualized pngs') 30 | parser.add_argument('--outdir', type=str, default="vis_comparison") 31 | parser.add_argument("--rand_sample", action="store_true", 32 | help='whether you sample results randomly') 33 | 34 | args = parser.parse_args() 35 | 36 | rgb_dir = RGB_IMG_DIR_DIC[args.dataset] 37 | vis_gt_dir = VIS_GT_DIR_DIC[args.dataset] 38 | 39 | if args.rand_sample: 40 | rgbfn_list = os.listdir(rgb_dir) 41 | else: 42 | pickup_id_list = [ 43 | "lindau_000006_000019", 44 | "frankfurt_000001_021406", 45 | "frankfurt_000001_041074", 46 | "frankfurt_000001_002512", 47 | "frankfurt_000000_009688", 48 | "frankfurt_000001_040575", 49 | "munster_000050_000019" 50 | ] 51 | rgbfn_list = [x + "_leftImg8bit.png" for x in pickup_id_list] 52 | 53 | pickup_rgbfn_list = random.sample(rgbfn_list, args.n_img) 54 | print ("pickup filename list") 55 | print (pickup_rgbfn_list) 56 | 57 | all_img_list = [] 58 | for rgbfn in pickup_rgbfn_list: 59 | full_rgbfn = os.path.join(rgb_dir, rgbfn) 60 | 61 | gtfn = rgbfn.replace("leftImg8bit", "gtFine_gtlabels") 62 | full_gtfn = os.path.join(vis_gt_dir, gtfn) 63 | 64 | one_column_img_list = [] 65 | one_column_img_list.append(Image.open(full_rgbfn)) 66 | 67 | one_column_img_list.append(Image.open(full_gtfn)) 68 | 69 | for pred_vis_dir in args.pred_vis_dirs: 70 | full_predfn = os.path.join(pred_vis_dir, rgbfn) 71 | one_column_img_list.append(Image.open(full_predfn)) 72 | 73 | all_img_list.append(one_column_img_list) 74 | 75 | 76 | def concat_imgs(imgs): 77 | n_row = len(imgs[0]) 78 | n_col = len(imgs) 79 | w, h = imgs[0][0].size 80 | 81 | merged_img = Image.new('RGB', (w * n_col, h * n_row)) 82 | for col in range(n_col): 83 | for row in range(n_row): 84 | merged_img.paste(imgs[col][row], (w * col, h * row)) 85 | 86 | return merged_img 87 | 88 | 89 | res = concat_imgs(all_img_list) 90 | size = np.array(res.size) 91 | res = res.resize(size / 8) 92 | 93 | mkdir_if_not_exist(args.outdir) 94 | shortened_pickup_rgbfn_list = [x.replace("_leftImg8bit.png", "") for x in pickup_rgbfn_list] 95 | pickup_str = "-".join(shortened_pickup_rgbfn_list) + ".pdf" 96 | outfn = os.path.join(args.outdir, pickup_str) 97 | res.save(outfn) 98 | print ("Successfully saved result to %s" % outfn) 99 | -------------------------------------------------------------------------------- /MCD_DA_seg/util.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import shutil 4 | 5 | import torch 6 | import sys 7 | 8 | 9 | def set_debugger_org(): 10 | if not sys.excepthook == sys.__excepthook__: 11 | from 
IPython.core import ultratb 12 | sys.excepthook = ultratb.FormattedTB(call_pdb=True) 13 | 14 | 15 | def set_debugger_org_frc(): 16 | from IPython.core import ultratb 17 | sys.excepthook = ultratb.FormattedTB(call_pdb=True) 18 | 19 | 20 | def set_trace(): 21 | from IPython.core.debugger import Pdb 22 | Pdb(color_scheme='Linux').set_trace(sys._getframe().f_back) 23 | 24 | 25 | def mkdir_if_not_exist(dirname): 26 | if not os.path.exists(dirname): 27 | os.makedirs(dirname) 28 | 29 | 30 | def yes_no_input(): 31 | while True: 32 | if sys.version_info.major==3: 33 | choice = input("Please respond with 'yes' or 'no' [y/N]: ").lower() 34 | else: 35 | choice = raw_input("Please respond with 'yes' or 'no' [y/N]: ").lower() 36 | if choice in ['y', 'ye', 'yes']: 37 | return True 38 | elif choice in ['n', 'no']: 39 | return False 40 | 41 | 42 | def check_if_done(filename): 43 | if os.path.exists(filename): 44 | print ("%s already exists. Is it O.K. to overwrite it and start this program?" % filename) 45 | if not yes_no_input(): 46 | raise Exception("Please restart training after you set args.savename differently!") 47 | 48 | 49 | def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'): 50 | torch.save(state, filename) 51 | if is_best: 52 | shutil.copyfile(filename, 'model_best.pth.tar') 53 | 54 | 55 | class AverageMeter(object): 56 | """Computes and stores the average and current value""" 57 | 58 | def __init__(self): 59 | self.reset() 60 | 61 | def reset(self): 62 | self.val = 0 63 | self.avg = 0 64 | self.sum = 0 65 | self.count = 0 66 | 67 | def update(self, val, n=1): 68 | self.val = val 69 | self.sum += val * n 70 | self.count += n 71 | self.avg = self.sum / self.count 72 | 73 | 74 | def save_dic_to_json(dic, fn, verbose=True): 75 | with open(fn, "w") as f: 76 | json_str = json.dumps(dic, sort_keys=True, indent=4) 77 | if verbose: 78 | print (json_str) 79 | f.write(json_str) 80 | print ("param file '%s' was saved!" 
% fn)
81 |
82 |
83 | def emphasize_str(string):
84 |     print ('#' * 100)
85 |     print (string)
86 |     print ('#' * 100)
87 |
88 |
89 | def adjust_learning_rate(optimizer, lr_init, decay_rate, epoch, num_epochs, decay_epoch=15):
90 |     """Decay the learning rate by a factor of 10 once, at epoch decay_epoch"""
91 |     lr = lr_init
92 |     if epoch == decay_epoch:
93 |         lr *= 0.1
94 |     for param_group in optimizer.param_groups:
95 |         param_group['lr'] = lr
96 |     return lr
97 |
98 |
99 | def get_class_weight_from_file(n_class, weight_filename=None, add_bg_loss=False):
100 |     weight = torch.ones(n_class)
101 |     if weight_filename:
102 |         import pandas as pd
103 |
104 |         loss_df = pd.read_csv(weight_filename)
105 |         loss_df.sort_values("class_id", inplace=True)
106 |         weight *= torch.FloatTensor(loss_df.weight.values)
107 |
108 |     if not add_bg_loss:
109 |         weight[n_class - 1] = 0  # Ignore background loss
110 |     return weight
111 |
--------------------------------------------------------------------------------
/MCD_DA_seg/visualize.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import visdom
3 |
4 |
5 | class LinePlotter(object):
6 |     def __init__(self, env_name="main"):
7 |         self.vis = visdom.Visdom()
8 |         self.env = env_name
9 |         self.plots = {}
10 |
11 |     def plot(self, var_name, split_name, x, y):
12 |         if var_name not in self.plots:
13 |             self.plots[var_name] = self.vis.line(X=np.array([x, x]),
14 |                                                  Y=np.array([y, y]), env=self.env, opts=dict(
15 |                 legend=[split_name],
16 |                 title=var_name,
17 |                 xlabel="Iters",
18 |                 ylabel=var_name
19 |             ))
20 |         else:
21 |             self.vis.updateTrace(X=np.array([x, x]), Y=np.array([y, y]), env=self.env,
22 |                                  win=self.plots[var_name], name=split_name)
23 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Domain-adaptation-on-segmentation
2 |
3 | Some useful domain adaptation code
4 |
5 | > This repository collects some useful domain adaptation code from others, and we will keep implementing these methods here so that you can use them easily.
6 |
7 | ## Table of Contents
8 | - [Blog](https://medium.com/deep-learning-domain-adaptation-on-image-segmentat)
9 | - [Papers](#papers)
10 | - [Talks](#talks)
11 | - [Datasets](#datasets)
12 |
13 | ----------
14 | ## Papers
15 | **Learning to Adapt Structured Output Space for Semantic Segmentation**
16 |
17 | - paper link: [https://arxiv.org/abs/1802.10349](https://arxiv.org/abs/1802.10349)
18 | - code: provided in folder [Adapt_Structured_Output](https://github.com/stu92054/Domain-adaptation-on-segmentation/tree/master/Adapt_Structured_Output)
19 | - main code forked from [https://github.com/wasidennis/AdaptSegNet](https://github.com/wasidennis/AdaptSegNet)
20 | - [tutorial Video]()
21 |
22 | **No More Discrimination: Cross City Adaptation of Road Scene Segmenters**
23 |
24 | - paper link: [https://arxiv.org/abs/1704.08509](https://arxiv.org/abs/1704.08509)
25 | - code: provided in folder [Adapt_Road_Scene](https://github.com/stu92054/Domain-adaptation-on-segmentation/tree/master/Adapt_Road_Scene)
26 | - [tutorial Video]()
27 |
28 | **FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation**
29 |
30 | - paper link: [https://arxiv.org/abs/1612.02649](https://arxiv.org/abs/1612.02649)
31 | - code: provided in folder [FCNs_Wild](https://github.com/stu92054/Domain-adaptation-on-segmentation/tree/master/FCNs_Wild)
32 | - [tutorial Video]()
33 |
34 | **Maximum Classifier Discrepancy for Domain Adaptation with Semantic Segmentation**
35 | - paper link: [https://arxiv.org/abs/1712.02560](https://arxiv.org/abs/1712.02560)
36 | - code: provided in folder [MCD_DA_seg](https://github.com/stu92054/Domain-adaptation-on-segmentation/tree/master/MCD_DA_seg)
37 | - main code forked from [https://github.com/mil-tokyo/MCD_DA](https://github.com/mil-tokyo/MCD_DA)
38 | - [tutorial Video]()
39 |
40 | ## Talks
41 | - [Learning to Adapt Structured Output Space for Semantic Segmentation](https://www.youtube.com/watch?v=zVYY9HaEJnc&feature=youtu.be&t=2643), Wei-Chih Hung.
42 | - [No More Discrimination: Cross City Adaptation of Road Scene Segmenters](https://www.youtube.com/watch?v=EQ9HptjI8_U), Yu-Ting Chen.
43 | - [Maximum Classifier Discrepancy for Domain Adaptation with Semantic Segmentation](https://youtu.be/lNqXyJliVSo?t=2282), Kuniaki Saito.
44 |
45 | ## Datasets
46 |
47 | - [Synthia Dataset](http://synthia-dataset.com/download-2/) (download the subset **SYNTHIA-RAND-CITYSCAPES**)
48 |
49 | - [Cityscapes Dataset](https://www.cityscapes-dataset.com/)
50 |
51 | - [Our Dataset](https://yihsinchen.github.io/segmentation_adaptation/#Dataset)
52 |   - contains four subsets --- Taipei, Tokyo, Roma, Rio --- used as target domains (only the testing data has annotations); a sketch of how the split lists are read follows below
53 |
--------------------------------------------------------------------------------
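For orientation, the country split lists that appear earlier in this repository (e.g. `MCD_DA_seg/data_path/countries/imgs/val/Taipei.txt`) are plain text files with one image path per line, relative to the dataset root; note that the Rio, Roma, and Taipei validation lists use `.png` images while the Tokyo list uses `.jpg`. A minimal loading sketch, where `read_split` and the `dataset_root` path are illustrative placeholders rather than part of the repository:

```python
import os

def read_split(list_file, dataset_root):
    """Return absolute image paths for one split file (one relative path per line)."""
    with open(list_file) as f:
        rel_paths = [line.strip() for line in f if line.strip()]
    return [os.path.join(dataset_root, p) for p in rel_paths]

# Hypothetical usage:
# imgs = read_split("MCD_DA_seg/data_path/countries/imgs/val/Taipei.txt",
#                   "/path/to/dataset_root")
# -> ["/path/to/dataset_root/val/Taipei/pano_00506_0_0.png", ...]
```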