├── FPR_IQA
│   ├── FPR_NI
│   │   ├── models
│   │   │   ├── csiq
│   │   │   │   └── pkl.txt
│   │   │   ├── kadid
│   │   │   │   └── pkl.txt
│   │   │   ├── live
│   │   │   │   └── pkl.txt
│   │   │   └── tid2013
│   │   │       └── pkl.txt
│   │   ├── scripts
│   │   │   ├── csiq
│   │   │   │   ├── test_0_data.json
│   │   │   │   ├── test_1_data.json
│   │   │   │   ├── test_2_data.json
│   │   │   │   ├── test_3_data.json
│   │   │   │   ├── test_4_data.json
│   │   │   │   ├── test_5_data.json
│   │   │   │   ├── test_6_data.json
│   │   │   │   ├── test_7_data.json
│   │   │   │   ├── test_8_data.json
│   │   │   │   ├── test_9_data.json
│   │   │   │   ├── train_0_data.json
│   │   │   │   ├── train_1_data.json
│   │   │   │   ├── train_2_data.json
│   │   │   │   ├── train_3_data.json
│   │   │   │   ├── train_4_data.json
│   │   │   │   ├── train_5_data.json
│   │   │   │   ├── train_6_data.json
│   │   │   │   ├── train_7_data.json
│   │   │   │   ├── train_8_data.json
│   │   │   │   ├── train_9_data.json
│   │   │   │   ├── val_0_data.json
│   │   │   │   ├── val_1_data.json
│   │   │   │   ├── val_2_data.json
│   │   │   │   ├── val_3_data.json
│   │   │   │   ├── val_4_data.json
│   │   │   │   ├── val_5_data.json
│   │   │   │   ├── val_6_data.json
│   │   │   │   ├── val_7_data.json
│   │   │   │   ├── val_8_data.json
│   │   │   │   └── val_9_data.json
│   │   │   ├── kadid
│   │   │   │   ├── test_0_data.json
│   │   │   │   ├── test_1_data.json
│   │   │   │   ├── test_2_data.json
│   │   │   │   ├── test_3_data.json
│   │   │   │   ├── test_4_data.json
│   │   │   │   ├── test_5_data.json
│   │   │   │   ├── test_6_data.json
│   │   │   │   ├── test_7_data.json
│   │   │   │   ├── test_8_data.json
│   │   │   │   ├── test_9_data.json
│   │   │   │   ├── train_0_data.json
│   │   │   │   ├── train_1_data.json
│   │   │   │   ├── train_2_data.json
│   │   │   │   ├── train_3_data.json
│   │   │   │   ├── train_4_data.json
│   │   │   │   ├── train_5_data.json
│   │   │   │   ├── train_6_data.json
│   │   │   │   ├── train_7_data.json
│   │   │   │   ├── train_8_data.json
│   │   │   │   ├── train_9_data.json
│   │   │   │   ├── val_0_data.json
│   │   │   │   ├── val_1_data.json
│   │   │   │   ├── val_2_data.json
│   │   │   │   ├── val_3_data.json
│   │   │   │   ├── val_4_data.json
│   │   │   │   ├── val_5_data.json
│   │   │   │   ├── val_6_data.json
│   │   │   │   ├── val_7_data.json
│   │   │   │   ├── val_8_data.json
│   │   │   │   └── val_9_data.json
│   │   │   ├── live
│   │   │   │   ├── test_0_data.json
│   │   │   │   ├── test_1_data.json
│   │   │   │   ├── test_2_data.json
│   │   │   │   ├── test_3_data.json
│   │   │   │   ├── test_4_data.json
│   │   │   │   ├── test_5_data.json
│   │   │   │   ├── test_6_data.json
│   │   │   │   ├── test_7_data.json
│   │   │   │   ├── test_8_data.json
│   │   │   │   ├── test_9_data.json
│   │   │   │   ├── train_0_data.json
│   │   │   │   ├── train_1_data.json
│   │   │   │   ├── train_2_data.json
│   │   │   │   ├── train_3_data.json
│   │   │   │   ├── train_4_data.json
│   │   │   │   ├── train_5_data.json
│   │   │   │   ├── train_6_data.json
│   │   │   │   ├── train_7_data.json
│   │   │   │   ├── train_8_data.json
│   │   │   │   ├── train_9_data.json
│   │   │   │   ├── val_0_data.json
│   │   │   │   ├── val_1_data.json
│   │   │   │   ├── val_2_data.json
│   │   │   │   ├── val_3_data.json
│   │   │   │   ├── val_4_data.json
│   │   │   │   ├── val_5_data.json
│   │   │   │   ├── val_6_data.json
│   │   │   │   ├── val_7_data.json
│   │   │   │   ├── val_8_data.json
│   │   │   │   └── val_9_data.json
│   │   │   └── tid2013
│   │   │       ├── test_0_data.json
│   │   │       ├── test_1_data.json
│   │   │       ├── test_2_data.json
│   │   │       ├── test_3_data.json
│   │   │       ├── test_4_data.json
│   │   │       ├── test_5_data.json
│   │   │       ├── test_6_data.json
│   │   │       ├── test_7_data.json
│   │   │       ├── test_8_data.json
│   │   │       ├── test_9_data.json
│   │   │       ├── train_0_data.json
│   │   │       ├── train_1_data.json
│   │   │       ├── train_2_data.json
│   │   │       ├── train_3_data.json
│   │   │       ├── train_4_data.json
│   │   │       ├── train_5_data.json
│   │   │       ├── train_6_data.json
│   │   │       ├── train_7_data.json
│   │   │       ├── train_8_data.json
│   │   │       ├── train_9_data.json
│   │   │       ├── val_0_data.json
│   │   │       ├── val_1_data.json
│   │   │       ├── val_2_data.json
│   │   │       ├── val_3_data.json
│   │   │       ├── val_4_data.json
│   │   │       ├── val_5_data.json
│   │   │       ├── val_6_data.json
│   │   │       ├── val_7_data.json
│   │   │       ├── val_8_data.json
│   │   │       └── val_9_data.json
│   │   ├── src
│   │   │   ├── IQA_Test_All_Pros.py
│   │   │   ├── Inv_arch.py
│   │   │   ├── Subnet_constructor.py
│   │   │   ├── VIDLoss.py
│   │   │   ├── __pycache__
│   │   │   │   ├── FRmodel.cpython-36.pyc
│   │   │   │   ├── FRmodel3.cpython-36.pyc
│   │   │   │   ├── Inv_arch.cpython-36.pyc
│   │   │   │   ├── Inv_arch.cpython-37.pyc
│   │   │   │   ├── Inv_arch.cpython-38.pyc
│   │   │   │   ├── NRmodel.cpython-36.pyc
│   │   │   │   ├── NRmodel3.cpython-36.pyc
│   │   │   │   ├── Subnet_constructor.cpython-36.pyc
│   │   │   │   ├── Subnet_constructor.cpython-37.pyc
│   │   │   │   ├── Subnet_constructor.cpython-38.pyc
│   │   │   │   ├── VIDLoss.cpython-36.pyc
│   │   │   │   ├── VIDLoss.cpython-37.pyc
│   │   │   │   ├── VIDLoss.cpython-38.pyc
│   │   │   │   ├── dataset.cpython-36.pyc
│   │   │   │   ├── dataset.cpython-37.pyc
│   │   │   │   ├── dataset.cpython-38.pyc
│   │   │   │   ├── model.cpython-36.pyc
│   │   │   │   ├── model.cpython-37.pyc
│   │   │   │   ├── model.cpython-38.pyc
│   │   │   │   ├── model2.cpython-36.pyc
│   │   │   │   ├── modelSplit.cpython-36.pyc
│   │   │   │   ├── modelSplit2.cpython-36.pyc
│   │   │   │   ├── module_util.cpython-36.pyc
│   │   │   │   ├── module_util.cpython-37.pyc
│   │   │   │   ├── module_util.cpython-38.pyc
│   │   │   │   ├── utils.cpython-36.pyc
│   │   │   │   ├── utils.cpython-37.pyc
│   │   │   │   └── utils.cpython-38.pyc
│   │   │   ├── dataset.py
│   │   │   ├── iqaScrach.py
│   │   │   ├── iqaTest.py
│   │   │   ├── model.py
│   │   │   ├── module_util.py
│   │   │   └── utils.py
│   │   └── utils
│   │       ├── CSIQ_make_list.py
│   │       ├── KADID_make_list.py
│   │       ├── LIVE_make_list.py
│   │       └── TID_make_list.py
│   ├── FPR_SCI
│   │   ├── models
│   │   │   ├── scid
│   │   │   │   └── pkl.txt
│   │   │   └── siqad
│   │   │       └── pkl.txt
│   │   ├── sci_scripts
│   │   │   ├── scid-scripts-6-2-2
│   │   │   │   ├── test_0_data.json
│   │   │   │   ├── test_1_data.json
│   │   │   │   ├── test_2_data.json
│   │   │   │   ├── test_3_data.json
│   │   │   │   ├── test_4_data.json
│   │   │   │   ├── test_5_data.json
│   │   │   │   ├── test_6_data.json
│   │   │   │   ├── test_7_data.json
│   │   │   │   ├── test_8_data.json
│   │   │   │   ├── test_9_data.json
│   │   │   │   ├── train_0_data.json
│   │   │   │   ├── train_1_data.json
│   │   │   │   ├── train_2_data.json
│   │   │   │   ├── train_3_data.json
│   │   │   │   ├── train_4_data.json
│   │   │   │   ├── train_5_data.json
│   │   │   │   ├── train_6_data.json
│   │   │   │   ├── train_7_data.json
│   │   │   │   ├── train_8_data.json
│   │   │   │   ├── train_9_data.json
│   │   │   │   ├── val_0_data.json
│   │   │   │   ├── val_1_data.json
│   │   │   │   ├── val_2_data.json
│   │   │   │   ├── val_3_data.json
│   │   │   │   ├── val_4_data.json
│   │   │   │   ├── val_5_data.json
│   │   │   │   ├── val_6_data.json
│   │   │   │   ├── val_7_data.json
│   │   │   │   ├── val_8_data.json
│   │   │   │   └── val_9_data.json
│   │   │   └── siqad-scripts-6-2-2
│   │   │       ├── test_0_data.json
│   │   │       ├── test_1_data.json
│   │   │       ├── test_2_data.json
│   │   │       ├── test_3_data.json
│   │   │       ├── test_4_data.json
│   │   │       ├── test_5_data.json
│   │   │       ├── test_6_data.json
│   │   │       ├── test_7_data.json
│   │   │       ├── test_8_data.json
│   │   │       ├── test_9_data.json
│   │   │       ├── train_0_data.json
│   │   │       ├── train_1_data.json
│   │   │       ├── train_2_data.json
│   │   │       ├── train_3_data.json
│   │   │       ├── train_4_data.json
│   │   │       ├── train_5_data.json
│   │   │       ├── train_6_data.json
│   │   │       ├── train_7_data.json
│   │   │       ├── train_8_data.json
│   │   │       ├── train_9_data.json
│   │   │       ├── val_0_data.json
│   │   │       ├── val_1_data.json
│   │   │       ├── val_2_data.json
│   │   │       ├── val_3_data.json
│   │   │       ├── val_4_data.json
│   │   │       ├── val_5_data.json
│   │   │       ├── val_6_data.json
│   │   │       ├── val_7_data.json
│   │   │       ├── val_8_data.json
│   │   │       └── val_9_data.json
│   │   ├── src
│   │   │   ├── IQA_Test_All_Pros.py
│   │   │   ├── Inv_arch.py
│   │   │   ├── Subnet_constructor.py
│   │   │   ├── VIDLoss.py
│   │   │   ├── __pycache__
│   │   │   │   ├── FRmodel.cpython-36.pyc
│   │   │   │   ├── FRmodel3.cpython-36.pyc
│   │   │   │   ├── Inv_arch.cpython-36.pyc
│   │   │   │   ├── Inv_arch.cpython-37.pyc
│   │   │   │   ├── Inv_arch.cpython-38.pyc
│   │   │   │   ├── NRmodel.cpython-36.pyc
│   │   │   │   ├── NRmodel3.cpython-36.pyc
│   │   │   │   ├── Subnet_constructor.cpython-36.pyc
│   │   │   │   ├── Subnet_constructor.cpython-37.pyc
│   │   │   │   ├── Subnet_constructor.cpython-38.pyc
│   │   │   │   ├── VIDLoss.cpython-36.pyc
│   │   │   │   ├── VIDLoss.cpython-37.pyc
│   │   │   │   ├── VIDLoss.cpython-38.pyc
│   │   │   │   ├── dataset.cpython-36.pyc
│   │   │   │   ├── dataset.cpython-37.pyc
│   │   │   │   ├── dataset.cpython-38.pyc
│   │   │   │   ├── model.cpython-36.pyc
│   │   │   │   ├── model.cpython-37.pyc
│   │   │   │   ├── model.cpython-38.pyc
│   │   │   │   ├── model2.cpython-36.pyc
│   │   │   │   ├── modelSplit.cpython-36.pyc
│   │   │   │   ├── modelSplit2.cpython-36.pyc
│   │   │   │   ├── module_util.cpython-36.pyc
│   │   │   │   ├── module_util.cpython-37.pyc
│   │   │   │   ├── module_util.cpython-38.pyc
│   │   │   │   ├── utils.cpython-36.pyc
│   │   │   │   ├── utils.cpython-37.pyc
│   │   │   │   └── utils.cpython-38.pyc
│   │   │   ├── dataset.py
│   │   │   ├── iqaScrach.py
│   │   │   ├── iqaTest.py
│   │   │   ├── model.py
│   │   │   ├── model_ini.py
│   │   │   ├── module_util.py
│   │   │   ├── scid-iqaScrach.py
│   │   │   ├── scid-iqaTest.py
│   │   │   └── utils.py
│   │   └── utils
│   │       ├── SCID_make_list.py
│   │       └── SIQAD_make_list.py
│   └── framework.png
├── LICENSE
├── README.md
└── datasets
    ├── CSIQ
    │   └── ref_dist_mos.txt
    ├── SCID
    │   └── MOS_SCID.txt
    ├── SIQAD
    │   └── sccdmos.txt
    ├── databaserelease2
    │   └── MOS_NAME_REF.txt
    ├── kadid10k
    │   └── names_with_mos.txt
    └── tid2013
        └── mos_with_names.txt

/FPR_IQA/FPR_NI/models/csiq/pkl.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/models/csiq/pkl.txt
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/models/kadid/pkl.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/models/kadid/pkl.txt
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/models/live/pkl.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/models/live/pkl.txt
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/models/tid2013/pkl.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/models/tid2013/pkl.txt
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/scripts/csiq/val_5_data.json:
--------------------------------------------------------------------------------
{"img": ["../../datasets/CSIQ/dst_imgs/awgn/1600.AWGN.1.png", "../../datasets/CSIQ/dst_imgs/awgn/1600.AWGN.2.png", "../../datasets/CSIQ/dst_imgs/awgn/1600.AWGN.3.png", "../../datasets/CSIQ/dst_imgs/awgn/1600.AWGN.4.png", "../../datasets/CSIQ/dst_imgs/awgn/1600.AWGN.5.png", "../../datasets/CSIQ/dst_imgs/jpeg/1600.JPEG.1.png", "../../datasets/CSIQ/dst_imgs/jpeg/1600.JPEG.2.png", "../../datasets/CSIQ/dst_imgs/jpeg/1600.JPEG.3.png", "../../datasets/CSIQ/dst_imgs/jpeg/1600.JPEG.4.png", "../../datasets/CSIQ/dst_imgs/jpeg/1600.JPEG.5.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/1600.jpeg2000.1.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/1600.jpeg2000.2.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/1600.jpeg2000.3.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/1600.jpeg2000.4.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/1600.jpeg2000.5.png", "../../datasets/CSIQ/dst_imgs/fnoise/1600.fnoise.1.png", "../../datasets/CSIQ/dst_imgs/fnoise/1600.fnoise.2.png", "../../datasets/CSIQ/dst_imgs/fnoise/1600.fnoise.3.png",
"../../datasets/CSIQ/dst_imgs/fnoise/1600.fnoise.4.png", "../../datasets/CSIQ/dst_imgs/fnoise/1600.fnoise.5.png", "../../datasets/CSIQ/dst_imgs/blur/1600.BLUR.1.png", "../../datasets/CSIQ/dst_imgs/blur/1600.BLUR.2.png", "../../datasets/CSIQ/dst_imgs/blur/1600.BLUR.3.png", "../../datasets/CSIQ/dst_imgs/blur/1600.BLUR.4.png", "../../datasets/CSIQ/dst_imgs/blur/1600.BLUR.5.png", "../../datasets/CSIQ/dst_imgs/contrast/1600.contrast.1.png", "../../datasets/CSIQ/dst_imgs/contrast/1600.contrast.2.png", "../../datasets/CSIQ/dst_imgs/contrast/1600.contrast.3.png", "../../datasets/CSIQ/dst_imgs/contrast/1600.contrast.4.png", "../../datasets/CSIQ/dst_imgs/awgn/aerial_city.AWGN.1.png", "../../datasets/CSIQ/dst_imgs/awgn/aerial_city.AWGN.2.png", "../../datasets/CSIQ/dst_imgs/awgn/aerial_city.AWGN.3.png", "../../datasets/CSIQ/dst_imgs/awgn/aerial_city.AWGN.4.png", "../../datasets/CSIQ/dst_imgs/awgn/aerial_city.AWGN.5.png", "../../datasets/CSIQ/dst_imgs/jpeg/aerial_city.JPEG.1.png", "../../datasets/CSIQ/dst_imgs/jpeg/aerial_city.JPEG.2.png", "../../datasets/CSIQ/dst_imgs/jpeg/aerial_city.JPEG.3.png", "../../datasets/CSIQ/dst_imgs/jpeg/aerial_city.JPEG.4.png", "../../datasets/CSIQ/dst_imgs/jpeg/aerial_city.JPEG.5.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/aerial_city.jpeg2000.1.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/aerial_city.jpeg2000.2.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/aerial_city.jpeg2000.3.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/aerial_city.jpeg2000.4.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/aerial_city.jpeg2000.5.png", "../../datasets/CSIQ/dst_imgs/fnoise/aerial_city.fnoise.1.png", "../../datasets/CSIQ/dst_imgs/fnoise/aerial_city.fnoise.2.png", "../../datasets/CSIQ/dst_imgs/fnoise/aerial_city.fnoise.3.png", "../../datasets/CSIQ/dst_imgs/fnoise/aerial_city.fnoise.4.png", "../../datasets/CSIQ/dst_imgs/fnoise/aerial_city.fnoise.5.png", "../../datasets/CSIQ/dst_imgs/blur/aerial_city.BLUR.1.png", "../../datasets/CSIQ/dst_imgs/blur/aerial_city.BLUR.2.png", "../../datasets/CSIQ/dst_imgs/blur/aerial_city.BLUR.3.png", "../../datasets/CSIQ/dst_imgs/blur/aerial_city.BLUR.4.png", "../../datasets/CSIQ/dst_imgs/blur/aerial_city.BLUR.5.png", "../../datasets/CSIQ/dst_imgs/contrast/aerial_city.contrast.1.png", "../../datasets/CSIQ/dst_imgs/contrast/aerial_city.contrast.2.png", "../../datasets/CSIQ/dst_imgs/contrast/aerial_city.contrast.3.png", "../../datasets/CSIQ/dst_imgs/contrast/aerial_city.contrast.4.png", "../../datasets/CSIQ/dst_imgs/awgn/boston.AWGN.1.png", "../../datasets/CSIQ/dst_imgs/awgn/boston.AWGN.2.png", "../../datasets/CSIQ/dst_imgs/awgn/boston.AWGN.3.png", "../../datasets/CSIQ/dst_imgs/awgn/boston.AWGN.4.png", "../../datasets/CSIQ/dst_imgs/awgn/boston.AWGN.5.png", "../../datasets/CSIQ/dst_imgs/jpeg/boston.JPEG.1.png", "../../datasets/CSIQ/dst_imgs/jpeg/boston.JPEG.2.png", "../../datasets/CSIQ/dst_imgs/jpeg/boston.JPEG.3.png", "../../datasets/CSIQ/dst_imgs/jpeg/boston.JPEG.4.png", "../../datasets/CSIQ/dst_imgs/jpeg/boston.JPEG.5.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/boston.jpeg2000.1.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/boston.jpeg2000.2.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/boston.jpeg2000.3.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/boston.jpeg2000.4.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/boston.jpeg2000.5.png", "../../datasets/CSIQ/dst_imgs/fnoise/boston.fnoise.1.png", "../../datasets/CSIQ/dst_imgs/fnoise/boston.fnoise.2.png", "../../datasets/CSIQ/dst_imgs/fnoise/boston.fnoise.3.png", 
"../../datasets/CSIQ/dst_imgs/fnoise/boston.fnoise.4.png", "../../datasets/CSIQ/dst_imgs/fnoise/boston.fnoise.5.png", "../../datasets/CSIQ/dst_imgs/blur/boston.BLUR.1.png", "../../datasets/CSIQ/dst_imgs/blur/boston.BLUR.2.png", "../../datasets/CSIQ/dst_imgs/blur/boston.BLUR.3.png", "../../datasets/CSIQ/dst_imgs/blur/boston.BLUR.4.png", "../../datasets/CSIQ/dst_imgs/blur/boston.BLUR.5.png", "../../datasets/CSIQ/dst_imgs/contrast/boston.contrast.1.png", "../../datasets/CSIQ/dst_imgs/contrast/boston.contrast.2.png", "../../datasets/CSIQ/dst_imgs/contrast/boston.contrast.3.png", "../../datasets/CSIQ/dst_imgs/contrast/boston.contrast.4.png", "../../datasets/CSIQ/dst_imgs/awgn/cactus.AWGN.1.png", "../../datasets/CSIQ/dst_imgs/awgn/cactus.AWGN.2.png", "../../datasets/CSIQ/dst_imgs/awgn/cactus.AWGN.3.png", "../../datasets/CSIQ/dst_imgs/awgn/cactus.AWGN.4.png", "../../datasets/CSIQ/dst_imgs/awgn/cactus.AWGN.5.png", "../../datasets/CSIQ/dst_imgs/jpeg/cactus.JPEG.1.png", "../../datasets/CSIQ/dst_imgs/jpeg/cactus.JPEG.2.png", "../../datasets/CSIQ/dst_imgs/jpeg/cactus.JPEG.3.png", "../../datasets/CSIQ/dst_imgs/jpeg/cactus.JPEG.4.png", "../../datasets/CSIQ/dst_imgs/jpeg/cactus.JPEG.5.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/cactus.jpeg2000.1.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/cactus.jpeg2000.2.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/cactus.jpeg2000.3.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/cactus.jpeg2000.4.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/cactus.jpeg2000.5.png", "../../datasets/CSIQ/dst_imgs/fnoise/cactus.fnoise.1.png", "../../datasets/CSIQ/dst_imgs/fnoise/cactus.fnoise.2.png", "../../datasets/CSIQ/dst_imgs/fnoise/cactus.fnoise.3.png", "../../datasets/CSIQ/dst_imgs/fnoise/cactus.fnoise.4.png", "../../datasets/CSIQ/dst_imgs/fnoise/cactus.fnoise.5.png", "../../datasets/CSIQ/dst_imgs/blur/cactus.BLUR.1.png", "../../datasets/CSIQ/dst_imgs/blur/cactus.BLUR.2.png", "../../datasets/CSIQ/dst_imgs/blur/cactus.BLUR.3.png", "../../datasets/CSIQ/dst_imgs/blur/cactus.BLUR.4.png", "../../datasets/CSIQ/dst_imgs/blur/cactus.BLUR.5.png", "../../datasets/CSIQ/dst_imgs/contrast/cactus.contrast.1.png", "../../datasets/CSIQ/dst_imgs/contrast/cactus.contrast.2.png", "../../datasets/CSIQ/dst_imgs/contrast/cactus.contrast.3.png", "../../datasets/CSIQ/dst_imgs/contrast/cactus.contrast.4.png", "../../datasets/CSIQ/dst_imgs/awgn/elk.AWGN.1.png", "../../datasets/CSIQ/dst_imgs/awgn/elk.AWGN.2.png", "../../datasets/CSIQ/dst_imgs/awgn/elk.AWGN.3.png", "../../datasets/CSIQ/dst_imgs/awgn/elk.AWGN.4.png", "../../datasets/CSIQ/dst_imgs/awgn/elk.AWGN.5.png", "../../datasets/CSIQ/dst_imgs/jpeg/elk.JPEG.1.png", "../../datasets/CSIQ/dst_imgs/jpeg/elk.JPEG.2.png", "../../datasets/CSIQ/dst_imgs/jpeg/elk.JPEG.3.png", "../../datasets/CSIQ/dst_imgs/jpeg/elk.JPEG.4.png", "../../datasets/CSIQ/dst_imgs/jpeg/elk.JPEG.5.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/elk.jpeg2000.1.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/elk.jpeg2000.2.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/elk.jpeg2000.3.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/elk.jpeg2000.4.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/elk.jpeg2000.5.png", "../../datasets/CSIQ/dst_imgs/fnoise/elk.fnoise.1.png", "../../datasets/CSIQ/dst_imgs/fnoise/elk.fnoise.2.png", "../../datasets/CSIQ/dst_imgs/fnoise/elk.fnoise.3.png", "../../datasets/CSIQ/dst_imgs/fnoise/elk.fnoise.4.png", "../../datasets/CSIQ/dst_imgs/fnoise/elk.fnoise.5.png", "../../datasets/CSIQ/dst_imgs/blur/elk.BLUR.1.png", "../../datasets/CSIQ/dst_imgs/blur/elk.BLUR.2.png", 
"../../datasets/CSIQ/dst_imgs/blur/elk.BLUR.3.png", "../../datasets/CSIQ/dst_imgs/blur/elk.BLUR.4.png", "../../datasets/CSIQ/dst_imgs/blur/elk.BLUR.5.png", "../../datasets/CSIQ/dst_imgs/contrast/elk.contrast.1.png", "../../datasets/CSIQ/dst_imgs/contrast/elk.contrast.2.png", "../../datasets/CSIQ/dst_imgs/contrast/elk.contrast.3.png", "../../datasets/CSIQ/dst_imgs/awgn/roping.AWGN.1.png", "../../datasets/CSIQ/dst_imgs/awgn/roping.AWGN.2.png", "../../datasets/CSIQ/dst_imgs/awgn/roping.AWGN.3.png", "../../datasets/CSIQ/dst_imgs/awgn/roping.AWGN.4.png", "../../datasets/CSIQ/dst_imgs/awgn/roping.AWGN.5.png", "../../datasets/CSIQ/dst_imgs/jpeg/roping.JPEG.1.png", "../../datasets/CSIQ/dst_imgs/jpeg/roping.JPEG.2.png", "../../datasets/CSIQ/dst_imgs/jpeg/roping.JPEG.3.png", "../../datasets/CSIQ/dst_imgs/jpeg/roping.JPEG.4.png", "../../datasets/CSIQ/dst_imgs/jpeg/roping.JPEG.5.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/roping.jpeg2000.1.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/roping.jpeg2000.2.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/roping.jpeg2000.3.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/roping.jpeg2000.4.png", "../../datasets/CSIQ/dst_imgs/jpeg2000/roping.jpeg2000.5.png", "../../datasets/CSIQ/dst_imgs/fnoise/roping.fnoise.1.png", "../../datasets/CSIQ/dst_imgs/fnoise/roping.fnoise.2.png", "../../datasets/CSIQ/dst_imgs/fnoise/roping.fnoise.3.png", "../../datasets/CSIQ/dst_imgs/fnoise/roping.fnoise.4.png", "../../datasets/CSIQ/dst_imgs/fnoise/roping.fnoise.5.png", "../../datasets/CSIQ/dst_imgs/blur/roping.BLUR.1.png", "../../datasets/CSIQ/dst_imgs/blur/roping.BLUR.2.png", "../../datasets/CSIQ/dst_imgs/blur/roping.BLUR.3.png", "../../datasets/CSIQ/dst_imgs/blur/roping.BLUR.4.png", "../../datasets/CSIQ/dst_imgs/blur/roping.BLUR.5.png", "../../datasets/CSIQ/dst_imgs/contrast/roping.contrast.1.png", "../../datasets/CSIQ/dst_imgs/contrast/roping.contrast.2.png", "../../datasets/CSIQ/dst_imgs/contrast/roping.contrast.3.png", "../../datasets/CSIQ/dst_imgs/contrast/roping.contrast.4.png"], "ref": ["../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/1600.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", 
"../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/aerial_city.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/boston.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/cactus.png", 
"../../datasets/CSIQ/src_imgs/cactus.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/elk.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png", "../../datasets/CSIQ/src_imgs/roping.png"], "score": [6.2, 20.599999999999998, 26.200000000000003, 37.5, 46.7, 1.3, 6.9, 19.7, 50.1, 68.7, 1.2, 13.5, 36.4, 57.599999999999994, 82.69999999999999, 5.2, 25.3, 37.6, 49.0, 56.49999999999999, 4.3, 14.2, 34.1, 47.099999999999994, 75.0, 5.6000000000000005, 20.1, 31.0, 37.1, 3.2, 12.5, 24.5, 37.3, 50.1, 3.0, 6.5, 38.9, 60.699999999999996, 81.89999999999999, 2.7, 19.1, 48.6, 75.4, 95.6, 12.6, 31.900000000000002, 47.5, 60.099999999999994, 70.8, 2.5, 11.4, 25.900000000000002, 46.400000000000006, 89.60000000000001, 3.8, 16.3, 36.7, 45.300000000000004, 2.9000000000000004, 8.1, 21.5, 38.3, 47.199999999999996, 0.0, 5.0, 32.800000000000004, 73.7, 83.2, 0.0, 11.799999999999999, 38.9, 69.69999999999999, 90.4, 6.800000000000001, 24.3, 49.8, 62.0, 72.39999999999999, 8.3, 21.6, 36.3, 57.099999999999994, 91.3, 11.700000000000001, 33.900000000000006, 60.5, 69.0, 10.4, 20.599999999999998, 34.9, 43.9, 49.8, 4.1000000000000005, 16.0, 37.8, 70.39999999999999, 81.2, 7.9, 20.9, 40.5, 63.9, 78.5, 22.2, 32.5, 50.3, 59.5, 66.7, 6.7, 14.7, 39.900000000000006, 50.8, 75.0, 11.1, 28.499999999999996, 40.8, 51.7, 3.3000000000000003, 6.5, 17.2, 23.7, 35.8, 
3.2, 6.800000000000001, 22.0, 55.00000000000001, 71.0, 2.7, 7.3999999999999995, 24.5, 54.6, 81.10000000000001, 4.6, 24.7, 37.1, 48.4, 64.5, 4.3999999999999995, 12.2, 24.7, 40.5, 76.3, 7.6, 20.0, 39.900000000000006, 0.3, 10.5, 18.5, 38.2, 51.300000000000004, 1.4000000000000001, 7.5, 30.0, 59.9, 82.89999999999999, 1.7000000000000002, 17.2, 36.4, 67.10000000000001, 91.0, 3.2, 13.900000000000002, 33.6, 51.300000000000004, 73.8, 3.4000000000000004, 18.8, 47.099999999999994, 67.0, 88.0, 5.6000000000000005, 19.2, 37.1, 45.0]}
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/IQA_Test_All_Pros.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python3
"""
Main Script
"""

import sys
import os

import shutil
import argparse
import numpy as np
import torch
import torch.backends.cudnn as cudnn
from torch import nn

from model import IQANet
from dataset import TID2013Dataset, IQADataset, KADIDDataset
from utils import AverageMeter, SROCC, PLCC, RMSE
from utils import SimpleProgressBar as ProgressBar
from utils import MMD_loss
from VIDLoss import VIDLoss


def test(test_data_loader, model, txt_name=None):
    srocc = SROCC()
    plcc = PLCC()
    rmse = RMSE()
    len_test = len(test_data_loader)
    pb = ProgressBar(len_test, show_step=True)

    print("Testing")

    with open(txt_name, 'w') as f:
        model.eval()
        with torch.no_grad():
            for i, ((img, ref), score) in enumerate(test_data_loader):
                img = img.cuda()
                ref = ref.cuda()
                output = model(img, ref).cpu().data.numpy()
                score = score.data.numpy()
                f.write(str(np.around(score[0], 4)) + ' ' + str(np.around(output, 4)) + '\n')
                srocc.update(score, output)
                plcc.update(score, output)
                rmse.update(score, output)

                pb.show(i, "Test: [{0:5d}/{1:5d}]\t"
                        "Score: {2:.4f}\t"
                        "Label: {3:.4f}"
                        .format(i + 1, len_test, float(output), float(score)))

    print("\n\nSROCC: {0:.4f}\n"
          "PLCC: {1:.4f}\n"
          "RMSE: {2:.4f}"
          .format(srocc.compute(), plcc.compute(), rmse.compute())
          )


def test_iqa(args):
    batch_size = 1
    pro = args.pro
    num_workers = args.workers
    subset = args.subset
    data_dir = args.data_dir
    list_dir = args.list_dir
    resume = args.resume

    for k, v in args.__dict__.items():
        print(k, ':', v)

    model = IQANet(args.weighted)

    test_loader = torch.utils.data.DataLoader(
        Dataset(data_dir, phase='test_' + str(pro), list_dir=list_dir,
                n_ptchs=args.n_ptchs_per_img,
                subset=subset),
        batch_size=batch_size, shuffle=False,
        num_workers=num_workers, pin_memory=True
    )

    cudnn.benchmark = True

    # Resume from a checkpoint
    if resume:
        # Derive the per-split checkpoint name, e.g. model_best.pkl -> model_best_2.pkl
        resume = resume.split('t.')[0] + 't_' + str(pro) + '.pkl'
        if os.path.isfile(resume):
            print("=> loading checkpoint '{}'".format(resume))
            checkpoint = torch.load(resume)
            model.load_state_dict(checkpoint['state_dict'])
            print("=> loaded checkpoint '{}' (epoch {})"
                  .format(resume, checkpoint['epoch']))
        else:
            print("=> no checkpoint found at '{}'".format(resume))

    txt_name = 'setting' + str(pro) + '.txt'
    test(test_loader, model.cuda(), txt_name)


def parse_args():
    # Training settings
    parser = argparse.ArgumentParser(description='')
    parser.add_argument('-cmd', type=str, default='test')
    parser.add_argument('-d', '--data-dir', default='../../../datasets/tid2013/')
    parser.add_argument('-l', '--list-dir', default='../scripts/tid2013/',
                        help='List dir to look for train_0_data.json etc. '
                             'It is the same as --data-dir if not set.')
    parser.add_argument('-n', '--n-ptchs-per-img', type=int, default=1024, metavar='N',
                        help='number of patches for each image (default: 1024)')
    parser.add_argument('--step', type=int, default=200)
    parser.add_argument('--batch-size', type=int, default=32, metavar='B',
                        help='input batch size for training (default: 32)')
    parser.add_argument('--epochs', type=int, default=1000, metavar='NE',
                        help='number of epochs to train (default: 1000)')
    parser.add_argument('--lr', type=float, default=1e-4, metavar='LR',
                        help='learning rate (default: 1e-4)')
    parser.add_argument('--lr-mode', type=str, default='const')
    parser.add_argument('--weight-decay', default=1e-4, type=float,
                        metavar='W', help='weight decay (default: 1e-4)')
    parser.add_argument('--resume', default='../models/tid2013/model_best.pkl', type=str, metavar='PATH',
                        help='path to latest checkpoint')
    parser.add_argument('--workers', type=int, default=8)
    parser.add_argument('--pro', type=int, default=2)
    parser.add_argument('--subset', default='test')
    parser.add_argument('--evaluate', dest='evaluate',
                        action='store_true',
                        help='evaluate model on validation set')
    parser.add_argument('--weighted', default=True, dest='weighted')
    parser.add_argument('--dump_per', type=int, default=50,
                        help='the number of epochs to make a checkpoint')
    parser.add_argument('--dataset', type=str, default='TID2013')
    parser.add_argument('--anew', action='store_true')

    args = parser.parse_args()

    return args


def main():
    args = parse_args()
    # Choose dataset class by name, e.g. 'TID2013' -> TID2013Dataset
    global Dataset
    Dataset = globals().get(args.dataset + 'Dataset', None)

    # Run all ten train/val/test splits in turn.
    # Note: train_iqa is defined in iqaScrach.py, not in this file;
    # run this script with -cmd test (the default).
    for expid in range(10):
        args.pro = expid
        if args.cmd == 'train':
            train_iqa(args)
        elif args.cmd == 'test':
            test_iqa(args)


if __name__ == '__main__':
    main()
--------------------------------------------------------------------------------
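The per-split checkpoint naming in test_iqa() above is easy to misread, so here is a minimal sketch of what that one string operation does (illustration only, not a file from the repo):

resume = '../models/tid2013/model_best.pkl'
pro = 2
print(resume.split('t.')[0] + 't_' + str(pro) + '.pkl')
# -> ../models/tid2013/model_best_2.pkl
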
/FPR_IQA/FPR_NI/src/Inv_arch.py:
--------------------------------------------------------------------------------
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from Subnet_constructor import DenseBlock, DenseBlock1X1
from torch.autograd import Variable


class InvBlockExp(nn.Module):
    """An invertible affine coupling block operating on a pair of feature tensors."""
    def __init__(self, split_len1, split_len2, clamp=1.0, Use1x1=False):
        super(InvBlockExp, self).__init__()

        self.split_len1 = split_len1
        self.split_len2 = split_len2

        self.clamp = clamp

        if not Use1x1:
            self.F = DenseBlock(self.split_len2, self.split_len1)
            self.G = DenseBlock(self.split_len1, self.split_len2)
            self.H = DenseBlock(self.split_len1, self.split_len2)
        else:
            self.F = DenseBlock1X1(self.split_len2, self.split_len1)
            self.G = DenseBlock1X1(self.split_len1, self.split_len2)
            self.H = DenseBlock1X1(self.split_len1, self.split_len2)

    def forward(self, x1, x2, rev=False):
        if not rev:
            y1 = x1 + self.F(x2)
            self.s = self.clamp * (torch.sigmoid(self.H(y1)) * 2 - 1)
            y2 = x2.mul(torch.exp(self.s)) + self.G(y1)
        else:
            self.s = self.clamp * (torch.sigmoid(self.H(x1)) * 2 - 1)
            y2 = (x2 - self.G(x1)).div(torch.exp(self.s))
            y1 = x1 - self.F(y2)

        return y1, y2

    def jacobian(self, x, rev=False):
        if not rev:
            jac = torch.sum(self.s)
        else:
            jac = -torch.sum(self.s)

        return jac / x.shape[0]


class InvRescaleNet(nn.Module):
    def __init__(self, split_len1=32, split_len2=32, block_num=3, Use1x1=False):
        super(InvRescaleNet, self).__init__()
        operations = []
        for j in range(block_num):
            b = InvBlockExp(split_len1, split_len2, Use1x1=Use1x1)
            operations.append(b)
        self.operations = nn.ModuleList(operations)

    def forward(self, x1, x2, rev=False, cal_jacobian=False):
        out1 = x1
        out2 = x2
        jacobian = 0

        if not rev:
            for op in self.operations:
                out1, out2 = op.forward(out1, out2, rev)
                if cal_jacobian:
                    jacobian += op.jacobian(out1, rev)
        else:
            for op in reversed(self.operations):
                out1, out2 = op.forward(out1, out2, rev)
                if cal_jacobian:
                    jacobian += op.jacobian(out1, rev)

        if cal_jacobian:
            return out1, out2, jacobian
        else:
            return out1, out2


def test():
    net = InvRescaleNet(split_len1=32, split_len2=32, block_num=3)
    net.cuda()

    x1 = torch.randn(2, 32, 32, 32)
    x1 = Variable(x1.cuda())

    x2 = torch.randn(2, 32, 32, 32)
    x2 = Variable(x2.cuda())

    y1, y2 = net.forward(x1, x2)

    print(y1.shape, y2.shape)


if __name__ == '__main__':
    test()
--------------------------------------------------------------------------------
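InvBlockExp is an affine coupling layer, so InvRescaleNet is exactly invertible: running the same blocks with rev=True recovers the inputs. A minimal CPU check (illustration only, not a repo file; it assumes Inv_arch.py is on the import path):

import torch
from Inv_arch import InvRescaleNet

net = InvRescaleNet(split_len1=32, split_len2=32, block_num=3)
with torch.no_grad():
    x1, x2 = torch.randn(2, 32, 16, 16), torch.randn(2, 32, 16, 16)
    y1, y2 = net(x1, x2)              # forward pass through the coupling blocks
    r1, r2 = net(y1, y2, rev=True)    # reverse pass
    print(torch.allclose(r1, x1, atol=1e-4),
          torch.allclose(r2, x2, atol=1e-4))  # expect: True True (up to float error)
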
/FPR_IQA/FPR_NI/src/Subnet_constructor.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
import torch.nn.functional as F
import module_util as mutil
from torch.autograd import Variable


class DenseBlock(nn.Module):
    def __init__(self, channel_in, channel_out, init='xavier', gc=32, bias=True):
        super(DenseBlock, self).__init__()
        self.conv1 = nn.Conv2d(channel_in, gc, 3, 1, 1, bias=bias)
        self.conv2 = nn.Conv2d(channel_in + gc, gc, 3, 1, 1, bias=bias)
        self.conv3 = nn.Conv2d(channel_in + 2 * gc, gc, 3, 1, 1, bias=bias)
        self.conv4 = nn.Conv2d(channel_in + 3 * gc, gc, 3, 1, 1, bias=bias)
        self.conv5 = nn.Conv2d(channel_in + 4 * gc, channel_out, 3, 1, 1, bias=bias)
        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)

        if init == 'xavier':
            mutil.initialize_weights_xavier([self.conv1, self.conv2, self.conv3, self.conv4], 0.1)
        else:
            mutil.initialize_weights([self.conv1, self.conv2, self.conv3, self.conv4], 0.1)
        mutil.initialize_weights(self.conv5, 0)

    def forward(self, x):
        x1 = self.lrelu(self.conv1(x))
        x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
        x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
        x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
        x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))

        return x5


# An earlier densely connected 1x1 variant, kept commented out for reference:
# class DenseBlock1X1(nn.Module):
#     def __init__(self, channel_in, channel_out, init='xavier', gc=32, bias=True):
#         super(DenseBlock1X1, self).__init__()
#         self.conv1 = nn.Conv2d(channel_in, gc, 1, 1, 0, bias=bias)
#         self.conv2 = nn.Conv2d(channel_in + gc, gc, 1, 1, 0, bias=bias)
#         self.conv3 = nn.Conv2d(channel_in + 2 * gc, gc, 1, 1, 0, bias=bias)
#         self.conv4 = nn.Conv2d(channel_in + 3 * gc, gc, 1, 1, 0, bias=bias)
#         self.conv5 = nn.Conv2d(channel_in + 4 * gc, channel_out, 1, 1, 0, bias=bias)
#         self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
#
#         if init == 'xavier':
#             mutil.initialize_weights_xavier([self.conv1, self.conv2, self.conv3, self.conv4], 0.1)
#         else:
#             mutil.initialize_weights([self.conv1, self.conv2, self.conv3, self.conv4], 0.1)
#         mutil.initialize_weights(self.conv5, 0)
#
#     def forward(self, x):
#         x1 = self.lrelu(self.conv1(x))
#         x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
#         x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
#         x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
#         x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
#
#         return x5


class DenseBlock1X1(nn.Module):
    # Note: this active version ignores gc, and conv4 maps back to channel_in,
    # so the output channel count is always channel_in regardless of channel_out.
    def __init__(self, channel_in, channel_out, init='xavier', gc=32, bias=True):
        super(DenseBlock1X1, self).__init__()
        self.conv1 = nn.Conv2d(channel_in, channel_in * 2, 1, 1, 0, bias=bias)
        self.conv2 = nn.Conv2d(channel_in * 2, channel_in * 2, 1, 1, 0, bias=bias)
        self.conv3 = nn.Conv2d(channel_in * 2, channel_in * 2, 1, 1, 0, bias=bias)
        self.conv4 = nn.Conv2d(channel_in * 2, channel_in, 1, 1, 0, bias=bias)

        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)

        if init == 'xavier':
            mutil.initialize_weights_xavier([self.conv1, self.conv2, self.conv3, self.conv4], 0.1)
        else:
            mutil.initialize_weights([self.conv1, self.conv2, self.conv3, self.conv4], 0.1)

    def forward(self, x):
        x1 = self.lrelu(self.conv1(x))
        x2 = self.lrelu(self.conv2(x1))
        x3 = self.lrelu(self.conv3(x2))
        x4 = self.conv4(x3)

        return x4


def test():
    net = DenseBlock1X1(32, 32)
    net.cuda()

    x1 = torch.randn(1, 32, 9, 9)
    x1 = Variable(x1.cuda())

    y1 = net.forward(x1)

    print(y1.shape)


if __name__ == '__main__':
    test()
--------------------------------------------------------------------------------
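Both subnet constructors preserve spatial size; DenseBlock maps channel_in to an arbitrary channel_out, while the active DenseBlock1X1 always maps back to channel_in, so it should only be used with channel_out == channel_in. A quick CPU shape check (illustration only, not a repo file):

import torch
from Subnet_constructor import DenseBlock, DenseBlock1X1

blk = DenseBlock(channel_in=32, channel_out=16)
print(blk(torch.randn(1, 32, 64, 64)).shape)   # torch.Size([1, 16, 64, 64])

blk1x1 = DenseBlock1X1(32, 32)                 # output channels fixed to channel_in
print(blk1x1(torch.randn(1, 32, 9, 9)).shape)  # torch.Size([1, 32, 9, 9])
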
/FPR_IQA/FPR_NI/src/VIDLoss.py:
--------------------------------------------------------------------------------
from __future__ import print_function

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from torch.autograd import Variable


class VIDLoss(nn.Module):
    """Variational Information Distillation for Knowledge Transfer (CVPR 2019);
    code from the authors: https://github.com/ssahn0215/variational-information-distillation"""
    def __init__(self,
                 num_input_channels,
                 num_mid_channel,
                 num_target_channels,
                 init_pred_var=5.0,
                 eps=1e-5):
        super(VIDLoss, self).__init__()

        def conv1x1(in_channels, out_channels, stride=1):
            return nn.Conv2d(
                in_channels, out_channels,
                kernel_size=1, padding=0,
                bias=False, stride=stride)

        self.regressor = nn.Sequential(
            conv1x1(num_input_channels, num_mid_channel),
            nn.ReLU(),
            conv1x1(num_mid_channel, num_mid_channel),
            nn.ReLU(),
            conv1x1(num_mid_channel, num_target_channels),
        )
        # Softplus parameterization keeps the predicted variance positive;
        # log_scale is initialized so that softplus(log_scale) + eps == init_pred_var.
        self.log_scale = torch.nn.Parameter(
            np.log(np.exp(init_pred_var - eps) - 1.0) * torch.ones(num_target_channels)
        )
        self.eps = eps

    def forward(self, input, target):
        # Pool for dimension match
        s_H, t_H = input.shape[2], target.shape[2]
        if s_H > t_H:
            input = F.adaptive_avg_pool2d(input, (t_H, t_H))
        elif s_H < t_H:
            target = F.adaptive_avg_pool2d(target, (s_H, s_H))
        else:
            pass
        pred_mean = self.regressor(input)
        pred_var = torch.log(1.0 + torch.exp(self.log_scale)) + self.eps
        pred_var = pred_var.view(1, -1, 1, 1)
        neg_log_prob = 0.5 * (
            (pred_mean - target) ** 2 / pred_var + torch.log(pred_var)
        )
        loss = torch.mean(neg_log_prob)
        return loss


def test():
    x1 = torch.randn(2, 128, 4, 4)
    x1 = Variable(x1.cuda())
    net = VIDLoss(128, 256, 128)
    net.cuda()

    y1 = net.forward(x1, x1)

    print(y1)


if __name__ == '__main__':
    test()
--------------------------------------------------------------------------------
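VIDLoss regresses one feature map onto a Gaussian over another and minimizes the negative log-likelihood 0.5 * ((mu - t)^2 / sigma^2 + log sigma^2) per element. A hedged usage sketch (illustration only, not a repo file; the channel sizes are made up):

import torch
from VIDLoss import VIDLoss

vid = VIDLoss(num_input_channels=64, num_mid_channel=128, num_target_channels=128)
student = torch.randn(4, 64, 8, 8, requires_grad=True)  # e.g. a student feature map
teacher = torch.randn(4, 128, 8, 8)                     # matching teacher feature map
loss = vid(student, teacher)                            # scalar NLL, lower is better
loss.backward()                                         # trains regressor and log_scale
print(loss.item())
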
/FPR_IQA/FPR_NI/src/__pycache__/FRmodel.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/FRmodel.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/FRmodel3.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/FRmodel3.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/Inv_arch.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/Inv_arch.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/Inv_arch.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/Inv_arch.cpython-37.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/Inv_arch.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/Inv_arch.cpython-38.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/NRmodel.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/NRmodel.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/NRmodel3.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/NRmodel3.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/Subnet_constructor.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/Subnet_constructor.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/Subnet_constructor.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/Subnet_constructor.cpython-37.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/Subnet_constructor.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/Subnet_constructor.cpython-38.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/VIDLoss.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/VIDLoss.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/VIDLoss.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/VIDLoss.cpython-37.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/VIDLoss.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/VIDLoss.cpython-38.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/dataset.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/dataset.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/dataset.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/dataset.cpython-37.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/dataset.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/dataset.cpython-38.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/model.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/model.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/model.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/model.cpython-37.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/model.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/model.cpython-38.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/model2.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/model2.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/modelSplit.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/modelSplit.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/modelSplit2.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/modelSplit2.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/module_util.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/module_util.cpython-36.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/module_util.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/module_util.cpython-37.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/module_util.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/module_util.cpython-38.pyc
--------------------------------------------------------------------------------
/FPR_IQA/FPR_NI/src/__pycache__/utils.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/utils.cpython-36.pyc
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/utils.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_NI/src/__pycache__/utils.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/utils.cpython-37.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_NI/src/__pycache__/utils.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_NI/src/__pycache__/utils.cpython-38.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_NI/src/dataset.py: -------------------------------------------------------------------------------- 1 | """ 2 | Dataset and Transforms 3 | """ 4 | 5 | 6 | import torch.utils.data 7 | import numpy as np 8 | import random 9 | import json 10 | from skimage import io 11 | from os.path import join, exists 12 | from utils import limited_instances, SimpleProgressBar 13 | 14 | 15 | 16 | class IQADataset(torch.utils.data.Dataset): 17 | def __init__(self, data_dir, phase, n_ptchs=256, sample_once=False, subset='', list_dir=''): 18 | super(IQADataset, self).__init__() 19 | 20 | self.list_dir = data_dir if not list_dir else list_dir 21 | self.data_dir = data_dir 22 | self.phase = phase 23 | self.subset = phase if not subset.strip() else subset 24 | self.n_ptchs = n_ptchs 25 | self.img_list = [] 26 | self.ref_list = [] 27 | self.score_list = [] 28 | self.sample_once = sample_once 29 | self._from_pool = False 30 | 31 | self._read_lists() 32 | self._aug_lists() 33 | 34 | self.tfs = Transforms() 35 | if sample_once: 36 | @limited_instances(self.__len__()) 37 | class IncrementCache: 38 | def store(self, data): 39 | self.data = data 40 | 41 | self._pool = IncrementCache 42 | self._to_pool() 43 | self._from_pool = True 44 | 45 | def __getitem__(self, index): 46 | img = self._loader(self.img_list[index]) 47 | ref = self._loader(self.ref_list[index]) 48 | # print(img.shape) 49 | score = self.score_list[index] 50 | 51 | # print(img.shape) 52 | 53 | if self._from_pool: 54 | (img_ptchs, ref_ptchs) = self._pool(index).data 55 | else: 56 | if self.phase.split('_')[0] == 'train': 57 | img, ref = self.tfs.horizontal_flip(img, ref) 58 | img_ptchs, ref_ptchs = self._to_patch_tensors(img, ref) 59 | elif self.phase.split('_')[0] == 'val': 60 | img_ptchs, ref_ptchs = self._to_patch_tensors(img, ref) 61 | elif self.phase.split('_')[0] == 'test': 62 | img_ptchs, ref_ptchs = self._to_patch_tensors(img, ref) 63 | else: 64 | pass 65 | 66 | return (img_ptchs, ref_ptchs), torch.tensor(score).float() 67 | 68 | def __len__(self): 69 | return len(self.img_list) 70 | 71 | def _loader(self, name): 72 | # print(self.data_dir, name,join(self.data_dir, name)) 73 | return io.imread(join(self.data_dir, name)) 74 | 75 | def _to_patch_tensors(self, img, ref): 76 | img_ptchs, ref_ptchs = self.tfs.to_patches(img, ref, ptch_size=64, n_ptchs=self.n_ptchs) 77 | img_ptchs, ref_ptchs = self.tfs.to_tensor(img_ptchs, ref_ptchs) 78 | return img_ptchs, ref_ptchs 79 | 80 | def _to_pool(self): 81 | len_data = self.__len__() 82 | pb = SimpleProgressBar(len_data) 83 | print("\ninitializing data pool...") 84 | for index in 
range(len_data): 85 | self._pool(index).store(self.__getitem__(index)[0]) 86 | pb.show(index, "[{:d}]/[{:d}] ".format(index+1, len_data)) 87 | 88 | def _aug_lists(self): 89 | if self.phase.split('_')[0] == 'test': 90 | return 91 | # Make samples from the reference images 92 | # The number of the reference samples appears 93 | # CRITICAL for the training effect! 94 | len_aug = len(self.ref_list)//5 if self.phase.split('_')[0] == 'train' else 10 95 | aug_list = self.ref_list*(len_aug//len(self.ref_list)+1) 96 | random.shuffle(aug_list) 97 | aug_list = aug_list[:len_aug] 98 | self.img_list.extend(aug_list) 99 | self.score_list += [0.0]*len_aug 100 | self.ref_list.extend(aug_list) 101 | 102 | if self.phase.split('_')[0] == 'train': 103 | # More samples in one epoch 104 | # This accelerates the training indeed as the cache 105 | # of the file system could then be fully leveraged 106 | # And also, augment the data in terms of number 107 | mul_aug = 16 108 | self.img_list *= mul_aug 109 | self.ref_list *= mul_aug 110 | self.score_list *= mul_aug 111 | 112 | def _read_lists(self): 113 | img_path = join(self.list_dir, self.phase + '_data.json') 114 | 115 | assert exists(img_path) 116 | 117 | with open(img_path, 'r') as fp: 118 | data_dict = json.load(fp) 119 | 120 | self.img_list = data_dict['img'] 121 | self.ref_list = data_dict.get('ref', self.img_list) 122 | self.score_list = data_dict.get('score', [0.0]*len(self.img_list)) 123 | 124 | 125 | class TID2013Dataset(IQADataset): 126 | def _read_lists(self): 127 | super()._read_lists() 128 | # For TID2013 129 | self.score_list = [(9.0 - s) / 9.0 * 100.0 for s in self.score_list] 130 | 131 | 132 | class KADIDDataset(IQADataset): 133 | def _read_lists(self): 134 | super()._read_lists() 135 | self.score_list = [(5.0 - s) / 5.0 * 100.0 for s in self.score_list] 136 | 137 | #class LIVEDataset(IQADataset): 138 | # def _read_lists(self): 139 | # super()._read_lists() 140 | # self.score_list = [(1.0 - s) * 100.0 for s in self.score_list] 141 | 142 | class Transforms: 143 | """ 144 | Self-designed transformation class 145 | ------------------------------------ 146 | 147 | Several things to fix and improve: 148 | 1. Strong coupling with Dataset cuz transformation types can't 149 | be simply assigned in training or testing code. (e.g. given 150 | a list of transforms as parameters to construct Dataset Obj) 151 | 2. Might be unsafe in multi-thread cases 152 | 3. Too complex decorators, not pythonic 153 | 4. The number of params of the wrapper and the inner function should 154 | be the same to avoid confusion 155 | 5. The use of params and isinstance() is not so elegant. For this, 156 | consider to stipulate a fix number and type of returned values for 157 | inner tf functions and do all the forwarding and passing work inside 158 | the decorator. tf_func applies transformation, which is all it does. 159 | 6. Performance has not been optimized at all 160 | 7. Doc it 161 | 8. 
Supports only numpy arrays 162 | """ 163 | def __init__(self): 164 | super(Transforms, self).__init__() 165 | 166 | def _pair_deco(tf_func): 167 | def transform(self, img, ref=None, *args, **kwargs): 168 | """ image shape (w, h, c) """ 169 | if (ref is not None) and (not isinstance(ref, np.ndarray)): 170 | args = (ref,)+args 171 | ref = None 172 | ret = tf_func(self, img, None, *args, **kwargs) 173 | assert(len(ret) == 2) 174 | if ref is None: 175 | return ret[0] 176 | else: 177 | num_var = tf_func.__code__.co_argcount-3 # self, img, ref not counted 178 | if (len(args)+len(kwargs)) == (num_var-1): 179 | # The last parameter is special 180 | # Resend it if necessary 181 | var_name = tf_func.__code__.co_varnames[-1] 182 | kwargs[var_name] = ret[1] 183 | tf_ref, _ = tf_func(self, ref, None, *args, **kwargs) 184 | return ret[0], tf_ref 185 | return transform 186 | 187 | def _horizontal_flip(self, img, flip): 188 | if flip is None: 189 | flip = (random.random() > 0.5) 190 | return (img[...,::-1,:] if flip else img), flip 191 | 192 | def _to_tensor(self, img): 193 | return torch.from_numpy((img.astype(np.float32)/255).swapaxes(-3,-2).swapaxes(-3,-1)), () 194 | 195 | def _crop_square(self, img, crop_size, pos): 196 | if pos is None: 197 | h, w = img.shape[-3:-1] 198 | assert(crop_size <= h and crop_size <= w) 199 | ub = random.randint(0, h-crop_size) 200 | lb = random.randint(0, w-crop_size) 201 | pos = (ub, ub+crop_size, lb, lb+crop_size) 202 | return img[...,pos[0]:pos[1],pos[-2]:pos[-1],:], pos 203 | 204 | def _extract_patches(self, img, ptch_size): 205 | # Crop non-overlapping patches as the stride equals patch size 206 | h, w = img.shape[-3:-1] 207 | nh, nw = h//ptch_size, w//ptch_size 208 | assert(nh>0 and nw>0) 209 | vptchs = np.stack(np.split(img[...,:nh*ptch_size,:,:], nh, axis=-3)) 210 | ptchs = np.concatenate(np.split(vptchs[...,:nw*ptch_size,:], nw, axis=-2)) 211 | return ptchs, nh*nw 212 | 213 | def _to_patches(self, img, ptch_size, n_ptchs, idx): 214 | ptchs, n = self._extract_patches(img, ptch_size) 215 | if not n_ptchs: 216 | n_ptchs = n 217 | elif n_ptchs > n: 218 | n_ptchs = n 219 | if idx is None: 220 | idx = list(range(n)) 221 | random.shuffle(idx) 222 | idx = idx[:n_ptchs] 223 | return ptchs[idx], idx 224 | 225 | @_pair_deco 226 | def horizontal_flip(self, img, ref=None, flip=None): 227 | return self._horizontal_flip(img, flip=flip) 228 | 229 | @_pair_deco 230 | def to_tensor(self, img, ref=None): 231 | return self._to_tensor(img) 232 | 233 | @_pair_deco 234 | def crop_square(self, img, ref=None, crop_size=64, pos=None): 235 | return self._crop_square(img, crop_size=crop_size, pos=pos) 236 | 237 | @_pair_deco 238 | def to_patches(self, img, ref=None, ptch_size=64, n_ptchs=None, idx=None): 239 | return self._to_patches(img, ptch_size=ptch_size, n_ptchs=n_ptchs, idx=idx) 240 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_NI/src/iqaScrach.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ 3 | Main Script 4 | """ 5 | 6 | import sys 7 | import os 8 | 9 | import shutil 10 | import argparse 11 | 12 | import torch 13 | import torch.backends.cudnn as cudnn 14 | from torch import nn 15 | 16 | from model import IQANet 17 | from dataset import TID2013Dataset, IQADataset, KADIDDataset 18 | from utils import AverageMeter, SROCC, PLCC, RMSE 19 | from utils import SimpleProgressBar as ProgressBar 20 | from utils import MMD_loss 21 | from VIDLoss import VIDLoss 22 | 
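# --- Editorial sketch (not part of the original source) ---
# train() below optimizes a weighted sum of three L1 regression terms and two
# triplet terms. With the tensors produced by IQANet in this repo, the
# composition it builds is equivalent to:
#
#   criterion = nn.L1Loss()
#   trip_loss = nn.TripletMarginLoss(margin=0.5, p=2.0)
#   loss = (criterion(FS, score)               # full-reference score branch
#           + criterion(NFake_FS, score)       # score from recovered (fake) FR features
#           + criterion(NS, score)             # plain no-reference score branch
#           + 20 * trip_loss(f1, fake_f1, f2)  # draw fake_f1 toward f1, away from f2
#           + 20 * trip_loss(f2, fake_f2, f1)) # draw fake_f2 toward f2, away from f1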
#f=open("log.txt","a") 23 | #ftmp=sys.stdout 24 | #sys.stdout=f 25 | 26 | def validate(val_loader, model, criterion, show_step=False): 27 | losses = AverageMeter() 28 | srocc = SROCC() 29 | len_val = len(val_loader) 30 | pb = ProgressBar(len_val, show_step=show_step) 31 | 32 | print("Validation") 33 | 34 | # Switch to evaluate mode 35 | model.eval() 36 | 37 | with torch.no_grad(): 38 | for i, ((img,ref), score) in enumerate(val_loader): 39 | img, ref, score = img.cuda(), ref.cuda(), score.squeeze().cuda() 40 | 41 | # Compute output 42 | _,_,output,_,_,_,_ = model(img, img) 43 | 44 | loss = criterion(output, score) 45 | losses.update(loss, img.shape[0]) 46 | 47 | output = output.cpu() 48 | score = score.cpu() 49 | srocc.update(score.numpy(), output.numpy()) 50 | 51 | pb.show(i, "[{0:d}/{1:d}]\t" 52 | "Loss {loss.val:.4f} ({loss.avg:.4f})\t" 53 | "Output {out:.4f}\t" 54 | "Target {tar:.4f}\t" 55 | .format(i+1, len_val, loss=losses, 56 | out=output, tar=score)) 57 | 58 | 59 | return float(1.0-srocc.compute()) # losses.avg 60 | 61 | 62 | def train(train_loader, model, criterion, optimizer, epoch): 63 | losses1 = AverageMeter() 64 | losses2 = AverageMeter() 65 | losses3 = AverageMeter() 66 | losses4 = AverageMeter() 67 | losses5 = AverageMeter() 68 | # losses6 = AverageMeter() 69 | len_train = len(train_loader) 70 | pb = ProgressBar(len_train) 71 | 72 | print("Training") 73 | 74 | # Switch to train mode 75 | model.train() 76 | criterion.cuda() 77 | # vidloss = VIDLoss(128,256,128).cuda() 78 | trip_loss = nn.TripletMarginLoss(margin=0.5, p=2.0).cuda() 79 | # triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2).cuda() 80 | for i, ((img,ref), score) in enumerate(train_loader): 81 | img, ref, score = img.cuda(), ref.cuda(), score.cuda() 82 | # Compute output 83 | FS,NFake_FS,NS,f1,f2,fake_f1, fake_f2 = model(img, ref) 84 | 85 | loss1 = criterion(FS, score) 86 | loss2 = criterion(NFake_FS, score) 87 | loss5 = criterion(NS, score) 88 | loss3 = 20*trip_loss(f1,fake_f1,f2) 89 | loss4 = 20*trip_loss(f2,fake_f2,f1).sum() 90 | 91 | 92 | loss = loss1+loss2+loss3+loss4+loss5 93 | # Measure accuracy and record loss 94 | losses1.update(loss1.data, img.shape[0]) 95 | losses2.update(loss2.data, img.shape[0]) 96 | losses3.update(loss3.data, img.shape[0]) 97 | losses4.update(loss4.data, img.shape[0]) 98 | losses5.update(loss5.data, img.shape[0]) 99 | 100 | # Compute gradient and do SGD step 101 | optimizer.zero_grad() 102 | loss.backward() 103 | optimizer.step() 104 | 105 | pb.show(i, "[{0:d}/{1:d}]\t" 106 | "FR {loss1.val:.4f} ({loss1.avg:.4f})\t" 107 | "NR {loss2.val:.4f} ({loss2.avg:.4f})\t" 108 | "NS {loss5.val:.4f} ({loss5.avg:.4f})\t" 109 | "fco {loss3.val:.4f} ({loss3.avg:.4f})\t" 110 | "sf {loss4.val:.4f} ({loss4.avg:.4f})\t" 111 | 112 | .format(i+1, len_train, loss1=losses1, loss2=losses2, \ 113 | loss5=losses5,loss3=losses3,loss4=losses4)) 114 | 115 | 116 | 117 | def train_iqa(args): 118 | pro = args.pro 119 | batch_size = args.batch_size 120 | num_workers = args.workers 121 | data_dir = args.data_dir 122 | list_dir = args.list_dir 123 | resume = args.resume 124 | n_ptchs = args.n_ptchs_per_img 125 | 126 | print(' '.join(sys.argv)) 127 | 128 | for k, v in args.__dict__.items(): 129 | print(k, ':', v) 130 | 131 | model = IQANet(args.weighted,istrain=True) 132 | criterion = nn.L1Loss() 133 | 134 | # Data loaders 135 | train_loader = torch.utils.data.DataLoader( 136 | Dataset(data_dir, 'train_'+str(pro), list_dir=list_dir, 137 | n_ptchs=n_ptchs), 138 | batch_size=batch_size, shuffle=True, 
num_workers=num_workers, 139 | pin_memory=True, drop_last=True 140 | ) 141 | val_loader = torch.utils.data.DataLoader( 142 | Dataset(data_dir, 'val_'+str(pro), list_dir=list_dir, 143 | n_ptchs=n_ptchs, sample_once=True), 144 | batch_size=1, shuffle=False, num_workers=0, 145 | pin_memory=True 146 | ) 147 | 148 | optimizer = torch.optim.Adam(model.parameters(), 149 | lr=args.lr, 150 | betas=(0.9, 0.999), 151 | weight_decay=args.weight_decay) 152 | 153 | cudnn.benchmark = True 154 | min_loss = 100.0 155 | start_epoch = 0 156 | 157 | # Resume from a checkpoint 158 | if resume: 159 | resume = resume.split('t.')[0]+'t_'+str(pro)+'.pkl' 160 | if os.path.isfile(resume): 161 | print("=> loading checkpoint '{}'".format(resume)) 162 | checkpoint = torch.load(resume) 163 | start_epoch = checkpoint['epoch'] 164 | if not args.anew: 165 | min_loss = checkpoint['min_loss'] 166 | model.load_state_dict(checkpoint['state_dict']) 167 | print("=> loaded checkpoint '{}' (epoch {})" 168 | .format(resume, start_epoch)) 169 | else: 170 | print("=> no checkpoint found at '{}'".format(resume)) 171 | 172 | if args.evaluate: 173 | validate(val_loader, model.cuda(), criterion, show_step=True) 174 | return 175 | 176 | for epoch in range(start_epoch, args.epochs): 177 | lr = adjust_learning_rate(args, optimizer, epoch) 178 | print("\nEpoch: [{0}]\tlr {1:.06f}".format(epoch, lr)) 179 | # Train for one epoch 180 | train(train_loader, model.cuda(), criterion, optimizer, epoch) 181 | 182 | if epoch % 1 == 0: 183 | # Evaluate on validation set 184 | loss = validate(val_loader, model.cuda(), criterion) 185 | 186 | is_best = loss < min_loss 187 | min_loss = min(loss, min_loss) 188 | print("Current: {:.6f}\tBest: {:.6f}\t".format(loss, min_loss)) 189 | checkpoint_path = '../models/'+ resume.split('/')[2]+'/checkpoint_latest_'+str(pro)+'.pkl' 190 | save_checkpoint({ 191 | 'epoch': epoch + 1, 192 | 'state_dict': model.state_dict(), 193 | 'min_loss': min_loss, 194 | }, is_best, filename=checkpoint_path,pro =args.pro, res = resume ) 195 | 196 | # if epoch % args.dump_per == 0: 197 | # history_path = '../models/'+ resume.split('/')[2]+'/checkpoint_{:03d}_'.format(epoch+1)+str(pro)+'.pkl' 198 | # shutil.copyfile(checkpoint_path, history_path) 199 | 200 | 201 | def adjust_learning_rate(args, optimizer, epoch): 202 | """ 203 | Sets the learning rate 204 | """ 205 | if args.lr_mode == 'step': 206 | lr = args.lr * (0.5 ** (epoch // args.step)) 207 | elif args.lr_mode == 'poly': 208 | lr = args.lr * (1 - epoch / args.epochs) ** 1.1 209 | elif args.lr_mode == 'const': 210 | lr = args.lr 211 | else: 212 | raise ValueError('Unknown lr mode {}'.format(args.lr_mode)) 213 | 214 | for param_group in optimizer.param_groups: 215 | param_group['lr'] = lr 216 | return lr 217 | 218 | def save_checkpoint(state, is_best, filename='checkpoint.pkl',pro = '0',res='./script'): 219 | torch.save(state, filename) 220 | if is_best: 221 | shutil.copyfile(filename, '../models/'+ res.split('/')[2]+'/model_best_'+str(pro)+'.pkl') 222 | 223 | 224 | def parse_args(): 225 | # Training settings 226 | parser = argparse.ArgumentParser(description='') 227 | parser.add_argument('-cmd', type=str,default='train') 228 | parser.add_argument('-d', '--data-dir', default='../../../datasets/tid2013/') 229 | parser.add_argument('-l', '--list-dir', default='../scripts/tid2013/', 230 | help='List dir to look for train_images.txt etc. 
' 231 | 'It is the same with --data-dir if not set.') 232 | parser.add_argument('-n', '--n-ptchs-per-img', type=int, default=8, metavar='N', 233 | help='number of patches for each image (default: 32)') 234 | parser.add_argument('--step', type=int, default=200) 235 | parser.add_argument('--batch-size', type=int, default=32, metavar='B', 236 | help='input batch size for training (default: 64)') 237 | parser.add_argument('--epochs', type=int, default=1000, metavar='NE', 238 | help='number of epochs to train (default: 1000)') 239 | parser.add_argument('--lr', type=float, default=1e-4, metavar='LR', 240 | help='learning rate (default: 1e-4)') 241 | parser.add_argument('--lr-mode', type=str, default='const') 242 | parser.add_argument('--weight-decay', default=1e-4, type=float, 243 | metavar='W', help='weight decay (default: 1e-4)') 244 | parser.add_argument('--resume', default='../models/tid2013/checkpoint_latest.pkl', type=str, metavar='PATH', 245 | help='path to latest checkpoint') 246 | parser.add_argument('--pro', type=int, default=2) 247 | parser.add_argument('--workers', type=int, default=8) 248 | parser.add_argument('--subset', default='test') 249 | parser.add_argument('--evaluate', dest='evaluate', 250 | action='store_true', 251 | help='evaluate model on validation set') 252 | parser.add_argument('--weighted',default=True, dest='weighted') 253 | parser.add_argument('--dump_per', type=int, default=50, 254 | help='the number of epochs to make a checkpoint') 255 | parser.add_argument('--dataset', type=str, default='TID2013') 256 | parser.add_argument('--anew', action='store_true') 257 | 258 | args = parser.parse_args() 259 | 260 | return args 261 | 262 | 263 | def main(): 264 | args = parse_args() 265 | # Choose dataset 266 | global Dataset 267 | Dataset = globals().get(args.dataset+'Dataset', None) 268 | if args.cmd == 'train': 269 | train_iqa(args) 270 | elif args.cmd == 'test': 271 | test_iqa(args) 272 | 273 | 274 | if __name__ == '__main__': 275 | main() 276 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_NI/src/iqaTest.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ 3 | Main Script 4 | """ 5 | 6 | import sys 7 | import os 8 | 9 | import shutil 10 | import argparse 11 | 12 | import torch 13 | import torch.backends.cudnn as cudnn 14 | from torch import nn 15 | 16 | from model import IQANet 17 | from dataset import TID2013Dataset, IQADataset, KADIDDataset 18 | from utils import AverageMeter, SROCC, PLCC, RMSE 19 | from utils import SimpleProgressBar as ProgressBar 20 | from utils import MMD_loss 21 | from VIDLoss import VIDLoss 22 | 23 | 24 | 25 | def test(test_data_loader, model): 26 | srocc = SROCC() 27 | plcc = PLCC() 28 | rmse = RMSE() 29 | len_test = len(test_data_loader) 30 | pb = ProgressBar(len_test, show_step=True) 31 | 32 | print("Testing") 33 | 34 | model.eval() 35 | with torch.no_grad(): 36 | for i, ((img, ref), score) in enumerate(test_data_loader): 37 | img = img.cuda() 38 | ref = ref.cuda() 39 | output = model(img, img).cpu().data.numpy() 40 | score = score.data.numpy() 41 | 42 | srocc.update(score, output) 43 | plcc.update(score, output) 44 | rmse.update(score, output) 45 | 46 | pb.show(i, "Test: [{0:5d}/{1:5d}]\t" 47 | "Score: {2:.4f}\t" 48 | "Label: {3:.4f}" 49 | .format(i+1, len_test, float(output), float(score))) 50 | 51 | print("\n\nSROCC: {0:.4f}\n" 52 | "PLCC: {1:.4f}\n" 53 | "RMSE: {2:.4f}" 54 | .format(srocc.compute(), plcc.compute(), 
rmse.compute()) 55 | ) 56 | 57 | 58 | 59 | def test_iqa(args): 60 | batch_size = 1 61 | pro = args.pro 62 | num_workers = args.workers 63 | subset = args.subset 64 | data_dir = args.data_dir 65 | list_dir = args.list_dir 66 | resume = args.resume 67 | 68 | for k, v in args.__dict__.items(): 69 | print(k, ':', v) 70 | 71 | model = IQANet(args.weighted) 72 | 73 | test_loader = torch.utils.data.DataLoader( 74 | Dataset(data_dir, phase='test_'+str(pro), list_dir=list_dir, 75 | n_ptchs=args.n_ptchs_per_img, 76 | subset=subset), 77 | batch_size=batch_size, shuffle=False, 78 | num_workers=num_workers, pin_memory=True 79 | ) 80 | 81 | cudnn.benchmark = True 82 | 83 | if resume: 84 | resume = resume.split('t.')[0]+'t_'+str(pro)+'.pkl' 85 | if os.path.isfile(resume): 86 | print("=> loading checkpoint '{}'".format(resume)) 87 | checkpoint = torch.load(resume) 88 | model.load_state_dict(checkpoint['state_dict']) 89 | print("=> loaded checkpoint '{}' (epoch {})" 90 | .format(resume, checkpoint['epoch'])) 91 | else: 92 | print("=> no checkpoint found at '{}'".format(resume)) 93 | 94 | test(test_loader, model.cuda()) 95 | 96 | 97 | 98 | def parse_args(): 99 | # Training settings 100 | parser = argparse.ArgumentParser(description='') 101 | parser.add_argument('-cmd', type=str,default='test') 102 | parser.add_argument('-d', '--data-dir', default='../../../datasets/tid2013/') 103 | parser.add_argument('-l', '--list-dir', default='../scripts/tid2013/', 104 | help='List dir to look for train_images.txt etc. ' 105 | 'It is the same with --data-dir if not set.') 106 | parser.add_argument('-n', '--n-ptchs-per-img', type=int, default=1024, metavar='N', 107 | help='number of patches for each image (default: 32)') 108 | parser.add_argument('--step', type=int, default=200) 109 | parser.add_argument('--batch-size', type=int, default=32, metavar='B', 110 | help='input batch size for training (default: 64)') 111 | parser.add_argument('--epochs', type=int, default=1000, metavar='NE', 112 | help='number of epochs to train (default: 1000)') 113 | parser.add_argument('--lr', type=float, default=1e-4, metavar='LR', 114 | help='learning rate (default: 1e-4)') 115 | parser.add_argument('--lr-mode', type=str, default='const') 116 | parser.add_argument('--weight-decay', default=1e-4, type=float, 117 | metavar='W', help='weight decay (default: 1e-4)') 118 | parser.add_argument('--resume', default='../models/tid2013/model_best.pkl',type=str, metavar='PATH', 119 | help='path to latest checkpoint') 120 | parser.add_argument('--workers', type=int, default=8) 121 | parser.add_argument('--pro', type=int, default=2) 122 | parser.add_argument('--subset', default='test') 123 | parser.add_argument('--evaluate', dest='evaluate', 124 | action='store_true', 125 | help='evaluate model on validation set') 126 | parser.add_argument('--weighted',default=True, dest='weighted') 127 | parser.add_argument('--dump_per', type=int, default=50, 128 | help='the number of epochs to make a checkpoint') 129 | parser.add_argument('--dataset', type=str, default='TID2013') 130 | parser.add_argument('--anew', action='store_true') 131 | 132 | args = parser.parse_args() 133 | 134 | return args 135 | 136 | 137 | def main(): 138 | args = parse_args() 139 | # Choose dataset 140 | global Dataset 141 | Dataset = globals().get(args.dataset+'Dataset', None) 142 | if args.cmd == 'train': 143 | train_iqa(args) 144 | elif args.cmd == 'test': 145 | test_iqa(args) 146 | 147 | 148 | if __name__ == '__main__': 149 | main() 150 | 
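A minimal sketch of driving the test path above by hand, mirroring test_iqa(); this is a hedged reconstruction rather than an official entry point, and the checkpoint path, protocol index 2, and dataset locations are placeholders taken from the parse_args() defaults:

import torch
from model import IQANet
from dataset import TID2013Dataset

model = IQANet(weighted=True).cuda()
ckpt = torch.load('../models/tid2013/model_best_2.pkl')  # pro = 2
model.load_state_dict(ckpt['state_dict'])
model.eval()

loader = torch.utils.data.DataLoader(
    TID2013Dataset('../../../datasets/tid2013/', phase='test_2',
                   list_dir='../scripts/tid2013/', n_ptchs=1024),
    batch_size=1, shuffle=False)

with torch.no_grad():
    for (img, ref), score in loader:
        # In evaluation mode IQANet returns only the no-reference score NS;
        # note the distorted image is fed to both inputs, as in test() above.
        pred = model(img.cuda(), img.cuda())
        print(float(pred), float(score))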
-------------------------------------------------------------------------------- /FPR_IQA/FPR_NI/src/model.py: -------------------------------------------------------------------------------- 1 | """ 2 | The CNN Model for FR-IQA 3 | ------------------------- 4 | 5 | KVASS Tastes good! 6 | """ 7 | 8 | import math 9 | import torch 10 | import torch.nn as nn 11 | from torch.autograd import Variable 12 | from Inv_arch import InvRescaleNet 13 | 14 | class Conv3x3(nn.Module): 15 | def __init__(self, in_dim, out_dim): 16 | super(Conv3x3, self).__init__() 17 | self.conv = nn.Sequential( 18 | nn.Conv2d(in_dim, out_dim, kernel_size=3, stride=(1,1), padding=(1,1), bias=True), 19 | nn.LeakyReLU(0.2, inplace=True) 20 | ) 21 | 22 | def forward(self, x): 23 | return self.conv(x) 24 | 25 | class MaxPool2x2(nn.Module): 26 | def __init__(self): 27 | super(MaxPool2x2, self).__init__() 28 | self.pool = nn.MaxPool2d(kernel_size=2, stride=(2,2), padding=(0,0)) 29 | 30 | def forward(self, x): 31 | return self.pool(x) 32 | 33 | class DoubleConv(nn.Module): 34 | """ 35 | Double convolution as a basic block for the net 36 | 37 | Actually this is from a VGG16 block 38 | """ 39 | def __init__(self, in_dim, out_dim,ispool = True): 40 | super(DoubleConv, self).__init__() 41 | self.conv1 = Conv3x3(in_dim, out_dim) 42 | self.conv2 = Conv3x3(out_dim, out_dim) 43 | self.pool = MaxPool2x2() 44 | self.ispool = ispool 45 | 46 | def forward(self, x): 47 | y = self.conv1(x) 48 | y = self.conv2(y) 49 | if self.ispool: 50 | y = self.pool(y) 51 | return y 52 | 53 | class SingleConv(nn.Module): 54 | def __init__(self, in_dim, out_dim): 55 | super(SingleConv, self).__init__() 56 | self.conv = Conv3x3(in_dim, out_dim) 57 | self.pool = MaxPool2x2() 58 | 59 | def forward(self, x): 60 | y = self.conv(x) 61 | y = self.pool(y) 62 | return y 63 | 64 | 65 | class IQANet(nn.Module): 66 | """ 67 | The CNN model for full-reference image quality assessment 68 | 69 | Implements a siamese network at first and then there is regression 70 | """ 71 | def __init__(self, weighted=False,istrain=False,scale=4,\ 72 | block_num =3,channel_input=256): 73 | super(IQANet, self).__init__() 74 | 75 | self.weighted = weighted 76 | self.istrain = istrain 77 | self.scale = scale 78 | 79 | # Feature extraction layers 80 | self.fl1 = DoubleConv(3, 64) 81 | self.fl2 = DoubleConv(64, 128) 82 | self.fl3 = DoubleConv(128, 256) 83 | 84 | 85 | self.sfl1 = DoubleConv(3, 32*self.scale) 86 | self.sfl21 = DoubleConv(32*self.scale, 64*self.scale,ispool = False) 87 | self.sfl22 = DoubleConv(64*self.scale, 64*self.scale) 88 | self.sfl23 = DoubleConv(64*self.scale, 64*self.scale,ispool = False) 89 | self.sfl3 = DoubleConv(64*self.scale, 128*4) 90 | 91 | self.InvRescaleNet = InvRescaleNet(split_len1=channel_input, \ 92 | split_len2=channel_input, \ 93 | block_num=block_num,\ 94 | Use1x1 = True) 95 | # Fusion layers 96 | self.cl1 = SingleConv(256*2, 128) 97 | self.cl2 = nn.Conv2d(128, 64, kernel_size=3) 98 | 99 | # Regression layers 100 | self.rl1 = nn.Linear(256, 32) 101 | self.rl2 = nn.Linear(32, 1) 102 | 103 | 104 | # Fusion layers 105 | self.scl1 = SingleConv(512, 128) 106 | self.scl2 = nn.Conv2d(128, 64, kernel_size=3) 107 | 108 | # Regression layers 109 | self.srl1 = nn.Linear(256, 32) 110 | self.srl2 = nn.Linear(32, 1) 111 | 112 | self.gn=torch.nn.GroupNorm(num_channels=256,num_groups=64) 113 | 114 | if self.weighted: 115 | self.wl1 = nn.GRU(256, 32, batch_first=True) 116 | self.wl2 = nn.Linear(32, 1) 117 | 118 | self.swl1 = nn.GRU(256, 32, batch_first=True) 119 | self.swl2 
= nn.Linear(32, 1) 120 | 121 | 122 | self._initialize_weights() 123 | 124 | def _get_initial_state(self, batch_size): 125 | h0 = torch.zeros(1, batch_size, 32,device=0) 126 | return h0 127 | 128 | def extract_feature(self, x): 129 | """ Forward function for feature extraction of each branch of the siamese net """ 130 | y = self.fl1(x) 131 | y = self.fl2(y) 132 | y = self.fl3(y) 133 | # y = self.gn(y) 134 | 135 | return y 136 | 137 | def NR_extract_feature(self, x): 138 | """ Forward function for feature extraction of each branch of the siamese net """ 139 | y = self.sfl1(x) 140 | y = self.sfl21(y) 141 | y = self.sfl22(y) 142 | y = self.sfl23(y) 143 | y = self.sfl3(y) 144 | y1,y2 = torch.split(y, int(y.shape[1]/2), dim=1) 145 | 146 | 147 | return y1,y2 148 | 149 | def gaussian_batch(self, dims,scale=1): 150 | lenth = dims[0]*dims[1]*dims[2]*dims[3] 151 | inv = torch.normal(mean=0, std=0.5*torch.ones(lenth)).cuda() 152 | return inv.view_as(torch.Tensor(dims[0],dims[1],dims[2],dims[3])) 153 | 154 | 155 | def forward(self, x1, x2): 156 | """ x1 as distorted and x2 as reference """ 157 | n_imgs, n_ptchs_per_img = x1.shape[0:2] 158 | 159 | 160 | # Reshape 161 | x1 = x1.view(-1,*x1.shape[-3:]) 162 | x2 = x2.view(-1,*x2.shape[-3:]) 163 | 164 | f1 = self.extract_feature(x1) 165 | f2 = self.extract_feature(x2) 166 | sf1,sf2 = self.NR_extract_feature(x1) 167 | fake_f1, fake_f2 = self.InvRescaleNet(sf1,sf2) 168 | 169 | ini_f_com = torch.cat([f2, f1], dim=1) 170 | fake_f_com = torch.cat([fake_f2, fake_f1], dim=1) 171 | f_com = torch.cat([ini_f_com,fake_f_com], dim=0) 172 | 173 | f_com = self.cl1(f_com) 174 | f_com = self.cl2(f_com) 175 | flatten = f_com.view(f_com.shape[0], -1) 176 | y = self.rl1(flatten) 177 | y = self.rl2(y) 178 | y1,y2 = torch.split(y, int(y.shape[0]/2), dim=0) 179 | 180 | fake_sf1,fake_sf2 = self.InvRescaleNet(f1,f2, rev=True) 181 | sf = torch.cat((sf1,sf2),1) 182 | 183 | 184 | NF_com = self.scl1(sf) 185 | NF_com = self.scl2(NF_com) 186 | Nflatten = NF_com.view(NF_com.shape[0], -1) 187 | Ny = self.srl1(Nflatten) 188 | Ny = self.srl2(Ny) 189 | 190 | if self.weighted: 191 | # print('use weighted') 192 | flatten = flatten.view(2*n_imgs, n_ptchs_per_img,-1) 193 | w,_ = self.wl1(flatten) 194 | w = self.wl2(w) 195 | w = torch.nn.functional.relu(w) + 1e-8 196 | # Weighted average 197 | w1,w2 = torch.split(w, int(w.shape[0]/2), dim=0) 198 | 199 | y1_by_img = y1.view(n_imgs, n_ptchs_per_img) 200 | w1_by_img = w1.view(n_imgs, n_ptchs_per_img) 201 | FS = torch.sum(y1_by_img*w1_by_img, dim=1) / torch.sum(w1_by_img, dim=1) 202 | 203 | y2_by_img = y2.view(n_imgs, n_ptchs_per_img) 204 | w2_by_img = w2.view(n_imgs, n_ptchs_per_img) 205 | NFake_FS = torch.sum(y2_by_img*w2_by_img, dim=1) / torch.sum(w2_by_img, dim=1) 206 | 207 | 208 | 209 | Nflatten = Nflatten.view(n_imgs, n_ptchs_per_img,-1) 210 | sw,_ = self.swl1(Nflatten,self._get_initial_state(Nflatten.size(0))) 211 | sw = self.swl2(sw) 212 | sw = torch.nn.functional.relu(sw) + 1e-8 213 | Ny_by_img = Ny.view(n_imgs, n_ptchs_per_img) 214 | Nw_by_img = sw.view(n_imgs, n_ptchs_per_img) 215 | NS = torch.sum(Ny_by_img*Nw_by_img, dim=1) / torch.sum(Nw_by_img, dim=1) 216 | 217 | 218 | else: 219 | print('not use weighted') 220 | # Calculate average score for each image 221 | FS = torch.mean(y1.view(n_imgs, n_ptchs_per_img), dim=1) 222 | NFake_FS = torch.mean(y2.view(n_imgs, n_ptchs_per_img), dim=1) 223 | NS = torch.mean(Ny.view(n_imgs, n_ptchs_per_img), dim=1) 224 | 225 | if self.istrain: 226 | return FS.squeeze(),NFake_FS.squeeze(),NS.squeeze(),\ 227 | 
f1,f2,fake_f1, fake_f2 228 | 229 | else: 230 | return NS.squeeze() 231 | 232 | 233 | 234 | def _initialize_weights(self): 235 | for m in self.modules(): 236 | if isinstance(m, nn.Conv2d): 237 | n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels 238 | m.weight.data.normal_(0, math.sqrt(2. / n)) 239 | if m.bias is not None: 240 | m.bias.data.zero_() 241 | elif isinstance(m, nn.BatchNorm2d): 242 | m.weight.data.fill_(1) 243 | m.bias.data.zero_() 244 | elif isinstance(m, nn.Linear): 245 | m.weight.data.normal_(0, 0.01) 246 | m.bias.data.zero_() 247 | else: 248 | pass 249 | 250 | 251 | def test(): 252 | 253 | net = IQANet(weighted=True,istrain=True) 254 | net.cuda() 255 | 256 | x1 = torch.randn(2, 16,3,64,64) 257 | x1 = Variable(x1.cuda()) 258 | 259 | y1,y2,y3,y4,y5,y6,y7= net.forward(x1,x1) 260 | 261 | 262 | print(y1.shape,y2.shape,y3.shape,y4.shape,y5.shape,y6.shape,y7.shape) 263 | 264 | 265 | if __name__== '__main__': 266 | test() 267 | 268 | 269 | 270 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_NI/src/module_util.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.init as init 4 | import torch.nn.functional as F 5 | 6 | 7 | def initialize_weights(net_l, scale=1): 8 | if not isinstance(net_l, list): 9 | net_l = [net_l] 10 | for net in net_l: 11 | for m in net.modules(): 12 | if isinstance(m, nn.Conv2d): 13 | init.kaiming_normal_(m.weight, a=0, mode='fan_in') 14 | m.weight.data *= scale # for residual block 15 | if m.bias is not None: 16 | m.bias.data.zero_() 17 | elif isinstance(m, nn.Linear): 18 | init.kaiming_normal_(m.weight, a=0, mode='fan_in') 19 | m.weight.data *= scale 20 | if m.bias is not None: 21 | m.bias.data.zero_() 22 | elif isinstance(m, nn.BatchNorm2d): 23 | init.constant_(m.weight, 1) 24 | init.constant_(m.bias.data, 0.0) 25 | 26 | 27 | def initialize_weights_xavier(net_l, scale=1): 28 | if not isinstance(net_l, list): 29 | net_l = [net_l] 30 | for net in net_l: 31 | for m in net.modules(): 32 | if isinstance(m, nn.Conv2d): 33 | init.xavier_normal_(m.weight) 34 | m.weight.data *= scale # for residual block 35 | if m.bias is not None: 36 | m.bias.data.zero_() 37 | elif isinstance(m, nn.Linear): 38 | init.xavier_normal_(m.weight) 39 | m.weight.data *= scale 40 | if m.bias is not None: 41 | m.bias.data.zero_() 42 | elif isinstance(m, nn.BatchNorm2d): 43 | init.constant_(m.weight, 1) 44 | init.constant_(m.bias.data, 0.0) 45 | 46 | 47 | def make_layer(block, n_layers): 48 | layers = [] 49 | for _ in range(n_layers): 50 | layers.append(block()) 51 | return nn.Sequential(*layers) 52 | 53 | 54 | class ResidualBlock_noBN(nn.Module): 55 | '''Residual block w/o BN 56 | ---Conv-ReLU-Conv-+- 57 | |________________| 58 | ''' 59 | 60 | def __init__(self, nf=64): 61 | super(ResidualBlock_noBN, self).__init__() 62 | self.conv1 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True) 63 | self.conv2 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True) 64 | 65 | # initialization 66 | initialize_weights([self.conv1, self.conv2], 0.1) 67 | 68 | def forward(self, x): 69 | identity = x 70 | out = F.relu(self.conv1(x), inplace=True) 71 | out = self.conv2(out) 72 | return identity + out 73 | 74 | 75 | def flow_warp(x, flow, interp_mode='bilinear', padding_mode='zeros'): 76 | """Warp an image or feature map with optical flow 77 | Args: 78 | x (Tensor): size (N, C, H, W) 79 | flow (Tensor): size (N, H, W, 2), normal value 80 | interp_mode (str): 'nearest' or 'bilinear' 81 | 
padding_mode (str): 'zeros' or 'border' or 'reflection' 82 | 83 | Returns: 84 | Tensor: warped image or feature map 85 | """ 86 | assert x.size()[-2:] == flow.size()[1:3] 87 | B, C, H, W = x.size() 88 | # mesh grid 89 | grid_y, grid_x = torch.meshgrid(torch.arange(0, H), torch.arange(0, W)) 90 | grid = torch.stack((grid_x, grid_y), 2).float() # W(x), H(y), 2 91 | grid.requires_grad = False 92 | grid = grid.type_as(x) 93 | vgrid = grid + flow 94 | # scale grid to [-1,1] 95 | vgrid_x = 2.0 * vgrid[:, :, :, 0] / max(W - 1, 1) - 1.0 96 | vgrid_y = 2.0 * vgrid[:, :, :, 1] / max(H - 1, 1) - 1.0 97 | vgrid_scaled = torch.stack((vgrid_x, vgrid_y), dim=3) 98 | output = F.grid_sample(x, vgrid_scaled, mode=interp_mode, padding_mode=padding_mode) 99 | return output 100 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_NI/src/utils.py: -------------------------------------------------------------------------------- 1 | """ 2 | Some Useful Functions and Classes 3 | """ 4 | 5 | import shutil 6 | from abc import ABCMeta, abstractmethod 7 | from threading import Lock 8 | from sys import stdout 9 | import torch 10 | import torch.nn as nn 11 | import numpy as np 12 | from scipy import stats 13 | 14 | 15 | def guassian_kernel(source, target, kernel_mul=2.0, kernel_num=5, fix_sigma=None): 16 | n_samples = int(source.size()[0])+int(target.size()[0]) 17 | total = torch.cat([source, target], dim=0) 18 | 19 | total0 = total.unsqueeze(0).expand(int(total.size(0)), int(total.size(0)), int(total.size(1))) 20 | total1 = total.unsqueeze(1).expand(int(total.size(0)), int(total.size(0)), int(total.size(1))) 21 | L2_distance = ((total0-total1)**2).sum(2) 22 | if fix_sigma: 23 | bandwidth = fix_sigma 24 | else: 25 | bandwidth = torch.sum(L2_distance.data) / (n_samples**2-n_samples) 26 | bandwidth /= kernel_mul ** (kernel_num // 2) 27 | bandwidth_list = [bandwidth * (kernel_mul**i) for i in range(kernel_num)] 28 | kernel_val = [torch.exp(-L2_distance / bandwidth_temp) for bandwidth_temp in bandwidth_list] 29 | return sum(kernel_val) 30 | 31 | class MMD_loss(nn.Module): 32 | def __init__(self, kernel_mul = 2.0, kernel_num = 5): 33 | super(MMD_loss, self).__init__() 34 | self.kernel_num = kernel_num 35 | self.kernel_mul = kernel_mul 36 | self.fix_sigma = None 37 | return 38 | 39 | 40 | def forward(self, source, target): 41 | batch_size = int(source.size()[0]) 42 | kernels = guassian_kernel(source, target, kernel_mul=self.kernel_mul, kernel_num=self.kernel_num, fix_sigma=self.fix_sigma) 43 | XX = kernels[:batch_size, :batch_size] 44 | YY = kernels[batch_size:, batch_size:] 45 | XY = kernels[:batch_size, batch_size:] 46 | YX = kernels[batch_size:, :batch_size] 47 | loss = torch.mean(XX + YY - XY -YX) 48 | return loss 49 | 50 | 51 | 52 | class AverageMeter: 53 | """ Computes and stores the average and current value """ 54 | def __init__(self): 55 | self.reset() 56 | 57 | def reset(self): 58 | self.val = 0 59 | self.avg = 0 60 | self.sum = 0 61 | self.count = 0 62 | 63 | def update(self, val, n=1): 64 | self.val = val 65 | self.sum += val * n 66 | self.count += n 67 | self.avg = self.sum / self.count 68 | 69 | 70 | """ 71 | Metrics for IQA performance 72 | ----------------------------------------- 73 | 74 | Including classes: 75 | * Metric (base) 76 | * MAE 77 | * SROCC 78 | * PLCC 79 | * RMSE 80 | 81 | """ 82 | 83 | class Metric(metaclass=ABCMeta): 84 | def __init__(self): 85 | super(Metric, self).__init__() 86 | self.reset() 87 | 88 | def reset(self): 89 | self.x1 = [] 90 | self.x2 
= [] 91 | 92 | @abstractmethod 93 | def _compute(self, x1, x2): 94 | return 95 | 96 | def compute(self): 97 | x1_array = np.array(self.x1, dtype=float) 98 | x2_array = np.array(self.x2, dtype=float) 99 | return self._compute(x1_array.ravel(), x2_array.ravel()) 100 | 101 | def _check_type(self, x): 102 | return isinstance(x, (float, int, np.ndarray)) 103 | 104 | def update(self, x1, x2): 105 | if self._check_type(x1) and self._check_type(x2): 106 | self.x1.append(x1) 107 | self.x2.append(x2) 108 | else: 109 | raise TypeError('Data types not supported') 110 | 111 | class MAE(Metric): 112 | def __init__(self): 113 | super(MAE, self).__init__() 114 | 115 | def _compute(self, x1, x2): 116 | return np.sum(np.abs(x2-x1)) 117 | 118 | class SROCC(Metric): 119 | def __init__(self): 120 | super(SROCC, self).__init__() 121 | 122 | def _compute(self, x1, x2): 123 | return stats.spearmanr(x1, x2)[0] 124 | 125 | class PLCC(Metric): 126 | def __init__(self): 127 | super(PLCC, self).__init__() 128 | 129 | def _compute(self, x1, x2): 130 | return stats.pearsonr(x1, x2)[0] 131 | 132 | class RMSE(Metric): 133 | def __init__(self): 134 | super(RMSE, self).__init__() 135 | 136 | def _compute(self, x1, x2): 137 | return np.sqrt(((x2 - x1) ** 2).mean()) 138 | 139 | 140 | def limited_instances(n): 141 | def decorator(cls): 142 | _instances = [None]*n 143 | _lock = Lock() 144 | def wrapper(idx, *args, **kwargs): 145 | nonlocal _instances 146 | with _lock: 147 | if idx < n: 148 | if _instances[idx] is None: _instances[idx] = cls(*args, **kwargs) 149 | else: 150 | raise KeyError('index exceeds maximum number of instances') 151 | return _instances[idx] 152 | return wrapper 153 | return decorator 154 | 155 | 156 | class SimpleProgressBar: 157 | def __init__(self, total_len, pat='#', show_step=False, print_freq=1): 158 | self.len = total_len 159 | self.pat = pat 160 | self.show_step = show_step 161 | self.print_freq = print_freq 162 | self.out_stream = stdout 163 | 164 | def show(self, cur, desc): 165 | bar_len, _ = shutil.get_terminal_size() 166 | # The tab between desc and the progress bar should be counted. 
167 | # And the '|'s on both ends be counted, too 168 | bar_len = bar_len - self.len_with_tabs(desc+'\t') - 2 169 | bar_len = int(bar_len*0.8) 170 | cur_pos = int(((cur+1)/self.len)*bar_len) 171 | cur_bar = '|'+self.pat*cur_pos+' '*(bar_len-cur_pos)+'|' 172 | 173 | disp_str = "{0}\t{1}".format(desc, cur_bar) 174 | 175 | # Clean 176 | self.write('\033[K') 177 | 178 | if self.show_step and (cur % self.print_freq) == 0: 179 | self.write(disp_str, new_line=True) 180 | return 181 | 182 | if (cur+1) < self.len: 183 | self.write(disp_str) 184 | else: 185 | self.write(disp_str, new_line=True) 186 | 187 | self.out_stream.flush() 188 | 189 | @staticmethod 190 | def len_with_tabs(s): 191 | return len(s.expandtabs()) 192 | 193 | def write(self, content, new_line=False): 194 | end = '\n' if new_line else '\r' 195 | self.out_stream.write(content+end) 196 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_NI/utils/CSIQ_make_list.py: -------------------------------------------------------------------------------- 1 | # A script to make data lists for pytorch code 2 | # Database: CSIQ 3 | # Date: 2018-11-6 4 | # 5 | # Edited: 2019-5-7 6 | # Change log: 7 | # + txt -> json 8 | # + mos saved as float values 9 | 10 | import random 11 | import json 12 | 13 | DATA_DIR = "../../datasets/CSIQ/dst_imgs/" 14 | REF_DIR = "../../datasets/CSIQ/src_imgs/" 15 | MOS_WITH_NAMES = "../../datasets/CSIQ/ref_dist_mos.txt" 16 | 17 | 18 | data_list = [line.strip().split() for line in open(MOS_WITH_NAMES, 'r')] 19 | 20 | idcs = ['1600.png','aerial_city.png','boston.png','bridge.png','butter_flower.png','cactus.png', 21 | 'child_swimming.png','couple.png','elk.png','family.png','fisher.png','foxy.png', 22 | 'geckos.png','lady_liberty.png','lake.png','log_seaside.png','monument.png','native_american.png', 23 | 'redwood.png','roping.png','rushmore.png','shroom.png','snow_leaves.png','sunsetcolor.png', 24 | 'sunset_sparrow.png','swarm.png','trolley.png','veggies.png','woman.png','turtle.png',] 25 | 26 | dist_list = ['AWGN','JPEG','jpeg2000','fnoise','BLUR','contrast'] 27 | 28 | for prot in range(10): 29 | 30 | random.shuffle(idcs) 31 | train_idcs = idcs[:18] 32 | val_idcs = idcs[18:24] 33 | test_idcs = idcs[24:] 34 | print('train_idcs:',train_idcs,'\n') 35 | print('val_idcs:',val_idcs,'\n') 36 | print('test_idcs:',test_idcs,'\n') 37 | 38 | def _write_list_into_file(l, f): 39 | with open(f, "w") as h: 40 | for line in l: 41 | h.write(line) 42 | h.write('\n') 43 | 44 | train_images, train_labels, train_mos = [], [], [] 45 | val_images, val_labels, val_mos = [], [], [] 46 | test_images, test_labels, test_mos = [], [], [] 47 | 48 | for ref_ini, dist_id,dist_type,dist_level,mos_std,mos in data_list: 49 | idx = ref_ini+'.png' 50 | ref = REF_DIR + idx 51 | img = DATA_DIR +dist_list[int(dist_id)-1].lower()+'/'+\ 52 | ref_ini+'.'+dist_list[int(dist_id)-1]+'.'+str(dist_level)+'.png' 53 | 54 | 55 | if idx in train_idcs: 56 | train_images.append(img) 57 | train_labels.append(ref) 58 | train_mos.append(float(mos)*100) 59 | if idx in val_idcs: 60 | val_images.append(img) 61 | val_labels.append(ref) 62 | val_mos.append(float(mos)*100) 63 | if idx in test_idcs: 64 | test_images.append(img) 65 | test_labels.append(ref) 66 | test_mos.append(float(mos)*100) 67 | 68 | 69 | ns = vars() 70 | for ph in ('train', 'val', 'test'): 71 | data_dict = dict(img=ns['{}_images'.format(ph)], \ 72 | ref=ns['{}_labels'.format(ph)], \ 73 | score=ns['{}_mos'.format(ph)]) 74 | with 
open('scripts/{}_'.format(ph)+str(prot)+'_data.json', 'w') as fp: 75 | json.dump(data_dict, fp) 76 | 77 | 78 | 79 | 80 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_NI/utils/KADID_make_list.py: -------------------------------------------------------------------------------- 1 | import random 2 | import json 3 | 4 | DATA_DIR = "../../datasets/kadid10k/images/" 5 | REF_DIR = "../../datasets/kadid10k/images/" 6 | MOS_WITH_NAMES = "../../datasets/kadid10k/names_with_mos.txt" 7 | 8 | 9 | EXCLUDE_INDICES = () 10 | EXCLUDE_TYPES = () 11 | data_list = [line.strip().split() for line in open(MOS_WITH_NAMES, 'r')] 12 | # Split the dataset by index 13 | N = 81-len(EXCLUDE_INDICES) 14 | idcs = list(range(1,N+1)) 15 | 16 | 17 | for prot in range(10): 18 | random.shuffle(idcs) 19 | 20 | 21 | train_idcs = idcs[:49] 22 | val_idcs = idcs[49:65] 23 | test_idcs = idcs[65:] 24 | print('train_idcs:',train_idcs) 25 | print('val_idcs:',val_idcs) 26 | print('test_idcs:',test_idcs) 27 | 28 | def _write_list_into_file(l, f): 29 | with open(f, "w") as h: 30 | for line in l: 31 | h.write(line) 32 | h.write('\n') 33 | 34 | train_images, train_labels, train_mos = [], [], [] 35 | val_images, val_labels, val_mos = [], [], [] 36 | test_images, test_labels, test_mos = [], [], [] 37 | 38 | for image, mos in data_list: 39 | ref = REF_DIR + "I" + image[1:3] + '.png' 40 | img = DATA_DIR + image 41 | idx = int(image[1:3]) 42 | if idx not in EXCLUDE_INDICES : 43 | if idx in train_idcs: 44 | train_images.append(img) 45 | train_labels.append(ref) 46 | train_mos.append(float(mos)) 47 | if idx in val_idcs: 48 | val_images.append(img) 49 | val_labels.append(ref) 50 | val_mos.append(float(mos)) 51 | if idx in test_idcs: 52 | test_images.append(img) 53 | test_labels.append(ref) 54 | test_mos.append(float(mos)) 55 | 56 | 57 | ns = vars() 58 | for ph in ('train', 'val', 'test'): 59 | data_dict = dict(img=ns['{}_images'.format(ph)], ref=ns['{}_labels'.format(ph)], score=ns['{}_mos'.format(ph)]) 60 | with open('./scripts/{}_'.format(ph)+str(prot)+'_data.json', 'w') as fp: 61 | json.dump(data_dict, fp) 62 | 63 | 64 | 65 | 66 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_NI/utils/LIVE_make_list.py: -------------------------------------------------------------------------------- 1 | # A script to make data lists for pytorch code 2 | # Database: LIVE 3 | # Date: 2018-11-6 4 | # 5 | # Edited: 2019-5-7 6 | # Change log: 7 | # + txt -> json 8 | # + mos saved as float values 9 | 10 | import random 11 | import json 12 | 13 | DATA_DIR = "../../datasets/databaserelease2/" 14 | REF_DIR = "../../datasets/databaserelease2/refimgs/" 15 | MOS_WITH_NAMES = "../../datasets/databaserelease2/MOS_NAME_REF.txt" 16 | data_list = [line.strip().split() for line in open(MOS_WITH_NAMES, 'r')] 17 | 18 | 19 | idcs = ['bikes.bmp', 'building2.bmp', 'buildings.bmp', 'caps.bmp', 'carnivaldolls.bmp', 20 | 'cemetry.bmp', 'churchandcapitol.bmp', 'coinsinfountain.bmp', 'dancers.bmp', 21 | 'flowersonih35.bmp', 'house.bmp', 'lighthouse.bmp', 'lighthouse2.bmp', 22 | 'manfishing.bmp', 'monarch.bmp', 'ocean.bmp', 'paintedhouse.bmp', 23 | 'parrots.bmp', 'plane.bmp', 'rapids.bmp', 'sailing1.bmp', 'sailing2.bmp', 24 | 'sailing3.bmp', 'sailing4.bmp', 'statue.bmp', 'stream.bmp', 'studentsculpture.bmp', 25 | 'woman.bmp', 'womanhat.bmp'] 26 | 27 | for prot in range(10): 28 | 29 | random.shuffle(idcs) 30 | train_idcs = idcs[:17] 31 | val_idcs = idcs[17:23] 32 | test_idcs = idcs[23:29] 33 
| print('train_idcs:',train_idcs,'\n') 34 | print('val_idcs:',val_idcs,'\n') 35 | print('test_idcs:',test_idcs,'\n') 36 | 37 | def _write_list_into_file(l, f): 38 | with open(f, "w") as h: 39 | for line in l: 40 | h.write(line) 41 | h.write('\n') 42 | 43 | train_images, train_labels, train_mos = [], [], [] 44 | val_images, val_labels, val_mos = [], [], [] 45 | test_images, test_labels, test_mos = [], [], [] 46 | 47 | for mos, image,ref in data_list: 48 | idx = ref 49 | ref = REF_DIR + ref 50 | img = DATA_DIR + image 51 | 52 | 53 | if idx in train_idcs: 54 | train_images.append(img) 55 | train_labels.append(ref) 56 | train_mos.append(float(mos)) 57 | if idx in val_idcs: 58 | val_images.append(img) 59 | val_labels.append(ref) 60 | val_mos.append(float(mos)) 61 | if idx in test_idcs: 62 | test_images.append(img) 63 | test_labels.append(ref) 64 | test_mos.append(float(mos)) 65 | 66 | 67 | ns = vars() 68 | for ph in ('train', 'val', 'test'): 69 | data_dict = dict(img=ns['{}_images'.format(ph)], \ 70 | ref=ns['{}_labels'.format(ph)], \ 71 | score=ns['{}_mos'.format(ph)]) 72 | with open('{}_'.format(ph)+str(prot)+'_data.json', 'w') as fp: 73 | json.dump(data_dict, fp) 74 | 75 | 76 | 77 | 78 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_NI/utils/TID_make_list.py: -------------------------------------------------------------------------------- 1 | import random 2 | import json 3 | 4 | DATA_DIR = "../../datasets/tid2013/distorted_images/" 5 | REF_DIR = "../../datasets/tid2013/reference_images/" 6 | MOS_WITH_NAMES = "../datasets/tid2013/mos_with_names.txt" 7 | 8 | 9 | EXCLUDE_INDICES = (25,) 10 | EXCLUDE_TYPES = () 11 | 12 | 13 | data_list = [line.strip().split() for line in open(MOS_WITH_NAMES, 'r')] 14 | def _write_list_into_file(l, f): 15 | with open(f, "w") as h: 16 | for line in l: 17 | h.write(line) 18 | h.write('\n') 19 | 20 | 21 | N = 25 - len(EXCLUDE_INDICES) 22 | idcs = list(range(1,N+1)) 23 | 24 | 25 | for prot in range(10): 26 | random.shuffle(idcs) 27 | print('idcs:', idcs) 28 | train_idcs = idcs[:15] 29 | val_idcs = idcs[15:20] 30 | test_idcs = idcs[20:] 31 | 32 | train_images, train_labels, train_mos = [], [], [] 33 | val_images, val_labels, val_mos = [], [], [] 34 | test_images, test_labels, test_mos = [], [], [] 35 | 36 | for mos, image in data_list: 37 | ref = REF_DIR + "I" + image[1:3] + '.BMP' 38 | img = DATA_DIR + image 39 | idx = int(image[1:3]) 40 | tpe = int(image[4:6]) 41 | if idx not in EXCLUDE_INDICES and tpe not in EXCLUDE_TYPES: 42 | if idx in train_idcs: 43 | train_images.append(img) 44 | train_labels.append(ref) 45 | train_mos.append(float(mos)) 46 | if idx in val_idcs: 47 | val_images.append(img) 48 | val_labels.append(ref) 49 | val_mos.append(float(mos)) 50 | if idx in test_idcs: 51 | test_images.append(img) 52 | test_labels.append(ref) 53 | test_mos.append(float(mos)) 54 | 55 | 56 | ns = vars() 57 | for ph in ('train', 'val', 'test'): 58 | data_dict = dict(img=ns['{}_images'.format(ph)], ref=ns['{}_labels'.format(ph)], score=ns['{}_mos'.format(ph)]) 59 | with open('TID_List_6-2-2-Wo25/{}_'.format(ph)+str(prot)+'_data.json', 'w') as fp: 60 | json.dump(data_dict, fp) 61 | 62 | 63 | 64 | 65 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/models/scid/pkl.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/models/scid/pkl.txt 
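All four *_make_list.py utilities above emit the same schema, which is exactly what IQADataset._read_lists() consumes: a JSON object holding three parallel lists. A small sketch for inspecting one generated split, assuming a file following the {phase}_{prot}_data.json naming; the stored paths are whatever DATA_DIR/REF_DIR pointed to at generation time:

import json

with open('train_0_data.json') as fp:
    split = json.load(fp)

# 'img' lists distorted-image paths, 'ref' the matching reference paths,
# and 'score' one MOS value per image/reference pair.
assert len(split['img']) == len(split['ref']) == len(split['score'])
print(split['img'][0], split['ref'][0], split['score'][0])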
-------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/models/siqad/pkl.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/models/siqad/pkl.txt -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/IQA_Test_All_Pros.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ 3 | Main Script 4 | """ 5 | 6 | import sys 7 | import os 8 | 9 | import shutil 10 | import argparse 11 | import numpy as np 12 | import torch 13 | import torch.backends.cudnn as cudnn 14 | from torch import nn 15 | 16 | from model import IQANet 17 | from dataset import IQADataset,SCIDDataset 18 | from utils import AverageMeter, SROCC, PLCC, RMSE 19 | from utils import SimpleProgressBar as ProgressBar 20 | from utils import MMD_loss 21 | from VIDLoss import VIDLoss 22 | import os 23 | 24 | def test(test_data_loader, model,txt_name=None): 25 | srocc = SROCC() 26 | plcc = PLCC() 27 | rmse = RMSE() 28 | len_test = len(test_data_loader) 29 | pb = ProgressBar(len_test, show_step=True) 30 | 31 | print("Testing") 32 | 33 | with open(txt_name,'w') as f: 34 | model.eval() 35 | with torch.no_grad(): 36 | for i, ((img, ref), score) in enumerate(test_data_loader): 37 | img = img.cuda() 38 | ref = ref.cuda() 39 | output = model(img, img).cpu().data.numpy() 40 | score = score.data.numpy() 41 | f.write(str(np.around(score[0], 4)) + ' '+ str(np.around(output, 4))+'\n') 42 | srocc.update(score, output) 43 | plcc.update(score, output) 44 | rmse.update(score, output) 45 | 46 | pb.show(i, "Test: [{0:5d}/{1:5d}]\t" 47 | "Score: {2:.4f}\t" 48 | "Label: {3:.4f}" 49 | .format(i+1, len_test, float(output), float(score))) 50 | 51 | print("\n\nSROCC: {0:.4f}\n" 52 | "PLCC: {1:.4f}\n" 53 | "RMSE: {2:.4f}" 54 | .format(srocc.compute(), plcc.compute(), rmse.compute()) 55 | ) 56 | 57 | 58 | 59 | def test_iqa(args): 60 | batch_size = 1 61 | pro = args.pro 62 | num_workers = args.workers 63 | subset = args.subset 64 | data_dir = args.data_dir 65 | list_dir = args.list_dir 66 | resume = args.resume 67 | 68 | for k, v in args.__dict__.items(): 69 | print(k, ':', v) 70 | 71 | model = IQANet(args.weighted) 72 | 73 | test_loader = torch.utils.data.DataLoader( 74 | Dataset(data_dir, phase='test_'+str(pro), list_dir=list_dir, 75 | n_ptchs=args.n_ptchs_per_img, 76 | subset=subset), 77 | batch_size=batch_size, shuffle=False, 78 | num_workers=num_workers, pin_memory=True 79 | ) 80 | 81 | cudnn.benchmark = True 82 | 83 | # Resume from a checkpoint 84 | if resume: 85 | resume = resume.split('t.')[0]+'t_'+str(pro)+'.pkl' 86 | if os.path.isfile(resume): 87 | print("=> loading checkpoint '{}'".format(resume)) 88 | checkpoint = torch.load(resume) 89 | model.load_state_dict(checkpoint['state_dict']) 90 | print("=> loaded checkpoint '{}' (epoch {})" 91 | .format(resume, checkpoint['epoch'])) 92 | else: 93 | print("=> no checkpoint found at '{}'".format(resume)) 94 | 95 | txt_name = 'setting'+str(pro)+'.txt' 96 | test(test_loader, model.cuda(),txt_name) 97 | 98 | 99 | def adjust_learning_rate(args, optimizer, epoch): 100 | """ 101 | Sets the learning rate 102 | """ 103 | if args.lr_mode == 'step': 104 | lr = args.lr * (0.5 ** (epoch // args.step)) 105 | elif args.lr_mode == 'poly': 106 | lr = args.lr * (1 - epoch / args.epochs) ** 1.1 107 | elif args.lr_mode == 'const': 108 | lr = 
args.lr 109 | else: 110 | raise ValueError('Unknown lr mode {}'.format(args.lr_mode)) 111 | 112 | for param_group in optimizer.param_groups: 113 | param_group['lr'] = lr 114 | return lr 115 | 116 | def save_checkpoint(state, is_best, filename='checkpoint.pkl'): 117 | torch.save(state, filename) 118 | if is_best: 119 | shutil.copyfile(filename, '../models/model_best.pkl') 120 | 121 | 122 | 123 | # def parse_args(): 124 | # # Training settings 125 | # parser = argparse.ArgumentParser(description='') 126 | # parser.add_argument('-cmd', type=str,default='test') 127 | # parser.add_argument('-d', '--data-dir', default='../../../datasets/tid2013/') 128 | # parser.add_argument('-l', '--list-dir', default='../sci_scripts/siqad-scripts-6-2-2/', 129 | # help='List dir to look for train_images.txt etc. ' 130 | # 'It is the same with --data-dir if not set.') 131 | # parser.add_argument('-n', '--n-ptchs-per-img', type=int, default=1024, metavar='N', 132 | # help='number of patches for each image (default: 32)') 133 | # parser.add_argument('--step', type=int, default=200) 134 | # parser.add_argument('--batch-size', type=int, default=32, metavar='B', 135 | # help='input batch size for training (default: 64)') 136 | # parser.add_argument('--epochs', type=int, default=1000, metavar='NE', 137 | # help='number of epochs to train (default: 1000)') 138 | # parser.add_argument('--lr', type=float, default=1e-4, metavar='LR', 139 | # help='learning rate (default: 1e-4)') 140 | # parser.add_argument('--lr-mode', type=str, default='const') 141 | # parser.add_argument('--weight-decay', default=1e-4, type=float, 142 | # metavar='W', help='weight decay (default: 1e-4)') 143 | # parser.add_argument('--resume', default='../models/siqad/model_best.pkl',type=str, metavar='PATH', 144 | # help='path to latest checkpoint') 145 | # parser.add_argument('--workers', type=int, default=8) 146 | # parser.add_argument('--pro', type=int, default=2) 147 | # parser.add_argument('--subset', default='test') 148 | # parser.add_argument('--evaluate', dest='evaluate', 149 | # action='store_true', 150 | # help='evaluate model on validation set') 151 | # parser.add_argument('--weighted',default=True, dest='weighted') 152 | # parser.add_argument('--dump_per', type=int, default=50, 153 | # help='the number of epochs to make a checkpoint') 154 | # parser.add_argument('--dataset', type=str, default='IQA') 155 | # parser.add_argument('--anew', action='store_true') 156 | 157 | # args = parser.parse_args() 158 | 159 | # return args 160 | 161 | def parse_args(): 162 | # Training settings 163 | parser = argparse.ArgumentParser(description='') 164 | parser.add_argument('-cmd', type=str,default='test') 165 | parser.add_argument('-d', '--data-dir', default='../../../datasets/tid2013/') 166 | parser.add_argument('-l', '--list-dir', default='../sci_scripts/scid-scripts-6-2-2/', 167 | help='List dir to look for train_images.txt etc. 
' 168 | 'It is the same with --data-dir if not set.') 169 | parser.add_argument('-n', '--n-ptchs-per-img', type=int, default=1024, metavar='N', 170 | help='number of patches for each image (default: 32)') 171 | parser.add_argument('--step', type=int, default=200) 172 | parser.add_argument('--batch-size', type=int, default=32, metavar='B', 173 | help='input batch size for training (default: 64)') 174 | parser.add_argument('--epochs', type=int, default=1000, metavar='NE', 175 | help='number of epochs to train (default: 1000)') 176 | parser.add_argument('--lr', type=float, default=1e-4, metavar='LR', 177 | help='learning rate (default: 1e-4)') 178 | parser.add_argument('--lr-mode', type=str, default='const') 179 | parser.add_argument('--weight-decay', default=1e-4, type=float, 180 | metavar='W', help='weight decay (default: 1e-4)') 181 | parser.add_argument('--resume', default='../models/scid/model_best.pkl',type=str, metavar='PATH', 182 | help='path to latest checkpoint') 183 | parser.add_argument('--workers', type=int, default=8) 184 | parser.add_argument('--pro', type=int, default=2) 185 | parser.add_argument('--subset', default='test') 186 | parser.add_argument('--evaluate', dest='evaluate', 187 | action='store_true', 188 | help='evaluate model on validation set') 189 | parser.add_argument('--weighted',default=True, dest='weighted') 190 | parser.add_argument('--dump_per', type=int, default=50, 191 | help='the number of epochs to make a checkpoint') 192 | parser.add_argument('--dataset', type=str, default='SCID') 193 | parser.add_argument('--anew', action='store_true') 194 | 195 | args = parser.parse_args() 196 | 197 | return args 198 | 199 | 200 | def main(): 201 | args = parse_args() 202 | # Choose dataset 203 | global Dataset 204 | Dataset = globals().get(args.dataset+'Dataset', None) 205 | 206 | for test_id in range(0,10): 207 | args.pro = test_id 208 | print('Test ID:', args.pro) 209 | if args.cmd == 'train': 210 | train_iqa(args) 211 | elif args.cmd == 'test': 212 | test_iqa(args) 213 | 214 | 215 | if __name__ == '__main__': 216 | main() -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/Inv_arch.py: -------------------------------------------------------------------------------- 1 | import math 2 | import torch 3 | import torch.nn as nn 4 | import torch.nn.functional as F 5 | import numpy as np 6 | from Subnet_constructor import DenseBlock,DenseBlock1X1 7 | from torch.autograd import Variable 8 | 9 | class InvBlockExp(nn.Module): 10 | def __init__(self, split_len1, split_len2, clamp=1.0, Use1x1=False): 11 | super(InvBlockExp, self).__init__() 12 | 13 | self.split_len1 = split_len1 14 | self.split_len2 = split_len2 15 | 16 | self.clamp = clamp 17 | 18 | if not Use1x1: 19 | self.F = DenseBlock(self.split_len2, self.split_len1) 20 | self.G = DenseBlock(self.split_len1, self.split_len2) 21 | self.H = DenseBlock(self.split_len1, self.split_len2) 22 | else: 23 | self.F = DenseBlock1X1(self.split_len2, self.split_len1) 24 | self.G = DenseBlock1X1(self.split_len1, self.split_len2) 25 | self.H = DenseBlock1X1(self.split_len1, self.split_len2) 26 | 27 | 28 | def forward(self, x1, x2,rev=False): 29 | if not rev: 30 | y1 = x1 + self.F(x2) 31 | self.s = self.clamp * (torch.sigmoid(self.H(y1)) * 2 - 1) 32 | y2 = x2.mul(torch.exp(self.s)) + self.G(y1) 33 | else: 34 | self.s = self.clamp * (torch.sigmoid(self.H(x1)) * 2 - 1) 35 | y2 = (x2 - self.G(x1)).div(torch.exp(self.s)) 36 | y1 = x1 - self.F(y2) 37 | 38 | return y1, y2 39 | 40 | def 
jacobian(self, x, rev=False): 41 | if not rev: 42 | jac = torch.sum(self.s) 43 | else: 44 | jac = -torch.sum(self.s) 45 | 46 | return jac / x.shape[0] 47 | 48 | 49 | 50 | class InvRescaleNet(nn.Module): 51 | def __init__(self, split_len1=32, split_len2=32, block_num=3,Use1x1=False): 52 | super(InvRescaleNet, self).__init__() 53 | operations = [] 54 | for j in range(block_num): 55 | b = InvBlockExp(split_len1, split_len2,Use1x1=Use1x1) 56 | operations.append(b) 57 | self.operations = nn.ModuleList(operations) 58 | 59 | 60 | def forward(self, x1,x2, rev=False, cal_jacobian=False): 61 | out1 = x1 62 | out2 = x2 63 | jacobian = 0 64 | 65 | if not rev: 66 | for op in self.operations: 67 | out1,out2 = op.forward(out1, out2,rev) 68 | if cal_jacobian: 69 | jacobian += op.jacobian(out1, rev) 70 | else: 71 | for op in reversed(self.operations): 72 | out1,out2 = op.forward(out1, out2,rev) 73 | if cal_jacobian: 74 | jacobian += op.jacobian(out1, rev) 75 | 76 | if cal_jacobian: 77 | return out1,out2, jacobian 78 | else: 79 | return out1,out2 80 | 81 | 82 | 83 | 84 | 85 | def test(): 86 | 87 | net = InvRescaleNet(split_len1=32, split_len2=32, block_num=3) 88 | net.cuda() 89 | 90 | x1 = torch.randn(2,32,32,32) 91 | x1 = Variable(x1.cuda()) 92 | 93 | x2 = torch.randn(2,32,32,32) 94 | x2 = Variable(x2.cuda()) 95 | 96 | # x2 = Variable(x2.cuda()) 97 | # y1,y2,y3,y4,y5,y6 = net.forward(x1,x1,x1,x1) 98 | # print(y1.shape,y2.shape,y3.shape,y4.shape,y5.shape,y6.shape) 99 | 100 | y1,y2 = net.forward(x1,x2) 101 | 102 | 103 | print(y1.shape,y2.shape) 104 | 105 | 106 | if __name__== '__main__': 107 | test() 108 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/Subnet_constructor.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.functional as F 4 | import module_util as mutil 5 | from torch.autograd import Variable 6 | class DenseBlock(nn.Module): 7 | def __init__(self, channel_in, channel_out, init='xavier', gc=32, bias=True): 8 | super(DenseBlock, self).__init__() 9 | self.conv1 = nn.Conv2d(channel_in, gc, 3, 1, 1, bias=bias) 10 | self.conv2 = nn.Conv2d(channel_in + gc, gc, 3, 1, 1, bias=bias) 11 | self.conv3 = nn.Conv2d(channel_in + 2 * gc, gc, 3, 1, 1, bias=bias) 12 | self.conv4 = nn.Conv2d(channel_in + 3 * gc, gc, 3, 1, 1, bias=bias) 13 | self.conv5 = nn.Conv2d(channel_in + 4 * gc, channel_out, 3, 1, 1, bias=bias) 14 | self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) 15 | 16 | if init == 'xavier': 17 | mutil.initialize_weights_xavier([self.conv1, self.conv2, self.conv3, self.conv4], 0.1) 18 | else: 19 | mutil.initialize_weights([self.conv1, self.conv2, self.conv3, self.conv4], 0.1) 20 | mutil.initialize_weights(self.conv5, 0) 21 | 22 | def forward(self, x): 23 | x1 = self.lrelu(self.conv1(x)) 24 | x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1))) 25 | x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1))) 26 | x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1))) 27 | x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1)) 28 | 29 | return x5 30 | 31 | #class DenseBlock1X1(nn.Module): 32 | # def __init__(self, channel_in, channel_out, init='xavier', gc=32, bias=True): 33 | # super(DenseBlock1X1, self).__init__() 34 | # self.conv1 = nn.Conv2d(channel_in, gc, 1, 1, 0, bias=bias) 35 | # self.conv2 = nn.Conv2d(channel_in + gc, gc,1,1, 0, bias=bias) 36 | # self.conv3 = nn.Conv2d(channel_in + 2 * gc, gc, 1, 1, 0, bias=bias) 37 | # self.conv4 = 
nn.Conv2d(channel_in + 3 * gc, gc, 1, 1, 0, bias=bias)
38 | # self.conv5 = nn.Conv2d(channel_in + 4 * gc, channel_out, 1, 1, 0, bias=bias)
39 | # self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
40 | #
41 | # if init == 'xavier':
42 | # mutil.initialize_weights_xavier([self.conv1, self.conv2, self.conv3, self.conv4], 0.1)
43 | # else:
44 | # mutil.initialize_weights([self.conv1, self.conv2, self.conv3, self.conv4], 0.1)
45 | # mutil.initialize_weights(self.conv5, 0)
46 | #
47 | # def forward(self, x):
48 | # x1 = self.lrelu(self.conv1(x))
49 | # x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
50 | # x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
51 | # x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
52 | # x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
53 | #
54 | # return x5
55 | 
56 | 
57 | 
58 | class DenseBlock1X1(nn.Module):  # despite the name, a plain chain of 1x1 convs without dense concatenation
59 | def __init__(self, channel_in, channel_out, init='xavier', gc=32, bias=True):
60 | super(DenseBlock1X1, self).__init__()
61 | self.conv1 = nn.Conv2d(channel_in, channel_in*2, 1, 1, 0, bias=bias)
62 | self.conv2 = nn.Conv2d(channel_in*2, channel_in*2, 1, 1, 0, bias=bias)
63 | self.conv3 = nn.Conv2d(channel_in*2, channel_in*2, 1, 1, 0, bias=bias)
64 | self.conv4 = nn.Conv2d(channel_in*2, channel_in, 1, 1, 0, bias=bias)
65 | 
66 | self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
67 | 
68 | if init == 'xavier':
69 | mutil.initialize_weights_xavier([self.conv1, self.conv2, self.conv3, self.conv4], 0.1)
70 | else:
71 | mutil.initialize_weights([self.conv1, self.conv2, self.conv3, self.conv4], 0.1)
72 | 
73 | 
74 | def forward(self, x):
75 | x1 = self.lrelu(self.conv1(x))
76 | x2 = self.lrelu(self.conv2(x1))
77 | x3 = self.lrelu(self.conv3(x2))
78 | x4 = self.conv4(x3)
79 | 
80 | 
81 | return x4
82 | 
83 | def test():
84 | 
85 | net = DenseBlock1X1(32, 32)
86 | net.cuda()
87 | 
88 | x1 = torch.randn(1, 32, 9, 9)
89 | x1 = Variable(x1.cuda())
90 | 
91 | 
92 | 
93 | y1 = net.forward(x1)
94 | 
95 | 
96 | print(y1.shape)
97 | 
98 | 
99 | if __name__ == '__main__':
100 | test()
101 | 
102 | 
103 | 
104 | 
105 | 
106 | 
107 | 
108 | 
109 | 
110 | 
111 | 
-------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/VIDLoss.py: --------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | 
3 | import torch
4 | import torch.nn as nn
5 | import torch.nn.functional as F
6 | import numpy as np
7 | from torch.autograd import Variable
8 | 
9 | class VIDLoss(nn.Module):
10 | """Variational Information Distillation for Knowledge Transfer (CVPR 2019),
11 | code from author: https://github.com/ssahn0215/variational-information-distillation"""
12 | def __init__(self,
13 | num_input_channels,
14 | num_mid_channel,
15 | num_target_channels,
16 | init_pred_var=5.0,
17 | eps=1e-5):
18 | super(VIDLoss, self).__init__()
19 | 
20 | def conv1x1(in_channels, out_channels, stride=1):
21 | return nn.Conv2d(
22 | in_channels, out_channels,
23 | kernel_size=1, padding=0,
24 | bias=False, stride=stride)
25 | 
26 | self.regressor = nn.Sequential(
27 | conv1x1(num_input_channels, num_mid_channel),
28 | nn.ReLU(),
29 | conv1x1(num_mid_channel, num_mid_channel),
30 | nn.ReLU(),
31 | conv1x1(num_mid_channel, num_target_channels),
32 | )
33 | self.log_scale = torch.nn.Parameter(
34 | np.log(np.exp(init_pred_var-eps)-1.0) * torch.ones(num_target_channels)
35 | )
36 | self.eps = eps
37 | 
38 | def forward(self, input, target):
39 | # pool for dimension match
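# The loss below is the Gaussian negative log-likelihood of the teacher
# feature t given the student feature s, up to additive constants:
#     -log p(t|s) ~ 0.5 * ((mu(s) - t)^2 / sigma^2 + log(sigma^2))
# where mu is self.regressor and sigma^2 = softplus(log_scale) + eps is a
# learned per-channel variance, initialized so that sigma^2 == init_pred_var.
40 | s_H, t_H = input.shape[2], 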
target.shape[2] 41 | if s_H > t_H: 42 | input = F.adaptive_avg_pool2d(input, (t_H, t_H)) 43 | elif s_H < t_H: 44 | target = F.adaptive_avg_pool2d(target, (s_H, s_H)) 45 | else: 46 | pass 47 | pred_mean = self.regressor(input) 48 | pred_var = torch.log(1.0+torch.exp(self.log_scale))+self.eps 49 | pred_var = pred_var.view(1, -1, 1, 1) 50 | neg_log_prob = 0.5*( 51 | (pred_mean-target)**2/pred_var+torch.log(pred_var) 52 | ) 53 | loss = torch.mean(neg_log_prob) 54 | return loss 55 | 56 | 57 | 58 | 59 | def test(): 60 | 61 | x1 = torch.randn(2, 128,4,4) 62 | x1 = Variable(x1.cuda()) 63 | net = VIDLoss(128,256,128) 64 | net.cuda() 65 | 66 | 67 | # x2 = Variable(x2.cuda()) 68 | # y1,y2,y3,y4,y5,y6 = net.forward(x1,x1,x1,x1) 69 | # print(y1.shape,y2.shape,y3.shape,y4.shape,y5.shape,y6.shape) 70 | 71 | y1 = net.forward(x1,x1) 72 | 73 | 74 | print(y1) 75 | 76 | 77 | if __name__== '__main__': 78 | test() 79 | 80 | 81 | 82 | 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | 91 | 92 | 93 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/FRmodel.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/FRmodel.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/FRmodel3.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/FRmodel3.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/Inv_arch.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/Inv_arch.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/Inv_arch.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/Inv_arch.cpython-37.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/Inv_arch.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/Inv_arch.cpython-38.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/NRmodel.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/NRmodel.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/NRmodel3.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/NRmodel3.cpython-36.pyc 
-------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/Subnet_constructor.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/Subnet_constructor.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/Subnet_constructor.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/Subnet_constructor.cpython-37.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/Subnet_constructor.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/Subnet_constructor.cpython-38.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/VIDLoss.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/VIDLoss.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/VIDLoss.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/VIDLoss.cpython-37.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/VIDLoss.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/VIDLoss.cpython-38.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/dataset.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/dataset.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/dataset.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/dataset.cpython-37.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/dataset.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/dataset.cpython-38.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/model.cpython-36.pyc: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/model.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/model.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/model.cpython-37.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/model.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/model.cpython-38.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/model2.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/model2.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/modelSplit.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/modelSplit.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/modelSplit2.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/modelSplit2.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/module_util.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/module_util.cpython-36.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/module_util.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/module_util.cpython-37.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/module_util.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/module_util.cpython-38.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/utils.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/utils.cpython-36.pyc 
-------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/utils.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/utils.cpython-37.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/__pycache__/utils.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/FPR_SCI/src/__pycache__/utils.cpython-38.pyc -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/dataset.py: -------------------------------------------------------------------------------- 1 | """ 2 | Dataset and Transforms 3 | """ 4 | 5 | 6 | import torch.utils.data 7 | import numpy as np 8 | import random 9 | import json 10 | from skimage import io 11 | from os.path import join, exists 12 | from utils import limited_instances, SimpleProgressBar 13 | 14 | 15 | 16 | class IQADataset(torch.utils.data.Dataset): 17 | def __init__(self, data_dir, phase, n_ptchs=256, sample_once=False, subset='', list_dir=''): 18 | super(IQADataset, self).__init__() 19 | 20 | self.list_dir = data_dir if not list_dir else list_dir 21 | self.data_dir = data_dir 22 | self.phase = phase 23 | self.subset = phase if not subset.strip() else subset 24 | self.n_ptchs = n_ptchs 25 | self.img_list = [] 26 | self.ref_list = [] 27 | self.score_list = [] 28 | self.sample_once = sample_once 29 | self._from_pool = False 30 | 31 | self._read_lists() 32 | self._aug_lists() 33 | 34 | self.tfs = Transforms() 35 | if sample_once: 36 | @limited_instances(self.__len__()) 37 | class IncrementCache: 38 | def store(self, data): 39 | self.data = data 40 | 41 | self._pool = IncrementCache 42 | self._to_pool() 43 | self._from_pool = True 44 | 45 | def __getitem__(self, index): 46 | img = self._loader(self.img_list[index]) 47 | ref = self._loader(self.ref_list[index]) 48 | # print(img.shape) 49 | score = self.score_list[index] 50 | 51 | # print(img.shape) 52 | 53 | if self._from_pool: 54 | (img_ptchs, ref_ptchs) = self._pool(index).data 55 | else: 56 | if self.phase.split('_')[0] == 'train': 57 | img, ref = self.tfs.horizontal_flip(img, ref) 58 | img_ptchs, ref_ptchs = self._to_patch_tensors(img, ref) 59 | elif self.phase.split('_')[0] == 'val': 60 | img_ptchs, ref_ptchs = self._to_patch_tensors(img, ref) 61 | elif self.phase.split('_')[0] == 'test': 62 | img_ptchs, ref_ptchs = self._to_patch_tensors(img, ref) 63 | else: 64 | pass 65 | 66 | return (img_ptchs, ref_ptchs), torch.tensor(score).float() 67 | 68 | def __len__(self): 69 | return len(self.img_list) 70 | 71 | def _loader(self, name): 72 | return io.imread(join(self.data_dir, name)) 73 | 74 | def _to_patch_tensors(self, img, ref): 75 | img_ptchs, ref_ptchs = self.tfs.to_patches(img, ref, ptch_size=64, n_ptchs=self.n_ptchs) 76 | img_ptchs, ref_ptchs = self.tfs.to_tensor(img_ptchs, ref_ptchs) 77 | return img_ptchs, ref_ptchs 78 | 79 | def _to_pool(self): 80 | len_data = self.__len__() 81 | pb = SimpleProgressBar(len_data) 82 | print("\ninitializing data pool...") 83 | for index in range(len_data): 84 | self._pool(index).store(self.__getitem__(index)[0]) 85 | pb.show(index, "[{:d}]/[{:d}] ".format(index+1, len_data)) 86 | 87 | def _aug_lists(self): 88 | if 
self.phase.split('_')[0] == 'test': 89 | return 90 | # Make samples from the reference images 91 | # The number of the reference samples appears 92 | # CRITICAL for the training effect! 93 | len_aug = len(self.ref_list)//5 if self.phase.split('_')[0] == 'train' else 10 94 | aug_list = self.ref_list*(len_aug//len(self.ref_list)+1) 95 | random.shuffle(aug_list) 96 | aug_list = aug_list[:len_aug] 97 | self.img_list.extend(aug_list) 98 | self.score_list += [0.0]*len_aug 99 | self.ref_list.extend(aug_list) 100 | 101 | if self.phase.split('_')[0] == 'train': 102 | # More samples in one epoch 103 | # This accelerates the training indeed as the cache 104 | # of the file system could then be fully leveraged 105 | # And also, augment the data in terms of number 106 | mul_aug = 16 107 | self.img_list *= mul_aug 108 | self.ref_list *= mul_aug 109 | self.score_list *= mul_aug 110 | 111 | def _read_lists(self): 112 | img_path = join(self.list_dir, self.phase + '_data.json') 113 | 114 | assert exists(img_path) 115 | 116 | with open(img_path, 'r') as fp: 117 | data_dict = json.load(fp) 118 | 119 | self.img_list = data_dict['img'] 120 | self.ref_list = data_dict.get('ref', self.img_list) 121 | self.score_list = data_dict.get('score', [0.0]*len(self.img_list)) 122 | 123 | 124 | class TID2013Dataset(IQADataset): 125 | def _read_lists(self): 126 | super()._read_lists() 127 | # For TID2013 128 | self.score_list = [(9.0 - s) / 9.0 * 100.0 for s in self.score_list] 129 | 130 | 131 | class WaterlooDataset(IQADataset): 132 | def _read_lists(self): 133 | super()._read_lists() 134 | self.score_list = [(1.0 - s) * 100.0 for s in self.score_list] 135 | 136 | #class LIVEDataset(IQADataset): 137 | # def _read_lists(self): 138 | # super()._read_lists() 139 | # self.score_list = [(1.0 - s) * 100.0 for s in self.score_list] 140 | 141 | class SCIDDataset(IQADataset): 142 | def _aug_lists(self): 143 | if self.phase.split('_')[0] == 'test': 144 | return 145 | # Make samples from the reference images 146 | # The number of the reference samples appears 147 | # CRITICAL for the training effect! 148 | len_aug = len(self.ref_list)//5 if self.phase.split('_')[0] == 'train' else 10 149 | aug_list = self.ref_list*(len_aug//len(self.ref_list)+1) 150 | random.shuffle(aug_list) 151 | aug_list = aug_list[:len_aug] 152 | self.img_list.extend(aug_list) 153 | self.score_list += [90.0]*len_aug 154 | self.ref_list.extend(aug_list) 155 | 156 | if self.phase.split('_')[0] == 'train': 157 | # More samples in one epoch 158 | # This accelerates the training indeed as the cache 159 | # of the file system could then be fully leveraged 160 | # And also, augment the data in terms of number 161 | mul_aug = 16 162 | self.img_list *= mul_aug 163 | self.ref_list *= mul_aug 164 | self.score_list *= mul_aug 165 | 166 | class Transforms: 167 | """ 168 | Self-designed transformation class 169 | ------------------------------------ 170 | 171 | Several things to fix and improve: 172 | 1. Strong coupling with Dataset cuz transformation types can't 173 | be simply assigned in training or testing code. (e.g. given 174 | a list of transforms as parameters to construct Dataset Obj) 175 | 2. Might be unsafe in multi-thread cases 176 | 3. Too complex decorators, not pythonic 177 | 4. The number of params of the wrapper and the inner function should 178 | be the same to avoid confusion 179 | 5. The use of params and isinstance() is not so elegant. 
For this, 180 | consider to stipulate a fix number and type of returned values for 181 | inner tf functions and do all the forwarding and passing work inside 182 | the decorator. tf_func applies transformation, which is all it does. 183 | 6. Performance has not been optimized at all 184 | 7. Doc it 185 | 8. Supports only numpy arrays 186 | """ 187 | def __init__(self): 188 | super(Transforms, self).__init__() 189 | 190 | def _pair_deco(tf_func): 191 | def transform(self, img, ref=None, *args, **kwargs): 192 | """ image shape (w, h, c) """ 193 | if (ref is not None) and (not isinstance(ref, np.ndarray)): 194 | args = (ref,)+args 195 | ref = None 196 | ret = tf_func(self, img, None, *args, **kwargs) 197 | assert(len(ret) == 2) 198 | if ref is None: 199 | return ret[0] 200 | else: 201 | num_var = tf_func.__code__.co_argcount-3 # self, img, ref not counted 202 | if (len(args)+len(kwargs)) == (num_var-1): 203 | # The last parameter is special 204 | # Resend it if necessary 205 | var_name = tf_func.__code__.co_varnames[-1] 206 | kwargs[var_name] = ret[1] 207 | tf_ref, _ = tf_func(self, ref, None, *args, **kwargs) 208 | return ret[0], tf_ref 209 | return transform 210 | 211 | def _horizontal_flip(self, img, flip): 212 | if flip is None: 213 | flip = (random.random() > 0.5) 214 | return (img[...,::-1,:] if flip else img), flip 215 | 216 | def _to_tensor(self, img): 217 | return torch.from_numpy((img.astype(np.float32)/255).swapaxes(-3,-2).swapaxes(-3,-1)), () 218 | 219 | def _crop_square(self, img, crop_size, pos): 220 | if pos is None: 221 | h, w = img.shape[-3:-1] 222 | assert(crop_size <= h and crop_size <= w) 223 | ub = random.randint(0, h-crop_size) 224 | lb = random.randint(0, w-crop_size) 225 | pos = (ub, ub+crop_size, lb, lb+crop_size) 226 | return img[...,pos[0]:pos[1],pos[-2]:pos[-1],:], pos 227 | 228 | def _extract_patches(self, img, ptch_size): 229 | # Crop non-overlapping patches as the stride equals patch size 230 | h, w = img.shape[-3:-1] 231 | nh, nw = h//ptch_size, w//ptch_size 232 | assert(nh>0 and nw>0) 233 | vptchs = np.stack(np.split(img[...,:nh*ptch_size,:,:], nh, axis=-3)) 234 | ptchs = np.concatenate(np.split(vptchs[...,:nw*ptch_size,:], nw, axis=-2)) 235 | return ptchs, nh*nw 236 | 237 | def _to_patches(self, img, ptch_size, n_ptchs, idx): 238 | ptchs, n = self._extract_patches(img, ptch_size) 239 | if not n_ptchs: 240 | n_ptchs = n 241 | elif n_ptchs > n: 242 | n_ptchs = n 243 | if idx is None: 244 | idx = list(range(n)) 245 | random.shuffle(idx) 246 | idx = idx[:n_ptchs] 247 | return ptchs[idx], idx 248 | 249 | @_pair_deco 250 | def horizontal_flip(self, img, ref=None, flip=None): 251 | return self._horizontal_flip(img, flip=flip) 252 | 253 | @_pair_deco 254 | def to_tensor(self, img, ref=None): 255 | return self._to_tensor(img) 256 | 257 | @_pair_deco 258 | def crop_square(self, img, ref=None, crop_size=64, pos=None): 259 | return self._crop_square(img, crop_size=crop_size, pos=pos) 260 | 261 | @_pair_deco 262 | def to_patches(self, img, ref=None, ptch_size=64, n_ptchs=None, idx=None): 263 | return self._to_patches(img, ptch_size=ptch_size, n_ptchs=n_ptchs, idx=idx) 264 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/iqaScrach.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ 3 | Main Script 4 | """ 5 | 6 | import sys 7 | import os 8 | 9 | import shutil 10 | import argparse 11 | 12 | import torch 13 | import torch.backends.cudnn as cudnn 14 | 
from torch import nn 15 | 16 | from model import IQANet 17 | from dataset import IQADataset,SCIDDataset 18 | from utils import AverageMeter, SROCC, PLCC, RMSE 19 | from utils import SimpleProgressBar as ProgressBar 20 | from utils import MMD_loss 21 | from VIDLoss import VIDLoss 22 | #f=open("log.txt","a") 23 | #ftmp=sys.stdout 24 | #sys.stdout=f 25 | 26 | def validate(val_loader, model, criterion, show_step=False): 27 | losses = AverageMeter() 28 | srocc = SROCC() 29 | len_val = len(val_loader) 30 | pb = ProgressBar(len_val, show_step=show_step) 31 | 32 | print("Validation") 33 | 34 | # Switch to evaluate mode 35 | model.eval() 36 | 37 | with torch.no_grad(): 38 | for i, ((img,ref), score)in enumerate(val_loader): 39 | img, ref, score = img.cuda(), ref.cuda(), score.squeeze().cuda() 40 | 41 | # Compute output 42 | _,_,output,_,_,_,_ = model(img, img) 43 | 44 | loss = criterion(output, score) 45 | losses.update(loss, img.shape[0]) 46 | 47 | output = output.cpu() 48 | score = score.cpu() 49 | srocc.update(score.numpy(), output.numpy()) 50 | 51 | pb.show(i, "[{0:d}/{1:d}]\t" 52 | "Loss {loss.val:.4f} ({loss.avg:.4f})\t" 53 | "Output {out:.4f}\t" 54 | "Target {tar:.4f}\t" 55 | .format(i+1, len_val, loss=losses, 56 | out=output, tar=score)) 57 | 58 | 59 | return float(1.0-srocc.compute()) # losses.avg 60 | 61 | 62 | def train(train_loader, model, criterion, optimizer, epoch): 63 | losses1 = AverageMeter() 64 | losses2 = AverageMeter() 65 | losses3 = AverageMeter() 66 | losses4 = AverageMeter() 67 | losses5 = AverageMeter() 68 | # losses6 = AverageMeter() 69 | len_train = len(train_loader) 70 | pb = ProgressBar(len_train) 71 | 72 | print("Training") 73 | 74 | # Switch to train mode 75 | model.train() 76 | criterion.cuda() 77 | # vidloss = VIDLoss(128,256,128).cuda() 78 | trip_loss = nn.TripletMarginLoss(margin=0.5, p=2.0).cuda() 79 | # triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2).cuda() 80 | for i, ((img,ref), score)in enumerate(train_loader): 81 | img, ref, score = img.cuda(), ref.cuda(), score.cuda() 82 | # Compute output 83 | FS,NFake_FS,NS,f1,f2,fake_f1, fake_f2 = model(img, ref) 84 | 85 | loss1 = criterion(FS, score) 86 | loss2 = criterion(NFake_FS, score) 87 | loss5 = criterion(NS, score) 88 | loss3 = 20*trip_loss(f1,fake_f1,f2) 89 | loss4 = 20*trip_loss(f2,fake_f2,f1).sum() 90 | 91 | 92 | loss = loss1+loss2+loss3+loss4+loss5 93 | # Measure accuracy and record loss 94 | losses1.update(loss1.data, img.shape[0]) 95 | losses2.update(loss2.data, img.shape[0]) 96 | losses3.update(loss3.data, img.shape[0]) 97 | losses4.update(loss4.data, img.shape[0]) 98 | losses5.update(loss5.data, img.shape[0]) 99 | 100 | # Compute gradient and do SGD step 101 | optimizer.zero_grad() 102 | loss.backward() 103 | optimizer.step() 104 | 105 | pb.show(i, "[{0:d}/{1:d}]\t" 106 | "FR {loss1.val:.4f} ({loss1.avg:.4f})\t" 107 | "NR {loss2.val:.4f} ({loss2.avg:.4f})\t" 108 | "NS {loss5.val:.4f} ({loss5.avg:.4f})\t" 109 | "fco {loss3.val:.4f} ({loss3.avg:.4f})\t" 110 | "sf {loss4.val:.4f} ({loss4.avg:.4f})\t" 111 | 112 | .format(i+1, len_train, loss1=losses1, loss2=losses2, \ 113 | loss5=losses5,loss3=losses3,loss4=losses4)) 114 | 115 | 116 | 117 | def train_iqa(args): 118 | pro = args.pro 119 | batch_size = args.batch_size 120 | num_workers = args.workers 121 | data_dir = args.data_dir 122 | list_dir = args.list_dir 123 | resume = args.resume 124 | n_ptchs = args.n_ptchs_per_img 125 | 126 | print(' '.join(sys.argv)) 127 | 128 | for k, v in args.__dict__.items(): 129 | print(k, ':', v) 130 | 131 | model = 
IQANet(args.weighted,istrain=True) 132 | criterion = nn.L1Loss() 133 | 134 | # Data loaders 135 | train_loader = torch.utils.data.DataLoader( 136 | Dataset(data_dir, 'train_'+str(pro), list_dir=list_dir, 137 | n_ptchs=n_ptchs), 138 | batch_size=batch_size, shuffle=True, num_workers=num_workers, 139 | pin_memory=True, drop_last=True 140 | ) 141 | val_loader = torch.utils.data.DataLoader( 142 | Dataset(data_dir, 'val_'+str(pro), list_dir=list_dir, 143 | n_ptchs=n_ptchs, sample_once=True), 144 | batch_size=1, shuffle=False, num_workers=0, 145 | pin_memory=True 146 | ) 147 | 148 | optimizer = torch.optim.Adam(model.parameters(), 149 | lr=args.lr, 150 | betas=(0.9, 0.999), 151 | weight_decay=args.weight_decay) 152 | 153 | cudnn.benchmark = True 154 | min_loss = 100.0 155 | start_epoch = 0 156 | 157 | # Resume from a checkpoint 158 | if resume: 159 | resume = resume.split('t.')[0]+'t_'+str(pro)+'.pkl' 160 | if os.path.isfile(resume): 161 | print("=> loading checkpoint '{}'".format(resume)) 162 | checkpoint = torch.load(resume) 163 | start_epoch = checkpoint['epoch'] 164 | if not args.anew: 165 | min_loss = checkpoint['min_loss'] 166 | model.load_state_dict(checkpoint['state_dict']) 167 | print("=> loaded checkpoint '{}' (epoch {})" 168 | .format(resume, start_epoch)) 169 | else: 170 | print("=> no checkpoint found at '{}'".format(resume)) 171 | 172 | if args.evaluate: 173 | validate(val_loader, model.cuda(), criterion, show_step=True) 174 | return 175 | 176 | for epoch in range(start_epoch, args.epochs): 177 | lr = adjust_learning_rate(args, optimizer, epoch) 178 | print("\nEpoch: [{0}]\tlr {1:.06f}".format(epoch, lr)) 179 | # Train for one epoch 180 | train(train_loader, model.cuda(), criterion, optimizer, epoch) 181 | 182 | if epoch % 1 == 0: 183 | # Evaluate on validation set 184 | loss = validate(val_loader, model.cuda(), criterion) 185 | 186 | is_best = loss < min_loss 187 | min_loss = min(loss, min_loss) 188 | print("Current: {:.6f}\tBest: {:.6f}\t".format(loss, min_loss)) 189 | checkpoint_path = '../models/'+ resume.split('/')[2]+'/checkpoint_latest_'+str(pro)+'.pkl' 190 | save_checkpoint({ 191 | 'epoch': epoch + 1, 192 | 'state_dict': model.state_dict(), 193 | 'min_loss': min_loss, 194 | }, is_best, filename=checkpoint_path,pro =args.pro, res = resume) 195 | 196 | # if epoch % args.dump_per == 0: 197 | # history_path = '../models/checkpoint_{:03d}_'.format(epoch+1)+str(pro)+'.pkl' 198 | # shutil.copyfile(checkpoint_path, history_path) 199 | 200 | 201 | 202 | 203 | 204 | 205 | def adjust_learning_rate(args, optimizer, epoch): 206 | """ 207 | Sets the learning rate 208 | """ 209 | if args.lr_mode == 'step': 210 | lr = args.lr * (0.5 ** (epoch // args.step)) 211 | elif args.lr_mode == 'poly': 212 | lr = args.lr * (1 - epoch / args.epochs) ** 1.1 213 | elif args.lr_mode == 'const': 214 | lr = args.lr 215 | else: 216 | raise ValueError('Unknown lr mode {}'.format(args.lr_mode)) 217 | 218 | for param_group in optimizer.param_groups: 219 | param_group['lr'] = lr 220 | return lr 221 | 222 | def save_checkpoint(state, is_best, filename='checkpoint.pkl',pro = '0',res='./script'): 223 | torch.save(state, filename) 224 | if is_best: 225 | shutil.copyfile(filename, '../models/'+ res.split('/')[2]+'/model_best_'+str(pro)+'.pkl') 226 | 227 | 228 | def parse_args(): 229 | # Training settings 230 | parser = argparse.ArgumentParser(description='') 231 | parser.add_argument('-cmd', type=str,default='train') 232 | parser.add_argument('-d', '--data-dir', default='../../../datasets/tid2013/') 233 | 
parser.add_argument('-l', '--list-dir', default='../sci_scripts/siqad-scripts-6-2-2/',
234 | help='List dir to look for train_images.txt etc. '
235 | 'It is the same as --data-dir if not set.')
236 | parser.add_argument('-n', '--n-ptchs-per-img', type=int, default=8, metavar='N',
237 | help='number of patches for each image (default: 8)')
238 | parser.add_argument('--step', type=int, default=200)
239 | parser.add_argument('--batch-size', type=int, default=32, metavar='B',
240 | help='input batch size for training (default: 32)')
241 | parser.add_argument('--epochs', type=int, default=1000, metavar='NE',
242 | help='number of epochs to train (default: 1000)')
243 | parser.add_argument('--lr', type=float, default=1e-4, metavar='LR',
244 | help='learning rate (default: 1e-4)')
245 | parser.add_argument('--lr-mode', type=str, default='const')
246 | parser.add_argument('--weight-decay', default=1e-4, type=float,
247 | metavar='W', help='weight decay (default: 1e-4)')
248 | parser.add_argument('--resume', default='../models/siqad/checkpoint_latest.pkl', type=str, metavar='PATH',
249 | help='path to latest checkpoint')
250 | parser.add_argument('--pro', type=int, default=2)
251 | parser.add_argument('--workers', type=int, default=8)
252 | parser.add_argument('--subset', default='test')
253 | parser.add_argument('--evaluate', dest='evaluate',
254 | action='store_true',
255 | help='evaluate model on validation set')
256 | parser.add_argument('--weighted', default=True, dest='weighted')
257 | parser.add_argument('--dump_per', type=int, default=50,
258 | help='the number of epochs to make a checkpoint')
259 | parser.add_argument('--dataset', type=str, default='IQA')
260 | parser.add_argument('--anew', action='store_true')
261 | 
262 | args = parser.parse_args()
263 | 
264 | return args
265 | 
266 | 
267 | def main():
268 | args = parse_args()
269 | # Choose dataset
270 | global Dataset
271 | Dataset = globals().get(args.dataset+'Dataset', None)
272 | if args.cmd == 'train':
273 | train_iqa(args)
274 | elif args.cmd == 'test':
275 | test_iqa(args)  # test_iqa is defined in iqaTest.py; use that script for testing
276 | 
277 | 
278 | if __name__ == '__main__':
279 | main()
280 | 
-------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/iqaTest.py: --------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | """
3 | Main Script
4 | """
5 | 
6 | import sys
7 | import os
8 | 
9 | import shutil
10 | import argparse
11 | 
12 | import torch
13 | import torch.backends.cudnn as cudnn
14 | from torch import nn
15 | 
16 | from model import IQANet
17 | from dataset import TID2013Dataset, WaterlooDataset, IQADataset
18 | from utils import AverageMeter, SROCC, PLCC, RMSE
19 | from utils import SimpleProgressBar as ProgressBar
20 | from utils import MMD_loss
21 | from VIDLoss import VIDLoss
22 | 
23 | 
24 | 
25 | def test(test_data_loader, model):
26 | srocc = SROCC()
27 | plcc = PLCC()
28 | rmse = RMSE()
29 | len_test = len(test_data_loader)
30 | pb = ProgressBar(len_test, show_step=True)
31 | 
32 | print("Testing")
33 | 
34 | model.eval()
35 | with torch.no_grad():
36 | for i, ((img, ref), score) in enumerate(test_data_loader):
37 | img = img.cuda()
38 | ref = ref.cuda()
39 | output = model(img, img).cpu().data.numpy()  # no-reference inference: the reference is not passed to the model
40 | score = score.data.numpy()
41 | 
42 | srocc.update(score, output)
43 | plcc.update(score, output)
44 | rmse.update(score, output)
45 | 
46 | pb.show(i, "Test: [{0:5d}/{1:5d}]\t"
47 | "Score: {2:.4f}\t"
48 | "Label: {3:.4f}"
49 | .format(i+1, len_test, float(output), float(score)))
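# srocc, plcc and rmse only accumulate (label, prediction) pairs inside the
# loop; the rank correlation, linear correlation and RMSE are each computed
# once over the whole test split in the summary print below.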
50 | 51 | print("\n\nSROCC: {0:.4f}\n" 52 | "PLCC: {1:.4f}\n" 53 | "RMSE: {2:.4f}" 54 | .format(srocc.compute(), plcc.compute(), rmse.compute()) 55 | ) 56 | 57 | 58 | 59 | def test_iqa(args): 60 | batch_size = 1 61 | pro = args.pro 62 | num_workers = args.workers 63 | subset = args.subset 64 | data_dir = args.data_dir 65 | list_dir = args.list_dir 66 | resume = args.resume 67 | 68 | for k, v in args.__dict__.items(): 69 | print(k, ':', v) 70 | 71 | model = IQANet(args.weighted) 72 | 73 | test_loader = torch.utils.data.DataLoader( 74 | Dataset(data_dir, phase='test_'+str(pro), list_dir=list_dir, 75 | n_ptchs=args.n_ptchs_per_img, 76 | subset=subset), 77 | batch_size=batch_size, shuffle=False, 78 | num_workers=num_workers, pin_memory=True 79 | ) 80 | 81 | cudnn.benchmark = True 82 | 83 | # Resume from a checkpoint 84 | if resume: 85 | resume = resume.split('t.')[0]+'t_'+str(pro)+'.pkl' 86 | if os.path.isfile(resume): 87 | print("=> loading checkpoint '{}'".format(resume)) 88 | checkpoint = torch.load(resume) 89 | model.load_state_dict(checkpoint['state_dict']) 90 | print("=> loaded checkpoint '{}' (epoch {})" 91 | .format(resume, checkpoint['epoch'])) 92 | else: 93 | print("=> no checkpoint found at '{}'".format(resume)) 94 | 95 | test(test_loader, model.cuda()) 96 | 97 | 98 | def adjust_learning_rate(args, optimizer, epoch): 99 | """ 100 | Sets the learning rate 101 | """ 102 | if args.lr_mode == 'step': 103 | lr = args.lr * (0.5 ** (epoch // args.step)) 104 | elif args.lr_mode == 'poly': 105 | lr = args.lr * (1 - epoch / args.epochs) ** 1.1 106 | elif args.lr_mode == 'const': 107 | lr = args.lr 108 | else: 109 | raise ValueError('Unknown lr mode {}'.format(args.lr_mode)) 110 | 111 | for param_group in optimizer.param_groups: 112 | param_group['lr'] = lr 113 | return lr 114 | 115 | def save_checkpoint(state, is_best, filename='checkpoint.pkl'): 116 | torch.save(state, filename) 117 | if is_best: 118 | shutil.copyfile(filename, '../models/model_best.pkl') 119 | 120 | 121 | def parse_args(): 122 | # Training settings 123 | parser = argparse.ArgumentParser(description='') 124 | parser.add_argument('-cmd', type=str,default='test') 125 | parser.add_argument('-d', '--data-dir', default='../../../datasets/tid2013/') 126 | parser.add_argument('-l', '--list-dir', default='../sci_scripts/siqad-scripts-6-2-2/', 127 | help='List dir to look for train_images.txt etc. 
' 128 | 'It is the same as --data-dir if not set.')
129 | parser.add_argument('-n', '--n-ptchs-per-img', type=int, default=1024, metavar='N',
130 | help='number of patches for each image (default: 1024)')
131 | parser.add_argument('--step', type=int, default=200)
132 | parser.add_argument('--batch-size', type=int, default=32, metavar='B',
133 | help='input batch size for training (default: 32)')
134 | parser.add_argument('--epochs', type=int, default=1000, metavar='NE',
135 | help='number of epochs to train (default: 1000)')
136 | parser.add_argument('--lr', type=float, default=1e-4, metavar='LR',
137 | help='learning rate (default: 1e-4)')
138 | parser.add_argument('--lr-mode', type=str, default='const')
139 | parser.add_argument('--weight-decay', default=1e-4, type=float,
140 | metavar='W', help='weight decay (default: 1e-4)')
141 | parser.add_argument('--resume', default='../models/siqad/model_best.pkl', type=str, metavar='PATH',
142 | help='path to latest checkpoint')
143 | parser.add_argument('--workers', type=int, default=8)
144 | parser.add_argument('--pro', type=int, default=2)
145 | parser.add_argument('--subset', default='test')
146 | parser.add_argument('--evaluate', dest='evaluate',
147 | action='store_true',
148 | help='evaluate model on validation set')
149 | parser.add_argument('--weighted', default=True, dest='weighted')
150 | parser.add_argument('--dump_per', type=int, default=50,
151 | help='the number of epochs to make a checkpoint')
152 | parser.add_argument('--dataset', type=str, default='IQA')
153 | parser.add_argument('--anew', action='store_true')
154 | 
155 | args = parser.parse_args()
156 | 
157 | return args
158 | 
159 | 
160 | def main():
161 | args = parse_args()
162 | # Choose dataset
163 | global Dataset
164 | Dataset = globals().get(args.dataset+'Dataset', None)
165 | if args.cmd == 'train':
166 | train_iqa(args)  # train_iqa is defined in iqaScrach.py; use that script for training
167 | elif args.cmd == 'test':
168 | test_iqa(args)
169 | 
170 | 
171 | if __name__ == '__main__':
172 | main()
173 | 
-------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/model.py: --------------------------------------------------------------------------------
1 | """
2 | The CNN Model for FR-IQA
3 | -------------------------
4 | 
5 | KVASS Tastes good! 
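Shape sketch (mirrors test() at the bottom of this file; a CUDA device is
assumed): inputs are stacks of 64x64 RGB patches of shape
(n_imgs, n_ptchs_per_img, 3, 64, 64), e.g.

    net = IQANet(weighted=True, istrain=True).cuda()
    x = torch.randn(2, 16, 3, 64, 64).cuda()
    FS, NFake_FS, NS, f1, f2, fake_f1, fake_f2 = net(x, x)

With istrain=False the forward pass returns only the no-reference score NS.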
6 | """ 7 | 8 | import math 9 | import torch 10 | import torch.nn as nn 11 | from torch.autograd import Variable 12 | from Inv_arch import InvRescaleNet 13 | 14 | class Conv3x3(nn.Module): 15 | def __init__(self, in_dim, out_dim): 16 | super(Conv3x3, self).__init__() 17 | self.conv = nn.Sequential( 18 | nn.Conv2d(in_dim, out_dim, kernel_size=3, stride=(1,1), padding=(1,1), bias=True), 19 | nn.LeakyReLU(0.2, inplace=True) 20 | ) 21 | 22 | def forward(self, x): 23 | return self.conv(x) 24 | 25 | class MaxPool2x2(nn.Module): 26 | def __init__(self): 27 | super(MaxPool2x2, self).__init__() 28 | self.pool = nn.MaxPool2d(kernel_size=2, stride=(2,2), padding=(0,0)) 29 | 30 | def forward(self, x): 31 | return self.pool(x) 32 | 33 | class DoubleConv(nn.Module): 34 | """ 35 | Double convolution as a basic block for the net 36 | 37 | Actually this is from a VGG16 block 38 | """ 39 | def __init__(self, in_dim, out_dim,ispool = True): 40 | super(DoubleConv, self).__init__() 41 | self.conv1 = Conv3x3(in_dim, out_dim) 42 | self.conv2 = Conv3x3(out_dim, out_dim) 43 | self.pool = MaxPool2x2() 44 | self.ispool = ispool 45 | 46 | def forward(self, x): 47 | y = self.conv1(x) 48 | y = self.conv2(y) 49 | if self.ispool: 50 | y = self.pool(y) 51 | return y 52 | 53 | class SingleConv(nn.Module): 54 | def __init__(self, in_dim, out_dim): 55 | super(SingleConv, self).__init__() 56 | self.conv = Conv3x3(in_dim, out_dim) 57 | self.pool = MaxPool2x2() 58 | 59 | def forward(self, x): 60 | y = self.conv(x) 61 | y = self.pool(y) 62 | return y 63 | 64 | 65 | class IQANet(nn.Module): 66 | """ 67 | The CNN model for full-reference image quality assessment 68 | 69 | Implements a siamese network at first and then there is regression 70 | """ 71 | def __init__(self, weighted=False,istrain=False,scale=4,\ 72 | block_num =3,channel_input=256): 73 | super(IQANet, self).__init__() 74 | 75 | self.weighted = weighted 76 | self.istrain = istrain 77 | self.scale = scale 78 | 79 | # Feature extraction layers 80 | self.fl1 = DoubleConv(3, 64) 81 | self.fl2 = DoubleConv(64, 128) 82 | self.fl3 = DoubleConv(128, 256) 83 | 84 | 85 | self.sfl1 = DoubleConv(3, 32*self.scale) 86 | self.sfl21 = DoubleConv(32*self.scale, 64*self.scale,ispool = False) 87 | self.sfl22 = DoubleConv(64*self.scale, 64*self.scale) 88 | self.sfl23 = DoubleConv(64*self.scale, 64*self.scale,ispool = False) 89 | self.sfl3 = DoubleConv(64*self.scale, 128*4) 90 | 91 | self.InvRescaleNet = InvRescaleNet(split_len1=channel_input, \ 92 | split_len2=channel_input, \ 93 | block_num=block_num,\ 94 | Use1x1 = True) 95 | # Fusion layers 96 | self.cl1 = SingleConv(256*2, 128) 97 | self.cl2 = nn.Conv2d(128, 64, kernel_size=3) 98 | 99 | # Regression layers 100 | self.rl1 = nn.Linear(256, 32) 101 | self.rl2 = nn.Linear(32, 1) 102 | 103 | 104 | # Fusion layers 105 | self.scl1 = SingleConv(512, 128) 106 | self.scl2 = nn.Conv2d(128, 64, kernel_size=3) 107 | 108 | # Regression layers 109 | self.srl1 = nn.Linear(256, 32) 110 | self.srl2 = nn.Linear(32, 1) 111 | 112 | self.gn=torch.nn.GroupNorm(num_channels=256,num_groups=64) 113 | 114 | if self.weighted: 115 | self.wl1 = nn.GRU(256, 32, batch_first=True) 116 | self.wl2 = nn.Linear(32, 1) 117 | 118 | self.swl1 = nn.GRU(256, 32, batch_first=True) 119 | self.swl2 = nn.Linear(32, 1) 120 | 121 | 122 | self._initialize_weights() 123 | 124 | def _get_initial_state(self, batch_size): 125 | h0 = torch.zeros(1, batch_size, 32,device=0) 126 | return h0 127 | 128 | def extract_feature(self, x): 129 | """ Forward function for feature extraction of each 
branch of the siamese net """ 130 | y = self.fl1(x) 131 | y = self.fl2(y) 132 | y = self.fl3(y) 133 | # y = self.gn(y) 134 | 135 | return y 136 | 137 | def NR_extract_feature(self, x): 138 | """ Forward function for feature extraction of each branch of the siamese net """ 139 | y = self.sfl1(x) 140 | y = self.sfl21(y) 141 | y = self.sfl22(y) 142 | y = self.sfl23(y) 143 | y = self.sfl3(y) 144 | y1,y2 = torch.split(y, int(y.shape[1]/2), dim=1) 145 | 146 | 147 | return y1,y2 148 | 149 | def gaussian_batch(self, dims,scale=1): 150 | lenth = dims[0]*dims[1]*dims[2]*dims[3] 151 | inv = torch.normal(mean=0, std=0.5*torch.ones(lenth)).cuda() 152 | return inv.view_as(torch.Tensor(dims[0],dims[1],dims[2],dims[3])) 153 | 154 | 155 | def forward(self, x1, x2): 156 | """ x1 as distorted and x2 as reference """ 157 | n_imgs, n_ptchs_per_img = x1.shape[0:2] 158 | 159 | 160 | # Reshape 161 | x1 = x1.view(-1,*x1.shape[-3:]) 162 | x2 = x2.view(-1,*x2.shape[-3:]) 163 | 164 | f1 = self.extract_feature(x1) 165 | f2 = self.extract_feature(x2) 166 | sf1,sf2 = self.NR_extract_feature(x1) 167 | fake_f1, fake_f2 = self.InvRescaleNet(sf1,sf2) 168 | 169 | ini_f_com = torch.cat([f2, f1], dim=1) 170 | fake_f_com = torch.cat([fake_f2, fake_f1], dim=1) 171 | f_com = torch.cat([ini_f_com,fake_f_com], dim=0) 172 | 173 | f_com = self.cl1(f_com) 174 | f_com = self.cl2(f_com) 175 | flatten = f_com.view(f_com.shape[0], -1) 176 | y = self.rl1(flatten) 177 | y = self.rl2(y) 178 | y1,y2 = torch.split(y, int(y.shape[0]/2), dim=0) 179 | 180 | fake_sf1,fake_sf2 = self.InvRescaleNet(f1,f2, rev=True) 181 | sf = torch.cat((sf1,sf2),1) 182 | 183 | 184 | NF_com = self.scl1(sf) 185 | NF_com = self.scl2(NF_com) 186 | Nflatten = NF_com.view(NF_com.shape[0], -1) 187 | Ny = self.srl1(Nflatten) 188 | Ny = self.srl2(Ny) 189 | 190 | if self.weighted: 191 | # print('use weighted') 192 | flatten = flatten.view(2*n_imgs, n_ptchs_per_img,-1) 193 | w,_ = self.wl1(flatten) 194 | w = self.wl2(w) 195 | w = torch.nn.functional.relu(w) + 1e-8 196 | # Weighted average 197 | w1,w2 = torch.split(w, int(w.shape[0]/2), dim=0) 198 | 199 | y1_by_img = y1.view(n_imgs, n_ptchs_per_img) 200 | w1_by_img = w1.view(n_imgs, n_ptchs_per_img) 201 | FS = torch.sum(y1_by_img*w1_by_img, dim=1) / torch.sum(w1_by_img, dim=1) 202 | 203 | y2_by_img = y2.view(n_imgs, n_ptchs_per_img) 204 | w2_by_img = w2.view(n_imgs, n_ptchs_per_img) 205 | NFake_FS = torch.sum(y2_by_img*w2_by_img, dim=1) / torch.sum(w2_by_img, dim=1) 206 | 207 | 208 | 209 | Nflatten = Nflatten.view(n_imgs, n_ptchs_per_img,-1) 210 | sw,_ = self.swl1(Nflatten,self._get_initial_state(Nflatten.size(0))) 211 | sw = self.swl2(sw) 212 | sw = torch.nn.functional.relu(sw) + 1e-8 213 | Ny_by_img = Ny.view(n_imgs, n_ptchs_per_img) 214 | Nw_by_img = sw.view(n_imgs, n_ptchs_per_img) 215 | NS = torch.sum(Ny_by_img*Nw_by_img, dim=1) / torch.sum(Nw_by_img, dim=1) 216 | 217 | 218 | else: 219 | print('not use weighted') 220 | # Calculate average score for each image 221 | FS = torch.mean(y1.view(n_imgs, n_ptchs_per_img), dim=1) 222 | NFake_FS = torch.mean(y2.view(n_imgs, n_ptchs_per_img), dim=1) 223 | NS = torch.mean(Ny.view(n_imgs, n_ptchs_per_img), dim=1) 224 | 225 | if self.istrain: 226 | return FS.squeeze(),NFake_FS.squeeze(),NS.squeeze(),\ 227 | f1,f2,fake_f1, fake_f2 228 | 229 | else: 230 | return NS.squeeze() 231 | 232 | 233 | 234 | def _initialize_weights(self): 235 | for m in self.modules(): 236 | if isinstance(m, nn.Conv2d): 237 | n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels 238 | m.weight.data.normal_(0, 
math.sqrt(2. / n)) 239 | if m.bias is not None: 240 | m.bias.data.zero_() 241 | elif isinstance(m, nn.BatchNorm2d): 242 | m.weight.data.fill_(1) 243 | m.bias.data.zero_() 244 | elif isinstance(m, nn.Linear): 245 | m.weight.data.normal_(0, 0.01) 246 | m.bias.data.zero_() 247 | else: 248 | pass 249 | 250 | 251 | def test(): 252 | 253 | net = IQANet(weighted=True,istrain=True) 254 | net.cuda() 255 | 256 | x1 = torch.randn(2, 16,3,64,64) 257 | x1 = Variable(x1.cuda()) 258 | 259 | y1,y2,y3,y4,y5,y6,y7= net.forward(x1,x1) 260 | 261 | 262 | print(y1.shape,y2.shape,y3.shape,y4.shape,y5.shape,y6.shape,y7.shape) 263 | 264 | 265 | if __name__== '__main__': 266 | test() 267 | 268 | 269 | 270 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/model_ini.py: -------------------------------------------------------------------------------- 1 | """ 2 | The CNN Model for FR-IQA 3 | ------------------------- 4 | 5 | KVASS Tastes good! 6 | """ 7 | 8 | import math 9 | import torch 10 | import torch.nn as nn 11 | from torch.autograd import Variable 12 | from Inv_arch import InvRescaleNet 13 | 14 | class Conv3x3(nn.Module): 15 | def __init__(self, in_dim, out_dim): 16 | super(Conv3x3, self).__init__() 17 | self.conv = nn.Sequential( 18 | nn.Conv2d(in_dim, out_dim, kernel_size=3, stride=(1,1), padding=(1,1), bias=True), 19 | nn.LeakyReLU(0.2, inplace=True) 20 | ) 21 | 22 | def forward(self, x): 23 | return self.conv(x) 24 | 25 | class MaxPool2x2(nn.Module): 26 | def __init__(self): 27 | super(MaxPool2x2, self).__init__() 28 | self.pool = nn.MaxPool2d(kernel_size=2, stride=(2,2), padding=(0,0)) 29 | 30 | def forward(self, x): 31 | return self.pool(x) 32 | 33 | class DoubleConv(nn.Module): 34 | """ 35 | Double convolution as a basic block for the net 36 | 37 | Actually this is from a VGG16 block 38 | """ 39 | def __init__(self, in_dim, out_dim,ispool = True): 40 | super(DoubleConv, self).__init__() 41 | self.conv1 = Conv3x3(in_dim, out_dim) 42 | self.conv2 = Conv3x3(out_dim, out_dim) 43 | self.pool = MaxPool2x2() 44 | self.ispool = ispool 45 | 46 | def forward(self, x): 47 | y = self.conv1(x) 48 | y = self.conv2(y) 49 | if self.ispool: 50 | y = self.pool(y) 51 | return y 52 | 53 | class SingleConv(nn.Module): 54 | def __init__(self, in_dim, out_dim): 55 | super(SingleConv, self).__init__() 56 | self.conv = Conv3x3(in_dim, out_dim) 57 | self.pool = MaxPool2x2() 58 | 59 | def forward(self, x): 60 | y = self.conv(x) 61 | y = self.pool(y) 62 | return y 63 | 64 | 65 | class IQANet(nn.Module): 66 | """ 67 | The CNN model for full-reference image quality assessment 68 | 69 | Implements a siamese network at first and then there is regression 70 | """ 71 | def __init__(self, weighted=False,istrain=False,scale=4,\ 72 | block_num =3,channel_input=256): 73 | super(IQANet, self).__init__() 74 | 75 | self.weighted = weighted 76 | self.istrain = istrain 77 | self.scale = scale 78 | 79 | # Feature extraction layers 80 | self.fl1 = DoubleConv(3, 64) 81 | self.fl2 = DoubleConv(64, 128) 82 | self.fl3 = DoubleConv(128, 256) 83 | 84 | 85 | self.sfl1 = DoubleConv(3, 32*self.scale) 86 | self.sfl21 = DoubleConv(32*self.scale, 64*self.scale,ispool = False) 87 | self.sfl22 = DoubleConv(64*self.scale, 64*self.scale) 88 | self.sfl23 = DoubleConv(64*self.scale, 64*self.scale,ispool = False) 89 | self.sfl3 = DoubleConv(64*self.scale, 128*4) 90 | 91 | self.InvRescaleNet = InvRescaleNet(split_len1=channel_input, \ 92 | split_len2=channel_input, \ 93 | block_num=block_num,\ 94 | Use1x1 = True) 95 
| # Fusion layers 96 | self.cl1 = SingleConv(256*2, 128) 97 | self.cl2 = nn.Conv2d(128, 64, kernel_size=3) 98 | 99 | # Regression layers 100 | self.rl1 = nn.Linear(256, 32) 101 | self.rl2 = nn.Linear(32, 1) 102 | 103 | 104 | # Fusion layers 105 | self.scl1 = SingleConv(512, 128) 106 | self.scl2 = nn.Conv2d(128, 64, kernel_size=3) 107 | 108 | # Regression layers 109 | self.srl1 = nn.Linear(256, 32) 110 | self.srl2 = nn.Linear(32, 1) 111 | 112 | self.gn=torch.nn.GroupNorm(num_channels=256,num_groups=64) 113 | 114 | if self.weighted: 115 | self.wl1 = nn.Linear(256, 32) 116 | self.wl2 = nn.Linear(32, 1) 117 | 118 | self.swl1 = nn.Linear(256, 32) 119 | self.swl2 = nn.Linear(32, 1) 120 | 121 | 122 | self._initialize_weights() 123 | 124 | def extract_feature(self, x): 125 | """ Forward function for feature extraction of each branch of the siamese net """ 126 | y = self.fl1(x) 127 | y = self.fl2(y) 128 | y = self.fl3(y) 129 | # y = self.gn(y) 130 | 131 | return y 132 | 133 | def NR_extract_feature(self, x): 134 | """ Forward function for feature extraction of each branch of the siamese net """ 135 | y = self.sfl1(x) 136 | y = self.sfl21(y) 137 | y = self.sfl22(y) 138 | y = self.sfl23(y) 139 | y = self.sfl3(y) 140 | y1,y2 = torch.split(y, int(y.shape[1]/2), dim=1) 141 | 142 | 143 | return y1,y2 144 | 145 | def gaussian_batch(self, dims,scale=1): 146 | lenth = dims[0]*dims[1]*dims[2]*dims[3] 147 | inv = torch.normal(mean=0, std=0.5*torch.ones(lenth)).cuda() 148 | return inv.view_as(torch.Tensor(dims[0],dims[1],dims[2],dims[3])) 149 | 150 | 151 | def forward(self, x1, x2): 152 | """ x1 as distorted and x2 as reference """ 153 | n_imgs, n_ptchs_per_img = x1.shape[0:2] 154 | 155 | # Reshape 156 | x1 = x1.view(-1,*x1.shape[-3:]) 157 | x2 = x2.view(-1,*x2.shape[-3:]) 158 | 159 | f1 = self.extract_feature(x1) 160 | f2 = self.extract_feature(x2) 161 | sf1,sf2 = self.NR_extract_feature(x1) 162 | fake_f1, fake_f2 = self.InvRescaleNet(sf1,sf2) 163 | 164 | ini_f_com = torch.cat([f2, f1], dim=1) 165 | fake_f_com = torch.cat([fake_f2, fake_f1], dim=1) 166 | f_com = torch.cat([ini_f_com,fake_f_com], dim=0) 167 | 168 | f_com = self.cl1(f_com) 169 | f_com = self.cl2(f_com) 170 | flatten = f_com.view(f_com.shape[0], -1) 171 | y = self.rl1(flatten) 172 | y = self.rl2(y) 173 | y1,y2 = torch.split(y, int(y.shape[0]/2), dim=0) 174 | 175 | fake_sf1,fake_sf2 = self.InvRescaleNet(f1,f2, rev=True) 176 | sf = torch.cat((sf1,sf2),1) 177 | 178 | 179 | NF_com = self.scl1(sf) 180 | NF_com = self.scl2(NF_com) 181 | Nflatten = NF_com.view(NF_com.shape[0], -1) 182 | Ny = self.srl1(Nflatten) 183 | Ny = self.srl2(Ny) 184 | 185 | if self.weighted: 186 | w = self.wl1(flatten) 187 | w = self.wl2(w) 188 | w = torch.nn.functional.relu(w) + 1e-8 189 | # Weighted average 190 | w1,w2 = torch.split(w, int(w.shape[0]/2), dim=0) 191 | 192 | y1_by_img = y1.view(n_imgs, n_ptchs_per_img) 193 | w1_by_img = w1.view(n_imgs, n_ptchs_per_img) 194 | FS = torch.sum(y1_by_img*w1_by_img, dim=1) / torch.sum(w1_by_img, dim=1) 195 | 196 | y2_by_img = y2.view(n_imgs, n_ptchs_per_img) 197 | w2_by_img = w2.view(n_imgs, n_ptchs_per_img) 198 | NFake_FS = torch.sum(y2_by_img*w2_by_img, dim=1) / torch.sum(w2_by_img, dim=1) 199 | 200 | 201 | sw = self.swl1(Nflatten) 202 | sw = self.swl2(sw) 203 | sw = torch.nn.functional.relu(sw) + 1e-8 204 | Ny_by_img = Ny.view(n_imgs, n_ptchs_per_img) 205 | Nw_by_img = sw.view(n_imgs, n_ptchs_per_img) 206 | NS = torch.sum(Ny_by_img*Nw_by_img, dim=1) / torch.sum(Nw_by_img, dim=1) 207 | 208 | else: 209 | # Calculate average score for 
each image 210 | FS = torch.mean(y1.view(n_imgs, n_ptchs_per_img), dim=1) 211 | NFake_FS = torch.mean(y2.view(n_imgs, n_ptchs_per_img), dim=1) 212 | NS = torch.mean(Ny.view(n_imgs, n_ptchs_per_img), dim=1) 213 | 214 | if self.istrain: 215 | return FS.squeeze(),NFake_FS.squeeze(),NS.squeeze(),\ 216 | f1,f2,fake_f1, fake_f2 217 | 218 | else: 219 | return NFake_FS.squeeze() 220 | 221 | 222 | 223 | def _initialize_weights(self): 224 | for m in self.modules(): 225 | if isinstance(m, nn.Conv2d): 226 | n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels 227 | m.weight.data.normal_(0, math.sqrt(2. / n)) 228 | if m.bias is not None: 229 | m.bias.data.zero_() 230 | elif isinstance(m, nn.BatchNorm2d): 231 | m.weight.data.fill_(1) 232 | m.bias.data.zero_() 233 | elif isinstance(m, nn.Linear): 234 | m.weight.data.normal_(0, 0.01) 235 | m.bias.data.zero_() 236 | else: 237 | pass 238 | 239 | 240 | def test(): 241 | 242 | net = IQANet(weighted=True,istrain=True) 243 | net.cuda() 244 | 245 | x1 = torch.randn(2, 16,3,64,64) 246 | x1 = Variable(x1.cuda()) 247 | 248 | y1,y2,y3,y4,y5,y6,y7= net.forward(x1,x1) 249 | 250 | 251 | print(y1.shape,y2.shape,y3.shape,y4.shape,y5.shape,y6.shape,y7.shape) 252 | 253 | 254 | if __name__== '__main__': 255 | test() 256 | 257 | 258 | 259 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/module_util.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.init as init 4 | import torch.nn.functional as F 5 | 6 | 7 | def initialize_weights(net_l, scale=1): 8 | if not isinstance(net_l, list): 9 | net_l = [net_l] 10 | for net in net_l: 11 | for m in net.modules(): 12 | if isinstance(m, nn.Conv2d): 13 | init.kaiming_normal_(m.weight, a=0, mode='fan_in') 14 | m.weight.data *= scale # for residual block 15 | if m.bias is not None: 16 | m.bias.data.zero_() 17 | elif isinstance(m, nn.Linear): 18 | init.kaiming_normal_(m.weight, a=0, mode='fan_in') 19 | m.weight.data *= scale 20 | if m.bias is not None: 21 | m.bias.data.zero_() 22 | elif isinstance(m, nn.BatchNorm2d): 23 | init.constant_(m.weight, 1) 24 | init.constant_(m.bias.data, 0.0) 25 | 26 | 27 | def initialize_weights_xavier(net_l, scale=1): 28 | if not isinstance(net_l, list): 29 | net_l = [net_l] 30 | for net in net_l: 31 | for m in net.modules(): 32 | if isinstance(m, nn.Conv2d): 33 | init.xavier_normal_(m.weight) 34 | m.weight.data *= scale # for residual block 35 | if m.bias is not None: 36 | m.bias.data.zero_() 37 | elif isinstance(m, nn.Linear): 38 | init.xavier_normal_(m.weight) 39 | m.weight.data *= scale 40 | if m.bias is not None: 41 | m.bias.data.zero_() 42 | elif isinstance(m, nn.BatchNorm2d): 43 | init.constant_(m.weight, 1) 44 | init.constant_(m.bias.data, 0.0) 45 | 46 | 47 | def make_layer(block, n_layers): 48 | layers = [] 49 | for _ in range(n_layers): 50 | layers.append(block()) 51 | return nn.Sequential(*layers) 52 | 53 | 54 | class ResidualBlock_noBN(nn.Module): 55 | '''Residual block w/o BN 56 | ---Conv-ReLU-Conv-+- 57 | |________________| 58 | ''' 59 | 60 | def __init__(self, nf=64): 61 | super(ResidualBlock_noBN, self).__init__() 62 | self.conv1 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True) 63 | self.conv2 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True) 64 | 65 | # initialization 66 | initialize_weights([self.conv1, self.conv2], 0.1) 67 | 68 | def forward(self, x): 69 | identity = x 70 | out = F.relu(self.conv1(x), inplace=True) 71 | out = self.conv2(out) 72 | return 
identity + out 73 | 74 | 75 | def flow_warp(x, flow, interp_mode='bilinear', padding_mode='zeros'): 76 | """Warp an image or feature map with optical flow 77 | Args: 78 | x (Tensor): size (N, C, H, W) 79 | flow (Tensor): size (N, H, W, 2), normal value 80 | interp_mode (str): 'nearest' or 'bilinear' 81 | padding_mode (str): 'zeros' or 'border' or 'reflection' 82 | 83 | Returns: 84 | Tensor: warped image or feature map 85 | """ 86 | assert x.size()[-2:] == flow.size()[1:3] 87 | B, C, H, W = x.size() 88 | # mesh grid 89 | grid_y, grid_x = torch.meshgrid(torch.arange(0, H), torch.arange(0, W)) 90 | grid = torch.stack((grid_x, grid_y), 2).float() # W(x), H(y), 2 91 | grid.requires_grad = False 92 | grid = grid.type_as(x) 93 | vgrid = grid + flow 94 | # scale grid to [-1,1] 95 | vgrid_x = 2.0 * vgrid[:, :, :, 0] / max(W - 1, 1) - 1.0 96 | vgrid_y = 2.0 * vgrid[:, :, :, 1] / max(H - 1, 1) - 1.0 97 | vgrid_scaled = torch.stack((vgrid_x, vgrid_y), dim=3) 98 | output = F.grid_sample(x, vgrid_scaled, mode=interp_mode, padding_mode=padding_mode) 99 | return output 100 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/scid-iqaScrach.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ 3 | Main Script 4 | """ 5 | 6 | import sys 7 | import os 8 | 9 | import shutil 10 | import argparse 11 | 12 | import torch 13 | import torch.backends.cudnn as cudnn 14 | from torch import nn 15 | 16 | from model import IQANet 17 | from dataset import IQADataset,SCIDDataset 18 | from utils import AverageMeter, SROCC, PLCC, RMSE 19 | from utils import SimpleProgressBar as ProgressBar 20 | from utils import MMD_loss 21 | from VIDLoss import VIDLoss 22 | #f=open("log.txt","a") 23 | #ftmp=sys.stdout 24 | #sys.stdout=f 25 | 26 | def validate(val_loader, model, criterion, show_step=False): 27 | losses = AverageMeter() 28 | srocc = SROCC() 29 | len_val = len(val_loader) 30 | pb = ProgressBar(len_val, show_step=show_step) 31 | 32 | print("Validation") 33 | 34 | # Switch to evaluate mode 35 | model.eval() 36 | 37 | with torch.no_grad(): 38 | for i, ((img,ref), score)in enumerate(val_loader): 39 | img, ref, score = img.cuda(), ref.cuda(), score.squeeze().cuda() 40 | 41 | # Compute output 42 | _,_,output,_,_,_,_ = model(img, img) 43 | 44 | loss = criterion(output, score) 45 | losses.update(loss, img.shape[0]) 46 | 47 | output = output.cpu() 48 | score = score.cpu() 49 | srocc.update(score.numpy(), output.numpy()) 50 | 51 | pb.show(i, "[{0:d}/{1:d}]\t" 52 | "Loss {loss.val:.4f} ({loss.avg:.4f})\t" 53 | "Output {out:.4f}\t" 54 | "Target {tar:.4f}\t" 55 | .format(i+1, len_val, loss=losses, 56 | out=output, tar=score)) 57 | 58 | 59 | return float(1.0-srocc.compute()) # losses.avg 60 | 61 | 62 | def train(train_loader, model, criterion, optimizer, epoch): 63 | losses1 = AverageMeter() 64 | losses2 = AverageMeter() 65 | losses3 = AverageMeter() 66 | losses4 = AverageMeter() 67 | losses5 = AverageMeter() 68 | # losses6 = AverageMeter() 69 | len_train = len(train_loader) 70 | pb = ProgressBar(len_train) 71 | 72 | print("Training") 73 | 74 | # Switch to train mode 75 | model.train() 76 | criterion.cuda() 77 | # vidloss = VIDLoss(128,256,128).cuda() 78 | trip_loss = nn.TripletMarginLoss(margin=0.5, p=2.0).cuda() 79 | # triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2).cuda() 80 | for i, ((img,ref), score)in enumerate(train_loader): 81 | img, ref, score = img.cuda(), ref.cuda(), score.cuda() 82 | # Compute output 83 | 
FS,NFake_FS,NS,f1,f2,fake_f1, fake_f2 = model(img, ref) 84 | 85 | loss1 = criterion(FS, score) 86 | loss2 = criterion(NFake_FS, score) 87 | loss5 = criterion(NS, score) 88 | loss3 = 20*trip_loss(f1,fake_f1,f2) 89 | loss4 = 20*trip_loss(f2,fake_f2,f1).sum() 90 | 91 | 92 | loss = loss1+loss2+loss3+loss4+loss5 93 | # Measure accuracy and record loss 94 | losses1.update(loss1.data, img.shape[0]) 95 | losses2.update(loss2.data, img.shape[0]) 96 | losses3.update(loss3.data, img.shape[0]) 97 | losses4.update(loss4.data, img.shape[0]) 98 | losses5.update(loss5.data, img.shape[0]) 99 | 100 | # Compute gradient and do SGD step 101 | optimizer.zero_grad() 102 | loss.backward() 103 | optimizer.step() 104 | 105 | pb.show(i, "[{0:d}/{1:d}]\t" 106 | "FR {loss1.val:.4f} ({loss1.avg:.4f})\t" 107 | "NR {loss2.val:.4f} ({loss2.avg:.4f})\t" 108 | "NS {loss5.val:.4f} ({loss5.avg:.4f})\t" 109 | "fco {loss3.val:.4f} ({loss3.avg:.4f})\t" 110 | "sf {loss4.val:.4f} ({loss4.avg:.4f})\t" 111 | 112 | .format(i+1, len_train, loss1=losses1, loss2=losses2, \ 113 | loss5=losses5,loss3=losses3,loss4=losses4)) 114 | 115 | def test(test_data_loader, model): 116 | srocc = SROCC() 117 | plcc = PLCC() 118 | rmse = RMSE() 119 | len_test = len(test_data_loader) 120 | pb = ProgressBar(len_test, show_step=True) 121 | 122 | print("Testing") 123 | 124 | model.eval() 125 | with torch.no_grad(): 126 | for i, ((img, ref), score) in enumerate(test_data_loader): 127 | img, ref = img.cuda(), ref.cuda() 128 | output = model(img, ref).cpu().data.numpy() 129 | score = score.data.numpy() 130 | 131 | srocc.update(score, output) 132 | plcc.update(score, output) 133 | rmse.update(score, output) 134 | 135 | pb.show(i, "Test: [{0:5d}/{1:5d}]\t" 136 | "Score: {2:.4f}\t" 137 | "Label: {3:.4f}" 138 | .format(i+1, len_test, float(output), float(score))) 139 | 140 | print("\n\nSROCC: {0:.4f}\n" 141 | "PLCC: {1:.4f}\n" 142 | "RMSE: {2:.4f}" 143 | .format(srocc.compute(), plcc.compute(), rmse.compute()) 144 | ) 145 | 146 | 147 | def train_iqa(args): 148 | pro = args.pro 149 | batch_size = args.batch_size 150 | num_workers = args.workers 151 | data_dir = args.data_dir 152 | list_dir = args.list_dir 153 | resume = args.resume 154 | n_ptchs = args.n_ptchs_per_img 155 | 156 | print(' '.join(sys.argv)) 157 | 158 | for k, v in args.__dict__.items(): 159 | print(k, ':', v) 160 | 161 | model = IQANet(args.weighted,istrain=True) 162 | criterion = nn.L1Loss() 163 | 164 | # Data loaders 165 | train_loader = torch.utils.data.DataLoader( 166 | Dataset(data_dir, 'train_'+str(pro), list_dir=list_dir, 167 | n_ptchs=n_ptchs), 168 | batch_size=batch_size, shuffle=True, num_workers=num_workers, 169 | pin_memory=True, drop_last=True 170 | ) 171 | val_loader = torch.utils.data.DataLoader( 172 | Dataset(data_dir, 'val_'+str(pro), list_dir=list_dir, 173 | n_ptchs=n_ptchs, sample_once=True), 174 | batch_size=1, shuffle=False, num_workers=0, 175 | pin_memory=True 176 | ) 177 | 178 | optimizer = torch.optim.Adam(model.parameters(), 179 | lr=args.lr, 180 | betas=(0.9, 0.999), 181 | weight_decay=args.weight_decay) 182 | 183 | cudnn.benchmark = True 184 | min_loss = 100.0 185 | start_epoch = 0 186 | 187 | # Resume from a checkpoint 188 | if resume: 189 | resume = resume.split('t.')[0]+'t_'+str(pro)+'.pkl' 190 | if os.path.isfile(resume): 191 | print("=> loading checkpoint '{}'".format(resume)) 192 | checkpoint = torch.load(resume) 193 | start_epoch = checkpoint['epoch'] 194 | if not args.anew: 195 | min_loss = checkpoint['min_loss'] 196 | 
model.load_state_dict(checkpoint['state_dict']) 197 | print("=> loaded checkpoint '{}' (epoch {})" 198 | .format(resume, start_epoch)) 199 | else: 200 | print("=> no checkpoint found at '{}'".format(resume)) 201 | 202 | if args.evaluate: 203 | validate(val_loader, model.cuda(), criterion, show_step=True) 204 | return 205 | 206 | for epoch in range(start_epoch, args.epochs): 207 | lr = adjust_learning_rate(args, optimizer, epoch) 208 | print("\nEpoch: [{0}]\tlr {1:.06f}".format(epoch, lr)) 209 | # Train for one epoch 210 | train(train_loader, model.cuda(), criterion, optimizer, epoch) 211 | 212 | if epoch % 1 == 0: 213 | # Evaluate on validation set 214 | loss = validate(val_loader, model.cuda(), criterion) 215 | 216 | is_best = loss < min_loss 217 | min_loss = min(loss, min_loss) 218 | print("Current: {:.6f}\tBest: {:.6f}\t".format(loss, min_loss)) 219 | checkpoint_path = '../models/'+ resume.split('/')[2]+'/checkpoint_latest_'+str(pro)+'.pkl' 220 | save_checkpoint({ 221 | 'epoch': epoch + 1, 222 | 'state_dict': model.state_dict(), 223 | 'min_loss': min_loss, 224 | }, is_best, filename=checkpoint_path,pro =args.pro, res = resume) 225 | 226 | # if epoch % args.dump_per == 0: 227 | # history_path = '../models/checkpoint_{:03d}_'.format(epoch+1)+str(pro)+'.pkl' 228 | # shutil.copyfile(checkpoint_path, history_path) 229 | 230 | 231 | 232 | def test_iqa(args): 233 | batch_size = 1 234 | 235 | num_workers = args.workers 236 | subset = args.subset 237 | data_dir = args.data_dir 238 | list_dir = args.list_dir 239 | resume = args.resume 240 | 241 | for k, v in args.__dict__.items(): 242 | print(k, ':', v) 243 | 244 | model = IQANet(args.weighted) 245 | 246 | test_loader = torch.utils.data.DataLoader( 247 | Dataset(data_dir, phase='test', list_dir=list_dir, 248 | n_ptchs=args.n_ptchs_per_img, 249 | subset=subset), 250 | batch_size=batch_size, shuffle=False, 251 | num_workers=num_workers, pin_memory=True 252 | ) 253 | 254 | cudnn.benchmark = True 255 | 256 | # Resume from a checkpoint 257 | if resume: 258 | if os.path.isfile(resume): 259 | print("=> loading checkpoint '{}'".format(resume)) 260 | checkpoint = torch.load(resume) 261 | model.load_state_dict(checkpoint['state_dict']) 262 | print("=> loaded checkpoint '{}' (epoch {})" 263 | .format(resume, checkpoint['epoch'])) 264 | else: 265 | print("=> no checkpoint found at '{}'".format(resume)) 266 | 267 | test(test_loader, model.cuda()) 268 | 269 | 270 | def adjust_learning_rate(args, optimizer, epoch): 271 | """ 272 | Sets the learning rate 273 | """ 274 | if args.lr_mode == 'step': 275 | lr = args.lr * (0.5 ** (epoch // args.step)) 276 | elif args.lr_mode == 'poly': 277 | lr = args.lr * (1 - epoch / args.epochs) ** 1.1 278 | elif args.lr_mode == 'const': 279 | lr = args.lr 280 | else: 281 | raise ValueError('Unknown lr mode {}'.format(args.lr_mode)) 282 | 283 | for param_group in optimizer.param_groups: 284 | param_group['lr'] = lr 285 | return lr 286 | 287 | def save_checkpoint(state, is_best, filename='checkpoint.pkl',pro = '0',res='./script'): 288 | torch.save(state, filename) 289 | if is_best: 290 | shutil.copyfile(filename, '../models/'+ res.split('/')[2]+'/model_best_'+str(pro)+'.pkl') 291 | 292 | 293 | def parse_args(): 294 | # Training settings 295 | parser = argparse.ArgumentParser(description='') 296 | parser.add_argument('-cmd', type=str,default='train') 297 | parser.add_argument('-d', '--data-dir', default='../../../datasets/tid2013/') 298 | parser.add_argument('-l', '--list-dir', default='../sci_scripts/scid-scripts-6-2-2/', 299 | 
help='List dir to look for train_images.txt etc. ' 300 | 'It defaults to --data-dir if not set.') 301 | parser.add_argument('-n', '--n-ptchs-per-img', type=int, default=8, metavar='N', 302 | help='number of patches for each image (default: 8)') 303 | parser.add_argument('--step', type=int, default=200) 304 | parser.add_argument('--batch-size', type=int, default=32, metavar='B', 305 | help='input batch size for training (default: 32)') 306 | parser.add_argument('--epochs', type=int, default=1000, metavar='NE', 307 | help='number of epochs to train (default: 1000)') 308 | parser.add_argument('--lr', type=float, default=1e-4, metavar='LR', 309 | help='learning rate (default: 1e-4)') 310 | parser.add_argument('--lr-mode', type=str, default='const') 311 | parser.add_argument('--weight-decay', default=1e-4, type=float, 312 | metavar='W', help='weight decay (default: 1e-4)') 313 | parser.add_argument('--resume', default='../models/scid/checkpoint_latest.pkl', type=str, metavar='PATH', 314 | help='path to latest checkpoint') 315 | parser.add_argument('--pro', type=int, default=2) 316 | parser.add_argument('--workers', type=int, default=8) 317 | parser.add_argument('--subset', default='test') 318 | parser.add_argument('--evaluate', dest='evaluate', 319 | action='store_true', 320 | help='evaluate model on validation set') 321 | parser.add_argument('--weighted',default=True, dest='weighted') 322 | parser.add_argument('--dump_per', type=int, default=50, 323 | help='the number of epochs to make a checkpoint') 324 | parser.add_argument('--dataset', type=str, default='SCID') 325 | parser.add_argument('--anew', action='store_true') 326 | 327 | args = parser.parse_args() 328 | 329 | return args 330 | 331 | 332 | def main(): 333 | args = parse_args() 334 | # Choose dataset 335 | global Dataset 336 | Dataset = globals().get(args.dataset+'Dataset', None) 337 | if args.cmd == 'train': 338 | train_iqa(args) 339 | elif args.cmd == 'test': 340 | test_iqa(args) 341 | 342 | 343 | if __name__ == '__main__': 344 | main() 345 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/scid-iqaTest.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ 3 | Main Script 4 | """ 5 | 6 | import sys 7 | import os 8 | 9 | import shutil 10 | import argparse 11 | 12 | import torch 13 | import torch.backends.cudnn as cudnn 14 | from torch import nn 15 | 16 | from model import IQANet 17 | from dataset import TID2013Dataset, WaterlooDataset,IQADataset,SCIDDataset 18 | from utils import AverageMeter, SROCC, PLCC, RMSE 19 | from utils import SimpleProgressBar as ProgressBar 20 | from utils import MMD_loss 21 | from VIDLoss import VIDLoss 22 | 23 | 24 | def test(test_data_loader, model): 25 | srocc = SROCC() 26 | plcc = PLCC() 27 | rmse = RMSE() 28 | len_test = len(test_data_loader) 29 | pb = ProgressBar(len_test, show_step=True) 30 | 31 | print("Testing") 32 | 33 | model.eval() 34 | with torch.no_grad(): 35 | for i, ((img, ref), score) in enumerate(test_data_loader): 36 | img = img.cuda() 37 | ref = ref.cuda() 38 | output = model(img, img).cpu().data.numpy() # NR inference: only the distorted patches are used 39 | score = score.data.numpy() 40 | 41 | srocc.update(score, output) 42 | plcc.update(score, output) 43 | rmse.update(score, output) 44 | 45 | pb.show(i, "Test: [{0:5d}/{1:5d}]\t" 46 | "Score: {2:.4f}\t" 47 | "Label: {3:.4f}" 48 | .format(i+1, len_test, float(output), float(score))) 49 | 50 | print("\n\nSROCC: {0:.4f}\n" 51 | "PLCC: {1:.4f}\n" 52 | "RMSE: {2:.4f}"
53 | .format(srocc.compute(), plcc.compute(), rmse.compute()) 54 | ) 55 | 56 | 57 | 58 | def test_iqa(args): 59 | batch_size = 1 60 | pro = args.pro 61 | num_workers = args.workers 62 | subset = args.subset 63 | data_dir = args.data_dir 64 | list_dir = args.list_dir 65 | resume = args.resume 66 | 67 | for k, v in args.__dict__.items(): 68 | print(k, ':', v) 69 | 70 | model = IQANet(args.weighted) 71 | 72 | test_loader = torch.utils.data.DataLoader( 73 | Dataset(data_dir, phase='test_'+str(pro), list_dir=list_dir, 74 | n_ptchs=args.n_ptchs_per_img, 75 | subset=subset), 76 | batch_size=batch_size, shuffle=False, 77 | num_workers=num_workers, pin_memory=True 78 | ) 79 | 80 | cudnn.benchmark = True 81 | 82 | # Resume from a checkpoint 83 | if resume: 84 | resume = resume.split('t.')[0]+'t_'+str(pro)+'.pkl' 85 | if os.path.isfile(resume): 86 | print("=> loading checkpoint '{}'".format(resume)) 87 | checkpoint = torch.load(resume) 88 | model.load_state_dict(checkpoint['state_dict']) 89 | print("=> loaded checkpoint '{}' (epoch {})" 90 | .format(resume, checkpoint['epoch'])) 91 | else: 92 | print("=> no checkpoint found at '{}'".format(resume)) 93 | 94 | test(test_loader, model.cuda()) 95 | 96 | 97 | def adjust_learning_rate(args, optimizer, epoch): 98 | """ 99 | Sets the learning rate 100 | """ 101 | if args.lr_mode == 'step': 102 | lr = args.lr * (0.5 ** (epoch // args.step)) 103 | elif args.lr_mode == 'poly': 104 | lr = args.lr * (1 - epoch / args.epochs) ** 1.1 105 | elif args.lr_mode == 'const': 106 | lr = args.lr 107 | else: 108 | raise ValueError('Unknown lr mode {}'.format(args.lr_mode)) 109 | 110 | for param_group in optimizer.param_groups: 111 | param_group['lr'] = lr 112 | return lr 113 | 114 | def save_checkpoint(state, is_best, filename='checkpoint.pkl'): 115 | torch.save(state, filename) 116 | if is_best: 117 | shutil.copyfile(filename, '../models/model_best.pkl') 118 | 119 | def parse_args(): 120 | # Training settings 121 | parser = argparse.ArgumentParser(description='') 122 | parser.add_argument('-cmd', type=str,default='test') 123 | parser.add_argument('-d', '--data-dir', default='../../../datasets/tid2013/') 124 | parser.add_argument('-l', '--list-dir', default='../sci_scripts/scid-scripts-6-2-2/', 125 | help='List dir to look for train_images.txt etc. 
' 126 | 'It defaults to --data-dir if not set.') 127 | parser.add_argument('-n', '--n-ptchs-per-img', type=int, default=1024, metavar='N', 128 | help='number of patches for each image (default: 1024)') 129 | parser.add_argument('--step', type=int, default=200) 130 | parser.add_argument('--batch-size', type=int, default=32, metavar='B', 131 | help='input batch size for training (default: 32)') 132 | parser.add_argument('--epochs', type=int, default=1000, metavar='NE', 133 | help='number of epochs to train (default: 1000)') 134 | parser.add_argument('--lr', type=float, default=1e-4, metavar='LR', 135 | help='learning rate (default: 1e-4)') 136 | parser.add_argument('--lr-mode', type=str, default='const') 137 | parser.add_argument('--weight-decay', default=1e-4, type=float, 138 | metavar='W', help='weight decay (default: 1e-4)') 139 | parser.add_argument('--resume', default='../models/scid/model_best.pkl', type=str, metavar='PATH', 140 | help='path to latest checkpoint') 141 | parser.add_argument('--pro', type=int, default=2) 142 | parser.add_argument('--workers', type=int, default=8) 143 | parser.add_argument('--subset', default='test') 144 | parser.add_argument('--evaluate', dest='evaluate', 145 | action='store_true', 146 | help='evaluate model on validation set') 147 | parser.add_argument('--weighted',default=True, dest='weighted') 148 | parser.add_argument('--dump_per', type=int, default=50, 149 | help='the number of epochs to make a checkpoint') 150 | parser.add_argument('--dataset', type=str, default='SCID') 151 | parser.add_argument('--anew', action='store_true') 152 | 153 | args = parser.parse_args() 154 | 155 | return args 156 | 157 | 158 | def main(): 159 | args = parse_args() 160 | # Choose dataset 161 | global Dataset 162 | Dataset = globals().get(args.dataset+'Dataset', None) 163 | if args.cmd == 'train': 164 | train_iqa(args) 165 | elif args.cmd == 'test': 166 | test_iqa(args) 167 | 168 | 169 | if __name__ == '__main__': 170 | main() 171 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/src/utils.py: -------------------------------------------------------------------------------- 1 | """ 2 | Some Useful Functions and Classes 3 | """ 4 | 5 | import shutil 6 | from abc import ABCMeta, abstractmethod 7 | from threading import Lock 8 | from sys import stdout 9 | import torch 10 | import torch.nn as nn 11 | import numpy as np 12 | import scipy.optimize # imported explicitly so that scipy.optimize.curve_fit below resolves 13 | from scipy import stats 14 | 15 | # Multi-scale RBF (Gaussian) kernel matrix used by MMD_loss below. 16 | def gaussian_kernel(source, target, kernel_mul=2.0, kernel_num=5, fix_sigma=None): 17 | n_samples = int(source.size()[0])+int(target.size()[0]) 18 | total = torch.cat([source, target], dim=0) 19 | 20 | total0 = total.unsqueeze(0).expand(int(total.size(0)), int(total.size(0)), int(total.size(1))) 21 | total1 = total.unsqueeze(1).expand(int(total.size(0)), int(total.size(0)), int(total.size(1))) 22 | L2_distance = ((total0-total1)**2).sum(2) 23 | if fix_sigma: 24 | bandwidth = fix_sigma 25 | else: 26 | bandwidth = torch.sum(L2_distance.data) / (n_samples**2-n_samples) 27 | bandwidth /= kernel_mul ** (kernel_num // 2) 28 | bandwidth_list = [bandwidth * (kernel_mul**i) for i in range(kernel_num)] 29 | kernel_val = [torch.exp(-L2_distance / bandwidth_temp) for bandwidth_temp in bandwidth_list] 30 | return sum(kernel_val) 31 | 32 | class MMD_loss(nn.Module): 33 | def __init__(self, kernel_mul = 2.0, kernel_num = 5): 34 | super(MMD_loss, self).__init__() 35 | self.kernel_num = kernel_num 36 | self.kernel_mul = kernel_mul 37 | self.fix_sigma = None 38 |
return 39 | 40 | 41 | def forward(self, source, target): 42 | batch_size = int(source.size()[0]) 43 | kernels = gaussian_kernel(source, target, kernel_mul=self.kernel_mul, kernel_num=self.kernel_num, fix_sigma=self.fix_sigma) 44 | XX = kernels[:batch_size, :batch_size] 45 | YY = kernels[batch_size:, batch_size:] 46 | XY = kernels[:batch_size, batch_size:] 47 | YX = kernels[batch_size:, :batch_size] 48 | loss = torch.mean(XX + YY - XY - YX) 49 | return loss 50 | 51 | 52 | 53 | class AverageMeter: 54 | """ Computes and stores the average and current value """ 55 | def __init__(self): 56 | self.reset() 57 | 58 | def reset(self): 59 | self.val = 0 60 | self.avg = 0 61 | self.sum = 0 62 | self.count = 0 63 | 64 | def update(self, val, n=1): 65 | self.val = val 66 | self.sum += val * n 67 | self.count += n 68 | self.avg = self.sum / self.count 69 | 70 | 71 | """ 72 | Metrics for IQA performance 73 | ----------------------------------------- 74 | 75 | Including classes: 76 | * Metric (base) 77 | * MAE 78 | * SROCC 79 | * PLCC 80 | * RMSE 81 | 82 | """ 83 | 84 | class Metric(metaclass=ABCMeta): 85 | def __init__(self): 86 | super(Metric, self).__init__() 87 | self.reset() 88 | self.scale = 100.0 89 | 90 | def reset(self): 91 | self.x1 = [] 92 | self.x2 = [] 93 | 94 | @abstractmethod 95 | def _compute(self, x1, x2): 96 | return 97 | 98 | def logistic(self, X, beta1, beta2, beta3, beta4, beta5): 99 | logistic_part = 0.5 - 1./(1 + np.exp(beta2 * (X - beta3))) 100 | yhat = beta1 * logistic_part + beta4 * X + beta5 101 | return yhat 102 | 103 | def compute(self): 104 | mos = np.array(self.x1, dtype=float).flatten()/self.scale 105 | obj_score = np.array(self.x2, dtype=float).flatten()/self.scale 106 | beta1 = np.max(mos) 107 | beta2 = np.min(mos) 108 | beta3 = np.mean(obj_score) 109 | beta = [beta1, beta2, beta3, 0.1, 0.1] # initial guess for non-linear fitting 110 | 111 | fit_stat = '' 112 | try: 113 | popt, _ = scipy.optimize.curve_fit(self.logistic, xdata=obj_score, ydata=mos, p0=beta, maxfev=10000) 114 | except Exception: 115 | popt = beta 116 | fit_stat = '[nonlinear reg failed]' 117 | ypred = self.logistic(obj_score, popt[0], popt[1], popt[2], popt[3], popt[4]) 118 | 119 | # print('mos:', mos[1:10]) 120 | # print('ypred:', ypred[1:10]) 121 | mos, ypred = mos*self.scale, ypred*self.scale 122 | return self._compute(mos.ravel(), ypred.ravel()) 123 | 124 | def _check_type(self, x): 125 | return isinstance(x, (float, int, np.ndarray)) 126 | 127 | def update(self, x1, x2): 128 | if self._check_type(x1) and self._check_type(x2): 129 | self.x1.append(x1) 130 | self.x2.append(x2) 131 | else: 132 | raise TypeError('Data types not supported') 133 | 134 | class MAE(Metric): 135 | def __init__(self): 136 | super(MAE, self).__init__() 137 | 138 | def _compute(self, x1, x2): 139 | return np.mean(np.abs(x2-x1)) # mean, not sum, for a true mean absolute error 140 | 141 | class SROCC(Metric): 142 | def __init__(self): 143 | super(SROCC, self).__init__() 144 | 145 | def _compute(self, x1, x2): 146 | return stats.spearmanr(x1, x2)[0] 147 | 148 | class PLCC(Metric): 149 | def __init__(self): 150 | super(PLCC, self).__init__() 151 | 152 | def _compute(self, x1, x2): 153 | return stats.pearsonr(x1, x2)[0] 154 | 155 | class RMSE(Metric): 156 | def __init__(self): 157 | super(RMSE, self).__init__() 158 | 159 | def _compute(self, x1, x2): 160 | return np.sqrt(((x2 - x1) ** 2).mean()) 161 | 162 | 163 | def limited_instances(n): 164 | def decorator(cls): 165 | _instances = [None]*n 166 | _lock = Lock() 167 | def wrapper(idx, *args, **kwargs): 168 | nonlocal _instances 169 | with
_lock: 170 | if idx < n: 171 | if _instances[idx] is None: _instances[idx] = cls(*args, **kwargs) 172 | else: 173 | raise KeyError('index exceeds maximum number of instances') 174 | return _instances[idx] 175 | return wrapper 176 | return decorator 177 | 178 | 179 | class SimpleProgressBar: 180 | def __init__(self, total_len, pat='#', show_step=False, print_freq=1): 181 | self.len = total_len 182 | self.pat = pat 183 | self.show_step = show_step 184 | self.print_freq = print_freq 185 | self.out_stream = stdout 186 | 187 | def show(self, cur, desc): 188 | bar_len, _ = shutil.get_terminal_size() 189 | # The tab between desc and the progress bar should be counted. 190 | # And the '|'s on both ends be counted, too 191 | bar_len = bar_len - self.len_with_tabs(desc+'\t') - 2 192 | bar_len = int(bar_len*0.8) 193 | cur_pos = int(((cur+1)/self.len)*bar_len) 194 | cur_bar = '|'+self.pat*cur_pos+' '*(bar_len-cur_pos)+'|' 195 | 196 | disp_str = "{0}\t{1}".format(desc, cur_bar) 197 | 198 | # Clean 199 | self.write('\033[K') 200 | 201 | if self.show_step and (cur % self.print_freq) == 0: 202 | self.write(disp_str, new_line=True) 203 | return 204 | 205 | if (cur+1) < self.len: 206 | self.write(disp_str) 207 | else: 208 | self.write(disp_str, new_line=True) 209 | 210 | self.out_stream.flush() 211 | 212 | @staticmethod 213 | def len_with_tabs(s): 214 | return len(s.expandtabs()) 215 | 216 | def write(self, content, new_line=False): 217 | end = '\n' if new_line else '\r' 218 | self.out_stream.write(content+end) 219 | -------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/utils/SCID_make_list.py: -------------------------------------------------------------------------------- 1 | import random 2 | import json 3 | 4 | DATA_DIR = "../../datasets/SCID/DistortedSCIs/" 5 | REF_DIR = "../../datasets/SCID/ReferenceSCIs/" 6 | MOS_WITH_NAMES = "../../datasets/SCID/MOS_SCID.txt" 7 | 8 | 9 | 10 | EXCLUDE_INDICES = () 11 | EXCLUDE_TYPES = ( ) 12 | 13 | data_list = [line.strip().split() for line in open(MOS_WITH_NAMES, 'r')] 14 | 15 | N = 40 - len(EXCLUDE_INDICES) 16 | def _write_list_into_file(l, f): 17 | with open(f, "w") as h: 18 | for line in l: 19 | h.write(line) 20 | h.write('\n') 21 | 22 | for prot in range(1,31): 23 | 24 | train_images, train_labels, train_mos = [], [], [] 25 | val_images, val_labels, val_mos = [], [], [] 26 | test_images, test_labels, test_mos = [], [], [] 27 | 28 | idcs = list(range(N)) 29 | idcs = [i + 1 for i in idcs] 30 | random.shuffle(idcs) 31 | print('idcs:', idcs) 32 | train_idcs = idcs[:24] #40*0.6=24, *0.8=32 33 | val_idcs = idcs[24:32] 34 | test_idcs = idcs[32:] 35 | 36 | for ref,image,mos in data_list: 37 | idx = int(image.split('_')[0][3:]) 38 | ref = REF_DIR + ref+'.bmp' 39 | img = DATA_DIR + image+'.bmp' 40 | 41 | if idx not in EXCLUDE_INDICES : 42 | if idx in train_idcs: 43 | train_images.append(img) 44 | train_labels.append(ref) 45 | train_mos.append(float(mos)) 46 | if idx in val_idcs: 47 | val_images.append(img) 48 | val_labels.append(ref) 49 | val_mos.append(float(mos)) 50 | if idx in test_idcs: 51 | test_images.append(img) 52 | test_labels.append(ref) 53 | test_mos.append(float(mos)) 54 | 55 | 56 | ns = vars() 57 | for ph in ('train', 'val', 'test'): 58 | data_dict = dict(img=ns['{}_images'.format(ph)], ref=ns['{}_labels'.format(ph)], score=ns['{}_mos'.format(ph)]) 59 | with open('sci_scripts/scid-scripts-6-2-2/{}_'.format(ph)+str(prot)+'_data.json', 'w') as fp: 60 | json.dump(data_dict, fp) 61 | 62 | 63 | 64 | 65 | 
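66 | 67 | # Read-back sketch (illustrative; the split index and relative path mirror the writer above): 68 | # each split file is plain JSON holding three index-aligned lists -- 'img', 'ref', and 69 | # 'score' -- i.e. one (distorted image, reference image, MOS) triplet per sample. 70 | with open('sci_scripts/scid-scripts-6-2-2/train_1_data.json') as fp: 71 |     split = json.load(fp) 72 | assert len(split['img']) == len(split['ref']) == len(split['score']) 73 | print('split 1:', len(split['img']), 'training samples')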
-------------------------------------------------------------------------------- /FPR_IQA/FPR_SCI/utils/SIQAD_make_list.py: -------------------------------------------------------------------------------- 1 | import random 2 | import json 3 | 4 | DATA_DIR = "../../datasets/SIQAD/DistortedImages/" 5 | REF_DIR = "../../datasets/SIQAD/references/" 6 | MOS_WITH_NAMES = "../../datasets/SIQAD/sccdmos.txt" 7 | 8 | 9 | 10 | EXCLUDE_INDICES = () 11 | EXCLUDE_TYPES = ( ) 12 | 13 | data_list = [line.strip().split() for line in open(MOS_WITH_NAMES, 'r')] 14 | 15 | N = 20 - len(EXCLUDE_INDICES) 16 | def _write_list_into_file(l, f): 17 | with open(f, "w") as h: 18 | for line in l: 19 | h.write(line) 20 | h.write('\n') 21 | 22 | 23 | 24 | for prot in range(1,31): 25 | train_images, train_labels, train_mos = [], [], [] 26 | val_images, val_labels, val_mos = [], [], [] 27 | test_images, test_labels, test_mos = [], [], [] 28 | idcs = list(range(N)) 29 | idcs = [i + 1 for i in idcs] 30 | random.shuffle(idcs) 31 | print('idcs:', idcs) 32 | # train_idcs = idcs[:14] 33 | # val_idcs = idcs[14:16] 34 | # test_idcs = idcs[16:] 35 | train_idcs = idcs[:16] # 20*0.8=16 training contents (8-2 split) 36 | # print("train content id:", train_idcs) 37 | val_idcs = idcs[12:16] # 20*0.2=4; validation reuses the last four training contents 38 | test_idcs = idcs[16:] 39 | for mos,image in data_list: 40 | idx = int(image.split('_')[0][3:]) 41 | ref = image.split('_')[0] 42 | ref = REF_DIR + ref+'.bmp' 43 | img = DATA_DIR + image 44 | 45 | if idx not in EXCLUDE_INDICES : 46 | if idx in train_idcs: 47 | train_images.append(img) 48 | train_labels.append(ref) 49 | train_mos.append(float(mos)) 50 | if idx in val_idcs: 51 | val_images.append(img) 52 | val_labels.append(ref) 53 | val_mos.append(float(mos)) 54 | if idx in test_idcs: 55 | test_images.append(img) 56 | test_labels.append(ref) 57 | test_mos.append(float(mos)) 58 | 59 | 60 | ns = vars() 61 | for ph in ('train', 'val', 'test'): 62 | data_dict = dict(img=ns['{}_images'.format(ph)], ref=ns['{}_labels'.format(ph)], score=ns['{}_mos'.format(ph)]) 63 | with open('sci_scripts/siqad-scripts-8-2/{}_'.format(ph)+str(prot)+'_data.json', 'w') as fp: 64 | json.dump(data_dict, fp) 65 | 66 | 67 | 68 | 69 | -------------------------------------------------------------------------------- /FPR_IQA/framework.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/FPR_IQA/framework.png -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity.
For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 
134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 
193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # FPR 2 | Code for the paper "No-Reference Image Quality Assessment by Hallucinating Pristine Features". 3 | 4 | 5 | 6 | # Environment 7 | * python=3.8.5 8 | * pytorch=1.7.1 cuda=11.0.221 cudnn=8.0.5_0 9 | 10 | # Running 11 | * Data Preparation 12 | - [x] Download the natural image (NI) datasets and screen content image (SCI) datasets into the path: `./FPR/datasets/` 13 | - [x] We provide the pretrained checkpoints [here](https://mega.nz/folder/iDxH3R6a#WF25kk1XD30fhlZeSPJzDA). Download them and put the included files into `./FPR/FPR_IQA/FPR_NI/models/` or `./FPR/FPR_IQA/FPR_SCI/models/`. 14 | 15 | * Train: 16 | - For NI: 17 | `python ./FPR/FPR_IQA/FPR_NI/src/iqaScrach.py --list-dir='../scripts/dataset_name/' --resume='../models/model_files/checkpoint_latest.pkl' --pro=split_id --dataset='dataloader_name'` 18 | - dataset_name: "tid2013", "databaserelease2", "CSIQ", or "kadid10k" 19 | - model_files: "tid2013", "live", "csiq", or "kadid" 20 | - dataloader_name: "IQA" (for live and csiq datasets), "TID2013", or "KADID" 21 | - split_id: '0' to '9' 22 | - For SCI: 23 | - SIQAD: `python ./FPR/FPR_IQA/FPR_SCI/src/iqaScrach.py --pro=split_id` 24 | - SCID: `python ./FPR/FPR_IQA/FPR_SCI/src/scid-iqaScrach.py --pro=split_id` 25 | 26 | * Test: 27 | - For NI: 28 | `python ./FPR/FPR_IQA/FPR_NI/src/iqaTest.py --list-dir='../scripts/dataset_name/' --resume='../models/model_files/model_best.pkl' --pro=split_id --dataset='dataloader_name'` 29 | - For SCI: 30 | - SIQAD: `python ./FPR/FPR_IQA/FPR_SCI/src/iqaTest.py --pro=split_id` 31 | - SCID: `python ./FPR/FPR_IQA/FPR_SCI/src/scid-iqaTest.py --pro=split_id` 32 | 33 | 34 | # Details 35 | * Coming soon... 36 | 37 | -------------------------------------------------------------------------------- /datasets/SCID/MOS_SCID.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Baoliang93/FPR/fd8b41cdb603ab4a4a35cddd29d442819ff63b4b/datasets/SCID/MOS_SCID.txt --------------------------------------------------------------------------------
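Note on the MOS file layout: the SCID list builder in `FPR_SCI/utils/SCID_make_list.py` unpacks each line of `MOS_SCID.txt` as a whitespace-separated (reference, distorted image, MOS) triplet. A minimal sanity-check sketch against that assumed layout (the relative path follows the repository tree; purely illustrative):

# Verify a local MOS_SCID.txt matches the layout SCID_make_list.py expects:
# one "<reference> <distorted_image> <mos>" triplet per line.
with open('datasets/SCID/MOS_SCID.txt') as fh:
    for lineno, line in enumerate(fh, 1):
        fields = line.strip().split()
        assert len(fields) == 3, 'line %d: expected 3 fields, got %d' % (lineno, len(fields))
        ref, image, mos = fields
        float(mos)  # the MOS column must parse as a number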