├── DA ├── BNM │   ├── __init__.py │   ├── lr_schedule.py │   ├── README.md │   ├── data_list.py │   ├── network.py │   ├── pre_process.py │   └── train_image.py ├── CDAN-BNM │   ├── __init__.py │   ├── lr_schedule.py │   ├── README.md │   ├── loss.py │   ├── data_list.py │   ├── pre_process.py │   ├── train_image.py │   └── network.py ├── requirements.txt ├── README.md └── data │   └── office │   └── dslr_list.txt ├── UODR ├── requirements.txt ├── lr_schedule.py ├── README.md ├── utils.py ├── data_list.py ├── pre_process.py ├── loss.py ├── network.py └── train_loader.py ├── SSL ├── README.md ├── base.py └── vat.py ├── LICENSE ├── .gitignore └── README.md /DA/BNM/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /DA/CDAN-BNM/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /DA/requirements.txt: -------------------------------------------------------------------------------- 1 | torch==1.0.1 2 | torchvision==0.2.1 3 | numpy==1.17.2 4 | pillow==6.2.0 5 | -------------------------------------------------------------------------------- /UODR/requirements.txt: -------------------------------------------------------------------------------- 1 | torch==1.0.1 2 | torchvision==0.2.1 3 | numpy==1.17.2 4 | pillow==6.2.0 5 | -------------------------------------------------------------------------------- /SSL/README.md: -------------------------------------------------------------------------------- 1 | # BNM for SSL 2 | 3 | The code can be downloaded from [MixMatch](https://github.com/google-research/mixmatch) 4 | 5 | The code is identical except for `vat.py` and `base.py`.
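In both files, `method=1` replaces the entropy-minimization regularizer with the BNM objective. A minimal TensorFlow 1.x sketch of that term (the placeholder shapes here are illustrative stand-ins for the quantities in `base.py`):

```
import tensorflow as tf

# Stand-ins for the quantities in base.py: logits of an unlabeled batch.
batch, nclass = 64, 10
logits_y = tf.placeholder(tf.float32, [batch, nclass], 'logits_y')

# BNM maximizes the nuclear norm (the sum of singular values) of the batch
# prediction matrix, so the loss is its negative, normalized by batch size.
loss_bnm = -tf.reduce_sum(tf.svd(tf.nn.softmax(logits_y), compute_uv=False)) / batch
```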
6 | 7 | The experiments can be reproduced by setting `method=1` and `entmin_weight=0.1` 8 | -------------------------------------------------------------------------------- /UODR/lr_schedule.py: -------------------------------------------------------------------------------- 1 | def inv_lr_scheduler(param_lr, optimizer, iter_num, gamma, power, init_lr=0.001): 2 | """Inverse decay of the learning rate: lr = init_lr * (1 + gamma * iter_num) ** (-power).""" 3 | lr = init_lr * (1 + gamma * iter_num) ** (-power) 4 | 5 | for i, param_group in enumerate(optimizer.param_groups): 6 | param_group['lr'] = lr * param_lr[i] 7 | 8 | return optimizer 9 | 10 | 11 | schedule_dict = {"inv":inv_lr_scheduler} 12 | -------------------------------------------------------------------------------- /DA/BNM/lr_schedule.py: -------------------------------------------------------------------------------- 1 | def inv_lr_scheduler(optimizer, iter_num, gamma, power, lr=0.001, weight_decay=0.0005): 2 | """Inverse decay of the learning rate: lr = lr * (1 + gamma * iter_num) ** (-power).""" 3 | lr = lr * (1 + gamma * iter_num) ** (-power) 4 | for param_group in optimizer.param_groups: 5 | param_group['lr'] = lr * param_group['lr_mult'] 6 | param_group['weight_decay'] = weight_decay * param_group['decay_mult'] 7 | 8 | return optimizer 9 | 10 | 11 | schedule_dict = {"inv":inv_lr_scheduler} 12 | -------------------------------------------------------------------------------- /DA/CDAN-BNM/lr_schedule.py: -------------------------------------------------------------------------------- 1 | def inv_lr_scheduler(optimizer, iter_num, gamma, power, lr=0.001, weight_decay=0.0005): 2 | """Inverse decay of the learning rate: lr = lr * (1 + gamma * iter_num) ** (-power).""" 3 | lr = lr * (1 + gamma * iter_num) ** (-power) 4 | for param_group in optimizer.param_groups: 5 | param_group['lr'] = lr * param_group['lr_mult'] 6 | param_group['weight_decay'] = weight_decay * param_group['decay_mult'] 7 | 8 | return optimizer 9 | 10 | 11 | schedule_dict = {"inv":inv_lr_scheduler} 12 | -------------------------------------------------------------------------------- /DA/README.md: -------------------------------------------------------------------------------- 1 | # BNM for Domain Adaptation 2 | 3 | ## Dataset 4 | 5 | Office-31 dataset can be found [here](https://people.eecs.berkeley.edu/~jhoffman/domainadapt/). 6 | 7 | Office-Home dataset can be found [here](http://hemanthdv.org/OfficeHome-Dataset/). 8 | 9 | VisDA 2017 dataset can be found [here](https://github.com/VisionLearningGroup/taskcv-2017-public) in the classification track. 10 | 11 | ## Requirements 12 | The code is implemented with Python 3.6 and PyTorch 1.0.1. 13 | 14 | To install the required Python packages, run 15 | 16 | ``` 17 | pip install -r requirements.txt 18 | ``` 19 | 20 | ## Training 21 | Training instructions for BNM and CDAN-BNM are in the `README.md` in [BNM](BNM) and [CDAN-BNM](CDAN-BNM) respectively. 22 | 23 | ## Contact 24 | If you have any problems with our code, feel free to contact 25 | - hassassin1621@gmail.com 26 | 27 | or describe your problem in Issues.
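The `inv_lr_scheduler` defined in the `lr_schedule.py` files above applies the same inverse polynomial decay in all subprojects. A standalone sketch of the resulting curve (the `gamma` and `power` values here are illustrative assumptions; the actual settings come from the optimizer config in each training script):

```
# Inverse decay as in lr_schedule.py: lr = init_lr * (1 + gamma * iter_num) ** (-power)
init_lr, gamma, power = 0.001, 0.001, 0.75  # assumed values, for illustration only
for iter_num in (0, 1000, 5000, 10000):
    lr = init_lr * (1 + gamma * iter_num) ** (-power)
    print(iter_num, round(lr, 6))
# prints 0.001, 0.000595, 0.000261, 0.000166 -- a smooth decay rather than
# step-wise drops, despite being registered under the key "inv".
```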
28 | -------------------------------------------------------------------------------- /DA/BNM/README.md: -------------------------------------------------------------------------------- 1 | # BNM implemented in PyTorch 2 | 3 | ## Prerequisites 4 | - pytorch >= 1.0.1 5 | - torchvision >= 0.2.1 6 | - numpy == 1.17.2 7 | - pillow == 6.2.0 8 | - python3.6 9 | - cuda10 10 | 11 | ## Training 12 | The following are the commands for each task. The `--method` flag denotes the method to use, chosen from ['BNM','BFM','ENT','NO']. `test_interval` (the number of iterations between evaluations) and `num_iterations` (the total number of training iterations) can both be changed. 13 | 14 | Office-31 15 | ``` 16 | python train_image.py --gpu_id 0 --method BNM --num_iterations 6004 --dset office --s_dset_path data/office/amazon_list.txt --t_dset_path data/office/dslr_list.txt --test_interval 400 --output_dir BNM/adn 17 | ``` 18 | 19 | Office-Home 20 | ``` 21 | python train_image.py --gpu_id 0 --method BNM --num_iterations 6004 --dset office-home --s_dset_path data/office-home/Art.txt --t_dset_path data/office-home/Clipart.txt --test_interval 400 --output_dir BNM/ArCl 22 | ``` 23 | 24 | The code is borrowed from [CDAN](https://github.com/thuml/CDAN) 25 | -------------------------------------------------------------------------------- /DA/CDAN-BNM/README.md: -------------------------------------------------------------------------------- 1 | # CDAN-BNM implemented in PyTorch 2 | 3 | ## Prerequisites 4 | - pytorch >= 1.0.1 5 | - torchvision >= 0.2.1 6 | - numpy == 1.17.2 7 | - pillow == 6.2.0 8 | - python3.6 9 | - cuda10 10 | 11 | ## Training 12 | The following are the commands for each task. The `--method` flag denotes the method to use, chosen from ['BNM','BFM','ENT','NO']. `test_interval` (the number of iterations between evaluations) and `num_iterations` (the total number of training iterations) can both be changed. 13 | 14 | Office-31 15 | ``` 16 | python train_image.py --gpu_id 0 --method BNM --num_iterations 8004 --dset office --s_dset_path data/office/amazon_list.txt --t_dset_path data/office/dslr_list.txt --test_interval 500 --output_dir BNM/adn 17 | ``` 18 | 19 | Office-Home 20 | ``` 21 | python train_image.py --gpu_id 0 --method BNM --num_iterations 8004 --dset office-home --s_dset_path data/office-home/Art.txt --t_dset_path data/office-home/Clipart.txt --test_interval 500 --output_dir BNM/ArCl 22 | ``` 23 | 24 | The code is heavily borrowed from [CDAN](https://github.com/thuml/CDAN) 25 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 崔书豪 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software.
14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /UODR/README.md: -------------------------------------------------------------------------------- 1 | # BNM for Unsupervised Open Domain Recognition 2 | 3 | ## Requirements 4 | The code is implemented with Python 3.6 and PyTorch 1.0.1. 5 | 6 | To install the required Python packages, run 7 | 8 | ``` 9 | pip install -r requirements.txt 10 | ``` 11 | 12 | ## Dataset and Model 13 | The dataset and pretrained models can be generated following [UODTN](https://github.com/junbaoZHUO/UODTN); 14 | we use them directly. 15 | 16 | ### Dataset 17 | The source dataset can be found [here](https://drive.google.com/file/d/1GdDZ1SvEqGin_zeCAGaJn0821vC_PJmc/view?usp=sharing) 18 | 19 | The target dataset can be downloaded from the AwA2 website [here](http://cvml.ist.ac.at/AwA2/); the direct download link is [here](http://cvml.ist.ac.at/AwA2/AwA2-data.zip) 20 | 21 | The image lists in `data/new_AwA2.txt` and `data/WEB_3D3_2.txt` should be edited so that each entry points to the local path of the corresponding image. 22 | 23 | ### Model 24 | The models can be trained within UODTN; we directly use the following pretrained models: 25 | 26 | -["base_net_pretrained_on_I2AwA2_source_only"](https://drive.google.com/file/d/1FiHB8HV8U2Isfx0A6ipWEIaE4q-sekoO/view?usp=sharing) is a trained feature extractor for I2AwA. 27 | 28 | -["awa_50_cls_basic"](https://drive.google.com/file/d/1DLeCpM7-k1xBianFEmc3L6c9526WEha4/view?usp=sharing) contains 50 initial classifiers for AwA2.
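A minimal sketch of restoring these two checkpoints (assuming they are placed under `model/` as described next; whether each file holds a full module or a `state_dict` is defined by UODTN, so check `train_loader.py` for the authoritative loading logic):

```
import torch

# Hypothetical loading sketch -- the real code lives in train_loader.py.
base_net = torch.load('model/base_net_pretrained_on_I2AwA2_source_only')  # feature extractor
cls_head = torch.load('model/awa_50_cls_basic')  # 50 initial AwA2 classifiers
```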
29 | 30 | These models should be put in the `model/` folder. 31 | 32 | ## Run 33 | python train_loader.py --gpu_id 0 34 | 35 | More options can be found via `--help` 36 | 37 | 38 | The code is borrowed from [UODTN](https://github.com/junbaoZHUO/UODTN) 39 | -------------------------------------------------------------------------------- /DA/CDAN-BNM/loss.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | import torch.nn as nn 4 | from torch.autograd import Variable 5 | import math 6 | import torch.nn.functional as F 7 | import pdb 8 | 9 | def Entropy(input_): 10 | bs = input_.size(0) 11 | epsilon = 1e-5 12 | entropy = -input_ * torch.log(input_ + epsilon) 13 | entropy = torch.sum(entropy, dim=1) 14 | return entropy 15 | 16 | def grl_hook(coeff): 17 | def fun1(grad): 18 | return -coeff*grad.clone() 19 | return fun1 20 | 21 | def CDAN(input_list, ad_net, entropy=None, coeff=None, random_layer=None): 22 | softmax_output = input_list[1].detach() 23 | feature = input_list[0] 24 | if random_layer is None: 25 | op_out = torch.bmm(softmax_output.unsqueeze(2), feature.unsqueeze(1)) 26 | ad_out = ad_net(op_out.view(-1, softmax_output.size(1) * feature.size(1))) 27 | else: 28 | random_out = random_layer.forward([feature, softmax_output]) 29 | ad_out = ad_net(random_out.view(-1, random_out.size(1))) 30 | ad_out = nn.Sigmoid()(ad_out) 31 | batch_size = softmax_output.size(0) // 2 32 | dc_target = torch.from_numpy(np.array([[1]] * batch_size + [[0]] * batch_size)).float().cuda() 33 | if entropy is not None: 34 | entropy.register_hook(grl_hook(coeff)) 35 | entropy = 1.0+torch.exp(-entropy) 36 | source_mask = torch.ones_like(entropy) 37 | source_mask[feature.size(0)//2:] = 0 38 | source_weight = entropy*source_mask 39 | target_mask = torch.ones_like(entropy) 40 | target_mask[0:feature.size(0)//2] = 0 41 | target_weight = entropy*target_mask 42 | weight = source_weight / torch.sum(source_weight).detach().item() + \ 43 | target_weight / torch.sum(target_weight).detach().item() 44 | return torch.sum(weight.view(-1, 1) * nn.BCELoss(reduction='none')(ad_out, dc_target)) / torch.sum(weight).detach().item() 45 | else: 46 | return nn.BCELoss()(ad_out, dc_target) 47 | -------------------------------------------------------------------------------- /UODR/utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import shutil 4 | 5 | import numpy as np 6 | import scipy.sparse as sp 7 | import torch 8 | 9 | 10 | def ensure_path(path): 11 | if osp.exists(path): 12 | if input('{} exists, remove?
([y]/n)'.format(path)) != 'n': 13 | shutil.rmtree(path) 14 | os.mkdir(path) 15 | else: 16 | os.mkdir(path) 17 | 18 | 19 | def set_gpu(gpu): 20 | os.environ['CUDA_VISIBLE_DEVICES'] = gpu 21 | print('using gpu {}'.format(gpu)) 22 | 23 | 24 | def pick_vectors(dic, wnids, is_tensor=False): 25 | o = next(iter(dic.values())) 26 | dim = len(o) 27 | ret = [] 28 | for wnid in wnids: 29 | v = dic.get(wnid) 30 | if v is None: 31 | if not is_tensor: 32 | v = [0] * dim 33 | else: 34 | v = torch.zeros(dim) 35 | ret.append(v) 36 | if not is_tensor: 37 | return torch.FloatTensor(ret) 38 | else: 39 | return torch.stack(ret) 40 | 41 | 42 | def l2_loss(a, b): 43 | return ((a - b)**2).sum() / (len(a) * 2) 44 | 45 | 46 | def normt_spm(mx, method='in'): 47 | if method == 'in': 48 | mx = mx.transpose() 49 | rowsum = np.array(mx.sum(1)) 50 | r_inv = np.power(rowsum, -1).flatten() 51 | r_inv[np.isinf(r_inv)] = 0. 52 | r_mat_inv = sp.diags(r_inv) 53 | mx = r_mat_inv.dot(mx) 54 | return mx 55 | 56 | if method == 'sym': 57 | rowsum = np.array(mx.sum(1)) 58 | r_inv = np.power(rowsum, -0.5).flatten() 59 | r_inv[np.isinf(r_inv)] = 0. 60 | r_mat_inv = sp.diags(r_inv) 61 | mx = mx.dot(r_mat_inv).transpose().dot(r_mat_inv) 62 | return mx 63 | 64 | 65 | def spm_to_tensor(sparse_mx): 66 | sparse_mx = sparse_mx.tocoo().astype(np.float32) 67 | indices = torch.from_numpy(np.vstack( 68 | (sparse_mx.row, sparse_mx.col))).long().cuda() 69 | values = torch.from_numpy(sparse_mx.data).cuda() 70 | shape = torch.Size(sparse_mx.shape) 71 | return torch.sparse.FloatTensor(indices, values, shape) 72 | 73 | -------------------------------------------------------------------------------- /UODR/data_list.py: -------------------------------------------------------------------------------- 1 | #from __future__ import print_function, division 2 | 3 | import torch 4 | import numpy as np 5 | from sklearn.preprocessing import StandardScaler 6 | import random 7 | from PIL import Image 8 | from PIL import ImageFile 9 | ImageFile.LOAD_TRUNCATED_IMAGES = True 10 | import torch.utils.data as data 11 | import os 12 | import os.path 13 | import time 14 | 15 | def make_dataset(image_list, labels): 16 | images = [(val.split()[0], int(val.split()[1])) for val in image_list] 17 | return images 18 | 19 | 20 | def pil_loader(path): 21 | # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835) 22 | with open(path, 'rb') as f: 23 | with Image.open(f) as img: 24 | return img.convert('RGB') 25 | 26 | 27 | def accimage_loader(path): 28 | import accimage 29 | try: 30 | return accimage.Image(path) 31 | except IOError: 32 | # Potentially a decoding problem, fall back to PIL.Image 33 | return pil_loader(path) 34 | 35 | 36 | def default_loader(path): 37 | #from torchvision import get_image_backend 38 | #if get_image_backend() == 'accimage': 39 | # return accimage_loader(path) 40 | #else: 41 | return pil_loader(path) 42 | 43 | 44 | class ImageList(object): 45 | 46 | def __init__(self, image_list, shape=None,labels=None, transform=None, 47 | loader=default_loader): 48 | imgs = make_dataset(image_list, labels) 49 | if len(imgs) == 0: 50 | raise(RuntimeError("Found 0 images in subfolders of: " + root + "\n" 51 | "Supported image extensions are: " + ",".join(IMG_EXTENSIONS))) 52 | 53 | self.imgs = imgs 54 | self.transform = transform 55 | self.loader = loader 56 | self.shape = shape#hassassin 57 | def __getitem__(self, index): 58 | """ 59 | Args: 60 | index (int): Index 61 | Returns: 62 | tuple: (image, target) where target is 
class_index of the target class. 63 | """ 64 | path, target = self.imgs[index] 65 | img = self.loader(path) 66 | if self.transform is not None: 67 | img = self.transform(img) 68 | return img, target 69 | 70 | def __len__(self): 71 | return len(self.imgs) 72 | 73 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | pip-wheel-metadata/ 24 | share/python-wheels/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | MANIFEST 29 | 30 | # PyInstaller 31 | # Usually these files are written by a python script from a template 32 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 33 | *.manifest 34 | *.spec 35 | 36 | # Installer logs 37 | pip-log.txt 38 | pip-delete-this-directory.txt 39 | 40 | # Unit test / coverage reports 41 | htmlcov/ 42 | .tox/ 43 | .nox/ 44 | .coverage 45 | .coverage.* 46 | .cache 47 | nosetests.xml 48 | coverage.xml 49 | *.cover 50 | *.py,cover 51 | .hypothesis/ 52 | .pytest_cache/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | target/ 76 | 77 | # Jupyter Notebook 78 | .ipynb_checkpoints 79 | 80 | # IPython 81 | profile_default/ 82 | ipython_config.py 83 | 84 | # pyenv 85 | .python-version 86 | 87 | # pipenv 88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 91 | # install all needed dependencies. 92 | #Pipfile.lock 93 | 94 | # PEP 582; used by e.g. 
github.com/David-OConnor/pyflow 95 | __pypackages__/ 96 | 97 | # Celery stuff 98 | celerybeat-schedule 99 | celerybeat.pid 100 | 101 | # SageMath parsed files 102 | *.sage.py 103 | 104 | # Environments 105 | .env 106 | .venv 107 | env/ 108 | venv/ 109 | ENV/ 110 | env.bak/ 111 | venv.bak/ 112 | 113 | # Spyder project settings 114 | .spyderproject 115 | .spyproject 116 | 117 | # Rope project settings 118 | .ropeproject 119 | 120 | # mkdocs documentation 121 | /site 122 | 123 | # mypy 124 | .mypy_cache/ 125 | .dmypy.json 126 | dmypy.json 127 | 128 | # Pyre type checker 129 | .pyre/ 130 | -------------------------------------------------------------------------------- /UODR/pre_process.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from torchvision import transforms 3 | import os 4 | from PIL import Image, ImageOps 5 | import numbers 6 | import torch 7 | 8 | class ResizeImage(): 9 | def __init__(self, size): 10 | if isinstance(size, int): 11 | self.size = (int(size), int(size)) 12 | else: 13 | self.size = size 14 | def __call__(self, img): 15 | th, tw = self.size 16 | return img.resize((th, tw)) 17 | 18 | 19 | class PlaceCrop(object): 20 | """Crops the given PIL.Image at the particular index. 21 | Args: 22 | size (sequence or int): Desired output size of the crop. If size is an 23 | int instead of sequence like (w, h), a square crop (size, size) is 24 | made. 25 | """ 26 | 27 | def __init__(self, size, start_x, start_y): 28 | if isinstance(size, int): 29 | self.size = (int(size), int(size)) 30 | else: 31 | self.size = size 32 | self.start_x = start_x 33 | self.start_y = start_y 34 | 35 | def __call__(self, img): 36 | """ 37 | Args: 38 | img (PIL.Image): Image to be cropped. 39 | Returns: 40 | PIL.Image: Cropped image. 41 | """ 42 | th, tw = self.size 43 | return img.crop((self.start_x, self.start_y, self.start_x + tw, self.start_y + th)) 44 | 45 | 46 | class ForceFlip(object): 47 | """Always flip the given PIL.Image horizontally (deterministic, unlike RandomHorizontalFlip).""" 48 | 49 | def __call__(self, img): 50 | """ 51 | Args: 52 | img (PIL.Image): Image to be flipped. 53 | Returns: 54 | PIL.Image: Horizontally flipped image.
55 | """ 56 | return img.transpose(Image.FLIP_LEFT_RIGHT) 57 | 58 | def image_train(resize_size=256, crop_size=224): 59 | normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], 60 | std=[0.229, 0.224, 0.225]) 61 | return transforms.Compose([ 62 | transforms.Resize((resize_size,resize_size)), 63 | transforms.RandomCrop(crop_size), 64 | transforms.RandomHorizontalFlip(), 65 | transforms.ToTensor(), 66 | normalize 67 | ]) 68 | 69 | def image_test(resize_size=256, crop_size=224): 70 | normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], 71 | std=[0.229, 0.224, 0.225]) 72 | #ten crops for image when validation, input the data_transforms dictionary 73 | start_first = 0 74 | start_center = (resize_size - crop_size - 1) / 2 75 | start_last = resize_size - crop_size - 1 76 | 77 | return transforms.Compose([ 78 | transforms.Resize((resize_size,resize_size)), 79 | PlaceCrop(crop_size, start_center, start_center), 80 | transforms.ToTensor(), 81 | normalize 82 | ]) 83 | -------------------------------------------------------------------------------- /DA/CDAN-BNM/data_list.py: -------------------------------------------------------------------------------- 1 | #from __future__ import print_function, division 2 | 3 | import torch 4 | import numpy as np 5 | import random 6 | from PIL import Image 7 | from torch.utils.data import Dataset 8 | import os 9 | import os.path 10 | 11 | def make_dataset(image_list, labels): 12 | if labels: 13 | len_ = len(image_list) 14 | images = [(image_list[i].strip(), labels[i, :]) for i in range(len_)] 15 | else: 16 | if len(image_list[0].split()) > 2: 17 | images = [(val.split()[0], np.array([int(la) for la in val.split()[1:]])) for val in image_list] 18 | else: 19 | images = [(val.split()[0], int(val.split()[1])) for val in image_list] 20 | return images 21 | 22 | 23 | def rgb_loader(path): 24 | with open(path, 'rb') as f: 25 | with Image.open(f) as img: 26 | return img.convert('RGB') 27 | 28 | def l_loader(path): 29 | with open(path, 'rb') as f: 30 | with Image.open(f) as img: 31 | return img.convert('L') 32 | 33 | class ImageList(Dataset): 34 | def __init__(self, image_list, labels=None, transform=None, target_transform=None, mode='RGB'): 35 | imgs = make_dataset(image_list, labels) 36 | if len(imgs) == 0: 37 | raise(RuntimeError("Found 0 images in subfolders of: " + root + "\n" 38 | "Supported image extensions are: " + ",".join(IMG_EXTENSIONS))) 39 | 40 | self.imgs = imgs 41 | self.transform = transform 42 | self.target_transform = target_transform 43 | if mode == 'RGB': 44 | self.loader = rgb_loader 45 | elif mode == 'L': 46 | self.loader = l_loader 47 | 48 | def __getitem__(self, index): 49 | path, target = self.imgs[index] 50 | img = self.loader(path) 51 | if self.transform is not None: 52 | img = self.transform(img) 53 | if self.target_transform is not None: 54 | target = self.target_transform(target) 55 | 56 | return img, target 57 | 58 | def __len__(self): 59 | return len(self.imgs) 60 | 61 | class ImageValueList(Dataset): 62 | def __init__(self, image_list, labels=None, transform=None, target_transform=None, 63 | loader=rgb_loader): 64 | imgs = make_dataset(image_list, labels) 65 | if len(imgs) == 0: 66 | raise(RuntimeError("Found 0 images in subfolders of: " + root + "\n" 67 | "Supported image extensions are: " + ",".join(IMG_EXTENSIONS))) 68 | 69 | self.imgs = imgs 70 | self.values = [1.0] * len(imgs) 71 | self.transform = transform 72 | self.target_transform = target_transform 73 | self.loader = loader 74 | 75 | def set_values(self, values): 76 | 
self.values = values 77 | 78 | def __getitem__(self, index): 79 | path, target = self.imgs[index] 80 | img = self.loader(path) 81 | if self.transform is not None: 82 | img = self.transform(img) 83 | if self.target_transform is not None: 84 | target = self.target_transform(target) 85 | 86 | return img, target 87 | 88 | def __len__(self): 89 | return len(self.imgs) 90 | 91 | -------------------------------------------------------------------------------- /DA/BNM/data_list.py: -------------------------------------------------------------------------------- 1 | #from __future__ import print_function, division 2 | 3 | import torch 4 | import numpy as np 5 | import random 6 | from PIL import Image 7 | from torch.utils.data import Dataset 8 | import os 9 | import os.path 10 | from PIL import ImageFile 11 | ImageFile.LOAD_TRUNCATED_IMAGES = True 12 | 13 | def make_dataset(image_list, labels): 14 | if labels: 15 | len_ = len(image_list) 16 | images = [(image_list[i].strip(), labels[i, :]) for i in range(len_)] 17 | else: 18 | if len(image_list[0].split()) > 2: 19 | images = [(val.split()[0], np.array([int(la) for la in val.split()[1:]])) for val in image_list] 20 | else: 21 | images = [(val.split()[0], int(val.split()[1])) for val in image_list] 22 | return images 23 | 24 | 25 | def rgb_loader(path): 26 | with open(path, 'rb') as f: 27 | with Image.open(f) as img: 28 | return img.convert('RGB') 29 | 30 | def l_loader(path): 31 | with open(path, 'rb') as f: 32 | with Image.open(f) as img: 33 | return img.convert('L') 34 | 35 | class ImageList(Dataset): 36 | def __init__(self, image_list, labels=None, transform=None, target_transform=None, mode='RGB'): 37 | imgs = make_dataset(image_list, labels) 38 | if len(imgs) == 0: 39 | raise(RuntimeError("Found 0 images in subfolders of: " + root + "\n" 40 | "Supported image extensions are: " + ",".join(IMG_EXTENSIONS))) 41 | 42 | self.imgs = imgs 43 | self.transform = transform 44 | self.target_transform = target_transform 45 | if mode == 'RGB': 46 | self.loader = rgb_loader 47 | elif mode == 'L': 48 | self.loader = l_loader 49 | 50 | def __getitem__(self, index): 51 | path, target = self.imgs[index] 52 | img = self.loader(path) 53 | if self.transform is not None: 54 | img = self.transform(img) 55 | if self.target_transform is not None: 56 | target = self.target_transform(target) 57 | 58 | return img, target 59 | 60 | def __len__(self): 61 | return len(self.imgs) 62 | 63 | class ImageValueList(Dataset): 64 | def __init__(self, image_list, labels=None, transform=None, target_transform=None, 65 | loader=rgb_loader): 66 | imgs = make_dataset(image_list, labels) 67 | if len(imgs) == 0: 68 | raise(RuntimeError("Found 0 images in subfolders of: " + root + "\n" 69 | "Supported image extensions are: " + ",".join(IMG_EXTENSIONS))) 70 | 71 | self.imgs = imgs 72 | self.values = [1.0] * len(imgs) 73 | self.transform = transform 74 | self.target_transform = target_transform 75 | self.loader = loader 76 | 77 | def set_values(self, values): 78 | self.values = values 79 | 80 | def __getitem__(self, index): 81 | path, target = self.imgs[index] 82 | img = self.loader(path) 83 | if self.transform is not None: 84 | img = self.transform(img) 85 | if self.target_transform is not None: 86 | target = self.target_transform(target) 87 | 88 | return img, target 89 | 90 | def __len__(self): 91 | return len(self.imgs) 92 | 93 | -------------------------------------------------------------------------------- /DA/BNM/network.py: 
-------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | import torch.nn as nn 4 | import torchvision 5 | from torchvision import models 6 | from torch.autograd import Variable 7 | import math 8 | import pdb 9 | import random 10 | 11 | 12 | def calc_coeff(iter_num, high=1.0, low=0.0, alpha=10.0, max_iter=10000.0): 13 | return np.float(2.0 * (high - low) / (1.0 + np.exp(-alpha*iter_num / max_iter)) - (high - low) + low) 14 | 15 | def init_weights(m): 16 | classname = m.__class__.__name__ 17 | if classname.find('Conv2d') != -1 or classname.find('ConvTranspose2d') != -1: 18 | nn.init.kaiming_uniform_(m.weight) 19 | nn.init.zeros_(m.bias) 20 | elif classname.find('BatchNorm') != -1: 21 | nn.init.normal_(m.weight, 1.0, 0.02) 22 | nn.init.zeros_(m.bias) 23 | elif classname.find('Linear') != -1: 24 | nn.init.kaiming_normal_(m.weight) 25 | nn.init.zeros_(m.bias) 26 | 27 | resnet_dict = {"ResNet18":models.resnet18, "ResNet34":models.resnet34, "ResNet50":models.resnet50, "ResNet101":models.resnet101, "ResNet152":models.resnet152} 28 | 29 | def grl_hook(coeff): 30 | def fun1(grad): 31 | return -coeff*grad.clone() 32 | return fun1 33 | 34 | class ResNetFc(nn.Module): 35 | def __init__(self, resnet_name, use_bottleneck=True, bottleneck_dim=256, new_cls=False, class_num=1000): 36 | super(ResNetFc, self).__init__() 37 | model_resnet = resnet_dict[resnet_name](pretrained=True) 38 | self.conv1 = model_resnet.conv1 39 | self.bn1 = model_resnet.bn1 40 | self.relu = model_resnet.relu 41 | self.maxpool = model_resnet.maxpool 42 | self.layer1 = model_resnet.layer1 43 | self.layer2 = model_resnet.layer2 44 | self.layer3 = model_resnet.layer3 45 | self.layer4 = model_resnet.layer4 46 | self.avgpool = model_resnet.avgpool 47 | self.feature_layers = nn.Sequential(self.conv1, self.bn1, self.relu, self.maxpool, \ 48 | self.layer1, self.layer2, self.layer3, self.layer4, self.avgpool) 49 | self.select_layers = nn.Sequential(self.layer3, self.layer4, self.avgpool) 50 | 51 | self.use_bottleneck = use_bottleneck 52 | self.sigmoid = nn.Sigmoid() 53 | self.new_cls = new_cls 54 | if new_cls: 55 | if self.use_bottleneck: 56 | self.bottleneck = nn.Linear(model_resnet.fc.in_features, bottleneck_dim) 57 | self.fc = nn.Linear(bottleneck_dim, class_num) 58 | self.focal1 = nn.Linear( class_num,class_num) 59 | self.focal2 = nn.Linear( class_num,1) 60 | self.bottleneck.apply(init_weights) 61 | self.fc.apply(init_weights) 62 | self.__in_features = bottleneck_dim 63 | else: 64 | self.fc = nn.Linear(model_resnet.fc.in_features, class_num) 65 | self.fc.apply(init_weights) 66 | self.__in_features = model_resnet.fc.in_features 67 | else: 68 | self.fc = model_resnet.fc 69 | self.__in_features = model_resnet.fc.in_features 70 | 71 | def forward(self, x): 72 | x = self.feature_layers(x) 73 | x = x.view(x.size(0), -1) 74 | if self.use_bottleneck and self.new_cls: 75 | x = self.bottleneck(x) 76 | y = self.fc(x) 77 | return x, y 78 | 79 | def output_num(self): 80 | return self.__in_features 81 | 82 | def get_parameters(self): 83 | if self.new_cls: 84 | if self.use_bottleneck: 85 | parameter_list = [{"params":self.feature_layers.parameters(), "lr_mult":1, 'decay_mult':2}, \ 86 | {"params":self.bottleneck.parameters(), "lr_mult":10, 'decay_mult':2}, \ 87 | {"params":self.fc.parameters(), "lr_mult":10, 'decay_mult':2}] 88 | else: 89 | parameter_list = [{"params":self.feature_layers.parameters(), "lr_mult":1, 'decay_mult':2}, \ 90 | {"params":self.fc.parameters(), "lr_mult":10, 
'decay_mult':2}] 91 | else: 92 | parameter_list = [{"params":self.parameters(), "lr_mult":1, 'decay_mult':2}] 93 | return parameter_list 94 | 95 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Code Release for BNM v1 v2 2 | 3 | ## Papers 4 | BNM v1 [Towards Discriminability and Diversity: Batch Nuclear-norm Maximization under Label Insufficient Situations](https://arxiv.org/abs/2003.12237) (CVPR2020 oral) 5 | 6 | BNM v2 [Fast Batch Nuclear-norm Maximization and Minimization for Robust Domain Adaptation](https://arxiv.org/abs/2107.06154) (TPAMI under review) 7 | 8 | Clean code of BNM v1 can be found in the [old version](https://github.com/cuishuhao/BNM/tree/BNMv1) 9 | 10 | ## One-sentence description 11 | BNM v1: we prove in the paper that Batch Nuclear-norm Maximization (BNM) can ensure prediction discriminability and diversity, which makes it an effective method under label insufficient situations. 12 | 13 | BNM v2: we further devise Batch Nuclear-norm Minimization (BNMin) and Fast BNM (FBNM) for multiple domain adaptation scenarios. 14 | 15 | ## Quick start with BNM 16 | 17 | ### One line code for BNM v1 under Pytorch and Tensorflow 18 | 19 | Assume `X` is the prediction matrix. We can calculate the BNM loss in both Pytorch and Tensorflow, as follows: 20 | 21 | -Pytorch 22 | 23 | 1. Direct calculation (since Pytorch provides a direct routine for the nuclear norm) 24 | ``` 25 | L_BNM = -torch.norm(X,'nuc') 26 | ``` 27 | 2. Calculation with SVD (among U, S and V, only the singular values S are needed for BNM) 28 | ``` 29 | L_BNM = -torch.sum(torch.svd(X, compute_uv=False)[1]) 30 | ``` 31 | -Tensorflow 32 | ``` 33 | L_BNM = -tf.reduce_sum(tf.svd(X, compute_uv=False)) 34 | ``` 35 | 36 | ### Code for FBNM under Pytorch 37 | Assume `X` is the prediction matrix. Then FBNM can be calculated as: 38 | ``` 39 | list_svd,_ = torch.sort(torch.sqrt(torch.sum(torch.pow(X,2),dim=0)), descending=True) 40 | nums = min(X.shape[0],X.shape[1]) 41 | L_FBNM = - torch.sum(list_svd[:nums]) 42 | ``` 43 | 44 | ### Sum of Changes from BNM v1 to BNM v2 45 | 1. - [x] [FBNM](https://github.com/cuishuhao/BNM/blob/master/DA/BNM/train_image.py#L167). (By approximation) 46 | 2. - [ ] BNMin. (The minimization counterpart, applied on the source domain) 47 | 3. - [ ] Multiple BNM. (Multiple batch optimization) 48 | 4. - [x] [Balance domainnet](https://github.com/cuishuhao/BNM/tree/master/DA/data/Balance_Domainnet). (New dataset) 49 | 5. - [ ] Semi-supervised DA. (New task) 50 | 51 | ### Applications 52 | We apply BNM to unsupervised domain adaptation (UDA) in [DA](DA), unsupervised open domain recognition (UODR) in [UODR](UODR) and semi-supervised learning (SSL) in [SSL](SSL). 53 | 54 | Training instructions for UDA, UODR and SSL are in the `README.md` in [DA](DA), [UODR](UODR) and [SSL](SSL) respectively.
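As a quick consistency check between the exact and approximate objectives above, a minimal sketch comparing the nuclear norm with the FBNM surrogate on a random softmax-style matrix (illustrative only):

```
import torch

# Random batch of predictions: 64 samples, 31 classes, rows sum to 1.
X = torch.softmax(torch.randn(64, 31), dim=1)

L_BNM = -torch.norm(X, 'nuc')  # exact: negative sum of singular values

list_svd, _ = torch.sort(torch.sqrt(torch.sum(torch.pow(X, 2), dim=0)), descending=True)
nums = min(X.shape[0], X.shape[1])
L_FBNM = -torch.sum(list_svd[:nums])  # approximation via the largest column L2-norms

# When the batch size is at least the class count, the FBNM sum upper-bounds
# the nuclear norm (triangle inequality), so L_FBNM <= L_BNM <= 0; the gap is
# small in practice while FBNM avoids the full SVD.
print(L_BNM.item(), L_FBNM.item())
```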
55 | 56 | ## Citation 57 | If you use this code for your research, please consider citing: 58 | ``` 59 | @InProceedings{Cui_2020_CVPR, 60 | author = {Cui, Shuhao and Wang, Shuhui and Zhuo, Junbao and Li, Liang and Huang, Qingming and Tian, Qi}, 61 | title = {Towards Discriminability and Diversity: Batch Nuclear-Norm Maximization Under Label Insufficient Situations}, 62 | booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, 63 | month = {June}, 64 | year = {2020} 65 | } 66 | @article{cui2021fast, 67 | title={Fast Batch Nuclear-norm Maximization and Minimization for Robust Domain Adaptation}, 68 | author={Cui, Shuhao and Wang, Shuhui and Zhuo, Junbao and Li, Liang and Huang, Qingming and Tian, Qi}, 69 | journal={arXiv preprint arXiv:2107.06154}, 70 | year={2021} 71 | } 72 | ``` 73 | The supplementary material of BNM can be found on [Google Drive](https://drive.google.com/file/d/15WOL2wFCSYVbPQfZ0OOSwtBXlcvgw8kA/view?usp=sharing) 74 | and [Baidu Cloud](https://pan.baidu.com/s/1eZAguvOXUOa0k_sietA8Zg) (z7yt). 75 | 76 | The supplementary material of BNM v2 can be found on [Google Drive](https://drive.google.com/file/d/1jgumCIPJd8IR_b-ZsJoNybXj9vyrFpRz/view?usp=sharing) 77 | and [Baidu Cloud](https://pan.baidu.com/s/1Xjg03truL9wN1wn4nq8U3g) (hbqc). 78 | 79 | ## Note 80 | [量子位](https://zhuanlan.zhihu.com/p/124860496) 81 | 82 | [BNM v1](https://zhuanlan.zhihu.com/p/121507249) 83 | 84 | [BNM v2](https://zhuanlan.zhihu.com/p/392678258) 85 | 86 | ## Contact 87 | If you have any problems with our code, feel free to contact 88 | - hassassin1621@gmail.com 89 | 90 | or describe your problem in Issues. 91 | -------------------------------------------------------------------------------- /UODR/loss.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | import torch.nn as nn 4 | from torch.autograd import Variable 5 | 6 | def EntropyLoss(input_): 7 | mask = input_.ge(0.000001) 8 | mask_out = torch.masked_select(input_, mask) 9 | entropy = -(torch.sum(mask_out * torch.log(mask_out))) 10 | return entropy / float(input_.size(0)) 11 | 12 | def guassian_kernel(source, target, kernel_mul=2.0, kernel_num=5, fix_sigma=None): 13 | n_samples = int(source.size()[0])+int(target.size()[0]) 14 | total = torch.cat([source, target], dim=0) 15 | total0 = total.unsqueeze(0).expand(int(total.size(0)), int(total.size(0)), int(total.size(1))) 16 | total1 = total.unsqueeze(1).expand(int(total.size(0)), int(total.size(0)), int(total.size(1))) 17 | L2_distance = ((total0-total1)**2).sum(2) 18 | if fix_sigma: 19 | bandwidth = fix_sigma 20 | else: 21 | bandwidth = torch.sum(L2_distance.data) / (n_samples**2-n_samples) 22 | bandwidth /= kernel_mul ** (kernel_num // 2) 23 | bandwidth_list = [bandwidth * (kernel_mul**i) for i in range(kernel_num)] 24 | kernel_val = [torch.exp(-L2_distance / bandwidth_temp) for bandwidth_temp in bandwidth_list] 25 | return sum(kernel_val) 26 | 27 | 28 | def MK_MMD(source, target, kernel_mul=2.0, kernel_num=8, fix_sigma=None): 29 | batch_size = int(source.size()[0]) 30 | kernels = guassian_kernel(source, target, 31 | kernel_mul=kernel_mul, kernel_num=kernel_num, fix_sigma=fix_sigma) 32 | loss = 0 33 | for i in range(batch_size): 34 | s1, s2 = i, (i+1)%batch_size 35 | t1, t2 = s1+batch_size, s2+batch_size 36 | loss += kernels[s1, s2] + kernels[t1, t2] 37 | loss -= kernels[s1, t2] + kernels[s2, t1] 38 | return loss / float(batch_size) 39 | 40 | 41 | def JAN(source_list, target_list, kernel_muls=[2.0, 2.0], kernel_nums=[5, 1],
fix_sigma_list=[None, 1.68]): 42 | batch_size = int(source_list[0].size()[0]) 43 | layer_num = len(source_list) 44 | joint_kernels = None 45 | for i in range(layer_num): 46 | source = source_list[i] 47 | target = target_list[i] 48 | kernel_mul = kernel_muls[i] 49 | kernel_num = kernel_nums[i] 50 | fix_sigma = fix_sigma_list[i] 51 | kernels = guassian_kernel(source, target, 52 | kernel_mul=kernel_mul, kernel_num=kernel_num, fix_sigma=fix_sigma) 53 | if joint_kernels is not None: 54 | joint_kernels = joint_kernels * kernels 55 | else: 56 | joint_kernels = kernels 57 | 58 | loss = 0 59 | for i in range(batch_size): 60 | s1, s2 = i, (i+1)%batch_size 61 | t1, t2 = s1+batch_size, s2+batch_size 62 | loss += joint_kernels[s1, s2] + joint_kernels[t1, t2] 63 | loss -= joint_kernels[s1, t2] + joint_kernels[s2, t1] 64 | return loss / float(batch_size) 65 | 66 | 67 | def L1N(source, target, WEIGHT=None, if_weight=False): 68 | if not if_weight: 69 | L1 = torch.mean(torch.sum(torch.abs(source - target), dim=1)) 70 | L_s = torch.mean(torch.sum(torch.abs(source), dim=1)) 71 | L_t = torch.mean(torch.sum(torch.abs(target), dim=1)) 72 | else: 73 | L1 = torch.mean(torch.sum(torch.abs(source - target), dim=1) * WEIGHT.view(-1, 1)) 74 | L_s = torch.mean(torch.sum(torch.abs(source), dim=1)* WEIGHT.view(-1, 1)) 75 | L_t = torch.mean(torch.sum(torch.abs(target), dim=1)* WEIGHT.view(-1, 1)) 76 | return L1/(L_s + L_t) 77 | 78 | def LP(source, target, WEIGHT=None, if_weight=False, P=2): 79 | if not if_weight: 80 | return torch.mean(torch.nn.PairwiseDistance(p=P)(source, target)) 81 | else: 82 | return torch.sum(torch.nn.PairwiseDistance(p=P)(source, target) * WEIGHT.view(-1))/(torch.sum(WEIGHT) + 0.0001) 83 | 84 | def Matching_MMD(source, target, WEIGHT, if_weight=False): 85 | Mask = WEIGHT.view(-1, 1).repeat(1, 2048).byte() 86 | source_matched = torch.masked_select(source, Mask).view(-1, 2048) 87 | target_matched = torch.masked_select(target, Mask).view(-1, 2048) 88 | return MK_MMD(source_matched, target_matched) 89 | 90 | loss_dict = {"MMD":MK_MMD, "JAN":JAN, "L1N":L1N, "LP":LP, "Matching-MMD":Matching_MMD} 91 | -------------------------------------------------------------------------------- /SSL/base.py: -------------------------------------------------------------------------------- 1 | # Copyright 2019 Google LLC 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # https://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | """Virtual adversarial training:a regularization method for supervised and semi-supervised learning. 
15 | 16 | Application to SSL of https://arxiv.org/abs/1704.03976 17 | """ 18 | 19 | import functools 20 | import os 21 | 22 | from absl import app 23 | from absl import flags 24 | from easydict import EasyDict 25 | from libml import utils, data, layers, models 26 | import tensorflow as tf 27 | from third_party import vat_utils 28 | 29 | FLAGS = flags.FLAGS 30 | 31 | 32 | class VAT(models.MultiModel): 33 | 34 | def model(self, lr, wd, ema, warmup_pos, vat, vat_eps, batch, entmin_weight, method, **kwargs): 35 | hwc = [self.dataset.height, self.dataset.width, self.dataset.colors] 36 | x_in = tf.placeholder(tf.float32, [None] + hwc, 'x') 37 | y_in = tf.placeholder(tf.float32, [None] + hwc, 'y') 38 | l_in = tf.placeholder(tf.int32, [None], 'labels') 39 | wd *= lr 40 | warmup = tf.clip_by_value(tf.to_float(self.step) / (warmup_pos * (FLAGS.train_kimg << 10)), 0, 1) 41 | 42 | classifier = functools.partial(self.classifier, **kwargs) 43 | l = tf.one_hot(l_in, self.nclass) 44 | logits_x = classifier(x_in, training=True) 45 | post_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) # Take only first call to update batch norm. 46 | logits_y = classifier(y_in, training=True) 47 | 48 | if method ==1: 49 | loss_entmin = -tf.reduce_sum(tf.svd(tf.nn.softmax(logits_y),compute_uv=False))/batch 50 | else: 51 | loss_entmin = tf.reduce_mean(tf.distributions.Categorical(logits=logits_y).entropy()) 52 | 53 | loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=l, logits=logits_x) 54 | loss = tf.reduce_mean(loss) 55 | tf.summary.scalar('losses/xe', loss) 56 | 57 | train_op = tf.train.AdamOptimizer(lr).minimize(loss + entmin_weight * loss_entmin, 58 | colocate_gradients_with_ops=True) 59 | with tf.control_dependencies([train_op]): 60 | train_op = tf.group(*post_ops) 61 | 62 | # Tuning op: only retrain batch norm. 63 | skip_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) 64 | classifier(x_in, training=True) 65 | train_bn = tf.group(*[v for v in tf.get_collection(tf.GraphKeys.UPDATE_OPS) 66 | if v not in skip_ops]) 67 | 68 | return EasyDict( 69 | x=x_in, y=y_in, label=l_in, train_op=train_op, tune_op=train_bn, 70 | classify_raw=tf.nn.softmax(classifier(x_in, training=False)), # No EMA, for debugging. 71 | classify_op=tf.nn.softmax(classifier(x_in, training=False))) 72 | 73 | 74 | def main(argv): 75 | del argv # Unused. 
76 | dataset = data.DATASETS[FLAGS.dataset]() 77 | log_width = utils.ilog2(dataset.width) 78 | model = VAT( 79 | os.path.join(FLAGS.train_dir, dataset.name), 80 | dataset, 81 | lr=FLAGS.lr, 82 | wd=FLAGS.wd, 83 | arch=FLAGS.arch, 84 | warmup_pos=FLAGS.warmup_pos, 85 | batch=FLAGS.batch, 86 | nclass=dataset.nclass, 87 | ema=FLAGS.ema, 88 | smoothing=FLAGS.smoothing, 89 | vat=FLAGS.vat, 90 | vat_eps=FLAGS.vat_eps, 91 | entmin_weight=FLAGS.entmin_weight, 92 | method=FLAGS.method, 93 | 94 | scales=FLAGS.scales or (log_width - 2), 95 | filters=FLAGS.filters, 96 | repeat=FLAGS.repeat) 97 | model.train(FLAGS.train_kimg << 10, FLAGS.report_kimg << 10) 98 | 99 | 100 | if __name__ == '__main__': 101 | utils.setup_tf() 102 | flags.DEFINE_float('wd', 0.02, 'Weight decay.') 103 | flags.DEFINE_float('vat', 0.3, 'VAT weight.') 104 | flags.DEFINE_float('vat_eps', 6, 'VAT perturbation size.') 105 | flags.DEFINE_float('method', 1, 'Regularizer choice: 1 for BNM, otherwise entropy minimization.') 106 | flags.DEFINE_float('entmin_weight', 0.06, 'Entropy minimization weight.') 107 | flags.DEFINE_float('warmup_pos', 0.4, 'Relative position at which constraint loss warmup ends.') 108 | flags.DEFINE_float('ema', 0.999, 'Exponential moving average of params.') 109 | flags.DEFINE_float('smoothing', 0.1, 'Label smoothing.') 110 | flags.DEFINE_integer('scales', 0, 'Number of 2x2 downscalings in the classifier.') 111 | flags.DEFINE_integer('filters', 32, 'Filter size of convolutions.') 112 | flags.DEFINE_integer('repeat', 4, 'Number of residual layers per stage.') 113 | FLAGS.set_default('dataset', 'cifar10.3@250-5000') 114 | FLAGS.set_default('batch', 64) 115 | FLAGS.set_default('lr', 0.002) 116 | FLAGS.set_default('train_kimg', 1 << 15) 117 | app.run(main) 118 | -------------------------------------------------------------------------------- /DA/BNM/pre_process.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from torchvision import transforms 3 | import os 4 | from PIL import Image, ImageOps 5 | import numbers, random 6 | import torch 7 | 8 | class ResizeImage(): 9 | def __init__(self, size): 10 | if isinstance(size, int): 11 | self.size = (int(size), int(size)) 12 | else: 13 | self.size = size 14 | def __call__(self, img): 15 | th, tw = self.size 16 | return img.resize((th, tw)) 17 | 18 | class RandomSizedCrop(object): 19 | """Crop the given PIL.Image to random size and aspect ratio. 20 | A crop of random size of (0.08 to 1.0) of the original size and a random 21 | aspect ratio of 3/4 to 4/3 of the original aspect ratio is made. This crop 22 | is finally resized to given size. 23 | This is popularly used to train the Inception networks. 24 | Args: 25 | size: size of the smaller edge 26 | interpolation: Default: PIL.Image.BILINEAR 27 | """ 28 | 29 | def __init__(self, size, interpolation=Image.BILINEAR): 30 | self.size = size 31 | self.interpolation = interpolation 32 | 33 | def __call__(self, img): 34 | h_off = random.randint(0, img.shape[1]-self.size) 35 | w_off = random.randint(0, img.shape[2]-self.size) 36 | img = img[:, h_off:h_off+self.size, w_off:w_off+self.size] 37 | return img 38 | 39 | 40 | class Normalize(object): 41 | """Normalize a tensor image with mean and standard deviation. 42 | Given mean: (R, G, B), 43 | will normalize each channel of the torch.*Tensor, i.e. 44 | channel = channel - mean 45 | Args: 46 | mean (sequence): Sequence of means for R, G, B channels respectively.
47 | """ 48 | 49 | def __init__(self, mean=None, meanfile=None): 50 | if mean: 51 | self.mean = mean 52 | else: 53 | arr = np.load(meanfile) 54 | self.mean = torch.from_numpy(arr.astype('float32')/255.0)[[2,1,0],:,:] 55 | 56 | def __call__(self, tensor): 57 | """ 58 | Args: 59 | tensor (Tensor): Tensor image of size (C, H, W) to be normalized. 60 | Returns: 61 | Tensor: Normalized image. 62 | """ 63 | # TODO: make efficient 64 | for t, m in zip(tensor, self.mean): 65 | t.sub_(m) 66 | return tensor 67 | 68 | 69 | 70 | class PlaceCrop(object): 71 | """Crops the given PIL.Image at the particular index. 72 | Args: 73 | size (sequence or int): Desired output size of the crop. If size is an 74 | int instead of sequence like (w, h), a square crop (size, size) is 75 | made. 76 | """ 77 | 78 | def __init__(self, size, start_x, start_y): 79 | if isinstance(size, int): 80 | self.size = (int(size), int(size)) 81 | else: 82 | self.size = size 83 | self.start_x = start_x 84 | self.start_y = start_y 85 | 86 | def __call__(self, img): 87 | """ 88 | Args: 89 | img (PIL.Image): Image to be cropped. 90 | Returns: 91 | PIL.Image: Cropped image. 92 | """ 93 | th, tw = self.size 94 | return img.crop((self.start_x, self.start_y, self.start_x + tw, self.start_y + th)) 95 | 96 | 97 | class ForceFlip(object): 98 | """Horizontally flip the given PIL.Image randomly with a probability of 0.5.""" 99 | 100 | def __call__(self, img): 101 | """ 102 | Args: 103 | img (PIL.Image): Image to be flipped. 104 | Returns: 105 | PIL.Image: Randomly flipped image. 106 | """ 107 | return img.transpose(Image.FLIP_LEFT_RIGHT) 108 | 109 | class CenterCrop(object): 110 | """Crops the given PIL.Image at the center. 111 | Args: 112 | size (sequence or int): Desired output size of the crop. If size is an 113 | int instead of sequence like (h, w), a square crop (size, size) is 114 | made. 115 | """ 116 | 117 | def __init__(self, size): 118 | if isinstance(size, numbers.Number): 119 | self.size = (int(size), int(size)) 120 | else: 121 | self.size = size 122 | 123 | def __call__(self, img): 124 | """ 125 | Args: 126 | img (PIL.Image): Image to be cropped. 127 | Returns: 128 | PIL.Image: Cropped image. 129 | """ 130 | w, h = (img.shape[1], img.shape[2]) 131 | th, tw = self.size 132 | w_off = int((w - tw) / 2.) 133 | h_off = int((h - th) / 2.) 
134 | img = img[:, h_off:h_off+th, w_off:w_off+tw] 135 | return img 136 | 137 | 138 | def image_train(resize_size=256, crop_size=224, alexnet=False): 139 | normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], 140 | std=[0.229, 0.224, 0.225]) 141 | return transforms.Compose([ 142 | transforms.Resize((resize_size,resize_size)), 143 | transforms.RandomHorizontalFlip(), 144 | transforms.RandomResizedCrop(crop_size), 145 | transforms.ToTensor(), 146 | normalize 147 | ]) 148 | def image_target(resize_size=256, crop_size=224, alexnet=False): 149 | normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], 150 | std=[0.229, 0.224, 0.225]) 151 | return transforms.Compose([ 152 | transforms.Resize((resize_size,resize_size)), 153 | transforms.RandomCrop(224), 154 | transforms.RandomHorizontalFlip(), 155 | transforms.ToTensor(), 156 | normalize 157 | ]) 158 | 159 | def image_test(resize_size=256, crop_size=224, alexnet=False): 160 | normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], 161 | std=[0.229, 0.224, 0.225]) 162 | start_first = 0 163 | start_center = (resize_size - crop_size - 1) / 2 164 | start_last = resize_size - crop_size - 1 165 | 166 | return transforms.Compose([ 167 | transforms.Resize((resize_size,resize_size)), 168 | transforms.CenterCrop(224), 169 | transforms.ToTensor(), 170 | normalize 171 | ]) 172 | 173 | -------------------------------------------------------------------------------- /SSL/vat.py: -------------------------------------------------------------------------------- 1 | # Copyright 2019 Google LLC 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # https://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | """Virtual adversarial training:a regularization method for supervised and semi-supervised learning. 15 | 16 | Application to SSL of https://arxiv.org/abs/1704.03976 17 | """ 18 | 19 | import functools 20 | import os 21 | 22 | from absl import app 23 | from absl import flags 24 | from easydict import EasyDict 25 | from libml import utils, data, layers, models 26 | import tensorflow as tf 27 | from third_party import vat_utils 28 | 29 | FLAGS = flags.FLAGS 30 | 31 | 32 | class VAT(models.MultiModel): 33 | 34 | def model(self, lr, wd, ema, warmup_pos, vat, vat_eps, entmin_weight, method, **kwargs): 35 | hwc = [self.dataset.height, self.dataset.width, self.dataset.colors] 36 | x_in = tf.placeholder(tf.float32, [None] + hwc, 'x')#.gpu() 37 | y_in = tf.placeholder(tf.float32, [None] + hwc, 'y')#.gpu() 38 | l_in = tf.placeholder(tf.int32, [None], 'labels')#.gpu() 39 | wd *= lr 40 | warmup = tf.clip_by_value(tf.to_float(self.step) / (warmup_pos * (FLAGS.train_kimg << 10)), 0, 1) 41 | 42 | classifier = functools.partial(self.classifier, **kwargs) 43 | l = tf.one_hot(l_in, self.nclass) 44 | logits_x = classifier(x_in, training=True) 45 | post_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) # Take only first call to update batch norm. 
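# The unlabeled batch y_in is classified below; vat_utils then crafts the
# adversarial perturbation delta_y around it for the VAT consistency loss,
# and method == 1 swaps the extra regularizer from entropy minimization to
# BNM, i.e. maximizing the sum of singular values of the batch softmax output
# (here scaled by a constant 100 rather than the batch size).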
46 | logits_y = classifier(y_in, training=True) 47 | delta_y = vat_utils.generate_perturbation(y_in, logits_y, lambda x: classifier(x, training=True), vat_eps) 48 | logits_student = classifier(y_in + delta_y, training=True) 49 | logits_teacher = tf.stop_gradient(logits_y) 50 | loss_vat = layers.kl_divergence_from_logits(logits_student, logits_teacher) 51 | loss_vat = tf.reduce_mean(loss_vat) 52 | if method==1: 53 | loss_entmin = -tf.reduce_sum(tf.svd(tf.nn.softmax(logits_y),compute_uv=False))/100 54 | else: 55 | loss_entmin = tf.reduce_mean(tf.distributions.Categorical(logits=logits_y).entropy()) 56 | #loss_entmin = -tf.reduce_mean(tf.svd(logits_y,compute_uv=False)) 57 | #loss_entmin = -tf.reduce_mean(tf.log(1-logits_y)+tf.log(logits_y)) 58 | 59 | loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=l, logits=logits_x) 60 | loss = tf.reduce_mean(loss) 61 | tf.summary.scalar('losses/xe', loss) 62 | tf.summary.scalar('losses/vat', loss_vat) 63 | tf.summary.scalar('losses/entmin', loss_entmin) 64 | 65 | ema = tf.train.ExponentialMovingAverage(decay=ema) 66 | ema_op = ema.apply(utils.model_vars()) 67 | ema_getter = functools.partial(utils.getter_ema, ema) 68 | post_ops.append(ema_op) 69 | post_ops.extend([tf.assign(v, v * (1 - wd)) for v in utils.model_vars('classify') if 'kernel' in v.name]) 70 | 71 | train_op = tf.train.AdamOptimizer(lr).minimize(loss + loss_vat * warmup * vat + entmin_weight * loss_entmin, 72 | colocate_gradients_with_ops=True) 73 | with tf.control_dependencies([train_op]): 74 | train_op = tf.group(*post_ops) 75 | 76 | # Tuning op: only retrain batch norm. 77 | skip_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) 78 | classifier(x_in, training=True) 79 | train_bn = tf.group(*[v for v in tf.get_collection(tf.GraphKeys.UPDATE_OPS) 80 | if v not in skip_ops]) 81 | 82 | return EasyDict( 83 | x=x_in, y=y_in, label=l_in, train_op=train_op, tune_op=train_bn, 84 | classify_raw=tf.nn.softmax(classifier(x_in, training=False)), # No EMA, for debugging. 85 | classify_op=tf.nn.softmax(classifier(x_in, getter=ema_getter, training=False))) 86 | 87 | 88 | def main(argv): 89 | del argv # Unused. 
90 | dataset = data.DATASETS[FLAGS.dataset]() 91 | log_width = utils.ilog2(dataset.width) 92 | model = VAT( 93 | os.path.join(FLAGS.train_dir, dataset.name), 94 | dataset, 95 | lr=FLAGS.lr, 96 | wd=FLAGS.wd, 97 | arch=FLAGS.arch, 98 | warmup_pos=FLAGS.warmup_pos, 99 | batch=FLAGS.batch, 100 | nclass=dataset.nclass, 101 | ema=FLAGS.ema, 102 | smoothing=FLAGS.smoothing, 103 | vat=FLAGS.vat, 104 | vat_eps=FLAGS.vat_eps, 105 | entmin_weight=FLAGS.entmin_weight, 106 | method=FLAGS.method, 107 | 108 | scales=FLAGS.scales or (log_width - 2), 109 | filters=FLAGS.filters, 110 | repeat=FLAGS.repeat) 111 | model.train(FLAGS.train_kimg << 10, FLAGS.report_kimg << 10) 112 | 113 | 114 | if __name__ == '__main__': 115 | utils.setup_tf() 116 | flags.DEFINE_float('wd', 0.02, 'Weight decay.') 117 | flags.DEFINE_float('vat', 0.3, 'VAT weight.') 118 | flags.DEFINE_float('vat_eps', 6, 'VAT perturbation size.') 119 | flags.DEFINE_float('entmin_weight', 0.06, 'Entropy minimization weight.') 120 | flags.DEFINE_float('method', 1, 'Regularizer choice: 1 for BNM, otherwise entropy minimization.') 121 | flags.DEFINE_float('warmup_pos', 0.4, 'Relative position at which constraint loss warmup ends.') 122 | flags.DEFINE_float('ema', 0.999, 'Exponential moving average of params.') 123 | flags.DEFINE_float('smoothing', 0.1, 'Label smoothing.') 124 | flags.DEFINE_integer('scales', 0, 'Number of 2x2 downscalings in the classifier.') 125 | flags.DEFINE_integer('filters', 32, 'Filter size of convolutions.') 126 | flags.DEFINE_integer('repeat', 4, 'Number of residual layers per stage.') 127 | FLAGS.set_default('dataset', 'cifar10.3@250-5000') 128 | FLAGS.set_default('batch', 64) 129 | FLAGS.set_default('lr', 0.002) 130 | FLAGS.set_default('train_kimg', 1 << 16) 131 | app.run(main) 132 | -------------------------------------------------------------------------------- /UODR/network.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | import torch.nn as nn 4 | import torchvision 5 | from torchvision import models 6 | from torch.autograd import Variable 7 | 8 | class SilenceLayer(torch.autograd.Function): 9 | def __init__(self): 10 | pass 11 | def forward(self, input): 12 | return input * 1.0 13 | 14 | def backward(self, gradOutput): 15 | return 0 * gradOutput 16 | 17 | 18 | # convnet without the last layer 19 | class AlexNetFc(nn.Module): 20 | def __init__(self): 21 | super(AlexNetFc, self).__init__() 22 | model_alexnet = models.alexnet(pretrained=True) 23 | self.features = model_alexnet.features 24 | self.classifier = nn.Sequential() 25 | for i in range(6): 26 | self.classifier.add_module("classifier"+str(i), model_alexnet.classifier[i]) 27 | self.__in_features = model_alexnet.classifier[6].in_features 28 | 29 | def forward(self, x): 30 | x = self.features(x) 31 | x = x.view(x.size(0), 256*6*6) 32 | x = self.classifier(x) 33 | return x 34 | 35 | def output_num(self): 36 | return self.__in_features 37 | 38 | class ResNet18Fc(nn.Module): 39 | def __init__(self): 40 | super(ResNet18Fc, self).__init__() 41 | model_resnet18 = models.resnet18(pretrained=True) 42 | self.conv1 = model_resnet18.conv1 43 | self.bn1 = model_resnet18.bn1 44 | self.relu = model_resnet18.relu 45 | self.maxpool = model_resnet18.maxpool 46 | self.layer1 = model_resnet18.layer1 47 | self.layer2 = model_resnet18.layer2 48 | self.layer3 = model_resnet18.layer3 49 | self.layer4 = model_resnet18.layer4 50 | self.avgpool = model_resnet18.avgpool 51 | self.__in_features = model_resnet18.fc.in_features 52 | 53 | def
forward(self, x): 54 | x = self.conv1(x) 55 | x = self.bn1(x) 56 | x = self.relu(x) 57 | x = self.maxpool(x) 58 | x = self.layer1(x) 59 | x = self.layer2(x) 60 | x = self.layer3(x) 61 | x = self.layer4(x) 62 | x = self.avgpool(x) 63 | x = x.view(x.size(0), -1) 64 | return x 65 | 66 | def output_num(self): 67 | return self.__in_features 68 | 69 | class ResNet34Fc(nn.Module): 70 | def __init__(self): 71 | super(ResNet34Fc, self).__init__() 72 | model_resnet34 = models.resnet34(pretrained=True) 73 | self.conv1 = model_resnet34.conv1 74 | self.bn1 = model_resnet34.bn1 75 | self.relu = model_resnet34.relu 76 | self.maxpool = model_resnet34.maxpool 77 | self.layer1 = model_resnet34.layer1 78 | self.layer2 = model_resnet34.layer2 79 | self.layer3 = model_resnet34.layer3 80 | self.layer4 = model_resnet34.layer4 81 | self.avgpool = model_resnet34.avgpool 82 | self.__in_features = model_resnet34.fc.in_features 83 | 84 | def forward(self, x): 85 | x = self.conv1(x) 86 | x = self.bn1(x) 87 | x = self.relu(x) 88 | x = self.maxpool(x) 89 | x = self.layer1(x) 90 | x = self.layer2(x) 91 | x = self.layer3(x) 92 | x = self.layer4(x) 93 | x = self.avgpool(x) 94 | x = x.view(x.size(0), -1) 95 | return x 96 | 97 | def output_num(self): 98 | return self.__in_features 99 | 100 | class ResNet50Fc(nn.Module): 101 | def __init__(self): 102 | super(ResNet50Fc, self).__init__() 103 | model_resnet50 = models.resnet50(pretrained=True) 104 | self.conv1 = model_resnet50.conv1 105 | self.bn1 = model_resnet50.bn1 106 | self.relu = model_resnet50.relu 107 | self.maxpool = model_resnet50.maxpool 108 | self.layer1 = model_resnet50.layer1 109 | self.layer2 = model_resnet50.layer2 110 | self.layer3 = model_resnet50.layer3 111 | self.layer4 = model_resnet50.layer4 112 | self.avgpool = model_resnet50.avgpool 113 | self.__in_features = model_resnet50.fc.in_features 114 | 115 | def forward(self, x): 116 | x = self.conv1(x) 117 | x = self.bn1(x) 118 | x = self.relu(x) 119 | x = self.maxpool(x) 120 | x = self.layer1(x) 121 | x = self.layer2(x) 122 | x = self.layer3(x) 123 | x = self.layer4(x) 124 | x = self.avgpool(x) 125 | x = x.view(x.size(0), -1) 126 | return x 127 | 128 | def output_num(self): 129 | return self.__in_features 130 | 131 | class ResNet101Fc(nn.Module): 132 | def __init__(self): 133 | super(ResNet101Fc, self).__init__() 134 | model_resnet101 = models.resnet101(pretrained=True) 135 | self.conv1 = model_resnet101.conv1 136 | self.bn1 = model_resnet101.bn1 137 | self.relu = model_resnet101.relu 138 | self.maxpool = model_resnet101.maxpool 139 | self.layer1 = model_resnet101.layer1 140 | self.layer2 = model_resnet101.layer2 141 | self.layer3 = model_resnet101.layer3 142 | self.layer4 = model_resnet101.layer4 143 | self.avgpool = model_resnet101.avgpool 144 | self.__in_features = model_resnet101.fc.in_features 145 | 146 | def forward(self, x): 147 | x = self.conv1(x) 148 | x = self.bn1(x) 149 | x = self.relu(x) 150 | x = self.maxpool(x) 151 | x = self.layer1(x) 152 | x = self.layer2(x) 153 | x = self.layer3(x) 154 | x = self.layer4(x) 155 | x = self.avgpool(x) 156 | x = x.view(x.size(0), -1) 157 | return x 158 | 159 | def output_num(self): 160 | return self.__in_features 161 | 162 | 163 | class ResNet152Fc(nn.Module): 164 | def __init__(self): 165 | super(ResNet152Fc, self).__init__() 166 | model_resnet152 = models.resnet152(pretrained=True) 167 | self.conv1 = model_resnet152.conv1 168 | self.bn1 = model_resnet152.bn1 169 | self.relu = model_resnet152.relu 170 | self.maxpool = model_resnet152.maxpool 171 | self.layer1 = 
model_resnet152.layer1 172 | self.layer2 = model_resnet152.layer2 173 | self.layer3 = model_resnet152.layer3 174 | self.layer4 = model_resnet152.layer4 175 | self.avgpool = model_resnet152.avgpool 176 | self.__in_features = model_resnet152.fc.in_features 177 | 178 | def forward(self, x): 179 | x = self.conv1(x) 180 | x = self.bn1(x) 181 | x = self.relu(x) 182 | x = self.maxpool(x) 183 | x = self.layer1(x) 184 | x = self.layer2(x) 185 | x = self.layer3(x) 186 | x = self.layer4(x) 187 | x = self.avgpool(x) 188 | x = x.view(x.size(0), -1) 189 | return x 190 | 191 | def output_num(self): 192 | return self.__in_features 193 | 194 | network_dict = {"AlexNet":AlexNetFc, "ResNet18":ResNet18Fc, "ResNet34":ResNet34Fc, "ResNet50":ResNet50Fc, "ResNet101":ResNet101Fc, "ResNet152":ResNet152Fc} 195 | -------------------------------------------------------------------------------- /DA/CDAN-BNM/pre_process.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from torchvision import transforms 3 | import os 4 | from PIL import Image, ImageOps 5 | import numbers 6 | import torch 7 | import random 8 | class ResizeImage(): 9 | def __init__(self, size): 10 | if isinstance(size, int): 11 | self.size = (int(size), int(size)) 12 | else: 13 | self.size = size 14 | def __call__(self, img): 15 | th, tw = self.size 16 | return img.resize((th, tw)) 17 | 18 | class RandomSizedCrop(object): 19 | """Crop the given PIL.Image to random size and aspect ratio. 20 | A crop of random size of (0.08 to 1.0) of the original size and a random 21 | aspect ratio of 3/4 to 4/3 of the original aspect ratio is made. This crop 22 | is finally resized to given size. 23 | This is popularly used to train the Inception networks. 24 | Args: 25 | size: size of the smaller edge 26 | interpolation: Default: PIL.Image.BILINEAR 27 | """ 28 | 29 | def __init__(self, size, interpolation=Image.BILINEAR): 30 | self.size = size 31 | self.interpolation = interpolation 32 | 33 | def __call__(self, img): 34 | h_off = random.randint(0, img.shape[1]-self.size) 35 | w_off = random.randint(0, img.shape[2]-self.size) 36 | img = img[:, h_off:h_off+self.size, w_off:w_off+self.size] 37 | return img 38 | 39 | 40 | class Normalize(object): 41 | """Normalize a tensor image with mean and standard deviation. 42 | Given mean: (R, G, B), 43 | will normalize each channel of the torch.*Tensor, i.e. 44 | channel = channel - mean 45 | Args: 46 | mean (sequence): Sequence of means for R, G, B channels respectively. 47 | """ 48 | 49 | def __init__(self, mean=None, meanfile=None): 50 | if mean: 51 | self.mean = mean 52 | else: 53 | arr = np.load(meanfile) 54 | self.mean = torch.from_numpy(arr.astype('float32')/255.0)[[2,1,0],:,:] 55 | 56 | def __call__(self, tensor): 57 | """ 58 | Args: 59 | tensor (Tensor): Tensor image of size (C, H, W) to be normalized. 60 | Returns: 61 | Tensor: Normalized image. 62 | """ 63 | # TODO: make efficient 64 | for t, m in zip(tensor, self.mean): 65 | t.sub_(m) 66 | return tensor 67 | 68 | 69 | 70 | class PlaceCrop(object): 71 | """Crops the given PIL.Image at the particular index. 72 | Args: 73 | size (sequence or int): Desired output size of the crop. If size is an 74 | int instead of sequence like (w, h), a square crop (size, size) is 75 | made. 
76 | """ 77 | 78 | def __init__(self, size, start_x, start_y): 79 | if isinstance(size, int): 80 | self.size = (int(size), int(size)) 81 | else: 82 | self.size = size 83 | self.start_x = start_x 84 | self.start_y = start_y 85 | 86 | def __call__(self, img): 87 | """ 88 | Args: 89 | img (PIL.Image): Image to be cropped. 90 | Returns: 91 | PIL.Image: Cropped image. 92 | """ 93 | th, tw = self.size 94 | return img.crop((self.start_x, self.start_y, self.start_x + tw, self.start_y + th)) 95 | 96 | 97 | class ForceFlip(object): 98 | """Horizontally flip the given PIL.Image randomly with a probability of 0.5.""" 99 | 100 | def __call__(self, img): 101 | """ 102 | Args: 103 | img (PIL.Image): Image to be flipped. 104 | Returns: 105 | PIL.Image: Randomly flipped image. 106 | """ 107 | return img.transpose(Image.FLIP_LEFT_RIGHT) 108 | 109 | class CenterCrop(object): 110 | """Crops the given PIL.Image at the center. 111 | Args: 112 | size (sequence or int): Desired output size of the crop. If size is an 113 | int instead of sequence like (h, w), a square crop (size, size) is 114 | made. 115 | """ 116 | 117 | def __init__(self, size): 118 | if isinstance(size, numbers.Number): 119 | self.size = (int(size), int(size)) 120 | else: 121 | self.size = size 122 | 123 | def __call__(self, img): 124 | """ 125 | Args: 126 | img (PIL.Image): Image to be cropped. 127 | Returns: 128 | PIL.Image: Cropped image. 129 | """ 130 | w, h = (img.shape[1], img.shape[2]) 131 | th, tw = self.size 132 | w_off = int((w - tw) / 2.) 133 | h_off = int((h - th) / 2.) 134 | img = img[:, h_off:h_off+th, w_off:w_off+tw] 135 | return img 136 | 137 | 138 | def image_train(resize_size=256, crop_size=224, alexnet=False): 139 | if not alexnet: 140 | normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], 141 | std=[0.229, 0.224, 0.225]) 142 | else: 143 | normalize = Normalize(meanfile='./ilsvrc_2012_mean.npy') 144 | return transforms.Compose([ 145 | transforms.Resize((resize_size,resize_size)), 146 | transforms.RandomResizedCrop(crop_size), 147 | transforms.RandomHorizontalFlip(), 148 | transforms.ToTensor(), 149 | normalize 150 | ]) 151 | 152 | def image_target(resize_size=256, crop_size=224, alexnet=False): 153 | if not alexnet: 154 | normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], 155 | std=[0.229, 0.224, 0.225]) 156 | else: 157 | normalize = Normalize(meanfile='./ilsvrc_2012_mean.npy') 158 | return transforms.Compose([ 159 | transforms.Resize((resize_size,resize_size)), 160 | transforms.RandomCrop(crop_size), 161 | transforms.RandomHorizontalFlip(), 162 | transforms.ToTensor(), 163 | normalize 164 | ]) 165 | 166 | def image_test(resize_size=256, crop_size=224, alexnet=False): 167 | if not alexnet: 168 | normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], 169 | std=[0.229, 0.224, 0.225]) 170 | else: 171 | normalize = Normalize(meanfile='./ilsvrc_2012_mean.npy') 172 | start_first = 0 173 | start_center = (resize_size - crop_size - 1) / 2 174 | start_last = resize_size - crop_size - 1 175 | 176 | return transforms.Compose([ 177 | transforms.Resize((resize_size,resize_size)), 178 | transforms.CenterCrop(224), 179 | transforms.ToTensor(), 180 | normalize 181 | ]) 182 | 183 | def image_test_10crop(resize_size=256, crop_size=224, alexnet=False): 184 | if not alexnet: 185 | normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], 186 | std=[0.229, 0.224, 0.225]) 187 | else: 188 | normalize = Normalize(meanfile='./ilsvrc_2012_mean.npy') 189 | start_first = 0 190 | start_center = (resize_size - crop_size - 1) / 2 
191 | start_last = resize_size - crop_size - 1 192 | data_transforms = [ 193 | transforms.Compose([ 194 | ResizeImage(resize_size),ForceFlip(), 195 | PlaceCrop(crop_size, start_first, start_first), 196 | transforms.ToTensor(), 197 | normalize 198 | ]), 199 | transforms.Compose([ 200 | ResizeImage(resize_size),ForceFlip(), 201 | PlaceCrop(crop_size, start_last, start_last), 202 | transforms.ToTensor(), 203 | normalize 204 | ]), 205 | transforms.Compose([ 206 | ResizeImage(resize_size),ForceFlip(), 207 | PlaceCrop(crop_size, start_last, start_first), 208 | transforms.ToTensor(), 209 | normalize 210 | ]), 211 | transforms.Compose([ 212 | ResizeImage(resize_size),ForceFlip(), 213 | PlaceCrop(crop_size, start_first, start_last), 214 | transforms.ToTensor(), 215 | normalize 216 | ]), 217 | transforms.Compose([ 218 | ResizeImage(resize_size),ForceFlip(), 219 | PlaceCrop(crop_size, start_center, start_center), 220 | transforms.ToTensor(), 221 | normalize 222 | ]), 223 | transforms.Compose([ 224 | ResizeImage(resize_size), 225 | PlaceCrop(crop_size, start_first, start_first), 226 | transforms.ToTensor(), 227 | normalize 228 | ]), 229 | transforms.Compose([ 230 | ResizeImage(resize_size), 231 | PlaceCrop(crop_size, start_last, start_last), 232 | transforms.ToTensor(), 233 | normalize 234 | ]), 235 | transforms.Compose([ 236 | ResizeImage(resize_size), 237 | PlaceCrop(crop_size, start_last, start_first), 238 | transforms.ToTensor(), 239 | normalize 240 | ]), 241 | transforms.Compose([ 242 | ResizeImage(resize_size), 243 | PlaceCrop(crop_size, start_first, start_last), 244 | transforms.ToTensor(), 245 | normalize 246 | ]), 247 | transforms.Compose([ 248 | ResizeImage(resize_size), 249 | PlaceCrop(crop_size, start_center, start_center), 250 | transforms.ToTensor(), 251 | normalize 252 | ]) 253 | ] 254 | return data_transforms 255 | -------------------------------------------------------------------------------- /UODR/train_loader.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import os.path as osp 4 | 5 | import numpy as np 6 | import torch 7 | import torch.nn as nn 8 | import torch.nn.functional as F 9 | import torch.optim as optim 10 | import torch.utils.data as util_data 11 | from torch.autograd import Variable 12 | 13 | import time 14 | import json 15 | import random 16 | 17 | from data_list import ImageList 18 | import network 19 | import loss 20 | import pre_process as prep 21 | import lr_schedule 22 | 23 | optim_dict = {"SGD": optim.SGD} 24 | 25 | 26 | def Entropy(input_): 27 | bs = input_.size(0) 28 | epsilon = 1e-7 29 | entropy = -input_ * torch.log(input_ + epsilon) 30 | entropy = torch.sum(entropy, dim=1) 31 | return entropy 32 | 33 | def calc_coeff(iter_num, high=1.0, low=0.0, alpha=10.0, max_iter=1200.0): 34 | return np.float(2.0 * (high - low) / (1.0 + np.exp(-alpha*iter_num / max_iter)) - (high - low) + low) 35 | 36 | def my_l2_loss(a, b): 37 | return ((a - b)**2).sum() / (len(a) * 2) 38 | 39 | def image_classification_test(iter_test,len_now, base, class1, class2, gpu=True): 40 | start_test = True 41 | Total_1k = 0. 42 | Total_4k = 0. 43 | COR_1k = 0. 44 | COR_4k = 0. 45 | COR = 0. 46 | Total = 0. 
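    # The loop below accumulates three accuracies at once. Labels above 39 are
    # treated as "unknown"/unseen classes (ind_1K) and labels 0..39 as
    # "known"/seen classes (ind_4K). A minimal sketch with hypothetical tensors:
    #
    #   predict = torch.tensor([1., 41., 3., 45.])
    #   labels  = torch.tensor([1., 40., 2., 45.])
    #   unknown = labels.gt(39)                      # mask of unseen classes
    #   known_acc   = (predict[~unknown] == labels[~unknown]).float().mean()
    #   unknown_acc = (predict[unknown]  == labels[unknown]).float().mean()
    #
    # The printed Unknown_acc/Known_acc values are these quantities accumulated
    # over all test batches.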
47 | print('Testing ...') 48 | for i in range(len_now): 49 | data = iter_test.next() 50 | inputs = data[0] 51 | labels = data[1] 52 | if gpu: 53 | inputs = Variable(inputs.cuda()) 54 | labels = Variable(labels.cuda()) 55 | else: 56 | inputs = Variable(inputs) 57 | labels = Variable(labels) 58 | output = base(inputs) 59 | out1 = class1(output) 60 | out2 = class2(output) 61 | outputs = torch.cat((out1,out2),dim=1) 62 | if start_test: 63 | all_output = outputs.data.float() 64 | all_label = labels.data.float() 65 | _, predict = torch.max(all_output, 1) 66 | ind_1K = all_label.gt(39) 67 | ind_4K = all_label.le(39) 68 | COR = COR + torch.sum(torch.squeeze(predict).float() == all_label) 69 | Total = Total + all_label.size()[0] 70 | COR_1k = COR_1k + torch.sum(torch.squeeze(predict).float()[ind_1K] == all_label[ind_1K]) 71 | Total_1k = Total_1k + torch.sum(ind_1K) 72 | COR_4k = COR_4k + torch.sum(torch.squeeze(predict).float()[ind_4K] == all_label[ind_4K]) 73 | Total_4k = Total_4k + torch.sum(ind_4K) 74 | print('Unknown_acc: '+ str(float(COR_1k)/float(Total_1k))) 75 | print('Known_acc: '+ str(float(COR_4k)/float(Total_4k))) 76 | accuracy = float(COR)/float(Total) 77 | return accuracy 78 | 79 | def train_classification(config): 80 | ## set pre-process 81 | prep_train = prep.image_train(resize_size=256, crop_size=224) 82 | prep_test = prep.image_test(resize_size=256, crop_size=224) 83 | 84 | ## set loss 85 | class_criterion = nn.CrossEntropyLoss() 86 | 87 | ## prepare data 88 | TRAIN_LIST = 'data/WEB_3D3_2.txt' 89 | TEST_LIST = 'data/new_AwA2.txt' 90 | BSZ = args.batch_size 91 | 92 | dsets_train = ImageList(open(TRAIN_LIST).readlines(), shape = (args.img_size,args.img_size), transform=prep_train) 93 | loaders_train = util_data.DataLoader(dsets_train, batch_size=BSZ, shuffle=True, num_workers=8, pin_memory=True) 94 | 95 | dsets_test = ImageList(open(TEST_LIST).readlines(), shape = (args.img_size,args.img_size),transform=prep_test) 96 | loaders_test = util_data.DataLoader(dsets_test, batch_size=BSZ, shuffle=True, num_workers=4, pin_memory=True) 97 | 98 | dsets_val = ImageList(open(TEST_LIST).readlines(), shape = (args.img_size,args.img_size),transform=prep_train) 99 | loaders_val = util_data.DataLoader(dsets_val, batch_size=BSZ, shuffle=True, num_workers=4, pin_memory=True) 100 | 101 | ## set base network 102 | class_num = 40 103 | all_num = 50 104 | net_config = config["network"] 105 | base_network = network.network_dict[net_config["name"]]() 106 | classifier_layer1 = nn.Linear(base_network.output_num(), class_num) 107 | classifier_layer2 = nn.Linear(base_network.output_num(), all_num-class_num) 108 | 109 | ## initialization 110 | base_network.load_state_dict(torch.load('model/base_net_pretrained_on_I2AwA2_source_only.pkl')) 111 | weight_bias=torch.load('model/awa_50_cls_basic')['fc50'] 112 | classifier_layer1.weight.data = weight_bias[:class_num,:2048] 113 | classifier_layer2.weight.data = weight_bias[class_num:,:2048] 114 | classifier_layer1.bias.data = weight_bias[:class_num,-1] 115 | classifier_layer2.bias.data = weight_bias[class_num:,-1] 116 | 117 | #gpu 118 | use_gpu = torch.cuda.is_available() 119 | if use_gpu: 120 | classifier_layer1 = classifier_layer1.cuda() 121 | classifier_layer2 = classifier_layer2.cuda() 122 | base_network = base_network.cuda() 123 | 124 | ## collect parameters 125 | parameter_list = [{"params": classifier_layer2.parameters(), "lr":2}, 126 | {"params": classifier_layer1.parameters(), "lr":5}, 127 | {"params": base_network.parameters(), "lr":1},] 128 | 
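    # The per-group "lr" values above act as relative multipliers: under the
    # "inv" schedule configured in this script (init_lr=args.lr, gamma=0.0005,
    # power=0.75), each group trains at multiplier * base_lr(i). A sketch of
    # the effective rates at iteration i, assuming the usual inverse-decay form:
    #
    #   base  = args.lr * (1 + 0.0005 * i) ** (-0.75)
    #   rates = [2 * base,   # unseen-class head (classifier_layer2)
    #            5 * base,   # seen-class head (classifier_layer1)
    #            1 * base]   # pretrained backbone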
129 | 130 | ## set optimizer 131 | optimizer_config = config["optimizer"] 132 | optimizer = optim_dict[optimizer_config["type"]](parameter_list, **(optimizer_config["optim_params"])) 133 | param_lr = [] 134 | for param_group in optimizer.param_groups: 135 | param_lr.append(param_group["lr"]) 136 | schedule_param = optimizer_config["lr_param"] 137 | lr_scheduler = lr_schedule.schedule_dict[optimizer_config["lr_type"]] 138 | 139 | # dataloader length 140 | len_train_source = len(loaders_train) - 1 141 | len_test_source = len(loaders_test) - 1 142 | len_val_source = len(loaders_val) - 1 143 | optimizer.zero_grad() 144 | 145 | #train 146 | for i in range(config["num_iterations"]): 147 | if ((i + 0) % config["test_interval"] == 0 and i > 100) or i== config["num_iterations"]-1 or i==0: 148 | base_network.train(False) 149 | classifier_layer1.train(False) 150 | classifier_layer2.train(False) 151 | print(str(i)+' ACC:') 152 | iter_target = iter(loaders_test) 153 | print(image_classification_test(iter_target,len_test_source, base_network, classifier_layer1,classifier_layer2, gpu=use_gpu)) 154 | iter_target = iter(loaders_test) 155 | 156 | #model train 157 | classifier_layer1.train(True) 158 | classifier_layer2.train(True) 159 | base_network.train(True) 160 | 161 | optimizer = lr_scheduler(param_lr, optimizer, i, **schedule_param) 162 | 163 | #iter dataloader 164 | if i % len_train_source == 0: 165 | iter_source = iter(loaders_train) 166 | if i % (len_test_source ) == 0: 167 | iter_target = iter(loaders_test) 168 | if i % (len_val_source ) == 0: 169 | iter_val = iter(loaders_val) 170 | 171 | inputs_source, labels_source = iter_source.next() 172 | inputs_target, _ = iter_val.next() 173 | 174 | if use_gpu: 175 | inputs_source, labels_source, inputs_target = Variable(inputs_source).cuda(), Variable(labels_source).cuda(), Variable(inputs_target).cuda() 176 | else: 177 | inputs_source, labels_source, inputs_target = Variable(inputs_source), Variable(labels_source),Variable(inputs_target) 178 | #network 179 | features_source = base_network(inputs_source) 180 | features_target = base_network(inputs_target) 181 | 182 | outputs_source1 = classifier_layer1(features_source) 183 | outputs_source2 = classifier_layer2(features_source) 184 | outputs_target1 = classifier_layer1(features_target) 185 | outputs_target2 = classifier_layer2(features_target) 186 | 187 | outputs_source = torch.cat((outputs_source1,outputs_source2),dim=1) 188 | outputs_target = torch.cat((outputs_target1,outputs_target2),dim=1) 189 | 190 | cls_loss = class_criterion(outputs_source1, labels_source) 191 | 192 | # transfer loss: BNM (Batch Nuclear-norm Maximization) and alternatives 193 | target_softmax = F.softmax(outputs_target, dim=1) 194 | if args.method=='ENT': 195 | transfer_loss = torch.mean(Entropy(target_softmax))/np.log(target_softmax.shape[1]) 196 | elif args.method=='BNM': 197 | transfer_loss = -torch.norm(target_softmax,'nuc')/target_softmax.shape[0] 198 | elif args.method=='BFM': 199 | transfer_loss = -torch.sqrt(torch.mean(torch.svd(target_softmax)[1]**2)) 200 | elif args.method=='balance': 201 | WEIGHT = torch.sum(torch.softmax(outputs_source, dim=1)[:,:40] * target_softmax[:,:40], 1) 202 | max_s = torch.max(target_softmax[:,:40], 1)[0] 203 | entropy_loss = torch.sum(torch.sum(target_softmax[:,class_num:],1) * (1.0 - max_s.gt(0.5).float()*WEIGHT.gt(0.6).float()))/torch.sum(1.0 - max_s.gt(0.5).float()*WEIGHT.gt(0.6).float()) 204 | transfer_loss = entropy_loss + 0.1 * 0.1/entropy_loss 205 | 206 | total_loss = cls_loss + transfer_loss * args.w_transfer 
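        # Shape intuition for the branches above, as a standalone sketch
        # (hypothetical sizes; not part of the training loop):
        #
        #   P = F.softmax(torch.randn(48, 50), dim=1)             # (batch, classes)
        #   ent = torch.mean(Entropy(P)) / np.log(P.shape[1])     # normalized entropy
        #   bnm = -torch.norm(P, 'nuc') / P.shape[0]              # BNM objective
        #   bfm = -torch.sqrt(torch.mean(torch.svd(P)[1] ** 2))   # Frobenius variant
        #
        # Minimizing -||P||_* / batch pushes the batch prediction matrix toward
        # larger singular values, i.e. more confident rows and more diverse
        # predicted classes, which is the BNM trade-off between discriminability
        # and diversity.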
207 | print("Step "+str(i)+": cls_loss: "+str(cls_loss.cpu().data.numpy())+ 208 | " transfer_loss: "+str(transfer_loss.cpu().data.numpy())) 209 | 210 | total_loss.backward() 211 | if (i+1)% config["opt_num"] ==0: 212 | optimizer.step() 213 | optimizer.zero_grad() 214 | 215 | 216 | if __name__ == "__main__": 217 | parser = argparse.ArgumentParser(description='Transfer Learning') 218 | parser.add_argument('--gpu_id', type=str, nargs='?', default='0', help="device id to run") 219 | parser.add_argument('--batch_size', type=int, nargs='?', default=48, help="batch size") 220 | parser.add_argument('--img_size', type=int, nargs='?', default=256, help="image size") 221 | parser.add_argument('--method', type=str, nargs='?', default='BNM', help="loss name") 222 | parser.add_argument('--w_transfer', type=float, nargs='?', default=2., help="weight of BNM") 223 | parser.add_argument('--lr', type=float, nargs='?', default=0.0001, help="percent of unseen data") 224 | args = parser.parse_args() 225 | os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu_id 226 | 227 | config = {} 228 | config["num_iterations"] = 6000 229 | config["test_interval"] = 400 230 | config["opt_num"] = 1 231 | config["network"] = {"name":"ResNet50"} 232 | config["optimizer"] = {"type":"SGD", "optim_params":{"lr":1.0, "momentum":0.9, "weight_decay":0.0001, "nesterov":True}, "lr_type":"inv", "lr_param":{"init_lr":args.lr, "gamma":0.0005, "power":0.75} } 233 | print(config) 234 | print(args) 235 | train_classification(config) 236 | -------------------------------------------------------------------------------- /DA/BNM/train_image.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import os.path as osp 4 | 5 | import numpy as np 6 | import torch 7 | import torch.nn as nn 8 | import torch.optim as optim 9 | import network 10 | import pre_process as prep 11 | from torch.utils.data import DataLoader 12 | import lr_schedule 13 | import data_list 14 | from data_list import ImageList 15 | from torch.autograd import Variable 16 | import random 17 | import pdb 18 | import math 19 | 20 | 21 | def image_classification_test(loader, model): 22 | start_test = True 23 | with torch.no_grad(): 24 | iter_test = iter(loader["test"]) 25 | for i in range(len(loader['test'])): 26 | data = iter_test.next() 27 | inputs = data[0] 28 | labels = data[1] 29 | inputs = inputs.cuda() 30 | labels = labels.cuda() 31 | _, outputs = model(inputs) 32 | if start_test: 33 | all_output = outputs.float() 34 | all_label = labels.float() 35 | start_test = False 36 | else: 37 | all_output = torch.cat((all_output, outputs.float()), 0) 38 | all_label = torch.cat((all_label, labels.float()), 0) 39 | _, predict = torch.max(all_output, 1) 40 | accuracy = torch.sum(torch.squeeze(predict).float() == all_label).item() / float(all_label.size()[0]) 41 | return accuracy 42 | 43 | 44 | def train(config): 45 | ## set pre-process 46 | prep_dict = {} 47 | dsets = {} 48 | dset_loaders = {} 49 | data_config = config["data"] 50 | prep_config = config["prep"] 51 | if "webcam" in data_config["source"]["list_path"] or "dslr" in data_config["source"]["list_path"]: 52 | prep_dict["source"] = prep.image_train(**config["prep"]['params']) 53 | else: 54 | prep_dict["source"] = prep.image_target(**config["prep"]['params'])#TODO 55 | 56 | if "webcam" in data_config["target"]["list_path"] or "dslr" in data_config["target"]["list_path"]: 57 | prep_dict["target"] = prep.image_train(**config["prep"]['params']) 58 | else: 59 | 
prep_dict["target"] = prep.image_target(**config["prep"]['params'])#TODO 60 | 61 | prep_dict["test"] = prep.image_test(**config["prep"]['params']) 62 | 63 | ### set pre-process 64 | #prep_dict = {} 65 | #dsets = {} 66 | #dset_loaders = {} 67 | #data_config = config["data"] 68 | #prep_config = config["prep"] 69 | #prep_dict["source"] = prep.image_target(**config["prep"]['params']) 70 | #prep_dict["target"] = prep.image_target(**config["prep"]['params']) 71 | #prep_dict["test"] = prep.image_test(**config["prep"]['params']) 72 | 73 | ## prepare data 74 | train_bs = data_config["source"]["batch_size"] 75 | test_bs = data_config["test"]["batch_size"] 76 | dsets["source"] = ImageList(open(data_config["source"]["list_path"]).readlines(), \ 77 | transform=prep_dict["source"]) 78 | dset_loaders["source"] = DataLoader(dsets["source"], batch_size=train_bs, \ 79 | shuffle=True, num_workers=4, drop_last=True) 80 | dsets["target"] = ImageList(open(data_config["target"]["list_path"]).readlines(), \ 81 | transform=prep_dict["target"]) 82 | dset_loaders["target"] = DataLoader(dsets["target"], batch_size=train_bs, \ 83 | shuffle=True, num_workers=4, drop_last=True) 84 | 85 | dsets["test"] = ImageList(open(data_config["test"]["list_path"]).readlines(), \ 86 | transform=prep_dict["test"]) 87 | dset_loaders["test"] = DataLoader(dsets["test"], batch_size=test_bs, \ 88 | shuffle=False, num_workers=4) 89 | 90 | ## set base network 91 | class_num = config["network"]["params"]["class_num"] 92 | net_config = config["network"] 93 | base_network = net_config["name"](**net_config["params"]) 94 | base_network = base_network.cuda() 95 | 96 | ## set optimizer 97 | parameter_list = base_network.get_parameters() 98 | optimizer_config = config["optimizer"] 99 | optimizer = optimizer_config["type"](parameter_list, \ 100 | **(optimizer_config["optim_params"])) 101 | param_lr = [] 102 | for param_group in optimizer.param_groups: 103 | param_lr.append(param_group["lr"]) 104 | schedule_param = optimizer_config["lr_param"] 105 | lr_scheduler = lr_schedule.schedule_dict[optimizer_config["lr_type"]] 106 | 107 | #multi gpu 108 | gpus = config['gpu'].split(',') 109 | if len(gpus) > 1: 110 | base_network = nn.DataParallel(base_network, device_ids=[int(i) for i,k in enumerate(gpus)]) 111 | 112 | ## train 113 | len_train_source = len(dset_loaders["source"]) 114 | len_train_target = len(dset_loaders["target"]) 115 | transfer_loss_value = classifier_loss_value = total_loss_value = 0.0 116 | best_acc = 0.0 117 | for i in range(config["num_iterations"]): 118 | #test 119 | if i % config["test_interval"] == config["test_interval"] - 1: 120 | base_network.train(False) 121 | temp_acc = image_classification_test(dset_loaders, base_network) 122 | temp_model = nn.Sequential(base_network) 123 | if temp_acc > best_acc: 124 | best_acc = temp_acc 125 | best_model = temp_model 126 | log_str = "iter: {:05d}, precision: {:.5f}".format(i, temp_acc) 127 | config["out_file"].write(log_str+"\n") 128 | config["out_file"].flush() 129 | print(log_str) 130 | #save model 131 | if i % config["snapshot_interval"] == 0 and i: 132 | torch.save(base_network.state_dict(), osp.join(config["output_path"], \ 133 | "iter_{:05d}_model.pth.tar".format(i))) 134 | 135 | ## train one iter 136 | base_network.train(True) 137 | loss_params = config["loss"] 138 | optimizer = lr_scheduler(optimizer, i, **schedule_param) 139 | optimizer.zero_grad() 140 | 141 | #dataloader 142 | if i % len_train_source == 0: 143 | iter_source = iter(dset_loaders["source"]) 144 | if i % len_train_target 
== 0: 145 | iter_target = iter(dset_loaders["target"]) 146 | 147 | #network 148 | inputs_source, labels_source = iter_source.next() 149 | inputs_target, _ = iter_target.next() 150 | inputs_source, inputs_target, labels_source = inputs_source.cuda(), inputs_target.cuda(), labels_source.cuda() 151 | features_source, outputs_source = base_network(inputs_source) 152 | features_target, outputs_target = base_network(inputs_target) 153 | features = torch.cat((features_source, features_target), dim=0) 154 | outputs = torch.cat((outputs_source, outputs_target), dim=0) 155 | softmax_src = nn.Softmax(dim=1)(outputs_source) 156 | softmax_tgt = nn.Softmax(dim=1)(outputs_target) 157 | softmax_out = torch.cat((softmax_src, softmax_tgt), dim=0) 158 | 159 | #loss calculation 160 | classifier_loss = nn.CrossEntropyLoss()(outputs_source, labels_source) 161 | 162 | if config["method"]=="BNM": 163 | _, s_tgt, _ = torch.svd(softmax_tgt) 164 | transfer_loss = -torch.mean(s_tgt) # BNM: maximize the mean singular value of the batch predictions 165 | elif config["method"]=="FBNM": 166 | list_svd,_ = torch.sort(torch.sqrt(torch.sum(torch.pow(softmax_tgt,2),dim=0)), descending=True) # sorted column L2 norms 167 | transfer_loss = - torch.mean(list_svd[:min(softmax_tgt.shape[0],softmax_tgt.shape[1])]) # FBNM: approximate the nuclear norm by the largest column norms, avoiding a full SVD 168 | elif config["method"]=="BFM": 169 | _, s_tgt, _ = torch.svd(softmax_tgt) 170 | transfer_loss = -torch.sqrt(torch.sum(s_tgt*s_tgt)/s_tgt.shape[0]) # BFM: maximize the Frobenius norm of the predictions 171 | elif config["method"]=="ENT": 172 | transfer_loss = -torch.mean(torch.sum(softmax_tgt*torch.log(softmax_tgt+1e-8),dim=1))/np.log(softmax_tgt.shape[1]) # ENT: normalized entropy minimization 173 | total_loss = loss_params["trade_off"] * transfer_loss + classifier_loss 174 | 175 | if i % config["print_num"] == 0: 176 | log_str = "iter: {:05d}, transfer_loss: {:.5f}, classifier_loss: {:.5f}".format(i, transfer_loss, classifier_loss) 177 | config["out_file"].write(log_str+"\n") 178 | config["out_file"].flush() 179 | #print(log_str) 180 | 181 | total_loss.backward() 182 | optimizer.step() 183 | torch.save(best_model, osp.join(config["output_path"], "best_model.pth.tar")) 184 | return best_acc 185 | 186 | if __name__ == "__main__": 187 | parser = argparse.ArgumentParser(description='Transfer Learning') 188 | parser.add_argument('--gpu_id', type=str, nargs='?', default='0', help="device id to run") 189 | parser.add_argument('--net', type=str, default='ResNet50', help="Options: ResNet50") 190 | parser.add_argument('--dset', type=str, default='office-home', help="The dataset or source dataset used") 191 | parser.add_argument('--s_dset_path', type=str, default='../data/office-home/Art.txt', help="The source dataset path list") 192 | parser.add_argument('--t_dset_path', type=str, default='../data/office-home/Clipart.txt', help="The target dataset path list") 193 | parser.add_argument('--test_interval', type=int, default=500, help="interval of two continuous test phase") 194 | parser.add_argument('--snapshot_interval', type=int, default=5000, help="interval of two continuous output model") 195 | parser.add_argument('--print_num', type=int, default=100, help="interval of two print loss") 196 | parser.add_argument('--num_iterations', type=int, default=6002, help="iteration num") 197 | parser.add_argument('--output_dir', type=str, default='san', help="output directory of our model (in ../snapshot directory)") 198 | parser.add_argument('--method', type=str, default='BNM', help="Options: BNM, ENT, BFM, FBNM") 199 | parser.add_argument('--lr', type=float, default=0.001, help="learning rate") 200 | parser.add_argument('--trade_off', type=float, default=1, help="parameter for transfer loss") 201 | 
parser.add_argument('--batch_size', type=int, default=36, help="batch size") 202 | args = parser.parse_args() 203 | os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu_id 204 | 205 | # train config 206 | config = {} 207 | config["gpu"] = args.gpu_id 208 | config["method"] = args.method 209 | config["num_iterations"] = args.num_iterations 210 | config["print_num"] = args.print_num 211 | config["test_interval"] = args.test_interval 212 | config["snapshot_interval"] = args.snapshot_interval 213 | config["output_for_test"] = True 214 | config["output_path"] = args.dset + "/" + args.output_dir 215 | 216 | if not osp.exists(config["output_path"]): 217 | os.system('mkdir -p '+config["output_path"]) 218 | config["out_file"] = open(osp.join(config["output_path"], "log.txt"), "w") 219 | if not osp.exists(config["output_path"]): 220 | os.mkdir(config["output_path"]) 221 | 222 | config["prep"] = {'params':{"resize_size":256, "crop_size":224, 'alexnet':False}} 223 | config["loss"] = {"trade_off":args.trade_off} 224 | if "ResNet" in args.net: 225 | config["network"] = {"name":network.ResNetFc, \ 226 | "params":{"resnet_name":args.net, "use_bottleneck":False, "bottleneck_dim":256, "new_cls":True} } 227 | else: 228 | raise ValueError('Network cannot be recognized. Please define your own dataset here.') 229 | 230 | config["optimizer"] = {"type":optim.SGD, "optim_params":{'lr':args.lr, "momentum":0.9, \ 231 | "weight_decay":0.0005, "nesterov":True}, "lr_type":"inv", \ 232 | "lr_param":{"lr":args.lr, "gamma":0.001, "power":0.75} } 233 | 234 | config["dataset"] = args.dset 235 | config["data"] = {"source":{"list_path":args.s_dset_path, "batch_size":args.batch_size}, \ 236 | "target":{"list_path":args.t_dset_path, "batch_size":args.batch_size}, \ 237 | "test":{"list_path":args.t_dset_path, "batch_size":args.batch_size}} 238 | 239 | if config["dataset"] == "office": 240 | if ("webcam" in args.s_dset_path and "amazon" in args.t_dset_path) or \ 241 | ("amazon" in args.s_dset_path and "webcam" in args.t_dset_path) or \ 242 | ("dslr" in args.s_dset_path and "amazon" in args.t_dset_path): 243 | config["optimizer"]["lr_param"]["lr"] = 0.001 # optimal parameters 244 | elif ("amazon" in args.s_dset_path and "dslr" in args.t_dset_path) or \ 245 | ("dslr" in args.s_dset_path and "webcam" in args.t_dset_path) or \ 246 | ("webcam" in args.s_dset_path and "dslr" in args.t_dset_path): 247 | config["optimizer"]["lr_param"]["lr"] = 0.0003 # optimal parameters 248 | config["network"]["params"]["class_num"] = 31 249 | elif config["dataset"] == "office-home": 250 | config["optimizer"]["lr_param"]["lr"] = 0.001 # optimal parameters 251 | config["network"]["params"]["class_num"] = 65 252 | else: 253 | raise ValueError('Dataset cannot be recognized. 
Please define your own dataset here.') 254 | config["out_file"].write(str(config)) 255 | config["out_file"].flush() 256 | print('begin') 257 | train(config) 258 | -------------------------------------------------------------------------------- /DA/CDAN-BNM/train_image.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import os.path as osp 4 | 5 | import numpy as np 6 | import torch 7 | import torch.nn as nn 8 | import torch.optim as optim 9 | import network 10 | import loss 11 | import pre_process as prep 12 | from torch.utils.data import DataLoader 13 | import lr_schedule 14 | import data_list 15 | from data_list import ImageList 16 | from torch.autograd import Variable 17 | import random 18 | import pdb 19 | import math 20 | 21 | 22 | def image_classification_test(loader, model, test_10crop=True): 23 | start_test = True 24 | with torch.no_grad(): 25 | if test_10crop: 26 | iter_test = [iter(loader['test'][i]) for i in range(10)] 27 | for i in range(len(loader['test'][0])): 28 | data = [iter_test[j].next() for j in range(10)] 29 | inputs = [data[j][0] for j in range(10)] 30 | labels = data[0][1] 31 | for j in range(10): 32 | inputs[j] = inputs[j].cuda() 33 | labels = labels 34 | outputs = [] 35 | for j in range(10): 36 | _, predict_out = model(inputs[j]) 37 | outputs.append(nn.Softmax(dim=1)(predict_out)) 38 | outputs = sum(outputs) 39 | if start_test: 40 | all_output = outputs.float().cpu() 41 | all_label = labels.float() 42 | start_test = False 43 | else: 44 | all_output = torch.cat((all_output, outputs.float().cpu()), 0) 45 | all_label = torch.cat((all_label, labels.float()), 0) 46 | else: 47 | iter_test = iter(loader["test"]) 48 | for i in range(len(loader['test'])): 49 | data = iter_test.next() 50 | inputs = data[0] 51 | labels = data[1] 52 | inputs = inputs.cuda() 53 | labels = labels.cuda() 54 | _, outputs = model(inputs) 55 | if start_test: 56 | all_output = outputs.float() 57 | all_label = labels.float() 58 | start_test = False 59 | else: 60 | all_output = torch.cat((all_output, outputs.float()), 0) 61 | all_label = torch.cat((all_label, labels.float()), 0) 62 | _, predict = torch.max(all_output, 1) 63 | accuracy = torch.sum(torch.squeeze(predict).float() == all_label).item() / float(all_label.size()[0]) 64 | return accuracy 65 | 66 | 67 | def train(config): 68 | ## set pre-process 69 | prep_dict = {} 70 | dsets = {} 71 | dset_loaders = {} 72 | data_config = config["data"] 73 | prep_config = config["prep"] 74 | print('hello world') 75 | if "webcam" in data_config["source"]["list_path"] or "dslr" in data_config["source"]["list_path"]: 76 | prep_dict["source"] = prep.image_train(**config["prep"]['params']) 77 | else: 78 | prep_dict["source"] = prep.image_target(**config["prep"]['params']) 79 | 80 | if "webcam" in data_config["target"]["list_path"] or "dslr" in data_config["target"]["list_path"]: 81 | prep_dict["target"] = prep.image_train(**config["prep"]['params']) 82 | else: 83 | prep_dict["target"] = prep.image_target(**config["prep"]['params']) 84 | 85 | if prep_config["test_10crop"]: 86 | prep_dict["test"] = prep.image_test_10crop(**config["prep"]['params']) 87 | else: 88 | prep_dict["test"] = prep.image_test(**config["prep"]['params']) 89 | 90 | ## prepare data 91 | train_bs = data_config["source"]["batch_size"] 92 | test_bs = data_config["test"]["batch_size"] 93 | dsets["source"] = ImageList(open(data_config["source"]["list_path"]).readlines(), \ 94 | transform=prep_dict["source"]) 95 | 
dset_loaders["source"] = DataLoader(dsets["source"], batch_size=train_bs, \ 96 | shuffle=True, num_workers=4, drop_last=True) 97 | dsets["target"] = ImageList(open(data_config["target"]["list_path"]).readlines(), \ 98 | transform=prep_dict["target"]) 99 | dset_loaders["target"] = DataLoader(dsets["target"], batch_size=train_bs, \ 100 | shuffle=True, num_workers=4, drop_last=True) 101 | 102 | if prep_config["test_10crop"]: 103 | for i in range(10): 104 | dsets["test"] = [ImageList(open(data_config["test"]["list_path"]).readlines(), \ 105 | transform=prep_dict["test"][i]) for i in range(10)] 106 | dset_loaders["test"] = [DataLoader(dset, batch_size=test_bs, \ 107 | shuffle=False, num_workers=4) for dset in dsets['test']] 108 | else: 109 | dsets["test"] = ImageList(open(data_config["test"]["list_path"]).readlines(), \ 110 | transform=prep_dict["test"]) 111 | dset_loaders["test"] = DataLoader(dsets["test"], batch_size=test_bs, \ 112 | shuffle=False, num_workers=4) 113 | 114 | class_num = config["network"]["params"]["class_num"] 115 | 116 | ## set base network 117 | net_config = config["network"] 118 | base_network = net_config["name"](**net_config["params"]) 119 | base_network = base_network.cuda() 120 | 121 | ## add additional network for some CDANs 122 | if config["loss"]["random"]: 123 | random_layer = network.RandomLayer([base_network.output_num(), class_num], config["loss"]["random_dim"]) 124 | ad_net = network.AdversarialNetwork(config["loss"]["random_dim"], 1024) 125 | else: 126 | random_layer = None 127 | ad_net = network.AdversarialNetwork(base_network.output_num() * class_num, 1024) 128 | if config["loss"]["random"]: 129 | random_layer.cuda() 130 | ad_net = ad_net.cuda() 131 | parameter_list = base_network.get_parameters() + ad_net.get_parameters() 132 | 133 | ## set optimizer 134 | optimizer_config = config["optimizer"] 135 | optimizer = optimizer_config["type"](parameter_list, \ 136 | **(optimizer_config["optim_params"])) 137 | param_lr = [] 138 | for param_group in optimizer.param_groups: 139 | param_lr.append(param_group["lr"]) 140 | schedule_param = optimizer_config["lr_param"] 141 | lr_scheduler = lr_schedule.schedule_dict[optimizer_config["lr_type"]] 142 | 143 | gpus = config['gpu'].split(',') 144 | if len(gpus) > 1: 145 | ad_net = nn.DataParallel(ad_net, device_ids=[int(i) for i in gpus]) 146 | base_network = nn.DataParallel(base_network, device_ids=[int(i) for i in gpus]) 147 | 148 | 149 | ## train 150 | len_train_source = len(dset_loaders["source"]) 151 | len_train_target = len(dset_loaders["target"]) 152 | transfer_loss_value = classifier_loss_value = total_loss_value = 0.0 153 | best_acc = 0.0 154 | for i in range(config["num_iterations"]): 155 | if i % config["test_interval"] == config["test_interval"] - 1: 156 | base_network.train(False) 157 | temp_acc = image_classification_test(dset_loaders, \ 158 | base_network, test_10crop=prep_config["test_10crop"]) 159 | temp_model = nn.Sequential(base_network) 160 | if temp_acc > best_acc: 161 | best_acc = temp_acc 162 | best_model = temp_model 163 | log_str = "iter: {:05d}, precision: {:.5f}".format(i, temp_acc) 164 | config["out_file"].write(log_str+"\n") 165 | config["out_file"].flush() 166 | print(log_str) 167 | if i % config["snapshot_interval"] == 0: 168 | torch.save(nn.Sequential(base_network), osp.join(config["output_path"], \ 169 | "iter_{:05d}_model.pth.tar".format(i))) 170 | 171 | loss_params = config["loss"] 172 | ## train one iter 173 | base_network.train(True) 174 | ad_net.train(True) 175 | optimizer = 
lr_scheduler(optimizer, i, **schedule_param) 176 | optimizer.zero_grad() 177 | if i % len_train_source == 0: 178 | iter_source = iter(dset_loaders["source"]) 179 | if i % len_train_target == 0: 180 | iter_target = iter(dset_loaders["target"]) 181 | 182 | inputs_source, labels_source = iter_source.next() 183 | inputs_target, labels_target = iter_target.next() 184 | inputs_source, inputs_target, labels_source = inputs_source.cuda(), inputs_target.cuda(), labels_source.cuda() 185 | features_source, outputs_source = base_network(inputs_source) 186 | features_target, outputs_target = base_network(inputs_target) 187 | features = torch.cat((features_source, features_target), dim=0) 188 | outputs = torch.cat((outputs_source, outputs_target), dim=0) 189 | 190 | softmax_src = nn.Softmax(dim=1)(outputs_source) 191 | softmax_tgt = nn.Softmax(dim=1)(outputs_target) 192 | softmax_out = torch.cat((softmax_src, softmax_tgt), dim=0) 193 | 194 | if config['CDAN'] == 'CDAN+E': 195 | entropy = loss.Entropy(softmax_out) 196 | transfer_loss = loss.CDAN([features, softmax_out], ad_net, entropy, network.calc_coeff(i), random_layer) 197 | elif config['CDAN'] == 'CDAN': 198 | transfer_loss = loss.CDAN([features, softmax_out], ad_net, None, None, random_layer) 199 | else: 200 | raise ValueError('Method cannot be recognized.') 201 | 202 | _, s_tgt, _ = torch.svd(softmax_tgt) 203 | if config["method"]=="BNM": 204 | method_loss = -torch.mean(s_tgt) 205 | elif config["method"]=="BFM": 206 | method_loss = -torch.sqrt(torch.sum(s_tgt*s_tgt)/s_tgt.shape[0]) 207 | elif config["method"]=="ENT": 208 | method_loss = -torch.mean(torch.sum(softmax_tgt*torch.log(softmax_tgt+1e-8),dim=1))/np.log(softmax_tgt.shape[1]) 209 | elif config["method"]=="NO": 210 | method_loss = 0 211 | 212 | classifier_loss = nn.CrossEntropyLoss()(outputs_source, labels_source) 213 | total_loss = loss_params["trade_off"] * transfer_loss + classifier_loss + loss_params["lambda_method"] * method_loss 214 | total_loss.backward() 215 | optimizer.step() 216 | if i % config['print_num'] == 0: 217 | log_str = "iter: {:05d}, classification: {:.5f}, transfer: {:.5f}, method: {:.5f}".format(i, classifier_loss, transfer_loss, method_loss) 218 | config["out_file"].write(log_str+"\n") 219 | config["out_file"].flush() 220 | if config['show']: 221 | print(log_str) 222 | torch.save(best_model, osp.join(config["output_path"], "best_model.pth.tar")) 223 | return best_acc 224 | 225 | if __name__ == "__main__": 226 | parser = argparse.ArgumentParser(description='Conditional Domain Adversarial Network') 227 | parser.add_argument('--CDAN', type=str, default='CDAN+E', choices=['CDAN', 'CDAN+E']) 228 | parser.add_argument('--method', type=str, default='BNM', choices=['BNM', 'BFM', 'ENT','NO']) 229 | parser.add_argument('--gpu_id', type=str, nargs='?', default='0', help="device id to run") 230 | parser.add_argument('--net', type=str, default='ResNet50', choices=["ResNet18", "ResNet34", "ResNet50", "ResNet101", "ResNet152", "VGG11", "VGG13", "VGG16", "VGG19", "VGG11BN", "VGG13BN", "VGG16BN", "VGG19BN"]) 231 | parser.add_argument('--dset', type=str, default='office', choices=['office', 'image-clef', 'visda', 'office-home'], help="The dataset or source dataset used") 232 | parser.add_argument('--s_dset_path', type=str, default='../data/office/amazon_list.txt', help="The source dataset path list") 233 | parser.add_argument('--t_dset_path', type=str, default='../data/office/webcam_list.txt', help="The target dataset path list") 234 | parser.add_argument('--test_interval', 
type=int, default=500, help="interval of two continuous test phase") 235 | parser.add_argument('--print_num', type=int, default=100, help="interval of two print loss") 236 | parser.add_argument('--batch_size', type=int, default=36, help="batch size") 237 | parser.add_argument('--num_iterations', type=int, default=10000, help="total iterations") 238 | parser.add_argument('--snapshot_interval', type=int, default=5000, help="interval of two continuous output model") 239 | parser.add_argument('--output_dir', type=str, default='san', help="output directory of our model (in ../snapshot directory)") 240 | parser.add_argument('--lr', type=float, default=0.001, help="learning rate") 241 | parser.add_argument('--trade_off', type=float, default=1.0, help="parameter for CDAN") 242 | parser.add_argument('--lambda_method', type=float, default=0.1, help="parameter for method") 243 | parser.add_argument('--random', action='store_true', help="whether to use random projection") 244 | parser.add_argument('--show', action='store_true', help="whether to print the loss values") 245 | args = parser.parse_args() 246 | os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu_id 247 | 248 | # train config 249 | config = {} 250 | config['CDAN'] = args.CDAN 251 | config['method'] = args.method 252 | config["gpu"] = args.gpu_id 253 | config["num_iterations"] = args.num_iterations 254 | config["print_num"] = args.print_num 255 | config["test_interval"] = args.test_interval 256 | config["snapshot_interval"] = args.snapshot_interval 257 | config["output_for_test"] = True 258 | config["show"] = args.show 259 | config["output_path"] = args.dset + '/' + args.output_dir 260 | if not osp.exists(config["output_path"]): 261 | os.system('mkdir -p '+config["output_path"]) 262 | config["out_file"] = open(osp.join(config["output_path"], "log.txt"), "w") 263 | if not osp.exists(config["output_path"]): 264 | os.mkdir(config["output_path"]) 265 | 266 | config["prep"] = {"test_10crop":False, 'params':{"resize_size":256, "crop_size":224, 'alexnet':False}} 267 | config["loss"] = {"trade_off":args.trade_off, "lambda_method":args.lambda_method} 268 | if "AlexNet" in args.net: 269 | config["prep"]['params']['alexnet'] = True 270 | config["prep"]['params']['crop_size'] = 227 271 | config["network"] = {"name":network.AlexNetFc, \ 272 | "params":{"use_bottleneck":True, "bottleneck_dim":256, "new_cls":True} } 273 | elif "ResNet" in args.net: 274 | config["network"] = {"name":network.ResNetFc, \ 275 | "params":{"resnet_name":args.net, "use_bottleneck":True, "bottleneck_dim":256, "new_cls":True} } 276 | elif "VGG" in args.net: 277 | config["network"] = {"name":network.VGGFc, \ 278 | "params":{"vgg_name":args.net, "use_bottleneck":True, "bottleneck_dim":256, "new_cls":True} } 279 | config["loss"]["random"] = args.random 280 | config["loss"]["random_dim"] = 1024 281 | 282 | config["optimizer"] = {"type":optim.SGD, "optim_params":{'lr':args.lr, "momentum":0.9, \ 283 | "weight_decay":0.0005, "nesterov":True}, "lr_type":"inv", \ 284 | "lr_param":{"lr":args.lr, "gamma":0.001, "power":0.75} } 285 | 286 | config["dataset"] = args.dset 287 | config["data"] = {"source":{"list_path":args.s_dset_path, "batch_size":args.batch_size}, \ 288 | "target":{"list_path":args.t_dset_path, "batch_size":args.batch_size}, \ 289 | "test":{"list_path":args.t_dset_path, "batch_size":args.batch_size}} 290 | 291 | if config["dataset"] == "office": 292 | if ("webcam" in args.s_dset_path and "dslr" in args.t_dset_path) or \ 293 | ("webcam" in args.s_dset_path and "amazon" in 
args.t_dset_path) or \ 294 | ("dslr" in args.s_dset_path and "amazon" in args.t_dset_path): 295 | config["optimizer"]["lr_param"]["lr"] = 0.001 # optimal parameters 296 | elif ("amazon" in args.s_dset_path and "dslr" in args.t_dset_path) or \ 297 | ("amazon" in args.s_dset_path and "webcam" in args.t_dset_path) or \ 298 | ("dslr" in args.s_dset_path and "webcam" in args.t_dset_path): 299 | config["optimizer"]["lr_param"]["lr"] = 0.0003 # optimal parameters 300 | config["network"]["params"]["class_num"] = 31 301 | elif config["dataset"] == "image-clef": 302 | config["optimizer"]["lr_param"]["lr"] = 0.001 # optimal parameters 303 | config["network"]["params"]["class_num"] = 12 304 | elif config["dataset"] == "visda": 305 | config["optimizer"]["lr_param"]["lr"] = 0.0003 # optimal parameters 306 | config["network"]["params"]["class_num"] = 12 307 | config['loss']["trade_off"] = 1.0 308 | elif config["dataset"] == "office-home": 309 | config["optimizer"]["lr_param"]["lr"] = 0.001 # optimal parameters 310 | config["network"]["params"]["class_num"] = 65 311 | else: 312 | raise ValueError('Dataset cannot be recognized. Please define your own dataset here.') 313 | 314 | seed = random.randint(1,10000) 315 | print(seed) 316 | torch.manual_seed(seed) 317 | torch.cuda.manual_seed(seed) 318 | torch.cuda.manual_seed_all(seed) 319 | np.random.seed(seed) 320 | random.seed(seed) 321 | 322 | # uncomment the following two lines for reproducibility 323 | #torch.backends.cudnn.deterministic = True 324 | #torch.backends.cudnn.benchmark = False 325 | config["out_file"].write(str(config)) 326 | config["out_file"].flush() 327 | train(config) 328 | -------------------------------------------------------------------------------- /DA/CDAN-BNM/network.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | import torch.nn as nn 4 | import torchvision 5 | from torchvision import models 6 | from torch.autograd import Variable 7 | import math 8 | import pdb 9 | 10 | def calc_coeff(iter_num, high=1.0, low=0.0, alpha=10.0, max_iter=10000.0): 11 | return np.float(2.0 * (high - low) / (1.0 + np.exp(-alpha*iter_num / max_iter)) - (high - low) + low) 12 | 13 | def init_weights(m): 14 | classname = m.__class__.__name__ 15 | if classname.find('Conv2d') != -1 or classname.find('ConvTranspose2d') != -1: 16 | nn.init.kaiming_uniform_(m.weight) 17 | nn.init.zeros_(m.bias) 18 | elif classname.find('BatchNorm') != -1: 19 | nn.init.normal_(m.weight, 1.0, 0.02) 20 | nn.init.zeros_(m.bias) 21 | elif classname.find('Linear') != -1: 22 | nn.init.xavier_normal_(m.weight) 23 | nn.init.zeros_(m.bias) 24 | 25 | def zero_weights(m): 26 | classname = m.__class__.__name__ 27 | if classname.find('Conv2d') != -1 or classname.find('ConvTranspose2d') != -1: 28 | nn.init.kaiming_uniform_(m.weight) 29 | nn.init.zeros_(m.bias) 30 | elif classname.find('BatchNorm') != -1: 31 | nn.init.normal_(m.weight, 1.0, 0.02) 32 | nn.init.zeros_(m.bias) 33 | elif classname.find('Linear') != -1: 34 | nn.init.zeros_(m.weight) 35 | nn.init.zeros_(m.bias) 36 | 37 | class RandomLayer(nn.Module): 38 | def __init__(self, input_dim_list=[], output_dim=1024): 39 | super(RandomLayer, self).__init__() 40 | self.input_num = len(input_dim_list) 41 | self.output_dim = output_dim 42 | self.random_matrix = [torch.randn(input_dim_list[i], output_dim) for i in range(self.input_num)] 43 | 44 | def forward(self, input_list): 45 | return_list = [torch.mm(input_list[i], self.random_matrix[i]) for i in 
range(self.input_num)] 46 | return_tensor = return_list[0] / math.pow(float(self.output_dim), 1.0/len(return_list)) 47 | for single in return_list[1:]: 48 | return_tensor = torch.mul(return_tensor, single) 49 | return return_tensor 50 | 51 | def cuda(self): 52 | super(RandomLayer, self).cuda() 53 | self.random_matrix = [val.cuda() for val in self.random_matrix] 54 | 55 | class LRN(nn.Module): 56 | def __init__(self, local_size=1, alpha=1.0, beta=0.75, ACROSS_CHANNELS=True): 57 | super(LRN, self).__init__() 58 | self.ACROSS_CHANNELS = ACROSS_CHANNELS 59 | if ACROSS_CHANNELS: 60 | self.average=nn.AvgPool3d(kernel_size=(local_size, 1, 1), 61 | stride=1, 62 | padding=(int((local_size-1.0)/2), 0, 0)) 63 | else: 64 | self.average=nn.AvgPool2d(kernel_size=local_size, 65 | stride=1, 66 | padding=int((local_size-1.0)/2)) 67 | self.alpha = alpha 68 | self.beta = beta 69 | 70 | 71 | def forward(self, x): 72 | if self.ACROSS_CHANNELS: 73 | div = x.pow(2).unsqueeze(1) 74 | div = self.average(div).squeeze(1) 75 | div = div.mul(self.alpha).add(1.0).pow(self.beta) 76 | else: 77 | div = x.pow(2) 78 | div = self.average(div) 79 | div = div.mul(self.alpha).add(1.0).pow(self.beta) 80 | x = x.div(div) 81 | return x 82 | 83 | class AlexNet(nn.Module): 84 | 85 | def __init__(self, num_classes=1000): 86 | super(AlexNet, self).__init__() 87 | self.features = nn.Sequential( 88 | nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=0), 89 | nn.ReLU(inplace=True), 90 | LRN(local_size=5, alpha=0.0001, beta=0.75), 91 | nn.MaxPool2d(kernel_size=3, stride=2), 92 | nn.Conv2d(96, 256, kernel_size=5, padding=2, groups=2), 93 | nn.ReLU(inplace=True), 94 | LRN(local_size=5, alpha=0.0001, beta=0.75), 95 | nn.MaxPool2d(kernel_size=3, stride=2), 96 | nn.Conv2d(256, 384, kernel_size=3, padding=1), 97 | nn.ReLU(inplace=True), 98 | nn.Conv2d(384, 384, kernel_size=3, padding=1, groups=2), 99 | nn.ReLU(inplace=True), 100 | nn.Conv2d(384, 256, kernel_size=3, padding=1, groups=2), 101 | nn.ReLU(inplace=True), 102 | nn.MaxPool2d(kernel_size=3, stride=2), 103 | ) 104 | self.classifier = nn.Sequential( 105 | nn.Linear(256 * 6 * 6, 4096), 106 | nn.ReLU(inplace=True), 107 | nn.Dropout(), 108 | nn.Linear(4096, 4096), 109 | nn.ReLU(inplace=True), 110 | nn.Dropout(), 111 | nn.Linear(4096, num_classes), 112 | ) 113 | 114 | def forward(self, x): 115 | x = self.features(x) 116 | 117 | x = x.view(x.size(0), 256 * 6 * 6) 118 | x = self.classifier(x) 119 | return x 120 | 121 | 122 | def alexnet(pretrained=False, **kwargs): 123 | r"""AlexNet model architecture from the 124 | `"One weird trick..." <https://arxiv.org/abs/1404.5997>`_ paper. 
125 | Args: 126 | pretrained (bool): If True, returns a model pre-trained on ImageNet 127 | """ 128 | model = AlexNet(**kwargs) 129 | if pretrained: 130 | model_path = './alexnet.pth.tar' 131 | pretrained_model = torch.load(model_path) 132 | model.load_state_dict(pretrained_model['state_dict']) 133 | return model 134 | 135 | # convnet without the last layer 136 | class AlexNetFc(nn.Module): 137 | def __init__(self, use_bottleneck=True, bottleneck_dim=256, new_cls=False, class_num=1000): 138 | super(AlexNetFc, self).__init__() 139 | model_alexnet = alexnet(pretrained=True) 140 | self.features = model_alexnet.features 141 | self.classifier = nn.Sequential() 142 | for i in range(6): 143 | self.classifier.add_module("classifier"+str(i), model_alexnet.classifier[i]) 144 | self.feature_layers = nn.Sequential(self.features, self.classifier) 145 | 146 | self.use_bottleneck = use_bottleneck 147 | self.new_cls = new_cls 148 | if new_cls: 149 | if self.use_bottleneck: 150 | self.bottleneck = nn.Linear(4096, bottleneck_dim) 151 | self.fc = nn.Linear(bottleneck_dim, class_num) 152 | self.bottleneck.apply(init_weights) 153 | self.fc.apply(init_weights) 154 | self.__in_features = bottleneck_dim 155 | else: 156 | self.fc = nn.Linear(4096, class_num) 157 | self.fc.apply(init_weights) 158 | self.__in_features = 4096 159 | else: 160 | self.fc = model_alexnet.classifier[6] 161 | self.__in_features = 4096 162 | 163 | def forward(self, x): 164 | x = self.features(x) 165 | x = x.view(x.size(0), -1) 166 | x = self.classifier(x) 167 | if self.use_bottleneck and self.new_cls: 168 | x = self.bottleneck(x) 169 | y = self.fc(x) 170 | return x, y 171 | 172 | def output_num(self): 173 | return self.__in_features 174 | 175 | def get_parameters(self): 176 | if self.new_cls: 177 | if self.use_bottleneck: 178 | parameter_list = [{"params":self.features.parameters(), "lr_mult":1, 'decay_mult':2}, \ 179 | {"params":self.classifier.parameters(), "lr_mult":1, 'decay_mult':2}, \ 180 | {"params":self.bottleneck.parameters(), "lr_mult":10, 'decay_mult':2}, \ 181 | {"params":self.fc.parameters(), "lr_mult":10, 'decay_mult':2}] 182 | else: 183 | parameter_list = [{"params":self.feature_layers.parameters(), "lr_mult":1, 'decay_mult':2}, \ 184 | {"params":self.classifier.parameters(), "lr_mult":1, 'decay_mult':2}, \ 185 | {"params":self.fc.parameters(), "lr_mult":10, 'decay_mult':2}] 186 | else: 187 | parameter_list = [{"params":self.parameters(), "lr_mult":1, 'decay_mult':2}] 188 | return parameter_list 189 | 190 | 191 | resnet_dict = {"ResNet18":models.resnet18, "ResNet34":models.resnet34, "ResNet50":models.resnet50, "ResNet101":models.resnet101, "ResNet152":models.resnet152} 192 | 193 | def grl_hook(coeff): 194 | def fun1(grad): 195 | return -coeff*grad.clone() 196 | return fun1 197 | 198 | class ResNetFc(nn.Module): 199 | def __init__(self, resnet_name, use_bottleneck=True, bottleneck_dim=256, new_cls=False, class_num=1000): 200 | super(ResNetFc, self).__init__() 201 | model_resnet = resnet_dict[resnet_name](pretrained=True) 202 | self.conv1 = model_resnet.conv1 203 | self.bn1 = model_resnet.bn1 204 | self.relu = model_resnet.relu 205 | self.maxpool = model_resnet.maxpool 206 | self.layer1 = model_resnet.layer1 207 | self.layer2 = model_resnet.layer2 208 | self.layer3 = model_resnet.layer3 209 | self.layer4 = model_resnet.layer4 210 | self.avgpool = model_resnet.avgpool 211 | self.feature_layers = nn.Sequential(self.conv1, self.bn1, self.relu, self.maxpool, \ 212 | self.layer1, self.layer2, self.layer3, self.layer4, self.avgpool) 
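        # Aside on grl_hook above: it implements a gradient reversal layer with
        # an identity forward pass, e.g. (minimal sketch):
        #
        #   x = torch.ones(3, requires_grad=True)
        #   y = x * 1.0                      # identity in the forward pass
        #   y.register_hook(grl_hook(0.5))   # gradient scaled by -0.5 on backward
        #   y.sum().backward()               # x.grad == tensor([-0.5, -0.5, -0.5])
        #
        # AdversarialNetwork.forward below applies the same pattern, with a
        # coefficient that ramps up over iterations via calc_coeff.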
213 | 
214 |         self.use_bottleneck = use_bottleneck
215 |         self.new_cls = new_cls
216 |         if new_cls:
217 |             if self.use_bottleneck:
218 |                 self.bottleneck = nn.Linear(model_resnet.fc.in_features, bottleneck_dim)
219 |                 self.fc = nn.Linear(bottleneck_dim, class_num)
220 |                 self.bottleneck.apply(init_weights)
221 |                 self.fc.apply(init_weights)
222 |                 self.__in_features = bottleneck_dim
223 |             else:
224 |                 self.fc = nn.Linear(model_resnet.fc.in_features, class_num)
225 |                 self.fc.apply(init_weights)
226 |                 self.__in_features = model_resnet.fc.in_features
227 |         else:
228 |             self.fc = model_resnet.fc
229 |             self.__in_features = model_resnet.fc.in_features
230 | 
231 |     def forward(self, x):
232 |         x = self.feature_layers(x)
233 |         x = x.view(x.size(0), -1)
234 |         if self.use_bottleneck and self.new_cls:
235 |             x = self.bottleneck(x)
236 |         y = self.fc(x)
237 |         return x, y
238 | 
239 |     def output_num(self):
240 |         return self.__in_features
241 | 
242 |     def get_parameters(self):
243 |         if self.new_cls:  # freshly initialized layers train with 10x the base learning rate (see lr_schedule.py)
244 |             if self.use_bottleneck:
245 |                 parameter_list = [{"params":self.feature_layers.parameters(), "lr_mult":1, 'decay_mult':2}, \
246 |                                   {"params":self.bottleneck.parameters(), "lr_mult":10, 'decay_mult':2}, \
247 |                                   {"params":self.fc.parameters(), "lr_mult":10, 'decay_mult':2}]
248 |             else:
249 |                 parameter_list = [{"params":self.feature_layers.parameters(), "lr_mult":1, 'decay_mult':2}, \
250 |                                   {"params":self.fc.parameters(), "lr_mult":10, 'decay_mult':2}]
251 |         else:
252 |             parameter_list = [{"params":self.parameters(), "lr_mult":1, 'decay_mult':2}]
253 |         return parameter_list
254 | 
255 | vgg_dict = {"VGG11":models.vgg11, "VGG13":models.vgg13, "VGG16":models.vgg16, "VGG19":models.vgg19, "VGG11BN":models.vgg11_bn, "VGG13BN":models.vgg13_bn, "VGG16BN":models.vgg16_bn, "VGG19BN":models.vgg19_bn}
256 | class VGGFc(nn.Module):
257 |     def __init__(self, vgg_name, use_bottleneck=True, bottleneck_dim=256, new_cls=False, class_num=1000):
258 |         super(VGGFc, self).__init__()
259 |         model_vgg = vgg_dict[vgg_name](pretrained=True)
260 |         self.features = model_vgg.features
261 |         self.classifier = nn.Sequential()
262 |         for i in range(6):
263 |             self.classifier.add_module("classifier"+str(i), model_vgg.classifier[i])
264 |         self.feature_layers = nn.Sequential(self.features, self.classifier)
265 | 
266 |         self.use_bottleneck = use_bottleneck
267 |         self.new_cls = new_cls
268 |         if new_cls:
269 |             if self.use_bottleneck:
270 |                 self.bottleneck = nn.Linear(4096, bottleneck_dim)
271 |                 self.fc = nn.Linear(bottleneck_dim, class_num)
272 |                 self.bridge = nn.Linear(bottleneck_dim, class_num)
273 |                 self.bottleneck.apply(init_weights)
274 |                 self.fc.apply(init_weights)
275 |                 self.bridge.apply(init_weights)
276 |                 self.__in_features = bottleneck_dim
277 |             else:
278 |                 self.fc = nn.Linear(4096, class_num)
279 |                 self.fc.apply(init_weights)
280 |                 self.__in_features = 4096
281 |         else:
282 |             self.fc = model_vgg.classifier[6]
283 |             self.__in_features = 4096
284 | 
285 |     def forward(self, x):
286 |         x = self.features(x)
287 |         x = x.view(x.size(0), -1)
288 |         x = self.classifier(x)
289 |         if self.use_bottleneck and self.new_cls:
290 |             x = self.bottleneck(x)
291 |         y = self.fc(x)
292 |         z = self.bridge(x)  # note: self.bridge is only built when new_cls and use_bottleneck are both True
293 |         return x, y, z
294 | 
295 |     def output_num(self):
296 |         return self.__in_features
297 | 
298 |     def get_parameters(self):
299 |         if self.new_cls:
300 |             if self.use_bottleneck:
301 |                 parameter_list = [{"params":self.features.parameters(), "lr_mult":1, 'decay_mult':2}, \
302 |                                   {"params":self.classifier.parameters(), "lr_mult":1, 'decay_mult':2}, \
303 |                                   {"params":self.bottleneck.parameters(), "lr_mult":10, 'decay_mult':2}, \
{"params":self.bottleneck.parameters(), "lr_mult":10, 'decay_mult':2}, \ 304 | {"params":self.fc.parameters(), "lr_mult":10, 'decay_mult':2}, \ 305 | {"params":self.bridge.parameters(), "lr_mult":10, 'decay_mult':2}] 306 | else: 307 | parameter_list = [{"params":self.feature_layers.parameters(), "lr_mult":1, 'decay_mult':2}, \ 308 | {"params":self.classifier.parameters(), "lr_mult":1, 'decay_mult':2}, \ 309 | {"params":self.fc.parameters(), "lr_mult":10, 'decay_mult':2}] 310 | else: 311 | parameter_list = [{"params":self.parameters(), "lr_mult":1, 'decay_mult':2}] 312 | return parameter_list 313 | 314 | # For SVHN dataset 315 | class DTN(nn.Module): 316 | def __init__(self): 317 | super(DTN, self).__init__() 318 | self.conv_params = nn.Sequential ( 319 | nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2), 320 | nn.BatchNorm2d(64), 321 | nn.Dropout2d(0.1), 322 | nn.ReLU(), 323 | nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2), 324 | nn.BatchNorm2d(128), 325 | nn.Dropout2d(0.3), 326 | nn.ReLU(), 327 | nn.Conv2d(128, 256, kernel_size=5, stride=2, padding=2), 328 | nn.BatchNorm2d(256), 329 | nn.Dropout2d(0.5), 330 | nn.ReLU() 331 | ) 332 | 333 | self.fc_params = nn.Sequential ( 334 | nn.Linear(256*4*4, 512), 335 | nn.BatchNorm1d(512), 336 | nn.ReLU(), 337 | nn.Dropout() 338 | ) 339 | 340 | self.classifier = nn.Linear(512, 10) 341 | self.__in_features = 512 342 | 343 | def forward(self, x): 344 | x = self.conv_params(x) 345 | x = x.view(x.size(0), -1) 346 | x = self.fc_params(x) 347 | y = self.classifier(x) 348 | return x, y 349 | 350 | def output_num(self): 351 | return self.__in_features 352 | 353 | class LeNet(nn.Module): 354 | def __init__(self): 355 | super(LeNet, self).__init__() 356 | self.conv_params = nn.Sequential( 357 | nn.Conv2d(1, 20, kernel_size=5), 358 | nn.MaxPool2d(2), 359 | nn.ReLU(), 360 | nn.Conv2d(20, 50, kernel_size=5), 361 | nn.Dropout2d(p=0.5), 362 | nn.MaxPool2d(2), 363 | nn.ReLU(), 364 | ) 365 | 366 | self.fc_params = nn.Sequential(nn.Linear(50*4*4, 500), nn.ReLU(), nn.Dropout(p=0.5)) 367 | self.classifier = nn.Linear(500, 10) 368 | self.__in_features = 500 369 | 370 | 371 | def forward(self, x): 372 | x = self.conv_params(x) 373 | x = x.view(x.size(0), -1) 374 | x = self.fc_params(x) 375 | y = self.classifier(x) 376 | return x, y 377 | 378 | def output_num(self): 379 | return self.__in_features 380 | 381 | class AdversarialNetwork(nn.Module): 382 | def __init__(self, in_feature, hidden_size): 383 | super(AdversarialNetwork, self).__init__() 384 | self.ad_layer1 = nn.Linear(in_feature, hidden_size) 385 | self.ad_layer2 = nn.Linear(hidden_size, hidden_size) 386 | self.ad_layer3 = nn.Linear(hidden_size, 1) 387 | self.relu1 = nn.ReLU() 388 | self.relu2 = nn.ReLU() 389 | self.dropout1 = nn.Dropout(0.5) 390 | self.dropout2 = nn.Dropout(0.5) 391 | self.sigmoid = nn.Sigmoid() 392 | self.apply(init_weights) 393 | self.iter_num = 0 394 | self.alpha = 10 395 | self.low = 0.0 396 | self.high = 1.0 397 | self.max_iter = 10000.0 398 | 399 | def forward(self, x): 400 | if self.training: 401 | self.iter_num += 1 402 | coeff = calc_coeff(self.iter_num, self.high, self.low, self.alpha, self.max_iter) 403 | x = x * 1.0 404 | x.register_hook(grl_hook(coeff)) 405 | x = self.ad_layer1(x) 406 | x = self.relu1(x) 407 | x = self.dropout1(x) 408 | y = self.ad_layer2(x) 409 | y = self.relu2(y) 410 | y = self.dropout2(y) 411 | y = self.ad_layer3(y) 412 | return y 413 | 414 | def output_num(self): 415 | return 1 416 | def get_parameters(self): 417 | return [{"params":self.parameters(), 
"lr_mult":10, 'decay_mult':2}] 418 | -------------------------------------------------------------------------------- /DA/data/office/dslr_list.txt: -------------------------------------------------------------------------------- 1 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/calculator/frame_0001.jpg 5 2 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/calculator/frame_0002.jpg 5 3 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/calculator/frame_0003.jpg 5 4 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/calculator/frame_0004.jpg 5 5 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/calculator/frame_0005.jpg 5 6 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/calculator/frame_0006.jpg 5 7 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/calculator/frame_0007.jpg 5 8 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/calculator/frame_0008.jpg 5 9 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/calculator/frame_0009.jpg 5 10 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/calculator/frame_0010.jpg 5 11 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/calculator/frame_0011.jpg 5 12 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/calculator/frame_0012.jpg 5 13 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ring_binder/frame_0001.jpg 24 14 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ring_binder/frame_0002.jpg 24 15 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ring_binder/frame_0003.jpg 24 16 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ring_binder/frame_0004.jpg 24 17 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ring_binder/frame_0005.jpg 24 18 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ring_binder/frame_0006.jpg 24 19 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ring_binder/frame_0007.jpg 24 20 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ring_binder/frame_0008.jpg 24 21 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ring_binder/frame_0009.jpg 24 22 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ring_binder/frame_0010.jpg 24 23 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0001.jpg 21 24 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0002.jpg 21 25 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0003.jpg 21 26 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0004.jpg 21 27 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0005.jpg 21 28 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0006.jpg 21 29 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0007.jpg 21 30 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0008.jpg 21 31 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0009.jpg 21 32 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0010.jpg 21 33 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0011.jpg 21 34 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0012.jpg 21 35 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0013.jpg 21 36 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0014.jpg 21 37 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/printer/frame_0015.jpg 21 38 | 
/DATA/disk1/hassassin/dataset/domain/office/dslr/images/keyboard/frame_0001.jpg 11 39 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/keyboard/frame_0002.jpg 11 40 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/keyboard/frame_0003.jpg 11 41 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/keyboard/frame_0004.jpg 11 42 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/keyboard/frame_0005.jpg 11 43 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/keyboard/frame_0006.jpg 11 44 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/keyboard/frame_0007.jpg 11 45 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/keyboard/frame_0008.jpg 11 46 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/keyboard/frame_0009.jpg 11 47 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/keyboard/frame_0010.jpg 11 48 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0001.jpg 26 49 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0002.jpg 26 50 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0003.jpg 26 51 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0004.jpg 26 52 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0005.jpg 26 53 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0006.jpg 26 54 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0007.jpg 26 55 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0008.jpg 26 56 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0009.jpg 26 57 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0010.jpg 26 58 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0011.jpg 26 59 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0012.jpg 26 60 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0013.jpg 26 61 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0014.jpg 26 62 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0015.jpg 26 63 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0016.jpg 26 64 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0017.jpg 26 65 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/scissors/frame_0018.jpg 26 66 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0001.jpg 12 67 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0002.jpg 12 68 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0003.jpg 12 69 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0004.jpg 12 70 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0005.jpg 12 71 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0006.jpg 12 72 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0007.jpg 12 73 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0008.jpg 12 74 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0009.jpg 12 75 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0010.jpg 12 76 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0011.jpg 12 77 | 
/DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0012.jpg 12 78 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0013.jpg 12 79 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0014.jpg 12 80 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0015.jpg 12 81 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0016.jpg 12 82 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0017.jpg 12 83 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0018.jpg 12 84 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0019.jpg 12 85 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0020.jpg 12 86 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0021.jpg 12 87 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0022.jpg 12 88 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0023.jpg 12 89 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/laptop_computer/frame_0024.jpg 12 90 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mouse/frame_0001.jpg 16 91 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mouse/frame_0002.jpg 16 92 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mouse/frame_0003.jpg 16 93 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mouse/frame_0004.jpg 16 94 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mouse/frame_0005.jpg 16 95 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mouse/frame_0006.jpg 16 96 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mouse/frame_0007.jpg 16 97 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mouse/frame_0008.jpg 16 98 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mouse/frame_0009.jpg 16 99 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mouse/frame_0010.jpg 16 100 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mouse/frame_0011.jpg 16 101 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mouse/frame_0012.jpg 16 102 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0001.jpg 15 103 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0002.jpg 15 104 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0003.jpg 15 105 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0004.jpg 15 106 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0005.jpg 15 107 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0006.jpg 15 108 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0007.jpg 15 109 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0008.jpg 15 110 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0009.jpg 15 111 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0010.jpg 15 112 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0011.jpg 15 113 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0012.jpg 15 114 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0013.jpg 15 115 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0014.jpg 15 116 | 
/DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0015.jpg 15 117 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0016.jpg 15 118 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0017.jpg 15 119 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0018.jpg 15 120 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0019.jpg 15 121 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0020.jpg 15 122 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0021.jpg 15 123 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/monitor/frame_0022.jpg 15 124 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mug/frame_0001.jpg 17 125 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mug/frame_0002.jpg 17 126 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mug/frame_0003.jpg 17 127 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mug/frame_0004.jpg 17 128 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mug/frame_0005.jpg 17 129 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mug/frame_0006.jpg 17 130 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mug/frame_0007.jpg 17 131 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mug/frame_0008.jpg 17 132 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0001.jpg 29 133 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0002.jpg 29 134 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0003.jpg 29 135 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0004.jpg 29 136 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0005.jpg 29 137 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0006.jpg 29 138 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0007.jpg 29 139 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0008.jpg 29 140 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0009.jpg 29 141 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0010.jpg 29 142 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0011.jpg 29 143 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0012.jpg 29 144 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0013.jpg 29 145 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0014.jpg 29 146 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0015.jpg 29 147 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0016.jpg 29 148 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0017.jpg 29 149 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0018.jpg 29 150 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0019.jpg 29 151 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0020.jpg 29 152 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0021.jpg 29 153 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/tape_dispenser/frame_0022.jpg 29 154 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/pen/frame_0001.jpg 19 155 | 
/DATA/disk1/hassassin/dataset/domain/office/dslr/images/pen/frame_0002.jpg 19 156 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/pen/frame_0003.jpg 19 157 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/pen/frame_0004.jpg 19 158 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/pen/frame_0005.jpg 19 159 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/pen/frame_0006.jpg 19 160 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/pen/frame_0007.jpg 19 161 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/pen/frame_0008.jpg 19 162 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/pen/frame_0009.jpg 19 163 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/pen/frame_0010.jpg 19 164 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0001.jpg 1 165 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0002.jpg 1 166 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0003.jpg 1 167 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0004.jpg 1 168 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0005.jpg 1 169 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0006.jpg 1 170 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0007.jpg 1 171 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0008.jpg 1 172 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0009.jpg 1 173 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0010.jpg 1 174 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0011.jpg 1 175 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0012.jpg 1 176 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0013.jpg 1 177 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0014.jpg 1 178 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0015.jpg 1 179 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0016.jpg 1 180 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0017.jpg 1 181 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0018.jpg 1 182 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0019.jpg 1 183 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0020.jpg 1 184 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike/frame_0021.jpg 1 185 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0001.jpg 23 186 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0002.jpg 23 187 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0003.jpg 23 188 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0004.jpg 23 189 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0005.jpg 23 190 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0006.jpg 23 191 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0007.jpg 23 192 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0008.jpg 23 193 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0009.jpg 23 194 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0010.jpg 23 195 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0011.jpg 23 196 | 
/DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0012.jpg 23 197 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0013.jpg 23 198 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0014.jpg 23 199 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0015.jpg 23 200 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0016.jpg 23 201 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0017.jpg 23 202 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/punchers/frame_0018.jpg 23 203 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/back_pack/frame_0001.jpg 0 204 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/back_pack/frame_0002.jpg 0 205 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/back_pack/frame_0003.jpg 0 206 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/back_pack/frame_0004.jpg 0 207 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/back_pack/frame_0005.jpg 0 208 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/back_pack/frame_0006.jpg 0 209 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/back_pack/frame_0007.jpg 0 210 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/back_pack/frame_0008.jpg 0 211 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/back_pack/frame_0009.jpg 0 212 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/back_pack/frame_0010.jpg 0 213 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/back_pack/frame_0011.jpg 0 214 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/back_pack/frame_0012.jpg 0 215 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0001.jpg 8 216 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0002.jpg 8 217 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0003.jpg 8 218 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0004.jpg 8 219 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0005.jpg 8 220 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0006.jpg 8 221 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0007.jpg 8 222 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0008.jpg 8 223 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0009.jpg 8 224 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0010.jpg 8 225 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0011.jpg 8 226 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0012.jpg 8 227 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0013.jpg 8 228 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0014.jpg 8 229 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desktop_computer/frame_0015.jpg 8 230 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0001.jpg 27 231 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0002.jpg 27 232 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0003.jpg 27 233 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0004.jpg 27 234 | 
/DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0005.jpg 27 235 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0006.jpg 27 236 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0007.jpg 27 237 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0008.jpg 27 238 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0009.jpg 27 239 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0010.jpg 27 240 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0011.jpg 27 241 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0012.jpg 27 242 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0013.jpg 27 243 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0014.jpg 27 244 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0015.jpg 27 245 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0016.jpg 27 246 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0017.jpg 27 247 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0018.jpg 27 248 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0019.jpg 27 249 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0020.jpg 27 250 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0021.jpg 27 251 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0022.jpg 27 252 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0023.jpg 27 253 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0024.jpg 27 254 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0025.jpg 27 255 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/speaker/frame_0026.jpg 27 256 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0001.jpg 14 257 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0002.jpg 14 258 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0003.jpg 14 259 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0004.jpg 14 260 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0005.jpg 14 261 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0006.jpg 14 262 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0007.jpg 14 263 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0008.jpg 14 264 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0009.jpg 14 265 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0010.jpg 14 266 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0011.jpg 14 267 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0012.jpg 14 268 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0013.jpg 14 269 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0014.jpg 14 270 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0015.jpg 14 271 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0016.jpg 14 272 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0017.jpg 14 273 | 
/DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0018.jpg 14 274 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0019.jpg 14 275 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0020.jpg 14 276 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0021.jpg 14 277 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0022.jpg 14 278 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0023.jpg 14 279 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0024.jpg 14 280 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0025.jpg 14 281 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0026.jpg 14 282 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0027.jpg 14 283 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0028.jpg 14 284 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0029.jpg 14 285 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0030.jpg 14 286 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/mobile_phone/frame_0031.jpg 14 287 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/paper_notebook/frame_0001.jpg 18 288 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/paper_notebook/frame_0002.jpg 18 289 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/paper_notebook/frame_0003.jpg 18 290 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/paper_notebook/frame_0004.jpg 18 291 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/paper_notebook/frame_0005.jpg 18 292 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/paper_notebook/frame_0006.jpg 18 293 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/paper_notebook/frame_0007.jpg 18 294 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/paper_notebook/frame_0008.jpg 18 295 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/paper_notebook/frame_0009.jpg 18 296 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/paper_notebook/frame_0010.jpg 18 297 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ruler/frame_0001.jpg 25 298 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ruler/frame_0002.jpg 25 299 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ruler/frame_0003.jpg 25 300 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ruler/frame_0004.jpg 25 301 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ruler/frame_0005.jpg 25 302 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ruler/frame_0006.jpg 25 303 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/ruler/frame_0007.jpg 25 304 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0001.jpg 13 305 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0002.jpg 13 306 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0003.jpg 13 307 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0004.jpg 13 308 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0005.jpg 13 309 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0006.jpg 13 310 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0007.jpg 13 311 | 
/DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0008.jpg 13 312 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0009.jpg 13 313 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0010.jpg 13 314 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0011.jpg 13 315 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0012.jpg 13 316 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0013.jpg 13 317 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0014.jpg 13 318 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0015.jpg 13 319 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/letter_tray/frame_0016.jpg 13 320 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0001.jpg 9 321 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0002.jpg 9 322 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0003.jpg 9 323 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0004.jpg 9 324 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0005.jpg 9 325 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0006.jpg 9 326 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0007.jpg 9 327 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0008.jpg 9 328 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0009.jpg 9 329 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0010.jpg 9 330 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0011.jpg 9 331 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0012.jpg 9 332 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0013.jpg 9 333 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0014.jpg 9 334 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/file_cabinet/frame_0015.jpg 9 335 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0001.jpg 20 336 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0002.jpg 20 337 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0003.jpg 20 338 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0004.jpg 20 339 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0005.jpg 20 340 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0006.jpg 20 341 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0007.jpg 20 342 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0008.jpg 20 343 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0009.jpg 20 344 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0010.jpg 20 345 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0011.jpg 20 346 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0012.jpg 20 347 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/phone/frame_0013.jpg 20 348 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bookcase/frame_0001.jpg 3 349 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bookcase/frame_0002.jpg 3 350 | 
/DATA/disk1/hassassin/dataset/domain/office/dslr/images/bookcase/frame_0003.jpg 3 351 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bookcase/frame_0004.jpg 3 352 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bookcase/frame_0005.jpg 3 353 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bookcase/frame_0006.jpg 3 354 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bookcase/frame_0007.jpg 3 355 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bookcase/frame_0008.jpg 3 356 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bookcase/frame_0009.jpg 3 357 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bookcase/frame_0010.jpg 3 358 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bookcase/frame_0011.jpg 3 359 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bookcase/frame_0012.jpg 3 360 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0001.jpg 22 361 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0002.jpg 22 362 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0003.jpg 22 363 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0004.jpg 22 364 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0005.jpg 22 365 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0006.jpg 22 366 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0007.jpg 22 367 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0008.jpg 22 368 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0009.jpg 22 369 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0010.jpg 22 370 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0011.jpg 22 371 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0012.jpg 22 372 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0013.jpg 22 373 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0014.jpg 22 374 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0015.jpg 22 375 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0016.jpg 22 376 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0017.jpg 22 377 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0018.jpg 22 378 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0019.jpg 22 379 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0020.jpg 22 380 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0021.jpg 22 381 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0022.jpg 22 382 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/projector/frame_0023.jpg 22 383 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0001.jpg 28 384 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0002.jpg 28 385 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0003.jpg 28 386 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0004.jpg 28 387 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0005.jpg 28 388 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0006.jpg 28 389 | 
/DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0007.jpg 28 390 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0008.jpg 28 391 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0009.jpg 28 392 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0010.jpg 28 393 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0011.jpg 28 394 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0012.jpg 28 395 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0013.jpg 28 396 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0014.jpg 28 397 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0015.jpg 28 398 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0016.jpg 28 399 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0017.jpg 28 400 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0018.jpg 28 401 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0019.jpg 28 402 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0020.jpg 28 403 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/stapler/frame_0021.jpg 28 404 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0001.jpg 30 405 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0002.jpg 30 406 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0003.jpg 30 407 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0004.jpg 30 408 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0005.jpg 30 409 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0006.jpg 30 410 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0007.jpg 30 411 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0008.jpg 30 412 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0009.jpg 30 413 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0010.jpg 30 414 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0011.jpg 30 415 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0012.jpg 30 416 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0013.jpg 30 417 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0014.jpg 30 418 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/trash_can/frame_0015.jpg 30 419 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0001.jpg 2 420 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0002.jpg 2 421 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0003.jpg 2 422 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0004.jpg 2 423 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0005.jpg 2 424 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0006.jpg 2 425 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0007.jpg 2 426 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0008.jpg 2 427 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0009.jpg 2 428 | 
/DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0010.jpg 2 429 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0011.jpg 2 430 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0012.jpg 2 431 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0013.jpg 2 432 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0014.jpg 2 433 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0015.jpg 2 434 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0016.jpg 2 435 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0017.jpg 2 436 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0018.jpg 2 437 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0019.jpg 2 438 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0020.jpg 2 439 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0021.jpg 2 440 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0022.jpg 2 441 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0023.jpg 2 442 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bike_helmet/frame_0024.jpg 2 443 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0001.jpg 10 444 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0002.jpg 10 445 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0003.jpg 10 446 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0004.jpg 10 447 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0005.jpg 10 448 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0006.jpg 10 449 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0007.jpg 10 450 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0008.jpg 10 451 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0009.jpg 10 452 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0010.jpg 10 453 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0011.jpg 10 454 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0012.jpg 10 455 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/headphones/frame_0013.jpg 10 456 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0001.jpg 7 457 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0002.jpg 7 458 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0003.jpg 7 459 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0004.jpg 7 460 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0005.jpg 7 461 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0006.jpg 7 462 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0007.jpg 7 463 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0008.jpg 7 464 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0009.jpg 7 465 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0010.jpg 7 466 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0011.jpg 7 467 | 
/DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0012.jpg 7 468 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0013.jpg 7 469 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_lamp/frame_0014.jpg 7 470 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0001.jpg 6 471 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0002.jpg 6 472 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0003.jpg 6 473 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0004.jpg 6 474 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0005.jpg 6 475 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0006.jpg 6 476 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0007.jpg 6 477 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0008.jpg 6 478 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0009.jpg 6 479 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0010.jpg 6 480 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0011.jpg 6 481 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0012.jpg 6 482 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/desk_chair/frame_0013.jpg 6 483 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0001.jpg 4 484 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0002.jpg 4 485 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0003.jpg 4 486 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0004.jpg 4 487 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0005.jpg 4 488 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0006.jpg 4 489 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0007.jpg 4 490 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0008.jpg 4 491 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0009.jpg 4 492 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0010.jpg 4 493 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0011.jpg 4 494 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0012.jpg 4 495 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0013.jpg 4 496 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0014.jpg 4 497 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0015.jpg 4 498 | /DATA/disk1/hassassin/dataset/domain/office/dslr/images/bottle/frame_0016.jpg 4 499 | --------------------------------------------------------------------------------
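
For orientation, here is a minimal sketch of how the pieces above are typically wired together: `ResNetFc` as the feature extractor and classifier, `AdversarialNetwork` for adversarial feature alignment, `inv_lr_scheduler` from `DA/BNM/lr_schedule.py` for the per-iteration "inv" decay, and batches drawn from a list file in the `<image path> <label>` format of `dslr_list.txt`. This is an illustrative assumption, not code from the repository: the loader is stubbed with random tensors, the loss weight `0.1` is a placeholder, and the BNM term follows the paper's definition (maximize the nuclear norm of the batch softmax matrix).

```python
# Minimal illustrative sketch (assumed wiring, not repository code).
# Assumes DA/BNM/network.py and DA/BNM/lr_schedule.py are importable as below.
import torch
import torch.nn as nn
import network
import lr_schedule

base_network = network.ResNetFc("ResNet50", use_bottleneck=True,
                                bottleneck_dim=256, new_cls=True, class_num=31)
ad_net = network.AdversarialNetwork(base_network.output_num(), 1024)

# get_parameters() attaches lr_mult/decay_mult, which inv_lr_scheduler reads.
optimizer = torch.optim.SGD(base_network.get_parameters() + ad_net.get_parameters(),
                            lr=0.001, momentum=0.9, weight_decay=0.0005, nesterov=True)

for iter_num in range(3):  # dummy iterations with random batches
    lr_schedule.inv_lr_scheduler(optimizer, iter_num, gamma=0.001, power=0.75)
    optimizer.zero_grad()
    # In practice these come from ImageList-style loaders over files such as
    # data/office/dslr_list.txt ("<path> <label>" per line, 31 Office classes).
    xs = torch.randn(8, 3, 224, 224)  # labeled source images
    ys = torch.randint(0, 31, (8,))   # source labels
    xt = torch.randn(8, 3, 224, 224)  # unlabeled target images

    feat_s, out_s = base_network(xs)
    feat_t, out_t = base_network(xt)
    classifier_loss = nn.CrossEntropyLoss()(out_s, ys)

    # Domain loss: AdversarialNetwork outputs domain probabilities and reverses
    # gradients internally via grl_hook, so a plain BCE loss suffices.
    dom_out = ad_net(torch.cat((feat_s, feat_t), dim=0))
    dom_label = torch.cat((torch.ones(8, 1), torch.zeros(8, 1)), dim=0)
    transfer_loss = nn.BCELoss()(dom_out, dom_label)

    # Batch Nuclear-norm Maximization on target predictions: minimizing the
    # negative mean nuclear norm of the softmax matrix raises prediction
    # discriminability and diversity on the target domain.
    softmax_t = nn.Softmax(dim=1)(out_t)
    bnm_loss = -torch.norm(softmax_t, 'nuc') / softmax_t.size(0)

    total_loss = classifier_loss + transfer_loss + 0.1 * bnm_loss
    total_loss.backward()
    optimizer.step()
```

The 10x `lr_mult` on the newly initialized layers and the `inv` decay mirror the settings hard-coded in `get_parameters()` and `lr_schedule.py`; for the CDAN variants, the discriminator input would be the outer product of features and softmax predictions rather than the raw features.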