├── .gitignore ├── Amazon_adain.pth ├── HyMOS.gif ├── README.md ├── common ├── __init__.py ├── common.py ├── eval.py └── train.py ├── data └── data_txt │ ├── DomainNet │ ├── OpenSet_source_train.txt │ ├── infographOpenSet_known.txt │ ├── ipc.txt │ ├── ips.txt │ └── paintingOpenSet_known.txt │ ├── Office31 │ ├── Amazon.txt │ ├── AmazonOpenSet_known.txt │ ├── Dslr.txt │ ├── DslrOpenSet_known.txt │ ├── Webcam.txt │ ├── WebcamOpenSet_known.txt │ ├── no_AmazonOpenSet.txt │ ├── no_DslrOpenSet.txt │ ├── no_WebcamOpenSet.txt │ └── office31.txt │ └── OfficeHome │ ├── Art.txt │ ├── ArtOpenSet_known.txt │ ├── Clipart.txt │ ├── ClipartOpenSet_known.txt │ ├── Product.txt │ ├── ProductOpenSet_known.txt │ ├── RealWorld.txt │ ├── RealWorldOpenSet_known.txt │ ├── no_ArtOpenSet.txt │ ├── no_ClipartOpenSet.txt │ ├── no_ProductOpenSet.txt │ └── no_RealWorldOpenSet.txt ├── datasets └── datasets.py ├── eval.py ├── evals └── evals.py ├── image.jpeg ├── models ├── __init__.py ├── adain.py ├── base_model.py ├── classifier.py ├── resnet_imagenet.py └── transform_layers.py ├── train.py ├── training ├── __init__.py ├── contrastive_loss.py ├── scheduler.py └── sup │ ├── HyMOS_st.py │ └── __init__.py └── utils ├── __init__.py ├── dist_utils.py ├── temperature_scaling.py └── utils.py /.gitignore: -------------------------------------------------------------------------------- 1 | **/__pycache__/ 2 | **/*.pyc 3 | notebooks/* 4 | -------------------------------------------------------------------------------- /Amazon_adain.pth: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/silvia1993/HyMOS/86bb5165c3ad921da2ffb00aa5e34ef9c38ea9c0/Amazon_adain.pth -------------------------------------------------------------------------------- /HyMOS.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/silvia1993/HyMOS/86bb5165c3ad921da2ffb00aa5e34ef9c38ea9c0/HyMOS.gif -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # HyMOS (Hyperspherical classification for Multi-source Open-Set domain adaptation) 2 | 3 | Official PyTorch implementation of "[Distance-based Hyperspherical Classification for Multi-source Open-Set Domain Adaptation](https://arxiv.org/abs/2107.02067)". 4 | Video presentation available [here](https://www.youtube.com/watch?v=lleB3utdv9A). 5 | ![Test Image 1](HyMOS.gif) 6 | 7 | _Vision systems trained in closed-world scenarios will inevitably fail when presented with new environmental conditions, new data distributions and novel classes at deployment time. How to move towards open-world learning is a long-standing research question, but the existing solutions mainly focus on specific aspects of the problem (single domain open-set, multi-domain closed-set), or propose complex strategies which combine multiple losses and manually tuned hyperparameters. In this work we tackle multi-source open-set domain adaptation by introducing HyMOS: a straightforward supervised model that exploits the power of contrastive learning and the properties of its hyperspherical feature space to correctly predict known labels on the target, while rejecting samples belonging to any unknown class. 
HyMOS includes a tailored data balancing to enforce cross-source alignment and introduces style transfer among the instance transformations of contrastive learning for source-target adaptation, avoiding the risk of negative transfer. Finally, a self-training strategy refines the model without the need for handcrafted thresholds. We validate our method over three challenging datasets and provide an extensive quantitative and qualitative experimental analysis. The obtained results show that HyMOS outperforms several open-set and universal domain adaptation approaches, defining the new state-of-the-art._ 8 | 9 | **Office-31 (HOS)** 10 | 11 | | | D,A -> W | W,A -> D | W,D -> A | Avg. | 12 | | :---| :---: | :---: | :---: |---: | 13 | | **HyMOS** | 90.2| 89.9| 60.8 | 80.3 | 14 | 15 | **Office-Home (HOS)** 16 | 17 | | | Ar,Pr,Cl→Rw | Ar,Pr,Rw→Cl | Cl,Pr,Rw→Ar | Cl,Ar,Rw→Pr | Avg. | 18 | | :---| :---: | :---: | :---: | :---: | ---: | 19 | | **HyMOS** | 71.0 | 64.6 | 62.2 | 71.1 | 67.2 | 20 | 21 | **DomainNet (HOS)** 22 | 23 | | | I,P -> S | I,P -> C | Avg. | 24 | | :---| :---: | :---: | ---: | 25 | | **HyMOS** | 57.5| 61.0| 59.3 | 26 | 27 | 28 | ## Code 29 | 30 | ### 1. Requirements 31 | 32 | #### Packages 33 | 34 | The code requires these packages: 35 | - python 3.6+ 36 | - torch 1.6+ 37 | - torchvision 0.7+ 38 | - CUDA 10.1+ 39 | - scikit-learn 0.22+ 40 | - tensorboardX 41 | - tqdm 42 | - [torchlars](https://github.com/kakaobrain/torchlars) == 0.1.2 43 | - [apex](https://github.com/NVIDIA/apex) == 0.1 44 | - [diffdist](https://github.com/ag14774/diffdist) == 0.1 45 | 46 | #### Datasets 47 | 48 | The OfficeHome, Office31 and DomainNet datasets should be placed in `~/data`. They can be downloaded 49 | from the official sites: 50 | 51 | - [OfficeHome](https://www.hemanthdv.org/officeHomeDataset.html) 52 | - [Office31](https://people.eecs.berkeley.edu/~jhoffman/domainadapt/) 53 | - [DomainNet](http://ai.bu.edu/M3SDA/) 54 | 55 | Make sure that `~/data/OfficeHome/` contains the correct directory for each domain. You may need to rename the *Real World* folder to remove the space. 56 | Similarly, check `~/data/DomainNet/` and `~/data/Office31/`. 57 | 58 | #### Pretrained model 59 | 60 | We use a ResNet50 pretrained via SupCLR, taken from the [official GitHub repository](https://github.com/google-research/google-research/tree/master/supcon). 61 | We converted the checkpoint to PyTorch format following the guide available [here](https://github.com/google-research/simclr#model-convertion-to-pytorch-format). 62 | The converted model is available [here](https://drive.google.com/file/d/1w-IdsYwCScbHTlCUDGCDxCPhY9VH6hRl/view?usp=sharing). 63 | 64 | #### AdaIN model 65 | 66 | We use a freely available PyTorch-based AdaIN implementation that can be found [here](https://github.com/irasin/Pytorch_AdaIN). Follow its instructions to train a model, putting the source data 67 | in `train_content_dir` and the target data in `train_style_dir`. Together with this code we also include a model trained by us 68 | for the Office31 Dslr,Webcam -> Amazon shift. The file is named 69 | `Amazon_adain.pth`. 70 | 71 | ### 2. Training 72 | 73 | In the examples below the training is performed on multiple GPUs. It is possible to use more or 74 | fewer by changing the value of `--nproc_per_node=2` and setting `CUDA_VISIBLE_DEVICES` appropriately. 75 | To obtain domain and class balance in each training mini-batch, the number of known classes 76 | of the dataset has to be divisible by the number of GPUs used. For example, in the case of 77 | OfficeHome we use 3 GPUs because there are 45 known classes. 
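As a quick reference, the following is a minimal sketch of the per-GPU batch arithmetic that `common/train.py` (included later in this listing) performs; the `batch_K` (known classes) and `batch_p` (source domains) values are the ones hard-coded there, while the helper `per_gpu_batch_size` is ours and not part of the repository:

```python
# Sketch of the batch-balance arithmetic from common/train.py (helper name is ours).
# batch_K = number of known classes, batch_p = number of source domains;
# the pairs below are the values train.py hard-codes per dataset.
DATASET_CONFIG = {"OfficeHome": (45, 3), "DomainNet": (100, 2), "Office31": (20, 2)}


def per_gpu_batch_size(dataset: str, n_gpus: int) -> int:
    """Per-GPU mini-batch size: batch_p samples for each of the
    batch_K / n_gpus classes that a single GPU handles."""
    batch_K, batch_p = DATASET_CONFIG[dataset]
    # train.py asserts exactly this before building its balanced sampler.
    assert batch_K % n_gpus == 0, "batch_K has to be divisible by world size!!"
    return batch_p * (batch_K // n_gpus)


print(per_gpu_batch_size("OfficeHome", 3))  # 45: 15 classes per GPU x 3 source domains
print(per_gpu_batch_size("Office31", 2))    # 20: 10 classes per GPU x 2 source domains
```

With 2 GPUs, OfficeHome's 45 known classes would fail this check, which is why the OfficeHome command below uses `--nproc_per_node=3`.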
78 | 79 | We use `test_domain` to refer to the target domain. 80 | Training output, with saved models and log files, is stored in the `logs/` directory. 81 | 82 | #### Office31 83 | 84 | ```bash 85 | CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=10001 train.py --dataset Office31 \ 86 | --test_domain <test_domain> --pretrain <path_to_pretrained_model> --adain_ckpt <path_to_adain_model> 87 | ``` 88 | 89 | The test domain should be one of "Amazon", "Webcam", "Dslr". 90 | 91 | For example, to train for the experiment having Amazon as target, using the provided AdaIN model and a SupCLR-pretrained ResNet50 model, 92 | we use: 93 | 94 | ```bash 95 | CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=10001 train.py --dataset Office31 \ 96 | --test_domain Amazon --pretrain pretrained/resnet50_SupCLR.pth --adain_ckpt Amazon_adain.pth 97 | ``` 98 | 99 | Use a different port if 10001 is already taken. 100 | 101 | #### OfficeHome 102 | 103 | ```bash 104 | CUDA_VISIBLE_DEVICES=0,1,2 python -m torch.distributed.launch --nproc_per_node=3 --master_port=10001 train.py --dataset OfficeHome \ 105 | --test_domain <test_domain> --pretrain <path_to_pretrained_model> --adain_ckpt <path_to_adain_model> 106 | ``` 107 | 108 | The test domain should be one of "Art", "Clipart", "Product", "RealWorld". 109 | 110 | #### DomainNet 111 | 112 | ```bash 113 | CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=10001 train.py --dataset DomainNet \ 114 | --test_domain <test_domain> --pretrain <path_to_pretrained_model> --adain_ckpt <path_to_adain_model> 115 | ``` 116 | 117 | The test domain should be one of "ipc", "ips". 118 | 119 | ### 3. Evaluation 120 | 121 | Periodic evaluation is performed during training. The final model can be tested using: 122 | 123 | ```bash 124 | CUDA_VISIBLE_DEVICES=0 python eval.py --dataset <dataset> --test_domain <test_domain> --load_path <path_to_checkpoint> 125 | ``` 126 | 127 | For example, to test the model trained for the Office31 shift having Amazon as target, we use: 128 | 129 | ```bash 130 | CUDA_VISIBLE_DEVICES=0 python eval.py --dataset Office31 --test_domain Amazon --load_path logs/Dataset-Office31_Target-Amazon_Mode-HyMOS_st_batchK-20_batchP-2_iterative_ProbST-0.5/last.model 131 | ``` 132 | 133 | ## Citation 134 | 135 | To cite, please use the following reference: 136 | ``` 137 | @inproceedings{bucci2022distance, 138 | title={Distance-based Hyperspherical Classification for Multi-source Open-Set Domain Adaptation}, 139 | author={Silvia Bucci and Francesco Cappio Borlino and Barbara Caputo and Tatiana Tommasi}, 140 | booktitle={Winter Conference on Applications of Computer Vision}, 141 | year={2022} 142 | } 143 | ``` 144 | -------------------------------------------------------------------------------- /common/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/silvia1993/HyMOS/86bb5165c3ad921da2ffb00aa5e34ef9c38ea9c0/common/__init__.py -------------------------------------------------------------------------------- /common/common.py: -------------------------------------------------------------------------------- 1 | from argparse import ArgumentParser 2 | 3 | 4 | def parse_args(default=False): 5 | """Command-line argument parser for training.""" 6 | 7 | parser = ArgumentParser(description='Pytorch implementation of HyMOS') 8 | 9 | parser.add_argument('--dataset', help='Dataset', default='OfficeHome', 10 | choices=['OfficeHome', 'DomainNet', 'Office31'], type=str) 11 | parser.add_argument('--test_domain', help="Domain (or name of file referring to data) for 
testing", type=str) 12 | 13 | parser.add_argument("--local_rank", type=int,default=0, help='Local rank for distributed learning') 14 | parser.add_argument('--load_path', help='Path to the loading checkpoint for eval', default=None, type=str) 15 | parser.add_argument("--no_strict", help='Do not strictly load state_dicts', action='store_true') 16 | parser.add_argument('--pretrain', help="Path to pretrained network", default = 'pretrained/resnet50_SupCLR.pth') 17 | 18 | ##### Training Configurations ##### 19 | parser.add_argument('--lr_init', help='Initial learning rate', default=0.005, type=float) 20 | parser.add_argument('--weight_decay', help='Weight decay', default=1e-6, type=float) 21 | parser.add_argument('--batch_size', help='Batch size for style batches',default=32, type=int) 22 | 23 | ##### Objective Configurations ##### 24 | parser.add_argument('--temperature', help='Temperature for similarity', default=0.07, type=float) 25 | 26 | parser.add_argument("--suffix", default="", type=str, help="suffix for log dir") 27 | 28 | #### Style transfer options #### 29 | parser.add_argument("--adain_probability", type=float, default=0.5, 30 | help="Probability to apply adain to each batch image") 31 | parser.add_argument("--adain_alpha", type=float, default=1.0, 32 | help="Alpha coefficient for adain style transfer") 33 | parser.add_argument("--adain_ckpt", type=str, default=None, 34 | help="Path to adain checkpoint") 35 | 36 | if default: 37 | return parser.parse_args('') # empty string 38 | else: 39 | return parser.parse_args() 40 | -------------------------------------------------------------------------------- /common/eval.py: -------------------------------------------------------------------------------- 1 | from copy import deepcopy 2 | 3 | import torch 4 | import torch.nn as nn 5 | from torch.utils.data import DataLoader 6 | 7 | from common.common import parse_args 8 | import models.classifier as C 9 | from datasets.datasets import get_datasets_for_test 10 | 11 | P = parse_args() 12 | P.mode = "openset_eval" 13 | P.model = "resnet50_imagenet" 14 | P.im_mean = [0.485, 0.456, 0.406] 15 | P.im_std = [0.229, 0.224, 0.225] 16 | 17 | 18 | ### Set torch device ### 19 | 20 | P.n_gpus = torch.cuda.device_count() 21 | assert P.n_gpus <= 1 # no multi GPU 22 | P.multi_gpu = False 23 | 24 | if torch.cuda.is_available(): 25 | torch.cuda.set_device(P.local_rank) 26 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu") 27 | 28 | source_ds, target_ds, n_classes = get_datasets_for_test(P) 29 | P.image_size = (224, 224, 3) 30 | P.n_classes = n_classes 31 | 32 | source_test_loader = DataLoader(source_ds, shuffle=False, batch_size=1, num_workers=4) 33 | target_test_loader = DataLoader(target_ds, shuffle=False, batch_size=1, num_workers=4) 34 | 35 | ### Initialize model ### 36 | 37 | model = C.get_classifier(P.model, n_classes=P.n_classes).to(device) 38 | 39 | criterion = nn.CrossEntropyLoss().to(device) 40 | 41 | # set normalize params 42 | mean = torch.tensor(P.im_mean).to(device) 43 | std = torch.tensor(P.im_std).to(device) 44 | model.normalize.mean=mean 45 | model.normalize.std=std 46 | 47 | assert P.load_path is not None, "You need to pass checkpoint path using --load_path" 48 | 49 | checkpoint = torch.load(P.load_path) 50 | 51 | missing, unexpected = model.load_state_dict(checkpoint, strict=not P.no_strict) 52 | print(f"Missing keys: {missing}\nUnexpected keys: {unexpected}") 53 | 54 | -------------------------------------------------------------------------------- /common/train.py: 
-------------------------------------------------------------------------------- 1 | from copy import deepcopy 2 | 3 | import torch 4 | import torch.nn as nn 5 | import torch.optim as optim 6 | import torch.optim.lr_scheduler as lr_scheduler 7 | from torch.utils.data import DataLoader, ConcatDataset 8 | 9 | from common.common import parse_args 10 | import models.classifier as C 11 | from datasets.datasets import get_dataset_2, get_style_dataset, get_datasets_for_test, BalancedMultiSourceRandomSampler 12 | from utils.utils import load_checkpoint 13 | import sys 14 | import os 15 | 16 | P = parse_args() 17 | P.iterative = True 18 | P.mode = "HyMOS_st" 19 | P.model = "resnet50_imagenet" 20 | P.resize_factor = 0.08 21 | P.simclr_dim = 128 22 | P.iterations = 40000 23 | P.its_breakpoints = [20000,25000,30000,35000] 24 | P.im_mean = [0.485, 0.456, 0.406] 25 | P.im_std = [0.229, 0.224, 0.225] 26 | P.optimizer = "lars" 27 | P.lr_scheduler = "cosine" 28 | P.warmup = 2500 29 | 30 | # batch_K -> number of known classes 31 | # batch_p -> number of source domains 32 | # they are necessary to build balanced batches. 33 | 34 | if P.dataset == "OfficeHome": 35 | P.batch_K = 45 36 | P.batch_p = 3 37 | elif P.dataset == "DomainNet": 38 | P.batch_K = 100 39 | P.batch_p = 2 40 | elif P.dataset == "Office31": 41 | P.batch_K = 20 42 | P.batch_p = 2 43 | else: 44 | raise NotImplementedError(f"Unknown dataset {P.dataset}") 45 | 46 | ### Set torch device ### 47 | if torch.cuda.is_available(): 48 | torch.cuda.set_device(P.local_rank) 49 | device = torch.device(f"cuda" if torch.cuda.is_available() else "cpu") 50 | 51 | P.n_gpus = torch.cuda.device_count() 52 | 53 | import apex 54 | import torch.distributed as dist 55 | from torch.utils.data.distributed import DistributedSampler 56 | from tqdm import tqdm 57 | import numpy as np 58 | 59 | P.multi_gpu = True 60 | torch.distributed.init_process_group( 61 | 'nccl', 62 | init_method='env://', 63 | ) 64 | 65 | # get local rank and world size from torch distributed 66 | P.local_rank=int(os.environ['RANK']) 67 | P.n_gpus=torch.distributed.get_world_size() 68 | print(f"Num of GPUS {P.n_gpus}") 69 | 70 | ### Initialize dataset ### 71 | train_sets, n_classes = get_dataset_2(P) 72 | 73 | # we have a list of ConcatDatasets, one for each class 74 | P.image_size = (224, 224, 3) 75 | P.n_classes = n_classes 76 | kwargs = {'pin_memory': False, 'num_workers': 2, 'drop_last':True} 77 | 78 | assert P.batch_K%P.n_gpus == 0, "batch_K has to be divisible by world size!!" 79 | single_GPU_batch_K = P.batch_K/P.n_gpus 80 | single_GPU_batch_size = int(P.batch_p*single_GPU_batch_K) 81 | whole_source = ConcatDataset([train_sets[idx] for idx in range(n_classes)]) 82 | my_sampler = BalancedMultiSourceRandomSampler(whole_source, P.batch_p, P.local_rank, P.n_gpus) 83 | print(f"{P.local_rank} sampler_size: {len(my_sampler)}. 
Dataset_size: {len(whole_source)}") 84 | train_loader = DataLoader(whole_source, sampler=my_sampler, batch_size=single_GPU_batch_size, **kwargs) 85 | 86 | ### Test datasets and data loaders 87 | source_ds, target_ds, _, = get_datasets_for_test(P) 88 | 89 | source_test_sampler = DistributedSampler(source_ds, num_replicas=P.n_gpus, rank=P.local_rank) 90 | target_test_sampler = DistributedSampler(target_ds, num_replicas=P.n_gpus, rank=P.local_rank) 91 | source_test_loader = DataLoader(source_ds, sampler=source_test_sampler, shuffle=False, batch_size=1, num_workers=4) 92 | target_test_loader = DataLoader(target_ds, sampler=target_test_sampler, shuffle=False, batch_size=1, num_workers=4) 93 | 94 | ### Initialize model ### 95 | # define transformations for SimCLR augmentation 96 | simclr_aug = C.get_simclr_augmentation(P, image_size=P.image_size).to(device) 97 | 98 | # generate model (includes backbone + classification head) 99 | model = C.get_classifier(P.model, n_classes=P.n_classes, pretrain=P.pretrain).to(device) 100 | 101 | # modify normalize params if necessary 102 | mean = torch.tensor(P.im_mean).to(device) 103 | std = torch.tensor(P.im_std).to(device) 104 | model.normalize.mean=mean 105 | model.normalize.std=std 106 | 107 | criterion = nn.CrossEntropyLoss().to(device) 108 | 109 | if P.optimizer == 'sgd': 110 | optimizer = optim.SGD(model.parameters(), lr=P.lr_init, momentum=0.9, weight_decay=P.weight_decay) 111 | lr_decay_gamma = 0.1 112 | elif P.optimizer == 'adam': 113 | optimizer = optim.Adam(model.parameters(), lr=P.lr_init, betas=(.9, .999), weight_decay=P.weight_decay) 114 | lr_decay_gamma = 0.3 115 | elif P.optimizer == 'lars': 116 | from torchlars import LARS 117 | # wrap SGD in LARS for multi-gpu optimization 118 | base_optimizer = optim.SGD(model.parameters(), lr=P.lr_init*10.0, momentum=0.9, weight_decay=P.weight_decay) 119 | optimizer = LARS(base_optimizer, eps=1e-8, trust_coef=0.001) 120 | lr_decay_gamma = 0.1 121 | else: 122 | raise NotImplementedError() 123 | 124 | if P.lr_scheduler == 'cosine': # default choice 125 | scheduler = lr_scheduler.CosineAnnealingLR(optimizer, P.iterations) 126 | elif P.lr_scheduler == 'step_decay': 127 | milestones = [int(0.5 * P.iterations), int(0.75 * P.iterations)] 128 | scheduler = lr_scheduler.MultiStepLR(optimizer, gamma=lr_decay_gamma, milestones=milestones) 129 | else: 130 | raise NotImplementedError() 131 | 132 | # a warmup scheduler is used in the first iterations, then substituted with the scheduler defined above 133 | from training.scheduler import GradualWarmupScheduler 134 | scheduler_warmup = GradualWarmupScheduler(optimizer, multiplier=1, total_epoch=P.warmup, after_scheduler=scheduler) 135 | 136 | resume = False 137 | start_its = 1 138 | 139 | if "st" in P.mode: 140 | assert P.adain_ckpt is not None, "You need to pass adain ckpt path using --adain_ckpt" 141 | 142 | # first we load style transfer model 143 | from models.adain import AdaIN 144 | adain_model = AdaIN() 145 | adain_model.load_state_dict(torch.load(P.adain_ckpt, map_location=lambda storage, loc: storage)) 146 | print('Adain augmentation setup: loaded checkpoint {}.'.format(P.adain_ckpt)) 147 | P.adain_model = adain_model 148 | 149 | # now we also need to create a data loader for style images 150 | style_dataset = get_style_dataset(P) 151 | 152 | if P.multi_gpu: 153 | style_sampler = DistributedSampler(style_dataset, num_replicas=P.n_gpus, rank=P.local_rank) 154 | style_loader = DataLoader(style_dataset, sampler=style_sampler, batch_size=P.batch_size, **kwargs) 155 | 
else: 156 | style_loader = DataLoader(style_dataset, shuffle=True, batch_size=P.batch_size, **kwargs) 157 | P.style_loader = style_loader 158 | 159 | # for images on which we apply style transfer we don't want to have also 160 | # color jittering and grey scale among simclr augmentations 161 | P.simclr_aug_st = C.get_simclr_augmentation_crop_only(P, image_size=P.image_size).to(device) 162 | 163 | if P.multi_gpu: 164 | simclr_aug = apex.parallel.DistributedDataParallel(simclr_aug, delay_allreduce=True) 165 | model = apex.parallel.convert_syncbn_model(model) 166 | model = apex.parallel.DistributedDataParallel(model, delay_allreduce=True) 167 | -------------------------------------------------------------------------------- /data/data_txt/Office31/Dslr.txt: -------------------------------------------------------------------------------- 1 | dslr/backpack/frame_0009.jpg 0 2 | dslr/backpack/frame_0003.jpg 0 3 | dslr/backpack/frame_0011.jpg 0 4 | dslr/backpack/frame_0001.jpg 0 5 | dslr/backpack/frame_0006.jpg 0 6 | dslr/backpack/frame_0005.jpg 0 7 | dslr/backpack/frame_0008.jpg 0 8 | dslr/backpack/frame_0004.jpg 0 9 | dslr/backpack/frame_0012.jpg 0 10 | dslr/backpack/frame_0002.jpg 0 11 | dslr/backpack/frame_0010.jpg 0 12 | dslr/backpack/frame_0007.jpg 0 13 | dslr/bike/frame_0005.jpg 1 14 | dslr/bike/frame_0001.jpg 1 15 | dslr/bike/frame_0018.jpg 1 16 | dslr/bike/frame_0004.jpg 1 17 | dslr/bike/frame_0014.jpg 1 18 | dslr/bike/frame_0012.jpg 1 19 | dslr/bike/frame_0002.jpg 1 20 | dslr/bike/frame_0009.jpg 1 21 | dslr/bike/frame_0003.jpg 1 22 | dslr/bike/frame_0008.jpg 1 23 | dslr/bike/frame_0015.jpg 1 24 | dslr/bike/frame_0007.jpg 1 25 | dslr/bike/frame_0016.jpg 1 26 | dslr/bike/frame_0019.jpg 1 27 | dslr/bike/frame_0006.jpg 1 28 | dslr/bike/frame_0017.jpg 1 29 | dslr/bike/frame_0013.jpg 1 30 | dslr/bike/frame_0021.jpg 1 31 | dslr/bike/frame_0020.jpg 1 32 | dslr/bike/frame_0011.jpg 1 33 | dslr/bike/frame_0010.jpg 1 34 | dslr/bike_helmet/frame_0001.jpg 2 35 | dslr/bike_helmet/frame_0007.jpg 2 36 | dslr/bike_helmet/frame_0021.jpg 2 37 | dslr/bike_helmet/frame_0010.jpg 2 38 | dslr/bike_helmet/frame_0023.jpg 2 39 | dslr/bike_helmet/frame_0017.jpg 2 40 | dslr/bike_helmet/frame_0003.jpg 2 41 | dslr/bike_helmet/frame_0019.jpg 2 42 | dslr/bike_helmet/frame_0016.jpg 2 43 | dslr/bike_helmet/frame_0012.jpg 2 44 | dslr/bike_helmet/frame_0015.jpg 2 45 | dslr/bike_helmet/frame_0011.jpg 2 46 | dslr/bike_helmet/frame_0009.jpg 2 47 | dslr/bike_helmet/frame_0020.jpg 2 48 | dslr/bike_helmet/frame_0006.jpg 2 49 | dslr/bike_helmet/frame_0005.jpg 2 50 | dslr/bike_helmet/frame_0008.jpg 2 51 | dslr/bike_helmet/frame_0002.jpg 2 52 | dslr/bike_helmet/frame_0013.jpg 2 53 | dslr/bike_helmet/frame_0022.jpg 2 54 | dslr/bike_helmet/frame_0018.jpg 2 55 | dslr/bike_helmet/frame_0024.jpg 2 56 | dslr/bike_helmet/frame_0004.jpg 2 57 | dslr/bike_helmet/frame_0014.jpg 2 58 | dslr/bookcase/frame_0010.jpg 3 59 | dslr/bookcase/frame_0012.jpg 3 60 | dslr/bookcase/frame_0002.jpg 3 61 | dslr/bookcase/frame_0007.jpg 3 62 | dslr/bookcase/frame_0001.jpg 3 63 | dslr/bookcase/frame_0005.jpg 3 64 | dslr/bookcase/frame_0009.jpg 3 65 | dslr/bookcase/frame_0011.jpg 3 66 | dslr/bookcase/frame_0004.jpg 3 67 | dslr/bookcase/frame_0006.jpg 3 68 | dslr/bookcase/frame_0003.jpg 3 69 | dslr/bookcase/frame_0008.jpg 3 70 | dslr/bottle/frame_0011.jpg 4 71 | dslr/bottle/frame_0008.jpg 4 72 | dslr/bottle/frame_0001.jpg 4 73 | dslr/bottle/frame_0016.jpg 4 74 | dslr/bottle/frame_0002.jpg 4 75 | dslr/bottle/frame_0003.jpg 4 76 | dslr/bottle/frame_0004.jpg 4 77 | 
dslr/bottle/frame_0014.jpg 4 78 | dslr/bottle/frame_0006.jpg 4 79 | dslr/bottle/frame_0013.jpg 4 80 | dslr/bottle/frame_0007.jpg 4 81 | dslr/bottle/frame_0012.jpg 4 82 | dslr/bottle/frame_0009.jpg 4 83 | dslr/bottle/frame_0005.jpg 4 84 | dslr/bottle/frame_0010.jpg 4 85 | dslr/bottle/frame_0015.jpg 4 86 | dslr/calculator/frame_0004.jpg 5 87 | dslr/calculator/frame_0005.jpg 5 88 | dslr/calculator/frame_0010.jpg 5 89 | dslr/calculator/frame_0009.jpg 5 90 | dslr/calculator/frame_0008.jpg 5 91 | dslr/calculator/frame_0011.jpg 5 92 | dslr/calculator/frame_0006.jpg 5 93 | dslr/calculator/frame_0007.jpg 5 94 | dslr/calculator/frame_0012.jpg 5 95 | dslr/calculator/frame_0003.jpg 5 96 | dslr/calculator/frame_0001.jpg 5 97 | dslr/calculator/frame_0002.jpg 5 98 | dslr/desk_chair/frame_0013.jpg 6 99 | dslr/desk_chair/frame_0006.jpg 6 100 | dslr/desk_chair/frame_0005.jpg 6 101 | dslr/desk_chair/frame_0002.jpg 6 102 | dslr/desk_chair/frame_0009.jpg 6 103 | dslr/desk_chair/frame_0008.jpg 6 104 | dslr/desk_chair/frame_0004.jpg 6 105 | dslr/desk_chair/frame_0007.jpg 6 106 | dslr/desk_chair/frame_0003.jpg 6 107 | dslr/desk_chair/frame_0011.jpg 6 108 | dslr/desk_chair/frame_0012.jpg 6 109 | dslr/desk_chair/frame_0010.jpg 6 110 | dslr/desk_chair/frame_0001.jpg 6 111 | dslr/desk_lamp/frame_0007.jpg 7 112 | dslr/desk_lamp/frame_0011.jpg 7 113 | dslr/desk_lamp/frame_0010.jpg 7 114 | dslr/desk_lamp/frame_0009.jpg 7 115 | dslr/desk_lamp/frame_0014.jpg 7 116 | dslr/desk_lamp/frame_0013.jpg 7 117 | dslr/desk_lamp/frame_0001.jpg 7 118 | dslr/desk_lamp/frame_0012.jpg 7 119 | dslr/desk_lamp/frame_0003.jpg 7 120 | dslr/desk_lamp/frame_0008.jpg 7 121 | dslr/desk_lamp/frame_0006.jpg 7 122 | dslr/desk_lamp/frame_0005.jpg 7 123 | dslr/desk_lamp/frame_0004.jpg 7 124 | dslr/desk_lamp/frame_0002.jpg 7 125 | dslr/desktop_computer/frame_0010.jpg 8 126 | dslr/desktop_computer/frame_0005.jpg 8 127 | dslr/desktop_computer/frame_0008.jpg 8 128 | dslr/desktop_computer/frame_0004.jpg 8 129 | dslr/desktop_computer/frame_0011.jpg 8 130 | dslr/desktop_computer/frame_0002.jpg 8 131 | dslr/desktop_computer/frame_0001.jpg 8 132 | dslr/desktop_computer/frame_0014.jpg 8 133 | dslr/desktop_computer/frame_0013.jpg 8 134 | dslr/desktop_computer/frame_0009.jpg 8 135 | dslr/desktop_computer/frame_0015.jpg 8 136 | dslr/desktop_computer/frame_0007.jpg 8 137 | dslr/desktop_computer/frame_0012.jpg 8 138 | dslr/desktop_computer/frame_0003.jpg 8 139 | dslr/desktop_computer/frame_0006.jpg 8 140 | dslr/file_cabinet/frame_0014.jpg 9 141 | dslr/file_cabinet/frame_0003.jpg 9 142 | dslr/file_cabinet/frame_0015.jpg 9 143 | dslr/file_cabinet/frame_0008.jpg 9 144 | dslr/file_cabinet/frame_0011.jpg 9 145 | dslr/file_cabinet/frame_0010.jpg 9 146 | dslr/file_cabinet/frame_0002.jpg 9 147 | dslr/file_cabinet/frame_0006.jpg 9 148 | dslr/file_cabinet/frame_0012.jpg 9 149 | dslr/file_cabinet/frame_0013.jpg 9 150 | dslr/file_cabinet/frame_0007.jpg 9 151 | dslr/file_cabinet/frame_0004.jpg 9 152 | dslr/file_cabinet/frame_0001.jpg 9 153 | dslr/file_cabinet/frame_0005.jpg 9 154 | dslr/file_cabinet/frame_0009.jpg 9 155 | dslr/headphones/frame_0012.jpg 10 156 | dslr/headphones/frame_0006.jpg 10 157 | dslr/headphones/frame_0001.jpg 10 158 | dslr/headphones/frame_0002.jpg 10 159 | dslr/headphones/frame_0010.jpg 10 160 | dslr/headphones/frame_0013.jpg 10 161 | dslr/headphones/frame_0005.jpg 10 162 | dslr/headphones/frame_0004.jpg 10 163 | dslr/headphones/frame_0009.jpg 10 164 | dslr/headphones/frame_0008.jpg 10 165 | dslr/headphones/frame_0003.jpg 10 166 | 
dslr/headphones/frame_0011.jpg 10 167 | dslr/headphones/frame_0007.jpg 10 168 | dslr/keyboard/frame_0008.jpg 11 169 | dslr/keyboard/frame_0001.jpg 11 170 | dslr/keyboard/frame_0003.jpg 11 171 | dslr/keyboard/frame_0005.jpg 11 172 | dslr/keyboard/frame_0004.jpg 11 173 | dslr/keyboard/frame_0010.jpg 11 174 | dslr/keyboard/frame_0009.jpg 11 175 | dslr/keyboard/frame_0007.jpg 11 176 | dslr/keyboard/frame_0006.jpg 11 177 | dslr/keyboard/frame_0002.jpg 11 178 | dslr/laptop/frame_0023.jpg 12 179 | dslr/laptop/frame_0003.jpg 12 180 | dslr/laptop/frame_0005.jpg 12 181 | dslr/laptop/frame_0007.jpg 12 182 | dslr/laptop/frame_0017.jpg 12 183 | dslr/laptop/frame_0016.jpg 12 184 | dslr/laptop/frame_0001.jpg 12 185 | dslr/laptop/frame_0004.jpg 12 186 | dslr/laptop/frame_0008.jpg 12 187 | dslr/laptop/frame_0011.jpg 12 188 | dslr/laptop/frame_0024.jpg 12 189 | dslr/laptop/frame_0021.jpg 12 190 | dslr/laptop/frame_0010.jpg 12 191 | dslr/laptop/frame_0019.jpg 12 192 | dslr/laptop/frame_0013.jpg 12 193 | dslr/laptop/frame_0015.jpg 12 194 | dslr/laptop/frame_0020.jpg 12 195 | dslr/laptop/frame_0018.jpg 12 196 | dslr/laptop/frame_0014.jpg 12 197 | dslr/laptop/frame_0006.jpg 12 198 | dslr/laptop/frame_0002.jpg 12 199 | dslr/laptop/frame_0022.jpg 12 200 | dslr/laptop/frame_0012.jpg 12 201 | dslr/laptop/frame_0009.jpg 12 202 | dslr/letter_tray/frame_0009.jpg 13 203 | dslr/letter_tray/frame_0002.jpg 13 204 | dslr/letter_tray/frame_0006.jpg 13 205 | dslr/letter_tray/frame_0012.jpg 13 206 | dslr/letter_tray/frame_0016.jpg 13 207 | dslr/letter_tray/frame_0010.jpg 13 208 | dslr/letter_tray/frame_0001.jpg 13 209 | dslr/letter_tray/frame_0004.jpg 13 210 | dslr/letter_tray/frame_0003.jpg 13 211 | dslr/letter_tray/frame_0008.jpg 13 212 | dslr/letter_tray/frame_0005.jpg 13 213 | dslr/letter_tray/frame_0013.jpg 13 214 | dslr/letter_tray/frame_0015.jpg 13 215 | dslr/letter_tray/frame_0011.jpg 13 216 | dslr/letter_tray/frame_0007.jpg 13 217 | dslr/letter_tray/frame_0014.jpg 13 218 | dslr/mobile_phone/frame_0016.jpg 14 219 | dslr/mobile_phone/frame_0017.jpg 14 220 | dslr/mobile_phone/frame_0007.jpg 14 221 | dslr/mobile_phone/frame_0018.jpg 14 222 | dslr/mobile_phone/frame_0029.jpg 14 223 | dslr/mobile_phone/frame_0011.jpg 14 224 | dslr/mobile_phone/frame_0019.jpg 14 225 | dslr/mobile_phone/frame_0014.jpg 14 226 | dslr/mobile_phone/frame_0008.jpg 14 227 | dslr/mobile_phone/frame_0024.jpg 14 228 | dslr/mobile_phone/frame_0028.jpg 14 229 | dslr/mobile_phone/frame_0001.jpg 14 230 | dslr/mobile_phone/frame_0009.jpg 14 231 | dslr/mobile_phone/frame_0006.jpg 14 232 | dslr/mobile_phone/frame_0005.jpg 14 233 | dslr/mobile_phone/frame_0027.jpg 14 234 | dslr/mobile_phone/frame_0025.jpg 14 235 | dslr/mobile_phone/frame_0010.jpg 14 236 | dslr/mobile_phone/frame_0015.jpg 14 237 | dslr/mobile_phone/frame_0013.jpg 14 238 | dslr/mobile_phone/frame_0023.jpg 14 239 | dslr/mobile_phone/frame_0002.jpg 14 240 | dslr/mobile_phone/frame_0031.jpg 14 241 | dslr/mobile_phone/frame_0030.jpg 14 242 | dslr/mobile_phone/frame_0022.jpg 14 243 | dslr/mobile_phone/frame_0021.jpg 14 244 | dslr/mobile_phone/frame_0026.jpg 14 245 | dslr/mobile_phone/frame_0003.jpg 14 246 | dslr/mobile_phone/frame_0012.jpg 14 247 | dslr/mobile_phone/frame_0004.jpg 14 248 | dslr/mobile_phone/frame_0020.jpg 14 249 | dslr/monitor/frame_0007.jpg 15 250 | dslr/monitor/frame_0002.jpg 15 251 | dslr/monitor/frame_0008.jpg 15 252 | dslr/monitor/frame_0017.jpg 15 253 | dslr/monitor/frame_0003.jpg 15 254 | dslr/monitor/frame_0016.jpg 15 255 | dslr/monitor/frame_0013.jpg 15 256 | 
dslr/monitor/frame_0019.jpg 15 257 | dslr/monitor/frame_0012.jpg 15 258 | dslr/monitor/frame_0015.jpg 15 259 | dslr/monitor/frame_0005.jpg 15 260 | dslr/monitor/frame_0014.jpg 15 261 | dslr/monitor/frame_0020.jpg 15 262 | dslr/monitor/frame_0021.jpg 15 263 | dslr/monitor/frame_0006.jpg 15 264 | dslr/monitor/frame_0004.jpg 15 265 | dslr/monitor/frame_0011.jpg 15 266 | dslr/monitor/frame_0009.jpg 15 267 | dslr/monitor/frame_0018.jpg 15 268 | dslr/monitor/frame_0022.jpg 15 269 | dslr/monitor/frame_0001.jpg 15 270 | dslr/monitor/frame_0010.jpg 15 271 | dslr/mouse/frame_0012.jpg 16 272 | dslr/mouse/frame_0002.jpg 16 273 | dslr/mouse/frame_0001.jpg 16 274 | dslr/mouse/frame_0003.jpg 16 275 | dslr/mouse/frame_0005.jpg 16 276 | dslr/mouse/frame_0011.jpg 16 277 | dslr/mouse/frame_0010.jpg 16 278 | dslr/mouse/frame_0007.jpg 16 279 | dslr/mouse/frame_0006.jpg 16 280 | dslr/mouse/frame_0008.jpg 16 281 | dslr/mouse/frame_0004.jpg 16 282 | dslr/mouse/frame_0009.jpg 16 283 | dslr/mug/frame_0001.jpg 17 284 | dslr/mug/frame_0008.jpg 17 285 | dslr/mug/frame_0007.jpg 17 286 | dslr/mug/frame_0006.jpg 17 287 | dslr/mug/frame_0002.jpg 17 288 | dslr/mug/frame_0004.jpg 17 289 | dslr/mug/frame_0005.jpg 17 290 | dslr/mug/frame_0003.jpg 17 291 | dslr/paper_notebook/frame_0007.jpg 18 292 | dslr/paper_notebook/frame_0002.jpg 18 293 | dslr/paper_notebook/frame_0001.jpg 18 294 | dslr/paper_notebook/frame_0008.jpg 18 295 | dslr/paper_notebook/frame_0006.jpg 18 296 | dslr/paper_notebook/frame_0009.jpg 18 297 | dslr/paper_notebook/frame_0005.jpg 18 298 | dslr/paper_notebook/frame_0004.jpg 18 299 | dslr/paper_notebook/frame_0003.jpg 18 300 | dslr/paper_notebook/frame_0010.jpg 18 301 | dslr/pen/frame_0006.jpg 19 302 | dslr/pen/frame_0002.jpg 19 303 | dslr/pen/frame_0008.jpg 19 304 | dslr/pen/frame_0005.jpg 19 305 | dslr/pen/frame_0009.jpg 19 306 | dslr/pen/frame_0004.jpg 19 307 | dslr/pen/frame_0007.jpg 19 308 | dslr/pen/frame_0010.jpg 19 309 | dslr/pen/frame_0001.jpg 19 310 | dslr/pen/frame_0003.jpg 19 311 | dslr/phone/frame_0011.jpg 20 312 | dslr/phone/frame_0002.jpg 20 313 | dslr/phone/frame_0010.jpg 20 314 | dslr/phone/frame_0001.jpg 20 315 | dslr/phone/frame_0003.jpg 20 316 | dslr/phone/frame_0013.jpg 20 317 | dslr/phone/frame_0007.jpg 20 318 | dslr/phone/frame_0006.jpg 20 319 | dslr/phone/frame_0012.jpg 20 320 | dslr/phone/frame_0005.jpg 20 321 | dslr/phone/frame_0004.jpg 20 322 | dslr/phone/frame_0008.jpg 20 323 | dslr/phone/frame_0009.jpg 20 324 | dslr/printer/frame_0004.jpg 21 325 | dslr/printer/frame_0010.jpg 21 326 | dslr/printer/frame_0005.jpg 21 327 | dslr/printer/frame_0001.jpg 21 328 | dslr/printer/frame_0012.jpg 21 329 | dslr/printer/frame_0003.jpg 21 330 | dslr/printer/frame_0009.jpg 21 331 | dslr/printer/frame_0011.jpg 21 332 | dslr/printer/frame_0007.jpg 21 333 | dslr/printer/frame_0006.jpg 21 334 | dslr/printer/frame_0013.jpg 21 335 | dslr/printer/frame_0002.jpg 21 336 | dslr/printer/frame_0008.jpg 21 337 | dslr/printer/frame_0015.jpg 21 338 | dslr/printer/frame_0014.jpg 21 339 | dslr/projector/frame_0012.jpg 22 340 | dslr/projector/frame_0020.jpg 22 341 | dslr/projector/frame_0005.jpg 22 342 | dslr/projector/frame_0017.jpg 22 343 | dslr/projector/frame_0003.jpg 22 344 | dslr/projector/frame_0022.jpg 22 345 | dslr/projector/frame_0009.jpg 22 346 | dslr/projector/frame_0019.jpg 22 347 | dslr/projector/frame_0007.jpg 22 348 | dslr/projector/frame_0016.jpg 22 349 | dslr/projector/frame_0018.jpg 22 350 | dslr/projector/frame_0001.jpg 22 351 | dslr/projector/frame_0010.jpg 22 352 | 
dslr/projector/frame_0023.jpg 22 353 | dslr/projector/frame_0014.jpg 22 354 | dslr/projector/frame_0006.jpg 22 355 | dslr/projector/frame_0008.jpg 22 356 | dslr/projector/frame_0013.jpg 22 357 | dslr/projector/frame_0002.jpg 22 358 | dslr/projector/frame_0021.jpg 22 359 | dslr/projector/frame_0015.jpg 22 360 | dslr/projector/frame_0004.jpg 22 361 | dslr/projector/frame_0011.jpg 22 362 | dslr/punchers/frame_0006.jpg 23 363 | dslr/punchers/frame_0017.jpg 23 364 | dslr/punchers/frame_0016.jpg 23 365 | dslr/punchers/frame_0013.jpg 23 366 | dslr/punchers/frame_0003.jpg 23 367 | dslr/punchers/frame_0018.jpg 23 368 | dslr/punchers/frame_0001.jpg 23 369 | dslr/punchers/frame_0010.jpg 23 370 | dslr/punchers/frame_0008.jpg 23 371 | dslr/punchers/frame_0015.jpg 23 372 | dslr/punchers/frame_0005.jpg 23 373 | dslr/punchers/frame_0014.jpg 23 374 | dslr/punchers/frame_0009.jpg 23 375 | dslr/punchers/frame_0004.jpg 23 376 | dslr/punchers/frame_0012.jpg 23 377 | dslr/punchers/frame_0007.jpg 23 378 | dslr/punchers/frame_0011.jpg 23 379 | dslr/punchers/frame_0002.jpg 23 380 | dslr/ring_binder/frame_0010.jpg 24 381 | dslr/ring_binder/frame_0003.jpg 24 382 | dslr/ring_binder/frame_0009.jpg 24 383 | dslr/ring_binder/frame_0004.jpg 24 384 | dslr/ring_binder/frame_0001.jpg 24 385 | dslr/ring_binder/frame_0006.jpg 24 386 | dslr/ring_binder/frame_0005.jpg 24 387 | dslr/ring_binder/frame_0002.jpg 24 388 | dslr/ring_binder/frame_0007.jpg 24 389 | dslr/ring_binder/frame_0008.jpg 24 390 | dslr/ruler/frame_0006.jpg 25 391 | dslr/ruler/frame_0004.jpg 25 392 | dslr/ruler/frame_0001.jpg 25 393 | dslr/ruler/frame_0007.jpg 25 394 | dslr/ruler/frame_0002.jpg 25 395 | dslr/ruler/frame_0003.jpg 25 396 | dslr/ruler/frame_0005.jpg 25 397 | dslr/scissors/frame_0017.jpg 26 398 | dslr/scissors/frame_0013.jpg 26 399 | dslr/scissors/frame_0006.jpg 26 400 | dslr/scissors/frame_0015.jpg 26 401 | dslr/scissors/frame_0016.jpg 26 402 | dslr/scissors/frame_0012.jpg 26 403 | dslr/scissors/frame_0002.jpg 26 404 | dslr/scissors/frame_0004.jpg 26 405 | dslr/scissors/frame_0010.jpg 26 406 | dslr/scissors/frame_0007.jpg 26 407 | dslr/scissors/frame_0011.jpg 26 408 | dslr/scissors/frame_0001.jpg 26 409 | dslr/scissors/frame_0018.jpg 26 410 | dslr/scissors/frame_0005.jpg 26 411 | dslr/scissors/frame_0008.jpg 26 412 | dslr/scissors/frame_0014.jpg 26 413 | dslr/scissors/frame_0009.jpg 26 414 | dslr/scissors/frame_0003.jpg 26 415 | dslr/speaker/frame_0021.jpg 27 416 | dslr/speaker/frame_0019.jpg 27 417 | dslr/speaker/frame_0004.jpg 27 418 | dslr/speaker/frame_0016.jpg 27 419 | dslr/speaker/frame_0012.jpg 27 420 | dslr/speaker/frame_0009.jpg 27 421 | dslr/speaker/frame_0017.jpg 27 422 | dslr/speaker/frame_0018.jpg 27 423 | dslr/speaker/frame_0015.jpg 27 424 | dslr/speaker/frame_0025.jpg 27 425 | dslr/speaker/frame_0022.jpg 27 426 | dslr/speaker/frame_0024.jpg 27 427 | dslr/speaker/frame_0023.jpg 27 428 | dslr/speaker/frame_0006.jpg 27 429 | dslr/speaker/frame_0005.jpg 27 430 | dslr/speaker/frame_0020.jpg 27 431 | dslr/speaker/frame_0010.jpg 27 432 | dslr/speaker/frame_0011.jpg 27 433 | dslr/speaker/frame_0007.jpg 27 434 | dslr/speaker/frame_0026.jpg 27 435 | dslr/speaker/frame_0008.jpg 27 436 | dslr/speaker/frame_0002.jpg 27 437 | dslr/speaker/frame_0013.jpg 27 438 | dslr/speaker/frame_0001.jpg 27 439 | dslr/speaker/frame_0003.jpg 27 440 | dslr/speaker/frame_0014.jpg 27 441 | dslr/stapler/frame_0013.jpg 28 442 | dslr/stapler/frame_0010.jpg 28 443 | dslr/stapler/frame_0021.jpg 28 444 | dslr/stapler/frame_0006.jpg 28 445 | dslr/stapler/frame_0002.jpg 28 
446 | dslr/stapler/frame_0003.jpg 28 447 | dslr/stapler/frame_0004.jpg 28 448 | dslr/stapler/frame_0011.jpg 28 449 | dslr/stapler/frame_0001.jpg 28 450 | dslr/stapler/frame_0008.jpg 28 451 | dslr/stapler/frame_0009.jpg 28 452 | dslr/stapler/frame_0015.jpg 28 453 | dslr/stapler/frame_0018.jpg 28 454 | dslr/stapler/frame_0016.jpg 28 455 | dslr/stapler/frame_0019.jpg 28 456 | dslr/stapler/frame_0012.jpg 28 457 | dslr/stapler/frame_0014.jpg 28 458 | dslr/stapler/frame_0007.jpg 28 459 | dslr/stapler/frame_0005.jpg 28 460 | dslr/stapler/frame_0020.jpg 28 461 | dslr/stapler/frame_0017.jpg 28 462 | dslr/tape_dispenser/frame_0005.jpg 29 463 | dslr/tape_dispenser/frame_0003.jpg 29 464 | dslr/tape_dispenser/frame_0012.jpg 29 465 | dslr/tape_dispenser/frame_0022.jpg 29 466 | dslr/tape_dispenser/frame_0018.jpg 29 467 | dslr/tape_dispenser/frame_0017.jpg 29 468 | dslr/tape_dispenser/frame_0021.jpg 29 469 | dslr/tape_dispenser/frame_0015.jpg 29 470 | dslr/tape_dispenser/frame_0011.jpg 29 471 | dslr/tape_dispenser/frame_0014.jpg 29 472 | dslr/tape_dispenser/frame_0013.jpg 29 473 | dslr/tape_dispenser/frame_0004.jpg 29 474 | dslr/tape_dispenser/frame_0010.jpg 29 475 | dslr/tape_dispenser/frame_0009.jpg 29 476 | dslr/tape_dispenser/frame_0001.jpg 29 477 | dslr/tape_dispenser/frame_0007.jpg 29 478 | dslr/tape_dispenser/frame_0008.jpg 29 479 | dslr/tape_dispenser/frame_0016.jpg 29 480 | dslr/tape_dispenser/frame_0006.jpg 29 481 | dslr/tape_dispenser/frame_0019.jpg 29 482 | dslr/tape_dispenser/frame_0002.jpg 29 483 | dslr/tape_dispenser/frame_0020.jpg 29 484 | dslr/trash_can/frame_0011.jpg 30 485 | dslr/trash_can/frame_0006.jpg 30 486 | dslr/trash_can/frame_0015.jpg 30 487 | dslr/trash_can/frame_0002.jpg 30 488 | dslr/trash_can/frame_0009.jpg 30 489 | dslr/trash_can/frame_0004.jpg 30 490 | dslr/trash_can/frame_0008.jpg 30 491 | dslr/trash_can/frame_0001.jpg 30 492 | dslr/trash_can/frame_0013.jpg 30 493 | dslr/trash_can/frame_0007.jpg 30 494 | dslr/trash_can/frame_0010.jpg 30 495 | dslr/trash_can/frame_0005.jpg 30 496 | dslr/trash_can/frame_0012.jpg 30 497 | dslr/trash_can/frame_0003.jpg 30 498 | dslr/trash_can/frame_0014.jpg 30 499 | -------------------------------------------------------------------------------- /data/data_txt/Office31/DslrOpenSet_known.txt: -------------------------------------------------------------------------------- 1 | dslr/backpack/frame_0009.jpg 0 2 | dslr/backpack/frame_0003.jpg 0 3 | dslr/backpack/frame_0011.jpg 0 4 | dslr/backpack/frame_0001.jpg 0 5 | dslr/backpack/frame_0006.jpg 0 6 | dslr/backpack/frame_0005.jpg 0 7 | dslr/backpack/frame_0008.jpg 0 8 | dslr/backpack/frame_0004.jpg 0 9 | dslr/backpack/frame_0012.jpg 0 10 | dslr/backpack/frame_0002.jpg 0 11 | dslr/backpack/frame_0010.jpg 0 12 | dslr/backpack/frame_0007.jpg 0 13 | dslr/bike/frame_0005.jpg 1 14 | dslr/bike/frame_0001.jpg 1 15 | dslr/bike/frame_0018.jpg 1 16 | dslr/bike/frame_0004.jpg 1 17 | dslr/bike/frame_0014.jpg 1 18 | dslr/bike/frame_0012.jpg 1 19 | dslr/bike/frame_0002.jpg 1 20 | dslr/bike/frame_0009.jpg 1 21 | dslr/bike/frame_0003.jpg 1 22 | dslr/bike/frame_0008.jpg 1 23 | dslr/bike/frame_0015.jpg 1 24 | dslr/bike/frame_0007.jpg 1 25 | dslr/bike/frame_0016.jpg 1 26 | dslr/bike/frame_0019.jpg 1 27 | dslr/bike/frame_0006.jpg 1 28 | dslr/bike/frame_0017.jpg 1 29 | dslr/bike/frame_0013.jpg 1 30 | dslr/bike/frame_0021.jpg 1 31 | dslr/bike/frame_0020.jpg 1 32 | dslr/bike/frame_0011.jpg 1 33 | dslr/bike/frame_0010.jpg 1 34 | dslr/bike_helmet/frame_0001.jpg 2 35 | dslr/bike_helmet/frame_0007.jpg 2 36 | 
dslr/bike_helmet/frame_0021.jpg 2 37 | dslr/bike_helmet/frame_0010.jpg 2 38 | dslr/bike_helmet/frame_0023.jpg 2 39 | dslr/bike_helmet/frame_0017.jpg 2 40 | dslr/bike_helmet/frame_0003.jpg 2 41 | dslr/bike_helmet/frame_0019.jpg 2 42 | dslr/bike_helmet/frame_0016.jpg 2 43 | dslr/bike_helmet/frame_0012.jpg 2 44 | dslr/bike_helmet/frame_0015.jpg 2 45 | dslr/bike_helmet/frame_0011.jpg 2 46 | dslr/bike_helmet/frame_0009.jpg 2 47 | dslr/bike_helmet/frame_0020.jpg 2 48 | dslr/bike_helmet/frame_0006.jpg 2 49 | dslr/bike_helmet/frame_0005.jpg 2 50 | dslr/bike_helmet/frame_0008.jpg 2 51 | dslr/bike_helmet/frame_0002.jpg 2 52 | dslr/bike_helmet/frame_0013.jpg 2 53 | dslr/bike_helmet/frame_0022.jpg 2 54 | dslr/bike_helmet/frame_0018.jpg 2 55 | dslr/bike_helmet/frame_0024.jpg 2 56 | dslr/bike_helmet/frame_0004.jpg 2 57 | dslr/bike_helmet/frame_0014.jpg 2 58 | dslr/bookcase/frame_0010.jpg 3 59 | dslr/bookcase/frame_0012.jpg 3 60 | dslr/bookcase/frame_0002.jpg 3 61 | dslr/bookcase/frame_0007.jpg 3 62 | dslr/bookcase/frame_0001.jpg 3 63 | dslr/bookcase/frame_0005.jpg 3 64 | dslr/bookcase/frame_0009.jpg 3 65 | dslr/bookcase/frame_0011.jpg 3 66 | dslr/bookcase/frame_0004.jpg 3 67 | dslr/bookcase/frame_0006.jpg 3 68 | dslr/bookcase/frame_0003.jpg 3 69 | dslr/bookcase/frame_0008.jpg 3 70 | dslr/bottle/frame_0011.jpg 4 71 | dslr/bottle/frame_0008.jpg 4 72 | dslr/bottle/frame_0001.jpg 4 73 | dslr/bottle/frame_0016.jpg 4 74 | dslr/bottle/frame_0002.jpg 4 75 | dslr/bottle/frame_0003.jpg 4 76 | dslr/bottle/frame_0004.jpg 4 77 | dslr/bottle/frame_0014.jpg 4 78 | dslr/bottle/frame_0006.jpg 4 79 | dslr/bottle/frame_0013.jpg 4 80 | dslr/bottle/frame_0007.jpg 4 81 | dslr/bottle/frame_0012.jpg 4 82 | dslr/bottle/frame_0009.jpg 4 83 | dslr/bottle/frame_0005.jpg 4 84 | dslr/bottle/frame_0010.jpg 4 85 | dslr/bottle/frame_0015.jpg 4 86 | dslr/calculator/frame_0004.jpg 5 87 | dslr/calculator/frame_0005.jpg 5 88 | dslr/calculator/frame_0010.jpg 5 89 | dslr/calculator/frame_0009.jpg 5 90 | dslr/calculator/frame_0008.jpg 5 91 | dslr/calculator/frame_0011.jpg 5 92 | dslr/calculator/frame_0006.jpg 5 93 | dslr/calculator/frame_0007.jpg 5 94 | dslr/calculator/frame_0012.jpg 5 95 | dslr/calculator/frame_0003.jpg 5 96 | dslr/calculator/frame_0001.jpg 5 97 | dslr/calculator/frame_0002.jpg 5 98 | dslr/desk_chair/frame_0013.jpg 6 99 | dslr/desk_chair/frame_0006.jpg 6 100 | dslr/desk_chair/frame_0005.jpg 6 101 | dslr/desk_chair/frame_0002.jpg 6 102 | dslr/desk_chair/frame_0009.jpg 6 103 | dslr/desk_chair/frame_0008.jpg 6 104 | dslr/desk_chair/frame_0004.jpg 6 105 | dslr/desk_chair/frame_0007.jpg 6 106 | dslr/desk_chair/frame_0003.jpg 6 107 | dslr/desk_chair/frame_0011.jpg 6 108 | dslr/desk_chair/frame_0012.jpg 6 109 | dslr/desk_chair/frame_0010.jpg 6 110 | dslr/desk_chair/frame_0001.jpg 6 111 | dslr/desk_lamp/frame_0007.jpg 7 112 | dslr/desk_lamp/frame_0011.jpg 7 113 | dslr/desk_lamp/frame_0010.jpg 7 114 | dslr/desk_lamp/frame_0009.jpg 7 115 | dslr/desk_lamp/frame_0014.jpg 7 116 | dslr/desk_lamp/frame_0013.jpg 7 117 | dslr/desk_lamp/frame_0001.jpg 7 118 | dslr/desk_lamp/frame_0012.jpg 7 119 | dslr/desk_lamp/frame_0003.jpg 7 120 | dslr/desk_lamp/frame_0008.jpg 7 121 | dslr/desk_lamp/frame_0006.jpg 7 122 | dslr/desk_lamp/frame_0005.jpg 7 123 | dslr/desk_lamp/frame_0004.jpg 7 124 | dslr/desk_lamp/frame_0002.jpg 7 125 | dslr/desktop_computer/frame_0010.jpg 8 126 | dslr/desktop_computer/frame_0005.jpg 8 127 | dslr/desktop_computer/frame_0008.jpg 8 128 | dslr/desktop_computer/frame_0004.jpg 8 129 | dslr/desktop_computer/frame_0011.jpg 8 130 | 
dslr/desktop_computer/frame_0002.jpg 8 131 | dslr/desktop_computer/frame_0001.jpg 8 132 | dslr/desktop_computer/frame_0014.jpg 8 133 | dslr/desktop_computer/frame_0013.jpg 8 134 | dslr/desktop_computer/frame_0009.jpg 8 135 | dslr/desktop_computer/frame_0015.jpg 8 136 | dslr/desktop_computer/frame_0007.jpg 8 137 | dslr/desktop_computer/frame_0012.jpg 8 138 | dslr/desktop_computer/frame_0003.jpg 8 139 | dslr/desktop_computer/frame_0006.jpg 8 140 | dslr/file_cabinet/frame_0014.jpg 9 141 | dslr/file_cabinet/frame_0003.jpg 9 142 | dslr/file_cabinet/frame_0015.jpg 9 143 | dslr/file_cabinet/frame_0008.jpg 9 144 | dslr/file_cabinet/frame_0011.jpg 9 145 | dslr/file_cabinet/frame_0010.jpg 9 146 | dslr/file_cabinet/frame_0002.jpg 9 147 | dslr/file_cabinet/frame_0006.jpg 9 148 | dslr/file_cabinet/frame_0012.jpg 9 149 | dslr/file_cabinet/frame_0013.jpg 9 150 | dslr/file_cabinet/frame_0007.jpg 9 151 | dslr/file_cabinet/frame_0004.jpg 9 152 | dslr/file_cabinet/frame_0001.jpg 9 153 | dslr/file_cabinet/frame_0005.jpg 9 154 | dslr/file_cabinet/frame_0009.jpg 9 155 | dslr/headphones/frame_0012.jpg 10 156 | dslr/headphones/frame_0006.jpg 10 157 | dslr/headphones/frame_0001.jpg 10 158 | dslr/headphones/frame_0002.jpg 10 159 | dslr/headphones/frame_0010.jpg 10 160 | dslr/headphones/frame_0013.jpg 10 161 | dslr/headphones/frame_0005.jpg 10 162 | dslr/headphones/frame_0004.jpg 10 163 | dslr/headphones/frame_0009.jpg 10 164 | dslr/headphones/frame_0008.jpg 10 165 | dslr/headphones/frame_0003.jpg 10 166 | dslr/headphones/frame_0011.jpg 10 167 | dslr/headphones/frame_0007.jpg 10 168 | dslr/keyboard/frame_0008.jpg 11 169 | dslr/keyboard/frame_0001.jpg 11 170 | dslr/keyboard/frame_0003.jpg 11 171 | dslr/keyboard/frame_0005.jpg 11 172 | dslr/keyboard/frame_0004.jpg 11 173 | dslr/keyboard/frame_0010.jpg 11 174 | dslr/keyboard/frame_0009.jpg 11 175 | dslr/keyboard/frame_0007.jpg 11 176 | dslr/keyboard/frame_0006.jpg 11 177 | dslr/keyboard/frame_0002.jpg 11 178 | dslr/laptop/frame_0023.jpg 12 179 | dslr/laptop/frame_0003.jpg 12 180 | dslr/laptop/frame_0005.jpg 12 181 | dslr/laptop/frame_0007.jpg 12 182 | dslr/laptop/frame_0017.jpg 12 183 | dslr/laptop/frame_0016.jpg 12 184 | dslr/laptop/frame_0001.jpg 12 185 | dslr/laptop/frame_0004.jpg 12 186 | dslr/laptop/frame_0008.jpg 12 187 | dslr/laptop/frame_0011.jpg 12 188 | dslr/laptop/frame_0024.jpg 12 189 | dslr/laptop/frame_0021.jpg 12 190 | dslr/laptop/frame_0010.jpg 12 191 | dslr/laptop/frame_0019.jpg 12 192 | dslr/laptop/frame_0013.jpg 12 193 | dslr/laptop/frame_0015.jpg 12 194 | dslr/laptop/frame_0020.jpg 12 195 | dslr/laptop/frame_0018.jpg 12 196 | dslr/laptop/frame_0014.jpg 12 197 | dslr/laptop/frame_0006.jpg 12 198 | dslr/laptop/frame_0002.jpg 12 199 | dslr/laptop/frame_0022.jpg 12 200 | dslr/laptop/frame_0012.jpg 12 201 | dslr/laptop/frame_0009.jpg 12 202 | dslr/letter_tray/frame_0009.jpg 13 203 | dslr/letter_tray/frame_0002.jpg 13 204 | dslr/letter_tray/frame_0006.jpg 13 205 | dslr/letter_tray/frame_0012.jpg 13 206 | dslr/letter_tray/frame_0016.jpg 13 207 | dslr/letter_tray/frame_0010.jpg 13 208 | dslr/letter_tray/frame_0001.jpg 13 209 | dslr/letter_tray/frame_0004.jpg 13 210 | dslr/letter_tray/frame_0003.jpg 13 211 | dslr/letter_tray/frame_0008.jpg 13 212 | dslr/letter_tray/frame_0005.jpg 13 213 | dslr/letter_tray/frame_0013.jpg 13 214 | dslr/letter_tray/frame_0015.jpg 13 215 | dslr/letter_tray/frame_0011.jpg 13 216 | dslr/letter_tray/frame_0007.jpg 13 217 | dslr/letter_tray/frame_0014.jpg 13 218 | dslr/mobile_phone/frame_0016.jpg 14 219 | 
dslr/mobile_phone/frame_0017.jpg 14 220 | dslr/mobile_phone/frame_0007.jpg 14 221 | dslr/mobile_phone/frame_0018.jpg 14 222 | dslr/mobile_phone/frame_0029.jpg 14 223 | dslr/mobile_phone/frame_0011.jpg 14 224 | dslr/mobile_phone/frame_0019.jpg 14 225 | dslr/mobile_phone/frame_0014.jpg 14 226 | dslr/mobile_phone/frame_0008.jpg 14 227 | dslr/mobile_phone/frame_0024.jpg 14 228 | dslr/mobile_phone/frame_0028.jpg 14 229 | dslr/mobile_phone/frame_0001.jpg 14 230 | dslr/mobile_phone/frame_0009.jpg 14 231 | dslr/mobile_phone/frame_0006.jpg 14 232 | dslr/mobile_phone/frame_0005.jpg 14 233 | dslr/mobile_phone/frame_0027.jpg 14 234 | dslr/mobile_phone/frame_0025.jpg 14 235 | dslr/mobile_phone/frame_0010.jpg 14 236 | dslr/mobile_phone/frame_0015.jpg 14 237 | dslr/mobile_phone/frame_0013.jpg 14 238 | dslr/mobile_phone/frame_0023.jpg 14 239 | dslr/mobile_phone/frame_0002.jpg 14 240 | dslr/mobile_phone/frame_0031.jpg 14 241 | dslr/mobile_phone/frame_0030.jpg 14 242 | dslr/mobile_phone/frame_0022.jpg 14 243 | dslr/mobile_phone/frame_0021.jpg 14 244 | dslr/mobile_phone/frame_0026.jpg 14 245 | dslr/mobile_phone/frame_0003.jpg 14 246 | dslr/mobile_phone/frame_0012.jpg 14 247 | dslr/mobile_phone/frame_0004.jpg 14 248 | dslr/mobile_phone/frame_0020.jpg 14 249 | dslr/monitor/frame_0007.jpg 15 250 | dslr/monitor/frame_0002.jpg 15 251 | dslr/monitor/frame_0008.jpg 15 252 | dslr/monitor/frame_0017.jpg 15 253 | dslr/monitor/frame_0003.jpg 15 254 | dslr/monitor/frame_0016.jpg 15 255 | dslr/monitor/frame_0013.jpg 15 256 | dslr/monitor/frame_0019.jpg 15 257 | dslr/monitor/frame_0012.jpg 15 258 | dslr/monitor/frame_0015.jpg 15 259 | dslr/monitor/frame_0005.jpg 15 260 | dslr/monitor/frame_0014.jpg 15 261 | dslr/monitor/frame_0020.jpg 15 262 | dslr/monitor/frame_0021.jpg 15 263 | dslr/monitor/frame_0006.jpg 15 264 | dslr/monitor/frame_0004.jpg 15 265 | dslr/monitor/frame_0011.jpg 15 266 | dslr/monitor/frame_0009.jpg 15 267 | dslr/monitor/frame_0018.jpg 15 268 | dslr/monitor/frame_0022.jpg 15 269 | dslr/monitor/frame_0001.jpg 15 270 | dslr/monitor/frame_0010.jpg 15 271 | dslr/mouse/frame_0012.jpg 16 272 | dslr/mouse/frame_0002.jpg 16 273 | dslr/mouse/frame_0001.jpg 16 274 | dslr/mouse/frame_0003.jpg 16 275 | dslr/mouse/frame_0005.jpg 16 276 | dslr/mouse/frame_0011.jpg 16 277 | dslr/mouse/frame_0010.jpg 16 278 | dslr/mouse/frame_0007.jpg 16 279 | dslr/mouse/frame_0006.jpg 16 280 | dslr/mouse/frame_0008.jpg 16 281 | dslr/mouse/frame_0004.jpg 16 282 | dslr/mouse/frame_0009.jpg 16 283 | dslr/mug/frame_0001.jpg 17 284 | dslr/mug/frame_0008.jpg 17 285 | dslr/mug/frame_0007.jpg 17 286 | dslr/mug/frame_0006.jpg 17 287 | dslr/mug/frame_0002.jpg 17 288 | dslr/mug/frame_0004.jpg 17 289 | dslr/mug/frame_0005.jpg 17 290 | dslr/mug/frame_0003.jpg 17 291 | dslr/paper_notebook/frame_0007.jpg 18 292 | dslr/paper_notebook/frame_0002.jpg 18 293 | dslr/paper_notebook/frame_0001.jpg 18 294 | dslr/paper_notebook/frame_0008.jpg 18 295 | dslr/paper_notebook/frame_0006.jpg 18 296 | dslr/paper_notebook/frame_0009.jpg 18 297 | dslr/paper_notebook/frame_0005.jpg 18 298 | dslr/paper_notebook/frame_0004.jpg 18 299 | dslr/paper_notebook/frame_0003.jpg 18 300 | dslr/paper_notebook/frame_0010.jpg 18 301 | dslr/pen/frame_0006.jpg 19 302 | dslr/pen/frame_0002.jpg 19 303 | dslr/pen/frame_0008.jpg 19 304 | dslr/pen/frame_0005.jpg 19 305 | dslr/pen/frame_0009.jpg 19 306 | dslr/pen/frame_0004.jpg 19 307 | dslr/pen/frame_0007.jpg 19 308 | dslr/pen/frame_0010.jpg 19 309 | dslr/pen/frame_0001.jpg 19 310 | dslr/pen/frame_0003.jpg 19 311 | 
-------------------------------------------------------------------------------- /data/data_txt/Office31/Webcam.txt: -------------------------------------------------------------------------------- 1 | webcam/backpack/frame_0019.jpg 0 2 | webcam/backpack/frame_0004.jpg 0 3 | webcam/backpack/frame_0007.jpg 0 4 | webcam/backpack/frame_0024.jpg 0 5 | webcam/backpack/frame_0023.jpg 0 6 | webcam/backpack/frame_0027.jpg 0 7 | webcam/backpack/frame_0003.jpg 0 8 | webcam/backpack/frame_0022.jpg 0 9 | webcam/backpack/frame_0005.jpg 0 10 | webcam/backpack/frame_0017.jpg 0 11 | webcam/backpack/frame_0029.jpg 0 12 | webcam/backpack/frame_0016.jpg 0 13 | webcam/backpack/frame_0021.jpg 0 14 | webcam/backpack/frame_0012.jpg 0 15 | webcam/backpack/frame_0018.jpg 0 16 | webcam/backpack/frame_0028.jpg 0 17 | webcam/backpack/frame_0014.jpg 0 18 | webcam/backpack/frame_0026.jpg 0 19 | webcam/backpack/frame_0025.jpg 0 20 | webcam/backpack/frame_0020.jpg 0 21 | webcam/backpack/frame_0008.jpg 0 22 | webcam/backpack/frame_0015.jpg 0 23 | webcam/backpack/frame_0010.jpg 0 24 | webcam/backpack/frame_0009.jpg 0 25 | webcam/backpack/frame_0001.jpg 0 26 | webcam/backpack/frame_0011.jpg 0 27 | webcam/backpack/frame_0002.jpg 0 28 | webcam/backpack/frame_0006.jpg 0 29 | webcam/backpack/frame_0013.jpg 0 30 | webcam/bike/frame_0007.jpg 1 31 | webcam/bike/frame_0016.jpg 1 32 | webcam/bike/frame_0006.jpg 1 33 | webcam/bike/frame_0002.jpg 1 34 | webcam/bike/frame_0012.jpg 1 35 | webcam/bike/frame_0019.jpg 1 36 | webcam/bike/frame_0020.jpg 1 37 | webcam/bike/frame_0001.jpg 1 38 | webcam/bike/frame_0014.jpg 1 39 | webcam/bike/frame_0015.jpg 1 40 | webcam/bike/frame_0011.jpg 1 41 | webcam/bike/frame_0004.jpg 1 42 | webcam/bike/frame_0010.jpg 1 43 | webcam/bike/frame_0018.jpg 1 44 | webcam/bike/frame_0009.jpg 1 45 | webcam/bike/frame_0005.jpg 1 46 | webcam/bike/frame_0021.jpg 1 47 | webcam/bike/frame_0017.jpg 1 48 | webcam/bike/frame_0013.jpg 1 49 | webcam/bike/frame_0008.jpg 1 50 | webcam/bike/frame_0003.jpg 1 51 | webcam/bike_helmet/frame_0012.jpg 2 52 | webcam/bike_helmet/frame_0013.jpg 2 53 | webcam/bike_helmet/frame_0019.jpg 2 54 | webcam/bike_helmet/frame_0006.jpg 2 55 | webcam/bike_helmet/frame_0003.jpg 2 56 | webcam/bike_helmet/frame_0022.jpg 2 57 | webcam/bike_helmet/frame_0008.jpg 2 58 | webcam/bike_helmet/frame_0015.jpg 2 59 | webcam/bike_helmet/frame_0026.jpg 2 60 | webcam/bike_helmet/frame_0024.jpg 2 61 | webcam/bike_helmet/frame_0023.jpg 2 62 | webcam/bike_helmet/frame_0025.jpg 2 63 | webcam/bike_helmet/frame_0001.jpg 2 64 | webcam/bike_helmet/frame_0027.jpg 2 65 | webcam/bike_helmet/frame_0009.jpg 2 66 | webcam/bike_helmet/frame_0016.jpg 2 67 | webcam/bike_helmet/frame_0010.jpg 2 68 | webcam/bike_helmet/frame_0014.jpg 2 69 | webcam/bike_helmet/frame_0017.jpg 2 70 | webcam/bike_helmet/frame_0018.jpg 2 71 | webcam/bike_helmet/frame_0002.jpg 2 72 | webcam/bike_helmet/frame_0011.jpg 2 73 | webcam/bike_helmet/frame_0007.jpg 2 74 | webcam/bike_helmet/frame_0005.jpg 2 75 | webcam/bike_helmet/frame_0020.jpg 2 76 | webcam/bike_helmet/frame_0028.jpg 2 77 | webcam/bike_helmet/frame_0021.jpg 2 78 | webcam/bike_helmet/frame_0004.jpg 2 79 | webcam/bookcase/frame_0010.jpg 3 80 | webcam/bookcase/frame_0003.jpg 3 81 | webcam/bookcase/frame_0007.jpg 3 82 | webcam/bookcase/frame_0008.jpg 3 83 | webcam/bookcase/frame_0011.jpg 3 84 | webcam/bookcase/frame_0004.jpg 3 85 | webcam/bookcase/frame_0012.jpg 3 86 | webcam/bookcase/frame_0009.jpg 3 87 | webcam/bookcase/frame_0006.jpg 3 88 | webcam/bookcase/frame_0002.jpg 3 89 | 
webcam/bookcase/frame_0001.jpg 3 90 | webcam/bookcase/frame_0005.jpg 3 91 | webcam/bottle/frame_0012.jpg 4 92 | webcam/bottle/frame_0002.jpg 4 93 | webcam/bottle/frame_0006.jpg 4 94 | webcam/bottle/frame_0008.jpg 4 95 | webcam/bottle/frame_0013.jpg 4 96 | webcam/bottle/frame_0009.jpg 4 97 | webcam/bottle/frame_0010.jpg 4 98 | webcam/bottle/frame_0016.jpg 4 99 | webcam/bottle/frame_0003.jpg 4 100 | webcam/bottle/frame_0004.jpg 4 101 | webcam/bottle/frame_0007.jpg 4 102 | webcam/bottle/frame_0014.jpg 4 103 | webcam/bottle/frame_0001.jpg 4 104 | webcam/bottle/frame_0015.jpg 4 105 | webcam/bottle/frame_0005.jpg 4 106 | webcam/bottle/frame_0011.jpg 4 107 | webcam/calculator/frame_0022.jpg 5 108 | webcam/calculator/frame_0017.jpg 5 109 | webcam/calculator/frame_0020.jpg 5 110 | webcam/calculator/frame_0029.jpg 5 111 | webcam/calculator/frame_0025.jpg 5 112 | webcam/calculator/frame_0024.jpg 5 113 | webcam/calculator/frame_0023.jpg 5 114 | webcam/calculator/frame_0013.jpg 5 115 | webcam/calculator/frame_0011.jpg 5 116 | webcam/calculator/frame_0007.jpg 5 117 | webcam/calculator/frame_0030.jpg 5 118 | webcam/calculator/frame_0015.jpg 5 119 | webcam/calculator/frame_0014.jpg 5 120 | webcam/calculator/frame_0003.jpg 5 121 | webcam/calculator/frame_0006.jpg 5 122 | webcam/calculator/frame_0018.jpg 5 123 | webcam/calculator/frame_0004.jpg 5 124 | webcam/calculator/frame_0010.jpg 5 125 | webcam/calculator/frame_0016.jpg 5 126 | webcam/calculator/frame_0005.jpg 5 127 | webcam/calculator/frame_0002.jpg 5 128 | webcam/calculator/frame_0026.jpg 5 129 | webcam/calculator/frame_0012.jpg 5 130 | webcam/calculator/frame_0001.jpg 5 131 | webcam/calculator/frame_0008.jpg 5 132 | webcam/calculator/frame_0009.jpg 5 133 | webcam/calculator/frame_0021.jpg 5 134 | webcam/calculator/frame_0027.jpg 5 135 | webcam/calculator/frame_0028.jpg 5 136 | webcam/calculator/frame_0031.jpg 5 137 | webcam/calculator/frame_0019.jpg 5 138 | webcam/desk_chair/frame_0008.jpg 6 139 | webcam/desk_chair/frame_0033.jpg 6 140 | webcam/desk_chair/frame_0007.jpg 6 141 | webcam/desk_chair/frame_0036.jpg 6 142 | webcam/desk_chair/frame_0013.jpg 6 143 | webcam/desk_chair/frame_0023.jpg 6 144 | webcam/desk_chair/frame_0017.jpg 6 145 | webcam/desk_chair/frame_0028.jpg 6 146 | webcam/desk_chair/frame_0011.jpg 6 147 | webcam/desk_chair/frame_0021.jpg 6 148 | webcam/desk_chair/frame_0005.jpg 6 149 | webcam/desk_chair/frame_0024.jpg 6 150 | webcam/desk_chair/frame_0004.jpg 6 151 | webcam/desk_chair/frame_0034.jpg 6 152 | webcam/desk_chair/frame_0038.jpg 6 153 | webcam/desk_chair/frame_0030.jpg 6 154 | webcam/desk_chair/frame_0003.jpg 6 155 | webcam/desk_chair/frame_0010.jpg 6 156 | webcam/desk_chair/frame_0001.jpg 6 157 | webcam/desk_chair/frame_0031.jpg 6 158 | webcam/desk_chair/frame_0022.jpg 6 159 | webcam/desk_chair/frame_0015.jpg 6 160 | webcam/desk_chair/frame_0029.jpg 6 161 | webcam/desk_chair/frame_0012.jpg 6 162 | webcam/desk_chair/frame_0016.jpg 6 163 | webcam/desk_chair/frame_0039.jpg 6 164 | webcam/desk_chair/frame_0002.jpg 6 165 | webcam/desk_chair/frame_0009.jpg 6 166 | webcam/desk_chair/frame_0037.jpg 6 167 | webcam/desk_chair/frame_0025.jpg 6 168 | webcam/desk_chair/frame_0014.jpg 6 169 | webcam/desk_chair/frame_0020.jpg 6 170 | webcam/desk_chair/frame_0027.jpg 6 171 | webcam/desk_chair/frame_0032.jpg 6 172 | webcam/desk_chair/frame_0035.jpg 6 173 | webcam/desk_chair/frame_0018.jpg 6 174 | webcam/desk_chair/frame_0006.jpg 6 175 | webcam/desk_chair/frame_0019.jpg 6 176 | webcam/desk_chair/frame_0040.jpg 6 177 | 
webcam/desk_chair/frame_0026.jpg 6 178 | webcam/desk_lamp/frame_0013.jpg 7 179 | webcam/desk_lamp/frame_0007.jpg 7 180 | webcam/desk_lamp/frame_0017.jpg 7 181 | webcam/desk_lamp/frame_0014.jpg 7 182 | webcam/desk_lamp/frame_0001.jpg 7 183 | webcam/desk_lamp/frame_0003.jpg 7 184 | webcam/desk_lamp/frame_0002.jpg 7 185 | webcam/desk_lamp/frame_0015.jpg 7 186 | webcam/desk_lamp/frame_0010.jpg 7 187 | webcam/desk_lamp/frame_0004.jpg 7 188 | webcam/desk_lamp/frame_0008.jpg 7 189 | webcam/desk_lamp/frame_0016.jpg 7 190 | webcam/desk_lamp/frame_0009.jpg 7 191 | webcam/desk_lamp/frame_0006.jpg 7 192 | webcam/desk_lamp/frame_0012.jpg 7 193 | webcam/desk_lamp/frame_0011.jpg 7 194 | webcam/desk_lamp/frame_0005.jpg 7 195 | webcam/desk_lamp/frame_0018.jpg 7 196 | webcam/desktop_computer/frame_0005.jpg 8 197 | webcam/desktop_computer/frame_0007.jpg 8 198 | webcam/desktop_computer/frame_0003.jpg 8 199 | webcam/desktop_computer/frame_0010.jpg 8 200 | webcam/desktop_computer/frame_0021.jpg 8 201 | webcam/desktop_computer/frame_0019.jpg 8 202 | webcam/desktop_computer/frame_0012.jpg 8 203 | webcam/desktop_computer/frame_0002.jpg 8 204 | webcam/desktop_computer/frame_0017.jpg 8 205 | webcam/desktop_computer/frame_0008.jpg 8 206 | webcam/desktop_computer/frame_0018.jpg 8 207 | webcam/desktop_computer/frame_0004.jpg 8 208 | webcam/desktop_computer/frame_0006.jpg 8 209 | webcam/desktop_computer/frame_0016.jpg 8 210 | webcam/desktop_computer/frame_0014.jpg 8 211 | webcam/desktop_computer/frame_0009.jpg 8 212 | webcam/desktop_computer/frame_0020.jpg 8 213 | webcam/desktop_computer/frame_0015.jpg 8 214 | webcam/desktop_computer/frame_0001.jpg 8 215 | webcam/desktop_computer/frame_0013.jpg 8 216 | webcam/desktop_computer/frame_0011.jpg 8 217 | webcam/file_cabinet/frame_0018.jpg 9 218 | webcam/file_cabinet/frame_0003.jpg 9 219 | webcam/file_cabinet/frame_0005.jpg 9 220 | webcam/file_cabinet/frame_0001.jpg 9 221 | webcam/file_cabinet/frame_0010.jpg 9 222 | webcam/file_cabinet/frame_0014.jpg 9 223 | webcam/file_cabinet/frame_0008.jpg 9 224 | webcam/file_cabinet/frame_0019.jpg 9 225 | webcam/file_cabinet/frame_0007.jpg 9 226 | webcam/file_cabinet/frame_0009.jpg 9 227 | webcam/file_cabinet/frame_0017.jpg 9 228 | webcam/file_cabinet/frame_0016.jpg 9 229 | webcam/file_cabinet/frame_0012.jpg 9 230 | webcam/file_cabinet/frame_0013.jpg 9 231 | webcam/file_cabinet/frame_0015.jpg 9 232 | webcam/file_cabinet/frame_0006.jpg 9 233 | webcam/file_cabinet/frame_0004.jpg 9 234 | webcam/file_cabinet/frame_0011.jpg 9 235 | webcam/file_cabinet/frame_0002.jpg 9 236 | webcam/headphones/frame_0002.jpg 10 237 | webcam/headphones/frame_0019.jpg 10 238 | webcam/headphones/frame_0021.jpg 10 239 | webcam/headphones/frame_0026.jpg 10 240 | webcam/headphones/frame_0013.jpg 10 241 | webcam/headphones/frame_0007.jpg 10 242 | webcam/headphones/frame_0011.jpg 10 243 | webcam/headphones/frame_0023.jpg 10 244 | webcam/headphones/frame_0020.jpg 10 245 | webcam/headphones/frame_0010.jpg 10 246 | webcam/headphones/frame_0018.jpg 10 247 | webcam/headphones/frame_0008.jpg 10 248 | webcam/headphones/frame_0009.jpg 10 249 | webcam/headphones/frame_0022.jpg 10 250 | webcam/headphones/frame_0005.jpg 10 251 | webcam/headphones/frame_0017.jpg 10 252 | webcam/headphones/frame_0003.jpg 10 253 | webcam/headphones/frame_0012.jpg 10 254 | webcam/headphones/frame_0001.jpg 10 255 | webcam/headphones/frame_0025.jpg 10 256 | webcam/headphones/frame_0027.jpg 10 257 | webcam/headphones/frame_0014.jpg 10 258 | webcam/headphones/frame_0006.jpg 10 259 | 
webcam/headphones/frame_0015.jpg 10 260 | webcam/headphones/frame_0016.jpg 10 261 | webcam/headphones/frame_0024.jpg 10 262 | webcam/headphones/frame_0004.jpg 10 263 | webcam/keyboard/frame_0027.jpg 11 264 | webcam/keyboard/frame_0003.jpg 11 265 | webcam/keyboard/frame_0022.jpg 11 266 | webcam/keyboard/frame_0013.jpg 11 267 | webcam/keyboard/frame_0007.jpg 11 268 | webcam/keyboard/frame_0026.jpg 11 269 | webcam/keyboard/frame_0004.jpg 11 270 | webcam/keyboard/frame_0014.jpg 11 271 | webcam/keyboard/frame_0018.jpg 11 272 | webcam/keyboard/frame_0015.jpg 11 273 | webcam/keyboard/frame_0017.jpg 11 274 | webcam/keyboard/frame_0020.jpg 11 275 | webcam/keyboard/frame_0016.jpg 11 276 | webcam/keyboard/frame_0005.jpg 11 277 | webcam/keyboard/frame_0001.jpg 11 278 | webcam/keyboard/frame_0019.jpg 11 279 | webcam/keyboard/frame_0009.jpg 11 280 | webcam/keyboard/frame_0024.jpg 11 281 | webcam/keyboard/frame_0021.jpg 11 282 | webcam/keyboard/frame_0008.jpg 11 283 | webcam/keyboard/frame_0010.jpg 11 284 | webcam/keyboard/frame_0025.jpg 11 285 | webcam/keyboard/frame_0012.jpg 11 286 | webcam/keyboard/frame_0023.jpg 11 287 | webcam/keyboard/frame_0006.jpg 11 288 | webcam/keyboard/frame_0002.jpg 11 289 | webcam/keyboard/frame_0011.jpg 11 290 | webcam/laptop/frame_0014.jpg 12 291 | webcam/laptop/frame_0007.jpg 12 292 | webcam/laptop/frame_0028.jpg 12 293 | webcam/laptop/frame_0024.jpg 12 294 | webcam/laptop/frame_0004.jpg 12 295 | webcam/laptop/frame_0015.jpg 12 296 | webcam/laptop/frame_0020.jpg 12 297 | webcam/laptop/frame_0019.jpg 12 298 | webcam/laptop/frame_0005.jpg 12 299 | webcam/laptop/frame_0002.jpg 12 300 | webcam/laptop/frame_0010.jpg 12 301 | webcam/laptop/frame_0003.jpg 12 302 | webcam/laptop/frame_0026.jpg 12 303 | webcam/laptop/frame_0030.jpg 12 304 | webcam/laptop/frame_0017.jpg 12 305 | webcam/laptop/frame_0011.jpg 12 306 | webcam/laptop/frame_0006.jpg 12 307 | webcam/laptop/frame_0018.jpg 12 308 | webcam/laptop/frame_0027.jpg 12 309 | webcam/laptop/frame_0009.jpg 12 310 | webcam/laptop/frame_0016.jpg 12 311 | webcam/laptop/frame_0022.jpg 12 312 | webcam/laptop/frame_0013.jpg 12 313 | webcam/laptop/frame_0023.jpg 12 314 | webcam/laptop/frame_0012.jpg 12 315 | webcam/laptop/frame_0021.jpg 12 316 | webcam/laptop/frame_0025.jpg 12 317 | webcam/laptop/frame_0008.jpg 12 318 | webcam/laptop/frame_0029.jpg 12 319 | webcam/laptop/frame_0001.jpg 12 320 | webcam/letter_tray/frame_0011.jpg 13 321 | webcam/letter_tray/frame_0006.jpg 13 322 | webcam/letter_tray/frame_0010.jpg 13 323 | webcam/letter_tray/frame_0005.jpg 13 324 | webcam/letter_tray/frame_0003.jpg 13 325 | webcam/letter_tray/frame_0012.jpg 13 326 | webcam/letter_tray/frame_0001.jpg 13 327 | webcam/letter_tray/frame_0004.jpg 13 328 | webcam/letter_tray/frame_0016.jpg 13 329 | webcam/letter_tray/frame_0015.jpg 13 330 | webcam/letter_tray/frame_0014.jpg 13 331 | webcam/letter_tray/frame_0008.jpg 13 332 | webcam/letter_tray/frame_0013.jpg 13 333 | webcam/letter_tray/frame_0007.jpg 13 334 | webcam/letter_tray/frame_0002.jpg 13 335 | webcam/letter_tray/frame_0019.jpg 13 336 | webcam/letter_tray/frame_0018.jpg 13 337 | webcam/letter_tray/frame_0017.jpg 13 338 | webcam/letter_tray/frame_0009.jpg 13 339 | webcam/mobile_phone/frame_0026.jpg 14 340 | webcam/mobile_phone/frame_0024.jpg 14 341 | webcam/mobile_phone/frame_0005.jpg 14 342 | webcam/mobile_phone/frame_0022.jpg 14 343 | webcam/mobile_phone/frame_0019.jpg 14 344 | webcam/mobile_phone/frame_0004.jpg 14 345 | webcam/mobile_phone/frame_0017.jpg 14 346 | webcam/mobile_phone/frame_0014.jpg 14 347 
| webcam/mobile_phone/frame_0011.jpg 14 348 | webcam/mobile_phone/frame_0008.jpg 14 349 | webcam/mobile_phone/frame_0003.jpg 14 350 | webcam/mobile_phone/frame_0007.jpg 14 351 | webcam/mobile_phone/frame_0002.jpg 14 352 | webcam/mobile_phone/frame_0023.jpg 14 353 | webcam/mobile_phone/frame_0015.jpg 14 354 | webcam/mobile_phone/frame_0016.jpg 14 355 | webcam/mobile_phone/frame_0025.jpg 14 356 | webcam/mobile_phone/frame_0010.jpg 14 357 | webcam/mobile_phone/frame_0020.jpg 14 358 | webcam/mobile_phone/frame_0029.jpg 14 359 | webcam/mobile_phone/frame_0030.jpg 14 360 | webcam/mobile_phone/frame_0021.jpg 14 361 | webcam/mobile_phone/frame_0009.jpg 14 362 | webcam/mobile_phone/frame_0027.jpg 14 363 | webcam/mobile_phone/frame_0001.jpg 14 364 | webcam/mobile_phone/frame_0013.jpg 14 365 | webcam/mobile_phone/frame_0006.jpg 14 366 | webcam/mobile_phone/frame_0018.jpg 14 367 | webcam/mobile_phone/frame_0012.jpg 14 368 | webcam/mobile_phone/frame_0028.jpg 14 369 | webcam/monitor/frame_0008.jpg 15 370 | webcam/monitor/frame_0039.jpg 15 371 | webcam/monitor/frame_0030.jpg 15 372 | webcam/monitor/frame_0001.jpg 15 373 | webcam/monitor/frame_0029.jpg 15 374 | webcam/monitor/frame_0041.jpg 15 375 | webcam/monitor/frame_0042.jpg 15 376 | webcam/monitor/frame_0032.jpg 15 377 | webcam/monitor/frame_0016.jpg 15 378 | webcam/monitor/frame_0012.jpg 15 379 | webcam/monitor/frame_0013.jpg 15 380 | webcam/monitor/frame_0023.jpg 15 381 | webcam/monitor/frame_0028.jpg 15 382 | webcam/monitor/frame_0005.jpg 15 383 | webcam/monitor/frame_0033.jpg 15 384 | webcam/monitor/frame_0010.jpg 15 385 | webcam/monitor/frame_0031.jpg 15 386 | webcam/monitor/frame_0018.jpg 15 387 | webcam/monitor/frame_0024.jpg 15 388 | webcam/monitor/frame_0002.jpg 15 389 | webcam/monitor/frame_0011.jpg 15 390 | webcam/monitor/frame_0021.jpg 15 391 | webcam/monitor/frame_0027.jpg 15 392 | webcam/monitor/frame_0038.jpg 15 393 | webcam/monitor/frame_0004.jpg 15 394 | webcam/monitor/frame_0035.jpg 15 395 | webcam/monitor/frame_0019.jpg 15 396 | webcam/monitor/frame_0036.jpg 15 397 | webcam/monitor/frame_0022.jpg 15 398 | webcam/monitor/frame_0040.jpg 15 399 | webcam/monitor/frame_0006.jpg 15 400 | webcam/monitor/frame_0015.jpg 15 401 | webcam/monitor/frame_0026.jpg 15 402 | webcam/monitor/frame_0017.jpg 15 403 | webcam/monitor/frame_0003.jpg 15 404 | webcam/monitor/frame_0014.jpg 15 405 | webcam/monitor/frame_0009.jpg 15 406 | webcam/monitor/frame_0007.jpg 15 407 | webcam/monitor/frame_0043.jpg 15 408 | webcam/monitor/frame_0020.jpg 15 409 | webcam/monitor/frame_0025.jpg 15 410 | webcam/monitor/frame_0037.jpg 15 411 | webcam/monitor/frame_0034.jpg 15 412 | webcam/mouse/frame_0015.jpg 16 413 | webcam/mouse/frame_0024.jpg 16 414 | webcam/mouse/frame_0016.jpg 16 415 | webcam/mouse/frame_0020.jpg 16 416 | webcam/mouse/frame_0010.jpg 16 417 | webcam/mouse/frame_0011.jpg 16 418 | webcam/mouse/frame_0012.jpg 16 419 | webcam/mouse/frame_0027.jpg 16 420 | webcam/mouse/frame_0009.jpg 16 421 | webcam/mouse/frame_0004.jpg 16 422 | webcam/mouse/frame_0023.jpg 16 423 | webcam/mouse/frame_0026.jpg 16 424 | webcam/mouse/frame_0017.jpg 16 425 | webcam/mouse/frame_0005.jpg 16 426 | webcam/mouse/frame_0008.jpg 16 427 | webcam/mouse/frame_0006.jpg 16 428 | webcam/mouse/frame_0021.jpg 16 429 | webcam/mouse/frame_0013.jpg 16 430 | webcam/mouse/frame_0007.jpg 16 431 | webcam/mouse/frame_0019.jpg 16 432 | webcam/mouse/frame_0022.jpg 16 433 | webcam/mouse/frame_0028.jpg 16 434 | webcam/mouse/frame_0003.jpg 16 435 | webcam/mouse/frame_0002.jpg 16 436 | 
webcam/mouse/frame_0030.jpg 16 437 | webcam/mouse/frame_0014.jpg 16 438 | webcam/mouse/frame_0001.jpg 16 439 | webcam/mouse/frame_0029.jpg 16 440 | webcam/mouse/frame_0025.jpg 16 441 | webcam/mouse/frame_0018.jpg 16 442 | webcam/mug/frame_0027.jpg 17 443 | webcam/mug/frame_0016.jpg 17 444 | webcam/mug/frame_0024.jpg 17 445 | webcam/mug/frame_0006.jpg 17 446 | webcam/mug/frame_0015.jpg 17 447 | webcam/mug/frame_0017.jpg 17 448 | webcam/mug/frame_0025.jpg 17 449 | webcam/mug/frame_0005.jpg 17 450 | webcam/mug/frame_0014.jpg 17 451 | webcam/mug/frame_0019.jpg 17 452 | webcam/mug/frame_0003.jpg 17 453 | webcam/mug/frame_0008.jpg 17 454 | webcam/mug/frame_0010.jpg 17 455 | webcam/mug/frame_0012.jpg 17 456 | webcam/mug/frame_0009.jpg 17 457 | webcam/mug/frame_0020.jpg 17 458 | webcam/mug/frame_0013.jpg 17 459 | webcam/mug/frame_0002.jpg 17 460 | webcam/mug/frame_0001.jpg 17 461 | webcam/mug/frame_0026.jpg 17 462 | webcam/mug/frame_0007.jpg 17 463 | webcam/mug/frame_0023.jpg 17 464 | webcam/mug/frame_0022.jpg 17 465 | webcam/mug/frame_0011.jpg 17 466 | webcam/mug/frame_0004.jpg 17 467 | webcam/mug/frame_0021.jpg 17 468 | webcam/mug/frame_0018.jpg 17 469 | webcam/paper_notebook/frame_0020.jpg 18 470 | webcam/paper_notebook/frame_0022.jpg 18 471 | webcam/paper_notebook/frame_0021.jpg 18 472 | webcam/paper_notebook/frame_0017.jpg 18 473 | webcam/paper_notebook/frame_0025.jpg 18 474 | webcam/paper_notebook/frame_0016.jpg 18 475 | webcam/paper_notebook/frame_0014.jpg 18 476 | webcam/paper_notebook/frame_0026.jpg 18 477 | webcam/paper_notebook/frame_0012.jpg 18 478 | webcam/paper_notebook/frame_0003.jpg 18 479 | webcam/paper_notebook/frame_0005.jpg 18 480 | webcam/paper_notebook/frame_0019.jpg 18 481 | webcam/paper_notebook/frame_0011.jpg 18 482 | webcam/paper_notebook/frame_0010.jpg 18 483 | webcam/paper_notebook/frame_0024.jpg 18 484 | webcam/paper_notebook/frame_0009.jpg 18 485 | webcam/paper_notebook/frame_0013.jpg 18 486 | webcam/paper_notebook/frame_0027.jpg 18 487 | webcam/paper_notebook/frame_0007.jpg 18 488 | webcam/paper_notebook/frame_0006.jpg 18 489 | webcam/paper_notebook/frame_0015.jpg 18 490 | webcam/paper_notebook/frame_0004.jpg 18 491 | webcam/paper_notebook/frame_0028.jpg 18 492 | webcam/paper_notebook/frame_0008.jpg 18 493 | webcam/paper_notebook/frame_0002.jpg 18 494 | webcam/paper_notebook/frame_0001.jpg 18 495 | webcam/paper_notebook/frame_0018.jpg 18 496 | webcam/paper_notebook/frame_0023.jpg 18 497 | webcam/pen/frame_0032.jpg 19 498 | webcam/pen/frame_0012.jpg 19 499 | webcam/pen/frame_0028.jpg 19 500 | webcam/pen/frame_0002.jpg 19 501 | webcam/pen/frame_0015.jpg 19 502 | webcam/pen/frame_0011.jpg 19 503 | webcam/pen/frame_0023.jpg 19 504 | webcam/pen/frame_0008.jpg 19 505 | webcam/pen/frame_0027.jpg 19 506 | webcam/pen/frame_0014.jpg 19 507 | webcam/pen/frame_0030.jpg 19 508 | webcam/pen/frame_0003.jpg 19 509 | webcam/pen/frame_0013.jpg 19 510 | webcam/pen/frame_0005.jpg 19 511 | webcam/pen/frame_0024.jpg 19 512 | webcam/pen/frame_0018.jpg 19 513 | webcam/pen/frame_0009.jpg 19 514 | webcam/pen/frame_0025.jpg 19 515 | webcam/pen/frame_0010.jpg 19 516 | webcam/pen/frame_0029.jpg 19 517 | webcam/pen/frame_0026.jpg 19 518 | webcam/pen/frame_0031.jpg 19 519 | webcam/pen/frame_0004.jpg 19 520 | webcam/pen/frame_0017.jpg 19 521 | webcam/pen/frame_0021.jpg 19 522 | webcam/pen/frame_0020.jpg 19 523 | webcam/pen/frame_0001.jpg 19 524 | webcam/pen/frame_0022.jpg 19 525 | webcam/pen/frame_0006.jpg 19 526 | webcam/pen/frame_0007.jpg 19 527 | webcam/pen/frame_0016.jpg 19 528 | 
webcam/pen/frame_0019.jpg 19 529 | webcam/phone/frame_0006.jpg 20 530 | webcam/phone/frame_0015.jpg 20 531 | webcam/phone/frame_0014.jpg 20 532 | webcam/phone/frame_0005.jpg 20 533 | webcam/phone/frame_0013.jpg 20 534 | webcam/phone/frame_0010.jpg 20 535 | webcam/phone/frame_0007.jpg 20 536 | webcam/phone/frame_0008.jpg 20 537 | webcam/phone/frame_0016.jpg 20 538 | webcam/phone/frame_0004.jpg 20 539 | webcam/phone/frame_0012.jpg 20 540 | webcam/phone/frame_0003.jpg 20 541 | webcam/phone/frame_0002.jpg 20 542 | webcam/phone/frame_0001.jpg 20 543 | webcam/phone/frame_0011.jpg 20 544 | webcam/phone/frame_0009.jpg 20 545 | webcam/printer/frame_0018.jpg 21 546 | webcam/printer/frame_0012.jpg 21 547 | webcam/printer/frame_0011.jpg 21 548 | webcam/printer/frame_0006.jpg 21 549 | webcam/printer/frame_0015.jpg 21 550 | webcam/printer/frame_0007.jpg 21 551 | webcam/printer/frame_0016.jpg 21 552 | webcam/printer/frame_0003.jpg 21 553 | webcam/printer/frame_0010.jpg 21 554 | webcam/printer/frame_0002.jpg 21 555 | webcam/printer/frame_0001.jpg 21 556 | webcam/printer/frame_0020.jpg 21 557 | webcam/printer/frame_0005.jpg 21 558 | webcam/printer/frame_0019.jpg 21 559 | webcam/printer/frame_0013.jpg 21 560 | webcam/printer/frame_0017.jpg 21 561 | webcam/printer/frame_0004.jpg 21 562 | webcam/printer/frame_0009.jpg 21 563 | webcam/printer/frame_0014.jpg 21 564 | webcam/printer/frame_0008.jpg 21 565 | webcam/projector/frame_0026.jpg 22 566 | webcam/projector/frame_0008.jpg 22 567 | webcam/projector/frame_0027.jpg 22 568 | webcam/projector/frame_0024.jpg 22 569 | webcam/projector/frame_0005.jpg 22 570 | webcam/projector/frame_0002.jpg 22 571 | webcam/projector/frame_0001.jpg 22 572 | webcam/projector/frame_0007.jpg 22 573 | webcam/projector/frame_0018.jpg 22 574 | webcam/projector/frame_0013.jpg 22 575 | webcam/projector/frame_0006.jpg 22 576 | webcam/projector/frame_0003.jpg 22 577 | webcam/projector/frame_0011.jpg 22 578 | webcam/projector/frame_0012.jpg 22 579 | webcam/projector/frame_0023.jpg 22 580 | webcam/projector/frame_0017.jpg 22 581 | webcam/projector/frame_0015.jpg 22 582 | webcam/projector/frame_0014.jpg 22 583 | webcam/projector/frame_0019.jpg 22 584 | webcam/projector/frame_0020.jpg 22 585 | webcam/projector/frame_0030.jpg 22 586 | webcam/projector/frame_0021.jpg 22 587 | webcam/projector/frame_0028.jpg 22 588 | webcam/projector/frame_0010.jpg 22 589 | webcam/projector/frame_0016.jpg 22 590 | webcam/projector/frame_0009.jpg 22 591 | webcam/projector/frame_0025.jpg 22 592 | webcam/projector/frame_0004.jpg 22 593 | webcam/projector/frame_0029.jpg 22 594 | webcam/projector/frame_0022.jpg 22 595 | webcam/punchers/frame_0006.jpg 23 596 | webcam/punchers/frame_0012.jpg 23 597 | webcam/punchers/frame_0002.jpg 23 598 | webcam/punchers/frame_0004.jpg 23 599 | webcam/punchers/frame_0010.jpg 23 600 | webcam/punchers/frame_0020.jpg 23 601 | webcam/punchers/frame_0026.jpg 23 602 | webcam/punchers/frame_0003.jpg 23 603 | webcam/punchers/frame_0018.jpg 23 604 | webcam/punchers/frame_0001.jpg 23 605 | webcam/punchers/frame_0011.jpg 23 606 | webcam/punchers/frame_0019.jpg 23 607 | webcam/punchers/frame_0025.jpg 23 608 | webcam/punchers/frame_0017.jpg 23 609 | webcam/punchers/frame_0008.jpg 23 610 | webcam/punchers/frame_0016.jpg 23 611 | webcam/punchers/frame_0023.jpg 23 612 | webcam/punchers/frame_0007.jpg 23 613 | webcam/punchers/frame_0013.jpg 23 614 | webcam/punchers/frame_0021.jpg 23 615 | webcam/punchers/frame_0027.jpg 23 616 | webcam/punchers/frame_0015.jpg 23 617 | webcam/punchers/frame_0022.jpg 23 618 
| webcam/punchers/frame_0014.jpg 23 619 | webcam/punchers/frame_0005.jpg 23 620 | webcam/punchers/frame_0009.jpg 23 621 | webcam/punchers/frame_0024.jpg 23 622 | webcam/ring_binder/frame_0022.jpg 24 623 | webcam/ring_binder/frame_0025.jpg 24 624 | webcam/ring_binder/frame_0034.jpg 24 625 | webcam/ring_binder/frame_0021.jpg 24 626 | webcam/ring_binder/frame_0009.jpg 24 627 | webcam/ring_binder/frame_0003.jpg 24 628 | webcam/ring_binder/frame_0017.jpg 24 629 | webcam/ring_binder/frame_0033.jpg 24 630 | webcam/ring_binder/frame_0015.jpg 24 631 | webcam/ring_binder/frame_0035.jpg 24 632 | webcam/ring_binder/frame_0031.jpg 24 633 | webcam/ring_binder/frame_0004.jpg 24 634 | webcam/ring_binder/frame_0030.jpg 24 635 | webcam/ring_binder/frame_0020.jpg 24 636 | webcam/ring_binder/frame_0016.jpg 24 637 | webcam/ring_binder/frame_0013.jpg 24 638 | webcam/ring_binder/frame_0037.jpg 24 639 | webcam/ring_binder/frame_0006.jpg 24 640 | webcam/ring_binder/frame_0038.jpg 24 641 | webcam/ring_binder/frame_0008.jpg 24 642 | webcam/ring_binder/frame_0018.jpg 24 643 | webcam/ring_binder/frame_0011.jpg 24 644 | webcam/ring_binder/frame_0014.jpg 24 645 | webcam/ring_binder/frame_0007.jpg 24 646 | webcam/ring_binder/frame_0039.jpg 24 647 | webcam/ring_binder/frame_0023.jpg 24 648 | webcam/ring_binder/frame_0010.jpg 24 649 | webcam/ring_binder/frame_0029.jpg 24 650 | webcam/ring_binder/frame_0028.jpg 24 651 | webcam/ring_binder/frame_0005.jpg 24 652 | webcam/ring_binder/frame_0019.jpg 24 653 | webcam/ring_binder/frame_0026.jpg 24 654 | webcam/ring_binder/frame_0001.jpg 24 655 | webcam/ring_binder/frame_0024.jpg 24 656 | webcam/ring_binder/frame_0036.jpg 24 657 | webcam/ring_binder/frame_0012.jpg 24 658 | webcam/ring_binder/frame_0002.jpg 24 659 | webcam/ring_binder/frame_0032.jpg 24 660 | webcam/ring_binder/frame_0040.jpg 24 661 | webcam/ring_binder/frame_0027.jpg 24 662 | webcam/ruler/frame_0002.jpg 25 663 | webcam/ruler/frame_0011.jpg 25 664 | webcam/ruler/frame_0003.jpg 25 665 | webcam/ruler/frame_0008.jpg 25 666 | webcam/ruler/frame_0001.jpg 25 667 | webcam/ruler/frame_0009.jpg 25 668 | webcam/ruler/frame_0007.jpg 25 669 | webcam/ruler/frame_0010.jpg 25 670 | webcam/ruler/frame_0006.jpg 25 671 | webcam/ruler/frame_0004.jpg 25 672 | webcam/ruler/frame_0005.jpg 25 673 | webcam/scissors/frame_0008.jpg 26 674 | webcam/scissors/frame_0004.jpg 26 675 | webcam/scissors/frame_0016.jpg 26 676 | webcam/scissors/frame_0006.jpg 26 677 | webcam/scissors/frame_0019.jpg 26 678 | webcam/scissors/frame_0021.jpg 26 679 | webcam/scissors/frame_0003.jpg 26 680 | webcam/scissors/frame_0011.jpg 26 681 | webcam/scissors/frame_0024.jpg 26 682 | webcam/scissors/frame_0012.jpg 26 683 | webcam/scissors/frame_0005.jpg 26 684 | webcam/scissors/frame_0007.jpg 26 685 | webcam/scissors/frame_0025.jpg 26 686 | webcam/scissors/frame_0001.jpg 26 687 | webcam/scissors/frame_0002.jpg 26 688 | webcam/scissors/frame_0009.jpg 26 689 | webcam/scissors/frame_0023.jpg 26 690 | webcam/scissors/frame_0020.jpg 26 691 | webcam/scissors/frame_0013.jpg 26 692 | webcam/scissors/frame_0018.jpg 26 693 | webcam/scissors/frame_0017.jpg 26 694 | webcam/scissors/frame_0022.jpg 26 695 | webcam/scissors/frame_0014.jpg 26 696 | webcam/scissors/frame_0015.jpg 26 697 | webcam/scissors/frame_0010.jpg 26 698 | webcam/speaker/frame_0026.jpg 27 699 | webcam/speaker/frame_0010.jpg 27 700 | webcam/speaker/frame_0022.jpg 27 701 | webcam/speaker/frame_0028.jpg 27 702 | webcam/speaker/frame_0008.jpg 27 703 | webcam/speaker/frame_0019.jpg 27 704 | webcam/speaker/frame_0004.jpg 
27 705 | webcam/speaker/frame_0014.jpg 27 706 | webcam/speaker/frame_0011.jpg 27 707 | webcam/speaker/frame_0024.jpg 27 708 | webcam/speaker/frame_0012.jpg 27 709 | webcam/speaker/frame_0029.jpg 27 710 | webcam/speaker/frame_0020.jpg 27 711 | webcam/speaker/frame_0009.jpg 27 712 | webcam/speaker/frame_0005.jpg 27 713 | webcam/speaker/frame_0018.jpg 27 714 | webcam/speaker/frame_0023.jpg 27 715 | webcam/speaker/frame_0006.jpg 27 716 | webcam/speaker/frame_0013.jpg 27 717 | webcam/speaker/frame_0030.jpg 27 718 | webcam/speaker/frame_0007.jpg 27 719 | webcam/speaker/frame_0021.jpg 27 720 | webcam/speaker/frame_0025.jpg 27 721 | webcam/speaker/frame_0001.jpg 27 722 | webcam/speaker/frame_0015.jpg 27 723 | webcam/speaker/frame_0016.jpg 27 724 | webcam/speaker/frame_0017.jpg 27 725 | webcam/speaker/frame_0003.jpg 27 726 | webcam/speaker/frame_0002.jpg 27 727 | webcam/speaker/frame_0027.jpg 27 728 | webcam/stapler/frame_0022.jpg 28 729 | webcam/stapler/frame_0001.jpg 28 730 | webcam/stapler/frame_0002.jpg 28 731 | webcam/stapler/frame_0016.jpg 28 732 | webcam/stapler/frame_0010.jpg 28 733 | webcam/stapler/frame_0013.jpg 28 734 | webcam/stapler/frame_0005.jpg 28 735 | webcam/stapler/frame_0009.jpg 28 736 | webcam/stapler/frame_0018.jpg 28 737 | webcam/stapler/frame_0006.jpg 28 738 | webcam/stapler/frame_0004.jpg 28 739 | webcam/stapler/frame_0012.jpg 28 740 | webcam/stapler/frame_0024.jpg 28 741 | webcam/stapler/frame_0003.jpg 28 742 | webcam/stapler/frame_0020.jpg 28 743 | webcam/stapler/frame_0019.jpg 28 744 | webcam/stapler/frame_0023.jpg 28 745 | webcam/stapler/frame_0014.jpg 28 746 | webcam/stapler/frame_0017.jpg 28 747 | webcam/stapler/frame_0011.jpg 28 748 | webcam/stapler/frame_0021.jpg 28 749 | webcam/stapler/frame_0015.jpg 28 750 | webcam/stapler/frame_0007.jpg 28 751 | webcam/stapler/frame_0008.jpg 28 752 | webcam/tape_dispenser/frame_0018.jpg 29 753 | webcam/tape_dispenser/frame_0005.jpg 29 754 | webcam/tape_dispenser/frame_0010.jpg 29 755 | webcam/tape_dispenser/frame_0001.jpg 29 756 | webcam/tape_dispenser/frame_0009.jpg 29 757 | webcam/tape_dispenser/frame_0004.jpg 29 758 | webcam/tape_dispenser/frame_0022.jpg 29 759 | webcam/tape_dispenser/frame_0006.jpg 29 760 | webcam/tape_dispenser/frame_0007.jpg 29 761 | webcam/tape_dispenser/frame_0015.jpg 29 762 | webcam/tape_dispenser/frame_0013.jpg 29 763 | webcam/tape_dispenser/frame_0020.jpg 29 764 | webcam/tape_dispenser/frame_0003.jpg 29 765 | webcam/tape_dispenser/frame_0008.jpg 29 766 | webcam/tape_dispenser/frame_0021.jpg 29 767 | webcam/tape_dispenser/frame_0019.jpg 29 768 | webcam/tape_dispenser/frame_0014.jpg 29 769 | webcam/tape_dispenser/frame_0002.jpg 29 770 | webcam/tape_dispenser/frame_0011.jpg 29 771 | webcam/tape_dispenser/frame_0016.jpg 29 772 | webcam/tape_dispenser/frame_0012.jpg 29 773 | webcam/tape_dispenser/frame_0023.jpg 29 774 | webcam/tape_dispenser/frame_0017.jpg 29 775 | webcam/trash_can/frame_0015.jpg 30 776 | webcam/trash_can/frame_0017.jpg 30 777 | webcam/trash_can/frame_0018.jpg 30 778 | webcam/trash_can/frame_0006.jpg 30 779 | webcam/trash_can/frame_0007.jpg 30 780 | webcam/trash_can/frame_0002.jpg 30 781 | webcam/trash_can/frame_0010.jpg 30 782 | webcam/trash_can/frame_0005.jpg 30 783 | webcam/trash_can/frame_0009.jpg 30 784 | webcam/trash_can/frame_0014.jpg 30 785 | webcam/trash_can/frame_0013.jpg 30 786 | webcam/trash_can/frame_0004.jpg 30 787 | webcam/trash_can/frame_0003.jpg 30 788 | webcam/trash_can/frame_0011.jpg 30 789 | webcam/trash_can/frame_0021.jpg 30 790 | webcam/trash_can/frame_0016.jpg 30 791 
| webcam/trash_can/frame_0001.jpg 30 792 | webcam/trash_can/frame_0008.jpg 30 793 | webcam/trash_can/frame_0020.jpg 30 794 | webcam/trash_can/frame_0012.jpg 30 795 | webcam/trash_can/frame_0019.jpg 30 796 | -------------------------------------------------------------------------------- /data/data_txt/Office31/WebcamOpenSet_known.txt: -------------------------------------------------------------------------------- 1 | webcam/backpack/frame_0019.jpg 0 2 | webcam/backpack/frame_0004.jpg 0 3 | webcam/backpack/frame_0007.jpg 0 4 | webcam/backpack/frame_0024.jpg 0 5 | webcam/backpack/frame_0023.jpg 0 6 | webcam/backpack/frame_0027.jpg 0 7 | webcam/backpack/frame_0003.jpg 0 8 | webcam/backpack/frame_0022.jpg 0 9 | webcam/backpack/frame_0005.jpg 0 10 | webcam/backpack/frame_0017.jpg 0 11 | webcam/backpack/frame_0029.jpg 0 12 | webcam/backpack/frame_0016.jpg 0 13 | webcam/backpack/frame_0021.jpg 0 14 | webcam/backpack/frame_0012.jpg 0 15 | webcam/backpack/frame_0018.jpg 0 16 | webcam/backpack/frame_0028.jpg 0 17 | webcam/backpack/frame_0014.jpg 0 18 | webcam/backpack/frame_0026.jpg 0 19 | webcam/backpack/frame_0025.jpg 0 20 | webcam/backpack/frame_0020.jpg 0 21 | webcam/backpack/frame_0008.jpg 0 22 | webcam/backpack/frame_0015.jpg 0 23 | webcam/backpack/frame_0010.jpg 0 24 | webcam/backpack/frame_0009.jpg 0 25 | webcam/backpack/frame_0001.jpg 0 26 | webcam/backpack/frame_0011.jpg 0 27 | webcam/backpack/frame_0002.jpg 0 28 | webcam/backpack/frame_0006.jpg 0 29 | webcam/backpack/frame_0013.jpg 0 30 | webcam/bike/frame_0007.jpg 1 31 | webcam/bike/frame_0016.jpg 1 32 | webcam/bike/frame_0006.jpg 1 33 | webcam/bike/frame_0002.jpg 1 34 | webcam/bike/frame_0012.jpg 1 35 | webcam/bike/frame_0019.jpg 1 36 | webcam/bike/frame_0020.jpg 1 37 | webcam/bike/frame_0001.jpg 1 38 | webcam/bike/frame_0014.jpg 1 39 | webcam/bike/frame_0015.jpg 1 40 | webcam/bike/frame_0011.jpg 1 41 | webcam/bike/frame_0004.jpg 1 42 | webcam/bike/frame_0010.jpg 1 43 | webcam/bike/frame_0018.jpg 1 44 | webcam/bike/frame_0009.jpg 1 45 | webcam/bike/frame_0005.jpg 1 46 | webcam/bike/frame_0021.jpg 1 47 | webcam/bike/frame_0017.jpg 1 48 | webcam/bike/frame_0013.jpg 1 49 | webcam/bike/frame_0008.jpg 1 50 | webcam/bike/frame_0003.jpg 1 51 | webcam/bike_helmet/frame_0012.jpg 2 52 | webcam/bike_helmet/frame_0013.jpg 2 53 | webcam/bike_helmet/frame_0019.jpg 2 54 | webcam/bike_helmet/frame_0006.jpg 2 55 | webcam/bike_helmet/frame_0003.jpg 2 56 | webcam/bike_helmet/frame_0022.jpg 2 57 | webcam/bike_helmet/frame_0008.jpg 2 58 | webcam/bike_helmet/frame_0015.jpg 2 59 | webcam/bike_helmet/frame_0026.jpg 2 60 | webcam/bike_helmet/frame_0024.jpg 2 61 | webcam/bike_helmet/frame_0023.jpg 2 62 | webcam/bike_helmet/frame_0025.jpg 2 63 | webcam/bike_helmet/frame_0001.jpg 2 64 | webcam/bike_helmet/frame_0027.jpg 2 65 | webcam/bike_helmet/frame_0009.jpg 2 66 | webcam/bike_helmet/frame_0016.jpg 2 67 | webcam/bike_helmet/frame_0010.jpg 2 68 | webcam/bike_helmet/frame_0014.jpg 2 69 | webcam/bike_helmet/frame_0017.jpg 2 70 | webcam/bike_helmet/frame_0018.jpg 2 71 | webcam/bike_helmet/frame_0002.jpg 2 72 | webcam/bike_helmet/frame_0011.jpg 2 73 | webcam/bike_helmet/frame_0007.jpg 2 74 | webcam/bike_helmet/frame_0005.jpg 2 75 | webcam/bike_helmet/frame_0020.jpg 2 76 | webcam/bike_helmet/frame_0028.jpg 2 77 | webcam/bike_helmet/frame_0021.jpg 2 78 | webcam/bike_helmet/frame_0004.jpg 2 79 | webcam/bookcase/frame_0010.jpg 3 80 | webcam/bookcase/frame_0003.jpg 3 81 | webcam/bookcase/frame_0007.jpg 3 82 | webcam/bookcase/frame_0008.jpg 3 83 | 
webcam/bookcase/frame_0011.jpg 3 84 | webcam/bookcase/frame_0004.jpg 3 85 | webcam/bookcase/frame_0012.jpg 3 86 | webcam/bookcase/frame_0009.jpg 3 87 | webcam/bookcase/frame_0006.jpg 3 88 | webcam/bookcase/frame_0002.jpg 3 89 | webcam/bookcase/frame_0001.jpg 3 90 | webcam/bookcase/frame_0005.jpg 3 91 | webcam/bottle/frame_0012.jpg 4 92 | webcam/bottle/frame_0002.jpg 4 93 | webcam/bottle/frame_0006.jpg 4 94 | webcam/bottle/frame_0008.jpg 4 95 | webcam/bottle/frame_0013.jpg 4 96 | webcam/bottle/frame_0009.jpg 4 97 | webcam/bottle/frame_0010.jpg 4 98 | webcam/bottle/frame_0016.jpg 4 99 | webcam/bottle/frame_0003.jpg 4 100 | webcam/bottle/frame_0004.jpg 4 101 | webcam/bottle/frame_0007.jpg 4 102 | webcam/bottle/frame_0014.jpg 4 103 | webcam/bottle/frame_0001.jpg 4 104 | webcam/bottle/frame_0015.jpg 4 105 | webcam/bottle/frame_0005.jpg 4 106 | webcam/bottle/frame_0011.jpg 4 107 | webcam/calculator/frame_0022.jpg 5 108 | webcam/calculator/frame_0017.jpg 5 109 | webcam/calculator/frame_0020.jpg 5 110 | webcam/calculator/frame_0029.jpg 5 111 | webcam/calculator/frame_0025.jpg 5 112 | webcam/calculator/frame_0024.jpg 5 113 | webcam/calculator/frame_0023.jpg 5 114 | webcam/calculator/frame_0013.jpg 5 115 | webcam/calculator/frame_0011.jpg 5 116 | webcam/calculator/frame_0007.jpg 5 117 | webcam/calculator/frame_0030.jpg 5 118 | webcam/calculator/frame_0015.jpg 5 119 | webcam/calculator/frame_0014.jpg 5 120 | webcam/calculator/frame_0003.jpg 5 121 | webcam/calculator/frame_0006.jpg 5 122 | webcam/calculator/frame_0018.jpg 5 123 | webcam/calculator/frame_0004.jpg 5 124 | webcam/calculator/frame_0010.jpg 5 125 | webcam/calculator/frame_0016.jpg 5 126 | webcam/calculator/frame_0005.jpg 5 127 | webcam/calculator/frame_0002.jpg 5 128 | webcam/calculator/frame_0026.jpg 5 129 | webcam/calculator/frame_0012.jpg 5 130 | webcam/calculator/frame_0001.jpg 5 131 | webcam/calculator/frame_0008.jpg 5 132 | webcam/calculator/frame_0009.jpg 5 133 | webcam/calculator/frame_0021.jpg 5 134 | webcam/calculator/frame_0027.jpg 5 135 | webcam/calculator/frame_0028.jpg 5 136 | webcam/calculator/frame_0031.jpg 5 137 | webcam/calculator/frame_0019.jpg 5 138 | webcam/desk_chair/frame_0008.jpg 6 139 | webcam/desk_chair/frame_0033.jpg 6 140 | webcam/desk_chair/frame_0007.jpg 6 141 | webcam/desk_chair/frame_0036.jpg 6 142 | webcam/desk_chair/frame_0013.jpg 6 143 | webcam/desk_chair/frame_0023.jpg 6 144 | webcam/desk_chair/frame_0017.jpg 6 145 | webcam/desk_chair/frame_0028.jpg 6 146 | webcam/desk_chair/frame_0011.jpg 6 147 | webcam/desk_chair/frame_0021.jpg 6 148 | webcam/desk_chair/frame_0005.jpg 6 149 | webcam/desk_chair/frame_0024.jpg 6 150 | webcam/desk_chair/frame_0004.jpg 6 151 | webcam/desk_chair/frame_0034.jpg 6 152 | webcam/desk_chair/frame_0038.jpg 6 153 | webcam/desk_chair/frame_0030.jpg 6 154 | webcam/desk_chair/frame_0003.jpg 6 155 | webcam/desk_chair/frame_0010.jpg 6 156 | webcam/desk_chair/frame_0001.jpg 6 157 | webcam/desk_chair/frame_0031.jpg 6 158 | webcam/desk_chair/frame_0022.jpg 6 159 | webcam/desk_chair/frame_0015.jpg 6 160 | webcam/desk_chair/frame_0029.jpg 6 161 | webcam/desk_chair/frame_0012.jpg 6 162 | webcam/desk_chair/frame_0016.jpg 6 163 | webcam/desk_chair/frame_0039.jpg 6 164 | webcam/desk_chair/frame_0002.jpg 6 165 | webcam/desk_chair/frame_0009.jpg 6 166 | webcam/desk_chair/frame_0037.jpg 6 167 | webcam/desk_chair/frame_0025.jpg 6 168 | webcam/desk_chair/frame_0014.jpg 6 169 | webcam/desk_chair/frame_0020.jpg 6 170 | webcam/desk_chair/frame_0027.jpg 6 171 | webcam/desk_chair/frame_0032.jpg 6 172 | 
webcam/desk_chair/frame_0035.jpg 6 173 | webcam/desk_chair/frame_0018.jpg 6 174 | webcam/desk_chair/frame_0006.jpg 6 175 | webcam/desk_chair/frame_0019.jpg 6 176 | webcam/desk_chair/frame_0040.jpg 6 177 | webcam/desk_chair/frame_0026.jpg 6 178 | webcam/desk_lamp/frame_0013.jpg 7 179 | webcam/desk_lamp/frame_0007.jpg 7 180 | webcam/desk_lamp/frame_0017.jpg 7 181 | webcam/desk_lamp/frame_0014.jpg 7 182 | webcam/desk_lamp/frame_0001.jpg 7 183 | webcam/desk_lamp/frame_0003.jpg 7 184 | webcam/desk_lamp/frame_0002.jpg 7 185 | webcam/desk_lamp/frame_0015.jpg 7 186 | webcam/desk_lamp/frame_0010.jpg 7 187 | webcam/desk_lamp/frame_0004.jpg 7 188 | webcam/desk_lamp/frame_0008.jpg 7 189 | webcam/desk_lamp/frame_0016.jpg 7 190 | webcam/desk_lamp/frame_0009.jpg 7 191 | webcam/desk_lamp/frame_0006.jpg 7 192 | webcam/desk_lamp/frame_0012.jpg 7 193 | webcam/desk_lamp/frame_0011.jpg 7 194 | webcam/desk_lamp/frame_0005.jpg 7 195 | webcam/desk_lamp/frame_0018.jpg 7 196 | webcam/desktop_computer/frame_0005.jpg 8 197 | webcam/desktop_computer/frame_0007.jpg 8 198 | webcam/desktop_computer/frame_0003.jpg 8 199 | webcam/desktop_computer/frame_0010.jpg 8 200 | webcam/desktop_computer/frame_0021.jpg 8 201 | webcam/desktop_computer/frame_0019.jpg 8 202 | webcam/desktop_computer/frame_0012.jpg 8 203 | webcam/desktop_computer/frame_0002.jpg 8 204 | webcam/desktop_computer/frame_0017.jpg 8 205 | webcam/desktop_computer/frame_0008.jpg 8 206 | webcam/desktop_computer/frame_0018.jpg 8 207 | webcam/desktop_computer/frame_0004.jpg 8 208 | webcam/desktop_computer/frame_0006.jpg 8 209 | webcam/desktop_computer/frame_0016.jpg 8 210 | webcam/desktop_computer/frame_0014.jpg 8 211 | webcam/desktop_computer/frame_0009.jpg 8 212 | webcam/desktop_computer/frame_0020.jpg 8 213 | webcam/desktop_computer/frame_0015.jpg 8 214 | webcam/desktop_computer/frame_0001.jpg 8 215 | webcam/desktop_computer/frame_0013.jpg 8 216 | webcam/desktop_computer/frame_0011.jpg 8 217 | webcam/file_cabinet/frame_0018.jpg 9 218 | webcam/file_cabinet/frame_0003.jpg 9 219 | webcam/file_cabinet/frame_0005.jpg 9 220 | webcam/file_cabinet/frame_0001.jpg 9 221 | webcam/file_cabinet/frame_0010.jpg 9 222 | webcam/file_cabinet/frame_0014.jpg 9 223 | webcam/file_cabinet/frame_0008.jpg 9 224 | webcam/file_cabinet/frame_0019.jpg 9 225 | webcam/file_cabinet/frame_0007.jpg 9 226 | webcam/file_cabinet/frame_0009.jpg 9 227 | webcam/file_cabinet/frame_0017.jpg 9 228 | webcam/file_cabinet/frame_0016.jpg 9 229 | webcam/file_cabinet/frame_0012.jpg 9 230 | webcam/file_cabinet/frame_0013.jpg 9 231 | webcam/file_cabinet/frame_0015.jpg 9 232 | webcam/file_cabinet/frame_0006.jpg 9 233 | webcam/file_cabinet/frame_0004.jpg 9 234 | webcam/file_cabinet/frame_0011.jpg 9 235 | webcam/file_cabinet/frame_0002.jpg 9 236 | webcam/headphones/frame_0002.jpg 10 237 | webcam/headphones/frame_0019.jpg 10 238 | webcam/headphones/frame_0021.jpg 10 239 | webcam/headphones/frame_0026.jpg 10 240 | webcam/headphones/frame_0013.jpg 10 241 | webcam/headphones/frame_0007.jpg 10 242 | webcam/headphones/frame_0011.jpg 10 243 | webcam/headphones/frame_0023.jpg 10 244 | webcam/headphones/frame_0020.jpg 10 245 | webcam/headphones/frame_0010.jpg 10 246 | webcam/headphones/frame_0018.jpg 10 247 | webcam/headphones/frame_0008.jpg 10 248 | webcam/headphones/frame_0009.jpg 10 249 | webcam/headphones/frame_0022.jpg 10 250 | webcam/headphones/frame_0005.jpg 10 251 | webcam/headphones/frame_0017.jpg 10 252 | webcam/headphones/frame_0003.jpg 10 253 | webcam/headphones/frame_0012.jpg 10 254 | 
webcam/headphones/frame_0001.jpg 10 255 | webcam/headphones/frame_0025.jpg 10 256 | webcam/headphones/frame_0027.jpg 10 257 | webcam/headphones/frame_0014.jpg 10 258 | webcam/headphones/frame_0006.jpg 10 259 | webcam/headphones/frame_0015.jpg 10 260 | webcam/headphones/frame_0016.jpg 10 261 | webcam/headphones/frame_0024.jpg 10 262 | webcam/headphones/frame_0004.jpg 10 263 | webcam/keyboard/frame_0027.jpg 11 264 | webcam/keyboard/frame_0003.jpg 11 265 | webcam/keyboard/frame_0022.jpg 11 266 | webcam/keyboard/frame_0013.jpg 11 267 | webcam/keyboard/frame_0007.jpg 11 268 | webcam/keyboard/frame_0026.jpg 11 269 | webcam/keyboard/frame_0004.jpg 11 270 | webcam/keyboard/frame_0014.jpg 11 271 | webcam/keyboard/frame_0018.jpg 11 272 | webcam/keyboard/frame_0015.jpg 11 273 | webcam/keyboard/frame_0017.jpg 11 274 | webcam/keyboard/frame_0020.jpg 11 275 | webcam/keyboard/frame_0016.jpg 11 276 | webcam/keyboard/frame_0005.jpg 11 277 | webcam/keyboard/frame_0001.jpg 11 278 | webcam/keyboard/frame_0019.jpg 11 279 | webcam/keyboard/frame_0009.jpg 11 280 | webcam/keyboard/frame_0024.jpg 11 281 | webcam/keyboard/frame_0021.jpg 11 282 | webcam/keyboard/frame_0008.jpg 11 283 | webcam/keyboard/frame_0010.jpg 11 284 | webcam/keyboard/frame_0025.jpg 11 285 | webcam/keyboard/frame_0012.jpg 11 286 | webcam/keyboard/frame_0023.jpg 11 287 | webcam/keyboard/frame_0006.jpg 11 288 | webcam/keyboard/frame_0002.jpg 11 289 | webcam/keyboard/frame_0011.jpg 11 290 | webcam/laptop/frame_0014.jpg 12 291 | webcam/laptop/frame_0007.jpg 12 292 | webcam/laptop/frame_0028.jpg 12 293 | webcam/laptop/frame_0024.jpg 12 294 | webcam/laptop/frame_0004.jpg 12 295 | webcam/laptop/frame_0015.jpg 12 296 | webcam/laptop/frame_0020.jpg 12 297 | webcam/laptop/frame_0019.jpg 12 298 | webcam/laptop/frame_0005.jpg 12 299 | webcam/laptop/frame_0002.jpg 12 300 | webcam/laptop/frame_0010.jpg 12 301 | webcam/laptop/frame_0003.jpg 12 302 | webcam/laptop/frame_0026.jpg 12 303 | webcam/laptop/frame_0030.jpg 12 304 | webcam/laptop/frame_0017.jpg 12 305 | webcam/laptop/frame_0011.jpg 12 306 | webcam/laptop/frame_0006.jpg 12 307 | webcam/laptop/frame_0018.jpg 12 308 | webcam/laptop/frame_0027.jpg 12 309 | webcam/laptop/frame_0009.jpg 12 310 | webcam/laptop/frame_0016.jpg 12 311 | webcam/laptop/frame_0022.jpg 12 312 | webcam/laptop/frame_0013.jpg 12 313 | webcam/laptop/frame_0023.jpg 12 314 | webcam/laptop/frame_0012.jpg 12 315 | webcam/laptop/frame_0021.jpg 12 316 | webcam/laptop/frame_0025.jpg 12 317 | webcam/laptop/frame_0008.jpg 12 318 | webcam/laptop/frame_0029.jpg 12 319 | webcam/laptop/frame_0001.jpg 12 320 | webcam/letter_tray/frame_0011.jpg 13 321 | webcam/letter_tray/frame_0006.jpg 13 322 | webcam/letter_tray/frame_0010.jpg 13 323 | webcam/letter_tray/frame_0005.jpg 13 324 | webcam/letter_tray/frame_0003.jpg 13 325 | webcam/letter_tray/frame_0012.jpg 13 326 | webcam/letter_tray/frame_0001.jpg 13 327 | webcam/letter_tray/frame_0004.jpg 13 328 | webcam/letter_tray/frame_0016.jpg 13 329 | webcam/letter_tray/frame_0015.jpg 13 330 | webcam/letter_tray/frame_0014.jpg 13 331 | webcam/letter_tray/frame_0008.jpg 13 332 | webcam/letter_tray/frame_0013.jpg 13 333 | webcam/letter_tray/frame_0007.jpg 13 334 | webcam/letter_tray/frame_0002.jpg 13 335 | webcam/letter_tray/frame_0019.jpg 13 336 | webcam/letter_tray/frame_0018.jpg 13 337 | webcam/letter_tray/frame_0017.jpg 13 338 | webcam/letter_tray/frame_0009.jpg 13 339 | webcam/mobile_phone/frame_0026.jpg 14 340 | webcam/mobile_phone/frame_0024.jpg 14 341 | webcam/mobile_phone/frame_0005.jpg 14 342 | 
webcam/mobile_phone/frame_0022.jpg 14 343 | webcam/mobile_phone/frame_0019.jpg 14 344 | webcam/mobile_phone/frame_0004.jpg 14 345 | webcam/mobile_phone/frame_0017.jpg 14 346 | webcam/mobile_phone/frame_0014.jpg 14 347 | webcam/mobile_phone/frame_0011.jpg 14 348 | webcam/mobile_phone/frame_0008.jpg 14 349 | webcam/mobile_phone/frame_0003.jpg 14 350 | webcam/mobile_phone/frame_0007.jpg 14 351 | webcam/mobile_phone/frame_0002.jpg 14 352 | webcam/mobile_phone/frame_0023.jpg 14 353 | webcam/mobile_phone/frame_0015.jpg 14 354 | webcam/mobile_phone/frame_0016.jpg 14 355 | webcam/mobile_phone/frame_0025.jpg 14 356 | webcam/mobile_phone/frame_0010.jpg 14 357 | webcam/mobile_phone/frame_0020.jpg 14 358 | webcam/mobile_phone/frame_0029.jpg 14 359 | webcam/mobile_phone/frame_0030.jpg 14 360 | webcam/mobile_phone/frame_0021.jpg 14 361 | webcam/mobile_phone/frame_0009.jpg 14 362 | webcam/mobile_phone/frame_0027.jpg 14 363 | webcam/mobile_phone/frame_0001.jpg 14 364 | webcam/mobile_phone/frame_0013.jpg 14 365 | webcam/mobile_phone/frame_0006.jpg 14 366 | webcam/mobile_phone/frame_0018.jpg 14 367 | webcam/mobile_phone/frame_0012.jpg 14 368 | webcam/mobile_phone/frame_0028.jpg 14 369 | webcam/monitor/frame_0008.jpg 15 370 | webcam/monitor/frame_0039.jpg 15 371 | webcam/monitor/frame_0030.jpg 15 372 | webcam/monitor/frame_0001.jpg 15 373 | webcam/monitor/frame_0029.jpg 15 374 | webcam/monitor/frame_0041.jpg 15 375 | webcam/monitor/frame_0042.jpg 15 376 | webcam/monitor/frame_0032.jpg 15 377 | webcam/monitor/frame_0016.jpg 15 378 | webcam/monitor/frame_0012.jpg 15 379 | webcam/monitor/frame_0013.jpg 15 380 | webcam/monitor/frame_0023.jpg 15 381 | webcam/monitor/frame_0028.jpg 15 382 | webcam/monitor/frame_0005.jpg 15 383 | webcam/monitor/frame_0033.jpg 15 384 | webcam/monitor/frame_0010.jpg 15 385 | webcam/monitor/frame_0031.jpg 15 386 | webcam/monitor/frame_0018.jpg 15 387 | webcam/monitor/frame_0024.jpg 15 388 | webcam/monitor/frame_0002.jpg 15 389 | webcam/monitor/frame_0011.jpg 15 390 | webcam/monitor/frame_0021.jpg 15 391 | webcam/monitor/frame_0027.jpg 15 392 | webcam/monitor/frame_0038.jpg 15 393 | webcam/monitor/frame_0004.jpg 15 394 | webcam/monitor/frame_0035.jpg 15 395 | webcam/monitor/frame_0019.jpg 15 396 | webcam/monitor/frame_0036.jpg 15 397 | webcam/monitor/frame_0022.jpg 15 398 | webcam/monitor/frame_0040.jpg 15 399 | webcam/monitor/frame_0006.jpg 15 400 | webcam/monitor/frame_0015.jpg 15 401 | webcam/monitor/frame_0026.jpg 15 402 | webcam/monitor/frame_0017.jpg 15 403 | webcam/monitor/frame_0003.jpg 15 404 | webcam/monitor/frame_0014.jpg 15 405 | webcam/monitor/frame_0009.jpg 15 406 | webcam/monitor/frame_0007.jpg 15 407 | webcam/monitor/frame_0043.jpg 15 408 | webcam/monitor/frame_0020.jpg 15 409 | webcam/monitor/frame_0025.jpg 15 410 | webcam/monitor/frame_0037.jpg 15 411 | webcam/monitor/frame_0034.jpg 15 412 | webcam/mouse/frame_0015.jpg 16 413 | webcam/mouse/frame_0024.jpg 16 414 | webcam/mouse/frame_0016.jpg 16 415 | webcam/mouse/frame_0020.jpg 16 416 | webcam/mouse/frame_0010.jpg 16 417 | webcam/mouse/frame_0011.jpg 16 418 | webcam/mouse/frame_0012.jpg 16 419 | webcam/mouse/frame_0027.jpg 16 420 | webcam/mouse/frame_0009.jpg 16 421 | webcam/mouse/frame_0004.jpg 16 422 | webcam/mouse/frame_0023.jpg 16 423 | webcam/mouse/frame_0026.jpg 16 424 | webcam/mouse/frame_0017.jpg 16 425 | webcam/mouse/frame_0005.jpg 16 426 | webcam/mouse/frame_0008.jpg 16 427 | webcam/mouse/frame_0006.jpg 16 428 | webcam/mouse/frame_0021.jpg 16 429 | webcam/mouse/frame_0013.jpg 16 430 | 
webcam/mouse/frame_0007.jpg 16 431 | webcam/mouse/frame_0019.jpg 16 432 | webcam/mouse/frame_0022.jpg 16 433 | webcam/mouse/frame_0028.jpg 16 434 | webcam/mouse/frame_0003.jpg 16 435 | webcam/mouse/frame_0002.jpg 16 436 | webcam/mouse/frame_0030.jpg 16 437 | webcam/mouse/frame_0014.jpg 16 438 | webcam/mouse/frame_0001.jpg 16 439 | webcam/mouse/frame_0029.jpg 16 440 | webcam/mouse/frame_0025.jpg 16 441 | webcam/mouse/frame_0018.jpg 16 442 | webcam/mug/frame_0027.jpg 17 443 | webcam/mug/frame_0016.jpg 17 444 | webcam/mug/frame_0024.jpg 17 445 | webcam/mug/frame_0006.jpg 17 446 | webcam/mug/frame_0015.jpg 17 447 | webcam/mug/frame_0017.jpg 17 448 | webcam/mug/frame_0025.jpg 17 449 | webcam/mug/frame_0005.jpg 17 450 | webcam/mug/frame_0014.jpg 17 451 | webcam/mug/frame_0019.jpg 17 452 | webcam/mug/frame_0003.jpg 17 453 | webcam/mug/frame_0008.jpg 17 454 | webcam/mug/frame_0010.jpg 17 455 | webcam/mug/frame_0012.jpg 17 456 | webcam/mug/frame_0009.jpg 17 457 | webcam/mug/frame_0020.jpg 17 458 | webcam/mug/frame_0013.jpg 17 459 | webcam/mug/frame_0002.jpg 17 460 | webcam/mug/frame_0001.jpg 17 461 | webcam/mug/frame_0026.jpg 17 462 | webcam/mug/frame_0007.jpg 17 463 | webcam/mug/frame_0023.jpg 17 464 | webcam/mug/frame_0022.jpg 17 465 | webcam/mug/frame_0011.jpg 17 466 | webcam/mug/frame_0004.jpg 17 467 | webcam/mug/frame_0021.jpg 17 468 | webcam/mug/frame_0018.jpg 17 469 | webcam/paper_notebook/frame_0020.jpg 18 470 | webcam/paper_notebook/frame_0022.jpg 18 471 | webcam/paper_notebook/frame_0021.jpg 18 472 | webcam/paper_notebook/frame_0017.jpg 18 473 | webcam/paper_notebook/frame_0025.jpg 18 474 | webcam/paper_notebook/frame_0016.jpg 18 475 | webcam/paper_notebook/frame_0014.jpg 18 476 | webcam/paper_notebook/frame_0026.jpg 18 477 | webcam/paper_notebook/frame_0012.jpg 18 478 | webcam/paper_notebook/frame_0003.jpg 18 479 | webcam/paper_notebook/frame_0005.jpg 18 480 | webcam/paper_notebook/frame_0019.jpg 18 481 | webcam/paper_notebook/frame_0011.jpg 18 482 | webcam/paper_notebook/frame_0010.jpg 18 483 | webcam/paper_notebook/frame_0024.jpg 18 484 | webcam/paper_notebook/frame_0009.jpg 18 485 | webcam/paper_notebook/frame_0013.jpg 18 486 | webcam/paper_notebook/frame_0027.jpg 18 487 | webcam/paper_notebook/frame_0007.jpg 18 488 | webcam/paper_notebook/frame_0006.jpg 18 489 | webcam/paper_notebook/frame_0015.jpg 18 490 | webcam/paper_notebook/frame_0004.jpg 18 491 | webcam/paper_notebook/frame_0028.jpg 18 492 | webcam/paper_notebook/frame_0008.jpg 18 493 | webcam/paper_notebook/frame_0002.jpg 18 494 | webcam/paper_notebook/frame_0001.jpg 18 495 | webcam/paper_notebook/frame_0018.jpg 18 496 | webcam/paper_notebook/frame_0023.jpg 18 497 | webcam/pen/frame_0032.jpg 19 498 | webcam/pen/frame_0012.jpg 19 499 | webcam/pen/frame_0028.jpg 19 500 | webcam/pen/frame_0002.jpg 19 501 | webcam/pen/frame_0015.jpg 19 502 | webcam/pen/frame_0011.jpg 19 503 | webcam/pen/frame_0023.jpg 19 504 | webcam/pen/frame_0008.jpg 19 505 | webcam/pen/frame_0027.jpg 19 506 | webcam/pen/frame_0014.jpg 19 507 | webcam/pen/frame_0030.jpg 19 508 | webcam/pen/frame_0003.jpg 19 509 | webcam/pen/frame_0013.jpg 19 510 | webcam/pen/frame_0005.jpg 19 511 | webcam/pen/frame_0024.jpg 19 512 | webcam/pen/frame_0018.jpg 19 513 | webcam/pen/frame_0009.jpg 19 514 | webcam/pen/frame_0025.jpg 19 515 | webcam/pen/frame_0010.jpg 19 516 | webcam/pen/frame_0029.jpg 19 517 | webcam/pen/frame_0026.jpg 19 518 | webcam/pen/frame_0031.jpg 19 519 | webcam/pen/frame_0004.jpg 19 520 | webcam/pen/frame_0017.jpg 19 521 | webcam/pen/frame_0021.jpg 19 522 | 
webcam/pen/frame_0020.jpg 19 523 | webcam/pen/frame_0001.jpg 19 524 | webcam/pen/frame_0022.jpg 19 525 | webcam/pen/frame_0006.jpg 19 526 | webcam/pen/frame_0007.jpg 19 527 | webcam/pen/frame_0016.jpg 19 528 | webcam/pen/frame_0019.jpg 19 529 |
-------------------------------------------------------------------------------- /datasets/datasets.py: --------------------------------------------------------------------------------
import os

import numpy as np
import torch
import torch.distributed as dist  # required by the distributed samplers below
from torch.utils.data.dataset import Subset
from torch.utils.data.dataset import ConcatDataset
from torch.utils.data.sampler import Sampler
import torch.utils.data as data
from torchvision import datasets, transforms
from PIL import Image
import random

from utils.utils import set_random_seed

DATA_PATH = '~/data/'

class MultiDataTransform(object):
    # applies the same transform twice, producing two independently augmented views
    def __init__(self, transform):
        self.transform1 = transform
        self.transform2 = transform

    def __call__(self, sample):
        x1 = self.transform1(sample)
        x2 = self.transform2(sample)
        return x1, x2

def cycle(iterable):
    # endless iterator over a list, reshuffling it after every full pass
    while True:
        for x in iterable:
            yield x
        random.shuffle(iterable)

def get_train_transform():

    train_transform = transforms.Compose([
        transforms.Resize(256),
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])

    return MultiDataTransform(train_transform)

def get_test_transform_crop():
    test_transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    return test_transform

def get_test_transform():
    test_transform = transforms.Compose([
        transforms.Resize(224),
        transforms.ToTensor(),
    ])
    return test_transform

def _dataset_info(txt_labels):
    # each line of a data_txt file is "<relative_image_path> <integer_label>"
    with open(txt_labels, 'r') as f:
        images_list = f.readlines()

    file_names = []
    labels = []
    for row in images_list:
        row = row.split(' ')
        file_names.append(row[0])
        labels.append(int(row[1]))

    return file_names, labels

class FileDataset(data.Dataset):
    def __init__(self, benchmark, data_file, transform=None, add_idx=False):
        super().__init__()

        self.root_dir = DATA_PATH
        self.benchmark = benchmark
        self.data_file = data_file
        self.transform = transform
        self.names, self.labels = _dataset_info(self.data_file)
        self.add_idx = add_idx

    def __len__(self):
        return len(self.names)

    def __getitem__(self, index):
        path, target = self.names[index], self.labels[index]

        path = os.path.expanduser(self.root_dir + f"{self.benchmark}/{path}")
        with open(path, 'rb') as f:
            img = Image.open(f).convert('RGB')

        img_size = img.size

        if self.transform is not None:
            img = self.transform(img)

        if self.add_idx:
            return img, target, index
        return img, target

class ClassDataset(data.Dataset):
    def __init__(self, root, names, label, transform):
        super().__init__()

        self.root = root
        self.names = names
        self.label = label
        self.transform = transform

    def __len__(self):
        return len(self.names)

    def __getitem__(self, index):
        path, target = self.names[index], self.label

        path = os.path.expanduser(self.root + f"/{path}")
        with open(path, 'rb') as f:
            img = Image.open(f).convert('RGB')

        img_size = img.size

        if self.transform is not None:
            img = self.transform(img)

        return img, target, path

def get_pseudo_targets(P, known_mask, known_pseudo_labels, transform, n_classes):
    file_path = f'data/data_txt/{P.dataset}/{P.test_domain}.txt'

    names, _ = _dataset_info(file_path)
    np_names = np.array(names)

    selected_names = np_names[known_mask]
    selected_labels = known_pseudo_labels[known_mask]

    return get_class_datasets(P.dataset, data_file=None, transform=transform, names=selected_names.tolist(), labels=selected_labels.tolist(), n_classes=n_classes)


def get_class_datasets(benchmark, data_file, transform=None, names=None, labels=None, n_classes=45):
    # from a dataset file it builds a set of datasets, one for each class
    if names is None:
        names, labels = _dataset_info(data_file)
    root_dir = os.path.join(DATA_PATH, benchmark)

    names = np.array(names)
    labels_set = set(labels)
    labels = np.array(labels)
    all_labels = np.arange(labels.max()+1)

    if len(labels_set) != n_classes:
        print("One dataset does not contain all the classes!")

    datasets = {}
    for lbl in labels_set:
        mask = labels == lbl
        class_names = names[mask]
        ds = ClassDataset(root_dir, class_names, lbl, transform)
        datasets[lbl] = ds
    return datasets

def get_datasets_for_test(P):
    test_transform = get_test_transform()

    # target
    benchmark = P.dataset
    file_path = f'data/data_txt/{benchmark}/{P.test_domain}.txt'
    target_ds = FileDataset(benchmark, file_path, test_transform, add_idx=True)

    # source
    if benchmark == "OfficeHome":
        source_name = f"no_{P.test_domain}OpenSet"
        n_classes = 45
    elif benchmark == "Office31":
        source_name = f"no_{P.test_domain}OpenSet"
        n_classes = 20
    elif benchmark == "DomainNet":
        source_name = "OpenSet_source_train"
        n_classes = 100
    else:
        raise NotImplementedError(f"Unknown benchmark {benchmark}")

    source_file_path = f'data/data_txt/{benchmark}/{source_name}.txt'

    source_ds = FileDataset(benchmark, source_file_path, test_transform, add_idx=True)

    return source_ds, target_ds, n_classes

def get_dataset_2(P, train=True, target_known_mask=None, target_known_pseudo_labels=None):

    if train:
        transform = get_train_transform()

        benchmark = P.dataset

        domain_datasets = {}
        if benchmark == "OfficeHome":
            domains = ["Art", "Clipart", "Product", "RealWorld"]
            assert P.test_domain in domains, f"{P.test_domain} unknown!"
            domains.remove(P.test_domain)
            sources = domains
            n_classes = 45

        elif benchmark == "DomainNet":
            sources = ["infograph", "painting"]
            n_classes = 100

        elif benchmark == "Office31":
            domains = ["Amazon", "Dslr", "Webcam"]
            assert P.test_domain in domains, f"{P.test_domain} unknown!"
            domains.remove(P.test_domain)
            sources = domains
            n_classes = 20
        else:
            raise NotImplementedError(f"Unknown benchmark {benchmark}")
        for domain in sources:
            file_path = f'data/data_txt/{benchmark}/{domain}OpenSet_known.txt'
            datasets = get_class_datasets(benchmark, file_path, transform, n_classes=n_classes)
            domain_datasets[domain] = datasets

        if target_known_mask is not None:
            domain_datasets["pseudo_target"] = get_pseudo_targets(P, target_known_mask, target_known_pseudo_labels, transform, n_classes=n_classes)
            sources.append("pseudo_target")

        # now for each class we build a ConcatDataset
        class_datasets = []
        for idx in range(n_classes):
            this_class = []
            for source in sources:
                if idx in domain_datasets[source]:
                    this_class.append(domain_datasets[source][idx])
            class_datasets.append(this_class)

        class_datasets = [ConcatDataset(sets) for sets in class_datasets]
        return class_datasets, n_classes
    else:
        raise NotImplementedError()


def get_style_dataset(P):
    transform = get_test_transform_crop()
    benchmark = P.dataset

    file_path = f'data/data_txt/{benchmark}/{P.test_domain}.txt'
    ds = FileDataset(benchmark, file_path, transform)
    return ds

class DistributedMultiSourceRandomSampler(Sampler):
    r"""Samples elements randomly from a ConcatDataset, cycling between sources.
    Always samples with replacement, since batches must stay balanced.
    Arguments:
        data_source (Dataset): dataset to sample from
        num_samples (int): number of samples to draw from each source; defaults to
            `len(dataset)` when not specified.
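    Example (illustrative): with 3 sources and num_samples=4, `__iter__` draws 4
    random indices inside each source's index range, interleaves them as
    [s0, s1, s2, s0, s1, s2, ...], and finally keeps every `num_replicas`-th
    element starting at offset `rank`, so each process sees a different,
    source-balanced slice of the same stream.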
    """

    def __init__(self, data_source, num_samples=None, num_replicas=None, rank=None):
        if num_replicas is None:
            if not dist.is_available():
                raise RuntimeError("Requires distributed package to be available")
            num_replicas = dist.get_world_size()
        if rank is None:
            if not dist.is_available():
                raise RuntimeError("Requires distributed package to be available")
            rank = dist.get_rank()
        self.num_replicas = num_replicas
        self.rank = rank

        self.data_source = data_source
        self._num_samples = num_samples

        if not isinstance(self.data_source, ConcatDataset):
            raise ValueError('data_source should be instance of ConcatDataset')

        self.cumulative_sizes = self.data_source.cumulative_sizes

        if not isinstance(self.num_samples, int) or self.num_samples <= 0:
            raise ValueError("num_samples should be a positive integer "
                             "value, but got num_samples={}".format(self.num_samples))

    @property
    def num_samples(self):
        # dataset size might change at runtime
        if self._num_samples is None:
            return len(self.data_source)
        return self._num_samples

    def __iter__(self):
        indexes = []
        low = 0
        for i in range(len(self.cumulative_sizes)):
            high = self.cumulative_sizes[i]
            data_idx = torch.randint(low=low, high=high, size=(self.num_samples,), dtype=torch.int64).tolist()
            indexes.append(data_idx)
            low = high
        interleave_indexes = [x for t in zip(*indexes) for x in t]
        interleave_indexes = interleave_indexes[self.rank:self.cumulative_sizes[-1]:self.num_replicas]  # distributed
        return iter(interleave_indexes)

    def __len__(self):
        return self.num_samples

class BalancedMultiSourceRandomSampler(Sampler):
    r"""Samples elements randomly from a ConcatDataset, cycling between sources.
    Designed to balance both classes and sources.
    Always samples with replacement, since batches must stay balanced.
    Arguments:
        data_source (Dataset): ConcatDataset of ConcatDatasets. First level: one ConcatDataset
            per class. Second level: one dataset for each source that has this class.
        batch_p: number of samples of the same class that form a group. The group is the atomic batch size.
E.g.: each GPU at each iteration receives at least one group
311 | rank: identifier of this GPU (for the separation of sampled elements between GPUs for distributed training)
312 | world_size: number of GPUs taking part in distributed training
313 | """
314 | 
315 | def __init__(self, data_source, batch_p, rank, world_size):
316 | self.rank = rank
317 | self.world_size = world_size
318 | self.batch_p = batch_p
319 | 
320 | if not isinstance(data_source, ConcatDataset):
321 | raise ValueError('data_source should be an instance of ConcatDataset')
322 | 
323 | for el in data_source.datasets:
324 | if not isinstance(el, ConcatDataset):
325 | raise ValueError("data_source should be a ConcatDataset of ConcatDatasets")
326 | 
327 | n_classes = len(data_source.datasets)
328 | sources_per_class = {}
329 | lengths_dict = {}
330 | ids_dict = {}
331 | 
332 | low = 0
333 | for cls_id in range(n_classes):
334 | cls_ds = data_source.datasets[cls_id]
335 | # number of sources for this class
336 | n_sources = len(cls_ds.cumulative_sizes)
337 | sources_per_class[cls_id] = n_sources
338 | 
339 | # size of each source for this class
340 | sizes = [cls_ds.cumulative_sizes[0]]
341 | sizes.extend([(cls_ds.cumulative_sizes[el] - cls_ds.cumulative_sizes[el-1]) for el in range(1,n_sources)])
342 | lengths_dict[cls_id] = sizes
343 | 
344 | # elements of each source for this class -> taken in random order!
345 | low_this = 0
346 | 
347 | src_dict = {}
348 | for src in range(n_sources):
349 | high = cls_ds.cumulative_sizes[src]
350 | ids_src = [el for el in range(low+low_this, high+low)]
351 | 
352 | # random order
353 | random.shuffle(ids_src)
354 | src_dict[src] = ids_src
355 | 
356 | low_this = high
357 | ids_dict[cls_id] = src_dict
358 | low += high
359 | 
360 | list_strs, list_indices = BalancedMultiSourceRandomSampler.generate_list(n_classes, sources_per_class, batch_p, lengths_dict, ids_dict)
361 | 
362 | # now we split indices among processes
363 | num_chunks = int(len(list_indices)/self.batch_p)
364 | 
365 | while num_chunks % self.world_size != 0:
366 | print("Removing some data as dataset size is not divisible by the number of processes")
367 | for _ in range(batch_p):
368 | list_indices.pop()
369 | num_chunks = int(len(list_indices)/self.batch_p)
370 | 
371 | indices_tensor = torch.tensor(list_indices)
372 | chunks = torch.chunk(indices_tensor, num_chunks)
373 | 
374 | starts_from = self.rank
375 | my_chunks = []
376 | for idx, ch in enumerate(chunks):
377 | if (idx - starts_from)%self.world_size == 0:
378 | my_chunks.append(ch)
379 | my_indices = torch.cat(my_chunks).tolist()
380 | self.indices = my_indices
381 | 
382 | @staticmethod
383 | def generate_list(n_classes, sources_per_class, batch_p, lengths_dict, ids_dict):
384 | """
385 | n_classes -> total number of classes
386 | sources_per_class -> dict with {class_id : number of sources containing this class}
387 | batch_p -> number of samples for each class in a block
388 | lengths_dict -> number of samples in each class for each source. E.g. class 0 has K=sources_per_class[0] sources. lengths[0] = [len_class_0_source_1, ..., len_class_0_source_K]
389 | ids_dict -> each sample has a unique identifier. These identifiers are the ones that should be inserted in the final list. This is a dict that contains for
390 | each class and each source a list of ids
391 | Should return a list which can be divided in blocks of size batch_p. Each block contains batch_p
392 | elements of the same class. Subsequent blocks refer to different classes.
393 | The sampling should always be done with replacement in order to maintain balancing. In particular:
394 | - if for a certain class one source has fewer samples than the others, those samples should be selected
395 | more often in order to rebalance the various sources;
396 | - if a certain class has in total fewer samples than the others, it should still appear in the
397 | same number of blocks.
398 | Therefore the correct approach is:
399 | - we compute the number of samples that we need for each class (the max over classes of num_sources * max_source_length)
400 | - for each class we randomly sample from the various sources (in an alternating fashion) until we reach the desired length
401 | Example of result with:
402 | - n_classes = 6
403 | - sources_per_class = {0:5,1:5,2:5,3:5,4:5,5:5} # -> each class has 5 sources
404 | - batch_p = 3
405 | - lengths_dict = {0:[8,8,8,8,8],1:[8,8,8,8,8],2:[8,8,8,8,8],3:[8,8,8,8,8],4:[8,8,8,8,8],5:[8,8,8,8,8]}
406 | - ids_dict = {}
407 | OUTPUT: [
408 | D0C0E0, D1C0E0, D2C0E0,
409 | D0C1E0, D1C1E0, D2C1E0,
410 | D0C2E0, D1C2E0, D2C2E0,
411 | D0C3E0, D1C3E0, D2C3E0,
412 | D0C4E0, D1C4E0, D2C4E0,
413 | D0C5E0, D1C5E0, D2C5E0,
414 | D3C0E0, D4C0E0, D0C0E1,
415 | D3C1E0, D4C1E0, D0C1E1,
416 | D3C2E0, D4C2E0, D0C2E1,
417 | D3C3E0, D4C3E0, D0C3E1,
418 | D3C4E0, D4C4E0, D0C4E1,
419 | D3C5E0, D4C5E0, D0C5E1,
420 | D1C0E1, D2C0E1, D3C0E1,
421 | D1C1E1, D2C1E1, D3C1E1,
422 | D1C2E1, D2C2E1, D3C2E1,
423 | D1C3E1, D2C3E1, D3C3E1,
424 | D1C4E1, D2C4E1, D3C4E1,
425 | D1C5E1, D2C5E1, D3C5E1,
426 | D4C0E1, D0C0E2, D1C0E2,
427 | D4C1E1, D0C1E2, D1C1E2,
428 | D4C2E1, D0C2E2, D1C2E2,
429 | D4C3E1, D0C3E2, D1C3E2,
430 | D4C4E1, D0C4E2, D1C4E2,
431 | D4C5E1, D0C5E2, D1C5E2,
432 | ...
433 | ]
434 | First of all we compute the desired length for each class queue:
435 | for each class we compute num_sources * len_largest_source and we take the max of those values.
436 | Then we create a queue for each class with the desired length, alternating samples from the various
437 | sources.
438 | We first create some intermediate structures that will help in building the final list.
439 | First, for each class and each source we create a queue of elements:
440 | queue_C0_D0: [E0,E1,E2,E3,E4,E5,E6,E7]
441 | queue_C0_D1: [E0,E1,E2,E3,E4,E5,E6,E7]
442 | ...
443 | Here we should take into account that sources should be balanced, and therefore for those sources
444 | having fewer samples than the others we sample with replacement.
445 | Then for each class we create a queue that contains elements of that class, alternating sources:
446 | queue_C0 = [D0E0, D1E0, D2E0, D3E0, D4E0, D0E1, D1E1, D2E1, D3E1, D4E1, D0E2, D1E2, ...
447 | At this point we have a queue for each class. However it is possible that some queues are longer than others.
448 | Through resampling we fix this so that we can keep the balancing between classes.
449 | When resampling we keep the alternating strategy for sources.
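A small worked case of the rebalancing (hypothetical sizes, not from any benchmark): if class 0
lives in two sources with lengths_dict = {0: [8, 4]}, its queue must reach max(8, 4) * 2 = 16
entries; while alternating D0, D1, D0, D1, ... each of the 4 samples of the smaller source is
drawn on average twice, whereas each sample of the larger source is drawn on average once.
If another class requires a longer queue, class 0 simply keeps alternating draws until every
class queue reaches the same desired length.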
450 | """ 451 | 452 | # compute desired length 453 | cls_sizes = [max(lengths_dict[cls_id])*sources_per_class[cls_id] for cls_id in range(n_classes)] 454 | 455 | max_size = max(cls_sizes) 456 | 457 | desired_class_len = max_size 458 | 459 | # we duplicate each data structure 460 | # simply queues contains strings -> each string tell us how an element was chosen 461 | # while queues_ids contains the real ids 462 | queues = {} 463 | queues_ids = {} 464 | 465 | for cls_id in range(n_classes): 466 | 467 | n_sources = sources_per_class[cls_id] 468 | ids_this_class = ids_dict[cls_id] 469 | len_sources = lengths_dict[cls_id] 470 | queue_this_class = [] 471 | queue_this_class_ids = [] 472 | 473 | src_list = [idx for idx in range(n_sources)] 474 | random.shuffle(src_list) 475 | src_iter = iter(cycle(src_list)) 476 | while len(queue_this_class) < desired_class_len: 477 | src = next(src_iter) 478 | ids_this_src = ids_this_class[src] 479 | len_this_src = len_sources[src] 480 | queue_this_class.append(f"D{src}E{random.randrange(len_this_src)}") 481 | queue_this_class_ids.append(ids_this_src[random.randrange(len_this_src)]) 482 | 483 | queues[cls_id] = queue_this_class 484 | queues_ids[cls_id] = queue_this_class_ids 485 | 486 | out = [] 487 | out_ids = [] 488 | 489 | while True: 490 | found = False 491 | for cls_id in range(n_classes): 492 | q_this_class = queues[cls_id] 493 | q_this_class_ids = queues_ids[cls_id] 494 | if len(q_this_class) >= batch_p: 495 | found = True 496 | for el in range(batch_p): 497 | out.append(f'C{cls_id}{q_this_class.pop(0)}') 498 | out_ids.append(q_this_class_ids.pop(0)) 499 | if not found: 500 | break 501 | return out, out_ids 502 | 503 | def __iter__(self): 504 | return iter(self.indices) 505 | 506 | def __len__(self): 507 | return len(self.indices) 508 | -------------------------------------------------------------------------------- /eval.py: -------------------------------------------------------------------------------- 1 | from common.eval import * 2 | from tqdm import tqdm 3 | import numpy as np 4 | 5 | model.eval() 6 | 7 | 8 | if P.mode == "openset_eval": 9 | from evals.evals import openset_eval 10 | 11 | with torch.no_grad(): 12 | openset_eval(P, model, source_test_loader, target_test_loader, logger=None) 13 | 14 | elif P.mode == "eval_known_selection": 15 | from evals.evals import compute_confident_known_mask 16 | 17 | with torch.no_grad(): 18 | known_mask, known_pseudo_labels, known_gt_labels = compute_confident_known_mask(P, model, source_test_loader, target_test_loader, logger=None) 19 | 20 | from sklearn.metrics import accuracy_score 21 | acc = accuracy_score(known_gt_labels[known_mask], known_pseudo_labels[known_mask]) 22 | 23 | gt_known = 0 24 | 25 | known_gt_lbls = known_gt_labels[known_mask] 26 | number_real_known = len(known_gt_lbls[known_gt_lbls < P.n_classes]) 27 | percentage_true_known = number_real_known/len(known_mask.nonzero()[0]) 28 | print("Selected {} target samples as known. Classification accuracy: {:.4f}. 
Percentage of gt known: {:.4f}".format(len(known_mask.nonzero()[0]), acc, percentage_true_known)) 29 | else: 30 | raise NotImplementedError() 31 | 32 | 33 | 34 | -------------------------------------------------------------------------------- /evals/evals.py: -------------------------------------------------------------------------------- 1 | import time 2 | import itertools 3 | import math 4 | 5 | import diffdist.functional as distops 6 | import numpy as np 7 | import torch 8 | import torch.distributed as dist 9 | import torch.nn as nn 10 | import torch.nn.functional as F 11 | from sklearn.metrics import roc_auc_score 12 | 13 | import models.transform_layers as TL 14 | from utils.temperature_scaling import _ECELoss 15 | from utils.utils import AverageMeter, set_random_seed, normalize 16 | from tqdm import tqdm 17 | import sys 18 | from utils.dist_utils import synchronize, all_gather 19 | np.set_printoptions(threshold=sys.maxsize) 20 | 21 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu") 22 | cpu_device = torch.device("cpu") 23 | 24 | def get_features(P, model, test_loaders, normalize=True, layer="penultimate", distributed=False): 25 | model.eval() 26 | 27 | # we have to use penultimate layer as some models (like baseline naive) do not have the simclr projection head 28 | feats = [] 29 | labels = [] 30 | ids = [] 31 | out_dict = {} 32 | 33 | for loader in test_loaders: 34 | for batch in tqdm(loader): 35 | images, lbl, img_id = batch 36 | labels.append(lbl.item()) 37 | images = images.to(device) 38 | with torch.no_grad(): 39 | _, output_aux = model(images, penultimate=True,simclr=True) 40 | feat = output_aux[layer] 41 | if normalize: 42 | feat = feat/feat.norm() 43 | cpu_feat = feat.to(cpu_device) 44 | feats.append(cpu_feat) 45 | out_dict[img_id.item()] = {'feat': cpu_feat, 'label': lbl.item()} 46 | 47 | feats=torch.cat(feats) 48 | 49 | if distributed: 50 | all_dicts = all_gather(out_dict) 51 | predictions = {} 52 | for dic in all_dicts: 53 | predictions.update(dic) 54 | image_ids = list(sorted(predictions.keys())) 55 | 56 | all_feats = [] 57 | all_labels = [] 58 | for img_id in image_ids: 59 | all_feats.append(predictions[img_id]['feat']) 60 | all_labels.append(predictions[img_id]['label']) 61 | 62 | all_feats = torch.cat(all_feats) 63 | feats = all_feats 64 | 65 | all_labels = torch.tensor(all_labels) 66 | labels = all_labels.numpy() 67 | synchronize() 68 | return feats, np.array(labels) 69 | 70 | def rescale_cosine_similarity(similarities): 71 | return (similarities+1)/2 72 | 73 | def compute_source_prototypes(P, model, source_loader, eval_layer, distributed=False): 74 | 75 | # we extract features for all source samples 76 | source_feats, source_gt_labels = get_features(P,model, [source_loader], layer=eval_layer, normalize=True,distributed=distributed) 77 | 78 | # we compute prototypes for source classes 79 | labels_set = set(source_gt_labels.tolist()) 80 | prototypes = {} 81 | for label in labels_set: 82 | lbl_mask = source_gt_labels == label 83 | source_this_label = source_feats[lbl_mask] 84 | prototypes[label] = source_this_label.mean(dim=0) 85 | 86 | # let's move prototypes to the hypersphere. 
We also compute average cluster compactness 87 | # we will need a threshold which will be based on this value 88 | hyp_prototypes = [] 89 | cls_compactness_tot = 0 90 | for cls in prototypes.keys(): 91 | cls_prototype = prototypes[cls] 92 | norm = cls_prototype.norm() 93 | hyp_prototype = cls_prototype/norm 94 | hyp_prototypes.append(hyp_prototype) 95 | 96 | source_this_label = source_feats[source_gt_labels == cls] 97 | tot_similarities = 0 98 | for src_feat in source_this_label: 99 | similarity = (src_feat*hyp_prototype).sum() 100 | similarity = rescale_cosine_similarity(similarity) 101 | tot_similarities += similarity.item() 102 | avg_cls_similarity = tot_similarities/len(source_this_label) 103 | cls_compactness_tot += avg_cls_similarity 104 | cls_compactness_avg = cls_compactness_tot/len(labels_set) 105 | 106 | hyp_prototypes = np.stack(hyp_prototypes) 107 | 108 | # we also compute average distance between nearest prototypes 109 | topk_sims = np.zeros((len(hyp_prototypes))) 110 | for idx, hyp_prt in enumerate(hyp_prototypes): 111 | similarities = (hyp_prt*hyp_prototypes).sum(1) 112 | similarities = rescale_cosine_similarity(similarities) 113 | similarities.sort() 114 | topk_val = similarities[-2] 115 | topk_sims[idx] = topk_val 116 | 117 | return source_feats, hyp_prototypes, cls_compactness_avg, topk_sims.mean() 118 | 119 | def compute_threshold_multiplier(avg_compactness, avg_cls_dist): 120 | y = (1-avg_cls_dist) 121 | x = (1-avg_compactness) 122 | z = y/(2*x) 123 | return math.log(z) + 1 124 | 125 | def compute_confident_known_mask(P, model, source_loader, target_loader, logger, eval_layer="simclr"): 126 | 127 | distributed = isinstance(source_loader.sampler, torch.utils.data.distributed.DistributedSampler) 128 | model.eval() 129 | 130 | # compute source prototypes and compactness for known classes 131 | _, hyp_prototypes, cls_compactness_avg, avg_min_sim = compute_source_prototypes(P, model, source_loader, eval_layer, distributed=distributed) 132 | 133 | # now we get features for target samples 134 | target_feats, target_gt_labels = get_features(P, model, [target_loader], layer=eval_layer, normalize=True, distributed=distributed) 135 | 136 | # for each target sample we measure distance from nearest prototype. 
If the distance is lower than a threshold we select
137 | # this sample and use the prototype label as its pseudo label
138 | 
139 | threshold_multiplier = compute_threshold_multiplier(cls_compactness_avg, avg_min_sim)/2
140 | known_threshold = (1-cls_compactness_avg)*threshold_multiplier
141 | 
142 | known_mask = np.zeros((len(target_feats)), dtype=bool) # np.bool is deprecated in recent NumPy versions
143 | known_pseudo_labels = P.n_classes*np.ones((len(target_feats)), dtype=np.uint32)
144 | known_gt_labels = P.n_classes*np.ones((len(target_feats)), dtype=np.uint32)
145 | for idx, (tgt_feat, tgt_gt_label) in enumerate(zip(target_feats, target_gt_labels)):
146 | 
147 | similarities = (tgt_feat*hyp_prototypes).sum(dim=1)
148 | similarities = rescale_cosine_similarity(similarities)
149 | 
150 | highest = similarities.max()
151 | cls_id = similarities.argmax()
152 | 
153 | # check whether it is near enough to the nearest prototype to be considered known
154 | if highest >= (1 - known_threshold):
155 | known_mask[idx] = True
156 | known_pseudo_labels[idx] = cls_id
157 | 
158 | known_gt_labels[idx] = tgt_gt_label.item()
159 | if tgt_gt_label > P.n_classes: # unknown class
160 | known_gt_labels[idx] = P.n_classes
161 | 
162 | return known_mask, known_pseudo_labels, known_gt_labels
163 | 
164 | def openset_eval(P, model, source_loader, target_loader, logger, eval_layer="simclr"):
165 | 
166 | distributed = isinstance(source_loader.sampler, torch.utils.data.distributed.DistributedSampler)
167 | 
168 | model.eval()
169 | source_feats, hyp_prototypes, cls_compactness_avg, avg_min_sim = compute_source_prototypes(P, model, source_loader, eval_layer, distributed=distributed)
170 | print(f"Class compactness avg: {cls_compactness_avg}, avg_min_sim: {avg_min_sim}")
171 | 
172 | # now we get features for target samples
173 | target_feats, target_gt_labels = get_features(P, model, [target_loader], layer=eval_layer, normalize=True, distributed=distributed)
174 | 
175 | # define counters we need for openset eval
176 | samples_per_class = np.zeros(P.n_classes + 1) # known classes + 1 unknown bin
177 | correct_pred_per_class = np.zeros(P.n_classes + 1) # known classes + 1 unknown bin
178 | 
179 | # for each target sample we have to make a prediction. So we compare it with all the prototypes.
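# A worked example of the decision rule below (illustrative numbers, not taken from any dataset):
# with cls_compactness_avg = 0.9 and avg_min_sim = 0.6 we get x = 0.1 and y = 0.4, so
# compute_threshold_multiplier returns ln(0.4 / (2 * 0.1)) + 1 = ln(2) + 1, approximately 1.69,
# and normality_threshold = 0.1 * 1.69, approximately 0.169. A target sample is then predicted
# as known only if its rescaled similarity to the nearest prototype is at least
# 1 - 0.169, approximately 0.83; otherwise it is rejected as unknown.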
180 | # the sample is associated with the class of the nearest prototype if its similarity with this prototype 181 | # is higher than a certain threshold 182 | threshold_multiplier = compute_threshold_multiplier(cls_compactness_avg, avg_min_sim) 183 | normality_threshold = (1-cls_compactness_avg)*threshold_multiplier 184 | for tgt_feat, tgt_gt_label in zip(target_feats, target_gt_labels): 185 | 186 | similarities = (tgt_feat*hyp_prototypes).sum(dim=1) 187 | similarities = rescale_cosine_similarity(similarities) 188 | 189 | highest = similarities.max() 190 | cls_id = similarities.argmax() 191 | 192 | # check whether it is near enough to nearest prototype to be considered known 193 | if highest < (1 - normality_threshold): 194 | # this is the unknown cls_id 195 | cls_id = P.n_classes 196 | 197 | # accumulate prediction 198 | if tgt_gt_label > P.n_classes: 199 | tgt_gt_label = P.n_classes 200 | samples_per_class[tgt_gt_label] += 1 201 | if cls_id == tgt_gt_label: 202 | correct_pred_per_class[cls_id] += 1 203 | 204 | acc_os_star = np.mean(correct_pred_per_class[0:len(correct_pred_per_class)-1] / samples_per_class[0:len(correct_pred_per_class)-1]) 205 | acc_unknown = (correct_pred_per_class[-1] / samples_per_class[-1]) 206 | acc_hos = 2 * (acc_os_star * acc_unknown) / (acc_os_star + acc_unknown) 207 | acc_os = np.mean(correct_pred_per_class/ samples_per_class) 208 | 209 | acc_os *= 100 210 | acc_os_star *= 100 211 | acc_unknown *= 100 212 | acc_hos *= 100 213 | 214 | x = (1 - cls_compactness_avg) 215 | y = (1 - avg_min_sim) 216 | 217 | if logger is None: 218 | print('[OS %6f]' %(acc_os)) 219 | print('[OS* %6f]' % (acc_os_star)) 220 | print('[UNK %6f]' % (acc_unknown)) 221 | print('[HOS %6f]' % (acc_hos)) 222 | else: 223 | logger.log('[OS %6f]' %(acc_os)) 224 | logger.log('[OS* %6f]' % (acc_os_star)) 225 | logger.log('[UNK %6f]' % (acc_unknown)) 226 | logger.log('[HOS %6f]' % (acc_hos)) 227 | 228 | -------------------------------------------------------------------------------- /image.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/silvia1993/HyMOS/86bb5165c3ad921da2ffb00aa5e34ef9c38ea9c0/image.jpeg -------------------------------------------------------------------------------- /models/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/silvia1993/HyMOS/86bb5165c3ad921da2ffb00aa5e34ef9c38ea9c0/models/__init__.py -------------------------------------------------------------------------------- /models/adain.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import torch.nn.functional as F 3 | from torchvision.models import vgg19 4 | 5 | 6 | def calc_mean_std(features): 7 | """ 8 | 9 | :param features: shape of features -> [batch_size, c, h, w] 10 | :return: features_mean, feature_s: shape of mean/std ->[batch_size, c, 1, 1] 11 | """ 12 | 13 | batch_size, c = features.size()[:2] 14 | features_mean = features.reshape(batch_size, c, -1).mean(dim=2).reshape(batch_size, c, 1, 1) 15 | features_std = features.reshape(batch_size, c, -1).std(dim=2).reshape(batch_size, c, 1, 1) + 1e-6 16 | return features_mean, features_std 17 | 18 | 19 | def adain(content_features, style_features): 20 | """ 21 | Adaptive Instance Normalization 22 | 23 | :param content_features: shape -> [batch_size, c, h, w] 24 | :param style_features: shape -> [batch_size, c, h, w] 25 | :return: normalized_features shape -> 
[batch_size, c, h, w] 26 | """ 27 | content_mean, content_std = calc_mean_std(content_features) 28 | style_mean, style_std = calc_mean_std(style_features) 29 | normalized_features = style_std * (content_features - content_mean) / content_std + style_mean 30 | return normalized_features 31 | 32 | 33 | class VGGEncoder(nn.Module): 34 | def __init__(self): 35 | super().__init__() 36 | vgg = vgg19(pretrained=True).features 37 | self.slice1 = vgg[: 2] 38 | self.slice2 = vgg[2: 7] 39 | self.slice3 = vgg[7: 12] 40 | self.slice4 = vgg[12: 21] 41 | for p in self.parameters(): 42 | p.requires_grad = False 43 | 44 | def forward(self, images, output_last_feature=False): 45 | h1 = self.slice1(images) 46 | h2 = self.slice2(h1) 47 | h3 = self.slice3(h2) 48 | h4 = self.slice4(h3) 49 | if output_last_feature: 50 | return h4 51 | else: 52 | return h1, h2, h3, h4 53 | 54 | 55 | class RC(nn.Module): 56 | """A wrapper of ReflectionPad2d and Conv2d""" 57 | def __init__(self, in_channels, out_channels, kernel_size=3, pad_size=1, activated=True): 58 | super().__init__() 59 | self.pad = nn.ReflectionPad2d((pad_size, pad_size, pad_size, pad_size)) 60 | self.conv = nn.Conv2d(in_channels, out_channels, kernel_size) 61 | self.activated = activated 62 | 63 | def forward(self, x): 64 | h = self.pad(x) 65 | h = self.conv(h) 66 | if self.activated: 67 | return F.relu(h) 68 | else: 69 | return h 70 | 71 | 72 | class Decoder(nn.Module): 73 | def __init__(self): 74 | super().__init__() 75 | self.rc1 = RC(512, 256, 3, 1) 76 | self.rc2 = RC(256, 256, 3, 1) 77 | self.rc3 = RC(256, 256, 3, 1) 78 | self.rc4 = RC(256, 256, 3, 1) 79 | self.rc5 = RC(256, 128, 3, 1) 80 | self.rc6 = RC(128, 128, 3, 1) 81 | self.rc7 = RC(128, 64, 3, 1) 82 | self.rc8 = RC(64, 64, 3, 1) 83 | self.rc9 = RC(64, 3, 3, 1, False) 84 | 85 | def forward(self, features): 86 | h = self.rc1(features) 87 | h = F.interpolate(h, scale_factor=2) 88 | h = self.rc2(h) 89 | h = self.rc3(h) 90 | h = self.rc4(h) 91 | h = self.rc5(h) 92 | h = F.interpolate(h, scale_factor=2) 93 | h = self.rc6(h) 94 | h = self.rc7(h) 95 | h = F.interpolate(h, scale_factor=2) 96 | h = self.rc8(h) 97 | h = self.rc9(h) 98 | return h 99 | 100 | 101 | class AdaIN(nn.Module): 102 | def __init__(self): 103 | super().__init__() 104 | self.vgg_encoder = VGGEncoder() 105 | self.decoder = Decoder() 106 | 107 | def generate(self, content_images, style_images, alpha=1.0): 108 | # data should already have passed ToTensor and Normalize 109 | content_features = self.vgg_encoder(content_images, output_last_feature=True) 110 | style_features = self.vgg_encoder(style_images, output_last_feature=True) 111 | t = adain(content_features, style_features) 112 | t = alpha * t + (1 - alpha) * content_features 113 | out = self.decoder(t) 114 | return out 115 | 116 | @staticmethod 117 | def calc_content_loss(out_features, t): 118 | return F.mse_loss(out_features, t) 119 | 120 | @staticmethod 121 | def calc_style_loss(content_middle_features, style_middle_features): 122 | loss = 0 123 | for c, s in zip(content_middle_features, style_middle_features): 124 | c_mean, c_std = calc_mean_std(c) 125 | s_mean, s_std = calc_mean_std(s) 126 | loss += F.mse_loss(c_mean, s_mean) + F.mse_loss(c_std, s_std) 127 | return loss 128 | 129 | def forward(self, content_images, style_images, alpha=1.0, lam=10): 130 | content_features = self.vgg_encoder(content_images, output_last_feature=True) 131 | style_features = self.vgg_encoder(style_images, output_last_feature=True) 132 | t = adain(content_features, style_features) 133 | t = alpha * t + 
(1 - alpha) * content_features 134 | out = self.decoder(t) 135 | 136 | output_features = self.vgg_encoder(out, output_last_feature=True) 137 | output_middle_features = self.vgg_encoder(out, output_last_feature=False) 138 | style_middle_features = self.vgg_encoder(style_images, output_last_feature=False) 139 | 140 | loss_c = self.calc_content_loss(output_features, t) 141 | loss_s = self.calc_style_loss(output_middle_features, style_middle_features) 142 | loss = loss_c + lam * loss_s 143 | return loss 144 | -------------------------------------------------------------------------------- /models/base_model.py: -------------------------------------------------------------------------------- 1 | from abc import * 2 | import torch.nn as nn 3 | 4 | class BaseModel(nn.Module, metaclass=ABCMeta): 5 | def __init__(self, last_dim, num_classes=10, simclr_dim=128): 6 | super(BaseModel, self).__init__() 7 | self.simclr_layer = nn.Sequential( 8 | nn.Linear(last_dim, last_dim), 9 | nn.ReLU(), 10 | nn.Linear(last_dim, simclr_dim), 11 | ) 12 | 13 | @abstractmethod 14 | def penultimate(self, inputs, all_features=False): 15 | pass 16 | 17 | def forward(self, inputs, penultimate=False, simclr=False): 18 | _aux = {} 19 | _return_aux = False 20 | 21 | features = self.penultimate(inputs) 22 | 23 | if penultimate: 24 | _return_aux = True 25 | _aux['penultimate'] = features 26 | 27 | if simclr: 28 | _return_aux = True 29 | _aux['simclr'] = self.simclr_layer(features) 30 | 31 | if _return_aux: 32 | return None, _aux 33 | 34 | return None 35 | -------------------------------------------------------------------------------- /models/classifier.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import torch 3 | 4 | from models.resnet_imagenet import resnet50 5 | import models.transform_layers as TL 6 | 7 | def get_simclr_augmentation(P, image_size): 8 | 9 | # parameter for resizecrop 10 | resize_scale = (P.resize_factor, 1.0) # resize scaling factor 11 | 12 | # Align augmentation 13 | color_jitter = TL.ColorJitterLayer(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1, p=0.8) 14 | color_gray = TL.RandomColorGrayLayer(p=0.2) 15 | resize_crop = TL.RandomResizedCropLayer(scale=resize_scale, size=image_size) 16 | 17 | # disable resize_crop 18 | resize_crop = nn.Identity() 19 | 20 | if P.dataset == 'imagenet': # Using RandomResizedCrop at PIL transform 21 | transform = nn.Sequential( 22 | color_jitter, 23 | color_gray, 24 | ) 25 | else: 26 | transform = nn.Sequential( 27 | color_jitter, 28 | color_gray, 29 | resize_crop, 30 | ) 31 | 32 | return transform 33 | 34 | def get_simclr_augmentation_crop_only(P, image_size): 35 | # parameter for resizecrop 36 | resize_scale = (P.resize_factor, 1.0) # resize scaling factor 37 | 38 | # Align augmentation 39 | resize_crop = TL.RandomResizedCropLayer(scale=resize_scale, size=image_size) 40 | 41 | # disable resize crop 42 | resize_crop = nn.Identity() 43 | # Transform define # 44 | transform = nn.Sequential(resize_crop) 45 | return transform 46 | 47 | def get_classifier(mode, n_classes=10, pretrain=None): 48 | if mode == 'resnet50_imagenet': 49 | classifier = resnet50(num_classes=n_classes) 50 | if not pretrain is None: 51 | ckpt = torch.load(pretrain) 52 | if 'state_dict' in ckpt: 53 | state_dict = ckpt['state_dict'] 54 | else: 55 | state_dict = ckpt 56 | missing, unexpected = classifier.load_state_dict(state_dict, strict=False) 57 | print(f"Loaded model from {pretrain}") 58 | print(f"Missing keys: {missing}\nUnexpected 
keys: {unexpected}") 59 | else: 60 | raise NotImplementedError() 61 | 62 | return classifier 63 | 64 | -------------------------------------------------------------------------------- /models/resnet_imagenet.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | 4 | from models.base_model import BaseModel 5 | from models.transform_layers import NormalizeLayer 6 | 7 | 8 | def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): 9 | """3x3 convolution with padding""" 10 | return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, 11 | padding=dilation, groups=groups, bias=False, dilation=dilation) 12 | 13 | 14 | def conv1x1(in_planes, out_planes, stride=1): 15 | """1x1 convolution""" 16 | return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) 17 | 18 | 19 | class BasicBlock(nn.Module): 20 | expansion = 1 21 | 22 | def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, 23 | base_width=64, dilation=1, norm_layer=None): 24 | super(BasicBlock, self).__init__() 25 | if norm_layer is None: 26 | norm_layer = nn.BatchNorm2d 27 | if groups != 1 or base_width != 64: 28 | raise ValueError('BasicBlock only supports groups=1 and base_width=64') 29 | if dilation > 1: 30 | raise NotImplementedError("Dilation > 1 not supported in BasicBlock") 31 | # Both self.conv1 and self.downsample layers downsample the input when stride != 1 32 | self.conv1 = conv3x3(inplanes, planes, stride) 33 | self.bn1 = norm_layer(planes) 34 | self.relu = nn.ReLU(inplace=True) 35 | self.conv2 = conv3x3(planes, planes) 36 | self.bn2 = norm_layer(planes) 37 | self.downsample = downsample 38 | self.stride = stride 39 | 40 | def forward(self, x): 41 | identity = x 42 | 43 | out = self.conv1(x) 44 | out = self.bn1(out) 45 | out = self.relu(out) 46 | 47 | out = self.conv2(out) 48 | out = self.bn2(out) 49 | 50 | if self.downsample is not None: 51 | identity = self.downsample(x) 52 | 53 | out += identity 54 | out = self.relu(out) 55 | 56 | return out 57 | 58 | 59 | class Bottleneck(nn.Module): 60 | # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2) 61 | # while original implementation places the stride at the first 1x1 convolution(self.conv1) 62 | # according to "Deep residual learning for image recognition"https://arxiv.org/abs/1512.03385. 63 | # This variant is also known as ResNet V1.5 and improves accuracy according to 64 | # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch. 
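# (For a concrete instance: in the first block of layer2, V1.5 places the stride-2 on the
# 3x3 conv2, i.e. conv1 1x1 stride 1, conv2 3x3 stride 2, conv3 1x1 stride 1, whereas the
# original V1 would put stride 2 on the 1x1 conv1.)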
65 | 66 | expansion = 4 67 | 68 | def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, 69 | base_width=64, dilation=1, norm_layer=None): 70 | super(Bottleneck, self).__init__() 71 | if norm_layer is None: 72 | norm_layer = nn.BatchNorm2d 73 | width = int(planes * (base_width / 64.)) * groups 74 | # Both self.conv2 and self.downsample layers downsample the input when stride != 1 75 | self.conv1 = conv1x1(inplanes, width) 76 | self.bn1 = norm_layer(width) 77 | self.conv2 = conv3x3(width, width, stride, groups, dilation) 78 | self.bn2 = norm_layer(width) 79 | self.conv3 = conv1x1(width, planes * self.expansion) 80 | self.bn3 = norm_layer(planes * self.expansion) 81 | self.relu = nn.ReLU(inplace=True) 82 | self.downsample = downsample 83 | self.stride = stride 84 | 85 | def forward(self, x): 86 | identity = x 87 | 88 | out = self.conv1(x) 89 | out = self.bn1(out) 90 | out = self.relu(out) 91 | 92 | out = self.conv2(out) 93 | out = self.bn2(out) 94 | out = self.relu(out) 95 | 96 | out = self.conv3(out) 97 | out = self.bn3(out) 98 | 99 | if self.downsample is not None: 100 | identity = self.downsample(x) 101 | 102 | out += identity 103 | out = self.relu(out) 104 | 105 | return out 106 | 107 | 108 | class ResNet(BaseModel): 109 | def __init__(self, block, layers, num_classes=10, 110 | zero_init_residual=False, groups=1, width_per_group=64, replace_stride_with_dilation=None, 111 | norm_layer=None): 112 | last_dim = 512 * block.expansion 113 | super(ResNet, self).__init__(last_dim, num_classes) 114 | if norm_layer is None: 115 | norm_layer = nn.BatchNorm2d 116 | self._norm_layer = norm_layer 117 | 118 | self.inplanes = 64 119 | self.dilation = 1 120 | if replace_stride_with_dilation is None: 121 | # each element in the tuple indicates if we should replace 122 | # the 2x2 stride with a dilated convolution instead 123 | replace_stride_with_dilation = [False, False, False] 124 | if len(replace_stride_with_dilation) != 3: 125 | raise ValueError("replace_stride_with_dilation should be None " 126 | "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) 127 | self.groups = groups 128 | self.base_width = width_per_group 129 | self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, 130 | bias=False) 131 | self.bn1 = norm_layer(self.inplanes) 132 | self.relu = nn.ReLU(inplace=True) 133 | self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) 134 | self.layer1 = self._make_layer(block, 64, layers[0]) 135 | self.layer2 = self._make_layer(block, 128, layers[1], stride=2, 136 | dilate=replace_stride_with_dilation[0]) 137 | self.layer3 = self._make_layer(block, 256, layers[2], stride=2, 138 | dilate=replace_stride_with_dilation[1]) 139 | self.layer4 = self._make_layer(block, 512, layers[3], stride=2, 140 | dilate=replace_stride_with_dilation[2]) 141 | self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) 142 | self.normalize = NormalizeLayer() 143 | self.last_dim = 512 * block.expansion 144 | 145 | for m in self.modules(): 146 | if isinstance(m, nn.Conv2d): 147 | nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') 148 | elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): 149 | nn.init.constant_(m.weight, 1) 150 | nn.init.constant_(m.bias, 0) 151 | 152 | # Zero-initialize the last BN in each residual branch, 153 | # so that the residual branch starts with zeros, and each residual block behaves like an identity. 
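# (Concretely: with m.bn3.weight zeroed the residual branch outputs 0, so out = relu(identity + 0)
# and the block reduces to the identity mapping at initialization.)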
154 | # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 155 | if zero_init_residual: 156 | for m in self.modules(): 157 | if isinstance(m, Bottleneck): 158 | nn.init.constant_(m.bn3.weight, 0) 159 | elif isinstance(m, BasicBlock): 160 | nn.init.constant_(m.bn2.weight, 0) 161 | 162 | def _make_layer(self, block, planes, blocks, stride=1, dilate=False): 163 | norm_layer = self._norm_layer 164 | downsample = None 165 | previous_dilation = self.dilation 166 | if dilate: 167 | self.dilation *= stride 168 | stride = 1 169 | if stride != 1 or self.inplanes != planes * block.expansion: 170 | downsample = nn.Sequential( 171 | conv1x1(self.inplanes, planes * block.expansion, stride), 172 | norm_layer(planes * block.expansion), 173 | ) 174 | 175 | layers = [] 176 | layers.append(block(self.inplanes, planes, stride, downsample, self.groups, 177 | self.base_width, previous_dilation, norm_layer)) 178 | self.inplanes = planes * block.expansion 179 | for _ in range(1, blocks): 180 | layers.append(block(self.inplanes, planes, groups=self.groups, 181 | base_width=self.base_width, dilation=self.dilation, 182 | norm_layer=norm_layer)) 183 | 184 | return nn.Sequential(*layers) 185 | 186 | def penultimate(self, x, all_features=False): 187 | # See note [TorchScript super()] 188 | out_list = [] 189 | 190 | x = self.normalize(x) 191 | x = self.conv1(x) 192 | x = self.bn1(x) 193 | x = self.relu(x) 194 | x = self.maxpool(x) 195 | out_list.append(x) 196 | 197 | x = self.layer1(x) 198 | out_list.append(x) 199 | x = self.layer2(x) 200 | out_list.append(x) 201 | x = self.layer3(x) 202 | out_list.append(x) 203 | x = self.layer4(x) 204 | out_list.append(x) 205 | 206 | x = self.avgpool(x) 207 | x = torch.flatten(x, 1) 208 | 209 | if all_features: 210 | return x, out_list 211 | else: 212 | return x 213 | 214 | 215 | def _resnet(arch, block, layers, **kwargs): 216 | model = ResNet(block, layers, **kwargs) 217 | return model 218 | 219 | 220 | def resnet18(**kwargs): 221 | r"""ResNet-18 model from 222 | `"Deep Residual Learning for Image Recognition" `_ 223 | """ 224 | return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], **kwargs) 225 | 226 | 227 | def resnet50(**kwargs): 228 | r"""ResNet-50 model from 229 | `"Deep Residual Learning for Image Recognition" `_ 230 | """ 231 | return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], **kwargs) 232 | -------------------------------------------------------------------------------- /models/transform_layers.py: -------------------------------------------------------------------------------- 1 | import math 2 | import numbers 3 | import numpy as np 4 | 5 | import torch 6 | import torch.nn as nn 7 | import torch.nn.functional as F 8 | from torch.autograd import Function 9 | 10 | if torch.__version__ >= '1.4.0': 11 | kwargs = {'align_corners': False} 12 | else: 13 | kwargs = {} 14 | 15 | 16 | def rgb2hsv(rgb): 17 | """Convert a 4-d RGB tensor to the HSV counterpart. 18 | 19 | Here, we compute hue using atan2() based on the definition in [1], 20 | instead of using the common lookup table approach as in [2, 3]. 21 | Those values agree when the angle is a multiple of 30°, 22 | otherwise they may differ at most ~1.2°. 
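Quick worked check: a pure red pixel (r, g, b) = (1, 0, 0) yields
hue = atan2(0, 2) = 0, saturation = (1 - 0) / 1 = 1 and value = 1,
i.e. hsv = (0, 1, 1), matching the lookup-table convention.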
23 | 24 | References 25 | [1] https://en.wikipedia.org/wiki/Hue 26 | [2] https://www.rapidtables.com/convert/color/rgb-to-hsv.html 27 | [3] https://github.com/scikit-image/scikit-image/blob/master/skimage/color/colorconv.py#L212 28 | """ 29 | 30 | r, g, b = rgb[:, 0, :, :], rgb[:, 1, :, :], rgb[:, 2, :, :] 31 | 32 | Cmax = rgb.max(1)[0] 33 | Cmin = rgb.min(1)[0] 34 | delta = Cmax - Cmin 35 | 36 | hue = torch.atan2(math.sqrt(3) * (g - b), 2 * r - g - b) 37 | hue = (hue % (2 * math.pi)) / (2 * math.pi) 38 | saturate = delta / Cmax 39 | value = Cmax 40 | hsv = torch.stack([hue, saturate, value], dim=1) 41 | hsv[~torch.isfinite(hsv)] = 0. 42 | return hsv 43 | 44 | 45 | def hsv2rgb(hsv): 46 | """Convert a 4-d HSV tensor to the RGB counterpart. 47 | 48 | >>> %timeit hsv2rgb(hsv) 49 | 2.37 ms ± 13.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) 50 | >>> %timeit rgb2hsv_fast(rgb) 51 | 298 µs ± 542 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each) 52 | >>> torch.allclose(hsv2rgb(hsv), hsv2rgb_fast(hsv), atol=1e-6) 53 | True 54 | 55 | References 56 | [1] https://en.wikipedia.org/wiki/HSL_and_HSV#HSV_to_RGB_alternative 57 | """ 58 | h, s, v = hsv[:, [0]], hsv[:, [1]], hsv[:, [2]] 59 | c = v * s 60 | 61 | n = hsv.new_tensor([5, 3, 1]).view(3, 1, 1) 62 | k = (n + h * 6) % 6 63 | t = torch.min(k, 4 - k) 64 | t = torch.clamp(t, 0, 1) 65 | 66 | return v - c * t 67 | 68 | 69 | class RandomResizedCropLayer(nn.Module): 70 | def __init__(self, size=None, scale=(0.08, 1.0), ratio=(3. / 4., 4. / 3.)): 71 | ''' 72 | Inception Crop 73 | size (tuple): size of fowarding image (C, W, H) 74 | scale (tuple): range of size of the origin size cropped 75 | ratio (tuple): range of aspect ratio of the origin aspect ratio cropped 76 | ''' 77 | super(RandomResizedCropLayer, self).__init__() 78 | 79 | _eye = torch.eye(2, 3) 80 | self.size = size 81 | self.register_buffer('_eye', _eye) 82 | self.scale = scale 83 | self.ratio = ratio 84 | 85 | def forward(self, inputs, whbias=None): 86 | _device = inputs.device 87 | N = inputs.size(0) 88 | _theta = self._eye.repeat(N, 1, 1) 89 | 90 | if whbias is None: 91 | whbias = self._sample_latent(inputs) 92 | 93 | _theta[:, 0, 0] = whbias[:, 0] 94 | _theta[:, 1, 1] = whbias[:, 1] 95 | _theta[:, 0, 2] = whbias[:, 2] 96 | _theta[:, 1, 2] = whbias[:, 3] 97 | 98 | grid = F.affine_grid(_theta, inputs.size(), **kwargs).to(_device) 99 | output = F.grid_sample(inputs, grid, padding_mode='reflection', **kwargs) 100 | 101 | if self.size is not None: 102 | output = F.adaptive_avg_pool2d(output, self.size) 103 | 104 | return output 105 | 106 | def _clamp(self, whbias): 107 | 108 | w = whbias[:, 0] 109 | h = whbias[:, 1] 110 | w_bias = whbias[:, 2] 111 | h_bias = whbias[:, 3] 112 | 113 | # Clamp with scale 114 | w = torch.clamp(w, *self.scale) 115 | h = torch.clamp(h, *self.scale) 116 | 117 | # Clamp with ratio 118 | w = self.ratio[0] * h + torch.relu(w - self.ratio[0] * h) 119 | w = self.ratio[1] * h - torch.relu(self.ratio[1] * h - w) 120 | 121 | # Clamp with bias range: w_bias \in (w - 1, 1 - w), h_bias \in (h - 1, 1 - h) 122 | w_bias = w - 1 + torch.relu(w_bias - w + 1) 123 | w_bias = 1 - w - torch.relu(1 - w - w_bias) 124 | 125 | h_bias = h - 1 + torch.relu(h_bias - h + 1) 126 | h_bias = 1 - h - torch.relu(1 - h - h_bias) 127 | 128 | whbias = torch.stack([w, h, w_bias, h_bias], dim=0).t() 129 | 130 | return whbias 131 | 132 | def _sample_latent(self, inputs): 133 | 134 | _device = inputs.device 135 | N, _, width, height = inputs.shape 136 | 137 | # N * 10 trial 138 | area = 
width * height 139 | target_area = np.random.uniform(*self.scale, N * 10) * area 140 | log_ratio = (math.log(self.ratio[0]), math.log(self.ratio[1])) 141 | aspect_ratio = np.exp(np.random.uniform(*log_ratio, N * 10)) 142 | 143 | # If doesn't satisfy ratio condition, then do central crop 144 | w = np.round(np.sqrt(target_area * aspect_ratio)) 145 | h = np.round(np.sqrt(target_area / aspect_ratio)) 146 | cond = (0 < w) * (w <= width) * (0 < h) * (h <= height) 147 | w = w[cond] 148 | h = h[cond] 149 | cond_len = w.shape[0] 150 | if cond_len >= N: 151 | w = w[:N] 152 | h = h[:N] 153 | else: 154 | w = np.concatenate([w, np.ones(N - cond_len) * width]) 155 | h = np.concatenate([h, np.ones(N - cond_len) * height]) 156 | 157 | w_bias = np.random.randint(w - width, width - w + 1) / width 158 | h_bias = np.random.randint(h - height, height - h + 1) / height 159 | w = w / width 160 | h = h / height 161 | 162 | whbias = np.column_stack([w, h, w_bias, h_bias]) 163 | whbias = torch.tensor(whbias, device=_device) 164 | 165 | return whbias 166 | 167 | 168 | class HorizontalFlipRandomCrop(nn.Module): 169 | def __init__(self, max_range): 170 | super(HorizontalFlipRandomCrop, self).__init__() 171 | self.max_range = max_range 172 | _eye = torch.eye(2, 3) 173 | self.register_buffer('_eye', _eye) 174 | 175 | def forward(self, input, sign=None, bias=None, rotation=None): 176 | _device = input.device 177 | N = input.size(0) 178 | _theta = self._eye.repeat(N, 1, 1) 179 | 180 | if sign is None: 181 | sign = torch.bernoulli(torch.ones(N, device=_device) * 0.5) * 2 - 1 182 | if bias is None: 183 | bias = torch.empty((N, 2), device=_device).uniform_(-self.max_range, self.max_range) 184 | _theta[:, 0, 0] = sign 185 | _theta[:, :, 2] = bias 186 | 187 | if rotation is not None: 188 | _theta[:, 0:2, 0:2] = rotation 189 | 190 | grid = F.affine_grid(_theta, input.size(), **kwargs).to(_device) 191 | output = F.grid_sample(input, grid, padding_mode='reflection', **kwargs) 192 | 193 | return output 194 | 195 | def _sample_latent(self, N, device=None): 196 | sign = torch.bernoulli(torch.ones(N, device=device) * 0.5) * 2 - 1 197 | bias = torch.empty((N, 2), device=device).uniform_(-self.max_range, self.max_range) 198 | return sign, bias 199 | 200 | 201 | class Rotation(nn.Module): 202 | def __init__(self, max_range = 4): 203 | super(Rotation, self).__init__() 204 | self.max_range = max_range 205 | self.prob = 0.5 206 | 207 | def forward(self, input, aug_index=None): 208 | _device = input.device 209 | 210 | _, _, H, W = input.size() 211 | 212 | if aug_index is None: 213 | aug_index = np.random.randint(4) 214 | 215 | output = torch.rot90(input, aug_index, (2, 3)) 216 | 217 | _prob = input.new_full((input.size(0),), self.prob) 218 | _mask = torch.bernoulli(_prob).view(-1, 1, 1, 1) 219 | output = _mask * input + (1-_mask) * output 220 | 221 | else: 222 | aug_index = aug_index % self.max_range 223 | output = torch.rot90(input, aug_index, (2, 3)) 224 | 225 | return output 226 | 227 | 228 | class CutPerm(nn.Module): 229 | def __init__(self, max_range = 4): 230 | super(CutPerm, self).__init__() 231 | self.max_range = max_range 232 | self.prob = 0.5 233 | 234 | def forward(self, input, aug_index=None): 235 | _device = input.device 236 | 237 | _, _, H, W = input.size() 238 | 239 | if aug_index is None: 240 | aug_index = np.random.randint(4) 241 | 242 | output = self._cutperm(input, aug_index) 243 | 244 | _prob = input.new_full((input.size(0),), self.prob) 245 | _mask = torch.bernoulli(_prob).view(-1, 1, 1, 1) 246 | output = _mask * input 
+ (1 - _mask) * output 247 | 248 | else: 249 | aug_index = aug_index % self.max_range 250 | output = self._cutperm(input, aug_index) 251 | 252 | return output 253 | 254 | def _cutperm(self, inputs, aug_index): 255 | 256 | _, _, H, W = inputs.size() 257 | h_mid = int(H / 2) 258 | w_mid = int(W / 2) 259 | 260 | jigsaw_h = aug_index // 2 261 | jigsaw_v = aug_index % 2 262 | 263 | if jigsaw_h == 1: 264 | inputs = torch.cat((inputs[:, :, h_mid:, :], inputs[:, :, 0:h_mid, :]), dim=2) 265 | if jigsaw_v == 1: 266 | inputs = torch.cat((inputs[:, :, :, w_mid:], inputs[:, :, :, 0:w_mid]), dim=3) 267 | 268 | return inputs 269 | 270 | 271 | class HorizontalFlipLayer(nn.Module): 272 | def __init__(self): 273 | """ 274 | img_size : (int, int, int) 275 | Height and width must be powers of 2. E.g. (32, 32, 1) or 276 | (64, 128, 3). Last number indicates number of channels, e.g. 1 for 277 | grayscale or 3 for RGB 278 | """ 279 | super(HorizontalFlipLayer, self).__init__() 280 | 281 | _eye = torch.eye(2, 3) 282 | self.register_buffer('_eye', _eye) 283 | 284 | def forward(self, inputs): 285 | _device = inputs.device 286 | 287 | N = inputs.size(0) 288 | _theta = self._eye.repeat(N, 1, 1) 289 | r_sign = torch.bernoulli(torch.ones(N, device=_device) * 0.5) * 2 - 1 290 | _theta[:, 0, 0] = r_sign 291 | grid = F.affine_grid(_theta, inputs.size(), **kwargs).to(_device) 292 | inputs = F.grid_sample(inputs, grid, padding_mode='reflection', **kwargs) 293 | 294 | return inputs 295 | 296 | 297 | class RandomColorGrayLayer(nn.Module): 298 | def __init__(self, p): 299 | super(RandomColorGrayLayer, self).__init__() 300 | self.prob = p 301 | 302 | _weight = torch.tensor([[0.299, 0.587, 0.114]]) 303 | self.register_buffer('_weight', _weight.view(1, 3, 1, 1)) 304 | 305 | def forward(self, inputs, aug_index=None): 306 | 307 | if aug_index == 0: 308 | return inputs 309 | 310 | l = F.conv2d(inputs, self._weight) 311 | gray = torch.cat([l, l, l], dim=1) 312 | 313 | if aug_index is None: 314 | _prob = inputs.new_full((inputs.size(0),), self.prob) 315 | _mask = torch.bernoulli(_prob).view(-1, 1, 1, 1) 316 | 317 | gray = inputs * (1 - _mask) + gray * _mask 318 | 319 | return gray 320 | 321 | 322 | class ColorJitterLayer(nn.Module): 323 | def __init__(self, p, brightness, contrast, saturation, hue): 324 | super(ColorJitterLayer, self).__init__() 325 | self.prob = p 326 | self.brightness = self._check_input(brightness, 'brightness') 327 | self.contrast = self._check_input(contrast, 'contrast') 328 | self.saturation = self._check_input(saturation, 'saturation') 329 | self.hue = self._check_input(hue, 'hue', center=0, bound=(-0.5, 0.5), 330 | clip_first_on_zero=False) 331 | 332 | def _check_input(self, value, name, center=1, bound=(0, float('inf')), clip_first_on_zero=True): 333 | if isinstance(value, numbers.Number): 334 | if value < 0: 335 | raise ValueError("If {} is a single number, it must be non negative.".format(name)) 336 | value = [center - value, center + value] 337 | if clip_first_on_zero: 338 | value[0] = max(value[0], 0) 339 | elif isinstance(value, (tuple, list)) and len(value) == 2: 340 | if not bound[0] <= value[0] <= value[1] <= bound[1]: 341 | raise ValueError("{} values should be between {}".format(name, bound)) 342 | else: 343 | raise TypeError("{} should be a single number or a list/tuple with lenght 2.".format(name)) 344 | 345 | # if value is 0 or (1., 1.) for brightness/contrast/saturation 346 | # or (0., 0.) 
for hue, do nothing 347 | if value[0] == value[1] == center: 348 | value = None 349 | return value 350 | 351 | def adjust_contrast(self, x): 352 | if self.contrast: 353 | factor = x.new_empty(x.size(0), 1, 1, 1).uniform_(*self.contrast) 354 | means = torch.mean(x, dim=[2, 3], keepdim=True) 355 | x = (x - means) * factor + means 356 | return torch.clamp(x, 0, 1) 357 | 358 | def adjust_hsv(self, x): 359 | f_h = x.new_zeros(x.size(0), 1, 1) 360 | f_s = x.new_ones(x.size(0), 1, 1) 361 | f_v = x.new_ones(x.size(0), 1, 1) 362 | 363 | if self.hue: 364 | f_h.uniform_(*self.hue) 365 | if self.saturation: 366 | f_s = f_s.uniform_(*self.saturation) 367 | if self.brightness: 368 | f_v = f_v.uniform_(*self.brightness) 369 | 370 | return RandomHSVFunction.apply(x, f_h, f_s, f_v) 371 | 372 | def transform(self, inputs): 373 | # Shuffle transform 374 | if np.random.rand() > 0.5: 375 | transforms = [self.adjust_contrast, self.adjust_hsv] 376 | else: 377 | transforms = [self.adjust_hsv, self.adjust_contrast] 378 | 379 | for t in transforms: 380 | inputs = t(inputs) 381 | 382 | return inputs 383 | 384 | def forward(self, inputs): 385 | _prob = inputs.new_full((inputs.size(0),), self.prob) 386 | _mask = torch.bernoulli(_prob).view(-1, 1, 1, 1) 387 | return inputs * (1 - _mask) + self.transform(inputs) * _mask 388 | 389 | 390 | class RandomHSVFunction(Function): 391 | @staticmethod 392 | def forward(ctx, x, f_h, f_s, f_v): 393 | # ctx is a context object that can be used to stash information 394 | # for backward computation 395 | x = rgb2hsv(x) 396 | h = x[:, 0, :, :] 397 | h += (f_h * 255. / 360.) 398 | h = (h % 1) 399 | x[:, 0, :, :] = h 400 | x[:, 1, :, :] = x[:, 1, :, :] * f_s 401 | x[:, 2, :, :] = x[:, 2, :, :] * f_v 402 | x = torch.clamp(x, 0, 1) 403 | x = hsv2rgb(x) 404 | return x 405 | 406 | @staticmethod 407 | def backward(ctx, grad_output): 408 | # We return as many input gradients as there were arguments. 409 | # Gradients of non-Tensor arguments to forward must be None. 410 | grad_input = None 411 | if ctx.needs_input_grad[0]: 412 | grad_input = grad_output.clone() 413 | return grad_input, None, None, None 414 | 415 | 416 | class NormalizeLayer(nn.Module): 417 | """ 418 | In order to certify radii in original coordinates rather than standardized coordinates, we 419 | add the Gaussian noise _before_ standardizing, which is why we have standardization be the first 420 | layer of the classifier rather than as a part of preprocessing as is typical. 
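With the default scalar mean = std = 0.5, inputs in [0, 1] are mapped to [-1, 1].
Per-channel statistics can be supplied as tensors instead; forward then broadcasts
them over the channel dimension via the NHWC permutation below.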
421 | """ 422 | 423 | def __init__(self, mean=0.5, std=0.5): 424 | super(NormalizeLayer, self).__init__() 425 | self.mean=mean 426 | self.std=std 427 | 428 | def forward(self, inputs): 429 | if isinstance(self.mean, torch.Tensor): 430 | return ((inputs.permute(0,2,3,1)-self.mean)/self.std).permute(0,3,1,2) 431 | else: 432 | return (inputs - self.mean) / self.std 433 | 434 | -------------------------------------------------------------------------------- /train.py: -------------------------------------------------------------------------------- 1 | from utils.utils import Logger 2 | from utils.utils import save_checkpoint, save_checkpoint_name 3 | from utils.utils import AverageMeter 4 | from evals.evals import compute_confident_known_mask, openset_eval 5 | from datasets.datasets import get_dataset_2 6 | import time 7 | import sys 8 | from sklearn.metrics import accuracy_score 9 | 10 | from common.train import * 11 | import torch 12 | import resource 13 | 14 | from training.sup import setup 15 | # setup training routine 16 | train, fname = setup(P) 17 | 18 | logger = Logger(fname, ask=not resume, local_rank=P.local_rank) 19 | logger.log(P) 20 | logger.log(model) 21 | 22 | # Run experiments 23 | losses = dict() 24 | losses['sim'] = AverageMeter() 25 | losses['norm'] = AverageMeter() 26 | losses['time'] = AverageMeter() 27 | check = time.time() 28 | 29 | P.style_iter = iter(P.style_loader) 30 | 31 | data_iter = iter(train_loader) 32 | 33 | for its in range(start_its, P.iterations + 1): 34 | model.train() 35 | 36 | kwargs = {} 37 | kwargs['simclr_aug'] = simclr_aug 38 | 39 | # train one iteration 40 | sim_loss, simclr_norm, train_loader, data_iter = train(P, its, model, criterion, optimizer, scheduler_warmup, train_loader, data_iter, logger=logger, **kwargs) 41 | 42 | model.eval() 43 | 44 | if its == P.iterations and P.local_rank == 0: 45 | if P.multi_gpu: 46 | save_states = model.module.state_dict() 47 | else: 48 | save_states = model.state_dict() 49 | save_checkpoint(its, save_states, optimizer.state_dict(), logger.logdir) 50 | save_checkpoint_name(save_states, logger.logdir, f"{its}") 51 | logger.log("[Saving checkpoint]") 52 | 53 | # log 54 | losses['sim'].update(sim_loss, 1) 55 | losses['norm'].update(simclr_norm, 1) 56 | losses['time'].update(time.time()-check, 1) 57 | 58 | check = time.time() 59 | if its%10 == 0 : 60 | eta_sec = (P.iterations - its)*losses['time'].average 61 | hour = eta_sec // 3600 62 | eta_sec = eta_sec % 3600 63 | min = eta_sec // 60 64 | eta_sec = eta_sec % 60 65 | 66 | lr = optimizer.param_groups[0]['lr'] 67 | 68 | logger.log('[Iteration %3d] [Loss_Sim %5.2f] [SimclrNorm %f] [LR %f] [Avg time %.2fs] [ETA %dh%dm%ds]' % (its, losses['sim'].average, 69 | losses['norm'].average, lr, losses['time'].average, hour, min, eta_sec)) 70 | 71 | losses['sim'] = AverageMeter() 72 | losses['norm'] = AverageMeter() 73 | 74 | if P.iterative and its in P.its_breakpoints: 75 | logger.log(f"Reached breakpoint iteration {its}. Start computing pseudo labels for new known samples") 76 | 77 | known_mask, known_pseudo_labels, known_gt_labels = compute_confident_known_mask(P, model, source_test_loader, target_test_loader, logger=None) 78 | 79 | acc = accuracy_score(known_gt_labels[known_mask], known_pseudo_labels[known_mask]) 80 | logger.log("Selected {} target samples as known. 
Classification accuracy for selected samples: {:.4f}".format(len(known_mask.nonzero()[0]), acc))
81 | del known_gt_labels
82 | 
83 | if len(known_mask.nonzero()[0]) > 0:
84 | train_sets, n_classes = get_dataset_2(P, train=True, target_known_mask=known_mask, target_known_pseudo_labels=known_pseudo_labels)
85 | whole_source = ConcatDataset([train_sets[idx] for idx in range(n_classes)])
86 | my_sampler = BalancedMultiSourceRandomSampler(whole_source, P.batch_p, P.local_rank, P.n_gpus)
87 | print(f"{P.local_rank} sampler_size: {len(my_sampler)}")
88 | loader_kwargs = {'pin_memory': False, 'num_workers': 1, 'drop_last': True}
89 | train_loader = DataLoader(whole_source, sampler=my_sampler, batch_size=single_GPU_batch_size, **loader_kwargs)
90 | data_iter = iter(train_loader)
91 | 
92 | 
93 | if its % 5000 == 0 or its == P.iterations:
94 | 
95 | logger.log("Running openset eval")
96 | openset_eval(P, model, source_test_loader, target_test_loader, logger=logger)
97 | 
98 | sys.exit(0)
99 | 
--------------------------------------------------------------------------------
/training/__init__.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 | 
5 | 
6 | def update_learning_rate(P, optimizer, cur_epoch, n, n_total):
7 | 
8 | cur_epoch = cur_epoch - 1
9 | 
10 | lr = P.lr_init
11 | if P.optimizer in ('sgd', 'lars'): # note: `P.optimizer == 'sgd' or 'lars'` would always be truthy
12 | DECAY_RATIO = 0.1
13 | elif P.optimizer == 'adam':
14 | DECAY_RATIO = 0.3
15 | else:
16 | raise NotImplementedError()
17 | 
18 | if P.warmup > 0:
19 | cur_iter = cur_epoch * n_total + n
20 | if cur_iter <= P.warmup:
21 | lr *= cur_iter / float(P.warmup)
22 | 
23 | if cur_epoch >= 0.5 * P.epochs:
24 | lr *= DECAY_RATIO
25 | if cur_epoch >= 0.75 * P.epochs:
26 | lr *= DECAY_RATIO
27 | for param_group in optimizer.param_groups:
28 | param_group['lr'] = lr
29 | return lr
30 | 
31 | 
32 | def _cross_entropy(input, targets, reduction='mean'):
33 | targets_prob = F.softmax(targets, dim=1)
34 | xent = (-targets_prob * F.log_softmax(input, dim=1)).sum(1)
35 | if reduction == 'sum':
36 | return xent.sum()
37 | elif reduction == 'mean':
38 | return xent.mean()
39 | elif reduction == 'none':
40 | return xent
41 | else:
42 | raise NotImplementedError()
43 | 
44 | 
45 | def _entropy(input, reduction='mean'):
46 | return _cross_entropy(input, input, reduction)
47 | 
48 | 
49 | def cross_entropy_soft(input, targets, reduction='mean'):
50 | targets_prob = F.softmax(targets, dim=1)
51 | xent = (-targets_prob * F.log_softmax(input, dim=1)).sum(1)
52 | if reduction == 'sum':
53 | return xent.sum()
54 | elif reduction == 'mean':
55 | return xent.mean()
56 | elif reduction == 'none':
57 | return xent
58 | else:
59 | raise NotImplementedError()
60 | 
61 | 
62 | def kl_div(input, targets, reduction='batchmean'):
63 | return F.kl_div(F.log_softmax(input, dim=1), F.softmax(targets, dim=1),
64 | reduction=reduction)
65 | 
66 | 
67 | def target_nll_loss(inputs, targets, reduction='none'):
68 | inputs_t = -F.nll_loss(inputs, targets, reduction='none')
69 | logit_diff = inputs - inputs_t.view(-1, 1)
70 | logit_diff = logit_diff.scatter(1, targets.view(-1, 1), -1e8)
71 | diff_max = logit_diff.max(1)[0]
72 | 
73 | if reduction == 'sum':
74 | return diff_max.sum()
75 | elif reduction == 'mean':
76 | return diff_max.mean()
77 | elif reduction == 'none':
78 | return diff_max
79 | else:
80 | raise NotImplementedError()
81 | 
82 | 
83 | def target_nll_c(inputs, targets, reduction='none'):
84 | conf = torch.softmax(inputs, dim=1)
85 | 
conf_t = -F.nll_loss(conf, targets, reduction='none') 86 | conf_diff = conf - conf_t.view(-1, 1) 87 | conf_diff = conf_diff.scatter(1, targets.view(-1, 1), -1) 88 | diff_max = conf_diff.max(1)[0] 89 | 90 | if reduction == 'sum': 91 | return diff_max.sum() 92 | elif reduction == 'mean': 93 | return diff_max.mean() 94 | elif reduction == 'none': 95 | return diff_max 96 | else: 97 | raise NotImplementedError() -------------------------------------------------------------------------------- /training/contrastive_loss.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.distributed as dist 3 | import diffdist.functional as distops 4 | 5 | 6 | def get_similarity_matrix(outputs, chunk=2, multi_gpu=False): 7 | ''' 8 | Compute similarity matrix 9 | - outputs: (B', d) tensor for B' = B * chunk 10 | - sim_matrix: (B', B') tensor 11 | ''' 12 | 13 | if multi_gpu: 14 | outputs_gathered = [] 15 | for out in outputs.chunk(chunk): 16 | gather_t = [torch.empty_like(out) for _ in range(dist.get_world_size())] 17 | gather_t = torch.cat(distops.all_gather(gather_t, out)) 18 | outputs_gathered.append(gather_t) 19 | outputs = torch.cat(outputs_gathered) 20 | 21 | sim_matrix = torch.mm(outputs, outputs.t()) # (B', d), (d, B') -> (B', B') 22 | 23 | return sim_matrix 24 | 25 | def get_similarity_matrix_padded(outputs, sizes, chunk=2, multi_gpu=False): 26 | ''' 27 | Compute similarity matrix 28 | - outputs: (B', d) tensor for B' = B * chunk 29 | - sim_matrix: (B', B') tensor 30 | Padded version: not all gpus have the same batch size. We preallocate considering the maximum size and 31 | then select using effective sizes. (ref: https://discuss.pytorch.org/t/how-to-concatenate-different-size-tensors-from-distributed-processes/44819/4) 32 | ''' 33 | 34 | feat_size = outputs.shape[1] 35 | 36 | maxsize = sizes.max() 37 | if multi_gpu: 38 | outputs_gathered = [] 39 | for out in outputs.chunk(chunk): 40 | if out.shape[0] < maxsize: 41 | empty = torch.empty((maxsize,feat_size), dtype=out.dtype, device=out.device) 42 | empty[:out.shape[0]]=out 43 | out = empty 44 | gather_t = [torch.empty((maxsize,feat_size), dtype=out.dtype, layout=out.layout, device=out.device) for _ in range(dist.get_world_size())] 45 | gather_t = distops.all_gather(gather_t, out) 46 | gather_t_out = [] 47 | for g, size in zip(gather_t, sizes): 48 | gather_t_out.append(g[:size]) 49 | gather_t = torch.cat(gather_t_out) 50 | outputs_gathered.append(gather_t) 51 | outputs = torch.cat(outputs_gathered) 52 | 53 | sim_matrix = torch.mm(outputs, outputs.t()) # (B', d), (d, B') -> (B', B') 54 | 55 | return sim_matrix 56 | 57 | def NT_xent(sim_matrix, temperature=0.5, chunk=2, eps=1e-8): 58 | ''' 59 | Compute NT_xent loss 60 | - sim_matrix: (B', B') tensor for B' = B * chunk (first 2B are pos samples) 61 | ''' 62 | 63 | device = sim_matrix.device 64 | 65 | B = sim_matrix.size(0) // chunk # B = B' / chunk 66 | 67 | eye = torch.eye(B * chunk).to(device) # (B', B') 68 | sim_matrix = torch.exp(sim_matrix / temperature) * (1 - eye) # remove diagonal 69 | 70 | denom = torch.sum(sim_matrix, dim=1, keepdim=True) 71 | sim_matrix = -torch.log(sim_matrix / (denom + eps) + eps) # loss matrix 72 | 73 | loss = torch.sum(sim_matrix[:B, B:].diag() + sim_matrix[B:, :B].diag()) / (2 * B) 74 | 75 | return loss 76 | 77 | 78 | def Supervised_NT_xent(sim_matrix, labels, temperature=0.5, chunk=2, eps=1e-8, multi_gpu=False): 79 | ''' 80 | Compute NT_xent loss 81 | - sim_matrix: (B', B') tensor for B' = B * chunk (first 2B 
are pos samples) 82 | ''' 83 | 84 | device = sim_matrix.device 85 | 86 | if multi_gpu: 87 | gather_t = [torch.empty_like(labels) for _ in range(dist.get_world_size())] 88 | labels = torch.cat(distops.all_gather(gather_t, labels)) 89 | labels = labels.repeat(2) 90 | 91 | logits_max, _ = torch.max(sim_matrix, dim=1, keepdim=True) 92 | sim_matrix = sim_matrix - logits_max.detach() 93 | 94 | B = sim_matrix.size(0) // chunk # B = B' / chunk 95 | 96 | eye = torch.eye(B * chunk).to(device) # (B', B') 97 | sim_matrix = torch.exp(sim_matrix / temperature) * (1 - eye) # remove diagonal 98 | 99 | denom = torch.sum(sim_matrix, dim=1, keepdim=True) 100 | sim_matrix = -torch.log(sim_matrix / (denom + eps) + eps) # loss matrix 101 | 102 | labels = labels.contiguous().view(-1, 1) 103 | Mask = torch.eq(labels, labels.t()).float().to(device) 104 | #Mask = eye * torch.stack([labels == labels[i] for i in range(labels.size(0))]).float().to(device) 105 | Mask = Mask / (Mask.sum(dim=1, keepdim=True) + eps) 106 | 107 | loss = torch.sum(Mask * sim_matrix) / (2 * B) 108 | 109 | return loss 110 | 111 | def Supervised_NT_xent_padded(sim_matrix, labels, sizes, temperature=0.5, chunk=2, eps=1e-8, multi_gpu=False): 112 | ''' 113 | Compute NT_xent loss 114 | - sim_matrix: (B', B') tensor for B' = B * chunk (first 2B are pos samples) 115 | Padded version: not all gpus have the same batch size. We preallocate considering the maximum size and 116 | then select using effective sizes. (ref: https://discuss.pytorch.org/t/how-to-concatenate-different-size-tensors-from-distributed-processes/44819/4) 117 | ''' 118 | 119 | device = sim_matrix.device 120 | maxsize = sizes.max() 121 | if multi_gpu: 122 | if labels.shape[0] < maxsize: 123 | empty = torch.empty((maxsize), dtype=labels.dtype, device=labels.device) 124 | empty[:labels.shape[0]] = labels 125 | labels=empty 126 | gather_t = [torch.empty((maxsize), dtype=labels.dtype, layout=labels.layout, device=labels.device) for _ in range(dist.get_world_size())] 127 | gather_t = distops.all_gather(gather_t, labels) 128 | gather_t_out = [] 129 | for g, size in zip(gather_t, sizes): 130 | gather_t_out.append(g[:size]) 131 | labels = torch.cat(gather_t_out) 132 | labels = labels.repeat(2) 133 | logits_max, _ = torch.max(sim_matrix, dim=1, keepdim=True) 134 | sim_matrix = sim_matrix - logits_max.detach() 135 | 136 | B = sim_matrix.size(0) // chunk # B = B' / chunk 137 | 138 | eye = torch.eye(B * chunk).to(device) # (B', B') 139 | sim_matrix = torch.exp(sim_matrix / temperature) * (1 - eye) # remove diagonal 140 | 141 | denom = torch.sum(sim_matrix, dim=1, keepdim=True) 142 | sim_matrix = -torch.log(sim_matrix / (denom + eps) + eps) # loss matrix 143 | 144 | labels = labels.contiguous().view(-1, 1) 145 | Mask = torch.eq(labels, labels.t()).float().to(device) 146 | #Mask = eye * torch.stack([labels == labels[i] for i in range(labels.size(0))]).float().to(device) 147 | Mask = Mask / (Mask.sum(dim=1, keepdim=True) + eps) 148 | 149 | loss = torch.sum(Mask * sim_matrix) / (2 * B) 150 | 151 | return loss 152 | -------------------------------------------------------------------------------- /training/scheduler.py: -------------------------------------------------------------------------------- 1 | from torch.optim.lr_scheduler import _LRScheduler 2 | from torch.optim.lr_scheduler import ReduceLROnPlateau 3 | 4 | 5 | class GradualWarmupScheduler(_LRScheduler): 6 | """ Gradually warm-up(increasing) learning rate in optimizer. 
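A self-contained numeric sketch of the masking logic in `Supervised_NT_xent` from `training/contrastive_loss.py` above (single process, `chunk=2`, random features, `temperature=0.5`; the per-row mask spreads the loss uniformly over all entries that share a label):

```python
import torch
import torch.nn.functional as F

# Single-process walk-through of Supervised_NT_xent: B=4 images, chunk=2 views
# -> B'=8 rows; positives are all rows that share a class label.
feats = F.normalize(torch.randn(8, 128), dim=1)           # normalized projections
labels = torch.tensor([0, 0, 1, 1]).repeat(2)             # labels repeated per view
sim = feats @ feats.t()                                   # get_similarity_matrix
sim = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
eye = torch.eye(8)
expo = torch.exp(sim / 0.5) * (1 - eye)                   # temperature=0.5, drop diagonal
loss_mat = -torch.log(expo / (expo.sum(1, keepdim=True) + 1e-8) + 1e-8)
mask = torch.eq(labels.view(-1, 1), labels.view(1, -1)).float()
mask = mask / mask.sum(dim=1, keepdim=True)               # uniform weight over positives
print((mask * loss_mat).sum() / (2 * 4))                  # the scalar loss
```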
7 | Proposed in 'Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour'. 8 | 9 | Args: 10 | optimizer (Optimizer): Wrapped optimizer. 11 | multiplier: target learning rate = base lr * multiplier if multiplier > 1.0. If multiplier = 1.0, lr starts from 0 and ends up at the base_lr. 12 | total_epoch: target learning rate is reached gradually at total_epoch 13 | after_scheduler: after total_epoch, use this scheduler (e.g. ReduceLROnPlateau) 14 | """ 15 | 16 | def __init__(self, optimizer, multiplier, total_epoch, after_scheduler=None): 17 | self.multiplier = multiplier 18 | if self.multiplier < 1.: 19 | raise ValueError('multiplier should be greater than or equal to 1.') 20 | self.total_epoch = total_epoch 21 | self.after_scheduler = after_scheduler 22 | self.finished = False 23 | super(GradualWarmupScheduler, self).__init__(optimizer) 24 | 25 | def get_lr(self): 26 | if self.last_epoch > self.total_epoch: 27 | if self.after_scheduler: 28 | if not self.finished: 29 | self.after_scheduler.base_lrs = [base_lr * self.multiplier for base_lr in self.base_lrs] 30 | self.finished = True 31 | return self.after_scheduler.get_last_lr() 32 | return [base_lr * self.multiplier for base_lr in self.base_lrs] 33 | 34 | if self.multiplier == 1.0: 35 | return [base_lr * (float(self.last_epoch) / self.total_epoch) for base_lr in self.base_lrs] 36 | else: 37 | return [base_lr * ((self.multiplier - 1.) * self.last_epoch / self.total_epoch + 1.) for base_lr in self.base_lrs] 38 | 39 | def step_ReduceLROnPlateau(self, metrics, epoch=None): 40 | print("Warning: reduce lr on plateau!") 41 | if epoch is None: 42 | epoch = self.last_epoch + 1 43 | self.last_epoch = epoch if epoch != 0 else 1 # ReduceLROnPlateau is called at the end of epoch, whereas others are called at beginning 44 | if self.last_epoch <= self.total_epoch: 45 | warmup_lr = [base_lr * ((self.multiplier - 1.) * self.last_epoch / self.total_epoch + 1.) 
for base_lr in self.base_lrs] 46 | for param_group, lr in zip(self.optimizer.param_groups, warmup_lr): 47 | param_group['lr'] = lr 48 | else: 49 | if epoch is None: 50 | self.after_scheduler.step(metrics, None) 51 | else: 52 | self.after_scheduler.step(metrics, epoch - self.total_epoch) 53 | 54 | def step(self, epoch=None, metrics=None): 55 | if type(self.after_scheduler) != ReduceLROnPlateau: 56 | if self.finished and self.after_scheduler: 57 | if epoch is None: 58 | self.after_scheduler.step(None) 59 | else: 60 | self.after_scheduler.step(epoch - self.total_epoch) 61 | else: 62 | return super(GradualWarmupScheduler, self).step(epoch) 63 | else: 64 | self.step_ReduceLROnPlateau(metrics, epoch) 65 | -------------------------------------------------------------------------------- /training/sup/HyMOS_st.py: -------------------------------------------------------------------------------- 1 | import time 2 | import numpy as np 3 | from torchvision import datasets, transforms 4 | 5 | import torch.optim 6 | 7 | import models.transform_layers as TL 8 | from training.contrastive_loss import get_similarity_matrix_padded, get_similarity_matrix, Supervised_NT_xent_padded, Supervised_NT_xent 9 | from utils.utils import AverageMeter, normalize, apply_simclr_aug 10 | import torch.distributed as dist 11 | import diffdist.functional as distops 12 | from datasets.datasets import BalancedMultiSourceRandomSampler 13 | from torch.utils.data import DataLoader 14 | 15 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu") 16 | hflip = TL.HorizontalFlipLayer().to(device) 17 | 18 | def train(P, its, model, criterion, optimizer, scheduler, data_loader, data_iter, logger=None, simclr_aug=None): 19 | 20 | assert simclr_aug is not None 21 | 22 | try: 23 | ims, lbls, path = next(data_iter) 24 | except StopIteration: 25 | my_sampler = BalancedMultiSourceRandomSampler(data_loader.dataset, P.batch_p, P.local_rank, P.n_gpus) 26 | loader_kwargs = {'pin_memory': False, 'num_workers': 1, 'drop_last':True} 27 | data_loader = DataLoader(data_loader.dataset, sampler=my_sampler, batch_size=data_loader.batch_size, **loader_kwargs) 28 | data_iter = iter(data_loader) 29 | ims, lbls, path = next(data_iter) 30 | images1 = ims[0] 31 | images2 = ims[1] 32 | labels = lbls 33 | images1 = images1.to(device) 34 | images2 = images2.to(device) 35 | labels = labels.to(device) 36 | 37 | try: 38 | style_images, _ = next(P.style_iter) 39 | except StopIteration: 40 | P.style_iter = iter(P.style_loader) 41 | style_images, _ = next(P.style_iter) 42 | 43 | images_pair = torch.cat([images1, images2], dim=0) # 2B 44 | 45 | images_pair = apply_simclr_aug(P, simclr_aug, images_pair, style_images) # simclr augmentation 46 | 47 | # perform forward 48 | _, outputs_aux = model(images_pair, simclr=True, penultimate=True) 49 | 50 | # normalize output 51 | simclr = normalize(outputs_aux['simclr']) # normalize 52 | # compute similarities 53 | 54 | sim_matrix = get_similarity_matrix(simclr,multi_gpu=P.multi_gpu) 55 | 56 | # obtain simclr (supclr) loss 57 | temperature = P.temperature 58 | loss_sim = Supervised_NT_xent(sim_matrix, labels=labels, temperature=temperature, multi_gpu=P.multi_gpu) 59 | 60 | ### total loss ### 61 | loss = loss_sim 62 | 63 | # perform backward and step 64 | optimizer.zero_grad() 65 | loss.backward() 66 | optimizer.step() 67 | 68 | scheduler.step() 69 | 70 | ### Post-processing stuffs ### 71 | simclr_norm = outputs_aux['simclr'].norm(dim=1).mean() # compute avg norm of output features 72 | 73 | return loss_sim, 
simclr_norm, data_loader, data_iter 74 | -------------------------------------------------------------------------------- /training/sup/__init__.py: -------------------------------------------------------------------------------- 1 | def setup(P): 2 | 3 | from .HyMOS_st import train 4 | 5 | dir_name = 'Dataset-'+P.dataset+'_Target-'+P.test_domain+'_Mode-'+P.mode+"_batchK-"+str(P.batch_K)+"_batchP-"+str(P.batch_p) 6 | if P.iterative: 7 | dir_name = dir_name + "_iterative" 8 | if 'st' in P.mode: 9 | dir_name = dir_name + '_ProbST-'+str(P.adain_probability) 10 | 11 | fname = dir_name 12 | 13 | if P.suffix != "": 14 | fname += f"_{P.suffix}" 15 | 16 | return train, fname 17 | -------------------------------------------------------------------------------- /utils/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/silvia1993/HyMOS/86bb5165c3ad921da2ffb00aa5e34ef9c38ea9c0/utils/__init__.py -------------------------------------------------------------------------------- /utils/dist_utils.py: -------------------------------------------------------------------------------- 1 | import pickle 2 | 3 | import torch 4 | import torch.distributed as dist 5 | 6 | 7 | def get_world_size(): 8 | if not dist.is_available(): 9 | return 1 10 | if not dist.is_initialized(): 11 | return 1 12 | return dist.get_world_size() 13 | 14 | 15 | def get_rank(): 16 | if not dist.is_available(): 17 | return 0 18 | if not dist.is_initialized(): 19 | return 0 20 | return dist.get_rank() 21 | 22 | 23 | def is_main_process(): 24 | return get_rank() == 0 25 | 26 | 27 | def synchronize(): 28 | """ 29 | Helper function to synchronize (barrier) among all processes when 30 | using distributed training 31 | """ 32 | if not dist.is_available(): 33 | return 34 | if not dist.is_initialized(): 35 | return 36 | world_size = dist.get_world_size() 37 | if world_size == 1: 38 | return 39 | dist.barrier() 40 | 41 | 42 | def _encode(encoded_data, data): 43 | # gets a byte representation for the data 44 | encoded_bytes = pickle.dumps(data) 45 | # convert this byte string into a byte tensor 46 | storage = torch.ByteStorage.from_buffer(encoded_bytes) 47 | tensor = torch.ByteTensor(storage).to("cuda") 48 | # encoding: first byte is the size and then rest is the data 49 | s = tensor.numel() 50 | assert s <= 255, "Can't encode data greater than 255 bytes" 51 | # put the encoded data in encoded_data 52 | encoded_data[0] = s 53 | encoded_data[1: (s + 1)] = tensor 54 | 55 | 56 | def all_gather(data): 57 | """ 58 | Run all_gather on arbitrary picklable data (not necessarily tensors) 59 | Args: 60 | data: any picklable object 61 | Returns: 62 | list[data]: list of data gathered from each rank 63 | """ 64 | world_size = get_world_size() 65 | if world_size == 1: 66 | return [data] 67 | 68 | # serialized to a Tensor 69 | buffer = pickle.dumps(data) 70 | storage = torch.ByteStorage.from_buffer(buffer) 71 | tensor = torch.ByteTensor(storage).to("cuda") 72 | 73 | # obtain Tensor size of each rank 74 | local_size = torch.LongTensor([tensor.numel()]).to("cuda") 75 | size_list = [torch.LongTensor([0]).to("cuda") for _ in range(world_size)] 76 | dist.all_gather(size_list, local_size) 77 | size_list = [int(size.item()) for size in size_list] 78 | max_size = max(size_list) 79 | 80 | # receiving Tensor from all ranks 81 | # we pad the tensor because torch all_gather does not support 82 | # gathering tensors of different shapes 83 | tensor_list = [] 84 | for _ in size_list: 85 | 
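The pad-then-trim idiom used by `all_gather` here (and by the `*_padded` contrastive helpers earlier) can be checked on CPU without `torch.distributed`; a minimal sketch with three fake ranks:

```python
import pickle
import torch

# CPU sketch of the pad-then-trim trick: every rank pads its byte tensor to the
# global max length, gathers, then trims each gathered tensor with the true size.
payloads = [pickle.dumps({"rank": r, "n_samples": r + 1}) for r in range(3)]  # fake ranks
tensors = [torch.tensor(list(p), dtype=torch.uint8) for p in payloads]
sizes = [t.numel() for t in tensors]
max_size = max(sizes)
padded = [torch.cat([t, torch.zeros(max_size - t.numel(), dtype=torch.uint8)])
          for t in tensors]  # what dist.all_gather would exchange
recovered = [pickle.loads(bytes(p[:s].tolist())) for p, s in zip(padded, sizes)]
print(recovered)  # [{'rank': 0, 'n_samples': 1}, {'rank': 1, 'n_samples': 2}, ...]
```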
tensor_list.append(torch.ByteTensor(size=(max_size,)).to("cuda")) 86 | if local_size != max_size: 87 | padding = torch.ByteTensor(size=(max_size - local_size,)).to("cuda") 88 | tensor = torch.cat((tensor, padding), dim=0) 89 | dist.all_gather(tensor_list, tensor) 90 | 91 | data_list = [] 92 | for size, tensor in zip(size_list, tensor_list): 93 | buffer = tensor.cpu().numpy().tobytes()[:size] 94 | data_list.append(pickle.loads(buffer)) 95 | 96 | return data_list 97 | 98 | -------------------------------------------------------------------------------- /utils/temperature_scaling.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from torch import nn, optim 3 | from torch.nn import functional as F 4 | 5 | 6 | class ModelWithTemperature(nn.Module): 7 | """ 8 | A thin decorator, which wraps a model with temperature scaling 9 | model (nn.Module): 10 | A classification neural network 11 | NB: Output of the neural network should be the classification logits, 12 | NOT the softmax (or log softmax)! 13 | """ 14 | def __init__(self, model): 15 | super(ModelWithTemperature, self).__init__() 16 | self.model = model 17 | self.temperature = nn.Parameter(torch.ones(1) * 0.5) 18 | 19 | def forward(self, input): 20 | logits = self.model(input) 21 | return self.temperature_scale(logits) 22 | 23 | def temperature_scale(self, logits): 24 | """ 25 | Perform temperature scaling on logits 26 | """ 27 | # Expand temperature to match the size of logits 28 | temperature = self.temperature.unsqueeze(1).expand(logits.size(0), logits.size(1)) 29 | return logits / temperature 30 | 31 | # This function probably should live outside of this class, but whatever 32 | def set_temperature(self, valid_loader): 33 | """ 34 | Tune the temperature of the model (using the validation set). 35 | We're going to set it to optimize NLL. 36 | valid_loader (DataLoader): validation set loader 37 | """ 38 | self.cuda() 39 | nll_criterion = nn.CrossEntropyLoss().cuda() 40 | ece_criterion = _ECELoss().cuda() 41 | 42 | # First: collect all the logits and labels for the validation set 43 | logits_list = [] 44 | labels_list = [] 45 | with torch.no_grad(): 46 | for input, label in valid_loader: 47 | input = input.cuda() 48 | logits = self.model(input) 49 | logits_list.append(logits) 50 | labels_list.append(label) 51 | logits = torch.cat(logits_list).cuda() 52 | labels = torch.cat(labels_list).cuda() 53 | 54 | # Calculate NLL and ECE before temperature scaling 55 | before_temperature_nll = nll_criterion(logits, labels).item() 56 | before_temperature_ece = ece_criterion(logits, labels).item() 57 | print('Before temperature - NLL: %.3f, ECE: %.3f' % (before_temperature_nll, before_temperature_ece)) 58 | 59 | # Next: optimize the temperature w.r.t. 
NLL 60 | optimizer = optim.LBFGS([self.temperature], lr=0.0001, max_iter=50000) 61 | def eval(): 62 | optimizer.zero_grad() # clear gradients accumulated across LBFGS closure evaluations 63 | loss = nll_criterion(self.temperature_scale(logits), labels) 64 | loss.backward() 65 | return loss 66 | optimizer.step(eval) 67 | 68 | # Calculate NLL and ECE after temperature scaling 69 | after_temperature_nll = nll_criterion(self.temperature_scale(logits), labels).item() 70 | after_temperature_ece = ece_criterion(self.temperature_scale(logits), labels).item() 71 | print('Optimal temperature: %.3f' % self.temperature.item()) 72 | print('After temperature - NLL: %.3f, ECE: %.3f' % (after_temperature_nll, after_temperature_ece)) 73 | 74 | return self 75 | 76 | 77 | class _ECELoss(nn.Module): 78 | """ 79 | Calculates the Expected Calibration Error of a model. 80 | (This isn't necessary for temperature scaling, just a cool metric). 81 | 82 | The input to this loss is the logits of a model, NOT the softmax scores. 83 | 84 | This divides the confidence outputs into equally-sized interval bins. 85 | In each bin, we compute the confidence gap: 86 | 87 | bin_gap = | avg_confidence_in_bin - accuracy_in_bin | 88 | 89 | We then return a weighted average of the gaps, based on the number 90 | of samples in each bin. 91 | 92 | See: Naeini, Mahdi Pakdaman, Gregory F. Cooper, and Milos Hauskrecht. 93 | "Obtaining Well Calibrated Probabilities Using Bayesian Binning." AAAI. 94 | 2015. 95 | """ 96 | def __init__(self, n_bins=15): 97 | """ 98 | n_bins (int): number of confidence interval bins 99 | """ 100 | super(_ECELoss, self).__init__() 101 | bin_boundaries = torch.linspace(0, 1, n_bins + 1) 102 | self.bin_lowers = bin_boundaries[:-1] 103 | self.bin_uppers = bin_boundaries[1:] 104 | 105 | def forward(self, logits, labels): 106 | softmaxes = F.softmax(logits, dim=1) 107 | confidences, predictions = torch.max(softmaxes, 1) 108 | accuracies = predictions.eq(labels) 109 | 110 | ece = torch.zeros(1, device=logits.device) 111 | for bin_lower, bin_upper in zip(self.bin_lowers, self.bin_uppers): 112 | # Calculate |confidence - accuracy| in each bin 113 | in_bin = confidences.gt(bin_lower.item()) * confidences.le(bin_upper.item()) 114 | prop_in_bin = in_bin.float().mean() 115 | if prop_in_bin.item() > 0: 116 | accuracy_in_bin = accuracies[in_bin].float().mean() 117 | avg_confidence_in_bin = confidences[in_bin].mean() 118 | ece += torch.abs(avg_confidence_in_bin - accuracy_in_bin) * prop_in_bin 119 | 120 | return ece -------------------------------------------------------------------------------- /utils/utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import pickle 3 | import random 4 | import shutil 5 | import sys 6 | from datetime import datetime 7 | 8 | import numpy as np 9 | import torch 10 | from torch.nn.functional import interpolate 11 | from matplotlib import pyplot as plt 12 | from tensorboardX import SummaryWriter 13 | 14 | 15 | class Logger(object): 16 | """Reference: https://gist.github.com/gyglim/1f8dfb1b5c82627ae3efcfbbadb9f514""" 17 | 18 | def __init__(self, fn, ask=True, local_rank=0): 19 | self.local_rank = local_rank 20 | if self.local_rank == 0: 21 | if not os.path.exists("./logs/"): 22 | os.mkdir("./logs/") 23 | 24 | logdir = self._make_dir(fn) 25 | if not os.path.exists(logdir): 26 | os.mkdir(logdir) 27 | 28 | if len(os.listdir(logdir)) != 0 and ask: 29 | ans = input("log_dir is not empty. All data inside log_dir will be deleted. " 30 | "Will you proceed [y/N]? 
") 31 | if ans in ['y', 'Y']: 32 | shutil.rmtree(logdir) 33 | else: 34 | exit(1) 35 | 36 | self.set_dir(logdir) 37 | 38 | def _make_dir(self, fn): 39 | today = datetime.today().strftime("%y%m%d") 40 | logdir = 'logs/' + fn 41 | return logdir 42 | 43 | def set_dir(self, logdir, log_fn='log.txt'): 44 | self.logdir = logdir 45 | if not os.path.exists(logdir): 46 | os.mkdir(logdir) 47 | self.writer = SummaryWriter(logdir) 48 | self.log_file = open(os.path.join(logdir, log_fn), 'a') 49 | 50 | def log(self, string): 51 | if self.local_rank == 0: 52 | self.log_file.write('[%s] %s' % (datetime.now(), string) + '\n') 53 | self.log_file.flush() 54 | 55 | print('[%s] %s' % (datetime.now(), string)) 56 | sys.stdout.flush() 57 | 58 | def log_dirname(self, string): 59 | if self.local_rank == 0: 60 | self.log_file.write('%s (%s)' % (string, self.logdir) + '\n') 61 | self.log_file.flush() 62 | 63 | print('%s (%s)' % (string, self.logdir)) 64 | sys.stdout.flush() 65 | 66 | def scalar_summary(self, tag, value, step): 67 | """Log a scalar variable.""" 68 | if self.local_rank == 0: 69 | self.writer.add_scalar(tag, value, step) 70 | 71 | def images_summary(self, tag, images, step): 72 | """Log a list of images.""" 73 | if self.local_rank == 0: 74 | self.writer.add_images(tag, images, step) 75 | 76 | def histo_summary(self, tag, values, step): 77 | """Log a histogram of the tensor of values.""" 78 | if self.local_rank == 0: 79 | self.writer.add_histogram(tag, values, step, bins='auto') 80 | 81 | 82 | class AverageMeter(object): 83 | """Computes and stores the average and current value""" 84 | 85 | def __init__(self): 86 | self.value = 0 87 | self.average = 0 88 | self.sum = 0 89 | self.count = 0 90 | 91 | def reset(self): 92 | self.value = 0 93 | self.average = 0 94 | self.sum = 0 95 | self.count = 0 96 | 97 | def update(self, value, n=1): 98 | self.value = value 99 | self.sum += value * n 100 | self.count += n 101 | self.average = self.sum / self.count 102 | 103 | 104 | def load_checkpoint(logdir, mode='last'): 105 | if mode == 'last': 106 | model_path = os.path.join(logdir, 'last.model') 107 | optim_path = os.path.join(logdir, 'last.optim') 108 | config_path = os.path.join(logdir, 'last.config') 109 | elif mode == 'best': 110 | model_path = os.path.join(logdir, 'best.model') 111 | optim_path = os.path.join(logdir, 'best.optim') 112 | config_path = os.path.join(logdir, 'best.config') 113 | 114 | else: 115 | raise NotImplementedError() 116 | 117 | print("=> Loading checkpoint from '{}'".format(logdir)) 118 | if os.path.exists(model_path): 119 | model_state = torch.load(model_path) 120 | optim_state = torch.load(optim_path) 121 | with open(config_path, 'rb') as handle: 122 | cfg = pickle.load(handle) 123 | else: 124 | return None, None, None 125 | 126 | return model_state, optim_state, cfg 127 | 128 | 129 | def save_checkpoint(its, model_state, optim_state, logdir): 130 | last_model = os.path.join(logdir, 'last.model') 131 | last_optim = os.path.join(logdir, 'last.optim') 132 | last_config = os.path.join(logdir, 'last.config') 133 | 134 | opt = { 135 | 'its': its, 136 | } 137 | torch.save(model_state, last_model) 138 | torch.save(optim_state, last_optim) 139 | with open(last_config, 'wb') as handle: 140 | pickle.dump(opt, handle, protocol=pickle.HIGHEST_PROTOCOL) 141 | 142 | def save_checkpoint_name(model_state, logdir, prefix): 143 | last_model = os.path.join(logdir, f'{prefix}.model') 144 | torch.save(model_state, last_model) 145 | 146 | def set_random_seed(seed): 147 | random.seed(seed) 148 | 
np.random.seed(seed) 149 | torch.manual_seed(seed) 150 | torch.cuda.manual_seed(seed) 151 | 152 | 153 | def normalize(x, dim=1, eps=1e-8): 154 | return x / (x.norm(dim=dim, keepdim=True) + eps) 155 | 156 | def normalize_images(P, inputs): 157 | mean = torch.tensor(P.im_mean).to(inputs.device) 158 | std = torch.tensor(P.im_std).to(inputs.device) 159 | return ((inputs.permute(0,2,3,1)-mean)/std).permute(0,3,1,2) 160 | 161 | def denormalize_images(P, inputs): 162 | mean = torch.tensor(P.im_mean).to(inputs.device) 163 | std = torch.tensor(P.im_std).to(inputs.device) 164 | return (inputs.permute(0,2,3,1)*std+mean).permute(0,3,1,2) 165 | 166 | def apply_simclr_aug(P, simclr_aug, images, style_images): 167 | cpu_device = torch.device("cpu") 168 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu") 169 | 170 | batch_size = len(images) 171 | adain_mask = torch.zeros((len(images)), dtype=torch.bool) 172 | 173 | selected_content_images = [] 174 | selected_style_images = [] 175 | cnt_ids = [] 176 | for i in range(len(images)): 177 | adain_mask[i] = random.random() < P.adain_probability # per-image coin flip for the style-transfer branch 178 | if adain_mask[i]: 179 | selected_content_images.append(images[i]) 180 | selected_style_images.append(style_images[random.randrange(len(style_images))]) # random style image 181 | cnt_ids.append(i) 182 | 183 | no_adain_mask = ~adain_mask 184 | 185 | adain_model = P.adain_model.to(device) 186 | 187 | if len(selected_content_images) > 0: 188 | selected_content_images = normalize_images(P,torch.stack(selected_content_images).to(device)) 189 | selected_style_images = normalize_images(P,torch.stack(selected_style_images).to(device)) 190 | with torch.no_grad(): 191 | output_images = adain_model.generate( 192 | selected_content_images, 193 | selected_style_images, 194 | P.adain_alpha) 195 | 196 | images[adain_mask] = P.simclr_aug_st(denormalize_images(P,interpolate(output_images, images[0].shape[1:][::-1]))) # resize back to the input resolution (the reversed size assumes square images) 197 | del output_images 198 | del selected_content_images 199 | del selected_style_images 200 | 201 | if len(images[no_adain_mask]) > 0: 202 | images[no_adain_mask] = simclr_aug(images[no_adain_mask]) 203 | 204 | adain_model = adain_model.to(cpu_device) 205 | 206 | return images 207 | 208 | 209 | --------------------------------------------------------------------------------
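Finally, the core routing decision inside `apply_simclr_aug` reduces to a per-image Bernoulli draw; a standalone sketch (the probability value is illustrative, standing in for `P.adain_probability`):

```python
import random
import torch

# Per-image coin flip, as in apply_simclr_aug: with probability adain_probability an
# image takes the AdaIN style-transfer branch, otherwise the plain SimCLR-aug branch.
adain_probability = 0.5                       # illustrative stand-in for P.adain_probability
images = torch.randn(8, 3, 224, 224)
mask = torch.tensor([random.random() < adain_probability for _ in range(len(images))])
to_style, to_simclr = images[mask], images[~mask]
print(f"{len(to_style)} images -> AdaIN branch, {len(to_simclr)} -> SimCLR-only branch")
```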