├── LICENSE ├── README.md ├── environment.yml ├── misc ├── __pycache__ │ ├── imutils.cpython-36.pyc │ ├── indexing.cpython-36.pyc │ ├── pyutils.cpython-36.pyc │ ├── pyutils.cpython-38.pyc │ └── torchutils.cpython-36.pyc ├── imutils.py ├── indexing.py ├── pyutils.py └── torchutils.py ├── mscoco ├── annToMask.py └── dataloader.py ├── net ├── __pycache__ │ ├── resnet50.cpython-36.pyc │ ├── resnet50_cam.cpython-36.pyc │ ├── resnet50_fpn2_cam.cpython-36.pyc │ ├── resnet50_fpn3_cam.cpython-36.pyc │ ├── resnet50_fpn_cam.cpython-36.pyc │ ├── resnet50_fpn_cam_share.cpython-36.pyc │ └── resnet50_irn.cpython-36.pyc ├── resnet50.py ├── resnet50_cam.py └── resnet50_irn.py ├── run_sample.py ├── run_sample_coco.py ├── step ├── __pycache__ │ ├── cam_to_ir_label.cpython-36.pyc │ ├── eval_cam.cpython-36.pyc │ ├── make_recam.cpython-36.pyc │ ├── make_sem_seg_labels.cpython-36.pyc │ ├── train_irn.cpython-36.pyc │ └── train_recam.cpython-36.pyc ├── cam_to_ir_label.py ├── eval_cam.py ├── eval_sem_seg.py ├── make_cam.py ├── make_recam.py ├── make_sem_seg_labels.py ├── train_cam.py ├── train_irn.py └── train_recam.py ├── step_coco ├── __pycache__ │ ├── cam_to_ir_label.cpython-36.pyc │ ├── eval_cam.cpython-36.pyc │ ├── eval_sem_seg.cpython-36.pyc │ ├── make_recam.cpython-36.pyc │ ├── make_sem_seg_labels.cpython-36.pyc │ ├── train_cam.cpython-36.pyc │ ├── train_irn.cpython-36.pyc │ └── train_recam.cpython-36.pyc ├── cam_to_ir_label.py ├── eval_cam.py ├── eval_sem_seg.py ├── make_cam.py ├── make_recam.py ├── make_sem_seg_labels.py ├── train_cam.py ├── train_irn.py └── train_recam.py └── voc12 ├── __pycache__ └── dataloader.cpython-36.pyc ├── cls_labels.npy ├── dataloader.py ├── make_cls_labels.py ├── test.txt ├── train.txt ├── train_aug.txt └── val.txt /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2021 Zhaozheng Chen 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # ReCAM 2 | The official code of the CVPR 2022 paper "Class Re-Activation Maps for Weakly-Supervised Semantic Segmentation".
[arXiv](https://arxiv.org/abs/2203.00962) 3 | 4 | ## Citation 5 | ``` 6 | @inproceedings{recam, 7 | title={Class Re-Activation Maps for Weakly-Supervised Semantic Segmentation}, 8 | author={Chen, Zhaozheng and Wang, Tan and Wu, Xiongwei and Hua, Xian-Sheng and Zhang, Hanwang and Sun, Qianru}, 9 | booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 10 | year={2022} 11 | } 12 | ``` 13 | 14 | ## Prerequisites 15 | - Python 3.6, PyTorch 1.9, and the other packages listed in environment.yml 16 | - You can create the environment from the environment.yml file: 17 | ``` 18 | conda env create -f environment.yml 19 | ``` 20 | ## Usage (PASCAL VOC) 21 | ### Step 1. Prepare the dataset. 22 | - Download the PASCAL VOC 2012 devkit from the [official website](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/#devkit) ([direct download](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar)). 23 | - You need to specify the path ('voc12_root') to your downloaded devkit in the following steps. 24 | ### Step 2. Train ReCAM and generate seeds. 25 | - Please specify a workspace to save the model and logs. 26 | ``` 27 | CUDA_VISIBLE_DEVICES=0 python run_sample.py --voc12_root ./VOCdevkit/VOC2012/ --work_space YOUR_WORK_SPACE --train_cam_pass True --train_recam_pass True --make_recam_pass True --eval_cam_pass True 28 | ``` 29 | ### Step 3. Train IRN and generate pseudo masks. 30 | ``` 31 | CUDA_VISIBLE_DEVICES=0 python run_sample.py --voc12_root ./VOCdevkit/VOC2012/ --work_space YOUR_WORK_SPACE --cam_to_ir_label_pass True --train_irn_pass True --make_sem_seg_pass True --eval_sem_seg_pass True 32 | ``` 33 | ### Step 4. Train the semantic segmentation network. 34 | To train DeepLab-v2, we refer to [deeplab-pytorch](https://github.com/kazuto1011/deeplab-pytorch). 35 | We use the [ImageNet pre-trained model](https://drive.google.com/file/d/14soMKDnIZ_crXQTlol9sNHVPozcQQpMn/view?usp=sharing) for DeepLab-v2 provided by [AdvCAM](https://github.com/jbeomlee93/AdvCAM). 36 | Please replace the ground-truth masks with the generated pseudo masks (see the sketch at the end of this README). 37 | 38 | ## Usage (MS COCO) 39 | ### Step 1. Prepare the dataset. 40 | - Download the MS COCO images from the [official COCO website](https://cocodataset.org/#download). 41 | - Generate masks from the annotations (see annToMask.py in ./mscoco/). 42 | - Download the MS COCO image-level labels from [here](https://drive.google.com/drive/folders/1XCu51bAUK3nOvO-VVKD7kE9bIFpAECBR?usp=sharing) and put them in ./mscoco/. 43 | ### Step 2. Train ReCAM and generate seeds. 44 | - Please specify a workspace to save the model and logs. 45 | ``` 46 | CUDA_VISIBLE_DEVICES=0 python run_sample_coco.py --mscoco_root ../MSCOCO/ --work_space YOUR_WORK_SPACE --train_cam_pass True --train_recam_pass True --make_recam_pass True --eval_cam_pass True 47 | ``` 48 | ### Step 3. Train IRN and generate pseudo masks. 49 | ``` 50 | CUDA_VISIBLE_DEVICES=0 python run_sample_coco.py --mscoco_root ../MSCOCO/ --work_space YOUR_WORK_SPACE --cam_to_ir_label_pass True --train_irn_pass True --make_sem_seg_pass True --eval_sem_seg_pass True 51 | ``` 52 | ### Step 4. Train the semantic segmentation network. 53 | - The same as for PASCAL VOC. 54 | 55 | ## Acknowledgment 56 | This code is borrowed from [IRN](https://github.com/jiwoon-ahn/irn) and [AdvCAM](https://github.com/jbeomlee93/AdvCAM); thanks to Jiwoon Ahn and Jungbeom Lee.
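
## Copying pseudo masks for Step 4 (sketch)
A minimal sketch of the mask replacement mentioned in Step 4, assuming the pseudo masks were written as PNGs to a `sem_seg` folder under your workspace; both directory names below are placeholders (check the output paths configured in run_sample.py for your run) and should be adjusted before use.
```
import os
import shutil

# Placeholder paths: pseudo masks produced by --make_sem_seg_pass, and the
# ground-truth mask folder that deeplab-pytorch reads. Back up the original
# ground-truth folder before overwriting it.
pseudo_dir = os.path.join("YOUR_WORK_SPACE", "sem_seg")
gt_dir = "./VOCdevkit/VOC2012/SegmentationClass"

for fname in os.listdir(pseudo_dir):
    if fname.endswith(".png"):
        shutil.copyfile(os.path.join(pseudo_dir, fname),
                        os.path.join(gt_dir, fname))
```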
57 | -------------------------------------------------------------------------------- /environment.yml: -------------------------------------------------------------------------------- 1 | name: recam 2 | channels: 3 | - pytorch 4 | - nvidia 5 | - defaults 6 | dependencies: 7 | - _libgcc_mutex=0.1=main 8 | - _openmp_mutex=4.5=1_gnu 9 | - blas=1.0=mkl 10 | - bzip2=1.0.8=h7b6447c_0 11 | - ca-certificates=2021.5.25=h06a4308_1 12 | - certifi=2021.5.30=py36h06a4308_0 13 | - cudatoolkit=11.1.74=h6bb024c_0 14 | - dataclasses=0.8=pyh4f3eec9_6 15 | - ffmpeg=4.3=hf484d3e_0 16 | - freetype=2.10.4=h5ab3b9f_0 17 | - gmp=6.2.1=h2531618_2 18 | - gnutls=3.6.15=he1e5248_0 19 | - intel-openmp=2021.2.0=h06a4308_610 20 | - jpeg=9b=h024ee3a_2 21 | - lame=3.100=h7b6447c_0 22 | - lcms2=2.12=h3be6417_0 23 | - ld_impl_linux-64=2.35.1=h7274673_9 24 | - libffi=3.3=he6710b0_2 25 | - libgcc-ng=9.3.0=h5101ec6_17 26 | - libgomp=9.3.0=h5101ec6_17 27 | - libiconv=1.15=h63c8f33_5 28 | - libidn2=2.3.1=h27cfd23_0 29 | - libpng=1.6.37=hbc83047_0 30 | - libstdcxx-ng=9.3.0=hd4cf53a_17 31 | - libtasn1=4.16.0=h27cfd23_0 32 | - libtiff=4.2.0=h85742a9_0 33 | - libunistring=0.9.10=h27cfd23_0 34 | - libuv=1.40.0=h7b6447c_0 35 | - libwebp-base=1.2.0=h27cfd23_0 36 | - lz4-c=1.9.3=h2531618_0 37 | - mkl=2020.2=256 38 | - mkl-service=2.3.0=py36he8ac12f_0 39 | - mkl_fft=1.3.0=py36h54f3939_0 40 | - mkl_random=1.1.1=py36h0573a6f_0 41 | - ncurses=6.2=he6710b0_1 42 | - nettle=3.7.3=hbbd107a_1 43 | - ninja=1.10.2=hff7bd54_1 44 | - numpy=1.19.2=py36h54aff64_0 45 | - numpy-base=1.19.2=py36hfa32c7d_0 46 | - olefile=0.46=py36_0 47 | - openh264=2.1.0=hd408876_0 48 | - openssl=1.1.1k=h27cfd23_0 49 | - pillow=8.2.0=py36he98fc37_0 50 | - pip=21.1.2=py36h06a4308_0 51 | - python=3.6.13=h12debd9_1 52 | - pytorch=1.9.0=py3.6_cuda11.1_cudnn8.0.5_0 53 | - readline=8.1=h27cfd23_0 54 | - setuptools=52.0.0=py36h06a4308_0 55 | - six=1.16.0=pyhd3eb1b0_0 56 | - sqlite=3.35.4=hdfb4753_0 57 | - tk=8.6.10=hbc83047_0 58 | - torchaudio=0.9.0=py36 59 | - torchvision=0.10.0=py36_cu111 60 | - typing_extensions=3.7.4.3=pyha847dfd_0 61 | - unzip=6.0=h611a1e1_0 62 | - wheel=0.36.2=pyhd3eb1b0_0 63 | - xz=5.2.5=h7b6447c_0 64 | - zlib=1.2.11=h7b6447c_3 65 | - zstd=1.4.9=haebb681_0 66 | - pip: 67 | - absl-py==0.13.0 68 | - addict==2.4.0 69 | - albumentations==1.1.0 70 | - antlr4-python3-runtime==4.8 71 | - blessings==1.7 72 | - cachetools==4.2.2 73 | - chainer==7.8.0 74 | - chainercv==0.13.1 75 | - chardet==4.0.0 76 | - click==8.0.1 77 | - cycler==0.10.0 78 | - cython==0.29.23 79 | - decorator==4.4.2 80 | - filelock==3.0.12 81 | - google-auth==1.31.0 82 | - google-auth-oauthlib==0.4.4 83 | - gpustat==0.6.0 84 | - grpcio==1.38.0 85 | - idna==2.10 86 | - imageio==2.9.0 87 | - importlib-metadata==4.5.0 88 | - joblib==1.0.1 89 | - jsonpatch==1.32 90 | - jsonpointer==2.1 91 | - kiwisolver==1.3.1 92 | - markdown==3.3.4 93 | - matplotlib==3.3.4 94 | - munch==2.5.0 95 | - mxnet==1.8.0.post0 96 | - networkx==2.5.1 97 | - nvidia-ml-py3==7.352.0 98 | - oauthlib==3.1.1 99 | - omegaconf==2.1.0 100 | - opencv-python==4.5.2.54 101 | - opencv-python-headless==4.5.4.58 102 | - pandas==1.1.5 103 | - protobuf==3.17.3 104 | - psutil==5.8.0 105 | - pyasn1==0.4.8 106 | - pyasn1-modules==0.2.8 107 | - pycocotools==2.0.2 108 | - pydensecrf==1.0rc2 109 | - pyparsing==2.4.7 110 | - python-dateutil==2.8.1 111 | - pytz==2021.3 112 | - pywavelets==1.1.1 113 | - pyyaml==5.4.1 114 | - pyzmq==22.1.0 115 | - qudida==0.0.4 116 | - requests==2.25.1 117 | - requests-oauthlib==1.3.0 118 | - rsa==4.7.2 119 | - 
scikit-image==0.17.2 120 | - scikit-learn==0.24.2 121 | - scipy==1.5.4 122 | - tensorboard==2.5.0 123 | - tensorboard-data-server==0.6.1 124 | - tensorboard-plugin-wit==1.8.0 125 | - tensorboardx==2.4 126 | - threadpoolctl==3.0.0 127 | - tifffile==2020.9.3 128 | - torchfile==0.1.0 129 | - torchnet==0.0.4 130 | - tornado==6.1 131 | - tqdm==4.61.1 132 | - urllib3==1.26.5 133 | - visdom==0.1.8.9 134 | - websocket-client==1.1.0 135 | - werkzeug==2.0.1 136 | - zipp==3.4.1 137 | -------------------------------------------------------------------------------- /misc/__pycache__/imutils.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/misc/__pycache__/imutils.cpython-36.pyc -------------------------------------------------------------------------------- /misc/__pycache__/indexing.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/misc/__pycache__/indexing.cpython-36.pyc -------------------------------------------------------------------------------- /misc/__pycache__/pyutils.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/misc/__pycache__/pyutils.cpython-36.pyc -------------------------------------------------------------------------------- /misc/__pycache__/pyutils.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/misc/__pycache__/pyutils.cpython-38.pyc -------------------------------------------------------------------------------- /misc/__pycache__/torchutils.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/misc/__pycache__/torchutils.cpython-36.pyc -------------------------------------------------------------------------------- /misc/imutils.py: -------------------------------------------------------------------------------- 1 | import random 2 | import numpy as np 3 | 4 | import pydensecrf.densecrf as dcrf 5 | from pydensecrf.utils import unary_from_labels 6 | from PIL import Image 7 | 8 | def pil_resize(img, size, order): 9 | if size[0] == img.shape[0] and size[1] == img.shape[1]: 10 | return img 11 | 12 | if order == 3: 13 | resample = Image.BICUBIC 14 | elif order == 0: 15 | resample = Image.NEAREST 16 | 17 | return np.asarray(Image.fromarray(img).resize(size[::-1], resample)) 18 | 19 | def pil_rescale(img, scale, order): 20 | height, width = img.shape[:2] 21 | target_size = (int(np.round(height*scale)), int(np.round(width*scale))) 22 | return pil_resize(img, target_size, order) 23 | 24 | 25 | def random_resize_long(img, min_long, max_long): 26 | target_long = random.randint(min_long, max_long) 27 | h, w = img.shape[:2] 28 | 29 | if w < h: 30 | scale = target_long / h 31 | else: 32 | scale = target_long / w 33 | 34 | return pil_rescale(img, scale, 3) 35 | 36 | def random_scale(img, scale_range, order): 37 | 38 | target_scale = scale_range[0] + random.random() * (scale_range[1] - scale_range[0]) 39 | 40 | if isinstance(img, tuple): 41 | return (pil_rescale(img[0], target_scale, order[0]), 
pil_rescale(img[1], target_scale, order[1])) 42 | else: 43 | return pil_rescale(img, target_scale, order) 44 | 45 | def random_lr_flip(img): 46 | 47 | if bool(random.getrandbits(1)): 48 | if isinstance(img, tuple): 49 | return [np.fliplr(m) for m in img] 50 | else: 51 | return np.fliplr(img) 52 | else: 53 | return img 54 | 55 | def get_random_crop_box(imgsize, cropsize): 56 | h, w = imgsize 57 | 58 | ch = min(cropsize, h) 59 | cw = min(cropsize, w) 60 | 61 | w_space = w - cropsize 62 | h_space = h - cropsize 63 | 64 | if w_space > 0: 65 | cont_left = 0 66 | img_left = random.randrange(w_space + 1) 67 | else: 68 | cont_left = random.randrange(-w_space + 1) 69 | img_left = 0 70 | 71 | if h_space > 0: 72 | cont_top = 0 73 | img_top = random.randrange(h_space + 1) 74 | else: 75 | cont_top = random.randrange(-h_space + 1) 76 | img_top = 0 77 | 78 | return cont_top, cont_top+ch, cont_left, cont_left+cw, img_top, img_top+ch, img_left, img_left+cw 79 | 80 | def random_crop(images, cropsize, default_values): 81 | 82 | if isinstance(images, np.ndarray): images = (images,) 83 | if isinstance(default_values, int): default_values = (default_values,) 84 | 85 | imgsize = images[0].shape[:2] 86 | box = get_random_crop_box(imgsize, cropsize) 87 | 88 | new_images = [] 89 | for img, f in zip(images, default_values): 90 | 91 | if len(img.shape) == 3: 92 | cont = np.ones((cropsize, cropsize, img.shape[2]), img.dtype)*f 93 | else: 94 | cont = np.ones((cropsize, cropsize), img.dtype)*f 95 | cont[box[0]:box[1], box[2]:box[3]] = img[box[4]:box[5], box[6]:box[7]] 96 | new_images.append(cont) 97 | 98 | if len(new_images) == 1: 99 | new_images = new_images[0] 100 | 101 | return new_images 102 | 103 | def top_left_crop(img, cropsize, default_value): 104 | 105 | h, w = img.shape[:2] 106 | 107 | ch = min(cropsize, h) 108 | cw = min(cropsize, w) 109 | 110 | if len(img.shape) == 2: 111 | container = np.ones((cropsize, cropsize), img.dtype)*default_value 112 | else: 113 | container = np.ones((cropsize, cropsize, img.shape[2]), img.dtype)*default_value 114 | 115 | container[:ch, :cw] = img[:ch, :cw] 116 | 117 | return container 118 | 119 | def center_crop(img, cropsize, default_value=0): 120 | 121 | h, w = img.shape[:2] 122 | 123 | ch = min(cropsize, h) 124 | cw = min(cropsize, w) 125 | 126 | sh = h - cropsize 127 | sw = w - cropsize 128 | 129 | if sw > 0: 130 | cont_left = 0 131 | img_left = int(round(sw / 2)) 132 | else: 133 | cont_left = int(round(-sw / 2)) 134 | img_left = 0 135 | 136 | if sh > 0: 137 | cont_top = 0 138 | img_top = int(round(sh / 2)) 139 | else: 140 | cont_top = int(round(-sh / 2)) 141 | img_top = 0 142 | 143 | if len(img.shape) == 2: 144 | container = np.ones((cropsize, cropsize), img.dtype)*default_value 145 | else: 146 | container = np.ones((cropsize, cropsize, img.shape[2]), img.dtype)*default_value 147 | 148 | container[cont_top:cont_top+ch, cont_left:cont_left+cw] = \ 149 | img[img_top:img_top+ch, img_left:img_left+cw] 150 | 151 | return container 152 | 153 | def HWC_to_CHW(img): 154 | return np.transpose(img, (2, 0, 1)) 155 | 156 | def crf_inference_label(img, labels, t=10, n_labels=21, gt_prob=0.7): 157 | 158 | h, w = img.shape[:2] 159 | 160 | d = dcrf.DenseCRF2D(w, h, n_labels) 161 | 162 | unary = unary_from_labels(labels, n_labels, gt_prob=gt_prob, zero_unsure=False) 163 | 164 | d.setUnaryEnergy(unary) 165 | d.addPairwiseGaussian(sxy=3, compat=3) 166 | d.addPairwiseBilateral(sxy=50, srgb=5, rgbim=np.ascontiguousarray(np.copy(img)), compat=10) 167 | 168 | q = d.inference(t) 169 | # take the per-pixel argmax over the (n_labels, h, w) CRF posterior 170 | 
return np.argmax(np.array(q).reshape((n_labels, h, w)), axis=0) 171 | 172 | 173 | def get_strided_size(orig_size, stride): 174 | return ((orig_size[0]-1)//stride+1, (orig_size[1]-1)//stride+1) 175 | 176 | 177 | def get_strided_up_size(orig_size, stride): 178 | strided_size = get_strided_size(orig_size, stride) 179 | return strided_size[0]*stride, strided_size[1]*stride 180 | 181 | 182 | def compress_range(arr): 183 | uniques = np.unique(arr) 184 | maximum = np.max(uniques) 185 | 186 | d = np.zeros(maximum+1, np.int32) 187 | d[uniques] = np.arange(uniques.shape[0]) 188 | 189 | out = d[arr] 190 | return out - np.min(out) 191 | 192 | 193 | def colorize_score(score_map, exclude_zero=False, normalize=True, by_hue=False): 194 | import matplotlib.colors 195 | if by_hue: 196 | aranged = np.arange(score_map.shape[0]) / (score_map.shape[0]) 197 | hsv_color = np.stack((aranged, np.ones_like(aranged), np.ones_like(aranged)), axis=-1) 198 | rgb_color = matplotlib.colors.hsv_to_rgb(hsv_color) 199 | 200 | test = rgb_color[np.argmax(score_map, axis=0)] 201 | test = np.expand_dims(np.max(score_map, axis=0), axis=-1) * test 202 | 203 | if normalize: 204 | return test / (np.max(test) + 1e-5) 205 | else: 206 | return test 207 | 208 | else: 209 | VOC_color = np.array([(0, 0, 0), (128, 0, 0), (0, 128, 0), (128, 128, 0), (0, 0, 128), (128, 0, 128), 210 | (0, 128, 128), (128, 128, 128), (64, 0, 0), (192, 0, 0), (64, 128, 0), (192, 128, 0), 211 | (64, 0, 128), (192, 0, 128), (64, 128, 128), (192, 128, 128), (0, 64, 0), (128, 64, 0), 212 | (0, 192, 0), (128, 192, 0), (0, 64, 128), (255, 255, 255)], np.float32) 213 | 214 | if exclude_zero: 215 | VOC_color = VOC_color[1:] 216 | 217 | test = VOC_color[np.argmax(score_map, axis=0)%22] 218 | test = np.expand_dims(np.max(score_map, axis=0), axis=-1) * test 219 | if normalize: 220 | test /= np.max(test) + 1e-5 221 | 222 | return test 223 | 224 | 225 | def colorize_displacement(disp): 226 | 227 | import matplotlib.colors 228 | import math 229 | 230 | a = (np.arctan2(-disp[0], -disp[1]) / math.pi + 1) / 2 231 | 232 | r = np.sqrt(disp[0] ** 2 + disp[1] ** 2) 233 | s = r / np.max(r) 234 | hsv_color = np.stack((a, s, np.ones_like(a)), axis=-1) 235 | rgb_color = matplotlib.colors.hsv_to_rgb(hsv_color) 236 | 237 | return rgb_color 238 | 239 | 240 | def colorize_label(label_map, normalize=True, by_hue=True, exclude_zero=False, outline=False): 241 | 242 | label_map = label_map.astype(np.uint8) 243 | 244 | if by_hue: 245 | import matplotlib.colors 246 | sz = np.max(label_map) 247 | aranged = np.arange(sz) / sz 248 | hsv_color = np.stack((aranged, np.ones_like(aranged), np.ones_like(aranged)), axis=-1) 249 | rgb_color = matplotlib.colors.hsv_to_rgb(hsv_color) 250 | rgb_color = np.concatenate([np.zeros((1, 3)), rgb_color], axis=0) 251 | 252 | test = rgb_color[label_map] 253 | else: 254 | VOC_color = np.array([(0, 0, 0), (128, 0, 0), (0, 128, 0), (128, 128, 0), (0, 0, 128), (128, 0, 128), 255 | (0, 128, 128), (128, 128, 128), (64, 0, 0), (192, 0, 0), (64, 128, 0), (192, 128, 0), 256 | (64, 0, 128), (192, 0, 128), (64, 128, 128), (192, 128, 128), (0, 64, 0), (128, 64, 0), 257 | (0, 192, 0), (128, 192, 0), (0, 64, 128), (255, 255, 255)], np.float32) 258 | 259 | if exclude_zero: 260 | VOC_color = VOC_color[1:] 261 | test = VOC_color[label_map] 262 | if normalize: 263 | test /= np.max(test) 264 | 265 | if outline: 266 | edge = np.greater(np.sum(np.abs(test[:-1, :-1] - test[1:, :-1]), axis=-1) + np.sum(np.abs(test[:-1, :-1] - test[:-1, 1:]), axis=-1), 0) 267 | edge1 = np.pad(edge, ((0, 1), 
(0, 1)), mode='constant', constant_values=0) 268 | edge2 = np.pad(edge, ((1, 0), (1, 0)), mode='constant', constant_values=0) 269 | edge = np.repeat(np.expand_dims(np.maximum(edge1, edge2), -1), 3, axis=-1) 270 | 271 | test = np.maximum(test, edge) 272 | return test 273 | -------------------------------------------------------------------------------- /misc/indexing.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn.functional as F 3 | import numpy as np 4 | 5 | 6 | class PathIndex: 7 | 8 | def __init__(self, radius, default_size): 9 | self.radius = radius 10 | self.radius_floor = int(np.ceil(radius) - 1) 11 | 12 | self.search_paths, self.search_dst = self.get_search_paths_dst(self.radius) 13 | 14 | self.path_indices, self.src_indices, self.dst_indices = self.get_path_indices(default_size) 15 | 16 | return 17 | 18 | def get_search_paths_dst(self, max_radius=5): 19 | 20 | coord_indices_by_length = [[] for _ in range(max_radius * 4)] 21 | 22 | search_dirs = [] 23 | 24 | for x in range(1, max_radius): 25 | search_dirs.append((0, x)) 26 | 27 | for y in range(1, max_radius): 28 | for x in range(-max_radius + 1, max_radius): 29 | if x * x + y * y < max_radius ** 2: 30 | search_dirs.append((y, x)) 31 | 32 | for dir in search_dirs: 33 | 34 | length_sq = dir[0] ** 2 + dir[1] ** 2 35 | path_coords = [] 36 | 37 | min_y, max_y = sorted((0, dir[0])) 38 | min_x, max_x = sorted((0, dir[1])) 39 | 40 | for y in range(min_y, max_y + 1): 41 | for x in range(min_x, max_x + 1): 42 | 43 | dist_sq = (dir[0] * x - dir[1] * y) ** 2 / length_sq 44 | 45 | if dist_sq < 1: 46 | path_coords.append([y, x]) 47 | 48 | path_coords.sort(key=lambda x: -abs(x[0]) - abs(x[1])) 49 | path_length = len(path_coords) 50 | 51 | coord_indices_by_length[path_length].append(path_coords) 52 | 53 | path_list_by_length = [np.asarray(v) for v in coord_indices_by_length if v] 54 | path_destinations = np.concatenate([p[:, 0] for p in path_list_by_length], axis=0) 55 | 56 | return path_list_by_length, path_destinations 57 | 58 | def get_path_indices(self, size): 59 | 60 | full_indices = np.reshape(np.arange(0, size[0] * size[1], dtype=np.int64), (size[0], size[1])) 61 | 62 | cropped_height = size[0] - self.radius_floor 63 | cropped_width = size[1] - 2 * self.radius_floor 64 | 65 | path_indices = [] 66 | 67 | for paths in self.search_paths: 68 | 69 | path_indices_list = [] 70 | for p in paths: 71 | 72 | coord_indices_list = [] 73 | 74 | for dy, dx in p: 75 | coord_indices = full_indices[dy:dy + cropped_height, 76 | self.radius_floor + dx:self.radius_floor + dx + cropped_width] 77 | coord_indices = np.reshape(coord_indices, [-1]) 78 | 79 | coord_indices_list.append(coord_indices) 80 | 81 | path_indices_list.append(coord_indices_list) 82 | 83 | path_indices.append(np.array(path_indices_list)) 84 | 85 | src_indices = np.reshape(full_indices[:cropped_height, self.radius_floor:self.radius_floor + cropped_width], -1) 86 | dst_indices = np.concatenate([p[:,0] for p in path_indices], axis=0) 87 | 88 | return path_indices, src_indices, dst_indices 89 | 90 | 91 | def edge_to_affinity(edge, paths_indices): 92 | 93 | aff_list = [] 94 | edge = edge.view(edge.size(0), -1) 95 | 96 | for i in range(len(paths_indices)): 97 | if isinstance(paths_indices[i], np.ndarray): 98 | paths_indices[i] = torch.from_numpy(paths_indices[i]) 99 | paths_indices[i] = paths_indices[i].cuda(non_blocking=True) 100 | 101 | for ind in paths_indices: 102 | ind_flat = ind.view(-1) 103 | dist = 
torch.index_select(edge, dim=-1, index=ind_flat) 104 | dist = dist.view(dist.size(0), ind.size(0), ind.size(1), ind.size(2)) 105 | aff = torch.squeeze(1 - F.max_pool2d(dist, (dist.size(2), 1)), dim=2) 106 | aff_list.append(aff) 107 | aff_cat = torch.cat(aff_list, dim=1) 108 | 109 | return aff_cat 110 | 111 | 112 | def affinity_sparse2dense(affinity_sparse, ind_from, ind_to, n_vertices): 113 | 114 | ind_from = torch.from_numpy(ind_from) 115 | ind_to = torch.from_numpy(ind_to) 116 | 117 | affinity_sparse = affinity_sparse.view(-1).cpu() 118 | ind_from = ind_from.repeat(ind_to.size(0)).view(-1) 119 | ind_to = ind_to.view(-1) 120 | 121 | indices = torch.stack([ind_from, ind_to]) 122 | indices_tp = torch.stack([ind_to, ind_from]) 123 | 124 | indices_id = torch.stack([torch.arange(0, n_vertices).long(), torch.arange(0, n_vertices).long()]) 125 | 126 | affinity_dense = torch.sparse.FloatTensor(torch.cat([indices, indices_id, indices_tp], dim=1), 127 | torch.cat([affinity_sparse, torch.ones([n_vertices]), affinity_sparse])).to_dense().cuda() 128 | 129 | return affinity_dense 130 | 131 | 132 | def to_transition_matrix(affinity_dense, beta, times): 133 | scaled_affinity = torch.pow(affinity_dense, beta) 134 | 135 | trans_mat = scaled_affinity / torch.sum(scaled_affinity, dim=0, keepdim=True) 136 | for _ in range(times): 137 | trans_mat = torch.matmul(trans_mat, trans_mat) 138 | 139 | return trans_mat 140 | 141 | def propagate_to_edge(x, edge, radius=5, beta=10, exp_times=8): 142 | 143 | height, width = x.shape[-2:] 144 | 145 | hor_padded = width+radius*2 146 | ver_padded = height+radius 147 | 148 | path_index = PathIndex(radius=radius, default_size=(ver_padded, hor_padded)) 149 | 150 | edge_padded = F.pad(edge, (radius, radius, 0, radius), mode='constant', value=1.0) 151 | sparse_aff = edge_to_affinity(torch.unsqueeze(edge_padded, 0), 152 | path_index.path_indices) 153 | 154 | dense_aff = affinity_sparse2dense(sparse_aff, path_index.src_indices, 155 | path_index.dst_indices, ver_padded * hor_padded) 156 | dense_aff = dense_aff.view(ver_padded, hor_padded, ver_padded, hor_padded) 157 | dense_aff = dense_aff[:-radius, radius:-radius, :-radius, radius:-radius] 158 | dense_aff = dense_aff.reshape(height * width, height * width) 159 | 160 | trans_mat = to_transition_matrix(dense_aff, beta=beta, times=exp_times) 161 | 162 | x = x.view(-1, height, width) * (1 - edge) 163 | 164 | rw = torch.matmul(x.view(-1, height * width), trans_mat) 165 | rw = rw.view(rw.size(0), 1, height, width) 166 | 167 | return rw -------------------------------------------------------------------------------- /misc/pyutils.py: -------------------------------------------------------------------------------- 1 | 2 | import numpy as np 3 | import time 4 | import sys 5 | 6 | class Logger(object): 7 | def __init__(self, outfile): 8 | self.terminal = sys.stdout 9 | self.log = open(outfile, "a") 10 | sys.stdout = self 11 | 12 | def write(self, message): 13 | self.terminal.write(message) 14 | self.log.write(message) 15 | 16 | def flush(self): 17 | self.terminal.flush() 18 | 19 | 20 | class AverageMeter: 21 | def __init__(self, *keys): 22 | self.__data = dict() 23 | for k in keys: 24 | self.__data[k] = [0.0, 0] 25 | 26 | def add(self, dict): 27 | for k, v in dict.items(): 28 | if k not in self.__data: 29 | self.__data[k] = [0.0, 0] 30 | self.__data[k][0] += v 31 | self.__data[k][1] += 1 32 | 33 | def get(self, *keys): 34 | if len(keys) == 1: 35 | return self.__data[keys[0]][0] / self.__data[keys[0]][1] 36 | else: 37 | v_list = 
[self.__data[k][0] / self.__data[k][1] for k in keys] 38 | return tuple(v_list) 39 | 40 | def pop(self, key=None): 41 | if key is None: 42 | for k in self.__data.keys(): 43 | self.__data[k] = [0.0, 0] 44 | else: 45 | v = self.get(key) 46 | self.__data[key] = [0.0, 0] 47 | return v 48 | 49 | 50 | class Timer: 51 | def __init__(self, starting_msg = None): 52 | self.start = time.time() 53 | self.stage_start = self.start 54 | 55 | if starting_msg is not None: 56 | print(starting_msg, time.ctime(time.time())) 57 | 58 | def __enter__(self): 59 | return self 60 | 61 | def __exit__(self, exc_type, exc_val, exc_tb): 62 | return 63 | 64 | def update_progress(self, progress): 65 | self.elapsed = time.time() - self.start 66 | self.est_total = self.elapsed / progress 67 | self.est_remaining = self.est_total - self.elapsed 68 | self.est_finish = int(self.start + self.est_total) 69 | 70 | 71 | def str_estimated_complete(self): 72 | return str(time.ctime(self.est_finish)) 73 | 74 | def get_stage_elapsed(self): 75 | return time.time() - self.stage_start 76 | 77 | def reset_stage(self): 78 | self.stage_start = time.time() 79 | 80 | def lapse(self): 81 | out = time.time() - self.stage_start 82 | self.stage_start = time.time() 83 | return out 84 | 85 | 86 | def to_one_hot(sparse_integers, maximum_val=None, dtype=np.bool): 87 | 88 | if maximum_val is None: 89 | maximum_val = np.max(sparse_integers) + 1 90 | 91 | src_shape = sparse_integers.shape 92 | 93 | flat_src = np.reshape(sparse_integers, [-1]) 94 | src_size = flat_src.shape[0] 95 | 96 | one_hot = np.zeros((maximum_val, src_size), dtype) 97 | one_hot[flat_src, np.arange(src_size)] = 1 98 | 99 | one_hot = np.reshape(one_hot, [maximum_val] + list(src_shape)) 100 | 101 | return one_hot 102 | -------------------------------------------------------------------------------- /misc/torchutils.py: -------------------------------------------------------------------------------- 1 | 2 | import torch 3 | 4 | from torch.utils.data import Subset 5 | import numpy as np 6 | import math 7 | 8 | 9 | class PolyOptimizer(torch.optim.SGD): 10 | 11 | def __init__(self, params, lr, weight_decay, max_step, momentum=0.9): 12 | super().__init__(params, lr, weight_decay) 13 | 14 | self.global_step = 0 15 | self.max_step = max_step 16 | self.momentum = momentum 17 | 18 | self.__initial_lr = [group['lr'] for group in self.param_groups] 19 | 20 | 21 | def step(self, closure=None): 22 | 23 | if self.global_step < self.max_step: 24 | lr_mult = (1 - self.global_step / self.max_step) ** self.momentum 25 | 26 | for i in range(len(self.param_groups)): 27 | self.param_groups[i]['lr'] = self.__initial_lr[i] * lr_mult 28 | 29 | super().step(closure) 30 | 31 | self.global_step += 1 32 | 33 | class SGDROptimizer(torch.optim.SGD): 34 | 35 | def __init__(self, params, steps_per_epoch, lr=0, weight_decay=0, epoch_start=1, restart_mult=2): 36 | super().__init__(params, lr, weight_decay) 37 | 38 | self.global_step = 0 39 | self.local_step = 0 40 | self.total_restart = 0 41 | 42 | self.max_step = steps_per_epoch * epoch_start 43 | self.restart_mult = restart_mult 44 | 45 | self.__initial_lr = [group['lr'] for group in self.param_groups] 46 | 47 | 48 | def step(self, closure=None): 49 | 50 | if self.local_step >= self.max_step: 51 | self.local_step = 0 52 | self.max_step *= self.restart_mult 53 | self.total_restart += 1 54 | 55 | lr_mult = (1 + math.cos(math.pi * self.local_step / self.max_step))/2 / (self.total_restart + 1) 56 | 57 | for i in range(len(self.param_groups)): 58 | 
self.param_groups[i]['lr'] = self.__initial_lr[i] * lr_mult 59 | 60 | super().step(closure) 61 | 62 | self.local_step += 1 63 | self.global_step += 1 64 | 65 | 66 | def split_dataset(dataset, n_splits): 67 | 68 | return [Subset(dataset, np.arange(i, len(dataset), n_splits)) for i in range(n_splits)] 69 | 70 | 71 | def gap2d(x, keepdims=False): 72 | out = torch.mean(x.view(x.size(0), x.size(1), -1), -1) 73 | if keepdims: 74 | out = out.view(out.size(0), out.size(1), 1, 1) 75 | 76 | return out 77 | 78 | def gap2d_pos(x, keepdims=False): 79 | out = torch.sum(x.view(x.size(0), x.size(1), -1), -1) / (torch.sum(x>0)+1e-12) 80 | if keepdims: 81 | out = out.view(out.size(0), out.size(1), 1, 1) 82 | 83 | return out 84 | 85 | def gsp2d(x, keepdims=False): 86 | out = torch.sum(x.view(x.size(0), x.size(1), -1), -1) 87 | if keepdims: 88 | out = out.view(out.size(0), out.size(1), 1, 1) 89 | 90 | return out 91 | 92 | -------------------------------------------------------------------------------- /mscoco/annToMask.py: -------------------------------------------------------------------------------- 1 | import os 2 | import imageio 3 | import numpy as np 4 | from torch import multiprocessing 5 | from pycocotools.coco import COCO 6 | from torch.utils.data import Subset 7 | 8 | category_map = {"1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9, "10": 10, "11": 11, "13": 12, "14": 13, "15": 14, "16": 15, "17": 16, "18": 17, "19": 18, "20": 19, "21": 20, "22": 21, "23": 22, "24": 23, "25": 24, "27": 25, "28": 26, "31": 27, "32": 28, "33": 29, "34": 30, "35": 31, "36": 32, "37": 33, "38": 34, "39": 35, "40": 36, "41": 37, "42": 38, "43": 39, "44": 40, "46": 41, "47": 42, "48": 43, "49": 44, "50": 45, "51": 46, "52": 47, "53": 48, "54": 49, "55": 50, "56": 51, "57": 52, "58": 53, "59": 54, "60": 55, "61": 56, "62": 57, "63": 58, "64": 59, "65": 60, "67": 61, "70": 62, "72": 63, "73": 64, "74": 65, "75": 66, "76": 67, "77": 68, "78": 69, "79": 70, "80": 71, "81": 72, "82": 73, "84": 74, "85": 75, "86": 76, "87": 77, "88": 78, "89": 79, "90": 80} 9 | 10 | def work(process_id, infer_dataset, coco, mask_path): 11 | databin = infer_dataset[process_id] 12 | print(len(databin)) 13 | for imgId in databin: 14 | curImg = coco.imgs[imgId] 15 | imageSize = (curImg['height'], curImg['width']) 16 | labelMap = np.zeros(imageSize) 17 | 18 | # Get annotations of the current image (may be empty) 19 | annIds = coco.getAnnIds(imgIds=imgId, iscrowd=False) 20 | imgAnnots = coco.loadAnns(annIds) 21 | 22 | # Combine all annotations of this image in labelMap 23 | # labelMasks = mask.decode([a['segmentation'] for a in imgAnnots]) 24 | for i in range(len(imgAnnots)): 25 | labelMask = coco.annToMask(imgAnnots[i]) == 1 26 | newLabel = imgAnnots[i]['category_id'] 27 | labelMap[labelMask] = category_map[str(newLabel)] 28 | 29 | imageio.imsave(os.path.join(mask_path, str(imgId) + '.png'), labelMap.astype(np.uint8)) 30 | 31 | if __name__ == '__main__': 32 | annFile = '../MSCOCO/annotations/instances_train2014.json' 33 | mask_path = '../MSCOCO/mask/train2014' 34 | os.makedirs(mask_path, exist_ok=True) 35 | coco = COCO(annFile) 36 | num_workers = 8 37 | ids = list(coco.imgs.keys()) 38 | print(len(ids)) 39 | num_per_worker = (len(ids)//num_workers) + 1 40 | dataset = [ ids[i*num_per_worker:(i+1)*num_per_worker] for i in range(num_workers)] 41 | multiprocessing.spawn(work, nprocs=num_workers, args=(dataset,coco,mask_path), join=True) 42 | 43 | annFile = '../MSCOCO/annotations/instances_val2014.json' 44 | mask_path = 
'../MSCOCO/mask/val2014' 45 | os.makedirs(mask_path, exist_ok=True) 46 | coco = COCO(annFile) 47 | ids = list(coco.imgs.keys()) 48 | print(len(ids)) 49 | num_per_worker = (len(ids)//num_workers) + 1 50 | dataset = [ ids[i*num_per_worker:(i+1)*num_per_worker] for i in range(num_workers)] 51 | multiprocessing.spawn(work, nprocs=num_workers, args=(dataset,coco,mask_path), join=True) -------------------------------------------------------------------------------- /mscoco/dataloader.py: -------------------------------------------------------------------------------- 1 | import os 2 | import torch 3 | import imageio 4 | import numpy as np 5 | from misc import imutils 6 | from torch.utils import data 7 | import torchvision.datasets as dset 8 | 9 | category_map = {"1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9, "10": 10, "11": 11, "13": 12, "14": 13, "15": 14, "16": 15, "17": 16, "18": 17, "19": 18, "20": 19, "21": 20, "22": 21, "23": 22, "24": 23, "25": 24, "27": 25, "28": 26, "31": 27, "32": 28, "33": 29, "34": 30, "35": 31, "36": 32, "37": 33, "38": 34, "39": 35, "40": 36, "41": 37, "42": 38, "43": 39, "44": 40, "46": 41, "47": 42, "48": 43, "49": 44, "50": 45, "51": 46, "52": 47, "53": 48, "54": 49, "55": 50, "56": 51, "57": 52, "58": 53, "59": 54, "60": 55, "61": 56, "62": 57, "63": 58, "64": 59, "65": 60, "67": 61, "70": 62, "72": 63, "73": 64, "74": 65, "75": 66, "76": 67, "77": 68, "78": 69, "79": 70, "80": 71, "81": 72, "82": 73, "84": 74, "85": 75, "86": 76, "87": 77, "88": 78, "89": 79, "90": 80} 10 | 11 | class TorchvisionNormalize(): 12 | def __init__(self, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)): 13 | self.mean = mean 14 | self.std = std 15 | 16 | def __call__(self, img): 17 | imgarr = np.asarray(img) 18 | proc_img = np.empty_like(imgarr, np.float32) 19 | 20 | proc_img[..., 0] = (imgarr[..., 0] / 255. - self.mean[0]) / self.std[0] 21 | proc_img[..., 1] = (imgarr[..., 1] / 255. - self.mean[1]) / self.std[1] 22 | proc_img[..., 2] = (imgarr[..., 2] / 255. 
- self.mean[2]) / self.std[2] 23 | 24 | return proc_img 25 | 26 | class GetAffinityLabelFromIndices(): 27 | 28 | def __init__(self, indices_from, indices_to): 29 | 30 | self.indices_from = indices_from 31 | self.indices_to = indices_to 32 | 33 | def __call__(self, segm_map): 34 | 35 | segm_map_flat = np.reshape(segm_map, -1) 36 | 37 | segm_label_from = np.expand_dims(segm_map_flat[self.indices_from], axis=0) 38 | segm_label_to = segm_map_flat[self.indices_to] 39 | 40 | valid_label = np.logical_and(np.less(segm_label_from, 81), np.less(segm_label_to, 81)) 41 | 42 | equal_label = np.equal(segm_label_from, segm_label_to) 43 | 44 | pos_affinity_label = np.logical_and(equal_label, valid_label) 45 | 46 | bg_pos_affinity_label = np.logical_and(pos_affinity_label, np.equal(segm_label_from, 0)).astype(np.float32) 47 | fg_pos_affinity_label = np.logical_and(pos_affinity_label, np.greater(segm_label_from, 0)).astype(np.float32) 48 | 49 | neg_affinity_label = np.logical_and(np.logical_not(equal_label), valid_label).astype(np.float32) 50 | 51 | return torch.from_numpy(bg_pos_affinity_label), torch.from_numpy(fg_pos_affinity_label), \ 52 | torch.from_numpy(neg_affinity_label) 53 | 54 | class COCOClassificationDataset(data.Dataset): 55 | def __init__(self, image_dir, anno_path, labels_path=None,resize_long=None, rescale=None, img_normal=TorchvisionNormalize(), hor_flip=False, 56 | crop_size=None, crop_method=None, to_torch=True): 57 | self.coco = dset.CocoDetection(root=image_dir, annFile=anno_path) 58 | self.labels_path = labels_path 59 | self.category_map = category_map 60 | 61 | self.resize_long = resize_long 62 | self.rescale = rescale 63 | self.crop_size = crop_size 64 | self.img_normal = img_normal 65 | self.hor_flip = hor_flip 66 | self.crop_method = crop_method 67 | self.to_torch = to_torch 68 | 69 | self.labels = [] 70 | if os.path.exists(self.labels_path): 71 | self.labels = np.load(self.labels_path).astype(np.float64) 72 | self.labels = (self.labels > 0).astype(np.float64) 73 | else: 74 | print("No preprocessed label file found in {}.".format(self.labels_path)) 75 | l = len(self.coco) 76 | for i in range(l): 77 | item = self.coco[i] 78 | categories = self.getCategoryList(item[1]) 79 | label = self.getLabelVector(categories) 80 | self.labels.append(label) 81 | self.save_datalabels(labels_path) 82 | 83 | def getCategoryList(self, item): 84 | categories = set() 85 | for t in item: 86 | categories.add(t['category_id']) 87 | return list(categories) 88 | 89 | def getLabelVector(self, categories): 90 | label = np.zeros(80) 91 | for c in categories: 92 | index = self.category_map[str(c)]-1 93 | label[index] = 1.0 # / label_num 94 | return label 95 | 96 | def save_datalabels(self, outpath): 97 | """ 98 | Save datalabels to disk. 99 | For faster loading next time. 
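 Labels are saved with np.save as an (N, 80) array whose rows follow the dataset's image order; later runs reload this file from labels_path via np.load.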
100 | """ 101 | os.makedirs(os.path.dirname(outpath), exist_ok=True) 102 | labels = np.array(self.labels) 103 | np.save(outpath, labels) 104 | 105 | def __getitem__(self, index): 106 | name = self.coco.ids[index] 107 | name = self.coco.coco.loadImgs(name)[0]["file_name"].split('.')[0] 108 | 109 | img = np.asarray(self.coco[index][0]) 110 | 111 | if self.resize_long: 112 | img = imutils.random_resize_long(img, self.resize_long[0], self.resize_long[1]) 113 | 114 | if self.rescale: 115 | img = imutils.random_scale(img, scale_range=self.rescale, order=3) 116 | 117 | if self.img_normal: 118 | img = self.img_normal(img) 119 | 120 | if self.hor_flip: 121 | img = imutils.random_lr_flip(img) 122 | 123 | if self.crop_size: 124 | if self.crop_method == "random": 125 | img = imutils.random_crop(img, self.crop_size, 0) 126 | else: 127 | img = imutils.top_left_crop(img, self.crop_size, 0) 128 | 129 | if self.to_torch: 130 | img = imutils.HWC_to_CHW(img) 131 | 132 | return {'name': name, 'img': img, 'label':self.labels[index]} 133 | 134 | def __len__(self): 135 | return len(self.coco) 136 | 137 | class COCOClassificationDatasetMSF(COCOClassificationDataset): 138 | def __init__(self, image_dir, anno_path, labels_path=None, img_normal=TorchvisionNormalize(), hor_flip=False,scales=(1.0,)): 139 | self.scales = scales 140 | super().__init__(image_dir, anno_path, labels_path, img_normal, hor_flip) 141 | 142 | def __getitem__(self,index): 143 | name = self.coco.ids[index] 144 | name = self.coco.coco.loadImgs(name)[0]["file_name"].split('.')[0] 145 | 146 | img = np.asarray(self.coco[index][0]) 147 | 148 | ms_img_list = [] 149 | for s in self.scales: 150 | if s == 1: 151 | s_img = img 152 | else: 153 | s_img = imutils.pil_rescale(img, s, order=3) 154 | s_img = self.img_normal(s_img) 155 | s_img = imutils.HWC_to_CHW(s_img) 156 | ms_img_list.append(np.stack([s_img, np.flip(s_img, -1)], axis=0)) 157 | if len(self.scales) == 1: 158 | ms_img_list = ms_img_list[0] 159 | 160 | out = {"name": name, "img": ms_img_list, "size": (img.shape[0], img.shape[1]), 161 | "label": self.labels[index]} 162 | return out 163 | 164 | class COCOSegmentationDataset(data.Dataset): 165 | def __init__(self, image_dir, anno_path, masks_path, crop_size, rescale=None, img_normal=TorchvisionNormalize(), 166 | hor_flip=False,crop_method='random',read_ir_label=False): 167 | self.coco = dset.CocoDetection(root=image_dir, annFile=anno_path) 168 | self.masks_path = masks_path 169 | self.category_map = category_map 170 | 171 | self.rescale = rescale 172 | self.crop_size = crop_size 173 | self.img_normal = img_normal 174 | self.hor_flip = hor_flip 175 | self.crop_method = crop_method 176 | self.read_ir_label = read_ir_label 177 | 178 | self.ids2name = {} 179 | for ids in self.coco.ids: 180 | self.ids2name[ids] = self.coco.coco.loadImgs(ids)[0]["file_name"].split('.')[0] 181 | 182 | def __getitem__(self, index): 183 | ids = self.coco.ids[index] 184 | name = self.ids2name[ids] 185 | 186 | img = np.asarray(self.coco[index][0]) 187 | if self.read_ir_label: 188 | label = imageio.imread(os.path.join(self.masks_path, name+'.png')) 189 | else: 190 | label = imageio.imread(os.path.join(self.masks_path, str(ids) + '.png')) 191 | 192 | if self.rescale: 193 | img, label = imutils.random_scale((img, label), scale_range=self.rescale, order=(3, 0)) 194 | 195 | if self.img_normal: 196 | img = self.img_normal(img) 197 | 198 | if self.hor_flip: 199 | img, label = imutils.random_lr_flip((img, label)) 200 | 201 | if self.crop_method == "random": 202 | img, label = 
imutils.random_crop((img, label), self.crop_size, (0, 255)) 203 | else: 204 | img = imutils.top_left_crop(img, self.crop_size, 0) 205 | label = imutils.top_left_crop(label, self.crop_size, 255) 206 | 207 | img = imutils.HWC_to_CHW(img) 208 | 209 | return {'name': name, 'img': img, 'label':label} 210 | 211 | def get_label_by_id(self,ids): 212 | label = imageio.imread(os.path.join(self.masks_path, str(ids) + '.png')) 213 | return label 214 | 215 | def get_label_by_name(self,name): 216 | # COCO_val2014_000000159977.jpg 217 | label = imageio.imread(os.path.join(self.masks_path, str(int(name.split('.')[0].split('_')[-1])) + '.png')) 218 | return label 219 | 220 | def __len__(self): 221 | return len(self.coco) 222 | 223 | class COCOAffinityDataset(COCOSegmentationDataset): 224 | def __init__(self, image_dir, anno_path, label_dir, crop_size, indices_from, indices_to, 225 | rescale=None, img_normal=TorchvisionNormalize(), hor_flip=False, crop_method=None): 226 | super().__init__(image_dir, anno_path, label_dir, crop_size, rescale, img_normal, hor_flip, crop_method=crop_method,read_ir_label=True) 227 | 228 | self.extract_aff_lab_func = GetAffinityLabelFromIndices(indices_from, indices_to) 229 | 230 | def __getitem__(self, idx): 231 | out = super().__getitem__(idx) 232 | 233 | reduced_label = imutils.pil_rescale(out['label'], 0.25, 0) 234 | 235 | out['aff_bg_pos_label'], out['aff_fg_pos_label'], out['aff_neg_label'] = self.extract_aff_lab_func(reduced_label) 236 | 237 | return out -------------------------------------------------------------------------------- /net/__pycache__/resnet50.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/net/__pycache__/resnet50.cpython-36.pyc -------------------------------------------------------------------------------- /net/__pycache__/resnet50_cam.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/net/__pycache__/resnet50_cam.cpython-36.pyc -------------------------------------------------------------------------------- /net/__pycache__/resnet50_fpn2_cam.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/net/__pycache__/resnet50_fpn2_cam.cpython-36.pyc -------------------------------------------------------------------------------- /net/__pycache__/resnet50_fpn3_cam.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/net/__pycache__/resnet50_fpn3_cam.cpython-36.pyc -------------------------------------------------------------------------------- /net/__pycache__/resnet50_fpn_cam.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/net/__pycache__/resnet50_fpn_cam.cpython-36.pyc -------------------------------------------------------------------------------- /net/__pycache__/resnet50_fpn_cam_share.cpython-36.pyc: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/net/__pycache__/resnet50_fpn_cam_share.cpython-36.pyc -------------------------------------------------------------------------------- /net/__pycache__/resnet50_irn.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/net/__pycache__/resnet50_irn.cpython-36.pyc -------------------------------------------------------------------------------- /net/resnet50.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import torch.nn.functional as F 3 | import torch.utils.model_zoo as model_zoo 4 | 5 | model_urls = { 6 | 'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth' 7 | } 8 | 9 | 10 | class FixedBatchNorm(nn.BatchNorm2d): 11 | def forward(self, input): 12 | return F.batch_norm(input, self.running_mean, self.running_var, self.weight, self.bias, 13 | training=False, eps=self.eps) 14 | 15 | 16 | class Bottleneck(nn.Module): 17 | expansion = 4 18 | 19 | def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1): 20 | super(Bottleneck, self).__init__() 21 | self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) 22 | self.bn1 = FixedBatchNorm(planes) 23 | self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, 24 | padding=dilation, bias=False, dilation=dilation) 25 | self.bn2 = FixedBatchNorm(planes) 26 | self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False) 27 | self.bn3 = FixedBatchNorm(planes * 4) 28 | self.relu = nn.ReLU(inplace=True) 29 | self.downsample = downsample 30 | self.stride = stride 31 | self.dilation = dilation 32 | 33 | def forward(self, x): 34 | residual = x 35 | 36 | out = self.conv1(x) 37 | out = self.bn1(out) 38 | out = self.relu(out) 39 | 40 | out = self.conv2(out) 41 | out = self.bn2(out) 42 | out = self.relu(out) 43 | 44 | out = self.conv3(out) 45 | out = self.bn3(out) 46 | 47 | if self.downsample is not None: 48 | residual = self.downsample(x) 49 | 50 | out += residual 51 | out = self.relu(out) 52 | 53 | return out 54 | 55 | 56 | class ResNet(nn.Module): 57 | 58 | def __init__(self, block, layers, strides=(2, 2, 2, 2), dilations=(1, 1, 1, 1)): 59 | self.inplanes = 64 60 | super(ResNet, self).__init__() 61 | self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=strides[0], padding=3, 62 | bias=False) 63 | self.bn1 = FixedBatchNorm(64) 64 | self.relu = nn.ReLU(inplace=True) 65 | self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) 66 | 67 | self.layer1 = self._make_layer(block, 64, layers[0], stride=1, dilation=dilations[0]) 68 | self.layer2 = self._make_layer(block, 128, layers[1], stride=strides[1], dilation=dilations[1]) 69 | self.layer3 = self._make_layer(block, 256, layers[2], stride=strides[2], dilation=dilations[2]) 70 | self.layer4 = self._make_layer(block, 512, layers[3], stride=strides[3], dilation=dilations[3]) 71 | 72 | self.inplanes = 1024 73 | 74 | #self.avgpool = nn.AvgPool2d(7, stride=1) 75 | #self.fc = nn.Linear(512 * block.expansion, 1000) 76 | 77 | 78 | def _make_layer(self, block, planes, blocks, stride=1, dilation=1): 79 | downsample = None 80 | if stride != 1 or self.inplanes != planes * block.expansion: 81 | downsample = nn.Sequential( 82 | nn.Conv2d(self.inplanes, planes * block.expansion, 83 | kernel_size=1, stride=stride, bias=False), 84 | FixedBatchNorm(planes * 
block.expansion), 85 | ) 86 | 87 | layers = [block(self.inplanes, planes, stride, downsample, dilation=1)] 88 | self.inplanes = planes * block.expansion 89 | for i in range(1, blocks): 90 | layers.append(block(self.inplanes, planes, dilation=dilation)) 91 | 92 | return nn.Sequential(*layers) 93 | 94 | def forward(self, x): 95 | x = self.conv1(x) 96 | x = self.bn1(x) 97 | x = self.relu(x) 98 | x = self.maxpool(x) 99 | 100 | x = self.layer1(x) 101 | x = self.layer2(x) 102 | x = self.layer3(x) 103 | x = self.layer4(x) 104 | # avgpool/fc are disabled in __init__, so return the backbone feature map; task-specific heads live in the subclasses (net/resnet50_cam.py, net/resnet50_irn.py) 105 | # x = self.avgpool(x) 106 | # x = x.view(x.size(0), -1) 107 | # x = self.fc(x) 108 | 109 | return x 110 | 111 | 112 | def resnet50(pretrained=True, **kwargs): 113 | 114 | model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) 115 | if pretrained: 116 | state_dict = model_zoo.load_url(model_urls['resnet50']) 117 | # state_dict.pop('fc.weight') 118 | # state_dict.pop('fc.bias') 119 | model.load_state_dict(state_dict, strict=False) 120 | print("pretrained model initialized") 121 | 122 | return model -------------------------------------------------------------------------------- /net/resnet50_cam.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | import torch.nn as nn 4 | import torch.nn.functional as F 5 | from misc import torchutils 6 | from net import resnet50 7 | 8 | 9 | class Net(nn.Module): 10 | 11 | def __init__(self, stride=16, n_classes=20): 12 | super(Net, self).__init__() 13 | if stride == 16: 14 | self.resnet50 = resnet50.resnet50(pretrained=True, strides=(2, 2, 2, 1)) 15 | self.stage1 = nn.Sequential(self.resnet50.conv1, self.resnet50.bn1, self.resnet50.relu, self.resnet50.maxpool,self.resnet50.layer1) 16 | else: 17 | self.resnet50 = resnet50.resnet50(pretrained=True, strides=(2, 2, 1, 1), dilations=(1, 1, 2, 2)) 18 | self.stage1 = nn.Sequential(self.resnet50.conv1, self.resnet50.bn1, self.resnet50.relu, self.resnet50.maxpool,self.resnet50.layer1) 19 | self.stage2 = nn.Sequential(self.resnet50.layer2) 20 | self.stage3 = nn.Sequential(self.resnet50.layer3) 21 | self.stage4 = nn.Sequential(self.resnet50.layer4) 22 | self.n_classes = n_classes 23 | self.classifier = nn.Conv2d(2048, n_classes, 1, bias=False) 24 | 25 | self.backbone = nn.ModuleList([self.stage1, self.stage2, self.stage3, self.stage4]) 26 | self.newly_added = nn.ModuleList([self.classifier]) 27 | 28 | 29 | def forward(self, x): 30 | 31 | x = self.stage1(x) 32 | x = self.stage2(x) 33 | 34 | x = self.stage3(x) 35 | x = self.stage4(x) 36 | 37 | x = torchutils.gap2d(x, keepdims=True) 38 | x = self.classifier(x) 39 | x = x.view(-1, self.n_classes) 40 | 41 | return x 42 | 43 | def train(self, mode=True): 44 | super(Net, self).train(mode) 45 | for p in self.resnet50.conv1.parameters(): 46 | p.requires_grad = False 47 | for p in self.resnet50.bn1.parameters(): 48 | p.requires_grad = False 49 | 50 | def trainable_parameters(self): 51 | 52 | return (list(self.backbone.parameters()), list(self.newly_added.parameters())) 53 | 54 | class Net_CAM(Net): 55 | 56 | def __init__(self,stride=16,n_classes=20): 57 | super(Net_CAM, self).__init__(stride=stride,n_classes=n_classes) 58 | 59 | def forward(self, x): 60 | 61 | x = self.stage1(x) 62 | x = self.stage2(x) 63 | 64 | x = self.stage3(x) 65 | feature = self.stage4(x) 66 | 67 | x = torchutils.gap2d(feature, keepdims=True) 68 | x = self.classifier(x) 69 | x = x.view(-1, self.n_classes) 70 | 71 | cams = F.conv2d(feature, self.classifier.weight) 72 | cams = F.relu(cams) 73 | # classification logits, raw CAMs, and the stage-4 feature map 74 | return
x,cams,feature 75 | 76 | class Net_CAM_Feature(Net): 77 | 78 | def __init__(self,stride=16,n_classes=20): 79 | super(Net_CAM_Feature, self).__init__(stride=stride,n_classes=n_classes) 80 | 81 | def forward(self, x): 82 | 83 | x = self.stage1(x) 84 | x = self.stage2(x) 85 | 86 | x = self.stage3(x) 87 | feature = self.stage4(x) # bs*2048*32*32 88 | 89 | x = torchutils.gap2d(feature, keepdims=True) 90 | x = self.classifier(x) 91 | x = x.view(-1, self.n_classes) 92 | 93 | cams = F.conv2d(feature, self.classifier.weight) 94 | cams = F.relu(cams) 95 | cams = cams/(F.adaptive_max_pool2d(cams, (1, 1)) + 1e-5) 96 | cams_feature = cams.unsqueeze(2)*feature.unsqueeze(1) # bs*20*2048*32*32 97 | cams_feature = cams_feature.view(cams_feature.size(0),cams_feature.size(1),cams_feature.size(2),-1) 98 | cams_feature = torch.mean(cams_feature,-1) 99 | 100 | return x,cams_feature,cams 101 | 102 | class CAM(Net): 103 | 104 | def __init__(self, stride=16,n_classes=20): 105 | super(CAM, self).__init__(stride=stride,n_classes=n_classes) 106 | 107 | def forward(self, x, separate=False): 108 | x = self.stage1(x) 109 | x = self.stage2(x) 110 | x = self.stage3(x) 111 | x = self.stage4(x) 112 | x = F.conv2d(x, self.classifier.weight) 113 | if separate: 114 | return x 115 | x = F.relu(x) 116 | x = x[0] + x[1].flip(-1) 117 | 118 | return x 119 | 120 | def forward1(self, x, weight, separate=False): 121 | x = self.stage1(x) 122 | x = self.stage2(x) 123 | x = self.stage3(x) 124 | x = self.stage4(x) 125 | x = F.conv2d(x, weight) 126 | 127 | if separate: 128 | return x 129 | x = F.relu(x) 130 | x = x[0] + x[1].flip(-1) 131 | 132 | return x 133 | 134 | def forward2(self, x, weight, separate=False): 135 | x = self.stage1(x) 136 | x = self.stage2(x) 137 | x = self.stage3(x) 138 | x = self.stage4(x) 139 | x = F.conv2d(x, weight*self.classifier.weight) 140 | 141 | if separate: 142 | return x 143 | x = F.relu(x) 144 | x = x[0] + x[1].flip(-1) 145 | return x 146 | 147 | class Class_Predictor(nn.Module): 148 | def __init__(self, num_classes, representation_size): 149 | super(Class_Predictor, self).__init__() 150 | self.num_classes = num_classes 151 | self.classifier = nn.Conv2d(representation_size, num_classes, 1, bias=False) 152 | 153 | def forward(self, x, label): 154 | batch_size = x.shape[0] 155 | x = x.reshape(batch_size,self.num_classes,-1) # bs*20*2048 156 | mask = label>0 # bs*20 157 | 158 | feature_list = [x[i][mask[i]] for i in range(batch_size)] # bs*n*2048 159 | prediction = [self.classifier(y.unsqueeze(-1).unsqueeze(-1)).squeeze(-1).squeeze(-1) for y in feature_list] 160 | labels = [torch.nonzero(label[i]).squeeze(1) for i in range(label.shape[0])] 161 | 162 | loss = 0 163 | acc = 0 164 | num = 0 165 | for logit,label in zip(prediction, labels): 166 | if label.shape[0] == 0: 167 | continue 168 | loss_ce= F.cross_entropy(logit, label) 169 | loss += loss_ce 170 | acc += (logit.argmax(dim=1)==label.view(-1)).sum().float() 171 | num += label.size(0) 172 | 173 | return loss/batch_size, acc/num 174 | -------------------------------------------------------------------------------- /net/resnet50_irn.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.functional as F 4 | from net import resnet50 5 | 6 | 7 | class Net(nn.Module): 8 | 9 | def __init__(self): 10 | super(Net, self).__init__() 11 | 12 | # backbone 13 | self.resnet50 = resnet50.resnet50(pretrained=True, strides=[2, 2, 2, 1]) 14 | 15 | self.stage1 = 
nn.Sequential(self.resnet50.conv1, self.resnet50.bn1, self.resnet50.relu, self.resnet50.maxpool) 16 | self.stage2 = nn.Sequential(self.resnet50.layer1) 17 | self.stage3 = nn.Sequential(self.resnet50.layer2) 18 | self.stage4 = nn.Sequential(self.resnet50.layer3) 19 | self.stage5 = nn.Sequential(self.resnet50.layer4) 20 | self.mean_shift = Net.MeanShift(2) 21 | 22 | # branch: class boundary detection 23 | self.fc_edge1 = nn.Sequential( 24 | nn.Conv2d(64, 32, 1, bias=False), 25 | nn.GroupNorm(4, 32), 26 | nn.ReLU(inplace=True), 27 | ) 28 | self.fc_edge2 = nn.Sequential( 29 | nn.Conv2d(256, 32, 1, bias=False), 30 | nn.GroupNorm(4, 32), 31 | nn.ReLU(inplace=True), 32 | ) 33 | self.fc_edge3 = nn.Sequential( 34 | nn.Conv2d(512, 32, 1, bias=False), 35 | nn.GroupNorm(4, 32), 36 | nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False), 37 | nn.ReLU(inplace=True), 38 | ) 39 | self.fc_edge4 = nn.Sequential( 40 | nn.Conv2d(1024, 32, 1, bias=False), 41 | nn.GroupNorm(4, 32), 42 | nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False), 43 | nn.ReLU(inplace=True), 44 | ) 45 | self.fc_edge5 = nn.Sequential( 46 | nn.Conv2d(2048, 32, 1, bias=False), 47 | nn.GroupNorm(4, 32), 48 | nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False), 49 | nn.ReLU(inplace=True), 50 | ) 51 | self.fc_edge6 = nn.Conv2d(160, 1, 1, bias=True) 52 | 53 | # branch: displacement field 54 | self.fc_dp1 = nn.Sequential( 55 | nn.Conv2d(64, 64, 1, bias=False), 56 | nn.GroupNorm(8, 64), 57 | nn.ReLU(inplace=True), 58 | ) 59 | self.fc_dp2 = nn.Sequential( 60 | nn.Conv2d(256, 128, 1, bias=False), 61 | nn.GroupNorm(16, 128), 62 | nn.ReLU(inplace=True), 63 | ) 64 | self.fc_dp3 = nn.Sequential( 65 | nn.Conv2d(512, 256, 1, bias=False), 66 | nn.GroupNorm(16, 256), 67 | nn.ReLU(inplace=True), 68 | ) 69 | self.fc_dp4 = nn.Sequential( 70 | nn.Conv2d(1024, 256, 1, bias=False), 71 | nn.GroupNorm(16, 256), 72 | nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False), 73 | nn.ReLU(inplace=True), 74 | ) 75 | self.fc_dp5 = nn.Sequential( 76 | nn.Conv2d(2048, 256, 1, bias=False), 77 | nn.GroupNorm(16, 256), 78 | nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False), 79 | nn.ReLU(inplace=True), 80 | ) 81 | self.fc_dp6 = nn.Sequential( 82 | nn.Conv2d(768, 256, 1, bias=False), 83 | nn.GroupNorm(16, 256), 84 | nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False), 85 | nn.ReLU(inplace=True), 86 | ) 87 | self.fc_dp7 = nn.Sequential( 88 | nn.Conv2d(448, 256, 1, bias=False), 89 | nn.GroupNorm(16, 256), 90 | nn.ReLU(inplace=True), 91 | nn.Conv2d(256, 2, 1, bias=False), 92 | self.mean_shift 93 | ) 94 | 95 | self.backbone = nn.ModuleList([self.stage1, self.stage2, self.stage3, self.stage4, self.stage5]) 96 | self.edge_layers = nn.ModuleList([self.fc_edge1, self.fc_edge2, self.fc_edge3, self.fc_edge4, self.fc_edge5, self.fc_edge6]) 97 | self.dp_layers = nn.ModuleList([self.fc_dp1, self.fc_dp2, self.fc_dp3, self.fc_dp4, self.fc_dp5, self.fc_dp6, self.fc_dp7]) 98 | 99 | class MeanShift(nn.Module): 100 | 101 | def __init__(self, num_features): 102 | super(Net.MeanShift, self).__init__() 103 | self.register_buffer('running_mean', torch.zeros(num_features)) 104 | 105 | def forward(self, input): 106 | if self.training: 107 | return input 108 | return input - self.running_mean.view(1, 2, 1, 1) 109 | 110 | def forward(self, x): 111 | x1 = self.stage1(x).detach() 112 | x2 = self.stage2(x1).detach() 113 | x3 = self.stage3(x2).detach() 114 | x4 = self.stage4(x3).detach() 115 | x5 = self.stage5(x4).detach() 116 
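# NOTE (shape sketch, assuming a 512x512 input and strides=[2, 2, 2, 1]):
# x1: 64ch @ 128x128, x2: 256ch @ 128x128, x3: 512ch @ 64x64,
# x4: 1024ch @ 32x32, x5: 2048ch @ 32x32 (layer4 keeps stride 1).
# The .detach() calls above keep the frozen ResNet-50 out of the gradient
# path; only the edge/displacement heads train (see train() and
# trainable_parameters() further down). fc_edge3-5 upsample by 2x/4x/4x so
# all five edge features meet on the 128x128 grid before the 160->1 fusion
# conv, and the [..., :h, :w] crops below guard against off-by-one sizes.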
| 117 | edge1 = self.fc_edge1(x1) 118 | edge2 = self.fc_edge2(x2) 119 | edge3 = self.fc_edge3(x3)[..., :edge2.size(2), :edge2.size(3)] 120 | edge4 = self.fc_edge4(x4)[..., :edge2.size(2), :edge2.size(3)] 121 | edge5 = self.fc_edge5(x5)[..., :edge2.size(2), :edge2.size(3)] 122 | edge_out = self.fc_edge6(torch.cat([edge1, edge2, edge3, edge4, edge5], dim=1)) 123 | 124 | dp1 = self.fc_dp1(x1) 125 | dp2 = self.fc_dp2(x2) 126 | dp3 = self.fc_dp3(x3) 127 | dp4 = self.fc_dp4(x4)[..., :dp3.size(2), :dp3.size(3)] 128 | dp5 = self.fc_dp5(x5)[..., :dp3.size(2), :dp3.size(3)] 129 | 130 | dp_up3 = self.fc_dp6(torch.cat([dp3, dp4, dp5], dim=1))[..., :dp2.size(2), :dp2.size(3)] 131 | dp_out = self.fc_dp7(torch.cat([dp1, dp2, dp_up3], dim=1)) 132 | 133 | return edge_out, dp_out 134 | 135 | def trainable_parameters(self): 136 | return (tuple(self.edge_layers.parameters()), 137 | tuple(self.dp_layers.parameters())) 138 | 139 | def train(self, mode=True): 140 | super().train(mode) 141 | self.backbone.eval() 142 | 143 | 144 | class AffinityDisplacementLoss(Net): 145 | 146 | path_indices_prefix = "path_indices" 147 | 148 | def __init__(self, path_index): 149 | 150 | super(AffinityDisplacementLoss, self).__init__() 151 | 152 | self.path_index = path_index 153 | 154 | self.n_path_lengths = len(path_index.path_indices) 155 | for i, pi in enumerate(path_index.path_indices): 156 | self.register_buffer(AffinityDisplacementLoss.path_indices_prefix + str(i), torch.from_numpy(pi)) 157 | 158 | self.register_buffer( 159 | 'disp_target', 160 | torch.unsqueeze(torch.unsqueeze(torch.from_numpy(path_index.search_dst).transpose(1, 0), 0), -1).float()) 161 | 162 | def to_affinity(self, edge): 163 | aff_list = [] 164 | edge = edge.view(edge.size(0), -1) 165 | 166 | for i in range(self.n_path_lengths): 167 | ind = self._buffers[AffinityDisplacementLoss.path_indices_prefix + str(i)] 168 | ind_flat = ind.view(-1) 169 | dist = torch.index_select(edge, dim=-1, index=ind_flat) 170 | dist = dist.view(dist.size(0), ind.size(0), ind.size(1), ind.size(2)) 171 | aff = torch.squeeze(1 - F.max_pool2d(dist, (dist.size(2), 1)), dim=2) 172 | aff_list.append(aff) 173 | aff_cat = torch.cat(aff_list, dim=1) 174 | 175 | return aff_cat 176 | 177 | def to_pair_displacement(self, disp): 178 | height, width = disp.size(2), disp.size(3) 179 | radius_floor = self.path_index.radius_floor 180 | 181 | cropped_height = height - radius_floor 182 | cropped_width = width - 2 * radius_floor 183 | 184 | disp_src = disp[:, :, :cropped_height, radius_floor:radius_floor + cropped_width] 185 | 186 | disp_dst = [disp[:, :, dy:dy + cropped_height, radius_floor + dx:radius_floor + dx + cropped_width] 187 | for dy, dx in self.path_index.search_dst] 188 | disp_dst = torch.stack(disp_dst, 2) 189 | 190 | pair_disp = torch.unsqueeze(disp_src, 2) - disp_dst 191 | pair_disp = pair_disp.view(pair_disp.size(0), pair_disp.size(1), pair_disp.size(2), -1) 192 | 193 | return pair_disp 194 | 195 | def to_displacement_loss(self, pair_disp): 196 | return torch.abs(pair_disp - self.disp_target) 197 | 198 | def forward(self, *inputs): 199 | x, return_loss = inputs 200 | edge_out, dp_out = super().forward(x) 201 | 202 | if return_loss is False: 203 | return edge_out, dp_out 204 | 205 | aff = self.to_affinity(torch.sigmoid(edge_out)) 206 | pos_aff_loss = (-1) * torch.log(aff + 1e-5) 207 | neg_aff_loss = (-1) * torch.log(1. 
+ 1e-5 - aff) 208 | 209 | pair_disp = self.to_pair_displacement(dp_out) 210 | dp_fg_loss = self.to_displacement_loss(pair_disp) 211 | dp_bg_loss = torch.abs(pair_disp) 212 | 213 | return pos_aff_loss, neg_aff_loss, dp_fg_loss, dp_bg_loss 214 | 215 | 216 | class EdgeDisplacement(Net): 217 | 218 | def __init__(self, crop_size=512, stride=4): 219 | super(EdgeDisplacement, self).__init__() 220 | self.crop_size = crop_size 221 | self.stride = stride 222 | 223 | def forward(self, x): 224 | feat_size = (x.size(2)-1)//self.stride+1, (x.size(3)-1)//self.stride+1 225 | 226 | # x = F.pad(x, [0, self.crop_size-x.size(3), 0, self.crop_size-x.size(2)]) 227 | edge_out, dp_out = super().forward(x) 228 | edge_out = edge_out[..., :feat_size[0], :feat_size[1]] 229 | dp_out = dp_out[..., :feat_size[0], :feat_size[1]] 230 | 231 | edge_out = torch.sigmoid(edge_out[0]/2 + edge_out[1].flip(-1)/2) 232 | dp_out = dp_out[0] 233 | 234 | return edge_out, dp_out 235 | 236 | 237 | -------------------------------------------------------------------------------- /run_sample.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import numpy as np 4 | import os.path as osp 5 | 6 | from misc import pyutils 7 | 8 | if __name__ == '__main__': 9 | def str2bool(v): 10 | if isinstance(v, bool): 11 | return v 12 | if v.lower() in ('yes', 'true', 't', 'y', '1'): 13 | return True 14 | elif v.lower() in ('no', 'false', 'f', 'n', '0'): 15 | return False 16 | else: 17 | raise argparse.ArgumentTypeError('Boolean value expected.') 18 | 19 | parser = argparse.ArgumentParser() 20 | 21 | # Environment 22 | # parser.add_argument("--num_workers", default=os.cpu_count()//2, type=int) 23 | parser.add_argument("--num_workers", default=12, type=int) 24 | parser.add_argument("--voc12_root", default='../VOCdevkit/VOC2012/', type=str, 25 | help="Path to VOC 2012 Devkit, must contain ./JPEGImages as subdirectory.") 26 | 27 | # Dataset 28 | parser.add_argument("--train_list", default="voc12/train_aug.txt", type=str) 29 | parser.add_argument("--val_list", default="voc12/val.txt", type=str) 30 | parser.add_argument("--infer_list", default="voc12/train_aug.txt", type=str, 31 | help="voc12/train_aug.txt to train a fully supervised model, " 32 | "voc12/train.txt or voc12/val.txt to quickly check the quality of the labels.") 33 | parser.add_argument("--chainer_eval_set", default="train", type=str) 34 | 35 | # Class Activation Map 36 | parser.add_argument("--cam_network", default="net.resnet50_cam", type=str) 37 | parser.add_argument("--feature_dim", default=2048, type=int) 38 | parser.add_argument("--cam_crop_size", default=512, type=int) 39 | parser.add_argument("--cam_batch_size", default=16, type=int) 40 | parser.add_argument("--cam_num_epoches", default=5, type=int) 41 | parser.add_argument("--cam_learning_rate", default=0.1, type=float) 42 | parser.add_argument("--cam_weight_decay", default=1e-4, type=float) 43 | parser.add_argument("--cam_eval_thres", default=0.21, type=float) 44 | parser.add_argument("--cam_scales", default=(1.0, 0.5, 1.5, 2.0), 45 | help="Multi-scale inferences") 46 | # ReCAM 47 | parser.add_argument("--recam_num_epoches", default=4, type=int) 48 | parser.add_argument("--recam_learning_rate", default=0.0005, type=float) 49 | parser.add_argument("--recam_loss_weight", default=1.0, type=float) 50 | 51 | # Mining Inter-pixel Relations 52 | parser.add_argument("--conf_fg_thres", default=0.35, type=float) 53 | parser.add_argument("--conf_bg_thres", default=0.1, 
type=float) 54 | 55 | # Inter-pixel Relation Network (IRNet) 56 | parser.add_argument("--irn_network", default="net.resnet50_irn", type=str) 57 | parser.add_argument("--irn_crop_size", default=512, type=int) 58 | parser.add_argument("--irn_batch_size", default=32, type=int) 59 | parser.add_argument("--irn_num_epoches", default=3, type=int) 60 | parser.add_argument("--irn_learning_rate", default=0.1, type=float) 61 | parser.add_argument("--irn_weight_decay", default=1e-4, type=float) 62 | 63 | # Random Walk Params 64 | parser.add_argument("--beta", default=10, type=float) 65 | parser.add_argument("--exp_times", default=8, type=int, 66 | help="Hyper-parameter that controls the number of random walk iterations; " 67 | "the random walk is performed for 2^{exp_times} iterations.") 68 | parser.add_argument("--sem_seg_bg_thres", default=0.28, type=float) 69 | 70 | # Output Path 71 | parser.add_argument("--work_space", default="result_default5", type=str) # set your path 72 | parser.add_argument("--log_name", default="sample_train_eval", type=str) 73 | parser.add_argument("--cam_weights_name", default="res50_cam.pth", type=str) 74 | parser.add_argument("--irn_weights_name", default="res50_irn.pth", type=str) 75 | parser.add_argument("--cam_out_dir", default="cam_mask", type=str) 76 | parser.add_argument("--ir_label_out_dir", default="ir_label", type=str) 77 | parser.add_argument("--sem_seg_out_dir", default="sem_seg", type=str) 78 | parser.add_argument("--ins_seg_out_dir", default="ins_seg", type=str) 79 | parser.add_argument("--recam_weight_dir", default="recam_weight", type=str) 80 | 81 | # Step 82 | parser.add_argument("--train_cam_pass", type=str2bool, default=False) 83 | parser.add_argument("--train_recam_pass", type=str2bool, default=False) 84 | parser.add_argument("--make_cam_pass", type=str2bool, default=False) 85 | parser.add_argument("--make_recam_pass", type=str2bool, default=False) 86 | parser.add_argument("--eval_cam_pass", type=str2bool, default=False) 87 | parser.add_argument("--cam_to_ir_label_pass", type=str2bool, default=False) 88 | parser.add_argument("--train_irn_pass", type=str2bool, default=False) 89 | parser.add_argument("--make_ins_seg_pass", type=str2bool, default=False) 90 | parser.add_argument("--eval_ins_seg_pass", type=str2bool, default=False) 91 | parser.add_argument("--make_sem_seg_pass", type=str2bool, default=False) 92 | parser.add_argument("--eval_sem_seg_pass", type=str2bool, default=False) 93 | 94 | args = parser.parse_args() 95 | args.log_name = osp.join(args.work_space,args.log_name) 96 | args.cam_weights_name = osp.join(args.work_space,args.cam_weights_name) 97 | args.irn_weights_name = osp.join(args.work_space,args.irn_weights_name) 98 | args.cam_out_dir = osp.join(args.work_space,args.cam_out_dir) 99 | args.ir_label_out_dir = osp.join(args.work_space,args.ir_label_out_dir) 100 | args.sem_seg_out_dir = osp.join(args.work_space,args.sem_seg_out_dir) 101 | args.ins_seg_out_dir = osp.join(args.work_space,args.ins_seg_out_dir) 102 | args.recam_weight_dir = osp.join(args.work_space,args.recam_weight_dir) 103 | 104 | os.makedirs(args.work_space, exist_ok=True) 105 | os.makedirs(args.cam_out_dir, exist_ok=True) 106 | os.makedirs(args.ir_label_out_dir, exist_ok=True) 107 | os.makedirs(args.sem_seg_out_dir, exist_ok=True) 108 | os.makedirs(args.ins_seg_out_dir, exist_ok=True) 109 | os.makedirs(args.recam_weight_dir, exist_ok=True) 110 | pyutils.Logger(args.log_name + '.log') 111 | print(vars(args)) 112 | 113 | 114 | if args.train_cam_pass is True: 115 | import step.train_cam 116 | 117 | timer = 
pyutils.Timer('step.train_cam:') 118 | step.train_cam.run(args) 119 | 120 | 121 | if args.train_recam_pass is True: 122 | import step.train_recam 123 | 124 | timer = pyutils.Timer('step.train_recam:') 125 | step.train_recam.run(args) 126 | 127 | if args.make_cam_pass is True: 128 | import step.make_cam 129 | 130 | timer = pyutils.Timer('step.make_cam:') 131 | step.make_cam.run(args) 132 | 133 | if args.make_recam_pass is True: 134 | import step.make_recam 135 | 136 | timer = pyutils.Timer('step.make_recam:') 137 | step.make_recam.run(args) 138 | 139 | if args.eval_cam_pass is True: 140 | import step.eval_cam 141 | 142 | timer = pyutils.Timer('step.eval_cam:') 143 | step.eval_cam.run(args) 144 | 145 | if args.cam_to_ir_label_pass is True: 146 | import step.cam_to_ir_label 147 | 148 | timer = pyutils.Timer('step.cam_to_ir_label:') 149 | step.cam_to_ir_label.run(args) 150 | 151 | if args.train_irn_pass is True: 152 | import step.train_irn 153 | 154 | timer = pyutils.Timer('step.train_irn:') 155 | step.train_irn.run(args) 156 | 157 | if args.make_sem_seg_pass is True: 158 | import step.make_sem_seg_labels 159 | args.sem_seg_bg_thres = float(args.sem_seg_bg_thres) 160 | timer = pyutils.Timer('step.make_sem_seg_labels:') 161 | step.make_sem_seg_labels.run(args) 162 | 163 | if args.eval_sem_seg_pass is True: 164 | import step.eval_sem_seg 165 | 166 | timer = pyutils.Timer('step.eval_sem_seg:') 167 | step.eval_sem_seg.run(args) 168 | 169 | -------------------------------------------------------------------------------- /run_sample_coco.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import numpy as np 4 | import os.path as osp 5 | 6 | from misc import pyutils 7 | 8 | if __name__ == '__main__': 9 | def str2bool(v): 10 | if isinstance(v, bool): 11 | return v 12 | if v.lower() in ('yes', 'true', 't', 'y', '1'): 13 | return True 14 | elif v.lower() in ('no', 'false', 'f', 'n', '0'): 15 | return False 16 | else: 17 | raise argparse.ArgumentTypeError('Boolean value expected.') 18 | 19 | parser = argparse.ArgumentParser() 20 | 21 | # Environment 22 | # parser.add_argument("--num_workers", default=os.cpu_count()//2, type=int) 23 | parser.add_argument("--num_workers", default=12, type=int) 24 | parser.add_argument("--mscoco_root", default='../MSCOCO/', type=str, help="Path to MSCOCO") 25 | 26 | parser.add_argument("--num_classes", default=80, type=int) 27 | 28 | # Class Activation Map 29 | parser.add_argument("--cam_network", default="net.resnet50_cam", type=str) 30 | parser.add_argument("--feature_dim", default=2048, type=int) 31 | parser.add_argument("--cam_crop_size", default=512, type=int) 32 | parser.add_argument("--cam_batch_size", default=16, type=int) 33 | parser.add_argument("--cam_num_epoches", default=5, type=int) 34 | parser.add_argument("--cam_learning_rate", default=0.1, type=float) 35 | parser.add_argument("--cam_weight_decay", default=1e-4, type=float) 36 | parser.add_argument("--cam_eval_thres", default=0.15, type=float) 37 | parser.add_argument("--cam_scales", default=(1.0, 0.5, 1.5, 2.0), 38 | help="Multi-scale inferences") 39 | # ReCAM 40 | parser.add_argument("--recam_num_epoches", default=4, type=int) 41 | parser.add_argument("--recam_learning_rate", default=0.0005, type=float) 42 | parser.add_argument("--recam_loss_weight", default=0.1, type=float) 43 | parser.add_argument("--recam_batch_size", default=6, type=int) 44 | 45 | 46 | # Mining Inter-pixel Relations 47 | parser.add_argument("--conf_fg_thres", 
default=0.35, type=float) 48 | parser.add_argument("--conf_bg_thres", default=0.1, type=float) 49 | 50 | # Inter-pixel Relation Network (IRNet) 51 | parser.add_argument("--irn_network", default="net.resnet50_irn", type=str) 52 | parser.add_argument("--irn_crop_size", default=512, type=int) 53 | parser.add_argument("--irn_batch_size", default=32, type=int) 54 | parser.add_argument("--irn_num_epoches", default=3, type=int) 55 | parser.add_argument("--irn_learning_rate", default=0.1, type=float) 56 | parser.add_argument("--irn_weight_decay", default=1e-4, type=float) 57 | 58 | # Random Walk Params 59 | parser.add_argument("--beta", default=10, type=float) 60 | parser.add_argument("--exp_times", default=8, type=int, 61 | help="Hyper-parameter that controls the number of random walk iterations; " 62 | "the random walk is performed for 2^{exp_times} iterations.") 63 | parser.add_argument("--ins_seg_bg_thres", default=0.25, type=float) 64 | parser.add_argument("--sem_seg_bg_thres", default=0.25, type=float) 65 | 66 | # Output Path 67 | parser.add_argument("--work_space", default="result_default5", type=str) # set your path 68 | parser.add_argument("--log_name", default="sample_train_eval", type=str) 69 | parser.add_argument("--cam_weights_name", default="res50_cam.pth", type=str) 70 | parser.add_argument("--irn_weights_name", default="res50_irn.pth", type=str) 71 | parser.add_argument("--cam_out_dir", default="cam_mask", type=str) 72 | parser.add_argument("--ir_label_out_dir", default="ir_label", type=str) 73 | parser.add_argument("--sem_seg_out_dir", default="sem_seg", type=str) 74 | parser.add_argument("--ins_seg_out_dir", default="ins_seg", type=str) 75 | parser.add_argument("--recam_weight_dir", default="recam_weight", type=str) 76 | 77 | # Step 78 | parser.add_argument("--train_cam_pass", type=str2bool, default=False) 79 | parser.add_argument("--train_recam_pass", type=str2bool, default=False) 80 | parser.add_argument("--make_cam_pass", type=str2bool, default=False) 81 | parser.add_argument("--make_recam_pass", type=str2bool, default=False) 82 | parser.add_argument("--eval_cam_pass", type=str2bool, default=False) 83 | parser.add_argument("--cam_to_ir_label_pass", type=str2bool, default=False) 84 | parser.add_argument("--train_irn_pass", type=str2bool, default=False) 85 | parser.add_argument("--make_ins_seg_pass", type=str2bool, default=False) 86 | parser.add_argument("--eval_ins_seg_pass", type=str2bool, default=False) 87 | parser.add_argument("--make_sem_seg_pass", type=str2bool, default=False) 88 | parser.add_argument("--eval_sem_seg_pass", type=str2bool, default=False) 89 | 90 | args = parser.parse_args() 91 | args.log_name = osp.join(args.work_space,args.log_name) 92 | args.cam_weights_name = osp.join(args.work_space,args.cam_weights_name) 93 | args.irn_weights_name = osp.join(args.work_space,args.irn_weights_name) 94 | args.cam_out_dir = osp.join(args.work_space,args.cam_out_dir) 95 | args.ir_label_out_dir = osp.join(args.work_space,args.ir_label_out_dir) 96 | args.sem_seg_out_dir = osp.join(args.work_space,args.sem_seg_out_dir) 97 | args.ins_seg_out_dir = osp.join(args.work_space,args.ins_seg_out_dir) 98 | args.recam_weight_dir = osp.join(args.work_space,args.recam_weight_dir) 99 | 100 | os.makedirs(args.work_space, exist_ok=True) 101 | os.makedirs(args.cam_out_dir, exist_ok=True) 102 | os.makedirs(args.ir_label_out_dir, exist_ok=True) 103 | os.makedirs(args.sem_seg_out_dir, exist_ok=True) 104 | os.makedirs(args.ins_seg_out_dir, exist_ok=True) 105 | os.makedirs(args.recam_weight_dir, exist_ok=True) 106 | pyutils.Logger(args.log_name + '.log') 107 
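# All *_name/*_dir arguments above have been re-rooted under --work_space, so
# one directory per experiment collects the weights, CAM .npy files, IR labels
# and pseudo masks. Each *_pass flag below simply gates the matching step_coco
# module; str2bool (defined at the top of this file) accepts e.g. "True",
# "yes", "t" or "1", so individual passes can be toggled from the command line.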
| print(vars(args)) 108 | 109 | if args.train_cam_pass is True: 110 | import step_coco.train_cam 111 | 112 | timer = pyutils.Timer('step.train_cam:') 113 | step_coco.train_cam.run(args) 114 | 115 | if args.train_recam_pass is True: 116 | import step_coco.train_recam 117 | 118 | timer = pyutils.Timer('step.train_recam:') 119 | step_coco.train_recam.run(args) 120 | 121 | if args.make_cam_pass is True: 122 | import step_coco.make_cam 123 | 124 | timer = pyutils.Timer('step.make_cam:') 125 | step_coco.make_cam.run(args) 126 | 127 | if args.make_recam_pass is True: 128 | import step_coco.make_recam 129 | 130 | timer = pyutils.Timer('step.make_recam:') 131 | step_coco.make_recam.run(args) 132 | 133 | if args.eval_cam_pass is True: 134 | import step_coco.eval_cam 135 | 136 | timer = pyutils.Timer('step.eval_cam:') 137 | step_coco.eval_cam.run(args) 138 | if args.cam_to_ir_label_pass is True: 139 | import step_coco.cam_to_ir_label 140 | 141 | timer = pyutils.Timer('step.cam_to_ir_label:') 142 | step_coco.cam_to_ir_label.run(args) 143 | 144 | if args.train_irn_pass is True: 145 | import step_coco.train_irn 146 | 147 | timer = pyutils.Timer('step.train_irn:') 148 | step_coco.train_irn.run(args) 149 | 150 | if args.make_sem_seg_pass is True: 151 | import step_coco.make_sem_seg_labels 152 | args.sem_seg_bg_thres = float(args.sem_seg_bg_thres) 153 | timer = pyutils.Timer('step.make_sem_seg_labels:') 154 | step_coco.make_sem_seg_labels.run(args) 155 | 156 | if args.eval_sem_seg_pass is True: 157 | import step_coco.eval_sem_seg 158 | timer = pyutils.Timer('step.eval_sem_seg:') 159 | step_coco.eval_sem_seg.run(args) 160 | 161 | -------------------------------------------------------------------------------- /step/__pycache__/cam_to_ir_label.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step/__pycache__/cam_to_ir_label.cpython-36.pyc -------------------------------------------------------------------------------- /step/__pycache__/eval_cam.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step/__pycache__/eval_cam.cpython-36.pyc -------------------------------------------------------------------------------- /step/__pycache__/make_recam.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step/__pycache__/make_recam.cpython-36.pyc -------------------------------------------------------------------------------- /step/__pycache__/make_sem_seg_labels.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step/__pycache__/make_sem_seg_labels.cpython-36.pyc -------------------------------------------------------------------------------- /step/__pycache__/train_irn.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step/__pycache__/train_irn.cpython-36.pyc -------------------------------------------------------------------------------- /step/__pycache__/train_recam.cpython-36.pyc: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step/__pycache__/train_recam.cpython-36.pyc -------------------------------------------------------------------------------- /step/cam_to_ir_label.py: -------------------------------------------------------------------------------- 1 | 2 | import os 3 | import numpy as np 4 | import imageio 5 | 6 | from torch import multiprocessing 7 | from torch.utils.data import DataLoader 8 | 9 | import voc12.dataloader 10 | from misc import torchutils, imutils 11 | from PIL import Image 12 | 13 | 14 | palette = [0,0,0, 128,0,0, 0,128,0, 128,128,0, 0,0,128, 128,0,128, 0,128,128, 128,128,128, 15 | 64,0,0, 192,0,0, 64,128,0, 192,128,0, 64,0,128, 192,0,128, 64,128,128, 192,128,128, 16 | 0,64,0, 128,64,0, 0,192,0, 128,192,0, 0,64,128, 128,64,128, 0,192,128, 128,192,128, 17 | 64,64,0, 192,64,0, 64,192,0, 192,192,0] 18 | 19 | def _work(process_id, infer_dataset, args): 20 | visualize_intermediate_cam = False 21 | databin = infer_dataset[process_id] 22 | infer_data_loader = DataLoader(databin, shuffle=False, num_workers=0, pin_memory=False) 23 | 24 | for iter, pack in enumerate(infer_data_loader): 25 | img_name = voc12.dataloader.decode_int_filename(pack['name'][0]) 26 | img = pack['img'][0].numpy() 27 | cam_dict = np.load(os.path.join(args.cam_out_dir, img_name + '.npy'), allow_pickle=True).item() 28 | 29 | cams = cam_dict['high_res'] 30 | keys = np.pad(cam_dict['keys'] + 1, (1, 0), mode='constant') 31 | 32 | # 1. find confident fg & bg 33 | fg_conf_cam = np.pad(cams, ((1, 0), (0, 0), (0, 0)), mode='constant', constant_values=args.conf_fg_thres) 34 | fg_conf_cam = np.argmax(fg_conf_cam, axis=0) 35 | 36 | 37 | pred = imutils.crf_inference_label(img, fg_conf_cam, n_labels=keys.shape[0]) 38 | 39 | fg_conf = keys[pred] 40 | bg_conf_cam = np.pad(cams, ((1, 0), (0, 0), (0, 0)), mode='constant', constant_values=args.conf_bg_thres) 41 | bg_conf_cam = np.argmax(bg_conf_cam, axis=0) 42 | pred = imutils.crf_inference_label(img, bg_conf_cam, n_labels=keys.shape[0]) 43 | bg_conf = keys[pred] 44 | 45 | # 2. 
combine confident fg & bg 46 | conf = fg_conf.copy() 47 | conf[fg_conf == 0] = 255 48 | conf[bg_conf + fg_conf == 0] = 0 49 | 50 | imageio.imwrite(os.path.join(args.ir_label_out_dir, img_name + '.png'), conf.astype(np.uint8)) 51 | 52 | 53 | if process_id == args.num_workers - 1 and iter % (len(databin) // 20) == 0: 54 | print("%d " % ((5 * iter + 1) // (len(databin) // 20)), end='') 55 | 56 | def run(args): 57 | dataset = voc12.dataloader.VOC12ImageDataset(args.train_list, voc12_root=args.voc12_root, img_normal=None, to_torch=False) 58 | dataset = torchutils.split_dataset(dataset, args.num_workers) 59 | 60 | print('[ ', end='') 61 | multiprocessing.spawn(_work, nprocs=args.num_workers, args=(dataset, args), join=True) 62 | print(']') 63 | -------------------------------------------------------------------------------- /step/eval_cam.py: -------------------------------------------------------------------------------- 1 | 2 | import numpy as np 3 | import os 4 | from chainercv.datasets import VOCSemanticSegmentationDataset 5 | from chainercv.evaluations import calc_semantic_segmentation_confusion 6 | 7 | def run(args): 8 | dataset = VOCSemanticSegmentationDataset(split=args.chainer_eval_set, data_dir=args.voc12_root) 9 | # labels = [dataset.get_example_by_keys(i, (1,))[0] for i in range(len(dataset))] 10 | 11 | preds = [] 12 | labels = [] 13 | n_images = 0 14 | for i, id in enumerate(dataset.ids): 15 | n_images += 1 16 | cam_dict = np.load(os.path.join(args.cam_out_dir, id + '.npy'), allow_pickle=True).item() 17 | cams = cam_dict['high_res'] 18 | cams = np.pad(cams, ((1, 0), (0, 0), (0, 0)), mode='constant', constant_values=args.cam_eval_thres) 19 | keys = np.pad(cam_dict['keys'] + 1, (1, 0), mode='constant') 20 | cls_labels = np.argmax(cams, axis=0) 21 | cls_labels = keys[cls_labels] 22 | preds.append(cls_labels.copy()) 23 | labels.append(dataset.get_example_by_keys(i, (1,))[0]) 24 | 25 | confusion = calc_semantic_segmentation_confusion(preds, labels) 26 | 27 | gtj = confusion.sum(axis=1) 28 | resj = confusion.sum(axis=0) 29 | gtjresj = np.diag(confusion) 30 | denominator = gtj + resj - gtjresj 31 | iou = gtjresj / denominator 32 | 33 | 34 | print("threshold:", args.cam_eval_thres, 'miou:', np.nanmean(iou), "i_imgs", n_images) 35 | print('among_predfg_bg', float((resj[1:].sum()-confusion[1:,1:].sum())/(resj[1:].sum()))) 36 | 37 | return np.nanmean(iou) -------------------------------------------------------------------------------- /step/eval_sem_seg.py: -------------------------------------------------------------------------------- 1 | 2 | import numpy as np 3 | import os 4 | from chainercv.datasets import VOCSemanticSegmentationDataset 5 | from chainercv.evaluations import calc_semantic_segmentation_confusion 6 | import imageio 7 | 8 | def run(args): 9 | dataset = VOCSemanticSegmentationDataset(split=args.chainer_eval_set, data_dir=args.voc12_root) 10 | 11 | preds = [] 12 | labels = [] 13 | n_img = 0 14 | for i, id in enumerate(dataset.ids): 15 | cls_labels = imageio.imread(os.path.join(args.sem_seg_out_dir, id + '.png')).astype(np.uint8) 16 | cls_labels[cls_labels == 255] = 0 17 | preds.append(cls_labels.copy()) 18 | labels.append(dataset.get_example_by_keys(i, (1,))[0]) 19 | n_img += 1 20 | 21 | confusion = calc_semantic_segmentation_confusion(preds, labels)[:21, :21] 22 | 23 | gtj = confusion.sum(axis=1) 24 | resj = confusion.sum(axis=0) 25 | gtjresj = np.diag(confusion) 26 | denominator = gtj + resj - gtjresj 27 | fp = 1. - gtj / denominator 28 | fn = 1. 
- resj / denominator 29 | iou = gtjresj / denominator 30 | print("total images", n_img) 31 | print(fp[0], fn[0]) 32 | print(np.mean(fp[1:]), np.mean(fn[1:])) 33 | 34 | print({'iou': iou, 'miou': np.nanmean(iou)}) 35 | -------------------------------------------------------------------------------- /step/make_cam.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from torch import multiprocessing, cuda 3 | from torch.utils.data import DataLoader 4 | import torch.nn.functional as F 5 | from torch.backends import cudnn 6 | 7 | import numpy as np 8 | import importlib 9 | import os 10 | import os.path as osp 11 | 12 | import voc12.dataloader 13 | from misc import torchutils, imutils 14 | import cv2 15 | cudnn.enabled = True 16 | 17 | def _work(process_id, model, dataset, args): 18 | 19 | databin = dataset[process_id] 20 | n_gpus = torch.cuda.device_count() 21 | data_loader = DataLoader(databin, shuffle=False, num_workers=args.num_workers // n_gpus, pin_memory=False) 22 | 23 | with torch.no_grad(), cuda.device(process_id): 24 | 25 | model.cuda() 26 | 27 | for iter, pack in enumerate(data_loader): 28 | 29 | img_name = pack['name'][0] 30 | label = pack['label'][0] 31 | size = pack['size'] 32 | 33 | strided_size = imutils.get_strided_size(size, 4) 34 | strided_up_size = imutils.get_strided_up_size(size, 16) 35 | 36 | outputs = [model(img[0].cuda(non_blocking=True)) for img in pack['img']] # b x 20 x w x h 37 | 38 | strided_cam = torch.sum(torch.stack([F.interpolate(torch.unsqueeze(o, 0), strided_size, mode='bilinear', align_corners=False)[0] for o in outputs]), 0) 39 | 40 | highres_cam = [F.interpolate(torch.unsqueeze(o, 1), strided_up_size,mode='bilinear', align_corners=False) for o in outputs] 41 | highres_cam = torch.sum(torch.stack(highres_cam, 0), 0)[:, 0, :size[0], :size[1]] 42 | valid_cat = torch.nonzero(label)[:, 0] 43 | 44 | strided_cam = strided_cam[valid_cat] 45 | strided_cam /= F.adaptive_max_pool2d(strided_cam, (1, 1)) + 1e-5 46 | 47 | highres_cam = highres_cam[valid_cat] 48 | highres_cam /= F.adaptive_max_pool2d(highres_cam, (1, 1)) + 1e-5 49 | 50 | np.save(os.path.join(args.cam_out_dir, img_name.replace('jpg','npy')), 51 | {"keys": valid_cat, "cam": strided_cam.cpu(), "high_res": highres_cam.cpu().numpy()}) 52 | 53 | if process_id == n_gpus - 1 and iter % (len(databin) // 20) == 0: 54 | print("%d " % ((5*iter+1)//(len(databin) // 20)), end='') 55 | 56 | 57 | def run(args): 58 | model = getattr(importlib.import_module(args.cam_network), 'CAM')() 59 | model.load_state_dict(torch.load(args.cam_weights_name), strict=True) 60 | model.eval() 61 | 62 | n_gpus = torch.cuda.device_count() 63 | 64 | dataset = voc12.dataloader.VOC12ClassificationDatasetMSF(args.train_list, voc12_root=args.voc12_root, scales=args.cam_scales) 65 | dataset = torchutils.split_dataset(dataset, n_gpus) 66 | 67 | print('[ ', end='') 68 | multiprocessing.spawn(_work, nprocs=n_gpus, args=(model, dataset, args), join=True) 69 | print(']') 70 | 71 | torch.cuda.empty_cache() -------------------------------------------------------------------------------- /step/make_recam.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from torch import multiprocessing, cuda 3 | from torch.utils.data import DataLoader 4 | import torch.nn.functional as F 5 | from torch.backends import cudnn 6 | 7 | import numpy as np 8 | import importlib 9 | import os 10 | import os.path as osp 11 | 12 | import voc12.dataloader 13 | from misc 
import torchutils, imutils 14 | import net.resnet50_cam 15 | import cv2 16 | cudnn.enabled = True 17 | 18 | def _work(process_id, model, dataset, args): 19 | 20 | databin = dataset[process_id] 21 | n_gpus = torch.cuda.device_count() 22 | data_loader = DataLoader(databin, shuffle=False, num_workers=args.num_workers // n_gpus, pin_memory=False) 23 | recam_predictor = net.resnet50_cam.Class_Predictor(20, 2048) 24 | recam_predictor.load_state_dict(torch.load(osp.join(args.recam_weight_dir,'recam_predictor_'+str(args.recam_num_epoches) + '.pth'))) 25 | 26 | with torch.no_grad(), cuda.device(process_id): 27 | 28 | model.cuda() 29 | recam_predictor.cuda() 30 | for iter, pack in enumerate(data_loader): 31 | 32 | img_name = pack['name'][0] 33 | label = pack['label'][0] 34 | size = pack['size'] 35 | 36 | strided_size = imutils.get_strided_size(size, 4) 37 | strided_up_size = imutils.get_strided_up_size(size, 16) 38 | 39 | outputs = [model.forward2(img[0].cuda(non_blocking=True),recam_predictor.classifier.weight) for img in pack['img']] # b x 20 x w x h 40 | 41 | strided_cam = torch.sum(torch.stack( 42 | [F.interpolate(torch.unsqueeze(o, 0), strided_size, mode='bilinear', align_corners=False)[0] for o in outputs]), 0) 43 | 44 | highres_cam = [F.interpolate(torch.unsqueeze(o, 1), strided_up_size, 45 | mode='bilinear', align_corners=False) for o in outputs] 46 | highres_cam = torch.sum(torch.stack(highres_cam, 0), 0)[:, 0, :size[0], :size[1]] 47 | valid_cat = torch.nonzero(label)[:, 0] 48 | 49 | strided_cam = strided_cam[valid_cat] 50 | strided_cam /= F.adaptive_max_pool2d(strided_cam, (1, 1)) + 1e-5 51 | 52 | highres_cam = highres_cam[valid_cat] 53 | highres_cam /= F.adaptive_max_pool2d(highres_cam, (1, 1)) + 1e-5 54 | 55 | # save cams 56 | np.save(os.path.join(args.cam_out_dir, img_name + '.npy'), {"keys": valid_cat, "cam": strided_cam.cpu(), "high_res": highres_cam.cpu().numpy()}) 57 | 58 | if process_id == n_gpus - 1 and iter % (len(databin) // 20) == 0: 59 | print("%d " % ((5*iter+1)//(len(databin) // 20)), end='') 60 | 61 | 62 | def run(args): 63 | model = getattr(importlib.import_module(args.cam_network), 'CAM')() 64 | model.load_state_dict(torch.load(osp.join(args.recam_weight_dir,'res50_recam_'+str(args.recam_num_epoches) + '.pth'))) 65 | model.eval() 66 | 67 | n_gpus = torch.cuda.device_count() 68 | 69 | dataset = voc12.dataloader.VOC12ClassificationDatasetMSF(args.train_list, voc12_root=args.voc12_root, scales=args.cam_scales) 70 | dataset = torchutils.split_dataset(dataset, n_gpus) 71 | 72 | print('[ ', end='') 73 | multiprocessing.spawn(_work, nprocs=n_gpus, args=(model, dataset, args), join=True) 74 | print(']') 75 | 76 | torch.cuda.empty_cache() -------------------------------------------------------------------------------- /step/make_sem_seg_labels.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from torch import multiprocessing, cuda 3 | from torch.utils.data import DataLoader 4 | import torch.nn.functional as F 5 | from torch.backends import cudnn 6 | 7 | import numpy as np 8 | import importlib 9 | import os 10 | import imageio 11 | 12 | import voc12.dataloader 13 | from misc import torchutils, indexing 14 | from PIL import Image 15 | 16 | cudnn.enabled = True 17 | palette = [0,0,0, 128,0,0, 0,128,0, 128,128,0, 0,0,128, 128,0,128, 0,128,128, 128,128,128, 18 | 64,0,0, 192,0,0, 64,128,0, 192,128,0, 64,0,128, 192,0,128, 64,128,128, 192,128,128, 19 | 0,64,0, 128,64,0, 0,192,0, 128,192,0, 0,64,128, 128,64,128, 0,192,128, 
128,192,128, 20 | 64,64,0, 192,64,0, 64,192,0, 192,192,0] 21 | def _work(process_id, model, dataset, args): 22 | 23 | n_gpus = torch.cuda.device_count() 24 | databin = dataset[process_id] 25 | data_loader = DataLoader(databin, 26 | shuffle=False, num_workers=args.num_workers // n_gpus, pin_memory=False) 27 | 28 | with torch.no_grad(), cuda.device(process_id): 29 | 30 | model.cuda() 31 | 32 | for iter, pack in enumerate(data_loader): 33 | img_name = voc12.dataloader.decode_int_filename(pack['name'][0]) 34 | # if os.path.exists(os.path.join(args.sem_seg_out_dir, img_name + '.png')): 35 | # continue 36 | orig_img_size = np.asarray(pack['size']) 37 | 38 | edge, dp = model(pack['img'][0].cuda(non_blocking=True)) 39 | 40 | cam_dict = np.load(args.cam_out_dir + '/' + img_name + '.npy', allow_pickle=True).item() 41 | 42 | cams = cam_dict['cam'] 43 | # cams = np.power(cam_dict['cam'], 1.5) # AdvCAM 44 | keys = np.pad(cam_dict['keys'] + 1, (1, 0), mode='constant') 45 | 46 | cam_downsized_values = cams.cuda() 47 | 48 | rw = indexing.propagate_to_edge(cam_downsized_values, edge, beta=args.beta, exp_times=args.exp_times, radius=5) 49 | 50 | rw_up = F.interpolate(rw, scale_factor=4, mode='bilinear', align_corners=False)[..., 0, :orig_img_size[0], :orig_img_size[1]] 51 | rw_up = rw_up / torch.max(rw_up) 52 | 53 | rw_up_bg = F.pad(rw_up, (0, 0, 0, 0, 1, 0), value=args.sem_seg_bg_thres) 54 | rw_pred = torch.argmax(rw_up_bg, dim=0).cpu().numpy() 55 | 56 | rw_pred = keys[rw_pred] 57 | 58 | imageio.imsave(os.path.join(args.sem_seg_out_dir, img_name + '.png'), rw_pred.astype(np.uint8)) 59 | 60 | if process_id == n_gpus - 1 and iter % (len(databin) // 20) == 0: 61 | print("%d " % ((5*iter+1)//(len(databin) // 20)), end='') 62 | 63 | 64 | def run(args): 65 | model = getattr(importlib.import_module(args.irn_network), 'EdgeDisplacement')() 66 | print(args.irn_weights_name) 67 | model.load_state_dict(torch.load(args.irn_weights_name), strict=False) 68 | model.eval() 69 | 70 | n_gpus = torch.cuda.device_count() 71 | 72 | dataset = voc12.dataloader.VOC12ClassificationDatasetMSF(args.infer_list, 73 | voc12_root=args.voc12_root, 74 | scales=(1.0,)) 75 | dataset = torchutils.split_dataset(dataset, n_gpus) 76 | 77 | print("[", end='') 78 | multiprocessing.spawn(_work, nprocs=n_gpus, args=(model, dataset, args), join=True) 79 | print("]") 80 | 81 | torch.cuda.empty_cache() 82 | -------------------------------------------------------------------------------- /step/train_cam.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | 3 | import torch 4 | import torch.nn as nn 5 | from torch.backends import cudnn 6 | cudnn.enabled = True 7 | from torch.utils.data import DataLoader 8 | import torch.nn.functional as F 9 | 10 | import importlib 11 | 12 | import voc12.dataloader 13 | from misc import pyutils, torchutils 14 | from torch import autograd 15 | import os 16 | 17 | def validate(model, data_loader): 18 | print('validating ... 
', flush=True, end='') 19 | 20 | val_loss_meter = pyutils.AverageMeter('loss1', 'loss2') 21 | 22 | model.eval() 23 | ce = nn.CrossEntropyLoss() 24 | with torch.no_grad(): 25 | for pack in data_loader: 26 | img = pack['img'] 27 | 28 | label = pack['label'].cuda(non_blocking=True) 29 | 30 | x = model(img) 31 | loss = F.multilabel_soft_margin_loss(x, label) 32 | 33 | val_loss_meter.add({'loss': loss.item()}) 34 | 35 | model.train() 36 | 37 | print('loss: %.4f' % (val_loss_meter.pop('loss'))) 38 | 39 | return 40 | 41 | 42 | def run(args): 43 | 44 | model = getattr(importlib.import_module(args.cam_network), 'Net')() 45 | 46 | 47 | train_dataset = voc12.dataloader.VOC12ClassificationDataset(args.train_list, voc12_root=args.voc12_root, 48 | resize_long=(320, 640), hor_flip=True, 49 | crop_size=512, crop_method="random") 50 | train_data_loader = DataLoader(train_dataset, batch_size=args.cam_batch_size, 51 | shuffle=True, num_workers=args.num_workers, pin_memory=True, drop_last=True) 52 | max_step = (len(train_dataset) // args.cam_batch_size) * args.cam_num_epoches 53 | 54 | val_dataset = voc12.dataloader.VOC12ClassificationDataset(args.val_list, voc12_root=args.voc12_root, 55 | crop_size=512) 56 | val_data_loader = DataLoader(val_dataset, batch_size=args.cam_batch_size, 57 | shuffle=False, num_workers=args.num_workers, pin_memory=True, drop_last=True) 58 | 59 | param_groups = model.trainable_parameters() 60 | optimizer = torchutils.PolyOptimizer([ 61 | {'params': param_groups[0], 'lr': args.cam_learning_rate, 'weight_decay': args.cam_weight_decay}, 62 | {'params': param_groups[1], 'lr': 10*args.cam_learning_rate, 'weight_decay': args.cam_weight_decay}, 63 | ], lr=args.cam_learning_rate, weight_decay=args.cam_weight_decay, max_step=max_step) 64 | 65 | model = torch.nn.DataParallel(model).cuda() 66 | model.train() 67 | 68 | avg_meter = pyutils.AverageMeter() 69 | 70 | timer = pyutils.Timer() 71 | ce = nn.CrossEntropyLoss() 72 | 73 | for ep in range(args.cam_num_epoches): 74 | 75 | print('Epoch %d/%d' % (ep+1, args.cam_num_epoches)) 76 | 77 | for step, pack in enumerate(train_data_loader): 78 | 79 | img = pack['img'] 80 | img = img.cuda() 81 | label = pack['label'].cuda(non_blocking=True) 82 | x = model(img) 83 | 84 | optimizer.zero_grad() 85 | 86 | loss = F.multilabel_soft_margin_loss(x, label) 87 | 88 | loss.backward() 89 | avg_meter.add({'loss': loss.item()}) 90 | 91 | 92 | optimizer.step() 93 | if (optimizer.global_step-1)%100 == 0: 94 | timer.update_progress(optimizer.global_step / max_step) 95 | 96 | print('step:%5d/%5d' % (optimizer.global_step - 1, max_step), 97 | 'loss:%.4f' % (avg_meter.pop('loss')), 98 | 'imps:%.1f' % ((step + 1) * args.cam_batch_size / timer.get_stage_elapsed()), 99 | 'lr: %.4f' % (optimizer.param_groups[0]['lr']), 100 | 'etc:%s' % (timer.str_estimated_complete()), flush=True) 101 | 102 | 103 | validate(model, val_data_loader) 104 | timer.reset_stage() 105 | 106 | torch.save(model.module.state_dict(), args.cam_weights_name) 107 | torch.cuda.empty_cache() 108 | -------------------------------------------------------------------------------- /step/train_irn.py: -------------------------------------------------------------------------------- 1 | 2 | import torch 3 | from torch.backends import cudnn 4 | cudnn.enabled = True 5 | from torch.utils.data import DataLoader 6 | import voc12.dataloader 7 | from misc import pyutils, torchutils, indexing 8 | import importlib 9 | from PIL import ImageFile 10 | ImageFile.LOAD_TRUNCATED_IMAGES = True 11 | def run(args): 12 | 13 | 
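# Sketch of what PathIndex provides here, assuming the misc.indexing
# implementation inherited from IRN: for each pixel of the stride-4 feature
# grid (irn_crop_size // 4) it enumerates the neighbours reachable within
# radius 10, grouped by path length, exposed as src_indices/dst_indices
# pairs. VOC12AffinityDataset below uses those pairs to convert the ir_label
# maps into per-pair targets (aff_bg_pos_label / aff_fg_pos_label /
# aff_neg_label), and AffinityDisplacementLoss registers the same path
# indices as buffers so edge predictions can be pooled along each path.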
path_index = indexing.PathIndex(radius=10, default_size=(args.irn_crop_size // 4, args.irn_crop_size // 4)) 14 | 15 | model = getattr(importlib.import_module(args.irn_network), 'AffinityDisplacementLoss')( 16 | path_index) 17 | 18 | train_dataset = voc12.dataloader.VOC12AffinityDataset(args.train_list, 19 | label_dir=args.ir_label_out_dir, 20 | voc12_root=args.voc12_root, 21 | indices_from=path_index.src_indices, 22 | indices_to=path_index.dst_indices, 23 | hor_flip=True, 24 | crop_size=args.irn_crop_size, 25 | crop_method="random", 26 | rescale=(0.5, 1.5) 27 | ) 28 | train_data_loader = DataLoader(train_dataset, batch_size=args.irn_batch_size, 29 | shuffle=True, num_workers=args.num_workers, pin_memory=True, drop_last=True) 30 | 31 | max_step = (len(train_dataset) // args.irn_batch_size) * args.irn_num_epoches 32 | 33 | param_groups = model.trainable_parameters() 34 | optimizer = torchutils.PolyOptimizer([ 35 | {'params': param_groups[0], 'lr': 1*args.irn_learning_rate, 'weight_decay': args.irn_weight_decay}, 36 | {'params': param_groups[1], 'lr': 10*args.irn_learning_rate, 'weight_decay': args.irn_weight_decay} 37 | ], lr=args.irn_learning_rate, weight_decay=args.irn_weight_decay, max_step=max_step) 38 | 39 | model = torch.nn.DataParallel(model).cuda() 40 | model.train() 41 | 42 | avg_meter = pyutils.AverageMeter() 43 | 44 | timer = pyutils.Timer() 45 | 46 | for ep in range(args.irn_num_epoches): 47 | 48 | print('Epoch %d/%d' % (ep+1, args.irn_num_epoches)) 49 | 50 | for iter, pack in enumerate(train_data_loader): 51 | 52 | img = pack['img'].cuda(non_blocking=True) 53 | bg_pos_label = pack['aff_bg_pos_label'].cuda(non_blocking=True) 54 | fg_pos_label = pack['aff_fg_pos_label'].cuda(non_blocking=True) 55 | neg_label = pack['aff_neg_label'].cuda(non_blocking=True) 56 | 57 | pos_aff_loss, neg_aff_loss, dp_fg_loss, dp_bg_loss = model(img, True) 58 | 59 | bg_pos_aff_loss = torch.sum(bg_pos_label * pos_aff_loss) / (torch.sum(bg_pos_label) + 1e-5) 60 | fg_pos_aff_loss = torch.sum(fg_pos_label * pos_aff_loss) / (torch.sum(fg_pos_label) + 1e-5) 61 | pos_aff_loss = bg_pos_aff_loss / 2 + fg_pos_aff_loss / 2 62 | neg_aff_loss = torch.sum(neg_label * neg_aff_loss) / (torch.sum(neg_label) + 1e-5) 63 | 64 | dp_fg_loss = torch.sum(dp_fg_loss * torch.unsqueeze(fg_pos_label, 1)) / (2 * torch.sum(fg_pos_label) + 1e-5) 65 | dp_bg_loss = torch.sum(dp_bg_loss * torch.unsqueeze(bg_pos_label, 1)) / (2 * torch.sum(bg_pos_label) + 1e-5) 66 | 67 | avg_meter.add({'loss1': pos_aff_loss.item(), 'loss2': neg_aff_loss.item(), 68 | 'loss3': dp_fg_loss.item(), 'loss4': dp_bg_loss.item()}) 69 | 70 | total_loss = (pos_aff_loss + neg_aff_loss) / 2 + (dp_fg_loss + dp_bg_loss) / 2 71 | 72 | optimizer.zero_grad() 73 | total_loss.backward() 74 | optimizer.step() 75 | 76 | if (optimizer.global_step - 1) % 50 == 0: 77 | timer.update_progress(optimizer.global_step / max_step) 78 | 79 | print('step:%5d/%5d' % (optimizer.global_step - 1, max_step), 80 | 'loss:%.4f %.4f %.4f %.4f' % ( 81 | avg_meter.pop('loss1'), avg_meter.pop('loss2'), avg_meter.pop('loss3'), avg_meter.pop('loss4')), 82 | 'imps:%.1f' % ((iter + 1) * args.irn_batch_size / timer.get_stage_elapsed()), 83 | 'lr: %.4f' % (optimizer.param_groups[0]['lr']), 84 | 'etc:%s' % (timer.str_estimated_complete()), flush=True) 85 | else: 86 | timer.reset_stage() 87 | 88 | infer_dataset = voc12.dataloader.VOC12ImageDataset(args.infer_list, 89 | voc12_root=args.voc12_root, 90 | crop_size=args.irn_crop_size, 91 | crop_method="top_left") 92 | infer_data_loader = 
DataLoader(infer_dataset, batch_size=args.irn_batch_size, 93 | shuffle=False, num_workers=args.num_workers, pin_memory=True, drop_last=True) 94 | 95 | model.eval() 96 | print('Analyzing displacements mean ... ', end='') 97 | 98 | dp_mean_list = [] 99 | 100 | with torch.no_grad(): 101 | for iter, pack in enumerate(infer_data_loader): 102 | img = pack['img'].cuda(non_blocking=True) 103 | 104 | aff, dp = model(img, False) 105 | 106 | dp_mean_list.append(torch.mean(dp, dim=(0, 2, 3)).cpu()) 107 | 108 | model.module.mean_shift.running_mean = torch.mean(torch.stack(dp_mean_list), dim=0) 109 | print('done.') 110 | 111 | torch.save(model.module.state_dict(), args.irn_weights_name) 112 | torch.cuda.empty_cache() 113 | -------------------------------------------------------------------------------- /step/train_recam.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import time 3 | import torch 4 | import numpy as np 5 | import os.path as osp 6 | from torch.backends import cudnn 7 | cudnn.enabled = True 8 | from torch.utils.data import DataLoader 9 | import torch.nn.functional as F 10 | from chainercv.datasets import VOCSemanticSegmentationDataset 11 | from chainercv.evaluations import calc_semantic_segmentation_confusion 12 | 13 | 14 | import importlib 15 | 16 | import voc12.dataloader 17 | import net.resnet50_cam 18 | from misc import pyutils, torchutils, imutils 19 | from torch import autograd 20 | import os 21 | 22 | def validate(model, data_loader): 23 | print('validating ... ', flush=True, end='') 24 | 25 | val_loss_meter = pyutils.AverageMeter('loss1', 'loss2') 26 | 27 | model.eval() 28 | 29 | with torch.no_grad(): 30 | for pack in data_loader: 31 | img = pack['img'] 32 | 33 | label = pack['label'].cuda(non_blocking=True) 34 | 35 | x,_,_= model(img) 36 | loss1 = F.multilabel_soft_margin_loss(x, label) 37 | 38 | val_loss_meter.add({'loss': loss1.item()}) 39 | 40 | model.train() 41 | 42 | print('loss: %.4f' % (val_loss_meter.pop('loss'))) 43 | 44 | return 45 | 46 | 47 | def run(args): 48 | print('train_recam') 49 | model = getattr(importlib.import_module(args.cam_network), 'Net_CAM_Feature')() 50 | param_groups = model.trainable_parameters() 51 | model.load_state_dict(torch.load(args.cam_weights_name), strict=True) 52 | model = torch.nn.DataParallel(model).cuda() 53 | 54 | recam_predictor = net.resnet50_cam.Class_Predictor(20, 2048) 55 | recam_predictor = torch.nn.DataParallel(recam_predictor).cuda() 56 | recam_predictor.train() 57 | 58 | 59 | train_dataset = voc12.dataloader.VOC12ClassificationDataset(args.train_list, voc12_root=args.voc12_root, 60 | resize_long=(320, 640), hor_flip=True, 61 | crop_size=512, crop_method="random") 62 | train_data_loader = DataLoader(train_dataset, batch_size=args.cam_batch_size, 63 | shuffle=True, num_workers=args.num_workers, pin_memory=True, drop_last=True) 64 | max_step = (len(train_dataset) // args.cam_batch_size) * args.recam_num_epoches 65 | 66 | val_dataset = voc12.dataloader.VOC12ClassificationDataset(args.val_list, voc12_root=args.voc12_root, 67 | crop_size=512) 68 | val_data_loader = DataLoader(val_dataset, batch_size=args.cam_batch_size, 69 | shuffle=False, num_workers=args.num_workers, pin_memory=True, drop_last=True) 70 | 71 | optimizer = torchutils.PolyOptimizer([ 72 | {'params': param_groups[0], 'lr': 0.1*args.recam_learning_rate, 'weight_decay': args.cam_weight_decay}, 73 | {'params': param_groups[1], 'lr': 0.1*args.recam_learning_rate, 'weight_decay': args.cam_weight_decay}, 74 | {'params': 
recam_predictor.parameters(), 'lr': args.recam_learning_rate, 'weight_decay': args.cam_weight_decay}, 75 | ], lr=args.recam_learning_rate, weight_decay=args.cam_weight_decay, max_step=max_step) 76 | 77 | avg_meter = pyutils.AverageMeter() 78 | 79 | timer = pyutils.Timer() 80 | global_step = 0 81 | for ep in range(args.recam_num_epoches): 82 | 83 | print('Epoch %d/%d' % (ep+1, args.recam_num_epoches)) 84 | model.train() 85 | 86 | for step, pack in enumerate(train_data_loader): 87 | 88 | img = pack['img'].cuda() 89 | label = pack['label'].cuda(non_blocking=True) 90 | x,cam,_ = model(img) 91 | 92 | 93 | loss_cls = F.multilabel_soft_margin_loss(x, label) 94 | loss_ce,acc = recam_predictor(cam,label) 95 | loss = loss_cls + args.recam_loss_weight*loss_ce 96 | 97 | avg_meter.add({'loss_cls': loss_cls.item()}) 98 | avg_meter.add({'loss_ce': loss_ce.item()}) 99 | avg_meter.add({'acc': acc.item()}) 100 | 101 | optimizer.zero_grad() 102 | loss.backward() 103 | optimizer.step() 104 | global_step += 1 105 | 106 | if (global_step-1)%100 == 0: 107 | timer.update_progress(global_step / max_step) 108 | 109 | print('step:%5d/%5d' % (global_step - 1, max_step), 110 | 'loss_cls:%.4f' % (avg_meter.pop('loss_cls')), 111 | 'loss_ce:%.4f' % (avg_meter.pop('loss_ce')), 112 | 'acc:%.4f' % (avg_meter.pop('acc')), 113 | 'imps:%.1f' % ((step + 1) * args.cam_batch_size / timer.get_stage_elapsed()), 114 | 'lr: %.4f' % (optimizer.param_groups[2]['lr']), 115 | 'etc:%s' % (timer.str_estimated_complete()), flush=True) 116 | 117 | validate(model, val_data_loader) 118 | timer.reset_stage() 119 | torch.save(model.module.state_dict(), osp.join(args.recam_weight_dir,'res50_recam_'+str(ep+1) + '.pth')) 120 | torch.save(recam_predictor.module.state_dict(), osp.join(args.recam_weight_dir,'recam_predictor_'+str(ep+1) + '.pth')) 121 | torch.cuda.empty_cache() 122 | -------------------------------------------------------------------------------- /step_coco/__pycache__/cam_to_ir_label.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step_coco/__pycache__/cam_to_ir_label.cpython-36.pyc -------------------------------------------------------------------------------- /step_coco/__pycache__/eval_cam.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step_coco/__pycache__/eval_cam.cpython-36.pyc -------------------------------------------------------------------------------- /step_coco/__pycache__/eval_sem_seg.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step_coco/__pycache__/eval_sem_seg.cpython-36.pyc -------------------------------------------------------------------------------- /step_coco/__pycache__/make_recam.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step_coco/__pycache__/make_recam.cpython-36.pyc -------------------------------------------------------------------------------- /step_coco/__pycache__/make_sem_seg_labels.cpython-36.pyc: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step_coco/__pycache__/make_sem_seg_labels.cpython-36.pyc -------------------------------------------------------------------------------- /step_coco/__pycache__/train_cam.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step_coco/__pycache__/train_cam.cpython-36.pyc -------------------------------------------------------------------------------- /step_coco/__pycache__/train_irn.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step_coco/__pycache__/train_irn.cpython-36.pyc -------------------------------------------------------------------------------- /step_coco/__pycache__/train_recam.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/step_coco/__pycache__/train_recam.cpython-36.pyc -------------------------------------------------------------------------------- /step_coco/cam_to_ir_label.py: -------------------------------------------------------------------------------- 1 | 2 | import os 3 | import numpy as np 4 | import imageio 5 | 6 | from torch import multiprocessing 7 | from torch.utils.data import DataLoader 8 | import os.path as osp 9 | 10 | import mscoco.dataloader 11 | from misc import torchutils, imutils 12 | from PIL import Image 13 | 14 | 15 | def _work(process_id, infer_dataset, args): 16 | visualize_intermediate_cam = False 17 | databin = infer_dataset[process_id] 18 | infer_data_loader = DataLoader(databin, shuffle=False, num_workers=0, pin_memory=False) 19 | 20 | for iter, pack in enumerate(infer_data_loader): 21 | # img_name = voc12.dataloader.decode_int_filename(pack['name'][0]) 22 | img_name = pack['name'][0].split('.')[0] 23 | if os.path.exists(os.path.join(args.ir_label_out_dir, img_name + '.png')): 24 | continue 25 | img = pack['img'][0].numpy() 26 | cam_dict = np.load(os.path.join(args.cam_out_dir, img_name + '.npy'), allow_pickle=True).item() 27 | 28 | cams = cam_dict['high_res'] 29 | keys = np.pad(cam_dict['keys'] + 1, (1, 0), mode='constant') 30 | 31 | if keys.shape[0] == 1: 32 | conf = np.zeros_like(img)[:, :, 0] 33 | imageio.imwrite(os.path.join(args.ir_label_out_dir, img_name + '.png'),conf.astype(np.uint8)) 34 | continue 35 | 36 | 37 | # 1. find confident fg & bg 38 | fg_conf_cam = np.pad(cams, ((1, 0), (0, 0), (0, 0)), mode='constant', constant_values=args.conf_fg_thres) 39 | fg_conf_cam = np.argmax(fg_conf_cam, axis=0) 40 | 41 | 42 | pred = imutils.crf_inference_label(img, fg_conf_cam, n_labels=keys.shape[0]) 43 | 44 | fg_conf = keys[pred] 45 | bg_conf_cam = np.pad(cams, ((1, 0), (0, 0), (0, 0)), mode='constant', constant_values=args.conf_bg_thres) 46 | bg_conf_cam = np.argmax(bg_conf_cam, axis=0) 47 | pred = imutils.crf_inference_label(img, bg_conf_cam, n_labels=keys.shape[0]) 48 | bg_conf = keys[pred] 49 | 50 | # 2. 
combine confident fg & bg 51 | conf = fg_conf.copy() 52 | conf[fg_conf == 0] = 255 53 | conf[bg_conf + fg_conf == 0] = 0 54 | 55 | imageio.imwrite(os.path.join(args.ir_label_out_dir, img_name + '.png'), conf.astype(np.uint8)) 56 | 57 | 58 | if process_id == args.num_workers - 1 and iter % (len(databin) // 20) == 0: 59 | print("%d " % ((5 * iter + 1) // (len(databin) // 20)), end='') 60 | 61 | def run(args): 62 | dataset = mscoco.dataloader.COCOClassificationDataset( 63 | image_dir = osp.join(args.mscoco_root,'train2014/'), 64 | anno_path= osp.join(args.mscoco_root,'annotations/instances_train2014.json'), 65 | labels_path='./mscoco/train_labels.npy', img_normal=None, to_torch=False) 66 | dataset = torchutils.split_dataset(dataset, args.num_workers) 67 | print('[ ', end='') 68 | multiprocessing.spawn(_work, nprocs=args.num_workers, args=(dataset, args), join=True) 69 | print(']') 70 | -------------------------------------------------------------------------------- /step_coco/eval_cam.py: -------------------------------------------------------------------------------- 1 | 2 | import numpy as np 3 | import os.path as osp 4 | import mscoco.dataloader 5 | from torch.utils.data import DataLoader 6 | from chainercv.evaluations import calc_semantic_segmentation_confusion 7 | 8 | def run(args): 9 | dataset = mscoco.dataloader.COCOSegmentationDataset(image_dir = osp.join(args.mscoco_root,'train2014/'), 10 | anno_path= osp.join(args.mscoco_root,'annotations/instances_train2014.json'), 11 | masks_path=osp.join(args.mscoco_root,'mask/train2014'),crop_size=512) 12 | preds = [] 13 | labels = [] 14 | n_images = 0 15 | num = len(dataset) 16 | for i, pack in enumerate(dataset): 17 | if i%1000==0: 18 | print(i,'/',num) 19 | filename = pack['name'].split('.')[0] 20 | 21 | n_images += 1 22 | cam_dict = np.load(osp.join(args.cam_out_dir, filename + '.npy'), allow_pickle=True).item() 23 | cams = cam_dict['high_res'] 24 | cams = np.pad(cams, ((1, 0), (0, 0), (0, 0)), mode='constant', constant_values=args.cam_eval_thres) 25 | keys = np.pad(cam_dict['keys'] + 1, (1, 0), mode='constant') 26 | cls_labels = np.argmax(cams, axis=0) 27 | cls_labels = keys[cls_labels].astype(np.uint8) 28 | preds.append(cls_labels.copy()) 29 | 30 | label = dataset.get_label_by_name(filename) 31 | labels.append(label) 32 | 33 | confusion = calc_semantic_segmentation_confusion(preds, labels) 34 | 35 | gtj = confusion.sum(axis=1) 36 | resj = confusion.sum(axis=0) 37 | gtjresj = np.diag(confusion) 38 | denominator = gtj + resj - gtjresj 39 | iou = gtjresj / denominator 40 | 41 | 42 | print("threshold:", args.cam_eval_thres, 'miou:', np.nanmean(iou), "i_imgs", n_images) 43 | print('among_predfg_bg', float((resj[1:].sum()-confusion[1:,1:].sum())/(resj[1:].sum()))) 44 | 45 | return np.nanmean(iou) -------------------------------------------------------------------------------- /step_coco/eval_sem_seg.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import os.path as osp 3 | import mscoco.dataloader 4 | from torch.utils.data import DataLoader 5 | from chainercv.evaluations import calc_semantic_segmentation_confusion 6 | import imageio 7 | 8 | def run(args): 9 | dataset = mscoco.dataloader.COCOSegmentationDataset(image_dir = osp.join(args.mscoco_root,'train2014/'), 10 | anno_path= osp.join(args.mscoco_root,'annotations/instances_train2014.json'), 11 | masks_path=osp.join(args.mscoco_root,'mask/train2014'), 12 | crop_size=512) 13 | preds = [] 14 | labels = [] 15 | n_img = 0 16 | num = 
len(dataset) 17 | for i, pack in enumerate(dataset): 18 | if i%1000==0: 19 | print(i,'/',num) 20 | img_name = pack['name'].split('.')[0] 21 | cls_file = img_name+'.png' 22 | cls_labels = imageio.imread(osp.join(args.sem_seg_out_dir, cls_file)).astype(np.uint8) 23 | preds.append(cls_labels.copy()) 24 | label = dataset.get_label_by_name(img_name) 25 | labels.append(label) 26 | n_img += 1 27 | confusion = calc_semantic_segmentation_confusion(preds, labels) 28 | 29 | gtj = confusion.sum(axis=1) 30 | resj = confusion.sum(axis=0) 31 | gtjresj = np.diag(confusion) 32 | denominator = gtj + resj - gtjresj 33 | fp = 1. - gtj / denominator 34 | fn = 1. - resj / denominator 35 | iou = gtjresj / denominator 36 | print("total images", n_img) 37 | print(fp[0], fn[0]) 38 | print(np.mean(fp[1:]), np.mean(fn[1:])) 39 | 40 | print({'iou': iou, 'miou': np.nanmean(iou)}) 41 | -------------------------------------------------------------------------------- /step_coco/make_cam.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from torch import multiprocessing, cuda 3 | from torch.utils.data import DataLoader 4 | import torch.nn.functional as F 5 | from torch.backends import cudnn 6 | 7 | import numpy as np 8 | import importlib 9 | import os 10 | import os.path as osp 11 | 12 | import mscoco.dataloader 13 | from misc import torchutils, imutils 14 | import net.resnet50_cam 15 | import cv2 16 | cudnn.enabled = True 17 | 18 | def _work(process_id, model, dataset, args): 19 | 20 | databin = dataset[process_id] 21 | n_gpus = torch.cuda.device_count() 22 | data_loader = DataLoader(databin, shuffle=False, num_workers=args.num_workers // n_gpus, pin_memory=False) 23 | 24 | with torch.no_grad(), cuda.device(process_id): 25 | 26 | model.cuda() 27 | for iter, pack in enumerate(data_loader): 28 | 29 | img_name = pack['name'][0] 30 | label = pack['label'][0] 31 | size = pack['size'] 32 | 33 | if os.path.exists(os.path.join(args.cam_out_dir, img_name.replace('jpg','npy'))): 34 | continue 35 | strided_size = imutils.get_strided_size(size, 4) 36 | strided_up_size = imutils.get_strided_up_size(size, 16) 37 | 38 | outputs = [model(img[0].cuda(non_blocking=True)) for img in pack['img']] # b x 20 x w x h 39 | 40 | strided_cam = torch.sum(torch.stack( 41 | [F.interpolate(torch.unsqueeze(o, 0), strided_size, mode='bilinear', align_corners=False)[0] for o in outputs]), 0) 42 | 43 | highres_cam = [F.interpolate(torch.unsqueeze(o, 1), strided_up_size, 44 | mode='bilinear', align_corners=False) for o in outputs] 45 | highres_cam = torch.sum(torch.stack(highres_cam, 0), 0)[:, 0, :size[0], :size[1]] 46 | valid_cat = torch.nonzero(label)[:, 0] 47 | 48 | strided_cam = strided_cam[valid_cat] 49 | strided_cam /= F.adaptive_max_pool2d(strided_cam, (1, 1)) + 1e-5 50 | 51 | highres_cam = highres_cam[valid_cat] 52 | highres_cam /= F.adaptive_max_pool2d(highres_cam, (1, 1)) + 1e-5 53 | 54 | # save cams 55 | np.save(os.path.join(args.cam_out_dir, img_name.replace('jpg','npy')), 56 | {"keys": valid_cat, "cam": strided_cam.cpu(), "high_res": highres_cam.cpu().numpy()}) 57 | 58 | if process_id == n_gpus - 1 and iter % (len(databin) // 20) == 0: 59 | print("%d " % ((5*iter+1)//(len(databin) // 20)), end='') 60 | 61 | 62 | def run(args): 63 | model = getattr(importlib.import_module(args.cam_network), 'CAM')(n_classes=80) 64 | model.load_state_dict(torch.load(args.cam_weights_name), strict=True) 65 | model.eval() 66 | 67 | n_gpus = torch.cuda.device_count() 68 | 69 | dataset = 
mscoco.dataloader.COCOClassificationDatasetMSF( 70 | image_dir = osp.join(args.mscoco_root,'train2014/'), 71 | anno_path= osp.join(args.mscoco_root,'annotations/instances_train2014.json'), 72 | labels_path='./mscoco/train_labels.npy', 73 | scales=args.cam_scales,num_classes=args.num_classes) 74 | dataset = torchutils.split_dataset(dataset, n_gpus) 75 | 76 | print('[ ', end='') 77 | multiprocessing.spawn(_work, nprocs=n_gpus, args=(model, dataset, args), join=True) 78 | print(']') 79 | 80 | torch.cuda.empty_cache() 81 | -------------------------------------------------------------------------------- /step_coco/make_recam.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from torch import multiprocessing, cuda 3 | from torch.utils.data import DataLoader 4 | import torch.nn.functional as F 5 | from torch.backends import cudnn 6 | 7 | import numpy as np 8 | import importlib 9 | import os 10 | import os.path as osp 11 | 12 | import mscoco.dataloader 13 | from misc import torchutils, imutils 14 | import net.resnet50_cam 15 | import cv2 16 | cudnn.enabled = True 17 | 18 | def _work(process_id, model, dataset, args): 19 | 20 | databin = dataset[process_id] 21 | n_gpus = torch.cuda.device_count() 22 | data_loader = DataLoader(databin, shuffle=False, num_workers=args.num_workers // n_gpus, pin_memory=False) 23 | 24 | with torch.no_grad(), cuda.device(process_id): 25 | 26 | model.cuda() 27 | for iter, pack in enumerate(data_loader): 28 | 29 | img_name = pack['name'][0] 30 | label = pack['label'][0] 31 | size = pack['size'] 32 | 33 | if os.path.exists(os.path.join(args.cam_out_dir, img_name.replace('jpg','npy'))): 34 | continue 35 | strided_size = imutils.get_strided_size(size, 4) 36 | strided_up_size = imutils.get_strided_up_size(size, 16) 37 | 38 | outputs = [model(img[0].cuda(non_blocking=True)) for img in pack['img']] # b x 20 x w x h 39 | 40 | strided_cam = torch.sum(torch.stack( 41 | [F.interpolate(torch.unsqueeze(o, 0), strided_size, mode='bilinear', align_corners=False)[0] for o in outputs]), 0) 42 | 43 | highres_cam = [F.interpolate(torch.unsqueeze(o, 1), strided_up_size, 44 | mode='bilinear', align_corners=False) for o in outputs] 45 | highres_cam = torch.sum(torch.stack(highres_cam, 0), 0)[:, 0, :size[0], :size[1]] 46 | valid_cat = torch.nonzero(label)[:, 0] 47 | 48 | strided_cam = strided_cam[valid_cat] 49 | highres_cam = highres_cam[valid_cat] 50 | 51 | if strided_cam.shape[0]>0: 52 | strided_cam /= F.adaptive_max_pool2d(strided_cam, (1, 1)) + 1e-5 53 | highres_cam /= F.adaptive_max_pool2d(highres_cam, (1, 1)) + 1e-5 54 | 55 | # save cams 56 | np.save(os.path.join(args.cam_out_dir, img_name.replace('jpg','npy')), 57 | {"keys": valid_cat, "cam": strided_cam.cpu(), "high_res": highres_cam.cpu().numpy()}) 58 | 59 | if process_id == n_gpus - 1 and iter % (len(databin) // 20) == 0: 60 | print("%d " % ((5*iter+1)//(len(databin) // 20)), end='') 61 | 62 | 63 | def run(args): 64 | model = getattr(importlib.import_module(args.cam_network), 'CAM')(n_classes=80) 65 | model.load_state_dict(torch.load(osp.join(args.recam_weight_dir,'res50_recam_'+str(args.recam_num_epoches) + '.pth'))) 66 | model.eval() 67 | 68 | n_gpus = torch.cuda.device_count() 69 | 70 | dataset = mscoco.dataloader.COCOClassificationDatasetMSF( 71 | image_dir = osp.join(args.mscoco_root,'train2014/'), 72 | anno_path= osp.join(args.mscoco_root,'annotations/instances_train2014.json'), 73 | labels_path='./mscoco/train_labels.npy', 74 | scales=args.cam_scales) 75 | dataset = 
torchutils.split_dataset(dataset, n_gpus) 76 | 77 | print('[ ', end='') 78 | multiprocessing.spawn(_work, nprocs=n_gpus, args=(model, dataset, args), join=True) 79 | print(']') 80 | 81 | torch.cuda.empty_cache() -------------------------------------------------------------------------------- /step_coco/make_sem_seg_labels.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from torch import multiprocessing, cuda 3 | from torch.utils.data import DataLoader 4 | import torch.nn.functional as F 5 | from torch.backends import cudnn 6 | import mscoco.dataloader 7 | import os.path as osp 8 | import numpy as np 9 | import importlib 10 | import os 11 | import imageio 12 | 13 | from misc import torchutils, indexing 14 | from PIL import Image 15 | 16 | cudnn.enabled = True 17 | def _work(process_id, model, dataset, args): 18 | 19 | n_gpus = torch.cuda.device_count() 20 | databin = dataset[process_id] 21 | data_loader = DataLoader(databin, 22 | shuffle=False, num_workers=args.num_workers // n_gpus, pin_memory=False) 23 | 24 | with torch.no_grad(), cuda.device(process_id): 25 | 26 | model.cuda() 27 | 28 | for iter, pack in enumerate(data_loader): 29 | img_name = pack['name'][0].split('.')[0] 30 | if os.path.exists(os.path.join(args.sem_seg_out_dir, img_name + '.png')): 31 | continue 32 | orig_img_size = np.asarray(pack['size']) 33 | 34 | edge, dp = model(pack['img'][0].cuda(non_blocking=True)) 35 | 36 | cam_dict = np.load(args.cam_out_dir + '/' + img_name + '.npy', allow_pickle=True).item() 37 | 38 | cams = cam_dict['cam'] 39 | # cams = np.power(cam_dict['cam'], 1.5) # Anti 40 | # for cam in cams: 41 | # print(cam.shape, cam.max()) 42 | keys = np.pad(cam_dict['keys'] + 1, (1, 0), mode='constant') 43 | 44 | if keys.shape[0] == 1: 45 | 46 | conf = np.zeros_like(pack['img'][0])[0, 0] 47 | imageio.imsave(os.path.join(args.sem_seg_out_dir, img_name + '.png'), conf.astype(np.uint8)) 48 | continue 49 | 50 | cam_downsized_values = cams.cuda() 51 | 52 | rw = indexing.propagate_to_edge(cam_downsized_values, edge, beta=args.beta, exp_times=args.exp_times, radius=5) 53 | 54 | rw_up = F.interpolate(rw, scale_factor=4, mode='bilinear', align_corners=False)[..., 0, :orig_img_size[0], :orig_img_size[1]] 55 | rw_up = rw_up / torch.max(rw_up) 56 | 57 | rw_up_bg = F.pad(rw_up, (0, 0, 0, 0, 1, 0), value=args.sem_seg_bg_thres) 58 | rw_pred = torch.argmax(rw_up_bg, dim=0).cpu().numpy() 59 | 60 | rw_pred = keys[rw_pred] 61 | 62 | imageio.imsave(os.path.join(args.sem_seg_out_dir, img_name + '.png'), rw_pred.astype(np.uint8)) 63 | 64 | if process_id == n_gpus - 1 and iter % (len(databin) // 20) == 0: 65 | print("%d " % ((5*iter+1)//(len(databin) // 20)), end='') 66 | 67 | 68 | def run(args): 69 | model = getattr(importlib.import_module(args.irn_network), 'EdgeDisplacement')() 70 | print(args.irn_weights_name) 71 | model.load_state_dict(torch.load(args.irn_weights_name), strict=False) 72 | model.eval() 73 | 74 | n_gpus = torch.cuda.device_count() 75 | 76 | dataset = mscoco.dataloader.COCOClassificationDatasetMSF( 77 | image_dir = osp.join(args.mscoco_root,'train2014/'), 78 | anno_path= osp.join(args.mscoco_root,'annotations/instances_train2014.json'), 79 | labels_path='./mscoco/train_labels.npy', 80 | scales=(1.0,)) 81 | dataset = torchutils.split_dataset(dataset, n_gpus) 82 | 83 | print("[", end='') 84 | multiprocessing.spawn(_work, nprocs=n_gpus, args=(model, dataset, args), join=True) 85 | print("]") 86 | 87 | torch.cuda.empty_cache() 88 | 
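Note on the CAM `.npy` format consumed above: `make_recam.py` writes a per-image dict with `keys` (the foreground classes present), `cam` (strided), and `high_res`, and both `eval_cam.py` and `cam_to_ir_label.py` recover a label map from it by padding in a constant background plane and taking an argmax over classes. Below is a minimal sketch of that shared pattern, not a file in this repository; `cam_file` stands in for one entry of `args.cam_out_dir`, and `bg_thres` for a threshold such as `args.cam_eval_thres`.
```
import numpy as np

def cam_to_pseudo_label(cam_file, bg_thres=0.15):
    # One dict per image, as saved by make_cam.py / make_recam.py above.
    cam_dict = np.load(cam_file, allow_pickle=True).item()
    cams = cam_dict['high_res']  # (n_fg_classes, H, W), max-normalized activations
    # Prepend a constant plane so argmax can fall back to "background".
    cams = np.pad(cams, ((1, 0), (0, 0), (0, 0)), mode='constant', constant_values=bg_thres)
    # Shift class ids by 1 and prepend 0 as the background label.
    keys = np.pad(cam_dict['keys'] + 1, (1, 0), mode='constant')
    return keys[np.argmax(cams, axis=0)].astype(np.uint8)
```
Raising `bg_thres` shrinks the foreground seeds; this is the knob that `args.cam_eval_thres` (in `eval_cam.py`) and `args.conf_fg_thres`/`args.conf_bg_thres` (in `cam_to_ir_label.py`) expose in the steps above.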
-------------------------------------------------------------------------------- /step_coco/train_cam.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import os 3 | import torch 4 | import os.path as osp 5 | from torch.backends import cudnn 6 | cudnn.enabled = True 7 | from torch.utils.data import DataLoader 8 | import torch.nn.functional as F 9 | 10 | import importlib 11 | 12 | import mscoco.dataloader 13 | from misc import pyutils, torchutils 14 | import os 15 | 16 | def validate(model, data_loader): 17 | print('validating ... ', flush=True, end='') 18 | 19 | val_loss_meter = pyutils.AverageMeter('loss1', 'loss2') 20 | 21 | model.eval() 22 | 23 | with torch.no_grad(): 24 | for pack in data_loader: 25 | img = pack['img'] 26 | 27 | label = pack['label'].cuda(non_blocking=True) 28 | 29 | x = model(img) 30 | loss = F.multilabel_soft_margin_loss(x, label) 31 | 32 | val_loss_meter.add({'loss': loss.item()}) 33 | 34 | model.train() 35 | 36 | print('loss: %.4f' % (val_loss_meter.pop('loss'))) 37 | 38 | return 39 | 40 | 41 | def run(args): 42 | 43 | model = getattr(importlib.import_module(args.cam_network), 'Net')(n_classes=80) 44 | 45 | 46 | train_dataset = mscoco.dataloader.COCOClassificationDataset( 47 | image_dir = osp.join(args.mscoco_root,'train2014/'), 48 | anno_path= osp.join(args.mscoco_root,'annotations/instances_train2014.json'), 49 | labels_path='./mscoco/train_labels.npy', 50 | resize_long=(320, 640), hor_flip=True, crop_size=512, crop_method="random") 51 | train_data_loader = DataLoader(train_dataset, batch_size=args.cam_batch_size, 52 | shuffle=True, num_workers=args.num_workers, pin_memory=True, drop_last=True) 53 | max_step = (len(train_dataset) // args.cam_batch_size) * args.cam_num_epoches 54 | 55 | val_dataset = mscoco.dataloader.COCOClassificationDataset( 56 | image_dir = osp.join(args.mscoco_root,'val2014/'), 57 | anno_path= osp.join(args.mscoco_root,'annotations/instances_val2014.json'), 58 | labels_path='./mscoco/val_labels.npy',crop_size=512) 59 | val_data_loader = DataLoader(val_dataset, batch_size=args.cam_batch_size,shuffle=False, num_workers=args.num_workers, pin_memory=True, drop_last=True) 60 | 61 | param_groups = model.trainable_parameters() 62 | optimizer = torchutils.PolyOptimizer([ 63 | {'params': param_groups[0], 'lr': args.cam_learning_rate, 'weight_decay': args.cam_weight_decay}, 64 | {'params': param_groups[1], 'lr': 10*args.cam_learning_rate, 'weight_decay': args.cam_weight_decay}, 65 | ], lr=args.cam_learning_rate, weight_decay=args.cam_weight_decay, max_step=max_step) 66 | 67 | model = torch.nn.DataParallel(model).cuda() 68 | model.train() 69 | 70 | avg_meter = pyutils.AverageMeter() 71 | 72 | timer = pyutils.Timer() 73 | 74 | for ep in range(args.cam_num_epoches): 75 | 76 | print('Epoch %d/%d' % (ep+1, args.cam_num_epoches)) 77 | 78 | for step, pack in enumerate(train_data_loader): 79 | 80 | img = pack['img'] 81 | img = img.cuda() 82 | label = pack['label'].cuda(non_blocking=True) 83 | x = model(img) 84 | 85 | optimizer.zero_grad() 86 | 87 | loss = F.multilabel_soft_margin_loss(x, label) 88 | 89 | loss.backward() 90 | avg_meter.add({'loss1': loss.item()}) 91 | 92 | 93 | optimizer.step() 94 | if (optimizer.global_step-1)%100 == 0: 95 | timer.update_progress(optimizer.global_step / max_step) 96 | 97 | print('step:%5d/%5d' % (optimizer.global_step - 1, max_step), 98 | 'loss:%.4f' % (avg_meter.pop('loss1')), 99 | 'imps:%.1f' % ((step + 1) * args.cam_batch_size / timer.get_stage_elapsed()), 100 | 'lr: 
%.4f' % (optimizer.param_groups[0]['lr']), 101 | 'etc:%s' % (timer.str_estimated_complete()), flush=True) 102 | 103 | validate(model, val_data_loader) 104 | timer.reset_stage() 105 | 106 | torch.save(model.module.state_dict(), args.cam_weights_name) 107 | torch.cuda.empty_cache() -------------------------------------------------------------------------------- /step_coco/train_irn.py: -------------------------------------------------------------------------------- 1 | 2 | import torch 3 | from torch.backends import cudnn 4 | cudnn.enabled = True 5 | import os.path as osp 6 | from torch.utils.data import DataLoader 7 | from misc import pyutils, torchutils, indexing 8 | import mscoco.dataloader 9 | import importlib 10 | from PIL import ImageFile 11 | ImageFile.LOAD_TRUNCATED_IMAGES = True 12 | def run(args): 13 | 14 | path_index = indexing.PathIndex(radius=10, default_size=(args.irn_crop_size // 4, args.irn_crop_size // 4)) 15 | 16 | model = getattr(importlib.import_module(args.irn_network), 'AffinityDisplacementLoss')( 17 | path_index) 18 | 19 | train_dataset = mscoco.dataloader.COCOAffinityDataset( 20 | image_dir = osp.join(args.mscoco_root,'train2014/'), 21 | anno_path= osp.join(args.mscoco_root,'annotations/instances_train2014.json'), 22 | label_dir=args.ir_label_out_dir, 23 | indices_from=path_index.src_indices, 24 | indices_to=path_index.dst_indices, 25 | hor_flip=True, 26 | crop_size=args.irn_crop_size, 27 | crop_method="random", 28 | rescale=(0.5, 1.5) 29 | ) 30 | train_data_loader = DataLoader(train_dataset, batch_size=args.irn_batch_size, 31 | shuffle=True, num_workers=args.num_workers, pin_memory=True, drop_last=True) 32 | 33 | max_step = (len(train_dataset) // args.irn_batch_size) * args.irn_num_epoches 34 | 35 | param_groups = model.trainable_parameters() 36 | optimizer = torchutils.PolyOptimizer([ 37 | {'params': param_groups[0], 'lr': 1*args.irn_learning_rate, 'weight_decay': args.irn_weight_decay}, 38 | {'params': param_groups[1], 'lr': 10*args.irn_learning_rate, 'weight_decay': args.irn_weight_decay} 39 | ], lr=args.irn_learning_rate, weight_decay=args.irn_weight_decay, max_step=max_step) 40 | 41 | model = torch.nn.DataParallel(model).cuda() 42 | model.train() 43 | 44 | avg_meter = pyutils.AverageMeter() 45 | 46 | timer = pyutils.Timer() 47 | 48 | for ep in range(args.irn_num_epoches): 49 | 50 | print('Epoch %d/%d' % (ep+1, args.irn_num_epoches)) 51 | 52 | for iter, pack in enumerate(train_data_loader): 53 | 54 | img = pack['img'].cuda(non_blocking=True) 55 | bg_pos_label = pack['aff_bg_pos_label'].cuda(non_blocking=True) 56 | fg_pos_label = pack['aff_fg_pos_label'].cuda(non_blocking=True) 57 | neg_label = pack['aff_neg_label'].cuda(non_blocking=True) 58 | 59 | pos_aff_loss, neg_aff_loss, dp_fg_loss, dp_bg_loss = model(img, True) 60 | 61 | bg_pos_aff_loss = torch.sum(bg_pos_label * pos_aff_loss) / (torch.sum(bg_pos_label) + 1e-5) 62 | fg_pos_aff_loss = torch.sum(fg_pos_label * pos_aff_loss) / (torch.sum(fg_pos_label) + 1e-5) 63 | pos_aff_loss = bg_pos_aff_loss / 2 + fg_pos_aff_loss / 2 64 | neg_aff_loss = torch.sum(neg_label * neg_aff_loss) / (torch.sum(neg_label) + 1e-5) 65 | 66 | dp_fg_loss = torch.sum(dp_fg_loss * torch.unsqueeze(fg_pos_label, 1)) / (2 * torch.sum(fg_pos_label) + 1e-5) 67 | dp_bg_loss = torch.sum(dp_bg_loss * torch.unsqueeze(bg_pos_label, 1)) / (2 * torch.sum(bg_pos_label) + 1e-5) 68 | 69 | avg_meter.add({'loss1': pos_aff_loss.item(), 'loss2': neg_aff_loss.item(), 70 | 'loss3': dp_fg_loss.item(), 'loss4': dp_bg_loss.item()}) 71 | 72 | total_loss = 
(pos_aff_loss + neg_aff_loss) / 2 + (dp_fg_loss + dp_bg_loss) / 2 73 | 74 | optimizer.zero_grad() 75 | total_loss.backward() 76 | optimizer.step() 77 | 78 | if (optimizer.global_step - 1) % 50 == 0: 79 | timer.update_progress(optimizer.global_step / max_step) 80 | 81 | print('step:%5d/%5d' % (optimizer.global_step - 1, max_step), 82 | 'loss:%.4f %.4f %.4f %.4f' % ( 83 | avg_meter.pop('loss1'), avg_meter.pop('loss2'), avg_meter.pop('loss3'), avg_meter.pop('loss4')), 84 | 'imps:%.1f' % ((iter + 1) * args.irn_batch_size / timer.get_stage_elapsed()), 85 | 'lr: %.4f' % (optimizer.param_groups[0]['lr']), 86 | 'etc:%s' % (timer.str_estimated_complete()), flush=True) 87 | else: 88 | timer.reset_stage() 89 | infer_dataset = mscoco.dataloader.COCOClassificationDataset( 90 | image_dir = osp.join(args.mscoco_root,'train2014/'), 91 | anno_path= osp.join(args.mscoco_root,'annotations/instances_train2014.json'), 92 | labels_path='./mscoco/train_labels.npy', 93 | crop_size=args.irn_crop_size, 94 | crop_method="top_left") 95 | infer_data_loader = DataLoader(infer_dataset, batch_size=args.irn_batch_size, 96 | shuffle=False, num_workers=args.num_workers, pin_memory=True, drop_last=True) 97 | 98 | model.eval() 99 | print('Analyzing displacements mean ... ', end='') 100 | 101 | dp_mean_list = [] 102 | 103 | with torch.no_grad(): 104 | for iter, pack in enumerate(infer_data_loader): 105 | img = pack['img'].cuda(non_blocking=True) 106 | 107 | aff, dp = model(img, False) 108 | 109 | dp_mean_list.append(torch.mean(dp, dim=(0, 2, 3)).cpu()) 110 | 111 | model.module.mean_shift.running_mean = torch.mean(torch.stack(dp_mean_list), dim=0) 112 | print('done.') 113 | 114 | torch.save(model.module.state_dict(), args.irn_weights_name) 115 | torch.cuda.empty_cache() 116 | -------------------------------------------------------------------------------- /step_coco/train_recam.py: -------------------------------------------------------------------------------- 1 | import time 2 | import torch 3 | import importlib 4 | import numpy as np 5 | import os.path as osp 6 | import torch.nn.functional as F 7 | from torch.backends import cudnn 8 | from torch.utils.data import DataLoader 9 | 10 | cudnn.enabled = True 11 | 12 | import mscoco.dataloader 13 | import net.resnet50_cam 14 | from misc import pyutils, torchutils, imutils 15 | 16 | def validate(model, data_loader): 17 | print('validating ... 
', flush=True, end='') 18 | 19 | val_loss_meter = pyutils.AverageMeter('loss1', 'loss2') 20 | 21 | model.eval() 22 | 23 | with torch.no_grad(): 24 | for pack in data_loader: 25 | img = pack['img'] 26 | 27 | label = pack['label'].cuda(non_blocking=True) 28 | 29 | x,_,_= model(img) 30 | loss1 = F.multilabel_soft_margin_loss(x, label) 31 | 32 | val_loss_meter.add({'loss1': loss1.item()}) 33 | 34 | model.train() 35 | 36 | print('loss: %.4f' % (val_loss_meter.pop('loss1'))) 37 | 38 | return 39 | 40 | def run(args): 41 | print('train_recam_coco') 42 | model = getattr(importlib.import_module(args.cam_network), 'Net_CAM_Feature')(n_classes=80) 43 | param_groups = model.trainable_parameters() 44 | model.load_state_dict(torch.load(args.cam_weights_name), strict=True) 45 | model = torch.nn.DataParallel(model).cuda() 46 | 47 | recam_predictor = net.resnet50_cam.Class_Predictor(80, 2048) 48 | recam_predictor = torch.nn.DataParallel(recam_predictor).cuda() 49 | recam_predictor.train() 50 | 51 | train_dataset = mscoco.dataloader.COCOClassificationDataset( 52 | image_dir = osp.join(args.mscoco_root,'train2014/'), 53 | anno_path= osp.join(args.mscoco_root,'annotations/instances_train2014.json'), 54 | labels_path='./mscoco/train_labels.npy', 55 | resize_long=(320, 640), hor_flip=True, crop_size=512, crop_method="random") 56 | train_data_loader = DataLoader(train_dataset, batch_size=args.recam_batch_size,shuffle=True, num_workers=args.num_workers, pin_memory=True, drop_last=True) 57 | max_step = (len(train_dataset) // args.recam_batch_size) * args.recam_num_epoches 58 | 59 | val_dataset = mscoco.dataloader.COCOClassificationDataset( 60 | image_dir = osp.join(args.mscoco_root,'val2014/'), 61 | anno_path= osp.join(args.mscoco_root,'annotations/instances_val2014.json'), 62 | labels_path='./mscoco/val_labels.npy',crop_size=512) 63 | val_data_loader = DataLoader(val_dataset, batch_size=args.recam_batch_size,shuffle=False, num_workers=args.num_workers, pin_memory=True, drop_last=True) 64 | optimizer = torchutils.PolyOptimizer([ 65 | {'params': param_groups[0], 'lr': 0.1*args.recam_learning_rate, 'weight_decay': args.cam_weight_decay}, 66 | {'params': param_groups[1], 'lr': 0.1*args.recam_learning_rate, 'weight_decay': args.cam_weight_decay}, 67 | {'params': recam_predictor.parameters(), 'lr': args.recam_learning_rate, 'weight_decay': args.cam_weight_decay}, 68 | ], lr=args.recam_learning_rate, weight_decay=args.cam_weight_decay, max_step=max_step) 69 | 70 | avg_meter = pyutils.AverageMeter() 71 | 72 | timer = pyutils.Timer() 73 | global_step = 0 74 | start_time = time.time() 75 | for ep in range(args.recam_num_epoches): 76 | 77 | print('Epoch %d/%d' % (ep+1, args.recam_num_epoches)) 78 | model.train() 79 | print('step') 80 | for step, pack in enumerate(train_data_loader): 81 | 82 | img = pack['img'].cuda() 83 | label = pack['label'].cuda(non_blocking=True) 84 | x,cam,_ = model(img) 85 | 86 | loss_cls = F.multilabel_soft_margin_loss(x, label) 87 | loss_ce,acc = recam_predictor(cam,label) 88 | loss_ce = loss_ce.mean() 89 | acc = acc.mean() 90 | loss = loss_cls + args.recam_loss_weight*loss_ce 91 | 92 | avg_meter.add({'loss_cls': loss_cls.item()}) 93 | avg_meter.add({'loss_ce': loss_ce.item()}) 94 | avg_meter.add({'acc': acc.item()}) 95 | 96 | optimizer.zero_grad() 97 | loss.backward() 98 | optimizer.step() 99 | global_step += 1 100 | 101 | if (global_step-1)%100 == 0: 102 | timer.update_progress(global_step / max_step) 103 | 104 | print('step:%5d/%5d' % (global_step - 1, max_step), 105 | 'loss_cls:%.4f' % 
(avg_meter.pop('loss_cls')), 106 | 'loss_ce:%.4f' % (avg_meter.pop('loss_ce')), 107 | 'acc:%.4f' % (avg_meter.pop('acc')), 108 | 'imps:%.1f' % ((step + 1) * args.recam_batch_size / timer.get_stage_elapsed()), 109 | 'lr: %.4f' % (optimizer.param_groups[2]['lr']), 110 | 'time:%ds' % (int(time.time()-start_time)), 111 | 'etc:%s' % (timer.str_estimated_complete()), flush=True) 112 | 113 | validate(model, val_data_loader) 114 | timer.reset_stage() 115 | torch.save(model.module.state_dict(), osp.join(args.recam_weight_dir,'res50_recam_'+str(ep+1) + '.pth')) 116 | torch.save(recam_predictor.module.state_dict(), osp.join(args.recam_weight_dir,'recam_predictor_'+str(ep+1) + '.pth')) 117 | torch.cuda.empty_cache() 118 | -------------------------------------------------------------------------------- /voc12/__pycache__/dataloader.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/voc12/__pycache__/dataloader.cpython-36.pyc -------------------------------------------------------------------------------- /voc12/cls_labels.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhaozhengChen/ReCAM/2bdbcd2ed89d800b3340fc4021692a636a1db32d/voc12/cls_labels.npy -------------------------------------------------------------------------------- /voc12/dataloader.py: -------------------------------------------------------------------------------- 1 | 2 | import numpy as np 3 | import torch 4 | from torch.utils.data import Dataset 5 | import os.path 6 | import imageio 7 | from misc import imutils 8 | from PIL import Image 9 | import torch.nn.functional as F 10 | 11 | IMG_FOLDER_NAME = "JPEGImages" 12 | ANNOT_FOLDER_NAME = "Annotations" 13 | IGNORE = 255 14 | 15 | CAT_LIST = ['aeroplane', 'bicycle', 'bird', 'boat', 16 | 'bottle', 'bus', 'car', 'cat', 'chair', 17 | 'cow', 'diningtable', 'dog', 'horse', 18 | 'motorbike', 'person', 'pottedplant', 19 | 'sheep', 'sofa', 'train', 20 | 'tvmonitor'] 21 | 22 | N_CAT = len(CAT_LIST) 23 | 24 | CAT_NAME_TO_NUM = dict(zip(CAT_LIST,range(len(CAT_LIST)))) 25 | 26 | cls_labels_dict = np.load('voc12/cls_labels.npy', allow_pickle=True).item() 27 | 28 | def decode_int_filename(int_filename): 29 | s = str(int(int_filename)) 30 | return s[:4] + '_' + s[4:] 31 | 32 | def load_image_label_from_xml(img_name, voc12_root): 33 | from xml.dom import minidom 34 | 35 | elem_list = minidom.parse(os.path.join(voc12_root, ANNOT_FOLDER_NAME, decode_int_filename(img_name) + '.xml')).getElementsByTagName('name') 36 | 37 | multi_cls_lab = np.zeros((N_CAT), np.float32) 38 | 39 | for elem in elem_list: 40 | cat_name = elem.firstChild.data 41 | if cat_name in CAT_LIST: 42 | cat_num = CAT_NAME_TO_NUM[cat_name] 43 | multi_cls_lab[cat_num] = 1.0 44 | 45 | return multi_cls_lab 46 | 47 | def load_image_label_list_from_xml(img_name_list, voc12_root): 48 | 49 | return [load_image_label_from_xml(img_name, voc12_root) for img_name in img_name_list] 50 | 51 | def load_image_label_list_from_npy(img_name_list): 52 | 53 | return np.array([cls_labels_dict[img_name] for img_name in img_name_list]) 54 | 55 | def get_img_path(img_name, voc12_root): 56 | if not isinstance(img_name, str): 57 | img_name = decode_int_filename(img_name) 58 | return os.path.join(voc12_root, IMG_FOLDER_NAME, img_name + '.jpg') 59 | 60 | def load_img_name_list(dataset_path): 61 | 62 | img_name_list = np.loadtxt(dataset_path, dtype=np.int32) 
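    # PEP 515 (Python >= 3.6): int()/float() accept underscores in strings, so an
    # id such as "2008_000006" loads here as the integer 2008000006, and
    # decode_int_filename() above re-inserts the underscore to rebuild the name.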
63 | 64 | return img_name_list 65 | 66 | 67 | class TorchvisionNormalize(): 68 | def __init__(self, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)): 69 | self.mean = mean 70 | self.std = std 71 | 72 | def __call__(self, img): 73 | imgarr = np.asarray(img) 74 | proc_img = np.empty_like(imgarr, np.float32) 75 | 76 | proc_img[..., 0] = (imgarr[..., 0] / 255. - self.mean[0]) / self.std[0] 77 | proc_img[..., 1] = (imgarr[..., 1] / 255. - self.mean[1]) / self.std[1] 78 | proc_img[..., 2] = (imgarr[..., 2] / 255. - self.mean[2]) / self.std[2] 79 | 80 | return proc_img 81 | 82 | class GetAffinityLabelFromIndices(): 83 | 84 | def __init__(self, indices_from, indices_to): 85 | 86 | self.indices_from = indices_from 87 | self.indices_to = indices_to 88 | 89 | def __call__(self, segm_map): 90 | 91 | segm_map_flat = np.reshape(segm_map, -1) 92 | 93 | segm_label_from = np.expand_dims(segm_map_flat[self.indices_from], axis=0) 94 | segm_label_to = segm_map_flat[self.indices_to] 95 | 96 | valid_label = np.logical_and(np.less(segm_label_from, 21), np.less(segm_label_to, 21)) 97 | 98 | equal_label = np.equal(segm_label_from, segm_label_to) 99 | 100 | pos_affinity_label = np.logical_and(equal_label, valid_label) 101 | 102 | bg_pos_affinity_label = np.logical_and(pos_affinity_label, np.equal(segm_label_from, 0)).astype(np.float32) 103 | fg_pos_affinity_label = np.logical_and(pos_affinity_label, np.greater(segm_label_from, 0)).astype(np.float32) 104 | 105 | neg_affinity_label = np.logical_and(np.logical_not(equal_label), valid_label).astype(np.float32) 106 | 107 | return torch.from_numpy(bg_pos_affinity_label), torch.from_numpy(fg_pos_affinity_label), \ 108 | torch.from_numpy(neg_affinity_label) 109 | 110 | 111 | class VOC12ImageDataset(Dataset): 112 | 113 | def __init__(self, img_name_list_path, voc12_root, 114 | resize_long=None, rescale=None, img_normal=TorchvisionNormalize(), hor_flip=False, 115 | crop_size=None, crop_method=None, to_torch=True): 116 | 117 | self.img_name_list = load_img_name_list(img_name_list_path) 118 | self.voc12_root = voc12_root 119 | 120 | self.resize_long = resize_long 121 | self.rescale = rescale 122 | self.crop_size = crop_size 123 | self.img_normal = img_normal 124 | self.hor_flip = hor_flip 125 | self.crop_method = crop_method 126 | self.to_torch = to_torch 127 | 128 | def __len__(self): 129 | return len(self.img_name_list) 130 | 131 | def __getitem__(self, idx): 132 | name = self.img_name_list[idx] 133 | name_str = decode_int_filename(name) 134 | 135 | img = np.asarray(imageio.imread(get_img_path(name_str, self.voc12_root))) 136 | 137 | if self.resize_long: 138 | img = imutils.random_resize_long(img, self.resize_long[0], self.resize_long[1]) 139 | 140 | if self.rescale: 141 | img = imutils.random_scale(img, scale_range=self.rescale, order=3) 142 | 143 | if self.img_normal: 144 | img = self.img_normal(img) 145 | 146 | if self.hor_flip: 147 | img = imutils.random_lr_flip(img) 148 | 149 | if self.crop_size: 150 | if self.crop_method == "random": 151 | img = imutils.random_crop(img, self.crop_size, 0) 152 | else: 153 | img = imutils.top_left_crop(img, self.crop_size, 0) 154 | 155 | if self.to_torch: 156 | img = imutils.HWC_to_CHW(img) 157 | 158 | return {'name': name_str, 'img': img} 159 | 160 | class VOC12ClassificationDataset(VOC12ImageDataset): 161 | 162 | def __init__(self, img_name_list_path, voc12_root, 163 | resize_long=None, rescale=None, img_normal=TorchvisionNormalize(), hor_flip=False, 164 | crop_size=None, crop_method=None): 165 | 
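        # Extends VOC12ImageDataset with the multi-hot image-level labels
        # stored in voc12/cls_labels.npy (see load_image_label_list_from_npy).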
super().__init__(img_name_list_path, voc12_root, 166 | resize_long, rescale, img_normal, hor_flip, 167 | crop_size, crop_method) 168 | self.label_list = load_image_label_list_from_npy(self.img_name_list) 169 | 170 | def __getitem__(self, idx): 171 | out = super().__getitem__(idx) 172 | 173 | # label = torch.from_numpy(self.label_list[idx]) 174 | # label = torch.nonzero(label)[:,0] 175 | # label = label[torch.randint(len(label),(1,))] 176 | # out['label'] = label 177 | 178 | out['label'] = torch.from_numpy(self.label_list[idx]) 179 | 180 | return out 181 | 182 | class VOC12ClassificationDataset_Single(VOC12ImageDataset): 183 | 184 | def __init__(self, img_name_list_path, voc12_root, 185 | resize_long=None, rescale=None, img_normal=TorchvisionNormalize(), hor_flip=False, 186 | crop_size=None, crop_method=None): 187 | super().__init__(img_name_list_path, voc12_root, 188 | resize_long, rescale, img_normal, hor_flip, 189 | crop_size, crop_method) 190 | self.label_list = load_image_label_list_from_npy(self.img_name_list) 191 | # print() 192 | self.len = np.sum(self.label_list).astype(int) 193 | self.idx_map = np.zeros(self.len,dtype=int) 194 | self.bias = np.zeros(self.len,dtype=int) 195 | print('single_obj_data_num:',self.len) 196 | idx = 0 197 | for i in range(len(self.label_list)): 198 | x = np.sum(self.label_list[i]) 199 | while x > 0: 200 | x = x-1 201 | self.idx_map[idx] = i 202 | self.bias[idx] = x 203 | idx = idx + 1 204 | print(idx) 205 | # print(self.bias[:30]) 206 | def __getitem__(self, idx): 207 | if idx < len(self.img_name_list): 208 | out = super().__getitem__(idx) 209 | out['label'] = torch.from_numpy(self.label_list[idx]) 210 | else: 211 | idx = idx - len(self.img_name_list)  # offset into the per-object idx_map/bias tables built in __init__ 212 | bias = self.bias[idx] 213 | idx = self.idx_map[idx] 214 | label = torch.from_numpy(self.label_list[idx]) 215 | label = torch.nonzero(label)[:,0][bias] 216 | 217 | 218 | name = self.img_name_list[idx] 219 | name_str = decode_int_filename(name) 220 | 221 | mask = imageio.imread(os.path.join(self.voc12_root, 'SegmentationClassAug', name_str + '.png')) 222 | img0 = np.asarray(imageio.imread(get_img_path(name_str, self.voc12_root))) 223 | # print(img0.dtype) 224 | # print(img) 225 | mask = np.stack([mask,mask,mask],axis=2) 226 | mask = (mask==0)*1 + (mask==(label+1).item())*1 227 | img_rand = np.random.randint(255, size=img0.shape) 228 | # wh = img0.shape[:2] 229 | # img_rand = np.stack([torch.ones(wh)*124,torch.ones(wh)*116,torch.ones(wh)*104],axis=2) 230 | img = (mask*img0+(1-mask)*img_rand).astype(np.uint8) 231 | 232 | if self.resize_long: 233 | img = imutils.random_resize_long(img, self.resize_long[0], self.resize_long[1]) 234 | 235 | if self.rescale: 236 | img = imutils.random_scale(img, scale_range=self.rescale, order=3) 237 | 238 | if self.img_normal: 239 | img = self.img_normal(img) 240 | 241 | if self.hor_flip: 242 | img = imutils.random_lr_flip(img) 243 | 244 | if self.crop_size: 245 | if self.crop_method == "random": 246 | img = imutils.random_crop(img, self.crop_size, 0) 247 | else: 248 | img = imutils.top_left_crop(img, self.crop_size, 0) 249 | 250 | if self.to_torch: 251 | img = imutils.HWC_to_CHW(img) 252 | out = {'name': name_str, 'img': img, 'label':F.one_hot(label, num_classes=20).type(torch.float32)} 253 | return out 254 | 255 | def __len__(self): 256 | print('len:',self.len + len(self.img_name_list)) 257 | return self.len + len(self.img_name_list) 258 | 259 | class VOC12ClassificationDatasetMSF(VOC12ClassificationDataset): 260 | 261 | def __init__(self, img_name_list_path, 
voc12_root, img_normal=TorchvisionNormalize(), scales=(1.0,)): 262 | # each item packs a horizontally-flipped image pyramid over self.scales 263 | 264 | super().__init__(img_name_list_path, voc12_root, img_normal=img_normal) 265 | self.scales = scales 266 | 267 | def __getitem__(self, idx): 268 | name = self.img_name_list[idx] 269 | name_str = decode_int_filename(name) 270 | 271 | img = imageio.imread(get_img_path(name_str, self.voc12_root)) 272 | 273 | ms_img_list = [] 274 | for s in self.scales: 275 | if s == 1: 276 | s_img = img 277 | else: 278 | s_img = imutils.pil_rescale(img, s, order=3) 279 | s_img = self.img_normal(s_img) 280 | s_img = imutils.HWC_to_CHW(s_img) 281 | ms_img_list.append(np.stack([s_img, np.flip(s_img, -1)], axis=0)) 282 | if len(self.scales) == 1: 283 | ms_img_list = ms_img_list[0] 284 | 285 | out = {"name": name_str, "img": ms_img_list, "size": (img.shape[0], img.shape[1]), 286 | "label": torch.from_numpy(self.label_list[idx])} 287 | return out 288 | 289 | class VOC12SegmentationDataset(Dataset): 290 | 291 | def __init__(self, img_name_list_path, label_dir, crop_size, voc12_root, 292 | rescale=None, img_normal=TorchvisionNormalize(), hor_flip=False, 293 | crop_method = 'random'): 294 | 295 | self.img_name_list = load_img_name_list(img_name_list_path) 296 | self.voc12_root = voc12_root 297 | 298 | self.label_dir = label_dir 299 | 300 | self.rescale = rescale 301 | self.crop_size = crop_size 302 | self.img_normal = img_normal 303 | self.hor_flip = hor_flip 304 | self.crop_method = crop_method 305 | 306 | self.cls_label_list = load_image_label_list_from_npy(self.img_name_list) 307 | 308 | def __len__(self): 309 | return len(self.img_name_list) 310 | 311 | def __getitem__(self, idx): 312 | name = self.img_name_list[idx] 313 | name_str = decode_int_filename(name) 314 | 315 | img = imageio.imread(get_img_path(name_str, self.voc12_root)) 316 | # print(os.path.join(self.label_dir, name_str + '.png')) 317 | label = imageio.imread(os.path.join(self.label_dir, name_str + '.png')) 318 | 319 | img = np.asarray(img) 320 | 321 | if self.rescale: 322 | img, label = imutils.random_scale((img, label), scale_range=self.rescale, order=(3, 0)) 323 | 324 | if self.img_normal: 325 | img = self.img_normal(img) 326 | 327 | if self.hor_flip: 328 | img, label = imutils.random_lr_flip((img, label)) 329 | 330 | if self.crop_method == "random": 331 | img, label = imutils.random_crop((img, label), self.crop_size, (0, 255)) 332 | else: 333 | img = imutils.top_left_crop(img, self.crop_size, 0) 334 | label = imutils.top_left_crop(label, self.crop_size, 255) 335 | 336 | img = imutils.HWC_to_CHW(img) 337 | 338 | return {'name': name, 'img': img, 'label': label, 'cls_label':torch.from_numpy(self.cls_label_list[idx])} 339 | 340 | class VOC12_ours(Dataset): 341 | 342 | def __init__(self, img_name_list_path, voc12_root): 343 | 344 | self.ids = np.loadtxt(img_name_list_path, dtype=str) 345 | self.voc12_root = voc12_root 346 | def read_label(self, file, dtype=np.int32): 347 | f = Image.open(file) 348 | try: 349 | img = f.convert('P') 350 | img = np.array(img, dtype=dtype) 351 | finally: 352 | if hasattr(f, 'close'): 353 | f.close() 354 | 355 | if img.ndim == 2: 356 | return img 357 | elif img.shape[2] == 1: 358 | return img[:, :, 0] 359 | 360 | def get_label(self,i): 361 | label_path = os.path.join(self.voc12_root, 'SegmentationClassAug', self.ids[i] + '.png') 362 | label = self.read_label(label_path, dtype=np.int32) 363 | label[label == 255] = -1 364 | return label 365 | def get_label_by_name(self,i): 366 | label_path = os.path.join(self.voc12_root, 
'SegmentationClassAug', i + '.png') 367 | label = self.read_label(label_path, dtype=np.int32) 368 | label[label == 255] = -1 369 | return label 370 | 371 | def __len__(self): 372 | return len(self.ids) 373 | 374 | def __getitem__(self, idx): 375 | return idx 376 | 377 | class VOC12AffinityDataset(VOC12SegmentationDataset): 378 | def __init__(self, img_name_list_path, label_dir, crop_size, voc12_root, 379 | indices_from, indices_to, 380 | rescale=None, img_normal=TorchvisionNormalize(), hor_flip=False, crop_method=None): 381 | super().__init__(img_name_list_path, label_dir, crop_size, voc12_root, rescale, img_normal, hor_flip, crop_method=crop_method) 382 | 383 | self.extract_aff_lab_func = GetAffinityLabelFromIndices(indices_from, indices_to) 384 | 385 | def __len__(self): 386 | return len(self.img_name_list) 387 | 388 | def __getitem__(self, idx): 389 | out = super().__getitem__(idx) 390 | 391 | reduced_label = imutils.pil_rescale(out['label'], 0.25, 0) 392 | 393 | out['aff_bg_pos_label'], out['aff_fg_pos_label'], out['aff_neg_label'] = self.extract_aff_lab_func(reduced_label) 394 | 395 | return out 396 | 397 | -------------------------------------------------------------------------------- /voc12/make_cls_labels.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import voc12.dataloader 3 | import numpy as np 4 | 5 | if __name__ == '__main__': 6 | 7 | parser = argparse.ArgumentParser() 8 | parser.add_argument("--train_list", default='train_aug.txt', type=str) 9 | parser.add_argument("--val_list", default='val.txt', type=str) 10 | parser.add_argument("--out", default="cls_labels.npy", type=str) 11 | parser.add_argument("--voc12_root", default="../../../Dataset/VOC2012", type=str) 12 | args = parser.parse_args() 13 | 14 | train_name_list = voc12.dataloader.load_img_name_list(args.train_list) 15 | val_name_list = voc12.dataloader.load_img_name_list(args.val_list) 16 | 17 | train_val_name_list = np.concatenate([train_name_list, val_name_list], axis=0) 18 | label_list = voc12.dataloader.load_image_label_list_from_xml(train_val_name_list, args.voc12_root) 19 | 20 | total_label = np.zeros(20) 21 | 22 | d = dict() 23 | for img_name, label in zip(train_val_name_list, label_list): 24 | d[img_name] = label 25 | total_label += label 26 | 27 | print(total_label) 28 | np.save(args.out, d) -------------------------------------------------------------------------------- /voc12/test.txt: -------------------------------------------------------------------------------- 1 | 2008_000006 2 | 2008_000011 3 | 2008_000012 4 | 2008_000018 5 | 2008_000024 6 | 2008_000030 7 | 2008_000031 8 | 2008_000046 9 | 2008_000047 10 | 2008_000048 11 | 2008_000057 12 | 2008_000058 13 | 2008_000068 14 | 2008_000072 15 | 2008_000079 16 | 2008_000081 17 | 2008_000083 18 | 2008_000088 19 | 2008_000094 20 | 2008_000101 21 | 2008_000104 22 | 2008_000106 23 | 2008_000108 24 | 2008_000110 25 | 2008_000111 26 | 2008_000126 27 | 2008_000127 28 | 2008_000129 29 | 2008_000130 30 | 2008_000135 31 | 2008_000150 32 | 2008_000152 33 | 2008_000156 34 | 2008_000159 35 | 2008_000160 36 | 2008_000161 37 | 2008_000166 38 | 2008_000167 39 | 2008_000168 40 | 2008_000169 41 | 2008_000171 42 | 2008_000175 43 | 2008_000178 44 | 2008_000186 45 | 2008_000198 46 | 2008_000206 47 | 2008_000208 48 | 2008_000209 49 | 2008_000211 50 | 2008_000220 51 | 2008_000224 52 | 2008_000230 53 | 2008_000240 54 | 2008_000248 55 | 2008_000249 56 | 2008_000250 57 | 2008_000256 58 | 2008_000279 59 | 2008_000282 60 | 
2008_000285 61 | 2008_000286 62 | 2008_000296 63 | 2008_000300 64 | 2008_000322 65 | 2008_000324 66 | 2008_000337 67 | 2008_000366 68 | 2008_000369 69 | 2008_000377 70 | 2008_000384 71 | 2008_000390 72 | 2008_000404 73 | 2008_000411 74 | 2008_000434 75 | 2008_000440 76 | 2008_000460 77 | 2008_000467 78 | 2008_000478 79 | 2008_000485 80 | 2008_000487 81 | 2008_000490 82 | 2008_000503 83 | 2008_000504 84 | 2008_000507 85 | 2008_000513 86 | 2008_000523 87 | 2008_000529 88 | 2008_000556 89 | 2008_000565 90 | 2008_000580 91 | 2008_000590 92 | 2008_000596 93 | 2008_000597 94 | 2008_000600 95 | 2008_000603 96 | 2008_000604 97 | 2008_000612 98 | 2008_000617 99 | 2008_000621 100 | 2008_000627 101 | 2008_000633 102 | 2008_000643 103 | 2008_000644 104 | 2008_000649 105 | 2008_000651 106 | 2008_000664 107 | 2008_000665 108 | 2008_000680 109 | 2008_000681 110 | 2008_000684 111 | 2008_000685 112 | 2008_000688 113 | 2008_000693 114 | 2008_000698 115 | 2008_000707 116 | 2008_000709 117 | 2008_000712 118 | 2008_000747 119 | 2008_000751 120 | 2008_000754 121 | 2008_000762 122 | 2008_000767 123 | 2008_000768 124 | 2008_000773 125 | 2008_000774 126 | 2008_000779 127 | 2008_000797 128 | 2008_000813 129 | 2008_000816 130 | 2008_000846 131 | 2008_000866 132 | 2008_000871 133 | 2008_000872 134 | 2008_000891 135 | 2008_000892 136 | 2008_000894 137 | 2008_000896 138 | 2008_000898 139 | 2008_000909 140 | 2008_000913 141 | 2008_000920 142 | 2008_000933 143 | 2008_000935 144 | 2008_000937 145 | 2008_000938 146 | 2008_000954 147 | 2008_000958 148 | 2008_000963 149 | 2008_000967 150 | 2008_000974 151 | 2008_000986 152 | 2008_000994 153 | 2008_000995 154 | 2008_001008 155 | 2008_001010 156 | 2008_001014 157 | 2008_001016 158 | 2008_001025 159 | 2008_001029 160 | 2008_001037 161 | 2008_001059 162 | 2008_001061 163 | 2008_001072 164 | 2008_001124 165 | 2008_001126 166 | 2008_001131 167 | 2008_001138 168 | 2008_001144 169 | 2008_001151 170 | 2008_001156 171 | 2008_001179 172 | 2008_001181 173 | 2008_001184 174 | 2008_001186 175 | 2008_001197 176 | 2008_001207 177 | 2008_001212 178 | 2008_001233 179 | 2008_001234 180 | 2008_001258 181 | 2008_001268 182 | 2008_001279 183 | 2008_001281 184 | 2008_001288 185 | 2008_001291 186 | 2008_001298 187 | 2008_001309 188 | 2008_001315 189 | 2008_001316 190 | 2008_001319 191 | 2008_001327 192 | 2008_001328 193 | 2008_001332 194 | 2008_001341 195 | 2008_001347 196 | 2008_001355 197 | 2008_001378 198 | 2008_001386 199 | 2008_001400 200 | 2008_001409 201 | 2008_001411 202 | 2008_001416 203 | 2008_001418 204 | 2008_001435 205 | 2008_001459 206 | 2008_001469 207 | 2008_001474 208 | 2008_001477 209 | 2008_001483 210 | 2008_001484 211 | 2008_001485 212 | 2008_001496 213 | 2008_001507 214 | 2008_001511 215 | 2008_001519 216 | 2008_001557 217 | 2008_001567 218 | 2008_001570 219 | 2008_001571 220 | 2008_001572 221 | 2008_001579 222 | 2008_001587 223 | 2008_001608 224 | 2008_001611 225 | 2008_001614 226 | 2008_001621 227 | 2008_001639 228 | 2008_001658 229 | 2008_001678 230 | 2008_001700 231 | 2008_001713 232 | 2008_001720 233 | 2008_001755 234 | 2008_001779 235 | 2008_001785 236 | 2008_001793 237 | 2008_001794 238 | 2008_001803 239 | 2008_001818 240 | 2008_001848 241 | 2008_001855 242 | 2008_001857 243 | 2008_001861 244 | 2008_001875 245 | 2008_001878 246 | 2008_001886 247 | 2008_001897 248 | 2008_001916 249 | 2008_001925 250 | 2008_001949 251 | 2008_001953 252 | 2008_001972 253 | 2008_001999 254 | 2008_002027 255 | 2008_002040 256 | 2008_002057 257 | 2008_002070 258 | 2008_002075 259 | 2008_002095 
260 | 2008_002104 261 | 2008_002105 262 | 2008_002106 263 | 2008_002136 264 | 2008_002137 265 | 2008_002147 266 | 2008_002149 267 | 2008_002163 268 | 2008_002173 269 | 2008_002174 270 | 2008_002184 271 | 2008_002186 272 | 2008_002188 273 | 2008_002190 274 | 2008_002203 275 | 2008_002211 276 | 2008_002217 277 | 2008_002228 278 | 2008_002233 279 | 2008_002246 280 | 2008_002257 281 | 2008_002261 282 | 2008_002285 283 | 2008_002287 284 | 2008_002295 285 | 2008_002303 286 | 2008_002306 287 | 2008_002309 288 | 2008_002310 289 | 2008_002318 290 | 2008_002320 291 | 2008_002332 292 | 2008_002337 293 | 2008_002345 294 | 2008_002348 295 | 2008_002352 296 | 2008_002360 297 | 2008_002381 298 | 2008_002387 299 | 2008_002388 300 | 2008_002393 301 | 2008_002406 302 | 2008_002440 303 | 2008_002455 304 | 2008_002460 305 | 2008_002462 306 | 2008_002480 307 | 2008_002518 308 | 2008_002525 309 | 2008_002535 310 | 2008_002544 311 | 2008_002553 312 | 2008_002569 313 | 2008_002572 314 | 2008_002587 315 | 2008_002635 316 | 2008_002655 317 | 2008_002695 318 | 2008_002702 319 | 2008_002706 320 | 2008_002707 321 | 2008_002722 322 | 2008_002745 323 | 2008_002757 324 | 2008_002779 325 | 2008_002805 326 | 2008_002871 327 | 2008_002895 328 | 2008_002905 329 | 2008_002923 330 | 2008_002927 331 | 2008_002939 332 | 2008_002941 333 | 2008_002962 334 | 2008_002975 335 | 2008_003000 336 | 2008_003031 337 | 2008_003038 338 | 2008_003042 339 | 2008_003069 340 | 2008_003070 341 | 2008_003115 342 | 2008_003116 343 | 2008_003130 344 | 2008_003137 345 | 2008_003138 346 | 2008_003139 347 | 2008_003165 348 | 2008_003171 349 | 2008_003176 350 | 2008_003192 351 | 2008_003194 352 | 2008_003195 353 | 2008_003198 354 | 2008_003227 355 | 2008_003247 356 | 2008_003262 357 | 2008_003298 358 | 2008_003299 359 | 2008_003307 360 | 2008_003337 361 | 2008_003353 362 | 2008_003355 363 | 2008_003363 364 | 2008_003383 365 | 2008_003389 366 | 2008_003392 367 | 2008_003399 368 | 2008_003436 369 | 2008_003457 370 | 2008_003465 371 | 2008_003481 372 | 2008_003539 373 | 2008_003548 374 | 2008_003550 375 | 2008_003567 376 | 2008_003568 377 | 2008_003606 378 | 2008_003615 379 | 2008_003654 380 | 2008_003670 381 | 2008_003700 382 | 2008_003705 383 | 2008_003727 384 | 2008_003731 385 | 2008_003734 386 | 2008_003760 387 | 2008_003804 388 | 2008_003807 389 | 2008_003810 390 | 2008_003822 391 | 2008_003833 392 | 2008_003877 393 | 2008_003879 394 | 2008_003895 395 | 2008_003901 396 | 2008_003903 397 | 2008_003911 398 | 2008_003919 399 | 2008_003927 400 | 2008_003937 401 | 2008_003946 402 | 2008_003950 403 | 2008_003955 404 | 2008_003981 405 | 2008_003991 406 | 2008_004009 407 | 2008_004039 408 | 2008_004052 409 | 2008_004063 410 | 2008_004070 411 | 2008_004078 412 | 2008_004104 413 | 2008_004139 414 | 2008_004177 415 | 2008_004181 416 | 2008_004200 417 | 2008_004219 418 | 2008_004236 419 | 2008_004250 420 | 2008_004266 421 | 2008_004299 422 | 2008_004320 423 | 2008_004334 424 | 2008_004343 425 | 2008_004349 426 | 2008_004366 427 | 2008_004386 428 | 2008_004401 429 | 2008_004423 430 | 2008_004448 431 | 2008_004481 432 | 2008_004516 433 | 2008_004536 434 | 2008_004582 435 | 2008_004609 436 | 2008_004638 437 | 2008_004642 438 | 2008_004644 439 | 2008_004669 440 | 2008_004673 441 | 2008_004691 442 | 2008_004693 443 | 2008_004709 444 | 2008_004715 445 | 2008_004757 446 | 2008_004775 447 | 2008_004782 448 | 2008_004785 449 | 2008_004798 450 | 2008_004848 451 | 2008_004861 452 | 2008_004870 453 | 2008_004877 454 | 2008_004884 455 | 2008_004891 456 | 2008_004901 457 | 
2008_004919 458 | 2008_005058 459 | 2008_005069 460 | 2008_005086 461 | 2008_005087 462 | 2008_005112 463 | 2008_005113 464 | 2008_005118 465 | 2008_005128 466 | 2008_005129 467 | 2008_005153 468 | 2008_005161 469 | 2008_005162 470 | 2008_005165 471 | 2008_005187 472 | 2008_005227 473 | 2008_005308 474 | 2008_005318 475 | 2008_005320 476 | 2008_005351 477 | 2008_005372 478 | 2008_005383 479 | 2008_005391 480 | 2008_005407 481 | 2008_005420 482 | 2008_005440 483 | 2008_005487 484 | 2008_005493 485 | 2008_005520 486 | 2008_005551 487 | 2008_005556 488 | 2008_005576 489 | 2008_005578 490 | 2008_005594 491 | 2008_005619 492 | 2008_005629 493 | 2008_005644 494 | 2008_005645 495 | 2008_005651 496 | 2008_005661 497 | 2008_005662 498 | 2008_005667 499 | 2008_005694 500 | 2008_005697 501 | 2008_005709 502 | 2008_005710 503 | 2008_005733 504 | 2008_005749 505 | 2008_005753 506 | 2008_005771 507 | 2008_005781 508 | 2008_005793 509 | 2008_005802 510 | 2008_005833 511 | 2008_005844 512 | 2008_005908 513 | 2008_005931 514 | 2008_005952 515 | 2008_006016 516 | 2008_006030 517 | 2008_006033 518 | 2008_006054 519 | 2008_006073 520 | 2008_006091 521 | 2008_006142 522 | 2008_006150 523 | 2008_006206 524 | 2008_006217 525 | 2008_006264 526 | 2008_006283 527 | 2008_006308 528 | 2008_006313 529 | 2008_006333 530 | 2008_006343 531 | 2008_006381 532 | 2008_006391 533 | 2008_006423 534 | 2008_006428 535 | 2008_006440 536 | 2008_006444 537 | 2008_006473 538 | 2008_006505 539 | 2008_006531 540 | 2008_006560 541 | 2008_006571 542 | 2008_006582 543 | 2008_006594 544 | 2008_006601 545 | 2008_006633 546 | 2008_006653 547 | 2008_006678 548 | 2008_006755 549 | 2008_006772 550 | 2008_006788 551 | 2008_006799 552 | 2008_006809 553 | 2008_006838 554 | 2008_006845 555 | 2008_006852 556 | 2008_006894 557 | 2008_006905 558 | 2008_006947 559 | 2008_006983 560 | 2008_007049 561 | 2008_007065 562 | 2008_007068 563 | 2008_007111 564 | 2008_007148 565 | 2008_007159 566 | 2008_007193 567 | 2008_007228 568 | 2008_007235 569 | 2008_007249 570 | 2008_007255 571 | 2008_007268 572 | 2008_007275 573 | 2008_007292 574 | 2008_007299 575 | 2008_007306 576 | 2008_007316 577 | 2008_007400 578 | 2008_007401 579 | 2008_007419 580 | 2008_007437 581 | 2008_007483 582 | 2008_007487 583 | 2008_007520 584 | 2008_007551 585 | 2008_007603 586 | 2008_007616 587 | 2008_007654 588 | 2008_007663 589 | 2008_007708 590 | 2008_007795 591 | 2008_007801 592 | 2008_007859 593 | 2008_007903 594 | 2008_007920 595 | 2008_007926 596 | 2008_008014 597 | 2008_008017 598 | 2008_008060 599 | 2008_008077 600 | 2008_008107 601 | 2008_008108 602 | 2008_008119 603 | 2008_008126 604 | 2008_008133 605 | 2008_008144 606 | 2008_008216 607 | 2008_008244 608 | 2008_008248 609 | 2008_008250 610 | 2008_008260 611 | 2008_008277 612 | 2008_008280 613 | 2008_008290 614 | 2008_008304 615 | 2008_008340 616 | 2008_008371 617 | 2008_008390 618 | 2008_008397 619 | 2008_008409 620 | 2008_008412 621 | 2008_008419 622 | 2008_008454 623 | 2008_008491 624 | 2008_008498 625 | 2008_008565 626 | 2008_008599 627 | 2008_008603 628 | 2008_008631 629 | 2008_008634 630 | 2008_008640 631 | 2008_008646 632 | 2008_008660 633 | 2008_008663 634 | 2008_008664 635 | 2008_008709 636 | 2008_008720 637 | 2008_008747 638 | 2008_008768 639 | 2009_000004 640 | 2009_000019 641 | 2009_000024 642 | 2009_000025 643 | 2009_000053 644 | 2009_000076 645 | 2009_000107 646 | 2009_000110 647 | 2009_000115 648 | 2009_000117 649 | 2009_000175 650 | 2009_000220 651 | 2009_000259 652 | 2009_000275 653 | 2009_000314 654 | 
2009_000368 655 | 2009_000373 656 | 2009_000384 657 | 2009_000388 658 | 2009_000423 659 | 2009_000433 660 | 2009_000434 661 | 2009_000458 662 | 2009_000475 663 | 2009_000481 664 | 2009_000495 665 | 2009_000514 666 | 2009_000555 667 | 2009_000556 668 | 2009_000561 669 | 2009_000571 670 | 2009_000581 671 | 2009_000605 672 | 2009_000609 673 | 2009_000644 674 | 2009_000654 675 | 2009_000671 676 | 2009_000733 677 | 2009_000740 678 | 2009_000766 679 | 2009_000775 680 | 2009_000776 681 | 2009_000795 682 | 2009_000850 683 | 2009_000881 684 | 2009_000900 685 | 2009_000914 686 | 2009_000941 687 | 2009_000977 688 | 2009_000984 689 | 2009_000986 690 | 2009_001005 691 | 2009_001015 692 | 2009_001058 693 | 2009_001072 694 | 2009_001087 695 | 2009_001092 696 | 2009_001109 697 | 2009_001114 698 | 2009_001115 699 | 2009_001141 700 | 2009_001174 701 | 2009_001175 702 | 2009_001182 703 | 2009_001222 704 | 2009_001228 705 | 2009_001246 706 | 2009_001262 707 | 2009_001274 708 | 2009_001284 709 | 2009_001297 710 | 2009_001331 711 | 2009_001336 712 | 2009_001337 713 | 2009_001379 714 | 2009_001392 715 | 2009_001451 716 | 2009_001485 717 | 2009_001488 718 | 2009_001497 719 | 2009_001504 720 | 2009_001506 721 | 2009_001573 722 | 2009_001576 723 | 2009_001603 724 | 2009_001613 725 | 2009_001652 726 | 2009_001661 727 | 2009_001668 728 | 2009_001680 729 | 2009_001688 730 | 2009_001697 731 | 2009_001729 732 | 2009_001771 733 | 2009_001785 734 | 2009_001793 735 | 2009_001814 736 | 2009_001866 737 | 2009_001872 738 | 2009_001880 739 | 2009_001883 740 | 2009_001891 741 | 2009_001913 742 | 2009_001938 743 | 2009_001946 744 | 2009_001953 745 | 2009_001969 746 | 2009_001978 747 | 2009_001995 748 | 2009_002007 749 | 2009_002036 750 | 2009_002041 751 | 2009_002049 752 | 2009_002051 753 | 2009_002062 754 | 2009_002063 755 | 2009_002067 756 | 2009_002085 757 | 2009_002092 758 | 2009_002114 759 | 2009_002115 760 | 2009_002142 761 | 2009_002148 762 | 2009_002157 763 | 2009_002181 764 | 2009_002220 765 | 2009_002284 766 | 2009_002287 767 | 2009_002300 768 | 2009_002310 769 | 2009_002315 770 | 2009_002334 771 | 2009_002337 772 | 2009_002354 773 | 2009_002357 774 | 2009_002411 775 | 2009_002426 776 | 2009_002458 777 | 2009_002459 778 | 2009_002461 779 | 2009_002466 780 | 2009_002481 781 | 2009_002483 782 | 2009_002503 783 | 2009_002581 784 | 2009_002583 785 | 2009_002589 786 | 2009_002600 787 | 2009_002601 788 | 2009_002602 789 | 2009_002641 790 | 2009_002646 791 | 2009_002656 792 | 2009_002666 793 | 2009_002720 794 | 2009_002767 795 | 2009_002768 796 | 2009_002794 797 | 2009_002821 798 | 2009_002825 799 | 2009_002839 800 | 2009_002840 801 | 2009_002859 802 | 2009_002860 803 | 2009_002881 804 | 2009_002889 805 | 2009_002892 806 | 2009_002895 807 | 2009_002896 808 | 2009_002900 809 | 2009_002924 810 | 2009_002966 811 | 2009_002973 812 | 2009_002981 813 | 2009_003004 814 | 2009_003021 815 | 2009_003028 816 | 2009_003037 817 | 2009_003038 818 | 2009_003055 819 | 2009_003085 820 | 2009_003100 821 | 2009_003106 822 | 2009_003117 823 | 2009_003139 824 | 2009_003170 825 | 2009_003179 826 | 2009_003184 827 | 2009_003186 828 | 2009_003190 829 | 2009_003221 830 | 2009_003236 831 | 2009_003242 832 | 2009_003244 833 | 2009_003260 834 | 2009_003264 835 | 2009_003274 836 | 2009_003283 837 | 2009_003296 838 | 2009_003332 839 | 2009_003341 840 | 2009_003354 841 | 2009_003370 842 | 2009_003371 843 | 2009_003374 844 | 2009_003391 845 | 2009_003393 846 | 2009_003404 847 | 2009_003405 848 | 2009_003414 849 | 2009_003428 850 | 2009_003470 851 | 
2009_003474 852 | 2009_003532 853 | 2009_003536 854 | 2009_003578 855 | 2009_003580 856 | 2009_003620 857 | 2009_003621 858 | 2009_003680 859 | 2009_003699 860 | 2009_003727 861 | 2009_003737 862 | 2009_003780 863 | 2009_003811 864 | 2009_003824 865 | 2009_003831 866 | 2009_003844 867 | 2009_003850 868 | 2009_003851 869 | 2009_003864 870 | 2009_003868 871 | 2009_003869 872 | 2009_003893 873 | 2009_003909 874 | 2009_003924 875 | 2009_003925 876 | 2009_003960 877 | 2009_003979 878 | 2009_003990 879 | 2009_003997 880 | 2009_004006 881 | 2009_004010 882 | 2009_004066 883 | 2009_004077 884 | 2009_004081 885 | 2009_004097 886 | 2009_004098 887 | 2009_004136 888 | 2009_004216 889 | 2009_004220 890 | 2009_004266 891 | 2009_004269 892 | 2009_004286 893 | 2009_004296 894 | 2009_004321 895 | 2009_004342 896 | 2009_004343 897 | 2009_004344 898 | 2009_004385 899 | 2009_004408 900 | 2009_004420 901 | 2009_004441 902 | 2009_004447 903 | 2009_004461 904 | 2009_004467 905 | 2009_004485 906 | 2009_004488 907 | 2009_004516 908 | 2009_004521 909 | 2009_004544 910 | 2009_004596 911 | 2009_004613 912 | 2009_004615 913 | 2009_004618 914 | 2009_004621 915 | 2009_004646 916 | 2009_004659 917 | 2009_004663 918 | 2009_004666 919 | 2009_004691 920 | 2009_004715 921 | 2009_004726 922 | 2009_004753 923 | 2009_004776 924 | 2009_004811 925 | 2009_004814 926 | 2009_004818 927 | 2009_004835 928 | 2009_004863 929 | 2009_004894 930 | 2009_004909 931 | 2009_004928 932 | 2009_004937 933 | 2009_004954 934 | 2009_004966 935 | 2009_004970 936 | 2009_004976 937 | 2009_005004 938 | 2009_005011 939 | 2009_005053 940 | 2009_005072 941 | 2009_005115 942 | 2009_005146 943 | 2009_005151 944 | 2009_005164 945 | 2009_005179 946 | 2009_005224 947 | 2009_005243 948 | 2009_005249 949 | 2009_005252 950 | 2009_005254 951 | 2009_005258 952 | 2009_005264 953 | 2009_005266 954 | 2009_005276 955 | 2009_005290 956 | 2009_005295 957 | 2010_000004 958 | 2010_000005 959 | 2010_000006 960 | 2010_000032 961 | 2010_000062 962 | 2010_000093 963 | 2010_000094 964 | 2010_000161 965 | 2010_000176 966 | 2010_000223 967 | 2010_000226 968 | 2010_000236 969 | 2010_000239 970 | 2010_000287 971 | 2010_000300 972 | 2010_000301 973 | 2010_000328 974 | 2010_000378 975 | 2010_000405 976 | 2010_000407 977 | 2010_000472 978 | 2010_000479 979 | 2010_000491 980 | 2010_000533 981 | 2010_000535 982 | 2010_000542 983 | 2010_000554 984 | 2010_000580 985 | 2010_000594 986 | 2010_000596 987 | 2010_000599 988 | 2010_000606 989 | 2010_000615 990 | 2010_000654 991 | 2010_000659 992 | 2010_000693 993 | 2010_000698 994 | 2010_000730 995 | 2010_000734 996 | 2010_000741 997 | 2010_000755 998 | 2010_000768 999 | 2010_000794 1000 | 2010_000813 1001 | 2010_000817 1002 | 2010_000834 1003 | 2010_000839 1004 | 2010_000848 1005 | 2010_000881 1006 | 2010_000888 1007 | 2010_000900 1008 | 2010_000903 1009 | 2010_000924 1010 | 2010_000946 1011 | 2010_000953 1012 | 2010_000957 1013 | 2010_000967 1014 | 2010_000992 1015 | 2010_000998 1016 | 2010_001053 1017 | 2010_001067 1018 | 2010_001114 1019 | 2010_001132 1020 | 2010_001138 1021 | 2010_001169 1022 | 2010_001171 1023 | 2010_001228 1024 | 2010_001260 1025 | 2010_001268 1026 | 2010_001280 1027 | 2010_001298 1028 | 2010_001302 1029 | 2010_001308 1030 | 2010_001324 1031 | 2010_001332 1032 | 2010_001335 1033 | 2010_001345 1034 | 2010_001346 1035 | 2010_001349 1036 | 2010_001373 1037 | 2010_001381 1038 | 2010_001392 1039 | 2010_001396 1040 | 2010_001420 1041 | 2010_001500 1042 | 2010_001506 1043 | 2010_001521 1044 | 2010_001532 1045 | 2010_001558 1046 
| 2010_001598 1047 | 2010_001611 1048 | 2010_001631 1049 | 2010_001639 1050 | 2010_001651 1051 | 2010_001663 1052 | 2010_001664 1053 | 2010_001728 1054 | 2010_001778 1055 | 2010_001861 1056 | 2010_001874 1057 | 2010_001900 1058 | 2010_001905 1059 | 2010_001969 1060 | 2010_002008 1061 | 2010_002014 1062 | 2010_002049 1063 | 2010_002052 1064 | 2010_002091 1065 | 2010_002115 1066 | 2010_002119 1067 | 2010_002134 1068 | 2010_002156 1069 | 2010_002160 1070 | 2010_002186 1071 | 2010_002210 1072 | 2010_002241 1073 | 2010_002252 1074 | 2010_002258 1075 | 2010_002262 1076 | 2010_002273 1077 | 2010_002290 1078 | 2010_002292 1079 | 2010_002347 1080 | 2010_002358 1081 | 2010_002360 1082 | 2010_002367 1083 | 2010_002416 1084 | 2010_002451 1085 | 2010_002481 1086 | 2010_002490 1087 | 2010_002495 1088 | 2010_002588 1089 | 2010_002607 1090 | 2010_002609 1091 | 2010_002610 1092 | 2010_002641 1093 | 2010_002685 1094 | 2010_002699 1095 | 2010_002719 1096 | 2010_002735 1097 | 2010_002751 1098 | 2010_002804 1099 | 2010_002835 1100 | 2010_002852 1101 | 2010_002885 1102 | 2010_002889 1103 | 2010_002904 1104 | 2010_002908 1105 | 2010_002916 1106 | 2010_002974 1107 | 2010_002977 1108 | 2010_003005 1109 | 2010_003021 1110 | 2010_003030 1111 | 2010_003038 1112 | 2010_003046 1113 | 2010_003052 1114 | 2010_003089 1115 | 2010_003110 1116 | 2010_003118 1117 | 2010_003171 1118 | 2010_003217 1119 | 2010_003221 1120 | 2010_003228 1121 | 2010_003243 1122 | 2010_003271 1123 | 2010_003295 1124 | 2010_003306 1125 | 2010_003324 1126 | 2010_003363 1127 | 2010_003382 1128 | 2010_003388 1129 | 2010_003389 1130 | 2010_003392 1131 | 2010_003430 1132 | 2010_003442 1133 | 2010_003459 1134 | 2010_003485 1135 | 2010_003486 1136 | 2010_003500 1137 | 2010_003523 1138 | 2010_003542 1139 | 2010_003552 1140 | 2010_003570 1141 | 2010_003572 1142 | 2010_003586 1143 | 2010_003615 1144 | 2010_003623 1145 | 2010_003657 1146 | 2010_003666 1147 | 2010_003705 1148 | 2010_003710 1149 | 2010_003720 1150 | 2010_003733 1151 | 2010_003750 1152 | 2010_003767 1153 | 2010_003802 1154 | 2010_003809 1155 | 2010_003830 1156 | 2010_003832 1157 | 2010_003836 1158 | 2010_003838 1159 | 2010_003850 1160 | 2010_003867 1161 | 2010_003882 1162 | 2010_003909 1163 | 2010_003922 1164 | 2010_003923 1165 | 2010_003978 1166 | 2010_003989 1167 | 2010_003990 1168 | 2010_004000 1169 | 2010_004003 1170 | 2010_004068 1171 | 2010_004076 1172 | 2010_004117 1173 | 2010_004136 1174 | 2010_004142 1175 | 2010_004195 1176 | 2010_004200 1177 | 2010_004202 1178 | 2010_004232 1179 | 2010_004261 1180 | 2010_004266 1181 | 2010_004273 1182 | 2010_004305 1183 | 2010_004403 1184 | 2010_004433 1185 | 2010_004434 1186 | 2010_004435 1187 | 2010_004438 1188 | 2010_004442 1189 | 2010_004473 1190 | 2010_004482 1191 | 2010_004487 1192 | 2010_004489 1193 | 2010_004512 1194 | 2010_004525 1195 | 2010_004527 1196 | 2010_004532 1197 | 2010_004566 1198 | 2010_004568 1199 | 2010_004579 1200 | 2010_004611 1201 | 2010_004641 1202 | 2010_004688 1203 | 2010_004699 1204 | 2010_004702 1205 | 2010_004716 1206 | 2010_004754 1207 | 2010_004767 1208 | 2010_004776 1209 | 2010_004811 1210 | 2010_004837 1211 | 2010_004839 1212 | 2010_004845 1213 | 2010_004860 1214 | 2010_004867 1215 | 2010_004881 1216 | 2010_004939 1217 | 2010_005001 1218 | 2010_005047 1219 | 2010_005051 1220 | 2010_005091 1221 | 2010_005095 1222 | 2010_005125 1223 | 2010_005140 1224 | 2010_005177 1225 | 2010_005178 1226 | 2010_005194 1227 | 2010_005197 1228 | 2010_005200 1229 | 2010_005205 1230 | 2010_005212 1231 | 2010_005248 1232 | 2010_005294 1233 | 
2010_005298 1234 | 2010_005313 1235 | 2010_005324 1236 | 2010_005328 1237 | 2010_005329 1238 | 2010_005380 1239 | 2010_005404 1240 | 2010_005407 1241 | 2010_005411 1242 | 2010_005423 1243 | 2010_005499 1244 | 2010_005509 1245 | 2010_005510 1246 | 2010_005544 1247 | 2010_005549 1248 | 2010_005590 1249 | 2010_005639 1250 | 2010_005699 1251 | 2010_005704 1252 | 2010_005707 1253 | 2010_005711 1254 | 2010_005726 1255 | 2010_005741 1256 | 2010_005765 1257 | 2010_005790 1258 | 2010_005792 1259 | 2010_005797 1260 | 2010_005812 1261 | 2010_005850 1262 | 2010_005861 1263 | 2010_005869 1264 | 2010_005908 1265 | 2010_005915 1266 | 2010_005946 1267 | 2010_005965 1268 | 2010_006044 1269 | 2010_006047 1270 | 2010_006052 1271 | 2010_006081 1272 | 2011_000001 1273 | 2011_000013 1274 | 2011_000014 1275 | 2011_000020 1276 | 2011_000032 1277 | 2011_000042 1278 | 2011_000063 1279 | 2011_000115 1280 | 2011_000120 1281 | 2011_000240 1282 | 2011_000244 1283 | 2011_000254 1284 | 2011_000261 1285 | 2011_000262 1286 | 2011_000271 1287 | 2011_000274 1288 | 2011_000306 1289 | 2011_000311 1290 | 2011_000316 1291 | 2011_000328 1292 | 2011_000351 1293 | 2011_000352 1294 | 2011_000406 1295 | 2011_000414 1296 | 2011_000448 1297 | 2011_000451 1298 | 2011_000470 1299 | 2011_000473 1300 | 2011_000515 1301 | 2011_000537 1302 | 2011_000576 1303 | 2011_000603 1304 | 2011_000616 1305 | 2011_000636 1306 | 2011_000639 1307 | 2011_000654 1308 | 2011_000660 1309 | 2011_000664 1310 | 2011_000667 1311 | 2011_000670 1312 | 2011_000676 1313 | 2011_000721 1314 | 2011_000723 1315 | 2011_000762 1316 | 2011_000766 1317 | 2011_000786 1318 | 2011_000802 1319 | 2011_000810 1320 | 2011_000821 1321 | 2011_000841 1322 | 2011_000844 1323 | 2011_000846 1324 | 2011_000869 1325 | 2011_000890 1326 | 2011_000915 1327 | 2011_000924 1328 | 2011_000937 1329 | 2011_000939 1330 | 2011_000952 1331 | 2011_000968 1332 | 2011_000974 1333 | 2011_001037 1334 | 2011_001072 1335 | 2011_001085 1336 | 2011_001089 1337 | 2011_001090 1338 | 2011_001099 1339 | 2011_001104 1340 | 2011_001112 1341 | 2011_001120 1342 | 2011_001132 1343 | 2011_001151 1344 | 2011_001194 1345 | 2011_001258 1346 | 2011_001274 1347 | 2011_001314 1348 | 2011_001317 1349 | 2011_001321 1350 | 2011_001379 1351 | 2011_001425 1352 | 2011_001431 1353 | 2011_001443 1354 | 2011_001446 1355 | 2011_001452 1356 | 2011_001454 1357 | 2011_001477 1358 | 2011_001509 1359 | 2011_001512 1360 | 2011_001515 1361 | 2011_001528 1362 | 2011_001554 1363 | 2011_001561 1364 | 2011_001580 1365 | 2011_001587 1366 | 2011_001623 1367 | 2011_001648 1368 | 2011_001651 1369 | 2011_001654 1370 | 2011_001684 1371 | 2011_001696 1372 | 2011_001697 1373 | 2011_001760 1374 | 2011_001761 1375 | 2011_001798 1376 | 2011_001807 1377 | 2011_001851 1378 | 2011_001852 1379 | 2011_001853 1380 | 2011_001888 1381 | 2011_001940 1382 | 2011_002014 1383 | 2011_002028 1384 | 2011_002056 1385 | 2011_002061 1386 | 2011_002068 1387 | 2011_002076 1388 | 2011_002090 1389 | 2011_002095 1390 | 2011_002104 1391 | 2011_002136 1392 | 2011_002138 1393 | 2011_002151 1394 | 2011_002153 1395 | 2011_002155 1396 | 2011_002197 1397 | 2011_002198 1398 | 2011_002243 1399 | 2011_002250 1400 | 2011_002257 1401 | 2011_002262 1402 | 2011_002264 1403 | 2011_002296 1404 | 2011_002314 1405 | 2011_002331 1406 | 2011_002333 1407 | 2011_002411 1408 | 2011_002417 1409 | 2011_002425 1410 | 2011_002437 1411 | 2011_002444 1412 | 2011_002445 1413 | 2011_002449 1414 | 2011_002468 1415 | 2011_002469 1416 | 2011_002473 1417 | 2011_002508 1418 | 2011_002523 1419 | 2011_002534 1420 | 
2011_002557 1421 | 2011_002564 1422 | 2011_002572 1423 | 2011_002597 1424 | 2011_002622 1425 | 2011_002632 1426 | 2011_002635 1427 | 2011_002643 1428 | 2011_002653 1429 | 2011_002667 1430 | 2011_002681 1431 | 2011_002707 1432 | 2011_002736 1433 | 2011_002759 1434 | 2011_002783 1435 | 2011_002792 1436 | 2011_002799 1437 | 2011_002824 1438 | 2011_002835 1439 | 2011_002866 1440 | 2011_002876 1441 | 2011_002888 1442 | 2011_002894 1443 | 2011_002903 1444 | 2011_002905 1445 | 2011_002986 1446 | 2011_003045 1447 | 2011_003064 1448 | 2011_003070 1449 | 2011_003083 1450 | 2011_003093 1451 | 2011_003096 1452 | 2011_003102 1453 | 2011_003156 1454 | 2011_003170 1455 | 2011_003178 1456 | 2011_003231 1457 | -------------------------------------------------------------------------------- /voc12/val.txt: -------------------------------------------------------------------------------- 1 | 2007_000033 2 | 2007_000042 3 | 2007_000061 4 | 2007_000123 5 | 2007_000129 6 | 2007_000175 7 | 2007_000187 8 | 2007_000323 9 | 2007_000332 10 | 2007_000346 11 | 2007_000452 12 | 2007_000464 13 | 2007_000491 14 | 2007_000529 15 | 2007_000559 16 | 2007_000572 17 | 2007_000629 18 | 2007_000636 19 | 2007_000661 20 | 2007_000663 21 | 2007_000676 22 | 2007_000727 23 | 2007_000762 24 | 2007_000783 25 | 2007_000799 26 | 2007_000804 27 | 2007_000830 28 | 2007_000837 29 | 2007_000847 30 | 2007_000862 31 | 2007_000925 32 | 2007_000999 33 | 2007_001154 34 | 2007_001175 35 | 2007_001239 36 | 2007_001284 37 | 2007_001288 38 | 2007_001289 39 | 2007_001299 40 | 2007_001311 41 | 2007_001321 42 | 2007_001377 43 | 2007_001408 44 | 2007_001423 45 | 2007_001430 46 | 2007_001457 47 | 2007_001458 48 | 2007_001526 49 | 2007_001568 50 | 2007_001585 51 | 2007_001586 52 | 2007_001587 53 | 2007_001594 54 | 2007_001630 55 | 2007_001677 56 | 2007_001678 57 | 2007_001717 58 | 2007_001733 59 | 2007_001761 60 | 2007_001763 61 | 2007_001774 62 | 2007_001884 63 | 2007_001955 64 | 2007_002046 65 | 2007_002094 66 | 2007_002119 67 | 2007_002132 68 | 2007_002260 69 | 2007_002266 70 | 2007_002268 71 | 2007_002284 72 | 2007_002376 73 | 2007_002378 74 | 2007_002387 75 | 2007_002400 76 | 2007_002412 77 | 2007_002426 78 | 2007_002427 79 | 2007_002445 80 | 2007_002470 81 | 2007_002539 82 | 2007_002565 83 | 2007_002597 84 | 2007_002618 85 | 2007_002619 86 | 2007_002624 87 | 2007_002643 88 | 2007_002648 89 | 2007_002719 90 | 2007_002728 91 | 2007_002823 92 | 2007_002824 93 | 2007_002852 94 | 2007_002903 95 | 2007_003011 96 | 2007_003020 97 | 2007_003022 98 | 2007_003051 99 | 2007_003088 100 | 2007_003101 101 | 2007_003106 102 | 2007_003110 103 | 2007_003131 104 | 2007_003134 105 | 2007_003137 106 | 2007_003143 107 | 2007_003169 108 | 2007_003188 109 | 2007_003194 110 | 2007_003195 111 | 2007_003201 112 | 2007_003349 113 | 2007_003367 114 | 2007_003373 115 | 2007_003499 116 | 2007_003503 117 | 2007_003506 118 | 2007_003530 119 | 2007_003571 120 | 2007_003587 121 | 2007_003611 122 | 2007_003621 123 | 2007_003682 124 | 2007_003711 125 | 2007_003714 126 | 2007_003742 127 | 2007_003786 128 | 2007_003841 129 | 2007_003848 130 | 2007_003861 131 | 2007_003872 132 | 2007_003917 133 | 2007_003957 134 | 2007_003991 135 | 2007_004033 136 | 2007_004052 137 | 2007_004112 138 | 2007_004121 139 | 2007_004143 140 | 2007_004189 141 | 2007_004190 142 | 2007_004193 143 | 2007_004241 144 | 2007_004275 145 | 2007_004281 146 | 2007_004380 147 | 2007_004392 148 | 2007_004405 149 | 2007_004468 150 | 2007_004483 151 | 2007_004510 152 | 2007_004538 153 | 2007_004558 154 | 2007_004644 155 | 
2007_004649 156 | 2007_004712 157 | 2007_004722 158 | 2007_004856 159 | 2007_004866 160 | 2007_004902 161 | 2007_004969 162 | 2007_005058 163 | 2007_005074 164 | 2007_005107 165 | 2007_005114 166 | 2007_005149 167 | 2007_005173 168 | 2007_005281 169 | 2007_005294 170 | 2007_005296 171 | 2007_005304 172 | 2007_005331 173 | 2007_005354 174 | 2007_005358 175 | 2007_005428 176 | 2007_005460 177 | 2007_005469 178 | 2007_005509 179 | 2007_005547 180 | 2007_005600 181 | 2007_005608 182 | 2007_005626 183 | 2007_005689 184 | 2007_005696 185 | 2007_005705 186 | 2007_005759 187 | 2007_005803 188 | 2007_005813 189 | 2007_005828 190 | 2007_005844 191 | 2007_005845 192 | 2007_005857 193 | 2007_005911 194 | 2007_005915 195 | 2007_005978 196 | 2007_006028 197 | 2007_006035 198 | 2007_006046 199 | 2007_006076 200 | 2007_006086 201 | 2007_006117 202 | 2007_006171 203 | 2007_006241 204 | 2007_006260 205 | 2007_006277 206 | 2007_006348 207 | 2007_006364 208 | 2007_006373 209 | 2007_006444 210 | 2007_006449 211 | 2007_006549 212 | 2007_006553 213 | 2007_006560 214 | 2007_006647 215 | 2007_006678 216 | 2007_006680 217 | 2007_006698 218 | 2007_006761 219 | 2007_006802 220 | 2007_006837 221 | 2007_006841 222 | 2007_006864 223 | 2007_006866 224 | 2007_006946 225 | 2007_007007 226 | 2007_007084 227 | 2007_007109 228 | 2007_007130 229 | 2007_007165 230 | 2007_007168 231 | 2007_007195 232 | 2007_007196 233 | 2007_007203 234 | 2007_007211 235 | 2007_007235 236 | 2007_007341 237 | 2007_007414 238 | 2007_007417 239 | 2007_007470 240 | 2007_007477 241 | 2007_007493 242 | 2007_007498 243 | 2007_007524 244 | 2007_007534 245 | 2007_007624 246 | 2007_007651 247 | 2007_007688 248 | 2007_007748 249 | 2007_007795 250 | 2007_007810 251 | 2007_007815 252 | 2007_007818 253 | 2007_007836 254 | 2007_007849 255 | 2007_007881 256 | 2007_007996 257 | 2007_008051 258 | 2007_008084 259 | 2007_008106 260 | 2007_008110 261 | 2007_008204 262 | 2007_008222 263 | 2007_008256 264 | 2007_008260 265 | 2007_008339 266 | 2007_008374 267 | 2007_008415 268 | 2007_008430 269 | 2007_008543 270 | 2007_008547 271 | 2007_008596 272 | 2007_008645 273 | 2007_008670 274 | 2007_008708 275 | 2007_008722 276 | 2007_008747 277 | 2007_008802 278 | 2007_008815 279 | 2007_008897 280 | 2007_008944 281 | 2007_008964 282 | 2007_008973 283 | 2007_008980 284 | 2007_009015 285 | 2007_009068 286 | 2007_009084 287 | 2007_009088 288 | 2007_009096 289 | 2007_009221 290 | 2007_009245 291 | 2007_009251 292 | 2007_009252 293 | 2007_009258 294 | 2007_009320 295 | 2007_009323 296 | 2007_009331 297 | 2007_009346 298 | 2007_009392 299 | 2007_009413 300 | 2007_009419 301 | 2007_009446 302 | 2007_009458 303 | 2007_009521 304 | 2007_009562 305 | 2007_009592 306 | 2007_009654 307 | 2007_009655 308 | 2007_009684 309 | 2007_009687 310 | 2007_009691 311 | 2007_009706 312 | 2007_009750 313 | 2007_009756 314 | 2007_009764 315 | 2007_009794 316 | 2007_009817 317 | 2007_009841 318 | 2007_009897 319 | 2007_009911 320 | 2007_009923 321 | 2007_009938 322 | 2008_000009 323 | 2008_000016 324 | 2008_000073 325 | 2008_000075 326 | 2008_000080 327 | 2008_000107 328 | 2008_000120 329 | 2008_000123 330 | 2008_000149 331 | 2008_000182 332 | 2008_000213 333 | 2008_000215 334 | 2008_000223 335 | 2008_000233 336 | 2008_000234 337 | 2008_000239 338 | 2008_000254 339 | 2008_000270 340 | 2008_000271 341 | 2008_000345 342 | 2008_000359 343 | 2008_000391 344 | 2008_000401 345 | 2008_000464 346 | 2008_000469 347 | 2008_000474 348 | 2008_000501 349 | 2008_000510 350 | 2008_000533 351 | 2008_000573 352 | 
2008_000589 353 | 2008_000602 354 | 2008_000630 355 | 2008_000657 356 | 2008_000661 357 | 2008_000662 358 | 2008_000666 359 | 2008_000673 360 | 2008_000700 361 | 2008_000725 362 | 2008_000731 363 | 2008_000763 364 | 2008_000765 365 | 2008_000782 366 | 2008_000795 367 | 2008_000811 368 | 2008_000848 369 | 2008_000853 370 | 2008_000863 371 | 2008_000911 372 | 2008_000919 373 | 2008_000943 374 | 2008_000992 375 | 2008_001013 376 | 2008_001028 377 | 2008_001040 378 | 2008_001070 379 | 2008_001074 380 | 2008_001076 381 | 2008_001078 382 | 2008_001135 383 | 2008_001150 384 | 2008_001170 385 | 2008_001231 386 | 2008_001249 387 | 2008_001260 388 | 2008_001283 389 | 2008_001308 390 | 2008_001379 391 | 2008_001404 392 | 2008_001433 393 | 2008_001439 394 | 2008_001478 395 | 2008_001491 396 | 2008_001504 397 | 2008_001513 398 | 2008_001514 399 | 2008_001531 400 | 2008_001546 401 | 2008_001547 402 | 2008_001580 403 | 2008_001629 404 | 2008_001640 405 | 2008_001682 406 | 2008_001688 407 | 2008_001715 408 | 2008_001821 409 | 2008_001874 410 | 2008_001885 411 | 2008_001895 412 | 2008_001966 413 | 2008_001971 414 | 2008_001992 415 | 2008_002043 416 | 2008_002152 417 | 2008_002205 418 | 2008_002212 419 | 2008_002239 420 | 2008_002240 421 | 2008_002241 422 | 2008_002269 423 | 2008_002273 424 | 2008_002358 425 | 2008_002379 426 | 2008_002383 427 | 2008_002429 428 | 2008_002464 429 | 2008_002467 430 | 2008_002492 431 | 2008_002495 432 | 2008_002504 433 | 2008_002521 434 | 2008_002536 435 | 2008_002588 436 | 2008_002623 437 | 2008_002680 438 | 2008_002681 439 | 2008_002775 440 | 2008_002778 441 | 2008_002835 442 | 2008_002859 443 | 2008_002864 444 | 2008_002900 445 | 2008_002904 446 | 2008_002929 447 | 2008_002936 448 | 2008_002942 449 | 2008_002958 450 | 2008_003003 451 | 2008_003026 452 | 2008_003034 453 | 2008_003076 454 | 2008_003105 455 | 2008_003108 456 | 2008_003110 457 | 2008_003135 458 | 2008_003141 459 | 2008_003155 460 | 2008_003210 461 | 2008_003238 462 | 2008_003270 463 | 2008_003330 464 | 2008_003333 465 | 2008_003369 466 | 2008_003379 467 | 2008_003451 468 | 2008_003461 469 | 2008_003477 470 | 2008_003492 471 | 2008_003499 472 | 2008_003511 473 | 2008_003546 474 | 2008_003576 475 | 2008_003577 476 | 2008_003676 477 | 2008_003709 478 | 2008_003733 479 | 2008_003777 480 | 2008_003782 481 | 2008_003821 482 | 2008_003846 483 | 2008_003856 484 | 2008_003858 485 | 2008_003874 486 | 2008_003876 487 | 2008_003885 488 | 2008_003886 489 | 2008_003926 490 | 2008_003976 491 | 2008_004069 492 | 2008_004101 493 | 2008_004140 494 | 2008_004172 495 | 2008_004175 496 | 2008_004212 497 | 2008_004279 498 | 2008_004339 499 | 2008_004345 500 | 2008_004363 501 | 2008_004367 502 | 2008_004396 503 | 2008_004399 504 | 2008_004453 505 | 2008_004477 506 | 2008_004552 507 | 2008_004562 508 | 2008_004575 509 | 2008_004610 510 | 2008_004612 511 | 2008_004621 512 | 2008_004624 513 | 2008_004654 514 | 2008_004659 515 | 2008_004687 516 | 2008_004701 517 | 2008_004704 518 | 2008_004705 519 | 2008_004754 520 | 2008_004758 521 | 2008_004854 522 | 2008_004910 523 | 2008_004995 524 | 2008_005049 525 | 2008_005089 526 | 2008_005097 527 | 2008_005105 528 | 2008_005145 529 | 2008_005197 530 | 2008_005217 531 | 2008_005242 532 | 2008_005245 533 | 2008_005254 534 | 2008_005262 535 | 2008_005338 536 | 2008_005398 537 | 2008_005399 538 | 2008_005422 539 | 2008_005439 540 | 2008_005445 541 | 2008_005525 542 | 2008_005544 543 | 2008_005628 544 | 2008_005633 545 | 2008_005637 546 | 2008_005642 547 | 2008_005676 548 | 2008_005680 549 | 
2008_005691 550 | 2008_005727 551 | 2008_005738 552 | 2008_005812 553 | 2008_005904 554 | 2008_005915 555 | 2008_006008 556 | 2008_006036 557 | 2008_006055 558 | 2008_006063 559 | 2008_006108 560 | 2008_006130 561 | 2008_006143 562 | 2008_006159 563 | 2008_006216 564 | 2008_006219 565 | 2008_006229 566 | 2008_006254 567 | 2008_006275 568 | 2008_006325 569 | 2008_006327 570 | 2008_006341 571 | 2008_006408 572 | 2008_006480 573 | 2008_006523 574 | 2008_006526 575 | 2008_006528 576 | 2008_006553 577 | 2008_006554 578 | 2008_006703 579 | 2008_006722 580 | 2008_006752 581 | 2008_006784 582 | 2008_006835 583 | 2008_006874 584 | 2008_006981 585 | 2008_006986 586 | 2008_007025 587 | 2008_007031 588 | 2008_007048 589 | 2008_007120 590 | 2008_007123 591 | 2008_007143 592 | 2008_007194 593 | 2008_007219 594 | 2008_007273 595 | 2008_007350 596 | 2008_007378 597 | 2008_007392 598 | 2008_007402 599 | 2008_007497 600 | 2008_007498 601 | 2008_007507 602 | 2008_007513 603 | 2008_007527 604 | 2008_007548 605 | 2008_007596 606 | 2008_007677 607 | 2008_007737 608 | 2008_007797 609 | 2008_007804 610 | 2008_007811 611 | 2008_007814 612 | 2008_007828 613 | 2008_007836 614 | 2008_007945 615 | 2008_007994 616 | 2008_008051 617 | 2008_008103 618 | 2008_008127 619 | 2008_008221 620 | 2008_008252 621 | 2008_008268 622 | 2008_008296 623 | 2008_008301 624 | 2008_008335 625 | 2008_008362 626 | 2008_008392 627 | 2008_008393 628 | 2008_008421 629 | 2008_008434 630 | 2008_008469 631 | 2008_008629 632 | 2008_008682 633 | 2008_008711 634 | 2008_008746 635 | 2009_000012 636 | 2009_000013 637 | 2009_000022 638 | 2009_000032 639 | 2009_000037 640 | 2009_000039 641 | 2009_000074 642 | 2009_000080 643 | 2009_000087 644 | 2009_000096 645 | 2009_000121 646 | 2009_000136 647 | 2009_000149 648 | 2009_000156 649 | 2009_000201 650 | 2009_000205 651 | 2009_000219 652 | 2009_000242 653 | 2009_000309 654 | 2009_000318 655 | 2009_000335 656 | 2009_000351 657 | 2009_000354 658 | 2009_000387 659 | 2009_000391 660 | 2009_000412 661 | 2009_000418 662 | 2009_000421 663 | 2009_000426 664 | 2009_000440 665 | 2009_000446 666 | 2009_000455 667 | 2009_000457 668 | 2009_000469 669 | 2009_000487 670 | 2009_000488 671 | 2009_000523 672 | 2009_000573 673 | 2009_000619 674 | 2009_000628 675 | 2009_000641 676 | 2009_000664 677 | 2009_000675 678 | 2009_000704 679 | 2009_000705 680 | 2009_000712 681 | 2009_000716 682 | 2009_000723 683 | 2009_000727 684 | 2009_000730 685 | 2009_000731 686 | 2009_000732 687 | 2009_000771 688 | 2009_000825 689 | 2009_000828 690 | 2009_000839 691 | 2009_000840 692 | 2009_000845 693 | 2009_000879 694 | 2009_000892 695 | 2009_000919 696 | 2009_000924 697 | 2009_000931 698 | 2009_000935 699 | 2009_000964 700 | 2009_000989 701 | 2009_000991 702 | 2009_000998 703 | 2009_001008 704 | 2009_001082 705 | 2009_001108 706 | 2009_001160 707 | 2009_001215 708 | 2009_001240 709 | 2009_001255 710 | 2009_001278 711 | 2009_001299 712 | 2009_001300 713 | 2009_001314 714 | 2009_001332 715 | 2009_001333 716 | 2009_001363 717 | 2009_001391 718 | 2009_001411 719 | 2009_001433 720 | 2009_001505 721 | 2009_001535 722 | 2009_001536 723 | 2009_001565 724 | 2009_001607 725 | 2009_001644 726 | 2009_001663 727 | 2009_001683 728 | 2009_001684 729 | 2009_001687 730 | 2009_001718 731 | 2009_001731 732 | 2009_001765 733 | 2009_001768 734 | 2009_001775 735 | 2009_001804 736 | 2009_001816 737 | 2009_001818 738 | 2009_001850 739 | 2009_001851 740 | 2009_001854 741 | 2009_001941 742 | 2009_001991 743 | 2009_002012 744 | 2009_002035 745 | 2009_002042 746 | 
2009_002082 747 | 2009_002094 748 | 2009_002097 749 | 2009_002122 750 | 2009_002150 751 | 2009_002155 752 | 2009_002164 753 | 2009_002165 754 | 2009_002171 755 | 2009_002185 756 | 2009_002202 757 | 2009_002221 758 | 2009_002238 759 | 2009_002239 760 | 2009_002265 761 | 2009_002268 762 | 2009_002291 763 | 2009_002295 764 | 2009_002317 765 | 2009_002320 766 | 2009_002346 767 | 2009_002366 768 | 2009_002372 769 | 2009_002382 770 | 2009_002390 771 | 2009_002415 772 | 2009_002445 773 | 2009_002487 774 | 2009_002521 775 | 2009_002527 776 | 2009_002535 777 | 2009_002539 778 | 2009_002549 779 | 2009_002562 780 | 2009_002568 781 | 2009_002571 782 | 2009_002573 783 | 2009_002584 784 | 2009_002591 785 | 2009_002594 786 | 2009_002604 787 | 2009_002618 788 | 2009_002635 789 | 2009_002638 790 | 2009_002649 791 | 2009_002651 792 | 2009_002727 793 | 2009_002732 794 | 2009_002749 795 | 2009_002753 796 | 2009_002771 797 | 2009_002808 798 | 2009_002856 799 | 2009_002887 800 | 2009_002888 801 | 2009_002928 802 | 2009_002936 803 | 2009_002975 804 | 2009_002982 805 | 2009_002990 806 | 2009_003003 807 | 2009_003005 808 | 2009_003043 809 | 2009_003059 810 | 2009_003063 811 | 2009_003065 812 | 2009_003071 813 | 2009_003080 814 | 2009_003105 815 | 2009_003123 816 | 2009_003193 817 | 2009_003196 818 | 2009_003217 819 | 2009_003224 820 | 2009_003241 821 | 2009_003269 822 | 2009_003273 823 | 2009_003299 824 | 2009_003304 825 | 2009_003311 826 | 2009_003323 827 | 2009_003343 828 | 2009_003378 829 | 2009_003387 830 | 2009_003406 831 | 2009_003433 832 | 2009_003450 833 | 2009_003466 834 | 2009_003481 835 | 2009_003494 836 | 2009_003498 837 | 2009_003504 838 | 2009_003507 839 | 2009_003517 840 | 2009_003523 841 | 2009_003542 842 | 2009_003549 843 | 2009_003551 844 | 2009_003564 845 | 2009_003569 846 | 2009_003576 847 | 2009_003589 848 | 2009_003607 849 | 2009_003640 850 | 2009_003666 851 | 2009_003696 852 | 2009_003703 853 | 2009_003707 854 | 2009_003756 855 | 2009_003771 856 | 2009_003773 857 | 2009_003804 858 | 2009_003806 859 | 2009_003810 860 | 2009_003849 861 | 2009_003857 862 | 2009_003858 863 | 2009_003895 864 | 2009_003903 865 | 2009_003904 866 | 2009_003928 867 | 2009_003938 868 | 2009_003971 869 | 2009_003991 870 | 2009_004021 871 | 2009_004033 872 | 2009_004043 873 | 2009_004070 874 | 2009_004072 875 | 2009_004084 876 | 2009_004099 877 | 2009_004125 878 | 2009_004140 879 | 2009_004217 880 | 2009_004221 881 | 2009_004247 882 | 2009_004248 883 | 2009_004255 884 | 2009_004298 885 | 2009_004324 886 | 2009_004455 887 | 2009_004494 888 | 2009_004497 889 | 2009_004504 890 | 2009_004507 891 | 2009_004509 892 | 2009_004540 893 | 2009_004568 894 | 2009_004579 895 | 2009_004581 896 | 2009_004590 897 | 2009_004592 898 | 2009_004594 899 | 2009_004635 900 | 2009_004653 901 | 2009_004687 902 | 2009_004721 903 | 2009_004730 904 | 2009_004732 905 | 2009_004738 906 | 2009_004748 907 | 2009_004789 908 | 2009_004799 909 | 2009_004801 910 | 2009_004848 911 | 2009_004859 912 | 2009_004867 913 | 2009_004882 914 | 2009_004886 915 | 2009_004895 916 | 2009_004942 917 | 2009_004969 918 | 2009_004987 919 | 2009_004993 920 | 2009_004994 921 | 2009_005038 922 | 2009_005078 923 | 2009_005087 924 | 2009_005089 925 | 2009_005137 926 | 2009_005148 927 | 2009_005156 928 | 2009_005158 929 | 2009_005189 930 | 2009_005190 931 | 2009_005217 932 | 2009_005219 933 | 2009_005220 934 | 2009_005231 935 | 2009_005260 936 | 2009_005262 937 | 2009_005302 938 | 2010_000003 939 | 2010_000038 940 | 2010_000065 941 | 2010_000083 942 | 2010_000084 943 | 
2010_000087 944 | 2010_000110 945 | 2010_000159 946 | 2010_000160 947 | 2010_000163 948 | 2010_000174 949 | 2010_000216 950 | 2010_000238 951 | 2010_000241 952 | 2010_000256 953 | 2010_000272 954 | 2010_000284 955 | 2010_000309 956 | 2010_000318 957 | 2010_000330 958 | 2010_000335 959 | 2010_000342 960 | 2010_000372 961 | 2010_000422 962 | 2010_000426 963 | 2010_000427 964 | 2010_000502 965 | 2010_000530 966 | 2010_000552 967 | 2010_000559 968 | 2010_000572 969 | 2010_000573 970 | 2010_000622 971 | 2010_000628 972 | 2010_000639 973 | 2010_000666 974 | 2010_000679 975 | 2010_000682 976 | 2010_000683 977 | 2010_000724 978 | 2010_000738 979 | 2010_000764 980 | 2010_000788 981 | 2010_000814 982 | 2010_000836 983 | 2010_000874 984 | 2010_000904 985 | 2010_000906 986 | 2010_000907 987 | 2010_000918 988 | 2010_000929 989 | 2010_000941 990 | 2010_000952 991 | 2010_000961 992 | 2010_001000 993 | 2010_001010 994 | 2010_001011 995 | 2010_001016 996 | 2010_001017 997 | 2010_001024 998 | 2010_001036 999 | 2010_001061 1000 | 2010_001069 1001 | 2010_001070 1002 | 2010_001079 1003 | 2010_001104 1004 | 2010_001124 1005 | 2010_001149 1006 | 2010_001151 1007 | 2010_001174 1008 | 2010_001206 1009 | 2010_001246 1010 | 2010_001251 1011 | 2010_001256 1012 | 2010_001264 1013 | 2010_001292 1014 | 2010_001313 1015 | 2010_001327 1016 | 2010_001331 1017 | 2010_001351 1018 | 2010_001367 1019 | 2010_001376 1020 | 2010_001403 1021 | 2010_001448 1022 | 2010_001451 1023 | 2010_001522 1024 | 2010_001534 1025 | 2010_001553 1026 | 2010_001557 1027 | 2010_001563 1028 | 2010_001577 1029 | 2010_001579 1030 | 2010_001646 1031 | 2010_001656 1032 | 2010_001692 1033 | 2010_001699 1034 | 2010_001734 1035 | 2010_001752 1036 | 2010_001767 1037 | 2010_001768 1038 | 2010_001773 1039 | 2010_001820 1040 | 2010_001830 1041 | 2010_001851 1042 | 2010_001908 1043 | 2010_001913 1044 | 2010_001951 1045 | 2010_001956 1046 | 2010_001962 1047 | 2010_001966 1048 | 2010_001995 1049 | 2010_002017 1050 | 2010_002025 1051 | 2010_002030 1052 | 2010_002106 1053 | 2010_002137 1054 | 2010_002142 1055 | 2010_002146 1056 | 2010_002147 1057 | 2010_002150 1058 | 2010_002161 1059 | 2010_002200 1060 | 2010_002228 1061 | 2010_002232 1062 | 2010_002251 1063 | 2010_002271 1064 | 2010_002305 1065 | 2010_002310 1066 | 2010_002336 1067 | 2010_002348 1068 | 2010_002361 1069 | 2010_002390 1070 | 2010_002396 1071 | 2010_002422 1072 | 2010_002450 1073 | 2010_002480 1074 | 2010_002512 1075 | 2010_002531 1076 | 2010_002536 1077 | 2010_002538 1078 | 2010_002546 1079 | 2010_002623 1080 | 2010_002682 1081 | 2010_002691 1082 | 2010_002693 1083 | 2010_002701 1084 | 2010_002763 1085 | 2010_002792 1086 | 2010_002868 1087 | 2010_002900 1088 | 2010_002902 1089 | 2010_002921 1090 | 2010_002929 1091 | 2010_002939 1092 | 2010_002988 1093 | 2010_003014 1094 | 2010_003060 1095 | 2010_003123 1096 | 2010_003127 1097 | 2010_003132 1098 | 2010_003168 1099 | 2010_003183 1100 | 2010_003187 1101 | 2010_003207 1102 | 2010_003231 1103 | 2010_003239 1104 | 2010_003275 1105 | 2010_003276 1106 | 2010_003293 1107 | 2010_003302 1108 | 2010_003325 1109 | 2010_003362 1110 | 2010_003365 1111 | 2010_003381 1112 | 2010_003402 1113 | 2010_003409 1114 | 2010_003418 1115 | 2010_003446 1116 | 2010_003453 1117 | 2010_003468 1118 | 2010_003473 1119 | 2010_003495 1120 | 2010_003506 1121 | 2010_003514 1122 | 2010_003531 1123 | 2010_003532 1124 | 2010_003541 1125 | 2010_003547 1126 | 2010_003597 1127 | 2010_003675 1128 | 2010_003708 1129 | 2010_003716 1130 | 2010_003746 1131 | 2010_003758 1132 | 2010_003764 1133 | 
2010_003768 1134 | 2010_003771 1135 | 2010_003772 1136 | 2010_003781 1137 | 2010_003813 1138 | 2010_003820 1139 | 2010_003854 1140 | 2010_003912 1141 | 2010_003915 1142 | 2010_003947 1143 | 2010_003956 1144 | 2010_003971 1145 | 2010_004041 1146 | 2010_004042 1147 | 2010_004056 1148 | 2010_004063 1149 | 2010_004104 1150 | 2010_004120 1151 | 2010_004149 1152 | 2010_004165 1153 | 2010_004208 1154 | 2010_004219 1155 | 2010_004226 1156 | 2010_004314 1157 | 2010_004320 1158 | 2010_004322 1159 | 2010_004337 1160 | 2010_004348 1161 | 2010_004355 1162 | 2010_004369 1163 | 2010_004382 1164 | 2010_004419 1165 | 2010_004432 1166 | 2010_004472 1167 | 2010_004479 1168 | 2010_004519 1169 | 2010_004520 1170 | 2010_004529 1171 | 2010_004543 1172 | 2010_004550 1173 | 2010_004551 1174 | 2010_004556 1175 | 2010_004559 1176 | 2010_004628 1177 | 2010_004635 1178 | 2010_004662 1179 | 2010_004697 1180 | 2010_004757 1181 | 2010_004763 1182 | 2010_004772 1183 | 2010_004783 1184 | 2010_004789 1185 | 2010_004795 1186 | 2010_004815 1187 | 2010_004825 1188 | 2010_004828 1189 | 2010_004856 1190 | 2010_004857 1191 | 2010_004861 1192 | 2010_004941 1193 | 2010_004946 1194 | 2010_004951 1195 | 2010_004980 1196 | 2010_004994 1197 | 2010_005013 1198 | 2010_005021 1199 | 2010_005046 1200 | 2010_005063 1201 | 2010_005108 1202 | 2010_005118 1203 | 2010_005159 1204 | 2010_005160 1205 | 2010_005166 1206 | 2010_005174 1207 | 2010_005180 1208 | 2010_005187 1209 | 2010_005206 1210 | 2010_005245 1211 | 2010_005252 1212 | 2010_005284 1213 | 2010_005305 1214 | 2010_005344 1215 | 2010_005353 1216 | 2010_005366 1217 | 2010_005401 1218 | 2010_005421 1219 | 2010_005428 1220 | 2010_005432 1221 | 2010_005433 1222 | 2010_005496 1223 | 2010_005501 1224 | 2010_005508 1225 | 2010_005531 1226 | 2010_005534 1227 | 2010_005575 1228 | 2010_005582 1229 | 2010_005606 1230 | 2010_005626 1231 | 2010_005644 1232 | 2010_005664 1233 | 2010_005705 1234 | 2010_005706 1235 | 2010_005709 1236 | 2010_005718 1237 | 2010_005719 1238 | 2010_005727 1239 | 2010_005762 1240 | 2010_005788 1241 | 2010_005860 1242 | 2010_005871 1243 | 2010_005877 1244 | 2010_005888 1245 | 2010_005899 1246 | 2010_005922 1247 | 2010_005991 1248 | 2010_005992 1249 | 2010_006026 1250 | 2010_006034 1251 | 2010_006054 1252 | 2010_006070 1253 | 2011_000045 1254 | 2011_000051 1255 | 2011_000054 1256 | 2011_000066 1257 | 2011_000070 1258 | 2011_000112 1259 | 2011_000173 1260 | 2011_000178 1261 | 2011_000185 1262 | 2011_000226 1263 | 2011_000234 1264 | 2011_000238 1265 | 2011_000239 1266 | 2011_000248 1267 | 2011_000283 1268 | 2011_000291 1269 | 2011_000310 1270 | 2011_000312 1271 | 2011_000338 1272 | 2011_000396 1273 | 2011_000412 1274 | 2011_000419 1275 | 2011_000435 1276 | 2011_000436 1277 | 2011_000438 1278 | 2011_000455 1279 | 2011_000456 1280 | 2011_000479 1281 | 2011_000481 1282 | 2011_000482 1283 | 2011_000503 1284 | 2011_000512 1285 | 2011_000521 1286 | 2011_000526 1287 | 2011_000536 1288 | 2011_000548 1289 | 2011_000566 1290 | 2011_000585 1291 | 2011_000598 1292 | 2011_000607 1293 | 2011_000618 1294 | 2011_000638 1295 | 2011_000658 1296 | 2011_000661 1297 | 2011_000669 1298 | 2011_000747 1299 | 2011_000780 1300 | 2011_000789 1301 | 2011_000807 1302 | 2011_000809 1303 | 2011_000813 1304 | 2011_000830 1305 | 2011_000843 1306 | 2011_000874 1307 | 2011_000888 1308 | 2011_000900 1309 | 2011_000912 1310 | 2011_000953 1311 | 2011_000969 1312 | 2011_001005 1313 | 2011_001014 1314 | 2011_001020 1315 | 2011_001047 1316 | 2011_001060 1317 | 2011_001064 1318 | 2011_001069 1319 | 2011_001071 1320 | 
2011_001082 1321 | 2011_001110 1322 | 2011_001114 1323 | 2011_001159 1324 | 2011_001161 1325 | 2011_001190 1326 | 2011_001232 1327 | 2011_001263 1328 | 2011_001276 1329 | 2011_001281 1330 | 2011_001287 1331 | 2011_001292 1332 | 2011_001313 1333 | 2011_001341 1334 | 2011_001346 1335 | 2011_001350 1336 | 2011_001407 1337 | 2011_001416 1338 | 2011_001421 1339 | 2011_001434 1340 | 2011_001447 1341 | 2011_001489 1342 | 2011_001529 1343 | 2011_001530 1344 | 2011_001534 1345 | 2011_001546 1346 | 2011_001567 1347 | 2011_001589 1348 | 2011_001597 1349 | 2011_001601 1350 | 2011_001607 1351 | 2011_001613 1352 | 2011_001614 1353 | 2011_001619 1354 | 2011_001624 1355 | 2011_001642 1356 | 2011_001665 1357 | 2011_001669 1358 | 2011_001674 1359 | 2011_001708 1360 | 2011_001713 1361 | 2011_001714 1362 | 2011_001722 1363 | 2011_001726 1364 | 2011_001745 1365 | 2011_001748 1366 | 2011_001775 1367 | 2011_001782 1368 | 2011_001793 1369 | 2011_001794 1370 | 2011_001812 1371 | 2011_001862 1372 | 2011_001863 1373 | 2011_001868 1374 | 2011_001880 1375 | 2011_001910 1376 | 2011_001984 1377 | 2011_001988 1378 | 2011_002002 1379 | 2011_002040 1380 | 2011_002041 1381 | 2011_002064 1382 | 2011_002075 1383 | 2011_002098 1384 | 2011_002110 1385 | 2011_002121 1386 | 2011_002124 1387 | 2011_002150 1388 | 2011_002156 1389 | 2011_002178 1390 | 2011_002200 1391 | 2011_002223 1392 | 2011_002244 1393 | 2011_002247 1394 | 2011_002279 1395 | 2011_002295 1396 | 2011_002298 1397 | 2011_002308 1398 | 2011_002317 1399 | 2011_002322 1400 | 2011_002327 1401 | 2011_002343 1402 | 2011_002358 1403 | 2011_002371 1404 | 2011_002379 1405 | 2011_002391 1406 | 2011_002498 1407 | 2011_002509 1408 | 2011_002515 1409 | 2011_002532 1410 | 2011_002535 1411 | 2011_002548 1412 | 2011_002575 1413 | 2011_002578 1414 | 2011_002589 1415 | 2011_002592 1416 | 2011_002623 1417 | 2011_002641 1418 | 2011_002644 1419 | 2011_002662 1420 | 2011_002675 1421 | 2011_002685 1422 | 2011_002713 1423 | 2011_002730 1424 | 2011_002754 1425 | 2011_002812 1426 | 2011_002863 1427 | 2011_002879 1428 | 2011_002885 1429 | 2011_002929 1430 | 2011_002951 1431 | 2011_002975 1432 | 2011_002993 1433 | 2011_002997 1434 | 2011_003003 1435 | 2011_003011 1436 | 2011_003019 1437 | 2011_003030 1438 | 2011_003055 1439 | 2011_003085 1440 | 2011_003103 1441 | 2011_003114 1442 | 2011_003145 1443 | 2011_003146 1444 | 2011_003182 1445 | 2011_003197 1446 | 2011_003205 1447 | 2011_003240 1448 | 2011_003256 1449 | 2011_003271 1450 | --------------------------------------------------------------------------------
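
The split files above are plain-text lists with one PASCAL VOC image ID per line (e.g. `2007_000033`), each naming an image `JPEGImages/<id>.jpg` under the devkit root; `voc12/dataloader.py` consumes these lists (together with `voc12/cls_labels.npy`) to build the training and evaluation sets. As a minimal sketch of the file format only (an assumed helper, not the repository's own loader), such a list can be read like this:

```python
# Minimal sketch (assumed helper, not the repository's own loader):
# each split file, e.g. voc12/val.txt, holds one VOC image ID per line,
# such as "2007_000033", which names JPEGImages/2007_000033.jpg.
import os

def load_img_name_list(list_path):
    """Return the non-empty image IDs from a VOC split file."""
    with open(list_path) as f:
        return [line.strip() for line in f if line.strip()]

if __name__ == "__main__":
    ids = load_img_name_list("voc12/val.txt")
    # The VOC 2012 val split contains 1449 images.
    print(len(ids), ids[0])  # -> 1449 2007_000033
    # Resolve an ID to its image path under the devkit root ('voc12_root').
    print(os.path.join("VOCdevkit/VOC2012/JPEGImages", ids[0] + ".jpg"))
```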