├── README.md
├── data
│   └── README.md
├── dataset
│   ├── __init__.py
│   ├── davis_test_dataset.py
│   ├── range_transform.py
│   ├── reseed.py
│   ├── static_dataset.py
│   ├── tps.py
│   ├── util.py
│   ├── vos_dataset.py
│   └── yv_test_dataset.py
├── docs
│   ├── .DS_Store
│   ├── framework.png
│   ├── long_video.png
│   └── visualization.png
├── eval_davis.py
├── eval_davis_2016.py
├── eval_youtube.py
├── inference_core.py
├── inference_core_yv.py
├── model
│   ├── .DS_Store
│   ├── __init__.py
│   ├── aggregate.py
│   ├── eval_network.py
│   ├── losses.py
│   ├── mod_resnet.py
│   ├── model.py
│   ├── modules.py
│   └── network.py
├── requirements.txt
├── scripts
│   ├── __init__.py
│   ├── resize_length.py
│   └── resize_youtube.py
├── train.py
└── util
    ├── __init__.py
    ├── davis_subset.txt
    ├── hyper_para.py
    ├── image_saver.py
    ├── load_subset.py
    ├── log_integrator.py
    ├── logger.py
    ├── tensor_util.py
    └── yv_subset.txt
/README.md:
--------------------------------------------------------------------------------
1 | # Learning Quality-aware Dynamic Memory for Video Object Segmentation
2 |
3 | *ECCV 2022* | [Paper](https://arxiv.org/pdf/2207.07922.pdf)
4 |
5 | ## Abstract
6 | Previous memory-based methods mainly focus on better matching between the current frame and the memory frames without explicitly paying attention to the quality of the memory. Therefore, frames with poor segmentation masks are prone to be memorized, which leads to a segmentation mask error accumulation problem and further affects the segmentation performance. In addition, the linear increase of memory frames with the growth of frame number also limits the ability of the models to handle long videos. To this end, we propose a Quality-aware Dynamic Memory Network (QDMN) to evaluate the segmentation quality of each frame, allowing the memory bank to selectively store accurately segmented frames to prevent the error accumulation problem. Then, we combine the segmentation quality with temporal consistency to dynamically update the memory bank so that the model can handle videos of arbitrary length.
7 |
8 | ## Framework
9 | ![framework](docs/framework.png)
10 |
11 | ## Visualization Results
12 | ![visualization](docs/visualization.png)
13 |
14 | ## Long Video Comparison
15 | (a) shows the results of retaining the most recent memory frames, and (b) shows the results of applying our updating strategy.
16 | ![long_video](docs/long_video.png)
17 |
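The updating strategy keeps the memory bank compact: a frame is memorized only every `mem_freq` frames and only if its predicted quality (relative to the first frame) is high enough, and when the bank is full the stored frame with the lowest combined quality and temporal consistency is evicted. A simplified sketch of the eviction rule implemented in `inference_core.py` (tensor bookkeeping omitted):

```python
import numpy as np

def update_memory(memory, sa, index, score, memory_thr):
    # memory: indices of stored frames; sa[m]: quality score of frame m
    # index, score: the incoming frame and its predicted quality
    if len(memory) > memory_thr:
        # rank each stored frame by quality plus temporal consistency exp(-|index - m|)
        rank = {m: np.exp(-abs(index - m)) + sa[m] for m in memory}
        evict = min(rank, key=rank.get)  # drop the least useful frame
        memory.remove(evict)
        del sa[evict]
    memory.append(index)
    sa[index] = score
```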
18 |
19 | ## Results (S012)
20 | | Dataset | Split | J&F | J | F |
21 | | --- | :---: | :--:|:--:|:---:|
22 | | DAVIS 2016 | val | 92.0 | 90.7 | 93.2 |
23 | | DAVIS 2017 | val | 85.6 | 82.5 | 88.6 |
24 | | DAVIS 2017 | test-dev | 81.9 | 78.1 | 85.4 |
25 |
26 | | Dataset | Split | Overall Score | J-Seen | F-Seen | J-Unseen | F-Unseen |
27 | | --- | --- | :--:|:--:|:---:|:---:|:---:|
28 | | YouTubeVOS 18 | validation | 83.8 | 82.7 | 87.5 | 78.4 | 86.4 |
29 |
30 |
31 | ## Pretrained Model
32 | Please download the pretrained s012 model [here](https://drive.google.com/file/d/1fdiGFGol1ecVowPe6gfRHsRs9zCThFXB/view?usp=sharing).
33 |
34 | ## Requirements
35 |
36 | The following packages are used in this project.
37 | - PyTorch 1.8.1 (or higher)
38 | - torchvision 0.9.1 (or higher)
39 | - opencv
40 | - pillow
41 | - progressbar2
42 | - thinspline for training (https://github.com/cheind/py-thin-plate-spline)
43 | - gitpython
44 | - gdown
45 |
46 | To install PyTorch and torchvision, please refer to the official [guideline](https://pytorch.org/).
47 |
48 | The others can be installed with `pip install -r requirements.txt`.
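If pip cannot resolve the thin-plate-spline package, it can be installed from source (assuming the repository is pip-installable, as in STCN):

`pip install git+https://github.com/cheind/py-thin-plate-spline`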
49 |
50 | ## Data Preparation
51 | Please refer to [MiVOS](https://github.com/hkchengrex/Mask-Propagation) to prepare the datasets and put them all under `data/`.
52 |
53 | ## Code Structure
54 | ```
55 | ├── data/: train and test datasets
56 | │   ├── static
57 | │   ├── DAVIS
58 | │   ├── YouTube
59 | │   └── BL30K
60 | ├── dataset/: transforms and dataloaders for the train and test datasets
61 | ├── model/: network code and the training engine (model.py)
62 | ├── saves/: checkpoints obtained from training
63 | ├── scripts/: helper scripts for processing the datasets
64 | ├── util/: configuration (hyper_para.py) and utilities
65 | ├── train.py
66 | ├── inference_core.py: test engine for DAVIS
67 | ├── inference_core_yv.py: test engine for YouTubeVOS
68 | ├── eval_*.py
69 | ├── requirements.txt
70 | ```
71 |
72 | **If the prediction score is 0 during the pre-training stage, please change the ReLU activation of the FC layer in QAM to sigmoid; this solves the problem. The corresponding code is on lines 174 and 175 of model/modules.py.**
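A hypothetical illustration of the change (the actual layer names on those lines may differ):

```python
# Before: a ReLU on the score head can output exactly 0 and stall training
# score = F.relu(self.score_fc(x))
# After: sigmoid bounds the quality score to (0, 1) and keeps gradients flowing
# score = torch.sigmoid(self.score_fc(x))
```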
73 |
74 | ## Training
75 | ### For pretraining:
76 | To train on the static image datasets, use the following command:
77 |
78 | `CUDA_VISIBLE_DEVICES=[GPU_ids] OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port [cccc] --nproc_per_node=GPU_num train.py --id [save_name] --stage 0`
79 |
80 | For example, if we use 2 GPUs for training and 's0-QDMN' as the checkpoint name, the command is:
81 |
82 | `CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 12345 --nproc_per_node=2 train.py --id s0-QDMN --stage 0`
83 |
84 | ### For main training:
85 | To train on DAVIS and YouTube, use this command:
86 |
87 | `CUDA_VISIBLE_DEVICES=[GPU_ids] OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port [cccc] --nproc_per_node=GPU_num train.py --id [save_name] --stage 2 --load_network path_to_pretrained_ckpt`
88 |
89 | Similarly, if using 2 GPUs, the command is:
90 |
91 | `CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 12345 --nproc_per_node=2 train.py --id s03-QDMN --stage 2 --load_network saves/s0-QDMN/**.pth`
92 |
93 | ### Resume training
94 | Besides, if you want to resume interrupted training, you can run the command with `--load_model` pointing to the `*_checkpoint.pth` file, for example:
95 |
96 | `CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 12345 --nproc_per_node=2 train.py --id s0-QDMN --stage 0 --load_model saves/s0-QDMN/s0-QDMN_checkpoint.pth`
97 |
98 | ## Inference
99 | Run the corresponding script to perform inference on a dataset.
100 | - `eval_davis_2016.py`: DAVIS 2016 val set.
101 | - `eval_davis.py`: DAVIS 2017 val and test-dev sets (controlled by `--split`).
102 | - `eval_youtube.py`: YouTubeVOS 2018/19 val and test sets.
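For example (`--model` and the dataset paths follow the script defaults; the output directory is your choice):

`python eval_davis.py --model saves/QDMN.pth --davis data/DAVIS/2017 --output output/d17_val --split val`

`python eval_youtube.py --model saves/QDMN.pth --yv data/YouTube --output output/yv_valid --split valid`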
103 |
104 |
105 | ## Evaluation
106 | For the evaluation metrics on the DAVIS 2016/2017 val sets, we refer to the repository [DAVIS_val](https://github.com/workforai/DAVIS-evaluation).
107 | For the DAVIS 2017 test-dev set, you can get the metric results by submitting masks to the CodaLab website [DAVIS_test](https://competitions.codalab.org/competitions/20516).
108 | For the YouTubeVOS 2019 val set, please submit your results to [YouTube19](https://competitions.codalab.org/competitions/20127).
109 | For the YouTubeVOS 2018 val set, please submit to [YouTube18](https://competitions.codalab.org/competitions/19544).
110 |
111 |
112 | ## Citation
113 | If you find this work useful for your research, please cite:
114 | ```
115 | @inproceedings{liu2022learning,
116 | title={Learning quality-aware dynamic memory for video object segmentation},
117 | author={Liu, Yong and Yu, Ran and Yin, Fei and Zhao, Xinyuan and Zhao, Wei and Xia, Weihao and Yang, Yujiu},
118 | booktitle={ECCV},
119 | pages={468--486},
120 | year={2022}
121 | }
122 | ```
123 |
124 |
125 | ## Acknowledgement
126 | Code in this repository is built upon several public repositories.
127 | Thanks to
128 | [STCN](https://github.com/hkchengrex/STCN),
129 | [MiVOS](https://github.com/hkchengrex/MiVOS),
130 | [Mask Scoring RCNN](https://github.com/zjhuang22/maskscoring_rcnn)
131 | for sharing their code.
132 |
--------------------------------------------------------------------------------
/data/README.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yongliu20/QDMN/301f7bff500dc5d642407f2de11248b97e96178f/data/README.md
--------------------------------------------------------------------------------
/dataset/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yongliu20/QDMN/301f7bff500dc5d642407f2de11248b97e96178f/dataset/__init__.py
--------------------------------------------------------------------------------
/dataset/davis_test_dataset.py:
--------------------------------------------------------------------------------
1 | import os
2 | from os import path
3 | import numpy as np
4 | from PIL import Image
5 |
6 | import torch
7 | import torch.nn.functional as F
8 | from torchvision import transforms
9 | from torch.utils.data.dataset import Dataset
10 | from dataset.range_transform import im_normalization
11 | from dataset.util import all_to_onehot
12 |
13 |
14 | class DAVISTestDataset(Dataset):
15 | def __init__(self, root, imset='2017/val.txt', resolution='480p', single_object=False, target_name=None):
16 | self.root = root
17 | self.mask_dir = path.join(root, 'Annotations', resolution)
18 | self.mask480_dir = path.join(root, 'Annotations', '480p')
19 | self.image_dir = path.join(root, 'JPEGImages', resolution)
20 | self.resolution = resolution
21 | _imset_dir = path.join(root, 'ImageSets')
22 | _imset_f = path.join(_imset_dir, imset)
23 |
24 | self.videos = []
25 | self.num_frames = {}
26 | self.num_objects = {}
27 | self.shape = {}
28 | self.size_480p = {}
29 | with open(path.join(_imset_f), "r") as lines:
30 | for line in lines:
31 | _video = line.rstrip('\n')
32 | if target_name is not None and target_name != _video:
33 | continue
34 | self.videos.append(_video)
35 | self.num_frames[_video] = len(os.listdir(path.join(self.image_dir, _video)))
36 | _mask = np.array(Image.open(path.join(self.mask_dir, _video, '00000.png')).convert("P"))
37 | self.num_objects[_video] = np.max(_mask)
38 | self.shape[_video] = np.shape(_mask)
39 | _mask480 = np.array(Image.open(path.join(self.mask480_dir, _video, '00000.png')).convert("P"))
40 | self.size_480p[_video] = np.shape(_mask480)
41 |
42 | self.single_object = single_object
43 |
44 | if resolution == '480p':
45 | self.im_transform = transforms.Compose([
46 | transforms.ToTensor(),
47 | im_normalization,
48 | ])
49 | else:
50 | self.im_transform = transforms.Compose([
51 | transforms.ToTensor(),
52 | im_normalization,
53 | transforms.Resize(600, interpolation=Image.BICUBIC),
54 | ])
55 | self.mask_transform = transforms.Compose([
56 | transforms.Resize(600, interpolation=Image.NEAREST),
57 | ])
58 |
59 | def __len__(self):
60 | return len(self.videos)
61 |
62 | def __getitem__(self, index):
63 | video = self.videos[index]
64 | info = {}
65 | info['name'] = video
66 | info['num_frames'] = self.num_frames[video]
67 | info['size_480p'] = self.size_480p[video]
68 |
69 | images = []
70 | masks = []
71 | for f in range(self.num_frames[video]):
72 | img_file = path.join(self.image_dir, video, '{:05d}.jpg'.format(f))
73 | images.append(self.im_transform(Image.open(img_file).convert('RGB')))
74 |
75 | mask_file = path.join(self.mask_dir, video, '{:05d}.png'.format(f))
76 | if path.exists(mask_file):
77 | masks.append(np.array(Image.open(mask_file).convert('P'), dtype=np.uint8))
78 | else:
79 | # Test-set maybe?
80 | masks.append(np.zeros_like(masks[0]))
81 |
82 | images = torch.stack(images, 0)
83 | masks = np.stack(masks, 0)
84 |
85 | if self.single_object:
86 | labels = [1]
87 | masks = (masks > 0.5).astype(np.uint8)
88 | masks = torch.from_numpy(all_to_onehot(masks, labels)).float()
89 | else:
90 | labels = np.unique(masks[0])
91 | labels = labels[labels!=0]
92 | masks = torch.from_numpy(all_to_onehot(masks, labels)).float()
93 |
94 | if self.resolution != '480p':
95 | masks = self.mask_transform(masks)
96 | masks = masks.unsqueeze(2)
97 |
98 | info['labels'] = labels
99 |
100 | data = {
101 | 'rgb': images,
102 | 'gt': masks,
103 | 'info': info,
104 | }
105 |
106 | return data
107 |
108 |
--------------------------------------------------------------------------------
/dataset/range_transform.py:
--------------------------------------------------------------------------------
1 | import torchvision.transforms as transforms
2 |
3 | im_mean = (124, 116, 104)
4 |
5 | im_normalization = transforms.Normalize(
6 | mean=[0.485, 0.456, 0.406],
7 | std=[0.229, 0.224, 0.225]
8 | )
9 |
10 | inv_im_trans = transforms.Normalize(
11 | mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
12 | std=[1/0.229, 1/0.224, 1/0.225])
13 |
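# Note: inv_im_trans undoes im_normalization. Normalize computes (x - mean) / std,
# so composing std' = 1/std with mean' = -mean/std recovers the original values.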
--------------------------------------------------------------------------------
/dataset/reseed.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import random
3 |
4 | def reseed(seed):
5 | random.seed(seed)
6 | torch.manual_seed(seed)
--------------------------------------------------------------------------------
/dataset/static_dataset.py:
--------------------------------------------------------------------------------
1 | import os
2 | from os import path
3 |
4 | import torch
5 | from torch.utils.data.dataset import Dataset
6 | from torchvision import transforms
7 | from PIL import Image
8 | import numpy as np
9 |
10 | from dataset.range_transform import im_normalization, im_mean
11 | from dataset.tps import random_tps_warp
12 | from dataset.reseed import reseed
13 | import cv2
14 |
15 |
16 | class StaticTransformDataset(Dataset):
17 | """
18 | Generate pseudo VOS data by applying random transforms on static images.
19 | Single-object only.
20 |
21 | Method 0 - FSS style (class/1.jpg class/1.png)
22 | Method 1 - Others style (XXX.jpg XXX.png)
23 | """
24 | def __init__(self, root, method=0):
25 | self.root = root
26 | self.method = method
27 |
28 | if method == 0:
29 | # Get images
30 | self.im_list = []
31 | classes = os.listdir(self.root)
32 | for c in classes:
33 | imgs = os.listdir(path.join(root, c))
34 | jpg_list = [im for im in imgs if 'jpg' in im[-3:].lower()]
35 |
36 | joint_list = [path.join(root, c, im) for im in jpg_list]
37 | self.im_list.extend(joint_list)
38 |
39 | elif method == 1:
40 | self.im_list = [path.join(self.root, im) for im in os.listdir(self.root) if '.jpg' in im]
41 |
42 | print('%d images found in %s' % (len(self.im_list), root))
43 |
44 | # These set of transform is the same for im/gt pairs, but different among the 3 sampled frames
45 | self.pair_im_lone_transform = transforms.Compose([
46 | transforms.ColorJitter(0.1, 0.05, 0.05, 0), # No hue change here as that's not realistic
47 | ])
48 |
49 | self.pair_im_dual_transform = transforms.Compose([
50 | transforms.RandomAffine(degrees=20, scale=(0.9,1.1), shear=10, resample=Image.BICUBIC, fillcolor=im_mean),
51 | transforms.Resize(384, Image.BICUBIC),
52 | transforms.RandomCrop((384, 384), pad_if_needed=True, fill=im_mean),
53 | ])
54 |
55 | self.pair_gt_dual_transform = transforms.Compose([
56 | transforms.RandomAffine(degrees=20, scale=(0.9,1.1), shear=10, resample=Image.BICUBIC, fillcolor=0),
57 | transforms.Resize(384, Image.NEAREST),
58 | transforms.RandomCrop((384, 384), pad_if_needed=True, fill=0),
59 | ])
60 |
61 | # These transform are the same for all pairs in the sampled sequence
62 | self.all_im_lone_transform = transforms.Compose([
63 | transforms.ColorJitter(0.1, 0.05, 0.05, 0.05),
64 | transforms.RandomGrayscale(0.05),
65 | ])
66 |
67 | self.all_im_dual_transform = transforms.Compose([
68 | transforms.RandomAffine(degrees=0, scale=(0.8, 1.5), fillcolor=im_mean),
69 | transforms.RandomHorizontalFlip(),
70 | ])
71 |
72 | self.all_gt_dual_transform = transforms.Compose([
73 | transforms.RandomAffine(degrees=0, scale=(0.8, 1.5), fillcolor=0),
74 | transforms.RandomHorizontalFlip(),
75 | ])
76 |
77 | # Final transform without randomness
78 | self.final_im_transform = transforms.Compose([
79 | transforms.ToTensor(),
80 | im_normalization,
81 | ])
82 |
83 | self.final_gt_transform = transforms.Compose([
84 | transforms.ToTensor(),
85 | ])
86 |
87 | def __getitem__(self, idx):
88 | im = Image.open(self.im_list[idx]).convert('RGB')
89 |
90 | if self.method == 0:
91 | gt = Image.open(self.im_list[idx][:-3]+'png').convert('L')
92 | else:
93 | gt = Image.open(self.im_list[idx].replace('.jpg','.png')).convert('L')
94 |
95 | sequence_seed = np.random.randint(2147483647)
96 |
97 | images = []
98 | masks = []
99 | for _ in range(3):
100 | reseed(sequence_seed)
101 | this_im = self.all_im_dual_transform(im)
102 | this_im = self.all_im_lone_transform(this_im)
103 | reseed(sequence_seed)
104 | this_gt = self.all_gt_dual_transform(gt)
105 |
106 | pairwise_seed = np.random.randint(2147483647)
107 | reseed(pairwise_seed)
108 | this_im = self.pair_im_dual_transform(this_im)
109 | this_im = self.pair_im_lone_transform(this_im)
110 | reseed(pairwise_seed)
111 | this_gt = self.pair_gt_dual_transform(this_gt)
112 |
113 | # Use TPS only some of the times
114 | # Not because TPS is bad -- just that it is too slow and I need to speed up data loading
115 | if np.random.rand() < 0.33:
116 | this_im, this_gt = random_tps_warp(this_im, this_gt, scale=0.02)
117 |
118 | this_im = self.final_im_transform(this_im)
119 | this_gt = self.final_gt_transform(this_gt)
120 |
121 | images.append(this_im)
122 | masks.append(this_gt)
123 |
124 | images = torch.stack(images, 0)
125 | masks = torch.stack(masks, 0)
126 |
127 | info = {}
128 | info['name'] = self.im_list[idx]
129 |
130 |         cls_gt = np.zeros((3, 384, 384), dtype=np.int64)  # np.int is deprecated/removed in newer NumPy
131 | cls_gt[masks[:,0] > 0.5] = 1
132 |
133 | data = {
134 | 'rgb': images,
135 | 'gt': masks,
136 | 'cls_gt': cls_gt,
137 | 'info': info
138 | }
139 |
140 | return data
141 |
142 |
143 | def __len__(self):
144 | return len(self.im_list)
145 |
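# Usage sketch (illustrative; 'data/static' is an assumed path):
if __name__ == '__main__':
    from torch.utils.data import DataLoader
    dataset = StaticTransformDataset('data/static', method=0)
    loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)
    batch = next(iter(loader))
    print(batch['rgb'].shape)     # (4, 3, 3, 384, 384): batch, frames, C, H, W
    print(batch['cls_gt'].shape)  # (4, 3, 384, 384)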
--------------------------------------------------------------------------------
/dataset/tps.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from PIL import Image
3 | import cv2
4 | import thinplate as tps
5 |
6 | cv2.setNumThreads(0)
7 |
8 | def pick_random_points(h, w, n_samples):
9 | y_idx = np.random.choice(np.arange(h), size=n_samples, replace=False)
10 | x_idx = np.random.choice(np.arange(w), size=n_samples, replace=False)
11 | return y_idx/h, x_idx/w
12 |
13 |
14 | def warp_dual_cv(img, mask, c_src, c_dst):
15 | dshape = img.shape
16 | theta = tps.tps_theta_from_points(c_src, c_dst, reduced=True)
17 | grid = tps.tps_grid(theta, c_dst, dshape)
18 | mapx, mapy = tps.tps_grid_to_remap(grid, img.shape)
19 | return cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR), cv2.remap(mask, mapx, mapy, cv2.INTER_NEAREST)
20 |
21 |
22 | def random_tps_warp(img, mask, scale, n_ctrl_pts=12):
23 | """
24 | Apply a random TPS warp of the input image and mask
25 | Uses randomness from numpy
26 | """
27 | img = np.asarray(img)
28 | mask = np.asarray(mask)
29 |
30 | h, w = mask.shape
31 | points = pick_random_points(h, w, n_ctrl_pts)
32 | c_src = np.stack(points, 1)
33 | c_dst = c_src + np.random.normal(scale=scale, size=c_src.shape)
34 | warp_im, warp_gt = warp_dual_cv(img, mask, c_src, c_dst)
35 |
36 | return Image.fromarray(warp_im), Image.fromarray(warp_gt)
37 |
38 |
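# Usage sketch: static_dataset.py calls this with PIL inputs and scale=0.02, e.g.
# warped_im, warped_gt = random_tps_warp(pil_image, pil_mask, scale=0.02)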
--------------------------------------------------------------------------------
/dataset/util.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 |
4 | def all_to_onehot(masks, labels):
5 | Ms = np.zeros((len(labels), masks.shape[0], masks.shape[1], masks.shape[2]), dtype=np.uint8)
6 | for k, l in enumerate(labels):
7 | Ms[k] = (masks == l).astype(np.uint8)
8 | return Ms
9 |
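# Worked example (illustrative): for masks of shape (T, H, W) containing object
# ids {0, 2, 5} and labels = [2, 5], the result has shape (2, T, H, W) with
# result[0] == (masks == 2) and result[1] == (masks == 5).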
--------------------------------------------------------------------------------
/dataset/vos_dataset.py:
--------------------------------------------------------------------------------
1 | import os
2 | from os import path
3 |
4 | import torch
5 | from torch.utils.data.dataset import Dataset
6 | from torchvision import transforms
7 | from PIL import Image
8 | import numpy as np
9 |
10 | from dataset.range_transform import im_normalization, im_mean
11 | from dataset.reseed import reseed
12 |
13 |
14 | class VOSDataset(Dataset):
15 | """
16 | Works for DAVIS/YouTubeVOS/BL30K training
17 | For each sequence:
18 | - Pick three frames
19 | - Pick two objects
20 | - Apply some random transforms that are the same for all frames
21 | - Apply random transform to each of the frame
22 | - The distance between frames is controlled
23 | """
24 | def __init__(self, im_root, gt_root, max_jump, is_bl, subset=None):
25 | self.im_root = im_root
26 | self.gt_root = gt_root
27 | self.max_jump = max_jump
28 | self.is_bl = is_bl
29 |
30 | self.videos = []
31 | self.frames = {}
32 |
33 | vid_list = sorted(os.listdir(self.im_root))
34 | # Pre-filtering
35 | for vid in vid_list:
36 | if subset is not None:
37 | if vid not in subset:
38 | continue
39 | frames = sorted(os.listdir(os.path.join(self.im_root, vid)))
40 | if len(frames) < 3:
41 | continue
42 | self.frames[vid] = frames
43 | self.videos.append(vid)
44 |
45 | print('%d out of %d videos accepted in %s.' % (len(self.videos), len(vid_list), im_root))
46 |
47 | # These set of transform is the same for im/gt pairs, but different among the 3 sampled frames
48 | self.pair_im_lone_transform = transforms.Compose([
49 | transforms.ColorJitter(0.01, 0.01, 0.01, 0),
50 | ])
51 |
52 | self.pair_im_dual_transform = transforms.Compose([
53 | transforms.RandomAffine(degrees=15, shear=10, resample=Image.BICUBIC, fillcolor=im_mean),
54 | ])
55 |
56 | self.pair_gt_dual_transform = transforms.Compose([
57 | transforms.RandomAffine(degrees=15, shear=10, resample=Image.NEAREST, fillcolor=0),
58 | ])
59 |
60 | # These transform are the same for all pairs in the sampled sequence
61 | self.all_im_lone_transform = transforms.Compose([
62 | transforms.ColorJitter(0.1, 0.03, 0.03, 0),
63 | transforms.RandomGrayscale(0.05),
64 | ])
65 |
66 | if self.is_bl:
67 | # Use a different cropping scheme for the blender dataset because the image size is different
68 | self.all_im_dual_transform = transforms.Compose([
69 | transforms.RandomHorizontalFlip(),
70 | transforms.RandomResizedCrop((384, 384), scale=(0.25, 1.00), interpolation=Image.BICUBIC)
71 | ])
72 |
73 | self.all_gt_dual_transform = transforms.Compose([
74 | transforms.RandomHorizontalFlip(),
75 | transforms.RandomResizedCrop((384, 384), scale=(0.25, 1.00), interpolation=Image.NEAREST)
76 | ])
77 | else:
78 | self.all_im_dual_transform = transforms.Compose([
79 | transforms.RandomHorizontalFlip(),
80 | transforms.RandomResizedCrop((384, 384), scale=(0.36,1.00), interpolation=Image.BICUBIC)
81 | ])
82 |
83 | self.all_gt_dual_transform = transforms.Compose([
84 | transforms.RandomHorizontalFlip(),
85 | transforms.RandomResizedCrop((384, 384), scale=(0.36,1.00), interpolation=Image.NEAREST)
86 | ])
87 |
88 | # Final transform without randomness
89 | self.final_im_transform = transforms.Compose([
90 | transforms.ToTensor(),
91 | im_normalization,
92 | ])
93 |
94 | def __getitem__(self, idx):
95 | video = self.videos[idx]
96 | info = {}
97 | info['name'] = video
98 |
99 | vid_im_path = path.join(self.im_root, video)
100 | vid_gt_path = path.join(self.gt_root, video)
101 | frames = self.frames[video]
102 |
103 | trials = 0
104 | while trials < 5:
105 | info['frames'] = [] # Appended with actual frames
106 |
107 | # Don't want to bias towards beginning/end
108 | this_max_jump = min(len(frames), self.max_jump)
109 | start_idx = np.random.randint(len(frames)-this_max_jump+1)
110 | f1_idx = start_idx + np.random.randint(1, this_max_jump+1)
111 | f1_idx = min(f1_idx, len(frames)-this_max_jump, len(frames)-1)
112 |
113 | f2_idx = f1_idx + np.random.randint(1, this_max_jump+1)
114 | f2_idx = min(f2_idx, len(frames)-this_max_jump//2, len(frames)-1)
115 |
116 | frames_idx = [start_idx, f1_idx, f2_idx]
117 | if np.random.rand() < 0.5:
118 | # Reverse time
119 | frames_idx = frames_idx[::-1]
120 |
121 | sequence_seed = np.random.randint(2147483647)
122 | images = []
123 | masks = []
124 | target_object = None
125 | for f_idx in frames_idx:
126 | jpg_name = frames[f_idx][:-4] + '.jpg'
127 | png_name = frames[f_idx][:-4] + '.png'
128 | info['frames'].append(jpg_name)
129 |
130 | reseed(sequence_seed)
131 | this_im = Image.open(path.join(vid_im_path, jpg_name)).convert('RGB')
132 | this_im = self.all_im_dual_transform(this_im)
133 | this_im = self.all_im_lone_transform(this_im)
134 | reseed(sequence_seed)
135 | this_gt = Image.open(path.join(vid_gt_path, png_name)).convert('P')
136 | this_gt = self.all_gt_dual_transform(this_gt)
137 |
138 | pairwise_seed = np.random.randint(2147483647)
139 | reseed(pairwise_seed)
140 | this_im = self.pair_im_dual_transform(this_im)
141 | this_im = self.pair_im_lone_transform(this_im)
142 | reseed(pairwise_seed)
143 | this_gt = self.pair_gt_dual_transform(this_gt)
144 |
145 | this_im = self.final_im_transform(this_im)
146 | this_gt = np.array(this_gt)
147 |
148 | images.append(this_im)
149 | masks.append(this_gt)
150 |
151 | images = torch.stack(images, 0)
152 |
153 | labels = np.unique(masks[0])
154 | # Remove background
155 | labels = labels[labels!=0]
156 |
157 |             if self.is_bl:
158 |                 # Find large enough labels
159 |                 good_labels = []
160 |                 for l in labels:
161 |                     pixel_sum = (masks[0]==l).sum()
162 |                     if pixel_sum > 10*10:
163 |                         # OK if the object is always this small
164 |                         # Not OK if it is actually much bigger
165 |                         if pixel_sum > 30*30:
166 |                             good_labels.append(l)
167 |                         elif max((masks[1]==l).sum(), (masks[2]==l).sum()) < 20*20:
168 |                             good_labels.append(l)
169 |                 labels = np.array(good_labels, dtype=np.uint8)
170 |
171 | if len(labels) == 0:
172 | target_object = -1 # all black if no objects
173 | has_second_object = False
174 | trials += 1
175 | else:
176 | target_object = np.random.choice(labels)
177 | has_second_object = (len(labels) > 1)
178 | if has_second_object:
179 | labels = labels[labels!=target_object]
180 | second_object = np.random.choice(labels)
181 | break
182 |
183 | masks = np.stack(masks, 0)
184 | tar_masks = (masks==target_object).astype(np.float32)[:,np.newaxis,:,:]
185 | if has_second_object:
186 | sec_masks = (masks==second_object).astype(np.float32)[:,np.newaxis,:,:]
187 | selector = torch.FloatTensor([1, 1])
188 | else:
189 | sec_masks = np.zeros_like(tar_masks)
190 | selector = torch.FloatTensor([1, 0])
191 |
192 |         cls_gt = np.zeros((3, 384, 384), dtype=np.int64)  # np.int is deprecated/removed in newer NumPy
193 | cls_gt[tar_masks[:,0] > 0.5] = 1
194 | cls_gt[sec_masks[:,0] > 0.5] = 2
195 |
196 | data = {
197 | 'rgb': images,
198 | 'gt': tar_masks,
199 | 'cls_gt': cls_gt,
200 | 'sec_gt': sec_masks,
201 | 'selector': selector,
202 | 'info': info
203 | }
204 |
205 | return data
206 |
207 | def __len__(self):
208 | return len(self.videos)
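
# Usage sketch (illustrative; the paths and max_jump value are assumptions):
if __name__ == '__main__':
    dataset = VOSDataset('data/DAVIS/2017/trainval/JPEGImages/480p',
                         'data/DAVIS/2017/trainval/Annotations/480p',
                         max_jump=20, is_bl=False)
    sample = dataset[0]
    print(sample['rgb'].shape)  # (3, 3, 384, 384): frames, C, H, W
    print(sample['selector'])   # tensor([1., 1.]) with two objects, else tensor([1., 0.])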
--------------------------------------------------------------------------------
/dataset/yv_test_dataset.py:
--------------------------------------------------------------------------------
1 | import os
2 | from os import path
3 |
4 | import torch
5 | import torch.nn.functional as F
6 | from torch.utils.data.dataset import Dataset
7 | from torchvision import transforms
8 | from PIL import Image
9 | import numpy as np
10 | import random
11 |
12 | from dataset.range_transform import im_normalization
13 | from dataset.util import all_to_onehot
14 |
15 |
16 | class YouTubeVOSTestDataset(Dataset):
17 | def __init__(self, data_root, split):
18 | self.image_dir = path.join(data_root, 'all_frames', split+'_all_frames', 'JPEGImages') #all frames for test
19 | # self.image_dir = path.join(data_root, split, 'JPEGImages') #every 5 frame for test
20 | self.mask_dir = path.join(data_root, split, 'Annotations')
21 |
22 | self.videos = []
23 | self.shape = {}
24 | self.frames = {}
25 |
26 | vid_list = sorted(os.listdir(self.image_dir))
27 | # Pre-reading
28 | for vid in vid_list:
29 | frames = sorted(os.listdir(os.path.join(self.image_dir, vid)))
30 | self.frames[vid] = frames
31 |
32 | self.videos.append(vid)
33 | first_mask = os.listdir(path.join(self.mask_dir, vid))[0]
34 | _mask = np.array(Image.open(path.join(self.mask_dir, vid, first_mask)).convert("P"))
35 | self.shape[vid] = np.shape(_mask)
36 |
37 | self.im_transform = transforms.Compose([
38 | transforms.ToTensor(),
39 | im_normalization,
40 | transforms.Resize(480, interpolation=Image.BICUBIC),
41 | ])
42 |
43 | self.mask_transform = transforms.Compose([
44 | transforms.Resize(480, interpolation=Image.NEAREST),
45 | ])
46 |
47 | def __getitem__(self, idx):
48 | video = self.videos[idx]
49 | info = {}
50 | info['name'] = video
51 | info['num_objects'] = 0
52 | info['frames'] = self.frames[video]
53 | info['size'] = self.shape[video] # Real sizes
54 | info['gt_obj'] = {} # Frames with labelled objects
55 |
56 | vid_im_path = path.join(self.image_dir, video)
57 | vid_gt_path = path.join(self.mask_dir, video)
58 |
59 | frames = self.frames[video]
60 |
61 | images = []
62 | masks = []
63 | for i, f in enumerate(frames):
64 | img = Image.open(path.join(vid_im_path, f)).convert('RGB')
65 | images.append(self.im_transform(img))
66 |
67 | mask_file = path.join(vid_gt_path, f.replace('.jpg','.png'))
68 | if path.exists(mask_file):
69 | masks.append(np.array(Image.open(mask_file).convert('P'), dtype=np.uint8))
70 | this_labels = np.unique(masks[-1])
71 | this_labels = this_labels[this_labels!=0]
72 | info['gt_obj'][i] = this_labels
73 | else:
74 |                 # Mask doesn't exist -> nothing in it
75 | masks.append(np.zeros(self.shape[video]))
76 |
77 | images = torch.stack(images, 0)
78 | masks = np.stack(masks, 0)
79 |
80 | # Construct the forward and backward mapping table for labels
81 | labels = np.unique(masks).astype(np.uint8)
82 | labels = labels[labels!=0]
83 | info['label_convert'] = {}
84 | info['label_backward'] = {}
85 | idx = 1
86 | for l in labels:
87 | info['label_convert'][l] = idx
88 | info['label_backward'][idx] = l
89 | idx += 1
90 | masks = torch.from_numpy(all_to_onehot(masks, labels)).float()
91 |
92 | # Resize to 480p
93 | masks = self.mask_transform(masks)
94 | masks = masks.unsqueeze(2)
95 |
96 | info['labels'] = labels
97 |
98 | data = {
99 | 'rgb': images,
100 | 'gt': masks,
101 | 'info': info,
102 | }
103 |
104 | return data
105 |
106 | def __len__(self):
107 | return len(self.videos)
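# Label mapping example (illustrative): if a video contains object ids {2, 5},
# then info['label_convert'] == {2: 1, 5: 2} and info['label_backward'] == {1: 2, 2: 5}.
# eval_youtube.py uses the backward map to restore the original ids before saving.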
--------------------------------------------------------------------------------
/docs/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yongliu20/QDMN/301f7bff500dc5d642407f2de11248b97e96178f/docs/.DS_Store
--------------------------------------------------------------------------------
/docs/framework.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yongliu20/QDMN/301f7bff500dc5d642407f2de11248b97e96178f/docs/framework.png
--------------------------------------------------------------------------------
/docs/long_video.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yongliu20/QDMN/301f7bff500dc5d642407f2de11248b97e96178f/docs/long_video.png
--------------------------------------------------------------------------------
/docs/visualization.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yongliu20/QDMN/301f7bff500dc5d642407f2de11248b97e96178f/docs/visualization.png
--------------------------------------------------------------------------------
/eval_davis.py:
--------------------------------------------------------------------------------
1 | import os
2 | from os import path
3 | import time
4 | from argparse import ArgumentParser
5 | from collections import defaultdict
6 |
7 | import torch
8 | import torch.nn.functional as F
9 | from torch.utils.data import DataLoader
10 | import numpy as np
11 | from PIL import Image
12 |
13 | from model.eval_network import PropagationNetwork
14 | from dataset.davis_test_dataset import DAVISTestDataset
15 | from inference_core import InferenceCore
16 |
17 | from progressbar import progressbar
18 |
19 |
20 | """
21 | Arguments loading
22 | """
23 | parser = ArgumentParser()
24 | parser.add_argument('--model', default='saves/QDMN.pth')
25 | parser.add_argument('--davis', default='data/DAVIS/2017')
26 | parser.add_argument('--output')
27 | parser.add_argument('--split', help='val/testdev', default='val')
28 | parser.add_argument('--use_km', action='store_true')
29 | parser.add_argument('--no_top', action='store_true')
30 | args = parser.parse_args()
31 |
32 | davis_path = args.davis
33 | out_path = args.output
34 |
35 | # Simple setup
36 | os.makedirs(out_path, exist_ok=True)
37 | palette = Image.open(path.expanduser(davis_path + '/trainval/Annotations/480p/blackswan/00000.png')).getpalette()
38 |
39 | torch.autograd.set_grad_enabled(False)
40 |
41 | # Setup Dataset
42 | if args.split == 'val':
43 | test_dataset = DAVISTestDataset(davis_path+'/trainval', imset='2017/val.txt')
44 | test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False, num_workers=4, pin_memory=True)
45 | elif args.split == 'testdev':
46 | test_dataset = DAVISTestDataset(davis_path+'/test-dev', imset='2017/test-dev.txt', resolution='480p')
47 | test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False, num_workers=1)
48 | else:
49 | raise NotImplementedError
50 |
51 | # Load our checkpoint
52 | prop_saved = torch.load(args.model)
53 | top_k = None if args.no_top else 50
54 | if args.use_km:
55 | prop_model = PropagationNetwork(top_k=top_k, km=5.6).cuda().eval()
56 | else:
57 | prop_model = PropagationNetwork(top_k=top_k, km=None).cuda().eval()
58 | prop_model.load_state_dict(prop_saved)
59 |
60 | total_process_time = 0
61 | total_frames = 0
62 |
63 | # Start eval
64 | for data in progressbar(test_loader, max_value=len(test_loader), redirect_stdout=True):
65 |
66 | rgb = data['rgb'].cuda()
67 | msk = data['gt'][0].cuda()
68 | info = data['info']
69 | name = info['name'][0]
70 | k = len(info['labels'][0])
71 | size = info['size_480p']
72 |
73 | torch.cuda.synchronize()
74 | process_begin = time.time()
75 |
76 | processor = InferenceCore(prop_model, rgb, k)
77 | processor.interact(msk[:,0], 0, rgb.shape[1])
78 |
79 | # Do unpad -> upsample to original size
80 | out_masks = torch.zeros((processor.t, 1, *size), dtype=torch.uint8, device='cuda')
81 | for ti in range(processor.t):
82 | prob = processor.prob[:,ti]
83 |
84 | if processor.pad[2]+processor.pad[3] > 0:
85 | prob = prob[:,:,processor.pad[2]:-processor.pad[3],:]
86 | if processor.pad[0]+processor.pad[1] > 0:
87 | prob = prob[:,:,:,processor.pad[0]:-processor.pad[1]]
88 |
89 | prob = F.interpolate(prob, size, mode='bilinear', align_corners=False)
90 | out_masks[ti] = torch.argmax(prob, dim=0)
91 |
92 | out_masks = (out_masks.detach().cpu().numpy()[:,0]).astype(np.uint8)
93 |
94 | torch.cuda.synchronize()
95 | total_process_time += time.time() - process_begin
96 | total_frames += out_masks.shape[0]
97 |
98 | # Save the results
99 | this_out_path = path.join(out_path, name)
100 | os.makedirs(this_out_path, exist_ok=True)
101 | for f in range(out_masks.shape[0]):
102 | img_E = Image.fromarray(out_masks[f])
103 | img_E.putpalette(palette)
104 | img_E.save(os.path.join(this_out_path, '{:05d}.png'.format(f)))
105 |
106 | del rgb
107 | del msk
108 | del processor
109 |
110 | print('Total processing time: ', total_process_time)
111 | print('Total processed frames: ', total_frames)
112 | print('FPS: ', total_frames / total_process_time)
--------------------------------------------------------------------------------
/eval_davis_2016.py:
--------------------------------------------------------------------------------
1 | import os
2 | from os import path
3 | import time
4 | from argparse import ArgumentParser
5 |
6 | import torch
7 | from torch.utils.data import Dataset, DataLoader
8 | import numpy as np
9 | from PIL import Image
10 |
11 | from model.eval_network import PropagationNetwork
12 | from dataset.davis_test_dataset import DAVISTestDataset
13 | from inference_core import InferenceCore
14 |
15 | from progressbar import progressbar
16 |
17 |
18 | """
19 | Arguments loading
20 | """
21 | parser = ArgumentParser()
22 | parser.add_argument('--model', default='saves/QDMN.pth')
23 | parser.add_argument('--davis', default='data/DAVIS/2016')
24 | parser.add_argument('--output')
25 | parser.add_argument('--no_top', action='store_true')
26 | args = parser.parse_args()
27 |
28 | davis_path = args.davis
29 | out_path = args.output
30 |
31 | # Simple setup
32 | os.makedirs(out_path, exist_ok=True)
33 |
34 | torch.autograd.set_grad_enabled(False)
35 |
36 | # Setup Dataset, a small hack to use the image set in the 2017 folder because the 2016 one is of a different format
37 | test_dataset = DAVISTestDataset(davis_path, imset='../../2017/trainval/ImageSets/2016/val.txt', single_object=True)
38 | test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False, num_workers=4, pin_memory=True)
39 |
40 | # Load our checkpoint
41 | prop_saved = torch.load(args.model)
42 | top_k = None if args.no_top else 50
43 | prop_model = PropagationNetwork(top_k=top_k).cuda().eval()
44 | prop_model.load_state_dict(prop_saved)
45 |
46 | total_process_time = 0
47 | total_frames = 0
48 |
49 | # Start eval
50 | for data in progressbar(test_loader, max_value=len(test_loader), redirect_stdout=True):
51 |
52 | rgb = data['rgb'].cuda()
53 | msk = data['gt'][0].cuda()
54 | info = data['info']
55 | name = info['name'][0]
56 | k = len(info['labels'][0])
57 |
58 | torch.cuda.synchronize()
59 | process_begin = time.time()
60 |
61 | processor = InferenceCore(prop_model, rgb, k)
62 | processor.interact(msk[:,0], 0, rgb.shape[1])
63 |
64 | # Do unpad -> upsample to original size
65 | out_masks = torch.zeros((processor.t, 1, *rgb.shape[-2:]), dtype=torch.float32, device='cuda')
66 | for ti in range(processor.t):
67 | prob = processor.prob[:,ti]
68 |
69 | if processor.pad[2]+processor.pad[3] > 0:
70 | prob = prob[:,:,processor.pad[2]:-processor.pad[3],:]
71 | if processor.pad[0]+processor.pad[1] > 0:
72 | prob = prob[:,:,:,processor.pad[0]:-processor.pad[1]]
73 |
74 | out_masks[ti] = torch.argmax(prob, dim=0) * 255
75 |
76 | out_masks = (out_masks.detach().cpu().numpy()[:,0]).astype(np.uint8)
77 |
78 | torch.cuda.synchronize()
79 | total_process_time += time.time() - process_begin
80 | total_frames += out_masks.shape[0]
81 |
82 | this_out_path = path.join(out_path, name)
83 | os.makedirs(this_out_path, exist_ok=True)
84 | for f in range(out_masks.shape[0]):
85 | img_E = Image.fromarray(out_masks[f])
86 | img_E.save(os.path.join(this_out_path, '{:05d}.png'.format(f)))
87 |
88 | del rgb
89 | del msk
90 | del processor
91 |
92 | print('Total processing time: ', total_process_time)
93 | print('Total processed frames: ', total_frames)
94 | print('FPS: ', total_frames / total_process_time)
--------------------------------------------------------------------------------
/eval_youtube.py:
--------------------------------------------------------------------------------
1 | """
2 | YouTubeVOS has a label structure that is more complicated than DAVIS
3 | Labels might not appear on the first frame (there might be no labels at all in the first frame)
4 | Labels might not even appear on the same frame (i.e. Object 0 at frame 10, and object 1 at frame 15)
5 | 0 does not mean background -- it is simply "no-label"
6 | and object indices might not be in order; there are missing indices somewhere in the validation set
7 |
8 | Dealing with these makes the logic a bit convoluted here
9 | It is not necessarily hacky but do understand that it is not as straightforward as DAVIS
10 |
11 | Validation set only.
12 | """
13 |
14 |
15 | import os
16 | from os import path
17 | import time
18 | from argparse import ArgumentParser
19 |
20 | import torch
21 | import torch.nn.functional as F
22 | from torch.utils.data import Dataset, DataLoader
23 | import numpy as np
24 | from PIL import Image
25 |
26 | from model.eval_network import PropagationNetwork
27 | from dataset.yv_test_dataset import YouTubeVOSTestDataset
28 | from inference_core_yv import InferenceCore
29 |
30 | from progressbar import progressbar
31 |
32 | """
33 | Arguments loading
34 | """
35 | parser = ArgumentParser()
36 | parser.add_argument('--model', default='saves/QDMN.pth')
37 | parser.add_argument('--yv', default='data/YouTube')
38 | parser.add_argument('--output')
39 | parser.add_argument('--split', default='valid')
40 | parser.add_argument('--use_km', action='store_true')
41 | parser.add_argument('--no_top', action='store_true')
42 | args = parser.parse_args()
43 |
44 | yv_path = args.yv
45 | out_path = args.output
46 |
47 | # Simple setup
48 | os.makedirs(out_path, exist_ok=True)
49 | palette = Image.open(path.expanduser(yv_path + '/valid/Annotations/0a49f5265b/00000.png')).getpalette()
50 |
51 | torch.autograd.set_grad_enabled(False)
52 |
53 | # Setup Dataset
54 | test_dataset = YouTubeVOSTestDataset(data_root=yv_path, split=args.split)
55 | test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False, num_workers=2)
56 |
57 | # Load our checkpoint
58 | prop_saved = torch.load(args.model)
59 | top_k = None if args.no_top else 50
60 | if args.use_km:
61 | prop_model = PropagationNetwork(top_k=top_k, km=5.6).cuda().eval()
62 | else:
63 | prop_model = PropagationNetwork(top_k=top_k, km=None).cuda().eval()
64 | prop_model.load_state_dict(prop_saved)
65 |
66 | total_process_time = 0
67 | total_frames = 0
68 |
69 | # Start eval
70 | for data in progressbar(test_loader, max_value=len(test_loader), redirect_stdout=True):
71 | rgb = data['rgb']
72 | msk = data['gt'][0]
73 | info = data['info']
74 | name = info['name'][0]
75 | k = len(info['labels'][0])
76 | gt_obj = info['gt_obj']
77 | size = info['size']
78 |
79 | torch.cuda.synchronize()
80 | process_begin = time.time()
81 |
82 | # Frames with labels, but they are not exhaustively labeled
83 | frames_with_gt = sorted(list(gt_obj.keys()))
84 |
85 | processor = InferenceCore(prop_model, rgb, num_objects=k)
86 | # min_idx tells us the starting point of propagation
87 | # Propagating before there are labels is not useful
88 | min_idx = 99999
89 | for i, frame_idx in enumerate(frames_with_gt):
90 | min_idx = min(frame_idx, min_idx)
91 | # Note that there might be more than one label per frame
92 | obj_idx = gt_obj[frame_idx][0].tolist()
93 | # Map the possibly non-continuous labels into a continuous scheme
94 | obj_idx = [info['label_convert'][o].item() for o in obj_idx]
95 |
96 | # Append the background label
97 | with_bg_msk = torch.cat([
98 | 1 - torch.sum(msk[:,frame_idx], dim=0, keepdim=True),
99 | msk[:,frame_idx],
100 | ], 0).cuda()
101 |
102 | # We perform propagation from the current frame to the next frame with label
103 | if i == len(frames_with_gt) - 1:
104 | processor.interact(with_bg_msk, frame_idx, rgb.shape[1], obj_idx)
105 | else:
106 | processor.interact(with_bg_msk, frame_idx, frames_with_gt[i+1]+1, obj_idx)
107 |
108 | # Do unpad -> upsample to original size (we made it 480p)
109 | out_masks = torch.zeros((processor.t, 1, *size), dtype=torch.uint8, device='cuda')
110 | for ti in range(processor.t):
111 | prob = processor.prob[:,ti]
112 |
113 | if processor.pad[2]+processor.pad[3] > 0:
114 | prob = prob[:,:,processor.pad[2]:-processor.pad[3],:]
115 | if processor.pad[0]+processor.pad[1] > 0:
116 | prob = prob[:,:,:,processor.pad[0]:-processor.pad[1]]
117 |
118 | prob = F.interpolate(prob, size, mode='bilinear', align_corners=False)
119 | out_masks[ti] = torch.argmax(prob, dim=0)
120 |
121 | out_masks = (out_masks.detach().cpu().numpy()[:,0]).astype(np.uint8)
122 |
123 | # Remap the indices to the original domain
124 | idx_masks = np.zeros_like(out_masks)
125 | for i in range(1, k+1):
126 | backward_idx = info['label_backward'][i].item()
127 | idx_masks[out_masks==i] = backward_idx
128 |
129 | torch.cuda.synchronize()
130 | total_process_time += time.time() - process_begin
131 | total_frames += (idx_masks.shape[0] - min_idx)
132 |
133 | # Save the results
134 | this_out_path = path.join(out_path, name)
135 | os.makedirs(this_out_path, exist_ok=True)
136 | for f in range(idx_masks.shape[0]):
137 | if f >= min_idx:
138 | img_E = Image.fromarray(idx_masks[f])
139 | img_E.putpalette(palette)
140 | img_E.save(os.path.join(this_out_path, info['frames'][f][0].replace('.jpg','.png')))
141 |
142 | del rgb
143 | del msk
144 | del processor
145 |
146 | print('Total processing time: ', total_process_time)
147 | print('Total processed frames: ', total_frames)
148 | print('FPS: ', total_frames / total_process_time)
149 |
--------------------------------------------------------------------------------
/inference_core.py:
--------------------------------------------------------------------------------
1 | """
2 | This file can handle DAVIS 2016/2017 evaluation.
3 | """
4 |
5 | import torch
6 | import numpy as np
7 | import cv2
8 | import torch.nn.functional as F
9 | from model.eval_network import PropagationNetwork
10 | from model.aggregate import aggregate_wbg
11 | from util.tensor_util import pad_divide_by
12 |
13 |
14 | class InferenceCore:
15 | def __init__(self, prop_net:PropagationNetwork, images, num_objects, mem_freq=5):
16 | self.prop_net = prop_net
17 | self.mem_freq = mem_freq
18 |
19 | # True dimensions
20 | t = images.shape[1]
21 | h, w = images.shape[-2:]
22 |
23 | # Pad each side to multiple of 16
24 | images, self.pad = pad_divide_by(images, 16)
25 | # Padded dimensions
26 | nh, nw = images.shape[-2:]
27 |
28 | self.images = images
29 | self.device = 'cuda'
30 |
31 | self.k = num_objects
32 | self.masks = torch.zeros((t, 1, nh, nw), dtype=torch.uint8, device=self.device)
33 | self.out_masks = np.zeros((t, h, w), dtype=np.uint8)
34 |
35 |         # Background included, not always consistent (i.e. may not sum up to 1)
36 | self.prob = torch.zeros((self.k+1, t, 1, nh, nw), dtype=torch.float32, device=self.device)
37 | self.prob[0] = 1e-7
38 |
39 | self.t, self.h, self.w = t, h, w
40 | self.nh, self.nw = nh, nw
41 | self.kh = self.nh//16
42 | self.kw = self.nw//16
43 | self.memory_thr = 20
44 |
45 | def get_query_kv_buffered(self, idx):
46 | # not actually buffered
47 | f16, f8, f4 = self.prop_net.get_query_values(self.images[:,idx].cuda())
48 | _, _, h, w = f16.size()
49 | pre_mask, _ = torch.max(self.prob[1:, idx-1], dim=0)
50 | pre_mask = pre_mask.unsqueeze(0)
51 | pre_mask = F.interpolate(pre_mask, size=[h, w], mode='bilinear')
52 | concat_f16 = torch.cat([f16, pre_mask], dim=1)
53 | concat_f16 = self.prop_net.concat_conv(concat_f16)
54 | concat_f16 = torch.sigmoid(concat_f16)
55 | concat_f16 = f16 * concat_f16
56 | k16, v16 = self.prop_net.kv_q_f16(concat_f16)
57 | result = (concat_f16, f8, f4, k16, v16)
58 | return result
59 |
60 | def do_pass(self, key_k, key_v, idx, end_idx, scores_fir):
61 | """
62 | key_k, key_v - Memory feature of the starting frame
63 | idx - Frame index of the starting frame
64 | end_idx - Frame index at which we stop the propagation
65 | """
66 | closest_ti = end_idx
67 | memory = []
68 | sa = {}
69 |
70 | K, CK, _, H, W = key_k.shape
71 | _, CV, _, _, _ = key_v.shape
72 |
73 | keys = key_k
74 | values = key_v
75 |
76 | prev_in_mem = True
77 | prev_key = prev_value = None
78 | last_ti = idx
79 | index = 0
80 |
81 | # Note that we never reach closest_ti, just the frame before it
82 | this_range = range(idx+1, closest_ti)
83 | end = closest_ti - 1
84 |
85 | scores_fir = scores_fir.cpu().numpy()
86 | scores_fir = list(filter(lambda a: a != 0, scores_fir))
87 | score_fir = np.sum(scores_fir)
88 | score_fir = score_fir / len(scores_fir)
89 |
90 | for ti in this_range:
91 | if prev_in_mem:
92 | # if the previous frame has already been added to the memory bank
93 | this_k = keys
94 | this_v = values
95 | else:
96 | # append it to a temporary memory bank otherwise
97 | this_k = torch.cat([keys, prev_key], 2)
98 | this_v = torch.cat([values, prev_value], 2)
99 | query = self.get_query_kv_buffered(ti)
100 | out_mask = self.prop_net.segment_with_query(this_k, this_v, *query)
101 | out_mask = aggregate_wbg(out_mask, keep_bg=True)
102 | self.prob[:,ti] = out_mask
103 |
104 | if ti != end:
105 | # Memorize this frame
106 | prev_key, prev_value, scores = self.prop_net.memorize(self.images[:,ti].cuda(), out_mask[1:])
107 | scores = scores.cpu().numpy()
108 | # scores = list(filter(lambda a: a != 0, scores))
109 | score = np.sum(scores)
110 | score = score / len(scores)
111 | score = score / score_fir
112 | if (abs(ti-last_ti) >= self.mem_freq) and (score > 0.8):
113 | index += 1
114 | if len(memory) > self.memory_thr:
115 | sr = {}
116 | for m in memory:
117 | score_c = np.exp(-abs(index-m))
118 | sr[m] = score_c + sa[m]
119 | result_min = min(sr, key=lambda x: sr[x])
120 | pos = memory.index(result_min)
121 | memory.remove(result_min)
122 | del sa[result_min]
123 | memory.append(index)
124 | sa[index] = score
125 | keys = torch.cat([keys[:, :, :pos], keys[:, :, pos+1:]], dim=2)
126 | keys = torch.cat([keys, prev_key], dim=2)
127 | values = torch.cat([values[:, :, :pos], values[:, :, pos+1:]], dim=2)
128 | values = torch.cat([values, prev_value], 2)
129 | else:
130 | memory.append(index)
131 | sa[index] = score
132 | keys = torch.cat([keys, prev_key], 2)
133 | values = torch.cat([values, prev_value], 2)
134 | last_ti = ti
135 | prev_in_mem = True
136 | else:
137 | prev_in_mem = False
138 |
139 | return closest_ti
140 |
141 | def interact(self, mask, frame_idx, end_idx):
142 | """
143 | mask - Input one-hot encoded mask WITHOUT the background class
144 | frame_idx, end_idx - Start and end idx of propagation
145 | """
146 | mask, _ = pad_divide_by(mask.cuda(), 16)
147 |
148 | self.prob[:, frame_idx] = aggregate_wbg(mask, keep_bg=True)
149 |
150 | # KV pair for the interacting frame
151 | key_k, key_v, scores_fir = self.prop_net.memorize(self.images[:,frame_idx].cuda(), self.prob[1:,frame_idx].cuda())
152 |
153 | # Propagate
154 | self.do_pass(key_k, key_v, frame_idx, end_idx, scores_fir)
155 |
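# Note: get_query_kv_buffered implements the mask-guided query enhancement:
# the previous frame's foreground probability is resized to the feature map,
# concatenated with f16, passed through concat_conv and a sigmoid, and used to
# gate f16 before computing the query key/value.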
--------------------------------------------------------------------------------
/inference_core_yv.py:
--------------------------------------------------------------------------------
1 | """
2 | This file specifies an advanced version of inference_core.py
3 | specific for YouTubeVOS evaluation (which is not trivial and one has to be very careful!)
4 |
5 | In a very high level, we perform propagation independently for each object
6 | and we start memorization for each object only after their first appearance
7 | which is also when YouTubeVOS gives the "first frame"
8 |
9 | The "first frame" for each object is different
10 | """
11 |
12 | from collections import defaultdict
13 | import torch
14 | import numpy as np
15 | import cv2
16 |
17 | from model.eval_network import PropagationNetwork
18 | from model.aggregate import aggregate_wbg
19 |
20 | from util.tensor_util import pad_divide_by
21 | import torch.nn.functional as F
22 |
23 |
24 |
25 | class InferenceCore:
26 | def __init__(self, prop_net:PropagationNetwork, images, num_objects, mem_freq=5):
27 | self.prop_net = prop_net
28 | self.mem_freq = mem_freq
29 |
30 | # True dimensions
31 | t = images.shape[1]
32 | h, w = images.shape[-2:]
33 |
34 | # Pad each side to multiple of 16
35 | images, self.pad = pad_divide_by(images, 16)
36 | # Padded dimensions
37 | nh, nw = images.shape[-2:]
38 |
39 | self.images = images
40 | self.device = 'cuda'
41 |
42 | self.k = num_objects
43 | self.masks = torch.zeros((t, 1, nh, nw), dtype=torch.uint8, device=self.device)
44 | self.out_masks = np.zeros((t, h, w), dtype=np.uint8)
45 |
46 |         # Background included, not always consistent (i.e. may not sum up to 1)
47 | self.prob = torch.zeros((self.k+1, t, 1, nh, nw), dtype=torch.float32, device=self.device)
48 | self.prob[0] = 1e-7
49 |
50 | self.t, self.h, self.w = t, h, w
51 | self.nh, self.nw = nh, nw
52 | self.kh = self.nh//16
53 | self.kw = self.nw//16
54 | self.memory_thr = 10
55 |
56 |         # The keys/values are always preserved in YouTube testing;
57 |         # the reason is that we still consider it a single propagation pass,
58 |         # just that some objects arrive later than usual
59 | self.keys = dict()
60 | self.values = dict()
61 |
62 | # list of objects with usable memory
63 | self.enabled_obj = []
64 |
65 | def get_query_kv_buffered(self, idx, first, score=None):
66 | # not actually buffered
67 | f16, f8, f4 = self.prop_net.get_query_values(self.images[:,idx].cuda())
68 | _, _, h, w = f16.size()
69 | if score is None:
70 | previous_mask = F.softmax(self.prob[:, idx - 1], dim=0)
71 | previous_mask, _ = torch.max(previous_mask[1:], dim=0)
72 | else:
73 | if score < 0.0:
74 | previous_mask = F.softmax(self.prob[:, first], dim=0)
75 | previous_mask, _ = torch.max(previous_mask[1:], dim=0)
76 | else:
77 | previous_mask = F.softmax(self.prob[:, idx - 1], dim=0)
78 | previous_mask, _ = torch.max(previous_mask[1:], dim=0)
79 |
80 | previous_mask = previous_mask.unsqueeze(0)
81 | previous_mask = F.interpolate(previous_mask, size=[h, w], mode='bilinear')
82 | concat_f16 = torch.cat((f16, previous_mask), dim=1)
83 | concat_f16 = self.prop_net.concat_conv(concat_f16)
84 | concat_f16 = torch.sigmoid(concat_f16)
85 | f16_final = f16 * concat_f16
86 | k16, v16 = self.prop_net.kv_q_f16(f16_final)
87 | result = (f16_final, f8, f4, k16, v16)
88 | return result
89 |
90 | def do_pass(self, key_k, key_v, idx, end_idx):
91 | """
92 | key_k, key_v - Memory feature of the starting frame
93 | idx - Frame index of the starting frame
94 | end_idx - Frame index at which we stop the propagation
95 | """
96 | closest_ti = end_idx
97 | memory = []
98 | sa = {}
99 |
100 | K, CK, _, H, W = key_k.shape
101 | _, CV, _, _, _ = key_v.shape
102 |
103 |         for i, oi in enumerate(self.enabled_obj):  # first, store the first frame's keys and values
104 |             if oi not in self.keys:  # first appearance of an object: add its key/value to the dicts
105 |                 self.keys[oi] = key_k[i:i+1]
106 |                 self.values[oi] = key_v[i:i+1]
107 |             else:  # seen before: concatenate its key/value with the previous ones along the T dimension
108 |                 self.keys[oi] = torch.cat([self.keys[oi], key_k[i:i+1]], 2)
109 |                 self.values[oi] = torch.cat([self.values[oi], key_v[i:i+1]], 2)
110 |
111 | prev_in_mem = True
112 | prev_key = {}
113 | prev_value = {}
114 |         last_ti = idx  # last_ti: index of the earliest memory frame other than the first; idx: index of the first frame
115 | index = 0
116 |
117 | # Note that we never reach closest_ti, just the frame before it
118 | this_range = range(idx+1, closest_ti)
119 | step = +1
120 | end = closest_ti - 1
121 |
122 |
123 |
124 |         for ti in this_range:  # ti runs from the next frame to the last frame
125 | if prev_in_mem:
126 | # if the previous frame has already been added to the memory bank
127 | this_k = self.keys
128 | this_v = self.values
129 | else:
130 | # append it to a temporary memory bank otherwise
131 | # everything has to be done independently for each object
132 | this_k = {}
133 | this_v = {}
134 | for i, oi in enumerate(self.enabled_obj):
135 | this_k[oi] = torch.cat([self.keys[oi], prev_key[i:i+1]], 2)
136 | this_v[oi] = torch.cat([self.values[oi], prev_value[i:i+1]], 2)
137 |
138 |             if ti == idx+1:  # the second frame has no score yet
139 |                 query = self.get_query_kv_buffered(ti, idx)  # extract the current frame's key, value and image features
140 |             else:
141 |                 query = self.get_query_kv_buffered(ti, idx, scores)
142 |
143 |             out_mask = torch.cat([
144 |                 self.prop_net.segment_with_query(this_k[oi], this_v[oi], *query)
145 |                 for oi in self.enabled_obj], 0)  # predict per object channel (dim=0) -> N,1,H,W
146 |
147 |
148 |
149 |             out_mask = aggregate_wbg(out_mask, keep_bg=True)  # soft aggregation
150 | self.prob[0,ti] = out_mask[0]
151 | # output mapping to the full object id space
152 | for i, oi in enumerate(self.enabled_obj):
153 |                 self.prob[oi,ti] = out_mask[i+1]  # N,1,H,W
154 |
155 | if ti != end:
156 | # memorize this frame
157 |                 prev_key, prev_value, score = self.prop_net.memorize(self.images[:,ti].cuda(), out_mask[1:])  # key and value of the frame just processed
158 |
159 | score = score.cpu().numpy()
160 | score = list(filter(lambda a: a != 0, score))
161 | scores = np.sum(score)
162 | scores = scores / len(score)
163 |
164 |
165 | if ti == idx+1:
166 | max_score = scores
167 | scores = scores / max_score
168 |
169 |
170 | if (abs(ti-last_ti) >= self.mem_freq) and (scores > 0.75):
171 | index += 1
172 | if len(memory) > self.memory_thr:
173 | sr = {}
174 | for m in memory:
175 | score_c = np.exp(-abs(index-m))
176 | sr[m] = score_c + sa[m]
177 | result_min = min(sr, key=lambda x: sr[x])
178 | pos = memory.index(result_min)
179 | memory.remove(result_min)
180 | del sa[result_min]
181 | memory.append(index)
182 |                         sa[index] = scores  # store the averaged quality score, not the raw list
183 | for i, oi in enumerate(self.enabled_obj):
184 | self.keys[oi] = torch.cat([self.keys[oi][:, :, :pos], self.keys[oi][:, :, pos+1:]], dim=2)
185 | self.keys[oi] = torch.cat([self.keys[oi], prev_key[i:i+1]], 2)
186 | self.values[oi] = torch.cat([self.values[oi][:, :, :pos], self.values[oi][:, :, pos+1:]], dim=2)
187 | self.values[oi] = torch.cat([self.values[oi], prev_value[i:i+1]], 2)
188 | else:
189 | memory.append(index)
190 |                         sa[index] = scores  # as above: store the scalar average
191 | for i, oi in enumerate(self.enabled_obj):
192 | self.keys[oi] = torch.cat([self.keys[oi], prev_key[i:i+1]], 2)
193 | self.values[oi] = torch.cat([self.values[oi], prev_value[i:i+1]], 2)
194 |
195 |
196 | last_ti = ti
197 | prev_in_mem = True
198 | else:
199 | prev_in_mem = False
200 |
201 |
202 | return closest_ti
203 |
204 | def interact(self, mask, frame_idx, end_idx, obj_idx):
205 | """
206 | mask - Input one-hot encoded mask WITHOUT the background class
207 | frame_idx, end_idx - Start and end idx of propagation
208 | obj_idx - list of object IDs that first appear on this frame
209 | """
210 |
211 |         # In YouTube mode, we interact with a subset of object IDs at a time
212 | mask, _ = pad_divide_by(mask.cuda(), 16)
213 |
214 | # update objects that have been labeled
215 | self.enabled_obj.extend(obj_idx)
216 |
217 | # Set other prob of mask regions to zero
218 | mask_regions = (mask[1:].sum(0) > 0.5)
219 | self.prob[:, frame_idx, mask_regions] = 0
220 | self.prob[obj_idx, frame_idx] = mask[obj_idx]
221 |
222 | self.prob[:, frame_idx] = aggregate_wbg(self.prob[1:, frame_idx], keep_bg=True)
223 |
224 | # KV pair for the interacting frame
225 | key_k, key_v, _ = self.prop_net.memorize(self.images[:,frame_idx].cuda(), self.prob[self.enabled_obj,frame_idx].cuda())
226 |
227 | # Propagate
228 | self.do_pass(key_k, key_v, frame_idx, end_idx)
229 |
--------------------------------------------------------------------------------
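The eviction branch in `do_pass` above is the quality-aware dynamic update: every memorized frame `m` keeps its quality score `sa[m]`, and once the bank is full the frame minimizing `exp(-|index - m|) + sa[m]` (temporal consistency plus segmentation quality) is dropped. A minimal standalone sketch of that rule, with names mirroring the code above rather than the project's API:

```python
import numpy as np

def update_memory(memory, sa, index, score, memory_thr):
    """Evict the memory frame with the lowest combined quality +
    temporal-consistency score once the bank exceeds memory_thr."""
    if len(memory) > memory_thr:
        sr = {m: np.exp(-abs(index - m)) + sa[m] for m in memory}
        worst = min(sr, key=sr.get)   # least consistent and/or lowest quality
        memory.remove(worst)
        del sa[worst]
    memory.append(index)
    sa[index] = score
    return memory, sa

# Toy usage: a full bank receives a new frame; the old, low-quality slot 2 is evicted.
memory, sa = update_memory([1, 2, 3, 4, 5, 6],
                           {1: 0.9, 2: 0.4, 3: 0.8, 4: 0.85, 5: 0.9, 6: 0.95},
                           index=7, score=0.9, memory_thr=5)
```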
/model/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yongliu20/QDMN/301f7bff500dc5d642407f2de11248b97e96178f/model/__init__.py
--------------------------------------------------------------------------------
/model/aggregate.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn.functional as F
3 |
4 |
5 | def aggregate_wbg(prob, keep_bg=False):
6 | k = prob.shape
7 | new_prob = torch.cat([
8 | torch.prod(1-prob, dim=0, keepdim=True),
9 | prob
10 | ], 0).clamp(1e-7, 1-1e-7)
11 | logits = torch.log((new_prob /(1-new_prob)))
12 |
13 | if keep_bg:
14 | return F.softmax(logits, dim=0)
15 | else:
16 | return F.softmax(logits, dim=0)[1:]
--------------------------------------------------------------------------------
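`aggregate_wbg` is the soft-aggregation step: the background probability is synthesized as the product of (1 - p_i) over all objects, the clamped probabilities are mapped back to logits with the inverse sigmoid, and a softmax renormalizes across channels. A quick sanity check (shapes are arbitrary):

```python
import torch
from model.aggregate import aggregate_wbg

prob = torch.rand(3, 1, 64, 64)          # per-object foreground probabilities, N,1,H,W
out = aggregate_wbg(prob, keep_bg=True)  # 4,1,64,64: background + 3 objects
assert torch.allclose(out.sum(0), torch.ones(1, 64, 64), atol=1e-5)  # a distribution per pixel
```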
/model/eval_network.py:
--------------------------------------------------------------------------------
1 | """
2 | eval_network.py - Evaluation version of the network
3 | """
4 |
5 | import math
6 | import torch
7 | import torch.nn as nn
8 | import torch.nn.functional as F
9 | from model.modules import *
10 | from model.network import Decoder
11 |
12 |
13 | def make_gaussian(y_idx, x_idx, height, width, sigma=7):
14 | yv, xv = torch.meshgrid([torch.arange(0, height), torch.arange(0, width)])
15 |
16 | yv = yv.reshape(height*width).unsqueeze(0).float().cuda()
17 | xv = xv.reshape(height*width).unsqueeze(0).float().cuda()
18 |
19 | y_idx = y_idx.transpose(0, 1)
20 | x_idx = x_idx.transpose(0, 1)
21 |
22 | g = torch.exp(- ((yv-y_idx)**2 + (xv-x_idx)**2) / (2*sigma**2) )
23 |
24 | return g
25 |
26 | def softmax_w_g_top(x, top=None, gauss=None):
27 | if top is not None:
28 | if gauss is not None:
29 | maxes = torch.max(x, dim=1, keepdim=True)[0]
30 | x_exp = torch.exp(x - maxes)*gauss
31 | x_exp, indices = torch.topk(x_exp, k=top, dim=1)
32 | else:
33 | values, indices = torch.topk(x, k=top, dim=1)
34 | x_exp = torch.exp(values - values[:,0])
35 |
36 | x_exp_sum = torch.sum(x_exp, dim=1, keepdim=True)
37 | x_exp /= x_exp_sum
38 | x.zero_().scatter_(1, indices, x_exp) # B * THW * HW
39 |
40 | output = x
41 | else:
42 | maxes = torch.max(x, dim=1, keepdim=True)[0]
43 | if gauss is not None:
44 | x_exp = torch.exp(x-maxes)*gauss
45 |
46 | x_exp_sum = torch.sum(x_exp, dim=1, keepdim=True)
47 | x_exp /= x_exp_sum
48 | output = x_exp
49 |
50 | return output
51 |
52 | class EvalMemoryReader(nn.Module):
53 | def __init__(self, top_k, km):
54 | super().__init__()
55 | self.top_k = top_k
56 | self.km = km
57 |
58 | def forward(self, mk, mv, qk):
59 | B, CK, T, H, W = mk.shape
60 | _, CV, _, _, _ = mv.shape
61 |
62 | mi = mk.view(B, CK, T*H*W).transpose(1, 2)
63 | qi = qk.view(1, CK, H*W).expand(B, -1, -1) / math.sqrt(CK) # B * CK * HW
64 |
65 | affinity = torch.bmm(mi, qi) # B, THW, HW
66 |
67 | if self.km is not None:
68 | # Make a bunch of Gaussian distributions
69 | argmax_idx = affinity.max(2)[1]
70 | y_idx, x_idx = argmax_idx//W, argmax_idx%W
71 | g = make_gaussian(y_idx, x_idx, H, W, sigma=self.km)
72 | g = g.view(B, T*H*W, H*W)
73 |
74 | affinity = softmax_w_g_top(affinity, top=self.top_k, gauss=g) # B, THW, HW
75 | else:
76 | if self.top_k is not None:
77 | affinity = softmax_w_g_top(affinity, top=self.top_k, gauss=None) # B, THW, HW
78 | else:
79 | affinity = F.softmax(affinity, dim=1)
80 |
81 | mv = mv.view(B, CV, T*H*W)
82 | mem = torch.bmm(mv, affinity) # Weighted-sum B, CV, HW
83 | mem = mem.view(B, CV, H, W)
84 |
85 | return mem
86 |
87 | class PropagationNetwork(nn.Module):
88 | def __init__(self, top_k=50, km=None):
89 | super().__init__()
90 | self.mask_rgb_encoder = MaskRGBEncoder()
91 | self.rgb_encoder = RGBEncoder()
92 |
93 | self.kv_m_f16 = KeyValue(1024, keydim=128, valdim=512)
94 | self.kv_q_f16 = KeyValue(1024, keydim=128, valdim=512)
95 |
96 | self.memory = EvalMemoryReader(top_k, km=km)
97 | self.decoder = Decoder()
98 | self.aspp = ASPP(1024)
99 | self.score = Score(1024)
100 | self.concat_conv = nn.Conv2d(1025,1,3,1,1)
101 |
102 | def memorize(self, frame, masks):
103 | k, _, h, w = masks.shape
104 |
105 | # Extract memory key/value for a frame with multiple masks
106 | frame = frame.view(1, 3, h, w).repeat(k, 1, 1, 1)
107 | # Compute the "others" mask
108 | if k != 1:
109 | others = torch.cat([
110 | torch.sum(
111 | masks[[j for j in range(k) if i!=j]]
112 | , dim=0, keepdim=True)
113 | for i in range(k)], 0)
114 | else:
115 | others = torch.zeros_like(masks)
116 |
117 | f16 = self.mask_rgb_encoder(frame, masks, others)
118 | f16_score = F.interpolate(f16, [24, 24], mode='bilinear') #feature for quality assessment
119 | mask_score = self.score(f16_score)
120 | k16, v16 = self.kv_m_f16(f16) # num_objects, 128 and 512, H/16, W/16
121 |
122 | return k16.unsqueeze(2), v16.unsqueeze(2), mask_score
123 |
124 | def get_query_values(self, frame):
125 | f16, f8, f4 = self.rgb_encoder(frame)
126 |
127 | return f16, f8, f4
128 |
129 | def segment_with_query(self, keys, values, f16, f8, f4, k16, v16):
130 | k = keys.shape[0]
131 | # Do it batch by batch to reduce memory usage
132 | batched = 1
133 | m4 = torch.cat([
134 | self.memory(keys[i:i+batched], values[i:i+batched], k16) for i in range(0, k, batched)
135 | ], 0)
136 |
137 | v16 = v16.expand(k, -1, -1, -1)
138 | m4 = torch.cat([m4, v16], 1)
139 | m4 = self.aspp(m4)
140 |
141 | return torch.sigmoid(self.decoder(m4, f8, f4))
142 |
--------------------------------------------------------------------------------
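`softmax_w_g_top` is the evaluation-time affinity normalization: optionally weight each query's matches by a Gaussian centered on its argmax, keep only the top-k memory positions, and renormalize. A toy check of the top-k path (sizes are arbitrary):

```python
import torch
from model.eval_network import softmax_w_g_top

affinity = torch.randn(1, 6, 4)                 # B=1, THW=6 memory positions, HW=4 queries
out = softmax_w_g_top(affinity.clone(), top=3)  # clone: the function zeroes x in place
print(out.sum(dim=1))                           # each query column sums to ~1
print((out > 0).sum(dim=1))                     # with at most 3 non-zero entries per column
```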
/model/losses.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 | from util.tensor_util import compute_tensor_iu
5 |
6 | from collections import defaultdict
7 |
8 |
9 | def get_iou_hook(values):
10 | return 'iou/iou', (values['hide_iou/i']+1)/(values['hide_iou/u']+1)
11 |
12 | def get_sec_iou_hook(values):
13 | return 'iou/sec_iou', (values['hide_iou/sec_i']+1)/(values['hide_iou/sec_u']+1)
14 |
15 | iou_hooks_so = [
16 | get_iou_hook,
17 | ]
18 |
19 | iou_hooks_mo = [
20 | get_iou_hook,
21 | get_sec_iou_hook,
22 | ]
23 |
24 |
25 | class BootstrappedCE(nn.Module):
26 | def __init__(self, start_warm=20000, end_warm=70000, top_p=0.15):
27 | super().__init__()
28 |
29 | self.start_warm = start_warm
30 | self.end_warm = end_warm
31 | self.top_p = top_p
32 |
33 | def forward(self, input, target, it):
34 | if it < self.start_warm:
35 | return F.cross_entropy(input, target), 1.0
36 |
37 | raw_loss = F.cross_entropy(input, target, reduction='none').view(-1)
38 | num_pixels = raw_loss.numel()
39 |
40 | if it > self.end_warm:
41 | this_p = self.top_p
42 | else:
43 | this_p = self.top_p + (1-self.top_p)*((self.end_warm-it)/(self.end_warm-self.start_warm))
44 | loss, _ = torch.topk(raw_loss, int(num_pixels * this_p), sorted=False)
45 | return loss.mean(), this_p
46 |
47 |
48 | class LossComputer:
49 | def __init__(self, para):
50 | super().__init__()
51 | self.para = para
52 | self.bce = BootstrappedCE()
53 | self.score_loss = nn.MSELoss()
54 |
55 | def compute(self, data, it):
56 | losses = defaultdict(int)
57 |
58 | b, s, _, _, _ = data['gt'].shape
59 | selector = data.get('selector', None)
60 |
61 | for i in range(1, s):
62 | for j in range(b):
63 | if selector is not None and selector[j][1] > 0.5:
64 | loss_seg, p = self.bce(data['logits_%d'%i][j:j+1], data['cls_gt'][j:j+1,i], it)
65 | loss_score = self.score_loss(data['mask_score_%d' % i][j:j+1], data['gt_score_%d' % i][j:j+1])
66 | else:
67 | loss_seg, p = self.bce(data['logits_%d'%i][j:j+1,:2], data['cls_gt'][j:j+1,i], it)
68 | loss_score = self.score_loss(data['mask_score_%d' % i][j:j+1], data['gt_score_%d' % i][j:j+1])
69 |
70 | losses['loss_%d'%i] = losses['loss_%d'%i] + loss_seg / b + loss_score / b
71 | losses['p'] += p / b / (s-1)
72 |
73 | losses['total_loss'] += losses['loss_%d'%i]
74 |
75 | new_total_i, new_total_u = compute_tensor_iu(data['mask_%d'%i]>0.5, data['gt'][:,i]>0.5)
76 | losses['hide_iou/i'] += new_total_i
77 | losses['hide_iou/u'] += new_total_u
78 |
79 | if selector is not None:
80 | new_total_i, new_total_u = compute_tensor_iu(data['sec_mask_%d'%i]>0.5, data['sec_gt'][:,i]>0.5)
81 | losses['hide_iou/sec_i'] += new_total_i
82 | losses['hide_iou/sec_u'] += new_total_u
83 |
84 | return losses
85 |
--------------------------------------------------------------------------------
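`BootstrappedCE` anneals from plain cross-entropy to hard-pixel mining: before `start_warm` every pixel contributes, and the kept fraction then decays linearly to `top_p` by `end_warm`. A short check at the midpoint of the warm-up (shapes are arbitrary):

```python
import torch
from model.losses import BootstrappedCE

bce = BootstrappedCE(start_warm=20000, end_warm=70000, top_p=0.15)
logits = torch.randn(2, 3, 64, 64)       # B, classes, H, W
target = torch.randint(0, 3, (2, 64, 64))
loss, p = bce(logits, target, it=45000)  # halfway through warm-up: p = 0.15 + 0.85*0.5 = 0.575
```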
/model/mod_resnet.py:
--------------------------------------------------------------------------------
1 | """
2 | mod_resnet.py - A modified ResNet structure
3 | We append extra channels to the first conv by some network surgery
4 | """
5 |
6 | from collections import OrderedDict
7 | import math
8 |
9 | import torch
10 | import torch.nn as nn
11 | from torch.utils import model_zoo
12 |
13 |
14 | def load_weights_sequential(target, source_state, extra_chan=1):
15 |
16 | new_dict = OrderedDict()
17 |
18 | for k1, v1 in target.state_dict().items():
19 | if not 'num_batches_tracked' in k1:
20 | if k1 in source_state:
21 | tar_v = source_state[k1]
22 |
23 | if v1.shape != tar_v.shape:
24 | # Init the new segmentation channel with zeros
25 | # print(v1.shape, tar_v.shape)
26 | c, _, w, h = v1.shape
27 | pads = torch.zeros((c,extra_chan,w,h), device=tar_v.device)
28 | nn.init.orthogonal_(pads)
29 | tar_v = torch.cat([tar_v, pads], 1)
30 |
31 | new_dict[k1] = tar_v
32 | elif 'bias' not in k1:
33 | print('Not OK', k1)
34 |
35 | target.load_state_dict(new_dict, strict=False)
36 |
37 |
38 | model_urls = {
39 | 'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
40 | }
41 |
42 |
43 | def conv3x3(in_planes, out_planes, stride=1, dilation=1):
44 | return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
45 | padding=dilation, dilation=dilation)
46 |
47 |
48 | class BasicBlock(nn.Module):
49 | expansion = 1
50 |
51 | def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):
52 | super(BasicBlock, self).__init__()
53 | self.conv1 = conv3x3(inplanes, planes, stride=stride, dilation=dilation)
54 | self.bn1 = nn.BatchNorm2d(planes)
55 | self.relu = nn.ReLU(inplace=True)
56 | self.conv2 = conv3x3(planes, planes, stride=1, dilation=dilation)
57 | self.bn2 = nn.BatchNorm2d(planes)
58 | self.downsample = downsample
59 | self.stride = stride
60 |
61 | def forward(self, x):
62 | residual = x
63 |
64 | out = self.conv1(x)
65 | out = self.bn1(out)
66 | out = self.relu(out)
67 |
68 | out = self.conv2(out)
69 | out = self.bn2(out)
70 |
71 | if self.downsample is not None:
72 | residual = self.downsample(x)
73 |
74 | out += residual
75 | out = self.relu(out)
76 |
77 | return out
78 |
79 |
80 | class Bottleneck(nn.Module):
81 | expansion = 4
82 |
83 | def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):
84 | super(Bottleneck, self).__init__()
85 | self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1)
86 | self.bn1 = nn.BatchNorm2d(planes)
87 | self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, dilation=dilation,
88 | padding=dilation)
89 | self.bn2 = nn.BatchNorm2d(planes)
90 | self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1)
91 | self.bn3 = nn.BatchNorm2d(planes * 4)
92 | self.relu = nn.ReLU(inplace=True)
93 | self.downsample = downsample
94 | self.stride = stride
95 |
96 | def forward(self, x):
97 | residual = x
98 |
99 | out = self.conv1(x)
100 | out = self.bn1(out)
101 | out = self.relu(out)
102 |
103 | out = self.conv2(out)
104 | out = self.bn2(out)
105 | out = self.relu(out)
106 |
107 | out = self.conv3(out)
108 | out = self.bn3(out)
109 |
110 | if self.downsample is not None:
111 | residual = self.downsample(x)
112 |
113 | out += residual
114 | out = self.relu(out)
115 |
116 | return out
117 |
118 |
119 | class ResNet(nn.Module):
120 | def __init__(self, block, layers=(3, 4, 23, 3), extra_chan=1):
121 | self.inplanes = 64
122 | super(ResNet, self).__init__()
123 | self.conv1 = nn.Conv2d(3+extra_chan, 64, kernel_size=7, stride=2, padding=3)
124 | self.bn1 = nn.BatchNorm2d(64)
125 | self.relu = nn.ReLU(inplace=True)
126 | self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
127 | self.layer1 = self._make_layer(block, 64, layers[0])
128 | self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
129 | self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
130 | self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
131 |
132 | for m in self.modules():
133 | if isinstance(m, nn.Conv2d):
134 | n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
135 | m.weight.data.normal_(0, math.sqrt(2. / n))
136 | m.bias.data.zero_()
137 | elif isinstance(m, nn.BatchNorm2d):
138 | m.weight.data.fill_(1)
139 | m.bias.data.zero_()
140 |
141 | def _make_layer(self, block, planes, blocks, stride=1, dilation=1):
142 | downsample = None
143 | if stride != 1 or self.inplanes != planes * block.expansion:
144 | downsample = nn.Sequential(
145 | nn.Conv2d(self.inplanes, planes * block.expansion,
146 | kernel_size=1, stride=stride),
147 | nn.BatchNorm2d(planes * block.expansion),
148 | )
149 |
150 | layers = [block(self.inplanes, planes, stride, downsample)]
151 | self.inplanes = planes * block.expansion
152 | for i in range(1, blocks):
153 | layers.append(block(self.inplanes, planes, dilation=dilation))
154 |
155 | return nn.Sequential(*layers)
156 |
157 | def resnet50(pretrained=True, extra_chan=0):
158 | model = ResNet(Bottleneck, [3, 4, 6, 3], extra_chan)
159 | if pretrained:
160 | load_weights_sequential(model, model_zoo.load_url(model_urls['resnet50']), extra_chan) #download
161 | # load_weights_sequential(model, torch.load('path_to_yours'), extra_chan) #load yours
162 | return model
163 |
164 |
--------------------------------------------------------------------------------
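The surgery above widens `conv1` to `3+extra_chan` input channels, initializing the new channels orthogonally while reusing the ImageNet weights for the RGB channels. A minimal shape check (`pretrained=False` to skip the download):

```python
import torch
from model.mod_resnet import resnet50

net = resnet50(pretrained=False, extra_chan=2)  # RGB + object mask + other-objects mask
x = torch.randn(1, 5, 224, 224)
x = net.maxpool(net.relu(net.bn1(net.conv1(x))))
f16 = net.layer3(net.layer2(net.layer1(x)))
print(f16.shape)                                # torch.Size([1, 1024, 14, 14]) -- stride 16
```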
/model/model.py:
--------------------------------------------------------------------------------
1 | """
2 | model.py - wrapper and utility functions for network training
3 | Compute loss, back-prop, update parameters, logging, etc.
4 | """
5 |
6 |
7 | import os
8 | import time
9 | import torch
10 | import torch.nn as nn
11 | import torch.optim as optim
12 | import cv2
13 |
14 | from model.network import PropagationNetwork
15 | from model.losses import LossComputer, iou_hooks_mo, iou_hooks_so
16 | from util.log_integrator import Integrator
17 | from util.image_saver import pool_pairs
18 | from util.tensor_util import maskiou
19 |
20 |
21 | class PropagationModel:
22 | def __init__(self, para, logger=None, save_path=None, local_rank=0, world_size=1):
23 | self.para = para
24 | self.single_object = para['single_object']
25 | self.local_rank = local_rank
26 |
27 | self.PNet = nn.parallel.DistributedDataParallel(
28 | PropagationNetwork(self.single_object).cuda(),
29 | device_ids=[local_rank], output_device=local_rank, broadcast_buffers=False)
30 |
31 | # Setup logger when local_rank=0
32 | self.logger = logger
33 | self.save_path = save_path
34 | if logger is not None:
35 | self.last_time = time.time()
36 | self.train_integrator = Integrator(self.logger, distributed=True, local_rank=local_rank, world_size=world_size)
37 | if self.single_object:
38 | self.train_integrator.add_hook(iou_hooks_so)
39 | else:
40 | self.train_integrator.add_hook(iou_hooks_mo)
41 | self.loss_computer = LossComputer(para)
42 |
43 | self.train()
44 | self.optimizer = optim.Adam(filter(
45 | lambda p: p.requires_grad, self.PNet.parameters()), lr=para['lr'], weight_decay=1e-7)
46 | self.scheduler = optim.lr_scheduler.MultiStepLR(self.optimizer, para['steps'], para['gamma'])
47 |
48 | # Logging info
49 | self.report_interval = 100
50 | self.save_im_interval = 800
51 | self.save_model_interval = 50000
52 | if para['debug']:
53 | self.report_interval = self.save_im_interval = 1
54 |
55 | def do_pass(self, data, it=0):
56 | # No need to store the gradient outside training
57 | torch.set_grad_enabled(self._is_train)
58 |
59 | for k, v in data.items():
60 | if type(v) != list and type(v) != dict and type(v) != int:
61 | data[k] = v.cuda(non_blocking=True)
62 |
63 | out = {}
64 | Fs = data['rgb']
65 | Ms = data['gt']
66 |
67 | if self.single_object:
68 | key_k, key_v, _ = self.PNet(Fs[:,0], Ms[:,0])
69 | prev_logits, prev_mask = self.PNet(Fs[:,1], key_k, key_v, mask1=Ms[:,0])
70 | prev_k, prev_v, prev_score = self.PNet(Fs[:,1], prev_mask)
71 |
72 | keys = torch.cat([key_k, prev_k], 2)
73 | values = torch.cat([key_v, prev_v], 2)
74 | this_logits, this_mask = self.PNet(Fs[:,2], keys, values, mask1=prev_mask)
75 | _, _, this_score = self.PNet(Fs[:, 2], this_mask)
76 |
77 | out['mask_1'] = prev_mask
78 | out['mask_2'] = this_mask
79 | out['logits_1'] = prev_logits
80 | out['logits_2'] = this_logits
81 | out['mask_score_1'] = prev_score
82 | out['mask_score_2'] = this_score
83 |
84 |             ### calculate mask IoU
85 | mask_for_prev = torch.zeros_like(prev_mask)
86 | mask_for_prev[prev_mask > 0.5] = 1
87 | mask_for_this = torch.zeros_like(this_mask)
88 | mask_for_this[this_mask > 0.5] = 1
89 | gt_mask_prev = data['gt'][:, 1]
90 | gt_mask_this = data['gt'][:, 2]
91 | iou_prev = maskiou(mask_for_prev, gt_mask_prev)
92 | iou_this = maskiou(mask_for_this, gt_mask_this)
93 |
94 | out['gt_score_1'] = iou_prev
95 | out['gt_score_2'] = iou_this
96 | ###
97 |
98 | else:
99 | sec_Ms = data['sec_gt']
100 | selector = data['selector']
101 |
102 | key_k1, key_v1, _ = self.PNet(Fs[:,0], Ms[:,0], sec_Ms[:,0])
103 | key_k2, key_v2, _ = self.PNet(Fs[:,0], sec_Ms[:,0], Ms[:,0])
104 | key_k = torch.stack([key_k1, key_k2], 1)
105 | key_v = torch.stack([key_v1, key_v2], 1)
106 |
107 | prev_logits, prev_mask = self.PNet(Fs[:,1], key_k, key_v, mask1=Ms[:, 0], mask2=sec_Ms[:, 0], selector=selector)
108 |
109 | prev_k1, prev_v1, prev_score_1 = self.PNet(Fs[:,1], prev_mask[:,0:1], prev_mask[:,1:2])
110 | prev_k2, prev_v2, prev_score_2 = self.PNet(Fs[:,1], prev_mask[:,1:2], prev_mask[:,0:1])
111 | prev_k = torch.stack([prev_k1, prev_k2], 1)
112 | prev_v = torch.stack([prev_v1, prev_v2], 1)
113 | keys = torch.cat([key_k, prev_k], 3)
114 | values = torch.cat([key_v, prev_v], 3)
115 |
116 | ###calculate maskiou
117 | mask_for_prev_1 = torch.zeros_like(prev_mask[:, 0:1])
118 | mask_for_prev_1[prev_mask[:, 0:1] > 0.5] = 1
119 | mask_for_prev_2 = torch.zeros_like(prev_mask[:, 1:2])
120 | mask_for_prev_2[prev_mask[:, 1:2] > 0.5] = 1
121 | gt_mask_prev_1 = data['gt'][:, 1]
122 | gt_mask_prev_2 = data['sec_gt'][:, 1]
123 | prev_iou_1 = maskiou(mask_for_prev_1, gt_mask_prev_1)
124 | prev_iou_2 = maskiou(mask_for_prev_2, gt_mask_prev_2)
125 | out['gt_score_1'] = (prev_iou_1 + prev_iou_2) / 2
126 | out['mask_score_1'] = (prev_score_1 + prev_score_2) / 2
127 | ###
128 |
129 | this_logits, this_mask = self.PNet(Fs[:,2], keys, values, mask1=prev_mask[:, 0:1], mask2=prev_mask[:, 1:2], selector=selector)
130 | _, _, this_score_1 = self.PNet(Fs[:, 2], this_mask[:, 0:1], this_mask[:, 1:2])
131 | _, _, this_score_2 = self.PNet(Fs[:, 2], this_mask[:, 1:2], this_mask[:, 0:1])
132 |
133 | ###calculate maskiou
134 | mask_for_this_1 = torch.zeros_like(this_mask[:, 0:1])
135 | mask_for_this_1[this_mask[:, 0:1] > 0.5] = 1
136 | mask_for_this_2 = torch.zeros_like(this_mask[:, 1:2])
137 | mask_for_this_2[this_mask[:, 1:2] > 0.5] = 1
138 | gt_mask_this_1 = data['gt'][:, 2]
139 | gt_mask_this_2 = data['sec_gt'][:, 2]
140 | this_iou_1 = maskiou(mask_for_this_1, gt_mask_this_1)
141 | this_iou_2 = maskiou(mask_for_this_2, gt_mask_this_2)
142 | out['gt_score_2'] = (this_iou_1 + this_iou_2) / 2
143 | out['mask_score_2'] = (this_score_1 + this_score_2) / 2
144 | ###
145 |
146 | out['mask_1'] = prev_mask[:,0:1]
147 | out['mask_2'] = this_mask[:,0:1]
148 | out['sec_mask_1'] = prev_mask[:,1:2]
149 | out['sec_mask_2'] = this_mask[:,1:2]
150 |
151 | out['logits_1'] = prev_logits
152 | out['logits_2'] = this_logits
153 |
154 | if self._do_log or self._is_train:
155 | losses = self.loss_computer.compute({**data, **out}, it)
156 |
157 | # Logging
158 | if self._do_log:
159 | self.integrator.add_dict(losses)
160 | if self._is_train:
161 | if it % self.save_im_interval == 0 and it != 0:
162 | if self.logger is not None:
163 | images = {**data, **out}
164 | size = (384, 384)
165 | self.logger.log_cv2('train/pairs', pool_pairs(images, size, self.single_object), it)
166 |
167 | if self._is_train:
168 | if (it) % self.report_interval == 0 and it != 0:
169 | if self.logger is not None:
170 | self.logger.log_scalar('train/lr', self.scheduler.get_last_lr()[0], it)
171 | self.logger.log_metrics('train', 'time', (time.time()-self.last_time)/self.report_interval, it)
172 | self.last_time = time.time()
173 | self.train_integrator.finalize('train', it)
174 | self.train_integrator.reset_except_hooks()
175 |
176 | if it % self.save_model_interval == 0 and it != 0:
177 | if self.logger is not None:
178 | self.save(it)
179 |
180 | # Backward pass
181 | for param_group in self.optimizer.param_groups:
182 | for p in param_group['params']:
183 | p.grad = None
184 | losses['total_loss'].backward()
185 | self.optimizer.step()
186 | self.scheduler.step()
187 |
188 | def save(self, it):
189 | if self.save_path is None:
190 | print('Saving has been disabled.')
191 | return
192 |
193 | os.makedirs(os.path.dirname(self.save_path), exist_ok=True)
194 | model_path = self.save_path + ('_%s.pth' % it)
195 | torch.save(self.PNet.module.state_dict(), model_path)
196 | print('Model saved to %s.' % model_path)
197 |
198 | self.save_checkpoint(it)
199 |
200 | def save_checkpoint(self, it):
201 | if self.save_path is None:
202 | print('Saving has been disabled.')
203 | return
204 |
205 | os.makedirs(os.path.dirname(self.save_path), exist_ok=True)
206 | checkpoint_path = self.save_path + '_checkpoint.pth'
207 | checkpoint = {
208 | 'it': it,
209 | 'network': self.PNet.module.state_dict(),
210 | 'optimizer': self.optimizer.state_dict(),
211 | 'scheduler': self.scheduler.state_dict()}
212 | torch.save(checkpoint, checkpoint_path)
213 |
214 | print('Checkpoint saved to %s.' % checkpoint_path)
215 |
216 | def load_model(self, path):
217 | map_location = 'cuda:%d' % self.local_rank
218 | checkpoint = torch.load(path, map_location={'cuda:0': map_location})
219 |
220 | it = checkpoint['it']
221 | network = checkpoint['network']
222 | optimizer = checkpoint['optimizer']
223 | scheduler = checkpoint['scheduler']
224 |
225 | map_location = 'cuda:%d' % self.local_rank
226 | self.PNet.module.load_state_dict(network)
227 | self.optimizer.load_state_dict(optimizer)
228 | self.scheduler.load_state_dict(scheduler)
229 |
230 | print('Model loaded.')
231 |
232 | return it
233 |
234 | def load_network(self, path):
235 | map_location = 'cuda:%d' % self.local_rank
236 | src_dict = torch.load(path, map_location={'cuda:0': map_location})
237 |
238 | # Maps SO weight (without other_mask) to MO weight (with other_mask)
239 | for k in list(src_dict.keys()):
240 | if k == 'mask_rgb_encoder.conv1.weight':
241 | if src_dict[k].shape[1] == 4:
242 | pads = torch.zeros((64,1,7,7), device=src_dict[k].device)
243 | nn.init.orthogonal_(pads)
244 | src_dict[k] = torch.cat([src_dict[k], pads], 1)
245 |
246 | self.PNet.module.load_state_dict(src_dict)
247 | print('Network weight loaded:', path)
248 |
249 | def train(self):
250 | self._is_train = True
251 | self._do_log = True
252 | self.integrator = self.train_integrator
253 | # Shall be in eval() mode to freeze BN parameters
254 | self.PNet.eval()
255 | return self
256 |
257 | def val(self):
258 | self._is_train = False
259 | self._do_log = True
260 | self.PNet.eval()
261 | return self
262 |
263 | def test(self):
264 | self._is_train = False
265 | self._do_log = False
266 | self.PNet.eval()
267 | return self
268 |
269 |
--------------------------------------------------------------------------------
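The Score head is supervised by regressing its prediction against the actual mask IoU (`gt_score_*`) with an MSE loss; `maskiou` itself lives in `util/tensor_util.py` (not shown here). A hedged stand-in that matches how it is used above:

```python
import torch

def mask_iou(pred, gt, eps=1e-7):
    """Per-sample IoU of binarized masks; a sketch of what util.tensor_util.maskiou computes."""
    inter = (pred * gt).flatten(1).sum(-1)
    union = ((pred + gt) > 0.5).float().flatten(1).sum(-1)
    return inter / (union + eps)

pred = (torch.rand(2, 1, 64, 64) > 0.5).float()
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(mask_iou(pred, gt))  # the regression target paired with mask_score_* in the loss
```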
/model/modules.py:
--------------------------------------------------------------------------------
1 | """
2 | modules.py - This file stores the rather boring network blocks.
3 | """
4 |
5 |
6 | import torch
7 | import torch.nn as nn
8 | import torch.nn.functional as F
9 | from torchvision import models
10 |
11 | from model import mod_resnet
12 |
13 |
14 | class ResBlock(nn.Module):
15 | def __init__(self, indim, outdim=None):
16 | super(ResBlock, self).__init__()
17 |         if outdim is None:
18 | outdim = indim
19 | if indim == outdim:
20 | self.downsample = None
21 | else:
22 | self.downsample = nn.Conv2d(indim, outdim, kernel_size=3, padding=1)
23 |
24 | self.conv1 = nn.Conv2d(indim, outdim, kernel_size=3, padding=1)
25 | self.conv2 = nn.Conv2d(outdim, outdim, kernel_size=3, padding=1)
26 |
27 | def forward(self, x):
28 | r = self.conv1(F.relu(x))
29 | r = self.conv2(F.relu(r))
30 |
31 | if self.downsample is not None:
32 | x = self.downsample(x)
33 |
34 | return x + r
35 |
36 |
37 | class MaskRGBEncoderSO(nn.Module):
38 | def __init__(self):
39 | super().__init__()
40 |
41 | resnet = mod_resnet.resnet50(pretrained=True, extra_chan=1)
42 | self.conv1 = resnet.conv1
43 | self.bn1 = resnet.bn1
44 | self.relu = resnet.relu # 1/2, 64
45 | self.maxpool = resnet.maxpool
46 |
47 | self.layer1 = resnet.layer1 # 1/4, 256
48 | self.layer2 = resnet.layer2 # 1/8, 512
49 | self.layer3 = resnet.layer3 # 1/16, 1024
50 |
51 | def forward(self, f, m):
52 |
53 | f = torch.cat([f, m], 1)
54 |
55 | x = self.conv1(f)
56 | x = self.bn1(x)
57 | x = self.relu(x) # 1/2, 64
58 | x = self.maxpool(x) # 1/4, 64
59 | x = self.layer1(x) # 1/4, 256
60 | x = self.layer2(x) # 1/8, 512
61 | x = self.layer3(x) # 1/16, 1024
62 |
63 | return x
64 |
65 |
66 | class MaskRGBEncoder(nn.Module):
67 | def __init__(self):
68 | super().__init__()
69 |
70 | resnet = mod_resnet.resnet50(pretrained=True, extra_chan=2)
71 | self.conv1 = resnet.conv1
72 | self.bn1 = resnet.bn1
73 | self.relu = resnet.relu # 1/2, 64
74 | self.maxpool = resnet.maxpool
75 |
76 | self.layer1 = resnet.layer1 # 1/4, 256
77 | self.layer2 = resnet.layer2 # 1/8, 512
78 | self.layer3 = resnet.layer3 # 1/16, 1024
79 |
80 | def forward(self, f, m, o):
81 |
82 | f = torch.cat([f, m, o], 1)
83 |
84 | x = self.conv1(f)
85 | x = self.bn1(x)
86 | x = self.relu(x) # 1/2, 64
87 | x = self.maxpool(x) # 1/4, 64
88 | x = self.layer1(x) # 1/4, 256
89 | x = self.layer2(x) # 1/8, 512
90 | x = self.layer3(x) # 1/16, 1024
91 |
92 | return x
93 |
94 |
95 | class RGBEncoder(nn.Module):
96 | def __init__(self):
97 | super().__init__()
98 | # resnet = models.resnet50(pretrained=True)
99 | resnet = mod_resnet.resnet50(pretrained=True) #use mod_resnet as backbone
100 | self.conv1 = resnet.conv1
101 | self.bn1 = resnet.bn1
102 | self.relu = resnet.relu # 1/2, 64
103 | self.maxpool = resnet.maxpool
104 |
105 | self.res2 = resnet.layer1 # 1/4, 256
106 | self.layer2 = resnet.layer2 # 1/8, 512
107 | self.layer3 = resnet.layer3 # 1/16, 1024
108 |
109 | def forward(self, f):
110 | x = self.conv1(f)
111 | x = self.bn1(x)
112 | x = self.relu(x) # 1/2, 64
113 | x = self.maxpool(x) # 1/4, 64
114 | f4 = self.res2(x) # 1/4, 256
115 | f8 = self.layer2(f4) # 1/8, 512
116 | f16 = self.layer3(f8) # 1/16, 1024
117 |
118 | return f16, f8, f4
119 |
120 |
121 | class UpsampleBlock(nn.Module):
122 | def __init__(self, skip_c, up_c, out_c, scale_factor=2):
123 | super().__init__()
124 | self.skip_conv1 = nn.Conv2d(skip_c, up_c, kernel_size=3, padding=1)
125 | self.skip_conv2 = ResBlock(up_c, up_c)
126 | self.out_conv = ResBlock(up_c, out_c)
127 | self.scale_factor = scale_factor
128 |
129 | def forward(self, skip_f, up_f):
130 | x = self.skip_conv2(self.skip_conv1(skip_f))
131 | x = x + F.interpolate(up_f, scale_factor=self.scale_factor, mode='bilinear', align_corners=False)
132 | x = self.out_conv(x)
133 | return x
134 |
135 |
136 | class KeyValue(nn.Module):
137 | def __init__(self, indim, keydim, valdim):
138 | super().__init__()
139 | self.key_proj = nn.Conv2d(indim, keydim, kernel_size=3, padding=1)
140 | self.val_proj = nn.Conv2d(indim, valdim, kernel_size=3, padding=1)
141 |
142 | def forward(self, x):
143 | return self.key_proj(x), self.val_proj(x)
144 |
145 | class Score(nn.Module):
146 | def __init__(self, input_chan):
147 | super(Score, self).__init__()
148 | input_channels = input_chan
149 | self.conv1 = nn.Conv2d(input_channels, 256, 3, 1, 1)
150 | self.conv2 = nn.Conv2d(256, 256, 3, 1, 1)
151 | self.conv3 = nn.Conv2d(256, 256, 3, 1, 1)
152 | self.conv4 = nn.Conv2d(256, 256, 3, 2, 1)
153 |
154 | self.fc1 = nn.Linear(256*12*12, 1024) #fc layers
155 | self.fc2 = nn.Linear(1024, 1)
156 |
157 | # self.gav = nn.AdaptiveAvgPool2d(1) #global average pooling layers
158 | # self.fc = nn.Linear(256, 1)
159 |
160 | for i in [self.conv1, self.conv2, self.conv3, self.conv4]:
161 | nn.init.kaiming_normal_(i.weight, mode='fan_out', nonlinearity='relu')
162 | nn.init.constant_(i.bias, 0)
163 |
164 | for i in [self.fc1, self.fc2]:
165 | nn.init.kaiming_uniform_(i.weight, a=1)
166 | nn.init.constant_(i.bias, 0)
167 |
168 | def forward(self, x):
169 | x = F.relu(self.conv1(x))
170 | x = F.relu(self.conv2(x))
171 | x = F.relu(self.conv3(x))
172 | x = F.relu(self.conv4(x))
173 | x = x.view(x.size(0), -1)
174 | x = F.relu(self.fc1(x))
175 | x = F.relu(self.fc2(x))
176 |
177 | # x = self.gav(x) #the method of gav
178 | # x = x.view(x.size(0), -1)
179 | # x = F.leaky_relu(self.fc(x))
180 | return x
181 |
182 | class _ASPPModule(nn.Module):
183 | def __init__(self, inplanes, planes, kernel_size, padding, dilation, BatchNorm):
184 | super(_ASPPModule, self).__init__()
185 | self.atrous_conv = nn.Conv2d(inplanes, planes, kernel_size=kernel_size, stride=1,
186 | padding=padding, dilation=dilation, bias=False)
187 |
188 | self.bn = BatchNorm(planes)
189 | self.relu = nn.ReLU(inplace=True)
190 | self._init_weight()
191 |
192 | def forward(self, x):
193 | x = self.atrous_conv(x)
194 | x = self.bn(x)
195 | return self.relu(x)
196 |
197 | def _init_weight(self):
198 | for m in self.modules():
199 | if isinstance(m, nn.Conv2d):
200 | torch.nn.init.kaiming_normal_(m.weight)
201 | elif isinstance(m, nn.BatchNorm2d):
202 | m.weight.data.fill_(1)
203 | m.bias.data.zero_()
204 |
205 | class ASPP(nn.Module):
206 | def __init__(self, inplanes, output_stride=16, BatchNorm=nn.BatchNorm2d):
207 | super(ASPP, self).__init__()
208 | if output_stride == 16:
209 | dilations = [1, 6, 12, 18]
210 | elif output_stride == 8:
211 | dilations = [1, 12, 24, 36]
212 | else:
213 | raise NotImplementedError
214 |
215 | self.aspp1 = _ASPPModule(inplanes, 256, 1, padding=0, dilation=dilations[0], BatchNorm=BatchNorm)
216 | self.aspp2 = _ASPPModule(inplanes, 256, 3, padding=dilations[1], dilation=dilations[1], BatchNorm=BatchNorm)
217 | self.aspp3 = _ASPPModule(inplanes, 256, 3, padding=dilations[2], dilation=dilations[2], BatchNorm=BatchNorm)
218 | self.aspp4 = _ASPPModule(inplanes, 256, 3, padding=dilations[3], dilation=dilations[3], BatchNorm=BatchNorm)
219 | self.global_avg_pool = nn.Sequential(nn.AdaptiveAvgPool2d((1,1)), nn.Conv2d(inplanes, 256, 1, stride=1, bias=False),
220 | BatchNorm(256), nn.ReLU(inplace=True))
221 | self.conv1 = nn.Conv2d(1280, 1024, 1, bias=False)
222 | self.bn1 = BatchNorm(1024)
223 | self.relu = nn.ReLU(inplace=True)
224 | self.dropout = nn.Dropout(0.1)
225 | self._init_weight()
226 |
227 | def forward(self, x):
228 | x1 = self.aspp1(x)
229 | x2 = self.aspp2(x)
230 | x3 = self.aspp3(x)
231 | x4 = self.aspp4(x)
232 | x5 = self.global_avg_pool(x)
233 | x5 = F.interpolate(x5, size=x4.size()[2:], mode='bilinear', align_corners=True)
234 | x = torch.cat((x1, x2, x3, x4, x5), dim=1)
235 | x = self.conv1(x)
236 | x = self.bn1(x)
237 | x = self.relu(x)
238 | return self.dropout(x)
239 |
240 | def _init_weight(self):
241 | for m in self.modules():
242 | if isinstance(m, nn.Conv2d):
243 | torch.nn.init.kaiming_normal_(m.weight)
244 | elif isinstance(m, nn.BatchNorm2d):
245 | m.weight.data.fill_(1)
246 | m.bias.data.zero_()
247 |
--------------------------------------------------------------------------------
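`Score` collapses a 1024-channel feature map into one scalar per object; at evaluation time the input is first resized to 24x24 so that `conv4` (stride 2) yields the 12x12 grid `fc1` expects. Shape check:

```python
import torch
from model.modules import Score

score_head = Score(1024)
f16 = torch.randn(2, 1024, 24, 24)  # memory-encoder feature, resized as in eval_network.py
print(score_head(f16).shape)        # torch.Size([2, 1]) -- one quality score per object
```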
/model/network.py:
--------------------------------------------------------------------------------
1 | import math
2 | import torch
3 | import torch.nn as nn
4 | import torch.nn.functional as F
5 |
6 | from model.modules import *
7 |
8 |
9 | class Decoder(nn.Module):
10 | def __init__(self):
11 | super().__init__()
12 | self.compress = ResBlock(1024, 512)
13 | self.up_16_8 = UpsampleBlock(512, 512, 256) # 1/16 -> 1/8
14 | self.up_8_4 = UpsampleBlock(256, 256, 256) # 1/8 -> 1/4
15 |
16 | self.pred = nn.Conv2d(256, 1, kernel_size=(3,3), padding=(1,1), stride=1)
17 |
18 | def forward(self, f16, f8, f4):
19 | x = self.compress(f16)
20 | x = self.up_16_8(f8, x)
21 | x = self.up_8_4(f4, x)
22 |
23 | x = self.pred(F.relu(x))
24 |
25 | x = F.interpolate(x, scale_factor=4, mode='bilinear', align_corners=False)
26 | return x
27 |
28 | class MemoryReader(nn.Module):
29 | def __init__(self):
30 | super().__init__()
31 |
32 | def forward(self, mk, mv, qk, qv):
33 | B, CK, T, H, W = mk.shape
34 | _, CV, _, _, _ = mv.shape
35 |
36 | mi = mk.view(B, CK, T*H*W)
37 | mi = torch.transpose(mi, 1, 2) # B * THW * CK
38 |
39 | qi = qk.view(B, CK, H*W) / math.sqrt(CK) # B * CK * HW
40 |
41 | affinity = torch.bmm(mi, qi) # B, THW, HW
42 | affinity = F.softmax(affinity, dim=1) # B, THW, HW
43 |
44 | mv = mv.view(B, CV, T*H*W)
45 | mem = torch.bmm(mv, affinity) # Weighted-sum B, CV, HW
46 | mem = mem.view(B, CV, H, W)
47 |
48 | mem_out = torch.cat([mem, qv], dim=1)
49 |
50 | return mem_out
51 |
52 |
53 | class PropagationNetwork(nn.Module):
54 | def __init__(self, single_object):
55 | super().__init__()
56 | self.single_object = single_object
57 |
58 | if single_object:
59 | self.mask_rgb_encoder = MaskRGBEncoderSO()
60 | else:
61 | self.mask_rgb_encoder = MaskRGBEncoder()
62 | self.rgb_encoder = RGBEncoder()
63 |
64 | self.kv_m_f16 = KeyValue(1024, keydim=128, valdim=512)
65 | self.kv_q_f16 = KeyValue(1024, keydim=128, valdim=512)
66 |
67 | self.memory = MemoryReader()
68 | self.decoder = Decoder()
69 | self.aspp = ASPP(1024)
70 | self.score = Score(1024)
71 | self.concat_conv = nn.Conv2d(1025, 1, 3, 1, 1)
72 |
73 | def aggregate(self, prob):
74 | new_prob = torch.cat([
75 | torch.prod(1-prob, dim=1, keepdim=True),
76 | prob
77 | ], 1).clamp(1e-7, 1-1e-7)
78 | logits = torch.log((new_prob /(1-new_prob)))
79 | return logits
80 |
81 | def memorize(self, frame, mask, other_mask=None):
82 | # Extract memory key/value for a frame
83 | if self.single_object:
84 | f16 = self.mask_rgb_encoder(frame, mask)
85 | else:
86 | f16 = self.mask_rgb_encoder(frame, mask, other_mask)
87 | k16, v16 = self.kv_m_f16(f16)
88 | mask_score = self.score(f16)
89 | return k16.unsqueeze(2), v16.unsqueeze(2), mask_score # B*C*T*H*W
90 |
91 | def segment(self, frame, keys, values, mask1=None, mask2=None, selector=None):
92 | b, k = keys.shape[:2]
93 |
94 | ###enhance
95 | if self.single_object:
96 | mask = mask1.clone().detach()
97 | else:
98 | mask1_detach = mask1.clone().detach()
99 | mask2_detach = mask2.clone().detach()
100 | mask1_detach = mask1_detach.unsqueeze(0)
101 | mask2_detach = mask2_detach.unsqueeze(0)
102 | mask_all = torch.cat([mask1_detach, mask2_detach], dim=0)
103 | mask, _ = torch.max(mask_all, dim=0)
104 |
105 | f16, f8, f4 = self.rgb_encoder(frame)
106 | b, c, h, w = f16.size()
107 | mask_reshape = F.interpolate(mask, size=[h, w], mode='bilinear')
108 | concat_f16 = torch.cat([f16, mask_reshape], dim=1) #B,C+1,H,W
109 | concat_f16 = torch.sigmoid(self.concat_conv(concat_f16))
110 | concat_f16 = f16 * concat_f16
111 |
112 | k16, v16 = self.kv_q_f16(concat_f16) #B,C,H,W
113 |
114 | if self.single_object:
115 | mr = self.memory(keys, values, k16, v16)
116 | mr = self.aspp(mr)
117 | logits = self.decoder(mr, f8, f4)
118 | prob = torch.sigmoid(logits)
119 | else:
120 | mr_0 = self.memory(keys[:,0], values[:,0], k16, v16)
121 | mr_0 = self.aspp(mr_0)
122 | logits_0 = self.decoder(mr_0, f8, f4)
123 | mr_1 = self.memory(keys[:,1], values[:,1], k16, v16)
124 | mr_1 = self.aspp(mr_1)
125 | logits_1 = self.decoder(mr_1, f8, f4)
126 | logits = torch.cat([logits_0, logits_1], dim=1)
127 | prob = torch.sigmoid(logits)
128 | prob = prob * selector.unsqueeze(2).unsqueeze(2)
129 |
130 | logits = self.aggregate(prob)
131 | prob = F.softmax(logits, dim=1)[:, 1:]
132 |
133 | return logits, prob
134 |
135 | def forward(self, *args, **kwargs):
136 | if args[1].dim() > 4: # keys
137 | return self.segment(*args, **kwargs)
138 | else:
139 | return self.memorize(*args, **kwargs)
140 |
141 |
142 |
--------------------------------------------------------------------------------
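`aggregate` is the training-time counterpart of `aggregate_wbg`: the background probability is the product of (1 - p_k) over the object channels (dim=1) and the clamped result is mapped back to logits via the inverse sigmoid. The same math in isolation:

```python
import torch

prob = torch.rand(1, 2, 64, 64)                    # B, K objects, H, W
bg = torch.prod(1 - prob, dim=1, keepdim=True)     # background = no object present
new_prob = torch.cat([bg, prob], 1).clamp(1e-7, 1 - 1e-7)
logits = torch.log(new_prob / (1 - new_prob))      # inverse sigmoid
final = torch.softmax(logits, dim=1)               # B, K+1, H, W distribution
```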
/requirements.txt:
--------------------------------------------------------------------------------
1 | tensorboard
2 | argparse
3 | opencv-python
4 | progressbar2
5 | gdown
6 | gitPython
7 | pynvml
8 | pillow
9 | git+https://github.com/cheind/py-thin-plate-spline
--------------------------------------------------------------------------------
/scripts/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yongliu20/QDMN/301f7bff500dc5d642407f2de11248b97e96178f/scripts/__init__.py
--------------------------------------------------------------------------------
/scripts/resize_length.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import os
3 | import cv2
4 |
5 | from progressbar import progressbar
6 |
7 | input_dir = sys.argv[1]
8 | output_dir = sys.argv[2]
9 |
10 | # max_length = 500
11 | min_length = 384
12 |
13 | def process_fun():
14 |
15 | for f in progressbar(os.listdir(input_dir)):
16 | img = cv2.imread(os.path.join(input_dir, f))
17 | h, w, _ = img.shape
18 |
19 | # scale = max(h, w) / max_length
20 | scale = min(h, w) / min_length
21 |
22 | img = cv2.resize(img, (int(w/scale), int(h/scale)), interpolation=cv2.INTER_AREA)
23 | cv2.imwrite(os.path.join(output_dir, os.path.basename(f)), img)
24 |
25 | if __name__ == '__main__':
26 |
27 | os.makedirs(output_dir, exist_ok=True)
28 | process_fun()
29 |
30 | print('All done.')
--------------------------------------------------------------------------------
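Since the script reads `sys.argv` at import time, it is meant to be run as a command rather than imported. A hedged invocation with placeholder paths:

```python
import subprocess
import sys

# Paths are placeholders; the script resizes so the shorter side becomes 384 px.
subprocess.run([sys.executable, 'scripts/resize_length.py',
                'data/frames_in', 'data/frames_out'], check=True)
```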
/scripts/resize_youtube.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import os
3 | from os import path
4 |
5 | from PIL import Image
6 | import numpy as np
7 | from progressbar import progressbar
8 | from multiprocessing import Pool
9 |
10 | new_min_size = 480
11 |
12 | def resize_vid_jpeg(inputs):
13 | vid_name, folder_path, out_path = inputs
14 |
15 | vid_path = path.join(folder_path, vid_name)
16 | vid_out_path = path.join(out_path, 'JPEGImages', vid_name)
17 | os.makedirs(vid_out_path, exist_ok=True)
18 |
19 | for im_name in os.listdir(vid_path):
20 | hr_im = Image.open(path.join(vid_path, im_name))
21 | w, h = hr_im.size
22 |
23 | ratio = new_min_size / min(w, h)
24 |
25 | lr_im = hr_im.resize((int(w*ratio), int(h*ratio)), Image.BICUBIC)
26 | lr_im.save(path.join(vid_out_path, im_name))
27 |
28 | def resize_vid_anno(inputs):
29 | vid_name, folder_path, out_path = inputs
30 |
31 | vid_path = path.join(folder_path, vid_name)
32 | vid_out_path = path.join(out_path, 'Annotations', vid_name)
33 | os.makedirs(vid_out_path, exist_ok=True)
34 |
35 | for im_name in os.listdir(vid_path):
36 | hr_im = Image.open(path.join(vid_path, im_name)).convert('P')
37 | w, h = hr_im.size
38 |
39 | ratio = new_min_size / min(w, h)
40 |
41 | lr_im = hr_im.resize((int(w*ratio), int(h*ratio)), Image.NEAREST)
42 | lr_im.save(path.join(vid_out_path, im_name))
43 |
44 |
45 | def resize_all(in_path, out_path):
46 | for folder in os.listdir(in_path):
47 |
48 | if folder not in ['JPEGImages', 'Annotations']:
49 | continue
50 | folder_path = path.join(in_path, folder)
51 | videos = os.listdir(folder_path)
52 |
53 | videos = [(v, folder_path, out_path) for v in videos]
54 |
55 | if folder == 'JPEGImages':
56 | print('Processing images')
57 | os.makedirs(path.join(out_path, 'JPEGImages'), exist_ok=True)
58 |
59 | pool = Pool(processes=8)
60 | for _ in progressbar(pool.imap_unordered(resize_vid_jpeg, videos), max_value=len(videos)):
61 | pass
62 | else:
63 | print('Processing annotations')
64 | os.makedirs(path.join(out_path, 'Annotations'), exist_ok=True)
65 |
66 | pool = Pool(processes=8)
67 | for _ in progressbar(pool.imap_unordered(resize_vid_anno, videos), max_value=len(videos)):
68 | pass
69 |
70 |
71 | if __name__ == '__main__':
72 | in_path = sys.argv[1]
73 | out_path = sys.argv[2]
74 |
75 | resize_all(in_path, out_path)
76 |
77 | print('Done.')
--------------------------------------------------------------------------------
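`resize_all` can also be called programmatically; since it spawns a multiprocessing `Pool`, keep the call under a main guard. The paths below are assumptions following the default data layout:

```python
from scripts.resize_youtube import resize_all

if __name__ == '__main__':
    # Downscales JPEGImages (bicubic) and Annotations (nearest) to a 480 px shorter side.
    resize_all('data/YouTube/train', 'data/YouTube/train_480p')
```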
/train.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | from os import path
3 | import math
4 |
5 | import random
6 | import numpy as np
7 | import torch
8 | from torch.utils.data import DataLoader, ConcatDataset
9 | import torch.distributed as distributed
10 |
11 | from model.model import PropagationModel
12 | from dataset.static_dataset import StaticTransformDataset
13 | from dataset.vos_dataset import VOSDataset
14 |
15 | from util.logger import TensorboardLogger
16 | from util.hyper_para import HyperParameters
17 | from util.load_subset import load_sub_davis, load_sub_yv
18 |
19 |
20 | """
21 | Initial setup
22 | """
23 | # Init distributed environment
24 | distributed.init_process_group(backend="nccl")
25 | torch.manual_seed(14159265)
26 | np.random.seed(14159265)
27 | random.seed(14159265)
28 |
29 | print('CUDA Device count: ', torch.cuda.device_count())
30 |
31 | # Parse command line arguments
32 | para = HyperParameters()
33 | para.parse()
34 |
35 | if para['benchmark']:
36 | torch.backends.cudnn.benchmark = True
37 |
38 | local_rank = torch.distributed.get_rank()
39 | world_size = torch.distributed.get_world_size()
40 | torch.cuda.set_device(local_rank)
41 |
42 | print('I am rank %d in this world of size %d!' % (local_rank, world_size))
43 |
44 | """
45 | Model related
46 | """
47 | if local_rank == 0:
48 | # Logging
49 | if para['id'].lower() != 'null':
50 | print('I will take the role of logging!')
51 | long_id = '%s_%s' % (datetime.datetime.now().strftime('%b%d_%H.%M.%S'), para['id'])
52 | else:
53 | long_id = None
54 | logger = TensorboardLogger(para['id'], long_id)
55 | logger.log_string('hyperpara', str(para))
56 |
57 | # Construct the rank 0 model
58 | model = PropagationModel(para, logger=logger,
59 | save_path=path.join('saves', long_id, long_id) if long_id is not None else None,
60 | local_rank=local_rank, world_size=world_size).train()
61 | else:
62 | # Construct model for other ranks
63 | model = PropagationModel(para, local_rank=local_rank, world_size=world_size).train()
64 |
65 | # Load pretrained model if needed
66 | if para['load_model'] is not None:
67 | total_iter = model.load_model(para['load_model'])
68 | print('Previously trained model loaded!')
69 | else:
70 | total_iter = 0
71 |
72 | if para['load_network'] is not None:
73 | model.load_network(para['load_network'])
74 | print('Previously trained network loaded!')
75 |
76 | """
77 | Dataloader related
78 | """
79 | # To re-seed the randomness every time we start a worker
80 | def worker_init_fn(worker_id):
81 | return np.random.seed(torch.initial_seed()%(2**31) + worker_id + local_rank*100)
82 |
83 | def construct_loader(dataset):
84 | train_sampler = torch.utils.data.distributed.DistributedSampler(dataset, rank=local_rank, shuffle=True)
85 | train_loader = DataLoader(dataset, para['batch_size'], sampler=train_sampler, num_workers=8,
86 | worker_init_fn=worker_init_fn, drop_last=True, pin_memory=True)
87 | return train_sampler, train_loader
88 |
89 | def renew_vos_loader(max_skip_yt, max_skip_da):
90 | yv_dataset = VOSDataset(path.join(yv_root, 'JPEGImages'),
91 | path.join(yv_root, 'Annotations'), max_skip_yt//5, is_bl=False, subset=load_sub_yv())
92 | davis_dataset = VOSDataset(path.join(davis_root, 'JPEGImages', '480p'),
93 | path.join(davis_root, 'Annotations', '480p'), max_skip_da, is_bl=False, subset=load_sub_davis())
94 | train_dataset = ConcatDataset([davis_dataset]*5 + [yv_dataset])
95 |
96 | print('YouTube dataset size: ', len(yv_dataset))
97 | print('DAVIS dataset size: ', len(davis_dataset))
98 | print('Concat dataset size: ', len(train_dataset))
99 |
100 |
101 | return construct_loader(train_dataset)
102 |
103 | def renew_bl_loader(max_skip):
104 | train_dataset = VOSDataset(path.join(bl_root, 'JPEGImages'),
105 | path.join(bl_root, 'Annotations'), max_skip, is_bl=True)
106 |
107 | print('Blender dataset size: ', len(train_dataset))
108 | print('Renewed with skip: ', max_skip)
109 |
110 | return construct_loader(train_dataset)
111 |
112 | """
113 | Dataset related
114 | """
115 | skip_values = [10, 15, 20, 25, 10, 5]
116 | davis_skip = [10, 15, 25, 10, 5, 1]
117 |
118 | if para['stage'] == 0:
119 | static_root = path.expanduser(para['static_root'])
120 | fss_dataset = StaticTransformDataset(path.join(static_root, 'fss'), method=0)
121 | duts_tr_dataset = StaticTransformDataset(path.join(static_root, 'DUTS-TR'), method=1)
122 | duts_te_dataset = StaticTransformDataset(path.join(static_root, 'DUTS-TE'), method=1)
123 | ecssd_dataset = StaticTransformDataset(path.join(static_root, 'ecssd'), method=1)
124 |
125 | big_dataset = StaticTransformDataset(path.join(static_root, 'BIG_small'), method=1)
126 | hrsod_dataset = StaticTransformDataset(path.join(static_root, 'HRSOD_small'), method=1)
127 |
128 | # BIG and HRSOD have higher quality, use them more
129 | train_dataset = ConcatDataset([fss_dataset, duts_tr_dataset, duts_te_dataset, ecssd_dataset]
130 | + [big_dataset, hrsod_dataset]*5)
131 | train_sampler, train_loader = construct_loader(train_dataset)
132 |
133 | print('Static dataset size: ', len(train_dataset))
134 | elif para['stage'] == 1:
135 | increase_skip_fraction = [0.1, 0.2, 0.3, 0.4, 0.8, 1.0]
136 | bl_root = path.join(path.expanduser(para['bl_root']))
137 |
138 | train_sampler, train_loader = renew_bl_loader(5)
139 | renew_loader = renew_bl_loader
140 | else:
141 | increase_skip_fraction = [0.1, 0.2, 0.3, 0.4, 0.7, 0.9, 1.0]
142 | # VOS dataset, 480p is used for both datasets
143 | yv_root = path.join(path.expanduser(para['yv_root']), 'train_480p')
144 | davis_root = path.join(path.expanduser(para['davis_root']), '2017', 'trainval')
145 |
146 | train_sampler, train_loader = renew_vos_loader(5, 5)
147 | renew_loader = renew_vos_loader
148 |
149 |
150 | """
151 | Determine current/max epoch
152 | """
153 | total_epoch = math.ceil(para['iterations']/len(train_loader))
154 | current_epoch = total_iter // len(train_loader)
155 | print('Number of training epochs (the last epoch might not complete): ', total_epoch)
156 | if para['stage'] != 0:
157 | increase_skip_epoch = [round(total_epoch*f) for f in increase_skip_fraction]
158 | # Skip will only change after an epoch, not in the middle
159 | print('The skip value will increase approximately at the following epochs: ', increase_skip_epoch[:-1])
160 |
161 | """
162 | Starts training
163 | """
164 | # Need this to select random bases in different workers
165 | np.random.seed(np.random.randint(2**30-1) + local_rank*100)
166 | try:
167 | for e in range(current_epoch, total_epoch):
168 | print('Epoch %d/%d' % (e, total_epoch))
169 | if para['stage']==2 and e!=total_epoch and e>=increase_skip_epoch[0]:
170 | while e >= increase_skip_epoch[0]:
171 | cur_skip = skip_values[0]
172 | cur_skip_davis = davis_skip[0]
173 | skip_values = skip_values[1:]
174 | davis_skip = davis_skip[1:]
175 | increase_skip_epoch = increase_skip_epoch[1:]
176 | print('Increasing skip to: ', cur_skip)
177 | train_sampler, train_loader = renew_loader(cur_skip, cur_skip_davis)
178 |
179 | if para['stage']==1 and e!=total_epoch and e>=increase_skip_epoch[0]:
180 | while e >= increase_skip_epoch[0]:
181 | cur_skip = skip_values[0]
182 | skip_values = skip_values[1:]
183 | increase_skip_epoch = increase_skip_epoch[1:]
184 | print('Increasing skip to: ', cur_skip)
185 | train_sampler, train_loader = renew_loader(cur_skip)
186 |
187 | # Crucial for randomness!
188 | train_sampler.set_epoch(e)
189 |
190 | # Train loop
191 | model.train()
192 | for data in train_loader:
193 | model.do_pass(data, total_iter)
194 | total_iter += 1
195 |
196 | if total_iter >= para['iterations']:
197 | break
198 | finally:
199 | if not para['debug'] and model.logger is not None and total_iter>5000:
200 | model.save(total_iter)
201 | # Clean up
202 | distributed.destroy_process_group()
203 |
--------------------------------------------------------------------------------
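The curriculum above only changes `max_skip` at epoch boundaries: `increase_skip_fraction` is turned into concrete epochs once the loader length is known. A worked example with an assumed loader length:

```python
import math

iterations, loader_len = 150000, 2500               # loader_len is an assumption
total_epoch = math.ceil(iterations / loader_len)    # 60
fractions = [0.1, 0.2, 0.3, 0.4, 0.7, 0.9, 1.0]     # stage-2 schedule
print([round(total_epoch * f) for f in fractions])  # [6, 12, 18, 24, 42, 54, 60]
```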
/util/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yongliu20/QDMN/301f7bff500dc5d642407f2de11248b97e96178f/util/__init__.py
--------------------------------------------------------------------------------
/util/davis_subset.txt:
--------------------------------------------------------------------------------
1 | bear
2 | bmx-bumps
3 | boat
4 | boxing-fisheye
5 | breakdance-flare
6 | bus
7 | car-turn
8 | cat-girl
9 | classic-car
10 | color-run
11 | crossing
12 | dance-jump
13 | dancing
14 | disc-jockey
15 | dog-agility
16 | dog-gooses
17 | dogs-scale
18 | drift-turn
19 | drone
20 | elephant
21 | flamingo
22 | hike
23 | hockey
24 | horsejump-low
25 | kid-football
26 | kite-walk
27 | koala
28 | lady-running
29 | lindy-hop
30 | longboard
31 | lucia
32 | mallard-fly
33 | mallard-water
34 | miami-surf
35 | motocross-bumps
36 | motorbike
37 | night-race
38 | paragliding
39 | planes-water
40 | rallye
41 | rhino
42 | rollerblade
43 | schoolgirls
44 | scooter-board
45 | scooter-gray
46 | sheep
47 | skate-park
48 | snowboard
49 | soccerball
50 | stroller
51 | stunt
52 | surf
53 | swing
54 | tennis
55 | tractor-sand
56 | train
57 | tuk-tuk
58 | upside-down
59 | varanus-cage
60 | walking
--------------------------------------------------------------------------------
/util/hyper_para.py:
--------------------------------------------------------------------------------
1 | from argparse import ArgumentParser
2 |
3 |
4 | def none_or_default(x, default):
5 | return x if x is not None else default
6 |
7 | class HyperParameters():
8 | def parse(self, unknown_arg_ok=False):
9 | parser = ArgumentParser()
10 |
11 | # Enable torch.backends.cudnn.benchmark -- Faster in some cases, test in your own environment
12 | parser.add_argument('--benchmark', action='store_true')
13 |
14 | # Data parameters
15 | parser.add_argument('--static_root', help='Static training data root', default='data/static')
16 | parser.add_argument('--bl_root', help='Blender training data root', default='data/BL30K')
17 | parser.add_argument('--yv_root', help='YouTubeVOS data root', default='data/YouTube')
18 | parser.add_argument('--davis_root', help='DAVIS data root', default='data/DAVIS')
19 |
20 | parser.add_argument('--stage', help='Training stage (0-static images, 1-Blender dataset, 2-DAVIS+YouTubeVOS)', type=int, default=0)
21 |
22 | # Generic learning parameters
23 | parser.add_argument('-b', '--batch_size', help='Default is dependent on the training stage, see below', default=None, type=int)
24 | parser.add_argument('-i', '--iterations', help='Default is dependent on the training stage, see below', default=None, type=int)
25 | parser.add_argument('--steps', help='Default is dependent on the training stage, see below', nargs="*", default=None, type=int)
26 |
27 | parser.add_argument('--lr', help='Initial learning rate', default=2e-5, type=float)
28 | parser.add_argument('--gamma', help='LR := LR*gamma at every decay step', default=0.1, type=float)
29 |
30 | # Loading
31 | parser.add_argument('--load_network', help='Path to pretrained network weight only')
32 | parser.add_argument('--load_model', help='Path to the model file, including network, optimizer and such')
33 |
34 | # Logging information
35 | parser.add_argument('--id', help='Experiment UNIQUE id, use NULL to disable logging to tensorboard', default='NULL')
36 | parser.add_argument('--debug', help='Debug mode which logs information more often', action='store_true')
37 |
38 | # Multiprocessing parameters, not set by users
39 | parser.add_argument('--local_rank', default=0, type=int, help='Local rank of this process')
40 |
41 | if unknown_arg_ok:
42 | args, _ = parser.parse_known_args()
43 | self.args = vars(args)
44 | else:
45 | self.args = vars(parser.parse_args())
46 |
47 | # Stage-dependent hyperparameters
48 | # Assign default if not given
49 | if self.args['stage'] == 0:
50 | self.args['batch_size'] = none_or_default(self.args['batch_size'], 14)
51 | self.args['iterations'] = none_or_default(self.args['iterations'], 160000)
52 | self.args['steps'] = none_or_default(self.args['steps'], [250000])
53 | self.args['single_object'] = True
54 | elif self.args['stage'] == 1:
55 | self.args['batch_size'] = none_or_default(self.args['batch_size'], 8)
56 | self.args['iterations'] = none_or_default(self.args['iterations'], 250000)
57 | self.args['steps'] = none_or_default(self.args['steps'], [450000])
58 | self.args['single_object'] = False
59 | else:
60 | self.args['batch_size'] = none_or_default(self.args['batch_size'], 8)
61 | self.args['iterations'] = none_or_default(self.args['iterations'], 150000)
62 | self.args['steps'] = none_or_default(self.args['steps'], [70000])
63 | self.args['single_object'] = False
64 |
65 | def __getitem__(self, key):
66 | return self.args[key]
67 |
68 | def __setitem__(self, key, value):
69 | self.args[key] = value
70 |
71 | def __str__(self):
72 | return str(self.args)
73 |
--------------------------------------------------------------------------------
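`HyperParameters` is a thin dict-like wrapper over argparse that fills in stage-dependent defaults after parsing. A minimal usage sketch:

```python
from util.hyper_para import HyperParameters

para = HyperParameters()
para.parse(unknown_arg_ok=True)  # tolerate CLI flags this parser does not know
print(para['stage'], para['batch_size'], para['lr'])  # defaults depend on --stage
```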
/util/image_saver.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import numpy as np
3 |
4 | import torch
5 | from dataset.range_transform import inv_im_trans
6 | from collections import defaultdict
7 |
8 | def tensor_to_numpy(image):
9 | image_np = (image.numpy() * 255).astype('uint8')
10 | return image_np
11 |
12 | def tensor_to_np_float(image):
13 | image_np = image.numpy().astype('float32')
14 | return image_np
15 |
16 | def detach_to_cpu(x):
17 | return x.detach().cpu()
18 |
19 | def transpose_np(x):
20 | return np.transpose(x, [1,2,0])
21 |
22 | def tensor_to_gray_im(x):
23 | x = detach_to_cpu(x)
24 | x = tensor_to_numpy(x)
25 | x = transpose_np(x)
26 | return x
27 |
28 | def tensor_to_im(x):
29 | x = detach_to_cpu(x)
30 | x = inv_im_trans(x).clamp(0, 1)
31 | x = tensor_to_numpy(x)
32 | x = transpose_np(x)
33 | return x
34 |
35 | # Predefined key <-> caption dict
36 | key_captions = {
37 | 'im': 'Image',
38 | 'gt': 'GT',
39 | }
40 |
41 | """
42 | Return an image array with captions
43 | keys in dictionary will be used as caption if not provided
44 | values should contain lists of cv2 images
45 | """
46 | def get_image_array(images, grid_shape, captions={}):
47 | h, w = grid_shape
48 | cate_counts = len(images)
49 | rows_counts = len(next(iter(images.values())))
50 |
51 | font = cv2.FONT_HERSHEY_SIMPLEX
52 |
53 | output_image = np.zeros([w*cate_counts, h*(rows_counts+1), 3], dtype=np.uint8)
54 | col_cnt = 0
55 | for k, v in images.items():
56 |
57 | # Default as key value itself
58 | caption = captions.get(k, k)
59 |
60 | # Handles new line character
61 | dy = 40
62 | for i, line in enumerate(caption.split('\n')):
63 | cv2.putText(output_image, line, (10, col_cnt*w+100+i*dy),
64 | font, 0.8, (255,255,255), 2, cv2.LINE_AA)
65 |
66 | # Put images
67 | for row_cnt, img in enumerate(v):
68 | im_shape = img.shape
69 | if len(im_shape) == 2:
70 | img = img[..., np.newaxis]
71 |
72 | img = (img * 255).astype('uint8')
73 |
74 | output_image[(col_cnt+0)*w:(col_cnt+1)*w,
75 | (row_cnt+1)*h:(row_cnt+2)*h, :] = img
76 |
77 | col_cnt += 1
78 |
79 | return output_image
80 |
81 | def base_transform(im, size):
82 | im = tensor_to_np_float(im)
83 | if len(im.shape) == 3:
84 | im = im.transpose((1, 2, 0))
85 | else:
86 | im = im[:, :, None]
87 |
88 |     # Resize only if the (w, h) differs from the target size
89 |     if (im.shape[1], im.shape[0]) != size:
90 |         im = cv2.resize(im, size, interpolation=cv2.INTER_NEAREST)
91 |
92 | return im.clip(0, 1)
93 |
94 | def im_transform(im, size):
95 | return base_transform(inv_im_trans(detach_to_cpu(im)), size=size)
96 |
97 | def mask_transform(mask, size):
98 | return base_transform(detach_to_cpu(mask), size=size)
99 |
100 | def out_transform(mask, size):
101 | return base_transform(detach_to_cpu(torch.sigmoid(mask)), size=size)
102 |
103 | def pool_pairs(images, size, so):  # 'so' = single-object mode (no second mask)
104 | req_images = defaultdict(list)
105 |
106 | b, s, _, _, _ = images['gt'].shape
107 |
108 | GT_name = 'GT'
109 | for b_idx in range(b):
110 | GT_name += ' %s\n' % images['info']['name'][b_idx]
111 |
112 | for b_idx in range(b):
113 | for s_idx in range(s):
114 | req_images['RGB'].append(im_transform(images['rgb'][b_idx,s_idx], size))
115 | if s_idx == 0:
116 | req_images['Mask'].append(np.zeros((size[1], size[0], 3)))
117 | if not so:
118 | req_images['Mask 2'].append(np.zeros((size[1], size[0], 3)))
119 | else:
120 | req_images['Mask'].append(mask_transform(images['mask_%d'%s_idx][b_idx], size))
121 | if not so:
122 | req_images['Mask 2'].append(mask_transform(images['sec_mask_%d'%s_idx][b_idx], size))
123 | req_images[GT_name].append(mask_transform(images['gt'][b_idx,s_idx], size))
124 | if not so:
125 | req_images[GT_name + '_2'].append(mask_transform(images['sec_gt'][b_idx,s_idx], size))
126 |
127 | return get_image_array(req_images, size, key_captions)
--------------------------------------------------------------------------------
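A minimal sketch of `get_image_array` with dummy inputs, matching how `pool_pairs` feeds it: every value is a list of float images in [0, 1], and the first grid column is reserved for the rendered captions. The square 384x384 size here is a hypothetical choice.

```python
import numpy as np

from util.image_saver import get_image_array, key_captions

size = 384
images = {
    'im': [np.random.rand(size, size, 3) for _ in range(2)],  # captioned 'Image'
    'gt': [np.random.rand(size, size, 3) for _ in range(2)],  # captioned 'GT'
}
grid = get_image_array(images, (size, size), key_captions)

# One row per dict key, one extra column for the captions
print(grid.shape, grid.dtype)  # (768, 1152, 3) uint8
```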
/util/load_subset.py:
--------------------------------------------------------------------------------
1 | """
2 | load_subset.py - Presents a subset of data
3 | DAVIS - only the training set
4 | YouTubeVOS - I manually filtered some erroneous ones out but I haven't checked all
5 | """
6 |
7 |
8 | def load_sub_davis(path='util/davis_subset.txt'):
9 | with open(path, mode='r') as f:
10 | subset = set(f.read().splitlines())
11 | return subset
12 |
13 | def load_sub_yv(path='util/yv_subset.txt'):
14 | with open(path, mode='r') as f:
15 | subset = set(f.read().splitlines())
16 | return subset
17 |
--------------------------------------------------------------------------------
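A short sketch of how these loaders are meant to be used: the returned `set` makes the per-sequence membership test O(1) when a dataset filters its video list. The default paths are relative to the repository root.

```python
from util.load_subset import load_sub_yv

subset = load_sub_yv()  # set of curated YouTubeVOS sequence ids
all_videos = ['003234408d', 'ffffffffff']  # hypothetical candidate list
kept = [v for v in all_videos if v in subset]
print(len(subset), kept)  # '003234408d' appears in the subset file below
```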
/util/log_integrator.py:
--------------------------------------------------------------------------------
1 | """
2 | Integrate numerical values for some iterations
3 | Typically used for loss computation / logging to tensorboard
4 | Call finalize and create a new Integrator when you want to display/log
5 | """
6 |
7 | import torch
8 |
9 |
10 | class Integrator:
11 | def __init__(self, logger, distributed=True, local_rank=0, world_size=1):
12 | self.values = {}
13 | self.counts = {}
14 | self.hooks = [] # List is used here to maintain insertion order
15 |
16 | self.logger = logger
17 |
18 | self.distributed = distributed
19 | self.local_rank = local_rank
20 | self.world_size = world_size
21 |
22 | def add_tensor(self, key, tensor):
23 | if key not in self.values:
24 | self.counts[key] = 1
25 | if type(tensor) == float or type(tensor) == int:
26 | self.values[key] = tensor
27 | else:
28 | self.values[key] = tensor.mean().item()
29 | else:
30 | self.counts[key] += 1
31 | if type(tensor) == float or type(tensor) == int:
32 | self.values[key] += tensor
33 | else:
34 | self.values[key] += tensor.mean().item()
35 |
36 | def add_dict(self, tensor_dict):
37 | for k, v in tensor_dict.items():
38 | self.add_tensor(k, v)
39 |
40 | def add_hook(self, hook):
41 | """
42 | Adds a custom hook, i.e. compute new metrics using values in the dict
43 | The hook takes the dict as argument, and returns a (k, v) tuple
44 | e.g. for computing IoU
45 | """
46 | if type(hook) == list:
47 | self.hooks.extend(hook)
48 | else:
49 | self.hooks.append(hook)
50 |
51 | def reset_except_hooks(self):
52 | self.values = {}
53 | self.counts = {}
54 |
55 | # Average and output the metrics
56 | def finalize(self, prefix, it, f=None):
57 |
58 | for hook in self.hooks:
59 | k, v = hook(self.values)
60 | self.add_tensor(k, v)
61 |
62 | for k, v in self.values.items():
63 |
64 |             if k[:4] == 'hide':  # keys prefixed with 'hide' are accumulated but never logged
65 | continue
66 |
67 | avg = v / self.counts[k]
68 |
69 | if self.distributed:
70 |                 # torch.distributed.reduce sums in place onto rank 0
71 | avg = torch.tensor(avg).cuda()
72 | torch.distributed.reduce(avg, dst=0)
73 |
74 | if self.local_rank == 0:
75 | avg = (avg/self.world_size).cpu().item()
76 | self.logger.log_metrics(prefix, k, avg, it, f)
77 | else:
78 | # Simple does it
79 | self.logger.log_metrics(prefix, k, avg, it, f)
80 |
81 |
--------------------------------------------------------------------------------
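A minimal sketch of the Integrator in non-distributed mode. Assumptions: the logger only needs a `log_metrics(prefix, key, value, iteration, f)` method, so a tiny stand-in replaces the real TensorboardLogger here.

```python
import torch

from util.log_integrator import Integrator

class PrintLogger:
    # Stand-in exposing the single method that finalize() calls
    def log_metrics(self, prefix, k, v, it, f=None):
        print(f'{prefix}/{k} @ it {it}: {v:.4f}')

integrator = Integrator(PrintLogger(), distributed=False)
for _ in range(4):
    # Tensors are reduced to scalars via .mean().item(); plain floats pass through
    integrator.add_dict({'loss': torch.rand(8), 'hide_debug': 1.0})

# Hooks run once at finalize time on the accumulated value dict
integrator.add_hook(lambda values: ('loss_sum_x2', values['loss'] * 2))
integrator.finalize('train', it=100)  # 'loss' is averaged over the 4 adds; 'hide_*' keys are skipped
integrator.reset_except_hooks()       # start a fresh accumulation window
```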
/util/logger.py:
--------------------------------------------------------------------------------
1 | """
2 | Dumps things to tensorboard and console
3 | """
4 |
5 | import os
6 | import warnings
7 |
8 | import torchvision.transforms as transforms
9 | from torch.utils.tensorboard import SummaryWriter
10 |
11 |
12 | def tensor_to_numpy(image):
13 | image_np = (image.numpy() * 255).astype('uint8')
14 | return image_np
15 |
16 | def detach_to_cpu(x):
17 | return x.detach().cpu()
18 |
19 | def fix_width_trunc(x):
20 | return ('{:.9s}'.format('{:0.9f}'.format(x)))
21 |
22 | class TensorboardLogger:
23 | def __init__(self, short_id, id):
24 | self.short_id = short_id
25 | if self.short_id == 'NULL':
26 | self.short_id = 'DEBUG'
27 |
28 | if id is None:
29 | self.no_log = True
30 |             warnings.warn('Logging has been disabled.')
31 | else:
32 | self.no_log = False
33 |
34 | self.inv_im_trans = transforms.Normalize(
35 | mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
36 | std=[1/0.229, 1/0.224, 1/0.225])
37 |
38 | self.inv_seg_trans = transforms.Normalize(
39 | mean=[-0.5/0.5],
40 | std=[1/0.5])
41 |
42 | log_path = os.path.join('.', 'log', '%s' % id)
43 | self.logger = SummaryWriter(log_path)
44 |
45 | # repo = git.Repo(".")
46 | # self.log_string('git', str(repo.active_branch) + ' ' + str(repo.head.commit.hexsha))
47 |
48 | def log_scalar(self, tag, x, step):
49 | if self.no_log:
50 | warnings.warn('Logging has been disabled.')
51 | return
52 | self.logger.add_scalar(tag, x, step)
53 |
54 | def log_metrics(self, l1_tag, l2_tag, val, step, f=None):
55 | tag = l1_tag + '/' + l2_tag
56 | text = '{:s} - It {:6d} [{:5s}] [{:13}]: {:s}'.format(self.short_id, step, l1_tag.upper(), l2_tag, fix_width_trunc(val))
57 | print(text)
58 | if f is not None:
59 | f.write(text + '\n')
60 | f.flush()
61 | self.log_scalar(tag, val, step)
62 |
63 | def log_im(self, tag, x, step):
64 | if self.no_log:
65 | warnings.warn('Logging has been disabled.')
66 | return
67 | x = detach_to_cpu(x)
68 | x = self.inv_im_trans(x)
69 | x = tensor_to_numpy(x)
70 | self.logger.add_image(tag, x, step)
71 |
72 | def log_cv2(self, tag, x, step):
73 | if self.no_log:
74 | warnings.warn('Logging has been disabled.')
75 | return
76 | x = x.transpose((2, 0, 1))
77 | self.logger.add_image(tag, x, step)
78 |
79 | def log_seg(self, tag, x, step):
80 | if self.no_log:
81 | warnings.warn('Logging has been disabled.')
82 | return
83 | x = detach_to_cpu(x)
84 | x = self.inv_seg_trans(x)
85 | x = tensor_to_numpy(x)
86 | self.logger.add_image(tag, x, step)
87 |
88 | def log_gray(self, tag, x, step):
89 | if self.no_log:
90 | warnings.warn('Logging has been disabled.')
91 | return
92 | x = detach_to_cpu(x)
93 | x = tensor_to_numpy(x)
94 | self.logger.add_image(tag, x, step)
95 |
96 | def log_string(self, tag, x):
97 | print(tag, x)
98 | if self.no_log:
99 | warnings.warn('Logging has been disabled.')
100 | return
101 | self.logger.add_text(tag, x)
102 |
103 |
--------------------------------------------------------------------------------
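A short usage sketch for TensorboardLogger; the ids below are hypothetical. Passing `id=None` puts the logger in disabled mode, where every `log_*` call warns and returns early.

```python
import torch

from util.logger import TensorboardLogger

logger = TensorboardLogger(short_id='demo', id='demo_run')  # writes to ./log/demo_run

logger.log_scalar('train/loss', 0.123, 0)
logger.log_metrics('train', 'total_loss', 0.123, 0)   # also prints a fixed-width console line
logger.log_im('train/rgb', torch.rand(3, 64, 64), 0)  # un-normalizes with ImageNet stats first
logger.log_string('note', 'sanity check')
```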
/util/tensor_util.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn.functional as F
3 |
4 | def compute_tensor_iu(seg, gt):
5 | intersection = (seg & gt).float().sum()
6 | union = (seg | gt).float().sum()
7 |
8 | return intersection, union
9 |
10 | def compute_tensor_iou(seg, gt):
11 | intersection, union = compute_tensor_iu(seg, gt)
12 | iou = (intersection + 1e-6) / (union + 1e-6)
13 |
14 | return iou
15 |
16 | def pad_divide_by(in_img, d):
17 | h, w = in_img.shape[-2:]
18 |
19 | if h % d > 0:
20 | new_h = h + d - h % d
21 | else:
22 | new_h = h
23 | if w % d > 0:
24 | new_w = w + d - w % d
25 | else:
26 | new_w = w
27 | lh, uh = int((new_h-h) / 2), int(new_h-h) - int((new_h-h) / 2)
28 | lw, uw = int((new_w-w) / 2), int(new_w-w) - int((new_w-w) / 2)
29 | pad_array = (int(lw), int(uw), int(lh), int(uh))
30 | out = F.pad(in_img, pad_array)
31 | return out, pad_array
32 |
33 | def unpad(img, pad):
34 | if pad[2]+pad[3] > 0:
35 | img = img[:,:,pad[2]:-pad[3],:]
36 | if pad[0]+pad[1] > 0:
37 | img = img[:,:,:,pad[0]:-pad[1]]
38 | return img
39 |
40 | def maskiou(mask1, mask2):
41 | b, c, h, w = mask1.size()
42 | mask1 = mask1.view(b, -1)
43 | mask2 = mask2.view(b, -1)
44 | area1 = mask1.sum(dim=1, keepdim=True)
45 | area2 = mask2.sum(dim=1, keepdim=True)
46 | inter = ((mask1 + mask2) == 2).sum(dim=1, keepdim=True)
47 | union = (area1 + area2 - inter)
48 |     for a in range(b):  # clamp zero unions to 1 to avoid division by zero on empty masks
49 | if union[a][0] == torch.tensor(0):
50 | union[a][0] = torch.tensor(1)
51 | maskiou = inter / union
52 | return maskiou
--------------------------------------------------------------------------------
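A worked sketch of the helpers above: `pad_divide_by` pads both spatial dims up to a multiple of `d`, splitting the padding as evenly as possible, and `unpad` inverts it exactly; `maskiou` clamps a zero union to 1, so two empty masks score IoU 0 rather than NaN.

```python
import torch

from util.tensor_util import pad_divide_by, unpad, maskiou

x = torch.rand(1, 3, 30, 45)
padded, pads = pad_divide_by(x, 16)         # 30 -> 32, 45 -> 48
print(padded.shape, pads)                   # torch.Size([1, 3, 32, 48]) (1, 2, 1, 1)
assert torch.equal(unpad(padded, pads), x)  # exact round trip

a = torch.zeros(2, 1, 4, 4); a[0, :, :2] = 1   # 8 px in batch 0, empty in batch 1
b = torch.zeros(2, 1, 4, 4); b[0, :, 1:3] = 1  # overlaps a on one row
print(maskiou(a, b))                           # [[0.3333], [0.0]]
```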
/util/yv_subset.txt:
--------------------------------------------------------------------------------
1 | 003234408d
2 | 0043f083b5
3 | 0044fa5fba
4 | 005a527edd
5 | 0065b171f9
6 | 00917dcfc4
7 | 00a23ccf53
8 | 00ad5016a4
9 | 01082ae388
10 | 011ac0a06f
11 | 013099c098
12 | 0155498c85
13 | 01694ad9c8
14 | 017ac35701
15 | 01b80e8e1a
16 | 01baa5a4e1
17 | 01c3111683
18 | 01c4cb5ffe
19 | 01c76f0a82
20 | 01c783268c
21 | 01ed275c6e
22 | 01ff60d1fa
23 | 020cd28cd2
24 | 02264db755
25 | 0248626d9a
26 | 02668dbffa
27 | 0274193026
28 | 02d28375aa
29 | 02f3a5c4df
30 | 031ccc99b1
31 | 0321b18c10
32 | 0348a45bca
33 | 0355e92655
34 | 0358b938c1
35 | 0368107cf1
36 | 0379ddf557
37 | 038b2cc71d
38 | 038c15a5dd
39 | 03a06cc98a
40 | 03a63e187f
41 | 03c95b4dae
42 | 03e2b57b0e
43 | 04194e1248
44 | 0444918a5f
45 | 04460a7a52
46 | 04474174a4
47 | 0450095513
48 | 045f00aed2
49 | 04667fabaa
50 | 04735c5030
51 | 04990d1915
52 | 04d62d9d98
53 | 04f21da964
54 | 04fbad476e
55 | 04fe256562
56 | 0503bf89c9
57 | 0536c9eed0
58 | 054acb238f
59 | 05579ca250
60 | 056c200404
61 | 05774f3a2c
62 | 058a7592c8
63 | 05a0a513df
64 | 05a569d8aa
65 | 05aa652648
66 | 05d7715782
67 | 05e0b0f28f
68 | 05fdbbdd7a
69 | 05ffcfed85
70 | 0630391881
71 | 06840b2bbe
72 | 068f7dce6f
73 | 0693719753
74 | 06ce2b51fb
75 | 06e224798e
76 | 06ee361788
77 | 06fbb3fa2c
78 | 0700264286
79 | 070c918ca7
80 | 07129e14a4
81 | 07177017e9
82 | 07238ffc58
83 | 07353b2a89
84 | 0738493cbf
85 | 075926c651
86 | 075c701292
87 | 0762ea9a30
88 | 07652ee4af
89 | 076f206928
90 | 077d32af19
91 | 079049275c
92 | 07913cdda7
93 | 07a11a35e8
94 | 07ac33b6df
95 | 07b6e8fda8
96 | 07c62c3d11
97 | 07cc1c7d74
98 | 080196ef01
99 | 081207976e
100 | 081ae4fa44
101 | 081d8250cb
102 | 082900c5d4
103 | 0860df21e2
104 | 0866d4c5e3
105 | 0891ac2eb6
106 | 08931bc458
107 | 08aa2705d5
108 | 08c8450db7
109 | 08d50b926c
110 | 08e1e4de15
111 | 08e48c1a48
112 | 08f561c65e
113 | 08feb87790
114 | 09049f6fe3
115 | 092e4ff450
116 | 09338adea8
117 | 093c335ccc
118 | 0970d28339
119 | 0974a213dc
120 | 097b471ed8
121 | 0990941758
122 | 09a348f4fa
123 | 09a6841288
124 | 09c5bad17b
125 | 09c9ce80c7
126 | 09ff54fef4
127 | 0a23765d15
128 | 0a275e7f12
129 | 0a2f2bd294
130 | 0a7a2514aa
131 | 0a7b27fde9
132 | 0a8c467cc3
133 | 0ac8c560ae
134 | 0b1627e896
135 | 0b285c47f6
136 | 0b34ec1d55
137 | 0b5b5e8e5a
138 | 0b68535614
139 | 0b6f9105fc
140 | 0b7dbfa3cb
141 | 0b9cea51ca
142 | 0b9d012be8
143 | 0bcfc4177d
144 | 0bd37b23c1
145 | 0bd864064c
146 | 0c11c6bf7b
147 | 0c26bc77ac
148 | 0c3a04798c
149 | 0c44a9d545
150 | 0c817cc390
151 | 0ca839ee9a
152 | 0cd7ac0ac0
153 | 0ce06e0121
154 | 0cfe974a89
155 | 0d2fcc0dcd
156 | 0d3aad05d2
157 | 0d40b015f4
158 | 0d97fba242
159 | 0d9cc80d7e
160 | 0dab85b6d3
161 | 0db5c427a5
162 | 0dbaf284f1
163 | 0de4923598
164 | 0df28a9101
165 | 0e04f636c4
166 | 0e05f0e232
167 | 0e0930474b
168 | 0e27472bea
169 | 0e30020549
170 | 0e621feb6c
171 | 0e803c7d73
172 | 0e9ebe4e3c
173 | 0e9f2785ec
174 | 0ea68d418b
175 | 0eb403a222
176 | 0ee92053d6
177 | 0eefca067f
178 | 0f17fa6fcb
179 | 0f1ac8e9a3
180 | 0f202e9852
181 | 0f2ab8b1ff
182 | 0f51a78756
183 | 0f5fbe16b0
184 | 0f6072077b
185 | 0f6b69b2f4
186 | 0f6c2163de
187 | 0f74ec5599
188 | 0f9683715b
189 | 0fa7b59356
190 | 0fb173695b
191 | 0fc958cde2
192 | 0fe7b1a621
193 | 0ffcdb491c
194 | 101caff7d4
195 | 1022fe8417
196 | 1032e80b37
197 | 103f501680
198 | 104e64565f
199 | 104f1ab997
200 | 106242403f
201 | 10b31f5431
202 | 10eced835e
203 | 110d26fa3a
204 | 1122c1d16a
205 | 1145b49a5f
206 | 11485838c2
207 | 114e7676ec
208 | 1157472b95
209 | 115ee1072c
210 | 1171141012
211 | 117757b4b8
212 | 1178932d2f
213 | 117cc76bda
214 | 1180cbf814
215 | 1187bbd0e3
216 | 1197e44b26
217 | 119cf20728
218 | 119dd54871
219 | 11a0c3b724
220 | 11a6ba8c94
221 | 11c722a456
222 | 11cbcb0b4d
223 | 11ccf5e99d
224 | 11ce6f452e
225 | 11e53de6f2
226 | 11feabe596
227 | 120cb9514d
228 | 12156b25b3
229 | 122896672d
230 | 1232b2f1d4
231 | 1233ac8596
232 | 1239c87234
233 | 1250423f7c
234 | 1257a1bc67
235 | 125d1b19dd
236 | 126d203967
237 | 1295e19071
238 | 12ad198c54
239 | 12bddb2bcb
240 | 12ec9b93ee
241 | 12eebedc35
242 | 132852e094
243 | 1329409f2a
244 | 13325cfa14
245 | 134d06dbf9
246 | 135625b53d
247 | 13870016f9
248 | 13960b3c84
249 | 13adaad9d9
250 | 13ae097e20
251 | 13e3070469
252 | 13f6a8c20d
253 | 1416925cf2
254 | 142d2621f5
255 | 145d5d7c03
256 | 145fdc3ac5
257 | 1471274fa7
258 | 14a6b5a139
259 | 14c21cea0d
260 | 14dae0dc93
261 | 14f9bd22b5
262 | 14fd28ae99
263 | 15097d5d4e
264 | 150ea711f2
265 | 1514e3563f
266 | 152aaa3a9e
267 | 152b7d3bd7
268 | 15617297cc
269 | 15abbe0c52
270 | 15d1fb3de5
271 | 15f67b0fab
272 | 161eb59aad
273 | 16288ea47f
274 | 164410ce62
275 | 165c3c8cd4
276 | 165c42b41b
277 | 165ec9e22b
278 | 1669502269
279 | 16763cccbb
280 | 16adde065e
281 | 16af445362
282 | 16afd538ad
283 | 16c3fa4d5d
284 | 16d1d65c27
285 | 16e8599e94
286 | 16fe9fb444
287 | 1705796b02
288 | 1724db7671
289 | 17418e81ea
290 | 175169edbb
291 | 17622326fd
292 | 17656bae77
293 | 17b0d94172
294 | 17c220e4f6
295 | 17c7bcd146
296 | 17cb4afe89
297 | 17cd79a434
298 | 17d18604c3
299 | 17d8ca1a37
300 | 17e33f4330
301 | 17f7a6d805
302 | 180abc8378
303 | 183ba3d652
304 | 185bf64702
305 | 18913cc690
306 | 1892651815
307 | 189ac8208a
308 | 189b44e92c
309 | 18ac264b76
310 | 18b245ab49
311 | 18b5cebc34
312 | 18bad52083
313 | 18bb5144d5
314 | 18c6f205c5
315 | 1903f9ea15
316 | 1917b209f2
317 | 191e74c01d
318 | 19367bb94e
319 | 193ffaa217
320 | 19696b67d3
321 | 197f3ab6f3
322 | 1981e763cc
323 | 198afe39ae
324 | 19a6e62b9b
325 | 19b60d5335
326 | 19c00c11f9
327 | 19e061eb88
328 | 19e8bc6178
329 | 19ee80dac6
330 | 1a25a9170a
331 | 1a359a6c1a
332 | 1a3e87c566
333 | 1a5fe06b00
334 | 1a6c0fbd1e
335 | 1a6f3b5a4b
336 | 1a8afbad92
337 | 1a8bdc5842
338 | 1a95752aca
339 | 1a9c131cb7
340 | 1aa3da3ee3
341 | 1ab27ec7ea
342 | 1abf16d21d
343 | 1acd0f993b
344 | 1ad202e499
345 | 1af8d2395d
346 | 1afd39a1fa
347 | 1b2d31306f
348 | 1b3fa67f0e
349 | 1b43fa74b4
350 | 1b73ea9fc2
351 | 1b7e8bb255
352 | 1b8680f8cd
353 | 1b883843c0
354 | 1b8898785b
355 | 1b88ba1aa4
356 | 1b96a498e5
357 | 1bbc4c274f
358 | 1bd87fe9ab
359 | 1c4090c75b
360 | 1c41934f84
361 | 1c72b04b56
362 | 1c87955a3a
363 | 1c9f9eb792
364 | 1ca240fede
365 | 1ca5673803
366 | 1cada35274
367 | 1cb44b920d
368 | 1cd10e62be
369 | 1d3087d5e5
370 | 1d3685150a
371 | 1d6ff083aa
372 | 1d746352a6
373 | 1da256d146
374 | 1da4e956b1
375 | 1daf812218
376 | 1dba687bce
377 | 1dce57d05d
378 | 1de4a9e537
379 | 1dec5446c8
380 | 1dfbe6f586
381 | 1e1a18c45a
382 | 1e1e42529d
383 | 1e4be70796
384 | 1eb60959c8
385 | 1ec8b2566b
386 | 1ecdc2941c
387 | 1ee0ac70ff
388 | 1ef8e17def
389 | 1f1a2a9fc0
390 | 1f1beb8daa
391 | 1f2609ee13
392 | 1f3876f8d0
393 | 1f4ec0563d
394 | 1f64955634
395 | 1f7d31b5b2
396 | 1f8014b7fd
397 | 1f9c7d10f1
398 | 1fa350df76
399 | 1fc9538993
400 | 1fe2f0ec59
401 | 2000c02f9d
402 | 20142b2f05
403 | 201a8d75e5
404 | 2023b3ee4f
405 | 202b767bbc
406 | 203594a418
407 | 2038987336
408 | 2039c3aecb
409 | 204a90d81f
410 | 207bc6cf01
411 | 208833d1d1
412 | 20c6d8b362
413 | 20e3e52e0a
414 | 2117fa0c14
415 | 211bc5d102
416 | 2120d9c3c3
417 | 2125235a49
418 | 21386f5978
419 | 2142af8795
420 | 215dfc0f73
421 | 217bae91e5
422 | 217c0d44e4
423 | 219057c87b
424 | 21d0edbf81
425 | 21df87ad76
426 | 21f1d089f5
427 | 21f4019116
428 | 222597030f
429 | 222904eb5b
430 | 223a0e0657
431 | 223bd973ab
432 | 22472f7395
433 | 224e7c833e
434 | 225aba51d9
435 | 2261d421ea
436 | 2263a8782b
437 | 2268cb1ffd
438 | 2268e93b0a
439 | 2293c99f3f
440 | 22a1141970
441 | 22b13084b2
442 | 22d9f5ab0c
443 | 22f02efe3a
444 | 232c09b75b
445 | 2350d71b4b
446 | 2376440551
447 | 2383d8aafd
448 | 238b84e67f
449 | 238d4b86f6
450 | 238d947c6b
451 | 23993ce90d
452 | 23b0c8a9ab
453 | 23b3beafcc
454 | 23d80299fe
455 | 23f404a9fc
456 | 240118e58a
457 | 2431dec2fd
458 | 24440e0ac7
459 | 2457274dbc
460 | 2465bf515d
461 | 246b142c4d
462 | 247d729e36
463 | 2481ceafeb
464 | 24866b4e6a
465 | 2489d78320
466 | 24ab0b83e8
467 | 24b0868d92
468 | 24b5207cd9
469 | 24ddf05c03
470 | 250116161c
471 | 256ad2e3fc
472 | 256bd83d5e
473 | 256dcc8ab8
474 | 2589956baa
475 | 258b3b33c6
476 | 25ad437e29
477 | 25ae395636
478 | 25c750c6db
479 | 25d2c3fe5d
480 | 25dc80db7c
481 | 25f97e926f
482 | 26011bc28b
483 | 260846ffbe
484 | 260dd9ad33
485 | 267964ee57
486 | 2680861931
487 | 268ac7d3fc
488 | 26b895d91e
489 | 26bc786d4f
490 | 26ddd2ef12
491 | 26de3d18ca
492 | 26f7784762
493 | 2703e52a6a
494 | 270ed80c12
495 | 2719b742ab
496 | 272f4163d0
497 | 27303333e1
498 | 27659fa7d6
499 | 279214115d
500 | 27a5f92a9c
501 | 27cf2af1f3
502 | 27f0d5f8a2
503 | 28075f33c1
504 | 281629cb41
505 | 282b0d51f5
506 | 282fcab00b
507 | 28449fa0dc
508 | 28475208ca
509 | 285580b7c4
510 | 285b69e223
511 | 288c117201
512 | 28a8eb9623
513 | 28bf9c3cf3
514 | 28c6b8f86a
515 | 28c972dacd
516 | 28d9fa6016
517 | 28e392de91
518 | 28f4a45190
519 | 298c844fc9
520 | 29a0356a2b
521 | 29d779f9e3
522 | 29dde5f12b
523 | 29de7b6579
524 | 29e630bdd0
525 | 29f2332d30
526 | 2a18873352
527 | 2a3824ff31
528 | 2a559dd27f
529 | 2a5c09acbd
530 | 2a63eb1524
531 | 2a6a30a4ea
532 | 2a6d9099d1
533 | 2a821394e3
534 | 2a8c5b1342
535 | 2abc8d66d2
536 | 2ac9ef904a
537 | 2b08f37364
538 | 2b351bfd7d
539 | 2b659a49d7
540 | 2b69ee5c26
541 | 2b6c30bbbd
542 | 2b88561cf2
543 | 2b8b14954e
544 | 2ba621c750
545 | 2bab50f9a7
546 | 2bb00c2434
547 | 2bbde474ef
548 | 2bdd82fb86
549 | 2be06fb855
550 | 2bf545c2f5
551 | 2bffe4cf9a
552 | 2c04b887b7
553 | 2c05209105
554 | 2c0ad8cf39
555 | 2c11fedca8
556 | 2c1a94ebfb
557 | 2c1e8c8e2f
558 | 2c29fabcf1
559 | 2c2c076c01
560 | 2c3ea7ee7d
561 | 2c41fa0648
562 | 2c44bb6d1c
563 | 2c54cfbb78
564 | 2c5537eddf
565 | 2c6e63b7de
566 | 2cb10c6a7e
567 | 2cbcd5ccd1
568 | 2cc5d9c5f6
569 | 2cd01cf915
570 | 2cdbf5f0a7
571 | 2ce660f123
572 | 2cf114677e
573 | 2d01eef98e
574 | 2d03593bdc
575 | 2d183ac8c4
576 | 2d33ad3935
577 | 2d3991d83e
578 | 2d4333577b
579 | 2d4d015c64
580 | 2d8f5e5025
581 | 2d900bdb8e
582 | 2d9a1a1d49
583 | 2db0576a5c
584 | 2dc0838721
585 | 2dcc417f82
586 | 2df005b843
587 | 2df356de14
588 | 2e00393d96
589 | 2e03b8127a
590 | 2e0f886168
591 | 2e2bf37e6d
592 | 2e42410932
593 | 2ea78f46e4
594 | 2ebb017a26
595 | 2ee2edba2a
596 | 2efb07554a
597 | 2f17e4fc1e
598 | 2f2c65c2f3
599 | 2f2d9b33be
600 | 2f309c206b
601 | 2f53822e88
602 | 2f53998171
603 | 2f5b0c89b1
604 | 2f680909e6
605 | 2f710f66bd
606 | 2f724132b9
607 | 2f7e3517ae
608 | 2f96f5fc6f
609 | 2f97d9fecb
610 | 2fbfa431ec
611 | 2fc9520b53
612 | 2fcd9f4c62
613 | 2feb30f208
614 | 2ff7f5744f
615 | 30085a2cc6
616 | 30176e3615
617 | 301f72ee11
618 | 3026bb2f61
619 | 30318465dc
620 | 3054ca937d
621 | 306121e726
622 | 3064ad91e8
623 | 307444a47f
624 | 307bbb7409
625 | 30a20194ab
626 | 30c35c64a4
627 | 30dbdb2cd6
628 | 30fc77d72f
629 | 310021b58b
630 | 3113140ee8
631 | 3150b2ee57
632 | 31539918c4
633 | 318dfe2ce2
634 | 3193da4835
635 | 319f725ad9
636 | 31bbd0d793
637 | 322505c47f
638 | 322b237865
639 | 322da43910
640 | 3245e049fb
641 | 324c4c38f6
642 | 324e35111a
643 | 3252398f09
644 | 327dc4cabf
645 | 328d918c7d
646 | 3290c0de97
647 | 3299ae3116
648 | 32a7cd687b
649 | 33098cedb4
650 | 3332334ac4
651 | 334cb835ac
652 | 3355e056eb
653 | 33639a2847
654 | 3373891cdc
655 | 337975816b
656 | 33e29d7e91
657 | 34046fe4f2
658 | 3424f58959
659 | 34370a710f
660 | 343bc6a65a
661 | 3450382ef7
662 | 3454303a08
663 | 346aacf439
664 | 346e92ff37
665 | 34a5ece7dd
666 | 34b109755a
667 | 34d1b37101
668 | 34dd2c70a7
669 | 34efa703df
670 | 34fbee00a6
671 | 3504df2fda
672 | 35195a56a1
673 | 351c822748
674 | 351cfd6bc5
675 | 3543d8334c
676 | 35573455c7
677 | 35637a827f
678 | 357a710863
679 | 358bf16f9e
680 | 35ab34cc34
681 | 35c6235b8d
682 | 35d01a438a
683 | 3605019d3b
684 | 3609bc3f88
685 | 360e25da17
686 | 36299c687c
687 | 362c5bc56e
688 | 3649228783
689 | 365b0501ea
690 | 365f459863
691 | 369893f3ad
692 | 369c9977e1
693 | 369dde050a
694 | 36c7dac02f
695 | 36d5b1493b
696 | 36f5cc68fd
697 | 3735480d18
698 | 374b479880
699 | 375a49d38f
700 | 375a5c0e09
701 | 376bda9651
702 | 377db65f60
703 | 37c19d1087
704 | 37d4ae24fc
705 | 37ddce7f8b
706 | 37e10d33af
707 | 37e45c6247
708 | 37fa0001e8
709 | 3802d458c0
710 | 382caa3cb4
711 | 383bb93111
712 | 388843df90
713 | 38924f4a7f
714 | 38b00f93d7
715 | 38c197c10e
716 | 38c9c3d801
717 | 38eb2bf67f
718 | 38fe9b3ed1
719 | 390352cced
720 | 390c51b987
721 | 390ca6f1d6
722 | 392bc0f8a1
723 | 392ecb43bd
724 | 3935291688
725 | 3935e63b41
726 | 394454fa9c
727 | 394638fc8b
728 | 39545e20b7
729 | 397abeae8f
730 | 3988074b88
731 | 398f5d5f19
732 | 39bc49a28c
733 | 39befd99fb
734 | 39c3c7bf55
735 | 39d584b09f
736 | 39f6f6ffb1
737 | 3a079fb484
738 | 3a0d3a81b7
739 | 3a1d55d22b
740 | 3a20a7583e
741 | 3a2c1f66e5
742 | 3a33f4d225
743 | 3a3bf84b13
744 | 3a4565e5ec
745 | 3a4e32ed5e
746 | 3a7ad86ce0
747 | 3a7bdde9b8
748 | 3a98867cbe
749 | 3aa3f1c9e8
750 | 3aa7fce8b6
751 | 3aa876887d
752 | 3ab807ded6
753 | 3ab9b1a85a
754 | 3adac8d7da
755 | 3ae1a4016f
756 | 3ae2deaec2
757 | 3ae81609d6
758 | 3af847e62f
759 | 3b23792b84
760 | 3b3b0af2ee
761 | 3b512dad74
762 | 3b6c7988f6
763 | 3b6e983b5b
764 | 3b74a0fc20
765 | 3b7a50b80d
766 | 3b96d3492f
767 | 3b9ad0c5a9
768 | 3b9ba0894a
769 | 3bb4e10ed7
770 | 3bd9a9b515
771 | 3beef45388
772 | 3c019c0a24
773 | 3c090704aa
774 | 3c2784fc0d
775 | 3c47ab95f8
776 | 3c4db32d74
777 | 3c5ff93faf
778 | 3c700f073e
779 | 3c713cbf2f
780 | 3c8320669c
781 | 3c90d225ee
782 | 3cadbcc404
783 | 3cb9be84a5
784 | 3cc37fd487
785 | 3cc6f90cb2
786 | 3cd5e035ef
787 | 3cdf03531b
788 | 3cdf828f59
789 | 3d254b0bca
790 | 3d5aeac5ba
791 | 3d690473e1
792 | 3d69fed2fb
793 | 3d8997aeb6
794 | 3db0d6b07e
795 | 3db1ddb8cf
796 | 3db907ac77
797 | 3dcbc0635b
798 | 3dd48ed55f
799 | 3de4ac4ec4
800 | 3decd63d88
801 | 3e04a6be11
802 | 3e108fb65a
803 | 3e1448b01c
804 | 3e16c19634
805 | 3e2845307e
806 | 3e38336da5
807 | 3e3a819865
808 | 3e3e4be915
809 | 3e680622d7
810 | 3e7d2aeb07
811 | 3e7d8f363d
812 | 3e91f10205
813 | 3ea4c49bbe
814 | 3eb39d11ab
815 | 3ec273c8d5
816 | 3ed3f91271
817 | 3ee062a2fd
818 | 3eede9782c
819 | 3ef2fa99cb
820 | 3efc6e9892
821 | 3f0b0dfddd
822 | 3f0c860359
823 | 3f18728586
824 | 3f3b15f083
825 | 3f45a470ad
826 | 3f4f3bc803
827 | 3fd96c5267
828 | 3fea675fab
829 | 3fee8cbc9f
830 | 3fff16d112
831 | 401888b36c
832 | 4019231330
833 | 402316532d
834 | 402680df52
835 | 404d02e0c0
836 | 40709263a8
837 | 4083cfbe15
838 | 40a96c5cb1
839 | 40b8e50f82
840 | 40f4026bf5
841 | 4100b57a3a
842 | 41059fdd0b
843 | 41124e36de
844 | 4122aba5f9
845 | 413bab0f0d
846 | 4164faee0b
847 | 418035eec9
848 | 4182d51532
849 | 418bb97e10
850 | 41a34c20e7
851 | 41dab05200
852 | 41ff6d5e2a
853 | 420caf0859
854 | 42264230ba
855 | 425a0c96e0
856 | 42da96b87c
857 | 42eb5a5b0f
858 | 42f17cd14d
859 | 42f5c61c49
860 | 42ffdcdee9
861 | 432f9884f9
862 | 43326d9940
863 | 4350f3ab60
864 | 4399ffade3
865 | 43a6c21f37
866 | 43b5555faa
867 | 43d63b752a
868 | 4416bdd6ac
869 | 4444753edd
870 | 444aa274e7
871 | 444d4e0596
872 | 446b8b5f7a
873 | 4478f694bb
874 | 44b1da0d87
875 | 44b4dad8c9
876 | 44b5ece1b9
877 | 44d239b24e
878 | 44eaf8f51e
879 | 44f4f57099
880 | 44f7422af2
881 | 450787ac97
882 | 4523656564
883 | 4536c882e5
884 | 453b65daa4
885 | 454f227427
886 | 45636d806a
887 | 456fb9362e
888 | 457e717a14
889 | 45a89f35e1
890 | 45bf0e947d
891 | 45c36a9eab
892 | 45d9fc1357
893 | 45f8128b97
894 | 4607f6c03c
895 | 46146dfd39
896 | 4620e66b1e
897 | 4625f3f2d3
898 | 462b22f263
899 | 4634736113
900 | 463c0f4fdd
901 | 46565a75f8
902 | 46630b55ae
903 | 466839cb37
904 | 466ba4ae0c
905 | 4680236c9d
906 | 46bf4e8709
907 | 46e18e42f1
908 | 46f5093c59
909 | 47269e0499
910 | 472da1c484
911 | 47354fab09
912 | 4743bb84a7
913 | 474a796272
914 | 4783d2ab87
915 | 479cad5da3
916 | 479f5d7ef6
917 | 47a05fbd1d
918 | 4804ee2767
919 | 4810c3fbca
920 | 482fb439c2
921 | 48375af288
922 | 484ab44de4
923 | 485f3944cd
924 | 4867b84887
925 | 486a8ac57e
926 | 486e69c5bd
927 | 48812cf33e
928 | 4894b3b9ea
929 | 48bd66517d
930 | 48d83b48a4
931 | 49058178b8
932 | 4918d10ff0
933 | 4932911f80
934 | 49405b7900
935 | 49972c2d14
936 | 499bf07002
937 | 49b16e9377
938 | 49c104258e
939 | 49c879f82d
940 | 49e7326789
941 | 49ec3e406a
942 | 49fbf0c98a
943 | 4a0255c865
944 | 4a088fe99a
945 | 4a341402d0
946 | 4a3471bdf5
947 | 4a4b50571c
948 | 4a50f3d2e9
949 | 4a6e3faaa1
950 | 4a7191f08a
951 | 4a86fcfc30
952 | 4a885fa3ef
953 | 4a8af115de
954 | 4aa2e0f865
955 | 4aa9d6527f
956 | 4abb74bb52
957 | 4ae13de1cd
958 | 4af8cb323f
959 | 4b02c272b3
960 | 4b19c529fb
961 | 4b2974eff4
962 | 4b3154c159
963 | 4b54d2587f
964 | 4b556740ff
965 | 4b67aa9ef6
966 | 4b97cc7b8d
967 | 4baa1ed4aa
968 | 4bc8c676bb
969 | 4beaea4dbe
970 | 4bf5763d24
971 | 4bffa92b67
972 | 4c25dfa8ec
973 | 4c397b6fd4
974 | 4c51e75d66
975 | 4c7710908f
976 | 4c9b5017be
977 | 4ca2ffc361
978 | 4cad2e93bc
979 | 4cd427b535
980 | 4cd9a4b1ef
981 | 4cdfe3c2b2
982 | 4cef87b649
983 | 4cf208e9b3
984 | 4cf5bc3e60
985 | 4cfdd73249
986 | 4cff5c9e42
987 | 4d26d41091
988 | 4d5c23c554
989 | 4d67c59727
990 | 4d983cad9f
991 | 4da0d00b55
992 | 4daa179861
993 | 4dadd57153
994 | 4db117e6c5
995 | 4de4ce4dea
996 | 4dfaee19e5
997 | 4dfdd7fab0
998 | 4e3f346aa5
999 | 4e49c2a9c7
1000 | 4e4e06a749
1001 | 4e70279712
1002 | 4e72856cc7
1003 | 4e752f8075
1004 | 4e7a28907f
1005 | 4e824b9247
1006 | 4e82b1df57
1007 | 4e87a639bc
1008 | 4ea77bfd15
1009 | 4eb6fc23a2
1010 | 4ec9da329e
1011 | 4efb9a0720
1012 | 4f062fbc63
1013 | 4f35be0e0b
1014 | 4f37e86797
1015 | 4f414dd6e7
1016 | 4f424abded
1017 | 4f470cc3ae
1018 | 4f601d255a
1019 | 4f7386a1ab
1020 | 4f824d3dcd
1021 | 4f827b0751
1022 | 4f8db33a13
1023 | 4fa160f8a3
1024 | 4fa9c30a45
1025 | 4facd8f0e8
1026 | 4fca07ad01
1027 | 4fded94004
1028 | 4fdfef4dea
1029 | 4feb3ac01f
1030 | 4fffec8479
1031 | 500c835a86
1032 | 50168342bf
1033 | 50243cffdc
1034 | 5031d5a036
1035 | 504dd9c0fd
1036 | 50568fbcfb
1037 | 5069c7c5b3
1038 | 508189ac91
1039 | 50b6b3d4b7
1040 | 50c6f4fe3e
1041 | 50cce40173
1042 | 50efbe152f
1043 | 50f290b95d
1044 | 5104aa1fea
1045 | 5110dc72c0
1046 | 511e8ecd7f
1047 | 513aada14e
1048 | 5158d6e985
1049 | 5161e1fa57
1050 | 51794ddd58
1051 | 517d276725
1052 | 51a597ee04
1053 | 51b37b6d97
1054 | 51b5dc30a0
1055 | 51e85b347b
1056 | 51eea1fdac
1057 | 51eef778af
1058 | 51f384721c
1059 | 521cfadcb4
1060 | 52355da42f
1061 | 5247d4b160
1062 | 524b470fd0
1063 | 524cee1534
1064 | 5252195e8a
1065 | 5255c9ca97
1066 | 525928f46f
1067 | 526df007a7
1068 | 529b12de78
1069 | 52c7a3d653
1070 | 52c8ec0373
1071 | 52d225ed52
1072 | 52ee406d9e
1073 | 52ff1ccd4a
1074 | 53143511e8
1075 | 5316d11eb7
1076 | 53253f2362
1077 | 534a560609
1078 | 5352c4a70e
1079 | 536096501f
1080 | 536b17bcea
1081 | 5380eaabff
1082 | 5390a43a54
1083 | 53af427bb2
1084 | 53bf5964ce
1085 | 53c30110b5
1086 | 53cad8e44a
1087 | 53d9c45013
1088 | 53e274f1b5
1089 | 53e32d21ea
1090 | 540850e1c7
1091 | 540cb31cfe
1092 | 541c4da30f
1093 | 541d7935d7
1094 | 545468262b
1095 | 5458647306
1096 | 54657855cd
1097 | 547b3fb23b
1098 | 5497dc3712
1099 | 549c56f1d4
1100 | 54a4260bb1
1101 | 54b98b8d5e
1102 | 54e1054b0f
1103 | 54e8867b83
1104 | 54ebe34f6e
1105 | 5519b4ad13
1106 | 551acbffd5
1107 | 55341f42da
1108 | 5566ab97e1
1109 | 556c79bbf2
1110 | 5589637cc4
1111 | 558aa072f0
1112 | 559824b6f6
1113 | 55c1764e90
1114 | 55eda6c77e
1115 | 562d173565
1116 | 5665c024cb
1117 | 566cef4959
1118 | 5675d78833
1119 | 5678a91bd8
1120 | 567a2b4bd0
1121 | 569c282890
1122 | 56cc449917
1123 | 56e71f3e07
1124 | 56f09b9d92
1125 | 56fc0e8cf9
1126 | 571ca79c71
1127 | 57243657cf
1128 | 57246af7d1
1129 | 57427393e9
1130 | 574b682c19
1131 | 578f211b86
1132 | 5790ac295d
1133 | 579393912d
1134 | 57a344ab1a
1135 | 57bd3bcda4
1136 | 57bfb7fa4c
1137 | 57c010175e
1138 | 57c457cc75
1139 | 57c7fc2183
1140 | 57d5289a01
1141 | 58045fde85
1142 | 58163c37cd
1143 | 582d463e5c
1144 | 5851739c15
1145 | 585dd0f208
1146 | 587250f3c3
1147 | 589e4cc1de
1148 | 589f65f5d5
1149 | 58a07c17d5
1150 | 58adc6d8b6
1151 | 58b9bcf656
1152 | 58c374917e
1153 | 58fc75fd42
1154 | 5914c30f05
1155 | 59323787d5
1156 | 5937b08d69
1157 | 594065ddd7
1158 | 595a0ceea6
1159 | 59623ec40b
1160 | 597ff7ef78
1161 | 598935ef05
1162 | 598c2ad3b2
1163 | 59a6459751
1164 | 59b175e138
1165 | 59bf0a149f
1166 | 59d53d1649
1167 | 59e3e6fae7
1168 | 59fe33e560
1169 | 5a13a73fe5
1170 | 5a25c22770
1171 | 5a4a785006
1172 | 5a50640995
1173 | 5a75f7a1cf
1174 | 5a841e59ad
1175 | 5a91c5ab6d
1176 | 5ab49d9de0
1177 | 5aba1057fe
1178 | 5abe46ba6d
1179 | 5ac7c88d0c
1180 | 5aeb95cc7d
1181 | 5af15e4fc3
1182 | 5afe381ae4
1183 | 5b07b4229d
1184 | 5b1001cc4f
1185 | 5b1df237d2
1186 | 5b263013bf
1187 | 5b27d19f0b
1188 | 5b48ae16c5
1189 | 5b5babc719
1190 | 5baaebdf00
1191 | 5bab55cdbe
1192 | 5bafef6e79
1193 | 5bd1f84545
1194 | 5bddc3ba25
1195 | 5bdf7c20d2
1196 | 5bf23bc9d3
1197 | 5c01f6171a
1198 | 5c021681b7
1199 | 5c185cff1d
1200 | 5c42aba280
1201 | 5c44bf8ab6
1202 | 5c4c574894
1203 | 5c52fa4662
1204 | 5c6ea7dac3
1205 | 5c74315dc2
1206 | 5c7668855e
1207 | 5c83e96778
1208 | 5ca36173e4
1209 | 5cac477371
1210 | 5cb0cb1b2f
1211 | 5cb0cfb98f
1212 | 5cb49a19cf
1213 | 5cbf7dc388
1214 | 5d0e07d126
1215 | 5d1e24b6e3
1216 | 5d663000ff
1217 | 5da6b2dc5d
1218 | 5de9b90f24
1219 | 5e08de0ed7
1220 | 5e1011df9a
1221 | 5e1ce354fd
1222 | 5e35512dd7
1223 | 5e418b25f9
1224 | 5e4849935a
1225 | 5e4ee19663
1226 | 5e886ef78f
1227 | 5e8d00b974
1228 | 5e8d59dc31
1229 | 5ed838bd5c
1230 | 5edda6ee5a
1231 | 5ede4d2f7a
1232 | 5ede9767da
1233 | 5eec4d9fe5
1234 | 5eecf07824
1235 | 5eef7ed4f4
1236 | 5ef5860ac6
1237 | 5ef6573a99
1238 | 5f1193e72b
1239 | 5f29ced797
1240 | 5f32cf521e
1241 | 5f51876986
1242 | 5f6ebe94a9
1243 | 5f6f14977c
1244 | 5f808d0d2d
1245 | 5fb8aded6a
1246 | 5fba90767d
1247 | 5fd1c7a3df
1248 | 5fd3da9f68
1249 | 5fee2570ae
1250 | 5ff66140d6
1251 | 5ff8b85b53
1252 | 600803c0f6
1253 | 600be7f53e
1254 | 6024888af8
1255 | 603189a03c
1256 | 6057307f6e
1257 | 6061ddbb65
1258 | 606c86c455
1259 | 60c61cc2e5
1260 | 60e51ff1ae
1261 | 610e38b751
1262 | 61344be2f6
1263 | 6135e27185
1264 | 614afe7975
1265 | 614e571886
1266 | 614e7078db
1267 | 619812a1a7
1268 | 61b481a78b
1269 | 61c7172650
1270 | 61cf7e40d2
1271 | 61d08ef5a1
1272 | 61da008958
1273 | 61ed178ecb
1274 | 61f5d1282c
1275 | 61fd977e49
1276 | 621584cffe
1277 | 625817a927
1278 | 625892cf0b
1279 | 625b89d28a
1280 | 629995af95
1281 | 62a0840bb5
1282 | 62ad6e121c
1283 | 62d6ece152
1284 | 62ede7b2da
1285 | 62f025e1bc
1286 | 6316faaebc
1287 | 63281534dc
1288 | 634058dda0
1289 | 6353f09384
1290 | 6363c87314
1291 | 636e4872e0
1292 | 637681cd6b
1293 | 6376d49f31
1294 | 6377809ec2
1295 | 63936d7de5
1296 | 639bddef11
1297 | 63d37e9fd3
1298 | 63d90c2bae
1299 | 63e544a5d6
1300 | 63ebbcf874
1301 | 63fff40b31
1302 | 6406c72e4d
1303 | 64148128be
1304 | 6419386729
1305 | 643092bc41
1306 | 644081b88d
1307 | 64453cf61d
1308 | 644bad9729
1309 | 6454f548fd
1310 | 645913b63a
1311 | 64750b825f
1312 | 64a43876b7
1313 | 64dd6c83e3
1314 | 64e05bf46e
1315 | 64f55f1478
1316 | 650b0165e4
1317 | 651066ed39
1318 | 652b67d960
1319 | 653821d680
1320 | 6538d00d73
1321 | 65866dce22
1322 | 6589565c8c
1323 | 659832db64
1324 | 65ab7e1d98
1325 | 65b7dda462
1326 | 65bd5eb4f5
1327 | 65dcf115ab
1328 | 65e9825801
1329 | 65f9afe51c
1330 | 65ff12bcb5
1331 | 666b660284
1332 | 6671643f31
1333 | 668364b372
1334 | 66852243cb
1335 | 6693a52081
1336 | 669b572898
1337 | 66e98e78f5
1338 | 670f12e88f
1339 | 674c12c92d
1340 | 675c27208a
1341 | 675ed3e1ca
1342 | 67741db50a
1343 | 678a2357eb
1344 | 67b0f4d562
1345 | 67cfbff9b1
1346 | 67e717d6bd
1347 | 67ea169a3b
1348 | 67ea809e0e
1349 | 681249baa3
1350 | 683de643d9
1351 | 6846ac20df
1352 | 6848e012ef
1353 | 684bcd8812
1354 | 684dc1c40c
1355 | 685a1fa9cf
1356 | 686dafaac9
1357 | 68807d8601
1358 | 6893778c77
1359 | 6899d2dabe
1360 | 68a2fad4ab
1361 | 68cb45fda3
1362 | 68cc4a1970
1363 | 68dcb40675
1364 | 68ea4a8c3d
1365 | 68f6e7fbf0
1366 | 68fa8300b4
1367 | 69023db81f
1368 | 6908ccf557
1369 | 691a111e7c
1370 | 6927723ba5
1371 | 692ca0e1a2
1372 | 692eb57b63
1373 | 69340faa52
1374 | 693cbf0c9d
1375 | 6942f684ad
1376 | 6944fc833b
1377 | 69491c0ebf
1378 | 695b61a2b0
1379 | 6979b4d83f
1380 | 697d4fdb02
1381 | 69910460a4
1382 | 6997636670
1383 | 69a436750b
1384 | 69aebf7669
1385 | 69b8c17047
1386 | 69c67f109f
1387 | 69e0e7b868
1388 | 69ea9c09d1
1389 | 69f0af42a6
1390 | 6a078cdcc7
1391 | 6a37a91708
1392 | 6a42176f2e
1393 | 6a48e4aea8
1394 | 6a5977be3a
1395 | 6a5de0535f
1396 | 6a80d2e2e5
1397 | 6a96c8815d
1398 | 6a986084e2
1399 | 6aa8e50445
1400 | 6ab9dce449
1401 | 6abf0ba6b2
1402 | 6acc6049d9
1403 | 6adb31756c
1404 | 6ade215eb0
1405 | 6afb7d50e4
1406 | 6afd692f1a
1407 | 6b0b1044fe
1408 | 6b17c67633
1409 | 6b1b6ef28b
1410 | 6b1e04d00d
1411 | 6b2261888d
1412 | 6b25d6528a
1413 | 6b3a24395c
1414 | 6b685eb75b
1415 | 6b79be238c
1416 | 6b928b7ba6
1417 | 6b9c43c25a
1418 | 6ba99cc41f
1419 | 6bdab62bcd
1420 | 6bf2e853b1
1421 | 6bf584200f
1422 | 6bf95df2b9
1423 | 6c0949c51c
1424 | 6c11a5f11f
1425 | 6c23d89189
1426 | 6c4387daf5
1427 | 6c4ce479a4
1428 | 6c5123e4bc
1429 | 6c54265f16
1430 | 6c56848429
1431 | 6c623fac5f
1432 | 6c81b014e9
1433 | 6c99ea7c31
1434 | 6c9d29d509
1435 | 6c9e3b7d1a
1436 | 6ca006e283
1437 | 6caeb928d6
1438 | 6cb2ee722a
1439 | 6cbfd32c5e
1440 | 6cc791250b
1441 | 6cccc985e0
1442 | 6d12e30c48
1443 | 6d4bf200ad
1444 | 6d6d2b8843
1445 | 6d6eea5682
1446 | 6d7a3d0c21
1447 | 6d7efa9b9e
1448 | 6da21f5c91
1449 | 6da6adabc0
1450 | 6dd2827fbb
1451 | 6dd36705b9
1452 | 6df3637557
1453 | 6dfe55e9e5
1454 | 6e1a21ba55
1455 | 6e2f834767
1456 | 6e36e4929a
1457 | 6e4f460caf
1458 | 6e618d26b6
1459 | 6ead4670f7
1460 | 6eaff19b9f
1461 | 6eb2e1cd9e
1462 | 6eb30b3b5a
1463 | 6eca26c202
1464 | 6ecad29e52
1465 | 6ef0b44654
1466 | 6efcfe9275
1467 | 6f4789045c
1468 | 6f49f522ef
1469 | 6f67d7c4c4
1470 | 6f96e91d81
1471 | 6fc6fce380
1472 | 6fc9b44c00
1473 | 6fce7f3226
1474 | 6fdf1ca888
1475 | 702fd8b729
1476 | 70405185d2
1477 | 7053e4f41e
1478 | 707bf4ce41
1479 | 7082544248
1480 | 708535b72a
1481 | 7094ac0f60
1482 | 70a6b875fa
1483 | 70c3e97e41
1484 | 7106b020ab
1485 | 711dce6fe2
1486 | 7136a4453f
1487 | 7143fb084f
1488 | 714d902095
1489 | 7151c53b32
1490 | 715357be94
1491 | 7163b8085f
1492 | 716df1aa59
1493 | 71caded286
1494 | 71d2665f35
1495 | 71d67b9e19
1496 | 71e06dda39
1497 | 720b398b9c
1498 | 720e3fa04c
1499 | 720e7a5f1e
1500 | 721bb6f2cb
1501 | 722803f4f2
1502 | 72552a07c9
1503 | 726243a205
1504 | 72690ef572
1505 | 728cda9b65
1506 | 728e81c319
1507 | 72a810a799
1508 | 72acb8cdf6
1509 | 72b01281f9
1510 | 72cac683e4
1511 | 72cadebbce
1512 | 72cae058a5
1513 | 72d8dba870
1514 | 72e8d1c1ff
1515 | 72edc08285
1516 | 72f04f1a38
1517 | 731b825695
1518 | 7320b49b13
1519 | 732626383b
1520 | 732df1eb05
1521 | 73329902ab
1522 | 733798921e
1523 | 733824d431
1524 | 734ea0d7fb
1525 | 735a7cf7b9
1526 | 7367a42892
1527 | 7368d5c053
1528 | 73c6ae7711
1529 | 73e1852735
1530 | 73e4e5cc74
1531 | 73eac9156b
1532 | 73f8441a88
1533 | 7419e2ab3f
1534 | 74267f68b9
1535 | 7435690c8c
1536 | 747c44785c
1537 | 747f1b1f2f
1538 | 748b2d5c01
1539 | 74d4cee0a4
1540 | 74ec2b3073
1541 | 74ef677020
1542 | 750be4c4d8
1543 | 75172d4ac8
1544 | 75285a7eb1
1545 | 75504539c3
1546 | 7550949b1d
1547 | 7551cbd537
1548 | 75595b453d
1549 | 7559b4b0ec
1550 | 755bd1fbeb
1551 | 756f76f74d
1552 | 7570ca7f3c
1553 | 757a69746e
1554 | 757cac96c6
1555 | 7584129dc3
1556 | 75a058dbcd
1557 | 75b09ce005
1558 | 75cae39a8f
1559 | 75cee6caf0
1560 | 75cf58fb2c
1561 | 75d5c2f32a
1562 | 75eaf5669d
1563 | 75f7937438
1564 | 75f99bd3b3
1565 | 75fa586876
1566 | 7613df1f84
1567 | 762e1b3487
1568 | 76379a3e69
1569 | 764271f0f3
1570 | 764503c499
1571 | 7660005554
1572 | 7666351b84
1573 | 76693db153
1574 | 767856368b
1575 | 768671f652
1576 | 768802b80d
1577 | 76962c7ed2
1578 | 76a75f4eee
1579 | 76b90809f7
1580 | 770a441457
1581 | 772a0fa402
1582 | 772f2ffc3e
1583 | 774f6c2175
1584 | 77610860e0
1585 | 777e58ff3d
1586 | 77920f1708
1587 | 7799df28e7
1588 | 779e847a9a
1589 | 77ba4edc72
1590 | 77c834dc43
1591 | 77d8aa8691
1592 | 77e7f38f4d
1593 | 77eea6845e
1594 | 7806308f33
1595 | 78254660ea
1596 | 7828af8bff
1597 | 784398620a
1598 | 784d201b12
1599 | 78613981ed
1600 | 78896c6baf
1601 | 78aff3ebc0
1602 | 78c7c03716
1603 | 78d3676361
1604 | 78e29dd4c3
1605 | 78f1a1a54f
1606 | 79208585cd
1607 | 792218456c
1608 | 7923bad550
1609 | 794e6fc49f
1610 | 796e6762ce
1611 | 797cd21f71
1612 | 79921b21c2
1613 | 79a5778027
1614 | 79bc006280
1615 | 79bf95e624
1616 | 79d9e00c55
1617 | 79e20fc008
1618 | 79e9db913e
1619 | 79f014085e
1620 | 79fcbb433a
1621 | 7a13a5dfaa
1622 | 7a14bc9a36
1623 | 7a3c535f70
1624 | 7a446a51e9
1625 | 7a56e759c5
1626 | 7a5f46198d
1627 | 7a626ec98d
1628 | 7a802264c4
1629 | 7a8b5456ca
1630 | 7abdff3086
1631 | 7aecf9f7ac
1632 | 7b0fd09c28
1633 | 7b18b3db87
1634 | 7b39fe7371
1635 | 7b49e03d4c
1636 | 7b5388c9f1
1637 | 7b5cf7837f
1638 | 7b733d31d8
1639 | 7b74fd7b98
1640 | 7b918ccb8a
1641 | 7ba3ce3485
1642 | 7bb0abc031
1643 | 7bb5bb25cd
1644 | 7bb7dac673
1645 | 7bc7761b8c
1646 | 7bf3820566
1647 | 7c03a18ec1
1648 | 7c078f211b
1649 | 7c37d7991a
1650 | 7c4ec17eff
1651 | 7c649c2aaf
1652 | 7c73340ab7
1653 | 7c78a2266d
1654 | 7c88ce3c5b
1655 | 7ca6843a72
1656 | 7cc9258dee
1657 | 7cec7296ae
1658 | 7d0ffa68a4
1659 | 7d11b4450f
1660 | 7d1333fcbe
1661 | 7d18074fef
1662 | 7d18c8c716
1663 | 7d508fb027
1664 | 7d55f791f0
1665 | 7d74e3c2f6
1666 | 7d783f67a9
1667 | 7d83a5d854
1668 | 7dd409947e
1669 | 7de45f75e5
1670 | 7e0cd25696
1671 | 7e1922575c
1672 | 7e1e3bbcc1
1673 | 7e24023274
1674 | 7e2f212fd3
1675 | 7e6d1cc1f4
1676 | 7e7cdcb284
1677 | 7e9b6bef69
1678 | 7ea5b49283
1679 | 7eb2605d96
1680 | 7eb26b8485
1681 | 7ecd1f0c69
1682 | 7f02b3cfe2
1683 | 7f1723f0d5
1684 | 7f21063c3a
1685 | 7f3658460e
1686 | 7f54132e48
1687 | 7f559f9d4a
1688 | 7f5faedf8b
1689 | 7f838baf2b
1690 | 7fa5f527e3
1691 | 7ff84d66dd
1692 | 802b45c8c4
1693 | 804382b1ad
1694 | 804c558adb
1695 | 804f6338a4
1696 | 8056117b89
1697 | 806b6223ab
1698 | 8088bda461
1699 | 80b790703b
1700 | 80c4a94706
1701 | 80ce2e351b
1702 | 80db581acd
1703 | 80e12193df
1704 | 80e41b608f
1705 | 80f16b016d
1706 | 81541b3725
1707 | 8175486e6a
1708 | 8179095000
1709 | 8193671178
1710 | 81a58d2c6b
1711 | 81aa1286fb
1712 | 81dffd30fb
1713 | 8200245704
1714 | 823e7a86e8
1715 | 824973babb
1716 | 824ca5538f
1717 | 827171a845
1718 | 8273a03530
1719 | 827cf4f886
1720 | 82b865c7dd
1721 | 82c1517708
1722 | 82d15514d6
1723 | 82e117b900
1724 | 82fec06574
1725 | 832b5ef379
1726 | 83424c9fbf
1727 | 8345358fb8
1728 | 834b50b31b
1729 | 835e3b67d7
1730 | 836ea92b15
1731 | 837c618777
1732 | 838eb3bd89
1733 | 839381063f
1734 | 839bc71489
1735 | 83a8151377
1736 | 83ae88d217
1737 | 83ca8bcad0
1738 | 83ce590d7f
1739 | 83d3130ba0
1740 | 83d40bcba5
1741 | 83daba503a
1742 | 83de906ec0
1743 | 84044f37f3
1744 | 84696b5a5e
1745 | 84752191a3
1746 | 847eeeb2e0
1747 | 848e7835a0
1748 | 84a4b29286
1749 | 84a4bf147d
1750 | 84be115c09
1751 | 84d95c4350
1752 | 84e0922cf7
1753 | 84f0cfc665
1754 | 8515f6db22
1755 | 851f2f32c1
1756 | 852a4d6067
1757 | 854c48b02a
1758 | 857a387c86
1759 | 859633d56a
1760 | 85a4f4a639
1761 | 85ab85510c
1762 | 85b1eda0d9
1763 | 85dc1041c6
1764 | 85e081f3c7
1765 | 85f75187ad
1766 | 8604bb2b75
1767 | 860745b042
1768 | 863b4049d7
1769 | 8643de22d0
1770 | 8647d06439
1771 | 864ffce4fe
1772 | 8662d9441a
1773 | 8666521b13
1774 | 868d6a0685
1775 | 869fa45998
1776 | 86a40b655d
1777 | 86a8ae4223
1778 | 86b2180703
1779 | 86c85d27df
1780 | 86d3755680
1781 | 86e61829a1
1782 | 871015806c
1783 | 871e409c5c
1784 | 8744b861ce
1785 | 8749369ba0
1786 | 878a299541
1787 | 8792c193a0
1788 | 8799ab0118
1789 | 87d1f7d741
1790 | 882b9e4500
1791 | 885673ea17
1792 | 8859dedf41
1793 | 8873ab2806
1794 | 887a93b198
1795 | 8883e991a9
1796 | 8891aa6dfa
1797 | 8899d8cbcd
1798 | 88b8274d67
1799 | 88d3b80af6
1800 | 88ede83da2
1801 | 88f345941b
1802 | 890976d6da
1803 | 8909bde9ab
1804 | 8929c7d5d9
1805 | 89363acf76
1806 | 89379487e0
1807 | 8939db6354
1808 | 893f658345
1809 | 8953138465
1810 | 895c96d671
1811 | 895cbf96f9
1812 | 895e8b29a7
1813 | 898fa256c8
1814 | 89986c60be
1815 | 89b874547b
1816 | 89bdb021d5
1817 | 89c802ff9c
1818 | 89d6336c2b
1819 | 89ebb27334
1820 | 8a27e2407c
1821 | 8a31f7bca5
1822 | 8a4a2fc105
1823 | 8a5d6c619c
1824 | 8a75ad7924
1825 | 8aa817e4ed
1826 | 8aad0591eb
1827 | 8aca214360
1828 | 8ae168c71b
1829 | 8b0cfbab97
1830 | 8b3645d826
1831 | 8b3805dbd4
1832 | 8b473f0f5d
1833 | 8b4f6d1186
1834 | 8b4fb018b7
1835 | 8b518ee936
1836 | 8b523bdfd6
1837 | 8b52fb5fba
1838 | 8b91036e5c
1839 | 8b99a77ac5
1840 | 8ba04b1e7b
1841 | 8ba782192f
1842 | 8bbeaad78b
1843 | 8bd1b45776
1844 | 8bd7a2dda6
1845 | 8bdb091ccf
1846 | 8be56f165d
1847 | 8be950d00f
1848 | 8bf84e7d45
1849 | 8bffc4374b
1850 | 8bfff50747
1851 | 8c09867481
1852 | 8c0a3251c3
1853 | 8c3015cccb
1854 | 8c469815cf
1855 | 8c9ccfedc7
1856 | 8ca1af9f3c
1857 | 8ca3f6e6c1
1858 | 8ca6a4f60f
1859 | 8cac6900fe
1860 | 8cba221a1e
1861 | 8cbbe62ccd
1862 | 8d064b29e2
1863 | 8d167e7c08
1864 | 8d4ab94e1c
1865 | 8d81f6f899
1866 | 8d87897d66
1867 | 8dcccd2bd2
1868 | 8dcfb878a8
1869 | 8dd3ab71b9
1870 | 8dda6bf10f
1871 | 8ddd51ca94
1872 | 8dea22c533
1873 | 8def5bd3bf
1874 | 8e1848197c
1875 | 8e3a83cf2d
1876 | 8e478e73f3
1877 | 8e98ae3c84
1878 | 8ea6687ab0
1879 | 8eb0d315c1
1880 | 8ec10891f9
1881 | 8ec3065ec2
1882 | 8ecf51a971
1883 | 8eddbab9f7
1884 | 8ee198467a
1885 | 8ee2368f40
1886 | 8ef595ce82
1887 | 8f0a653ad7
1888 | 8f1204a732
1889 | 8f1600f7f6
1890 | 8f16366707
1891 | 8f1ce0a411
1892 | 8f2e05e814
1893 | 8f320d0e09
1894 | 8f3b4a84ad
1895 | 8f3fdad3da
1896 | 8f5d3622d8
1897 | 8f62a2c633
1898 | 8f81c9405a
1899 | 8f8c974d53
1900 | 8f918598b6
1901 | 8ff61619f6
1902 | 9002761b41
1903 | 90107941f3
1904 | 90118a42ee
1905 | 902bc16b37
1906 | 903e87e0d6
1907 | 9041a0f489
1908 | 9047bf3222
1909 | 9057bfa502
1910 | 90617b0954
1911 | 9076f4b6db
1912 | 9077e69b08
1913 | 909655b4a6
1914 | 909c2eca88
1915 | 909dbd1b76
1916 | 90bc4a319a
1917 | 90c7a87887
1918 | 90cc785ddd
1919 | 90d300f09b
1920 | 9101ea9b1b
1921 | 9108130458
1922 | 911ac9979b
1923 | 9151cad9b5
1924 | 9153762797
1925 | 91634ee0c9
1926 | 916942666f
1927 | 9198cfb4ea
1928 | 919ac864d6
1929 | 91b67d58d4
1930 | 91bb8df281
1931 | 91be106477
1932 | 91c33b4290
1933 | 91ca7dd9f3
1934 | 91d095f869
1935 | 91f107082e
1936 | 920329dd5e
1937 | 920c959958
1938 | 92128fbf4b
1939 | 9223dacb40
1940 | 923137bb7f
1941 | 9268e1f88a
1942 | 927647fe08
1943 | 9276f5ba47
1944 | 92a28cd233
1945 | 92b5c1fc6d
1946 | 92c46be756
1947 | 92dabbe3a0
1948 | 92e3159361
1949 | 92ebab216a
1950 | 934bdc2893
1951 | 9359174efc
1952 | 935d97dd2f
1953 | 935feaba1b
1954 | 93901858ee
1955 | 939378f6d6
1956 | 939bdf742e
1957 | 93a22bee7e
1958 | 93da9aeddf
1959 | 93e2feacce
1960 | 93e6f1fdf9
1961 | 93e811e393
1962 | 93e85d8fd3
1963 | 93f623d716
1964 | 93ff35e801
1965 | 94031f12f2
1966 | 94091a4873
1967 | 94125907e3
1968 | 9418653742
1969 | 941c870569
1970 | 94209c86f0
1971 | 9437c715eb
1972 | 9445c3eca2
1973 | 9467c8617c
1974 | 946d71fb5d
1975 | 948f3ae6fb
1976 | 9498baa359
1977 | 94a33abeab
1978 | 94bf1af5e3
1979 | 94cf3a8025
1980 | 94db712ac8
1981 | 94e4b66cff
1982 | 94e76cbaf6
1983 | 950be91db1
1984 | 952058e2d0
1985 | 952633c37f
1986 | 952ec313fe
1987 | 9533fc037c
1988 | 9574b81269
1989 | 9579b73761
1990 | 957f7bc48b
1991 | 958073d2b0
1992 | 9582e0eb33
1993 | 9584092d0b
1994 | 95b58b8004
1995 | 95bd88da55
1996 | 95f74a9959
1997 | 962781c601
1998 | 962f045bf5
1999 | 964ad23b44
2000 | 967b90590e
2001 | 967bffe201
2002 | 96825c4714
2003 | 968492136a
2004 | 9684ef9d64
2005 | 968c41829e
2006 | 96a856ef9a
2007 | 96dfc49961
2008 | 96e1a5b4f8
2009 | 96e6ff0917
2010 | 96fb88e9d7
2011 | 96fbe5fc23
2012 | 96fc924050
2013 | 9715cc83dc
2014 | 9720eff40f
2015 | 972c187c0d
2016 | 97476eb38d
2017 | 97659ed431
2018 | 9773492949
2019 | 97756b264f
2020 | 977bff0d10
2021 | 97ab569ff3
2022 | 97ba838008
2023 | 97d9d008c7
2024 | 97e59f09fa
2025 | 97eb642e56
2026 | 98043e2d14
2027 | 981ff580cf
2028 | 983e66cbfc
2029 | 984f0f1c36
2030 | 98595f2bb4
2031 | 985c3be474
2032 | 9869a12362
2033 | 986b5a5e18
2034 | 9877af5063
2035 | 98911292da
2036 | 9893a3cf77
2037 | 9893d9202d
2038 | 98a8b06e7f
2039 | 98ac6f93d9
2040 | 98b6974d12
2041 | 98ba3c9417
2042 | 98c7c00a19
2043 | 98d044f206
2044 | 98e909f9d1
2045 | 98fe7f0410
2046 | 990f2742c7
2047 | 992bd0779a
2048 | 994b9b47ba
2049 | 9955b76bf5
2050 | 9966f3adac
2051 | 997117a654
2052 | 999d53d841
2053 | 99c04108d3
2054 | 99c4277aee
2055 | 99c6b1acf2
2056 | 99dc8bb20b
2057 | 99fcba71e5
2058 | 99fecd4efb
2059 | 9a02c70ba2
2060 | 9a08e7a6f8
2061 | 9a2f2c0f86
2062 | 9a3254a76e
2063 | 9a3570a020
2064 | 9a39112493
2065 | 9a4e9fd399
2066 | 9a50af4bfb
2067 | 9a68631d24
2068 | 9a72318dbf
2069 | 9a767493b7
2070 | 9a7fc1548b
2071 | 9a84ccf6a7
2072 | 9a9c0e15b7
2073 | 9adf06d89b
2074 | 9b22b54ee4
2075 | 9b473fc8fe
2076 | 9b4f081782
2077 | 9b997664ba
2078 | 9bc454e109
2079 | 9bccfd04de
2080 | 9bce4583a2
2081 | 9bebf1b87f
2082 | 9bfc50d261
2083 | 9c166c86ff
2084 | 9c293ef4d7
2085 | 9c29c047b0
2086 | 9c3bc2e2a7
2087 | 9c3ce23bd1
2088 | 9c404cac0c
2089 | 9c5180d23a
2090 | 9c7feca6e4
2091 | 9caa49d3ff
2092 | 9cb2f1b646
2093 | 9ce6f765c3
2094 | 9cfee34031
2095 | 9d01f08ec6
2096 | 9d04c280b8
2097 | 9d12ceaddc
2098 | 9d15f8cb3c
2099 | 9d2101e9bf
2100 | 9d407c3aeb
2101 | 9ddefc6165
2102 | 9df0b1e298
2103 | 9e16f115d8
2104 | 9e249b4982
2105 | 9e29b1982c
2106 | 9e493e4773
2107 | 9e4c752cd0
2108 | 9e4de40671
2109 | 9e6319faeb
2110 | 9e6ddbb52d
2111 | 9eadcea74f
2112 | 9ecec5f8ea
2113 | 9efb47b595
2114 | 9f30bfe61e
2115 | 9f3734c3a4
2116 | 9f5b858101
2117 | 9f66640cda
2118 | 9f913803e9
2119 | 9f97bc74c8
2120 | 9fbad86e20
2121 | 9fc2bad316
2122 | 9fc5c3af78
2123 | 9fcb310255
2124 | 9fcc256871
2125 | 9fd2fd4d47
2126 | a0071ae316
2127 | a023141022
2128 | a046399a74
2129 | a066e739c1
2130 | a06722ba82
2131 | a07a15dd64
2132 | a07b47f694
2133 | a09c39472e
2134 | a0b208fe2e
2135 | a0b61c959e
2136 | a0bc6c611d
2137 | a0e6da5ba2
2138 | a1193d6490
2139 | a14ef483ff
2140 | a14f709908
2141 | a15ccc5658
2142 | a16062456f
2143 | a174e8d989
2144 | a177c2733c
2145 | a17c62e764
2146 | a18ad065fc
2147 | a1aaf63216
2148 | a1bb65fb91
2149 | a1bd8e5349
2150 | a1dfdd0cac
2151 | a2052e4f6c
2152 | a20fd34693
2153 | a21ffe4d81
2154 | a22349e647
2155 | a235d01ec1
2156 | a24f63e8a2
2157 | a2554c9f6d
2158 | a263ce8a87
2159 | a29bfc29ec
2160 | a2a80072d4
2161 | a2a800ab63
2162 | a2bcd10a33
2163 | a2bdaff3b0
2164 | a2c146ab0d
2165 | a2c996e429
2166 | a2dc51ebe8
2167 | a2e6608bfa
2168 | a2f2a55f01
2169 | a301869dea
2170 | a31fccd2cc
2171 | a34f440f33
2172 | a35e0206da
2173 | a36bdc4cab
2174 | a36e8c79d8
2175 | a378053b20
2176 | a37db3a2b3
2177 | a38950ebc2
2178 | a39a0eb433
2179 | a39c9bca52
2180 | a3a945dc8c
2181 | a3b40a0c1e
2182 | a3b8588550
2183 | a3c502bec3
2184 | a3f2878017
2185 | a3f4d58010
2186 | a3f51855c3
2187 | a402dc0dfe
2188 | a4065a7eda
2189 | a412bb2fef
2190 | a416b56b53
2191 | a41ec95906
2192 | a43299e362
2193 | a4757bd7af
2194 | a48c53c454
2195 | a49dcf9ad5
2196 | a4a506521f
2197 | a4ba7753d9
2198 | a4bac06849
2199 | a4f05d681c
2200 | a50c10060f
2201 | a50eb5a0ea
2202 | a5122c6ec6
2203 | a522b1aa79
2204 | a590915345
2205 | a5b5b59139
2206 | a5b77abe43
2207 | a5c2b2c3e1
2208 | a5cd17bb11
2209 | a5da03aef1
2210 | a5dd11de0d
2211 | a5ea2b93b6
2212 | a5eaeac80b
2213 | a5ec5b0265
2214 | a5f350a87e
2215 | a5f472caf4
2216 | a6027a53cf
2217 | a61715bb1b
2218 | a61cf4389d
2219 | a61d9bbd9b
2220 | a6470dbbf5
2221 | a64a40f3eb
2222 | a653d5c23b
2223 | a65bd23cb5
2224 | a66e0b7ad4
2225 | a66fc5053c
2226 | a68259572b
2227 | a6a810a92c
2228 | a6bc36937f
2229 | a6c3a374e9
2230 | a6d8a4228d
2231 | a6f4e0817f
2232 | a71e0481f5
2233 | a7203deb2d
2234 | a7392d4438
2235 | a73d3c3902
2236 | a7491f1578
2237 | a74b9ca19c
2238 | a77b7a91df
2239 | a78195a5f5
2240 | a78758d4ce
2241 | a7e6d6c29a
2242 | a800d85e88
2243 | a832fa8790
2244 | a83d06410d
2245 | a8999af004
2246 | a8f78125b9
2247 | a907b18df1
2248 | a919392446
2249 | a965504e88
2250 | a96b84b8d2
2251 | a973f239cd
2252 | a977126596
2253 | a9804f2a08
2254 | a984e56893
2255 | a99738f24c
2256 | a99bdd0079
2257 | a9c9c1517e
2258 | a9cbf9c41b
2259 | a9e42e3c0c
2260 | aa07b7c1c0
2261 | aa175e5ec7
2262 | aa1a338630
2263 | aa27d7b868
2264 | aa45f1caaf
2265 | aa49e46432
2266 | aa51934e1b
2267 | aa6287bb6c
2268 | aa6d999971
2269 | aa85278334
2270 | aab33f0e2a
2271 | aaba004362
2272 | aade4cf385
2273 | aae78feda4
2274 | aaed233bf3
2275 | aaff16c2db
2276 | ab199e8dfb
2277 | ab23b78715
2278 | ab2e1b5577
2279 | ab33a18ded
2280 | ab45078265
2281 | ab56201494
2282 | ab90f0d24b
2283 | abab2e6c20
2284 | abb50c8697
2285 | abbe2d15a0
2286 | abbe73cd21
2287 | abe61a11bb
2288 | abeae8ce21
2289 | ac2b431d5f
2290 | ac2cb1b9eb
2291 | ac31fcd6d0
2292 | ac3d3a126d
2293 | ac46bd8087
2294 | ac783ef388
2295 | acb73e4297
2296 | acbf581760
2297 | accafc3531
2298 | acf2c4b745
2299 | acf44293a2
2300 | acf736a27b
2301 | acff336758
2302 | ad1fe56886
2303 | ad28f9b9d9
2304 | ad2de9f80e
2305 | ad397527b2
2306 | ad3d1cfbcb
2307 | ad3fada9d9
2308 | ad4108ee8e
2309 | ad54468654
2310 | ad573f7d31
2311 | ad6255bc29
2312 | ad65ebaa07
2313 | ad97cc064a
2314 | adabbd1cc4
2315 | adb0b5a270
2316 | adc648f890
2317 | add21ee467
2318 | adfd15ceef
2319 | adfdd52eac
2320 | ae01cdab63
2321 | ae0b50ff4f
2322 | ae13ee3d70
2323 | ae1bcbd423
2324 | ae20d09dea
2325 | ae2cecf5f6
2326 | ae3bc4a0ef
2327 | ae499c7514
2328 | ae628f2cd4
2329 | ae8545d581
2330 | ae93214fe6
2331 | ae9cd16dbf
2332 | aeba9ac967
2333 | aebb242b5c
2334 | aed4e0b4c4
2335 | aedd71f125
2336 | aef3e2cb0e
2337 | af0b54cee3
2338 | af3de54c7a
2339 | af5fd24a36
2340 | af8826d084
2341 | af8ad72057
2342 | afb71e22c5
2343 | afcb331e1f
2344 | afe1a35c1e
2345 | b01080b5d3
2346 | b05ad0d345
2347 | b0623a6232
2348 | b064dbd4b7
2349 | b06ed37831
2350 | b06f5888e6
2351 | b08dcc490e
2352 | b0a68228dc
2353 | b0aece727f
2354 | b0b0731606
2355 | b0c7f11f9f
2356 | b0cca8b830
2357 | b0dd580a89
2358 | b0de66ca08
2359 | b0df7c5c5c
2360 | b0f5295608
2361 | b11099eb09
2362 | b132a53086
2363 | b1399fac64
2364 | b13abc0c69
2365 | b1457e3b5e
2366 | b15bf4453b
2367 | b179c4a82d
2368 | b17ee70e8c
2369 | b190b1aa65
2370 | b19b3e22c0
2371 | b19c561fab
2372 | b1d1cd2e6e
2373 | b1d7c03927
2374 | b1d7fe2753
2375 | b1f540a4bd
2376 | b1fc9c64e1
2377 | b1fcbb3ced
2378 | b220939e93
2379 | b22099b419
2380 | b241e95235
2381 | b2432ae86d
2382 | b2456267df
2383 | b247940d01
2384 | b24af1c35c
2385 | b24f600420
2386 | b24fe36b2a
2387 | b258fb0b7d
2388 | b26b219919
2389 | b26d9904de
2390 | b274456ce1
2391 | b27b28d581
2392 | b2a26bc912
2393 | b2a9c51e1b
2394 | b2b0baf470
2395 | b2b2756fe7
2396 | b2ce7699e3
2397 | b2edc76bd2
2398 | b2f6b52100
2399 | b30bf47bcd
2400 | b34105a4e9
2401 | b372a82edf
2402 | b3779a1962
2403 | b379ab4ff5
2404 | b37a1d69e3
2405 | b37c01396e
2406 | b382b09e25
2407 | b3996e4ba5
2408 | b3d9ca2aee
2409 | b3dde1e1e9
2410 | b3eb7f05eb
2411 | b40b25055c
2412 | b41e0f1f19
2413 | b44e32a42b
2414 | b4805ae9cd
2415 | b4807569a5
2416 | b48efceb3e
2417 | b493c25c7f
2418 | b4b565aba1
2419 | b4b715a15b
2420 | b4d0c90bf4
2421 | b4d84bc371
2422 | b4e5ad97aa
2423 | b4eaea9e6b
2424 | b50f4b90d5
2425 | b53f675641
2426 | b54278cd43
2427 | b554843889
2428 | b573c0677a
2429 | b58d853734
2430 | b5943b18ab
2431 | b5a09a83f3
2432 | b5aae1fe25
2433 | b5b9da5364
2434 | b5eb64d419
2435 | b5ebb1d000
2436 | b5f1c0c96a
2437 | b5f7fece90
2438 | b6070de1bb
2439 | b60a76fe73
2440 | b61f998772
2441 | b62c943664
2442 | b63094ba0c
2443 | b64fca8100
2444 | b673e7dcfb
2445 | b678b7db00
2446 | b68fc1b217
2447 | b69926d9fa
2448 | b6a1df3764
2449 | b6a4859528
2450 | b6b4738b78
2451 | b6b4f847b7
2452 | b6b8d502d4
2453 | b6bb00e366
2454 | b6d65a9eef
2455 | b6d79a0845
2456 | b6e9ec577f
2457 | b6ec609f7b
2458 | b6f92a308d
2459 | b70a2c0ab1
2460 | b70a5a0d50
2461 | b70c052f2f
2462 | b70d231781
2463 | b72ac6e10b
2464 | b7302d8226
2465 | b73867d769
2466 | b751e767f2
2467 | b76df6e059
2468 | b77e5eddef
2469 | b7a2c2c83c
2470 | b7bcbe6466
2471 | b7c2a469c4
2472 | b7d69da8f0
2473 | b7f31b7c36
2474 | b7f675fb98
2475 | b7fb871660
2476 | b82e5ad1c9
2477 | b841cfb932
2478 | b84b8ae665
2479 | b85b78ac2b
2480 | b86c17caa6
2481 | b86e50d82d
2482 | b871db031a
2483 | b87d56925a
2484 | b8aaa59b75
2485 | b8c03d1091
2486 | b8c3210036
2487 | b8e16df00b
2488 | b8f34cf72e
2489 | b8fb75864e
2490 | b9004db86c
2491 | b9166cbae9
2492 | b920b256a6
2493 | b938d79dff
2494 | b93963f214
2495 | b941aef1a0
2496 | b94d34d14e
2497 | b964c57da4
2498 | b96a95bc7a
2499 | b96c57d2c7
2500 | b9b6bdde0c
2501 | b9bcb3e0f2
2502 | b9d3b92169
2503 | b9dd4b306c
2504 | b9f43ef41e
2505 | ba1f03c811
2506 | ba3a775d7b
2507 | ba3c7f2a31
2508 | ba3fcd417d
2509 | ba5e1f4faa
2510 | ba795f3089
2511 | ba8a291e6a
2512 | ba98512f97
2513 | bac9db04f5
2514 | baedae3442
2515 | baff40d29d
2516 | bb04e28695
2517 | bb1b0ee89f
2518 | bb1c770fe7
2519 | bb1fc34f99
2520 | bb2d220506
2521 | bb334e5cdb
2522 | bb337f9830
2523 | bb721eb9aa
2524 | bb87ff58bd
2525 | bb89a6b18a
2526 | bbaa9a036a
2527 | bbb4302dda
2528 | bbd31510cf
2529 | bbe0256a75
2530 | bc141b9ad5
2531 | bc17ab8a99
2532 | bc318160de
2533 | bc3b9ee033
2534 | bc4240b43c
2535 | bc4ce49105
2536 | bc4f71372d
2537 | bc6b8d6371
2538 | bcaad44ad7
2539 | bcc241b081
2540 | bcc5d8095e
2541 | bcd1d39afb
2542 | bd0d849da4
2543 | bd0e9ed437
2544 | bd2c94730f
2545 | bd321d2be6
2546 | bd3ec46511
2547 | bd5b2e2848
2548 | bd7e02b139
2549 | bd96f9943a
2550 | bda224cb25
2551 | bda4a82837
2552 | bdb74e333f
2553 | bdccd69dde
2554 | bddcc15521
2555 | be116aab29
2556 | be15e18f1e
2557 | be1a284edb
2558 | be2a367a7b
2559 | be376082d0
2560 | be3e3cffbd
2561 | be5d1d89a0
2562 | be8b72fe37
2563 | be9b29e08e
2564 | bea1f6e62c
2565 | bea83281b5
2566 | beb921a4c9
2567 | bec5e9edcd
2568 | beeb8a3f92
2569 | bf2232b58d
2570 | bf28751739
2571 | bf443804e8
2572 | bf461df850
2573 | bf5374f122
2574 | bf551a6f60
2575 | bf8d0f5ada
2576 | bf961167a6
2577 | bfab1ad8f9
2578 | bfcb05d88d
2579 | bfd8f6e6c9
2580 | bfd91d0742
2581 | bfe262322f
2582 | c013f42ed7
2583 | c01878083f
2584 | c01faff1ed
2585 | c046fd0edb
2586 | c053e35f97
2587 | c079a6482d
2588 | c0847b521a
2589 | c0a1e06710
2590 | c0e8d4635c
2591 | c0e973ad85
2592 | c0f49c6579
2593 | c0f5b222d7
2594 | c10d07c90d
2595 | c1268d998c
2596 | c130c3fc0c
2597 | c14826ad5e
2598 | c15b922281
2599 | c16f09cb63
2600 | c18e19d922
2601 | c1c830a735
2602 | c1e8aeea45
2603 | c20a5ccc99
2604 | c20fd5e597
2605 | c219d6f8dc
2606 | c2406ae462
2607 | c26f7b5824
2608 | c279e641ee
2609 | c27adaeac5
2610 | c2a35c1cda
2611 | c2a9903b8b
2612 | c2b62567c1
2613 | c2b974ec8c
2614 | c2baaff7bf
2615 | c2be6900f2
2616 | c304dd44d5
2617 | c307f33da2
2618 | c30a7b62c9
2619 | c3128733ee
2620 | c31fa6c598
2621 | c325c8201e
2622 | c32d4aa5d1
2623 | c33f28249a
2624 | c34365e2d7
2625 | c3457af795
2626 | c34d120a88
2627 | c3509e728d
2628 | c35e4fa6c4
2629 | c36240d96f
2630 | c3641dfc5a
2631 | c37b17a4a9
2632 | c39559ddf6
2633 | c3b0c6e180
2634 | c3b3d82e6c
2635 | c3be369fdb
2636 | c3bf1e40c2
2637 | c3c760b015
2638 | c3dd38bf98
2639 | c3e4274614
2640 | c3edc48cbd
2641 | c41e6587f5
2642 | c4272227b0
2643 | c42917fe82
2644 | c438858117
2645 | c44676563f
2646 | c44beb7472
2647 | c45411dacb
2648 | c4571bedc8
2649 | c46deb2956
2650 | c479ee052e
2651 | c47d551843
2652 | c49f07d46d
2653 | c4cc40c1fc
2654 | c4f256f5d5
2655 | c4f5b1ddcc
2656 | c4ff9b4885
2657 | c52bce43db
2658 | c544da6854
2659 | c55784c766
2660 | c557b69fbf
2661 | c593a3f7ab
2662 | c598faa682
2663 | c5ab1f09c8
2664 | c5b6da8602
2665 | c5b9128d94
2666 | c5e845c6b7
2667 | c5fba7b341
2668 | c60897f093
2669 | c61fe6ed7c
2670 | c62188c536
2671 | c64035b2e2
2672 | c69689f177
2673 | c6a12c131f
2674 | c6bb6d2d5c
2675 | c6c18e860f
2676 | c6d9526e0d
2677 | c6e55c33f0
2678 | c7030b28bd
2679 | c70682c7cc
2680 | c70f9be8c5
2681 | c71f30d7b6
2682 | c73c8e747f
2683 | c760eeb8b3
2684 | c7637cab0a
2685 | c7a1a17308
2686 | c7bf937af5
2687 | c7c2860db3
2688 | c7cef4aee2
2689 | c7ebfc5d57
2690 | c813dcf13c
2691 | c82235a49a
2692 | c82a7619a1
2693 | c82ecb90cb
2694 | c844f03dc7
2695 | c8557963f3
2696 | c89147e6e8
2697 | c8a46ff0c8
2698 | c8ab107dd5
2699 | c8b869a04a
2700 | c8c7b306a6
2701 | c8c8b28781
2702 | c8d79e3163
2703 | c8edab0415
2704 | c8f494f416
2705 | c8f6cba9fd
2706 | c909ceea97
2707 | c9188f4980
2708 | c922365dd4
2709 | c92c8c3c75
2710 | c937eb0b83
2711 | c94b31b5e5
2712 | c95cd17749
2713 | c96379c03c
2714 | c96465ee65
2715 | c965afa713
2716 | c9734b451f
2717 | c9862d82dc
2718 | c98b6fe013
2719 | c9999b7c48
2720 | c99e92aaf0
2721 | c9b3a8fbda
2722 | c9bf64e965
2723 | c9c3cb3797
2724 | c9d1c60cd0
2725 | c9de9c22c4
2726 | ca1828fa54
2727 | ca346f17eb
2728 | ca3787d3d3
2729 | ca4b99cbac
2730 | ca91c69e3b
2731 | ca91e99105
2732 | caa8e97f81
2733 | caac5807f8
2734 | cabba242c2
2735 | cad5a656a9
2736 | cad673e375
2737 | cad8a85930
2738 | cae7b0a02b
2739 | cae7ef3184
2740 | caeb6b6cbb
2741 | caecf0a5db
2742 | cb15312003
2743 | cb2e35d610
2744 | cb35a87504
2745 | cb3f22b0cf
2746 | cbb410da64
2747 | cc8728052e
2748 | cc892997b8
2749 | cce03c2a9b
2750 | cd47a23e31
2751 | cd4dc03dc0
2752 | cd5ae611da
2753 | cd603bb9d1
2754 | cd8f49734c
2755 | cdc6b1c032
2756 | cdcfe008ad
2757 | cdd57027c2
2758 | ce1af99b4b
2759 | ce1bc5743a
2760 | ce25872021
2761 | ce2776f78f
2762 | ce49b1f474
2763 | ce4f0a266f
2764 | ce5641b195
2765 | ce6866aa19
2766 | ce712ed3c9
2767 | ce7d1c8117
2768 | ce7dbeaa88
2769 | ce9b015a5e
2770 | cea7697b25
2771 | cebbd826cf
2772 | cec3415361
2773 | cec41ad4f4
2774 | ced49d26df
2775 | ced7705ab2
2776 | cef824a1e1
2777 | cf13f5c95a
2778 | cf4376a52d
2779 | cf85ab28b5
2780 | cfc2e50b9d
2781 | cfcd571fff
2782 | cfd9d4ae47
2783 | cfda2dcce5
2784 | cff035928b
2785 | cff8191891
2786 | d01608c2a5
2787 | d01a8f1f83
2788 | d021d68bca
2789 | d04258ca14
2790 | d0483573dc
2791 | d04a90aaff
2792 | d05279c0bd
2793 | d0696bd5fc
2794 | d072fda75b
2795 | d0a83bcd9f
2796 | d0ab39112e
2797 | d0acde820f
2798 | d0b4442c71
2799 | d0c65e9e95
2800 | d0fb600c73
2801 | d107a1457c
2802 | d123d674c1
2803 | d14d1e9289
2804 | d154e3388e
2805 | d177e9878a
2806 | d1802f69f8
2807 | d182c4483a
2808 | d195d31128
2809 | d200838929
2810 | d205e3cff5
2811 | d247420c4c
2812 | d2484bff33
2813 | d26f6ed9b0
2814 | d280fcd1cb
2815 | d2857f0faa
2816 | d292a50c7f
2817 | d295ea2dc7
2818 | d2a58b4fa6
2819 | d2b026739a
2820 | d2ebe0890f
2821 | d2ede5d862
2822 | d301ca58cc
2823 | d3069da8bb
2824 | d343d4a77d
2825 | d355e634ef
2826 | d367fb5253
2827 | d36d16358e
2828 | d38bc77e2c
2829 | d38d1679e2
2830 | d3932ad4bd
2831 | d3987b2930
2832 | d39934abe3
2833 | d3ae1c3f4c
2834 | d3b088e593
2835 | d3e6e05e16
2836 | d3eefae7c5
2837 | d3f55f5ab8
2838 | d3f5c309cc
2839 | d4034a7fdf
2840 | d4193011f3
2841 | d429c67630
2842 | d42c0ff975
2843 | d44a764409
2844 | d44e6acd1d
2845 | d45158c175
2846 | d454e8444f
2847 | d45f62717e
2848 | d48ebdcf74
2849 | d49ab52a25
2850 | d4a607ad81
2851 | d4b063c7db
2852 | d4da13e9ba
2853 | d4dd1a7d00
2854 | d4f4f7c9c3
2855 | d521aba02e
2856 | d535bb1b97
2857 | d53b955f78
2858 | d55cb7a205
2859 | d55f247a45
2860 | d5695544d8
2861 | d5853d9b8b
2862 | d5b6c6d94a
2863 | d5cae12834
2864 | d5df027f0c
2865 | d5ee40e5d0
2866 | d600046f73
2867 | d632fd3510
2868 | d6476cad55
2869 | d65a7bae86
2870 | d664c89912
2871 | d689658f06
2872 | d6917db4be
2873 | d69967143e
2874 | d699d3d798
2875 | d69f757a3f
2876 | d6ac0e065c
2877 | d6c02bfda5
2878 | d6c1b5749e
2879 | d6e12ef6cc
2880 | d6eed152c4
2881 | d6faaaf726
2882 | d704766646
2883 | d708e1350c
2884 | d7135cf104
2885 | d7157a9f44
2886 | d719cf9316
2887 | d724134cfd
2888 | d73a60a244
2889 | d7411662da
2890 | d74875ea7c
2891 | d756f5a694
2892 | d7572b7d8a
2893 | d763bd6d96
2894 | d7697c8b13
2895 | d7797196b4
2896 | d79c834768
2897 | d7b34e5d73
2898 | d7bb6b37a7
2899 | d7c7e064a6
2900 | d7fbf545b3
2901 | d82a0aa15b
2902 | d847e24abd
2903 | d8596701b7
2904 | d86101499c
2905 | d87069ba86
2906 | d87160957b
2907 | d874654b52
2908 | d88a403092
2909 | d8aee40f3f
2910 | d8e77a222d
2911 | d8eb07c381
2912 | d9010348a1
2913 | d90e3cf281
2914 | d92532c7b2
2915 | d927fae122
2916 | d95707bca8
2917 | d973b31c00
2918 | d991cb471d
2919 | d992c69d37
2920 | d99d770820
2921 | d9b63abc11
2922 | d9db6f1983
2923 | d9e52be2d2
2924 | d9edc82650
2925 | da01070697
2926 | da070ea4b7
2927 | da080507b9
2928 | da0e944cc4
2929 | da28d94ff4
2930 | da5d78b9d1
2931 | da6003fc72
2932 | da690fee9f
2933 | da6c68708f
2934 | da7a816676
2935 | dac361e828
2936 | dac71659b8
2937 | dad980385d
2938 | daebc12b77
2939 | db0968cdd3
2940 | db231a7100
2941 | db59282ace
2942 | db7f267c3f
2943 | dba35b87fd
2944 | dbba735a50
2945 | dbca076acd
2946 | dbd66dc3ac
2947 | dbdc3c292b
2948 | dbf4a5b32b
2949 | dbfc417d28
2950 | dc1745e0a2
2951 | dc32a44804
2952 | dc34b35e30
2953 | dc504a4f79
2954 | dc704dd647
2955 | dc71bc6918
2956 | dc7771b3be
2957 | dcf8c93617
2958 | dd0f4c9fb9
2959 | dd415df125
2960 | dd601f9a3f
2961 | dd61d903df
2962 | dd77583736
2963 | dd8636bd8b
2964 | dd9fe6c6ac
2965 | ddb2da4c14
2966 | ddcd450d47
2967 | dde8e67fb4
2968 | ddfc3f04d3
2969 | de2ab79dfa
2970 | de2f35b2fd
2971 | de30990a51
2972 | de36b216da
2973 | de37403340
2974 | de46e4943b
2975 | de4ddbccb1
2976 | de5e480f05
2977 | de6a9382ca
2978 | de74a601d3
2979 | de827c510d
2980 | ded6069f7b
2981 | defb71c741
2982 | df01f277f1
2983 | df05214b82
2984 | df0638b0a0
2985 | df11931ffe
2986 | df1b0e4620
2987 | df20a8650d
2988 | df2bc56d7c
2989 | df365282c6
2990 | df39a0d9df
2991 | df3c430c24
2992 | df5536cfb9
2993 | df59cfd91d
2994 | df5e2152b3
2995 | df741313c9
2996 | df7626172f
2997 | df8ad5deb9
2998 | df96aa609a
2999 | df9705605c
3000 | df9c91c4da
3001 | dfc0d3d27a
3002 | dfdbf91a99
3003 | e00baaae9b
3004 | e0a938c6e7
3005 | e0b2ceee6f
3006 | e0bdb5dfae
3007 | e0be1f6e17
3008 | e0c478f775
3009 | e0de82caa7
3010 | e0f217dd59
3011 | e0f7208874
3012 | e0fb58395e
3013 | e1194c2e9d
3014 | e11adcd05d
3015 | e128124b9d
3016 | e1495354e4
3017 | e1561d6d4b
3018 | e158805399
3019 | e16945b951
3020 | e19edcd34b
3021 | e1a1544285
3022 | e1ab7957f4
3023 | e1d26d35be
3024 | e1e957085b
3025 | e1f14510fa
3026 | e214b160f4
3027 | e2167379b8
3028 | e21acb20ab
3029 | e221105579
3030 | e22ddf8a1b
3031 | e22de45950
3032 | e22ffc469b
3033 | e23cca5244
3034 | e252f46f0b
3035 | e25fa6cf39
3036 | e26e486026
3037 | e275760245
3038 | e27bbedbfe
3039 | e29e9868a8
3040 | e2b37ff8af
3041 | e2b608d309
3042 | e2bef4da9a
3043 | e2c87a6421
3044 | e2ea25542c
3045 | e2fb1d6497
3046 | e2fcc99117
3047 | e33c18412a
3048 | e348377191
3049 | e352cb59c8
3050 | e36ac982f0
3051 | e391bc981e
3052 | e39e3e0a06
3053 | e3bf38265f
3054 | e3d5b2cd21
3055 | e3d60e82d5
3056 | e3e3245492
3057 | e3e4134877
3058 | e3f4635e03
3059 | e4004ee048
3060 | e402d1afa5
3061 | e415093d27
3062 | e41ceb5d81
3063 | e424653b78
3064 | e42b6d3dbb
3065 | e42d60f0d4
3066 | e436d0ff1e
3067 | e43d7ae2c5
3068 | e4428801bc
3069 | e44e0b4917
3070 | e470345ede
3071 | e48e8b4263
3072 | e4922e3726
3073 | e4936852bb
3074 | e495f32c60
3075 | e499228f26
3076 | e4af66e163
3077 | e4b2095f58
3078 | e4d19c8283
3079 | e4d4872dab
3080 | e4e2983570
3081 | e4eaa63aab
3082 | e4ef0a3a34
3083 | e4f8e5f46e
3084 | e4ffb6d0dd
3085 | e53e21aa02
3086 | e57f4f668b
3087 | e588433c1e
3088 | e597442c99
3089 | e5abc0e96b
3090 | e5be628030
3091 | e5ce96a55d
3092 | e5d6b70a9f
3093 | e5fde1574c
3094 | e625e1d27b
3095 | e6261d2348
3096 | e6267d46bc
3097 | e6295f223f
3098 | e63463d8c6
3099 | e6387bd1e0
3100 | e653883384
3101 | e65f134e0b
3102 | e668ef5664
3103 | e672ccd250
3104 | e674510b20
3105 | e676107765
3106 | e699da0cdf
3107 | e6be243065
3108 | e6deab5e0b
3109 | e6f065f2b9
3110 | e71629e7b5
3111 | e72a7d7b0b
3112 | e72f6104e1
3113 | e75a466eea
3114 | e76c55933f
3115 | e7784ec8ad
3116 | e78922e5e6
3117 | e78d450a9c
3118 | e7c6354e77
3119 | e7c8de1fce
3120 | e7ea10db28
3121 | e803918710
3122 | e8073a140b
3123 | e828dd02db
3124 | e845994987
3125 | e8485a2615
3126 | e85c5118a7
3127 | e88b6736e4
3128 | e8962324e3
3129 | e8b3018d36
3130 | e8cee8bf0b
3131 | e8d97ebece
3132 | e8da49ea6a
3133 | e8ed1a3ccf
3134 | e8f7904326
3135 | e8f8341dec
3136 | e8fa21eb13
3137 | e90c10fc4c
3138 | e914b8cac8
3139 | e92b6bfea4
3140 | e92e1b7623
3141 | e93f83e512
3142 | e9422ad240
3143 | e9460b55f9
3144 | e9502628f6
3145 | e950befd5f
3146 | e9582bdd1b
3147 | e95e5afe0f
3148 | e97cfac475
3149 | e98d57d99c
3150 | e98eda8978
3151 | e99706b555
3152 | e9bc0760ba
3153 | e9d3c78bf3
3154 | e9ec1b7ea8
3155 | ea065cc205
3156 | ea138b6617
3157 | ea16d3fd48
3158 | ea2545d64b
3159 | ea286a581c
3160 | ea320da917
3161 | ea345f3627
3162 | ea3b94a591
3163 | ea444a37eb
3164 | ea4a01216b
3165 | ea5672ffa8
3166 | eaa99191cb
3167 | eaab4d746c
3168 | eac7a59bc1
3169 | ead5d3835a
3170 | eaec65cfa7
3171 | eaed1a87be
3172 | eb2f821c6f
3173 | eb383cb82e
3174 | eb6992fe02
3175 | eb6ac20a01
3176 | eb6d7ab39e
3177 | eb7921facd
3178 | eb8fce51a6
3179 | ebbb90e9f9
3180 | ebbf5c9ee1
3181 | ebc4ec32e6
3182 | ebe56e5ef8
3183 | ec1299aee4
3184 | ec139ff675
3185 | ec193e1a01
3186 | ec28252938
3187 | ec387be051
3188 | ec3d4fac00
3189 | ec4186ce12
3190 | ec579c2f96
3191 | ecae59b782
3192 | ecb33a0448
3193 | ece6bc9e92
3194 | ecfedd4035
3195 | ecfff22fd6
3196 | ed3291c3d6
3197 | ed3cd5308d
3198 | ed3e6fc1a5
3199 | ed72ae8825
3200 | ed7455da68
3201 | ed844e879f
3202 | ed8f814b2b
3203 | ed911a1f63
3204 | ed9ff4f649
3205 | eda8ab984b
3206 | edb8878849
3207 | edbfdfe1b4
3208 | edd22c46a2
3209 | edd663afa3
3210 | ede3552eae
3211 | edeab61ee0
3212 | ee07583fc0
3213 | ee316eaed6
3214 | ee3f509537
3215 | ee40a1e491
3216 | ee4bf100f1
3217 | ee6f9b01f9
3218 | ee947ed771
3219 | ee9706ac7f
3220 | ee9a7840ae
3221 | eeb90cb569
3222 | eebf45e5c5
3223 | eeed0c7d73
3224 | ef0061a309
3225 | ef07f1a655
3226 | ef0a8e8f35
3227 | ef232a2aed
3228 | ef308ad2e9
3229 | ef44945428
3230 | ef45ce3035
3231 | ef5dde449d
3232 | ef5e770988
3233 | ef6359cea3
3234 | ef65268834
3235 | ef6cb5eae0
3236 | ef78972bc2
3237 | ef8cfcfc4f
3238 | ef96501dd0
3239 | ef9a2e976b
3240 | efb24f950f
3241 | efce0c1868
3242 | efe5ac6901
3243 | efe828affa
3244 | efea4e0523
3245 | f0268aa627
3246 | f0483250c8
3247 | f04cf99ee6
3248 | f05b189097
3249 | f08928c6d3
3250 | f09d74856f
3251 | f0a7607d63
3252 | f0ad38da27
3253 | f0c34e1213
3254 | f0c7f86c29
3255 | f0dfa18ba7
3256 | f0eb3179f7
3257 | f119bab27d
3258 | f14409b6a3
3259 | f1489baff4
3260 | f14c18cf6a
3261 | f15c607b92
3262 | f1af214222
3263 | f1b77bd309
3264 | f1ba9e1a3e
3265 | f1d99239eb
3266 | f1dc710cf4
3267 | f1ec5c08fa
3268 | f22648fe12
3269 | f22d21f1f1
3270 | f233257395
3271 | f23e95dbe5
3272 | f2445b1572
3273 | f253b3486d
3274 | f277c7a6a4
3275 | f2ab2b84d6
3276 | f2b7c9b1f3
3277 | f2b83d5ce5
3278 | f2c276018f
3279 | f2cfd94d64
3280 | f2dd6e3add
3281 | f2e7653f16
3282 | f2f333ad06
3283 | f2f55d6713
3284 | f2fdb6abec
3285 | f305a56d9f
3286 | f3085d6570
3287 | f3325c3338
3288 | f3400f1204
3289 | f34497c932
3290 | f34a56525e
3291 | f36483c824
3292 | f3704d5663
3293 | f3734c4913
3294 | f38e5aa5b4
3295 | f3986fba44
3296 | f3a0ffc7d9
3297 | f3b24a7d28
3298 | f3e6c35ec3
3299 | f3fc0ea80b
3300 | f40a683fbe
3301 | f4207ca554
3302 | f4377499c2
3303 | f46184f393
3304 | f46c2d0a6d
3305 | f46c364dca
3306 | f46f7a0b63
3307 | f46fe141b0
3308 | f470b9aeb0
3309 | f47eb7437f
3310 | f48b535719
3311 | f49e4866ac
3312 | f4aa882cfd
3313 | f4daa3dbd5
3314 | f4dd51ac35
3315 | f507a1b9dc
3316 | f51c5ac84b
3317 | f52104164b
3318 | f54c67b9bb
3319 | f5966cadd2
3320 | f5bddf5598
3321 | f5d85cfd17
3322 | f5e2e7d6a0
3323 | f5f051e9b4
3324 | f5f8a93a76
3325 | f6283e8af5
3326 | f635e9568b
3327 | f6474735be
3328 | f659251be2
3329 | f66981af4e
3330 | f6708fa398
3331 | f697fe8e8f
3332 | f6adb12c42
3333 | f6c7906ca4
3334 | f6cd0a8016
3335 | f6d6f15ae7
3336 | f6e501892c
3337 | f6f59d986f
3338 | f6fe8c90a5
3339 | f714160545
3340 | f74c3888d7
3341 | f7782c430e
3342 | f7783ae5f2
3343 | f77ab47923
3344 | f788a98327
3345 | f7961ac1f0
3346 | f7a71e7574
3347 | f7a8521432
3348 | f7afbf4947
3349 | f7b7cd5f44
3350 | f7cf4b4a39
3351 | f7d49799ad
3352 | f7e0c9bb83
3353 | f7e5b84928
3354 | f7e6bd58be
3355 | f7f2a38ac6
3356 | f7f6cb2d6d
3357 | f83f19e796
3358 | f85796a921
3359 | f8603c26b2
3360 | f8819b42ec
3361 | f891f8eaa1
3362 | f89288d10c
3363 | f895ae8cc1
3364 | f8b4ac12f1
3365 | f8c3fb2b01
3366 | f8c8de2764
3367 | f8db369b40
3368 | f8fcb6a78c
3369 | f94aafdeef
3370 | f95d217b70
3371 | f9681d5103
3372 | f9750192a4
3373 | f9823a32c2
3374 | f991ddb4c2
3375 | f99d535567
3376 | f9ae3d98b7
3377 | f9b6217959
3378 | f9bd1fabf5
3379 | f9c68eaa64
3380 | f9d3e04c4f
3381 | f9daf64494
3382 | f9e4cc5a0a
3383 | f9ea6b7f31
3384 | f9f3852526
3385 | fa04c615cf
3386 | fa08e00a56
3387 | fa4370d74d
3388 | fa67744af3
3389 | fa88d48a92
3390 | fa8b904cc9
3391 | fa9526bdf1
3392 | fa9b9d2426
3393 | fad633fbe1
3394 | faf5222dc3
3395 | faff0e15f1
3396 | fb08c64e8c
3397 | fb23455a7f
3398 | fb2e19fa6e
3399 | fb34dfbb77
3400 | fb47fcea1e
3401 | fb49738155
3402 | fb4cbc514b
3403 | fb4e6062f7
3404 | fb5ba7ad6e
3405 | fb63cd1236
3406 | fb81157a07
3407 | fb92abdaeb
3408 | fba22a6848
3409 | fbaca0c9df
3410 | fbc645f602
3411 | fbd77444cd
3412 | fbe53dc8e8
3413 | fbe541dd73
3414 | fbe8488798
3415 | fbfd25174f
3416 | fc28cb305e
3417 | fc33b1ffd6
3418 | fc6186f0bb
3419 | fc918e3a40
3420 | fc96cda9d8
3421 | fc9832eea4
3422 | fcb10d0f81
3423 | fcd20a2509
3424 | fcf637e3ab
3425 | fcfd81727f
3426 | fd31890379
3427 | fd33551c28
3428 | fd542da05e
3429 | fd6789b3fe
3430 | fd77828200
3431 | fd7af75f4d
3432 | fdb28d0fbb
3433 | fdb3d1fb1e
3434 | fdb8b04124
3435 | fdc6e3d581
3436 | fdfce7e6fc
3437 | fe0f76d41b
3438 | fe24b0677d
3439 | fe3c02699d
3440 | fe58b48235
3441 | fe6a5596b8
3442 | fe6c244f63
3443 | fe7afec086
3444 | fe985d510a
3445 | fe9db35d15
3446 | fea8ffcd36
3447 | feb1080388
3448 | fed208bfca
3449 | feda5ad1c2
3450 | feec95b386
3451 | ff15a5eff6
3452 | ff204daf4b
3453 | ff25f55852
3454 | ff2ada194f
3455 | ff2ce142e8
3456 | ff49d36d20
3457 | ff5a1ec4f3
3458 | ff66152b25
3459 | ff692fdc56
3460 | ff773b1a1e
3461 | ff97129478
3462 | ffb904207d
3463 | ffc43fc345
3464 | fffe5f8df6
3465 |
--------------------------------------------------------------------------------