├── .gitignore
├── README.md
├── activity_net
│   ├── __init__.py
│   └── data.py
├── analyze.py
├── anet_vocab.py
├── data
│   ├── anet_precomp
│   │   └── .gitignore
│   ├── captions
│   │   ├── readme.txt
│   │   ├── test_ids.json
│   │   ├── train.json
│   │   ├── train_ids.json
│   │   ├── val_1.json
│   │   ├── val_2.json
│   │   └── val_ids.json
│   └── didemo_precomp
│       └── .gitignore
├── decoder
│   ├── __init__.py
│   ├── layers.py
│   ├── loss.py
│   └── model.py
├── didemo_dev
│   ├── __init__.py
│   ├── data.py
│   ├── test_data_bwzhang.json
│   ├── train_data_bwzhang.json
│   └── val_data_bwzhang.json
├── didemo_vocab.py
├── evaluation.py
├── gen_vocab.py
├── layers.py
├── loss.py
├── model.py
├── train.py
└── vocab
    ├── anet_precomp_vocab.pkl
    ├── anet_precomp_vocab_no_emb.pkl
    ├── anet_precomp_vocab_total.pkl
    ├── anet_precomp_w2v.npz
    ├── anet_precomp_w2v_total.npz
    ├── didemo_precomp_vocab.pkl
    ├── didemo_precomp_vocab_total.pkl
    ├── didemo_precomp_w2v.npz
    └── didemo_precomp_w2v_total.npz

/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
2 | *.hdf5
3 | runs
4 | *.tar
5 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Cross-Modal and Hierarchical Modeling of Video and Text
2 | The code repository for "[Cross-Modal and Hierarchical Modeling of Video and Text](https://arxiv.org/abs/1810.07212)" in PyTorch
3 |
4 | ### Prerequisites
5 |
6 | The following packages are required to run the scripts:
7 |
8 | - [PyTorch >= 0.4 and torchvision](https://pytorch.org)
9 |
10 | - The packages [tensorboardX](https://github.com/lanpa/tensorboardX) and NLTK
11 |
12 | - Dataset: please download the [features](https://drive.google.com/drive/folders/1341zliZg8-kveVFqRIgmreG8re_JcoUy?usp=sharing) (The features are ~~still uploading~~ uploaded.) and put them into the folders data/anet_precomp and data/didemo_precomp
13 |
14 |
15 | ### Model Evaluation
16 |
17 | The learned models on ActivityNet and DiDeMo can be found at [this link](https://drive.google.com/file/d/1ELrUGE315JEOKudNeCh42gpXkGFx4vBd/view?usp=sharing). You can run **train.py** with the options **--resume** and **--eval_only** to evaluate a given model, using options similar to those of the training commands below.
18 |
19 | A model with the Inception feature on the ActivityNet dataset, stored at "./runs/release/activitynet/ICEP/hse_tau5e-4/run1/checkpoint.pth.tar", can be evaluated by:
20 |
21 |     $ python train.py anet_precomp --feat_name icep --img_dim 2048 --resume ./runs/release/activitynet/ICEP/hse_tau5e-4/run1/checkpoint.pth.tar --eval_only
22 |
23 | A model with the C3D feature on the ActivityNet dataset, stored at "./runs/release/activitynet/C3D/hse_tau5e-4/run1/checkpoint.pth.tar", can be evaluated by:
24 |
25 |     $ python train.py anet_precomp --feat_name c3d --img_dim 500 --resume ./runs/release/activitynet/C3D/hse_tau5e-4/run1/checkpoint.pth.tar --eval_only
26 |
27 | We presume the input checkpoint was saved from a model on a GPU.
28 |
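To evaluate on a machine without a GPU, the checkpoint can be remapped to CPU storage at load time; a minimal sketch (assuming the checkpoint dict saved by **train.py**, whose 'epoch' and 'best_rsum' fields are also read in analyze.py below):

    import torch

    # Remap a GPU-saved checkpoint onto the CPU before restoring it.
    ckpt = torch.load('./runs/release/activitynet/C3D/hse_tau5e-4/run1/checkpoint.pth.tar',
                      map_location='cpu')
    print(ckpt['epoch'], ckpt['best_rsum'])  # training metadata stored next to the weights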
29 | ### Model Training
30 |
31 | To reproduce our experiments with HSE, please use **train.py** and follow the instructions below. To train HSE with \tau=5e-4, run **train.py** with
32 |
33 |     $ --reconstruct_loss --lowest_reconstruct_loss
34 |
35 | For example, to train HSE with \tau=5e-4 on ActivityNet with the C3D feature:
36 |
37 |     $ python train.py anet_precomp --feat_name c3d --img_dim 500 --low_level_loss --reconstruct_loss --lowest_reconstruct_loss --norm
38 |
39 | To train HSE with \tau=5e-4 on ActivityNet with the Inception feature ~~(The feature is still being uploaded)~~:
40 |
41 |     $ python train.py anet_precomp --feat_name icep --img_dim 2048 --low_level_loss --reconstruct_loss --lowest_reconstruct_loss --norm
42 |
43 | To train HSE with \tau=5e-4 on DiDeMo with the Inception feature:
44 |
45 |     $ python train.py didemo_precomp --feat_name icep --img_dim 2048 --low_level_loss --reconstruct_loss --lowest_reconstruct_loss --norm
46 |
47 | To train HSE with \tau=0 on ActivityNet with the C3D feature:
48 |
49 |     $ python train.py anet_precomp --feat_name c3d --img_dim 500 --low_level_loss --norm
50 |
51 |
52 | ## .bib citation
53 | If this repo helps your work, please cite the following paper:
54 |
55 |     @inproceedings{DBLP:conf/eccv/ZhangHS18,
56 |       author    = {Bowen Zhang and
57 |                    Hexiang Hu and
58 |                    Fei Sha},
59 |       title     = {Cross-Modal and Hierarchical Modeling of Video and Text},
60 |       booktitle = {Computer Vision - {ECCV} 2018 - 15th European Conference, Munich,
61 |                    Germany, September 8-14, 2018, Proceedings, Part {XIII}},
62 |       pages     = {385--401},
63 |       year      = {2018}
64 |     }
65 |
66 |
67 | ## Acknowledgment
68 | We thank the following repos for providing helpful components/functions used in our work.
69 |
70 | - [VSE++](https://github.com/fartashf/vsepp) for the framework
71 | - [TSN](https://github.com/yjxiong/temporal-segment-networks) for the Inception-v3 feature
72 |
73 | ## Contacts
74 | Please report bugs and errors to
75 |
76 | Bowen Zhang: zbwglory [at] gmail.com
77 | Hexiang Hu: hexiang.frank.hu [at] gmail.com
--------------------------------------------------------------------------------
/activity_net/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/activity_net/__init__.py
--------------------------------------------------------------------------------
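Before training, it is worth checking that the downloaded features match what activity_net/data.py (below) expects: the C3D features live in a single HDF5 file at data/anet_precomp/sub_activitynet_v1-3.c3d.hdf5-0, and the Inception features are per-video .npz files under data/anet_precomp/ICEP_V3_global_pool_skip_8_direct_resize/. A quick sanity check (a sketch; v_4Lu8ECLHvK4 is just an example id taken from data/captions/readme.txt):

    import h5py

    # Keys of the HDF5 file are the video ids used in data/captions/*.json.
    with h5py.File('data/anet_precomp/sub_activitynet_v1-3.c3d.hdf5-0', 'r') as f:
        feat = f['v_4Lu8ECLHvK4']['c3d_features'][()]
        print(feat.shape)  # expected (num_timesteps, 500), matching --img_dim 500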
/activity_net/data.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.utils.data as data
3 | import torchvision.transforms as transforms
4 | import os
5 | import os.path as osp
6 | import nltk
7 | from PIL import Image
8 | import numpy as np
9 | import json as jsonmod
10 | import h5py
11 | import copy
12 | from IPython import embed
13 |
14 | this_dir = osp.abspath(osp.join(osp.dirname(__file__), '..'))
15 | class PrecompDataset(data.Dataset):
16 |     """
17 |     Load precomputed captions and video features;
18 |     data_split selects data/captions/{data_split}.json
19 |     """
20 |
21 |     def __init__(self, data_path, data_split, vocab, opt):
22 |         self.vocab = vocab
23 |         json_path = osp.join(this_dir, 'data', 'captions', '{}.json'.format(data_split))
24 |
25 |         # Captions
26 |         self.jsondict = jsonmod.load(open(json_path, 'r'))
27 |         self.ann_id = {}
28 |         for i, keys in enumerate(self.jsondict.keys()):
29 |             self.ann_id[i] = keys
30 |
31 |         # Video features
32 |         self.video_emb = h5py.File(osp.join(this_dir, 'data', 'anet_precomp', 'sub_activitynet_v1-3.c3d.hdf5-0'), 'r')
33 |
34 |         self.length = len(self.ann_id)
35 |         self.feat_name = opt.feat_name
36 |
37 |     def img_cap_feat_combine(self, video_feat, caption_feat, cur_vid):
38 |
39 |         #### Videos ####
40 |         frame_duration = self.jsondict[cur_vid]['duration']
41 |         segment_number = len(self.jsondict[cur_vid]['timestamps'])
42 |         video_feat_len = video_feat.shape[0]
43 |
44 |         clips = []
45 |         max_frames = 80.0  # cap on feature rows per clip/video; defined before the loop so the video branch below can use it
46 |         for seg_id in range(segment_number):
47 |             picked_segment = self.jsondict[cur_vid]['timestamps'][seg_id]
48 |             start_frame = int(np.floor(picked_segment[0] / frame_duration * video_feat_len))
49 |             end_frame = int(np.ceil(picked_segment[1] / frame_duration * video_feat_len))
50 |
51 |             if start_frame > end_frame:
52 |                 end_frame = start_frame
53 |             current_feat = video_feat[start_frame: end_frame + 1, :]
54 |
55 |             if current_feat.shape[0] > max_frames:
56 |                 ind = np.arange(0, current_feat.shape[0], current_feat.shape[0] / max_frames).astype(int).tolist()
57 |                 current_feat = current_feat[ind, :]
58 |
59 |             clips.append(current_feat)
60 |
61 |         video = video_feat
62 |         if video.shape[0] > max_frames:
63 |             ind = np.arange(0, video.shape[0], video.shape[0] / max_frames).astype(int).tolist()
64 |             video = video[ind, :]
65 |
66 |         #### Captions ####
67 |         vocab = self.vocab
68 |
69 |         captions = []
70 |         length_cap = []
71 |         for cap in caption_feat:
72 |             tokens_cap = nltk.tokenize.word_tokenize(cap.lower())
73 |             _cap = []
74 |             _cap.extend([vocab(token) for token in tokens_cap])
75 |             target_cap = torch.Tensor(_cap)
76 |             captions.append(target_cap)
77 |
78 |         paragraph = torch.cat(captions, 0)
79 |
80 |         lengths_clip = [len(vid) for vid in clips]
81 |         lengths_cap = [len(cap) for cap in captions]
82 |
83 |         num_clip = len(clips)
84 |         num_caption = len(captions)
85 |         assert num_clip == num_caption, 'something is wrong: {} vs. {}'.format(num_clip, num_caption)
86 |
87 |         lengths_clip = torch.Tensor(lengths_clip).long()
88 |         lengths_cap = torch.Tensor(lengths_cap).long()
89 |
90 |         clips_stack = torch.cat(clips, 0)
91 |         word_stack = torch.cat(captions, 0)
92 |
93 |         return clips, captions, video, paragraph, lengths_clip, lengths_cap, num_clip, num_caption
94 |
95 |     def __getitem__(self, index):
96 |         # look up the video id for this index
97 |         cur_vid = self.ann_id[index]
98 |         if self.feat_name == 'c3d':
99 |             image_data = self.video_emb[cur_vid]['c3d_features'].value
100 |         if self.feat_name == 'icep':
101 |             image_data = np.load(os.path.join(this_dir, 'data', 'anet_precomp', 'ICEP_V3_global_pool_skip_8_direct_resize/' + cur_vid + '.npz'))['frame_scores'].squeeze()
102 |             # image_data = self.video_emb[cur_vid]['c3d_features'].value
103 |         image = torch.Tensor(image_data)
104 |         caption_json = self.jsondict[cur_vid]['sentences']
105 |
106 |         clips, captions, video, paragraph, lengths_clip, lengths_cap, \
107 |             num_clip, num_caption = self.img_cap_feat_combine(image, caption_json, cur_vid)
108 |
109 |         return clips, captions, video, paragraph, lengths_clip, lengths_cap, num_clip, num_caption, index, cur_vid
110 |
111 |     def __len__(self):
112 |         return self.length
113 |
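# A worked example of the timestamp-to-index mapping in img_cap_feat_combine
# (a sketch; 800 feature rows is an assumed value, while the duration and
# timestamps come from the data/captions/readme.txt example below):
#
#   duration = 114.64 s, video_feat_len = 800, segment = [1.15, 64.77]
#   start_frame = floor(1.15 / 114.64 * 800) = 8
#   end_frame   = ceil(64.77 / 114.64 * 800) = 452
#   current_feat = video_feat[8:453, :]            # 445 rows
#   445 > max_frames, so np.arange(0, 445, 445/80.0) keeps every ~6th row,
#   subsampling the clip down to exactly 80 rows.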
114 | def collate_fn(data_batch):
115 |     _clips, _captions, _video, _paragraph, _lengths_clip, _lengths_cap, _num_clip, _num_caption, _index, _cur_vid = zip(*data_batch)
116 |
117 |     # Merge clips
118 |     lengths_clip = torch.cat(_lengths_clip, 0)
119 |     clips = torch.zeros(len(lengths_clip), lengths_clip.max(), _clips[0][0].shape[1])
120 |     _cur_ind = 0
121 |     for i, _vid_seg in enumerate(_clips):
122 |         for j, vid in enumerate(_vid_seg):
123 |             end = lengths_clip[_cur_ind]
124 |             clips[_cur_ind, :end, :] = vid[:end, :]
125 |             _cur_ind += 1
126 |
127 |     lengths_video = torch.Tensor([len(x) for x in _video]).long()
128 |     videos = torch.zeros(len(_video), lengths_video.max(), _video[0].shape[1])
129 |     for i, vid in enumerate(_video):
130 |         end = lengths_video[i]
131 |         videos[i, :end, :] = vid[:end, :]
132 |
133 |     # Merge captions
134 |     lengths_cap = torch.cat(_lengths_cap, 0)
135 |     captions = torch.zeros(len(lengths_cap), lengths_cap.max()).long()
136 |     _cur_ind = 0
137 |     for i, _cap_seg in enumerate(_captions):
138 |         for j, cap in enumerate(_cap_seg):
139 |             end = lengths_cap[_cur_ind]
140 |             captions[_cur_ind, :end] = cap[:end]
141 |             _cur_ind += 1
142 |
143 |     lengths_paragraph = torch.Tensor([len(x) for x in _paragraph]).long()
144 |     paragraphs = torch.zeros(len(_paragraph), lengths_paragraph.max()).long()
145 |     for i, cap in enumerate(_paragraph):
146 |         end = lengths_paragraph[i]
147 |         paragraphs[i, :end] = cap[:end]
148 |
149 |
150 |     return clips, captions, videos, paragraphs, lengths_clip, lengths_cap, lengths_video, lengths_paragraph, _num_clip, _num_caption, _index, _cur_vid
151 |
152 | def get_precomp_loader(data_path, data_split, vocab, opt, batch_size=100,
153 |                        shuffle=True, num_workers=2):
154 |     """Returns a torch.utils.data.DataLoader for the precomputed-feature dataset."""
155 |     dset = PrecompDataset(data_path, data_split, vocab, opt)
156 |
157 |     data_loader = torch.utils.data.DataLoader(dataset=dset,
158 |                                               batch_size=batch_size,
159 |                                               shuffle=shuffle,
160 |                                               pin_memory=True,
161 |                                               collate_fn=collate_fn,
162 |                                               num_workers=num_workers)
163 |     return data_loader
164 |
165 | def get_loaders(data_name, vocab, batch_size, workers, opt):
166 |     dpath = os.path.join(opt.data_path, data_name)
167 |     if opt.data_name.endswith('_precomp'):
168 |         if opt.feat_name == 'c3d':
169 |             workers = 1
170 |         train_loader = get_precomp_loader(dpath, 'train', vocab, opt,
171 |                                           batch_size, True, workers)
172 |         val_loader = get_precomp_loader(dpath, 'val_1', vocab, opt,
173 |                                         batch_size, False, workers)
174 |     return train_loader, val_loader
175 |
176 |
177 | def get_test_loader(split_name, data_name, vocab, batch_size,
178 |                     workers, opt):
179 |     dpath = os.path.join(opt.data_path, data_name)
180 |     if opt.data_name.endswith('_precomp'):
181 |         test_loader = get_precomp_loader(dpath, split_name, vocab, opt,
182 |                                          batch_size, False, workers)
183 |     return test_loader
--------------------------------------------------------------------------------
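A minimal sketch of driving these loaders directly (assuming the module is importable as `data`, as analyze.py below does, and that `opt` is an argparse namespace carrying at least `data_path`, `data_name`, and `feat_name`):

    import pickle
    import data  # as imported in analyze.py

    vocab = pickle.load(open('./vocab/anet_precomp_vocab.pkl', 'rb'))
    train_loader, val_loader = data.get_loaders('anet_precomp', vocab, 64, 2, opt)
    # Each batch yields padded clips/captions/videos/paragraphs plus their
    # length tensors and the per-video clip/caption counts from collate_fn.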
/analyze.py:
--------------------------------------------------------------------------------
1 | import pickle
2 | import os
3 | import time
4 | import shutil
5 |
6 | import torch
7 |
8 | import data
9 | from vocab import Vocabulary  # NOQA
10 | from model import VSE
11 | from evaluation import i2t, t2i, AverageMeter, LogCollector, encode_eval_data
12 |
13 | import logging
14 | import tensorboard_logger as tb_logger
15 |
16 | import argparse
17 |
18 | from IPython import embed
19 |
20 | torch.cuda.manual_seed_all(1)
21 | torch.manual_seed(1)
22 |
23 | def main():
24 |     # Hyper Parameters
25 |     parser = argparse.ArgumentParser()
26 |     parser.add_argument('--data_path', default='/data1/hexianghu/activitynet/captions/',
27 |                         help='path to datasets')
28 |     parser.add_argument('--data_name', default='anet_precomp',
29 |                         help='anet_precomp')
30 |     parser.add_argument('--vocab_path', default='./vocab/',
31 |                         help='Path to saved vocabulary pickle files.')
32 |     parser.add_argument('--margin', default=0.2, type=float,
33 |                         help='Rank loss margin.')
34 |     parser.add_argument('--num_epochs', default=50, type=int,
35 |                         help='Number of training epochs.')
36 |     parser.add_argument('--batch_size', default=64, type=int,
37 |                         help='Size of a training mini-batch.')
38 |     parser.add_argument('--word_dim', default=300, type=int,
39 |                         help='Dimensionality of the word embedding.')
40 |     parser.add_argument('--embed_size', default=1024, type=int,
41 |                         help='Dimensionality of the joint embedding.')
42 |     parser.add_argument('--grad_clip', default=0., type=float,
43 |                         help='Gradient clipping threshold.')
44 |     parser.add_argument('--num_layers', default=1, type=int,
45 |                         help='Number of GRU layers.')
46 |     parser.add_argument('--learning_rate', default=.001, type=float,
47 |                         help='Initial learning rate.')
48 |     parser.add_argument('--lr_update', default=10, type=int,
49 |                         help='Number of epochs to update the learning rate.')
50 |     parser.add_argument('--workers', default=10, type=int,
51 |                         help='Number of data loader workers.')
52 |     parser.add_argument('--log_step', default=10, type=int,
53 |                         help='Number of steps to print and record the log.')
54 |     parser.add_argument('--val_step', default=500, type=int,
55 |                         help='Number of steps to run validation.')
56 |     parser.add_argument('--logger_name', default='runs/runX',
57 |                         help='Path to save the model and Tensorboard log.')
58 |     parser.add_argument('--resume', default='', type=str, metavar='PATH', required=True,
59 |                         help='path to latest checkpoint (default: none)')
60 |     parser.add_argument('--max_violation', action='store_true',
61 |                         help='Use max instead of sum in the rank loss.')
62 |     parser.add_argument('--img_dim', default=500, type=int,
63 |                         help='Dimensionality of the image embedding.')
64 |     parser.add_argument('--measure', default='cosine',
65 |                         help='Similarity measure used (cosine|order)')
66 |     parser.add_argument('--use_abs', action='store_true',
67 |                         help='Take the absolute value of embedding vectors.')
68 |     parser.add_argument('--no_imgnorm', action='store_true',
69 |                         help='Do not normalize the image embeddings.')
70 |     parser.add_argument('--gpu_id', default=0, type=int,
71 |                         help='GPU to use.')
72 |     parser.add_argument('--rnn_type', default='maxout', choices=['maxout', 'seq2seq', 'attention'],
73 |                         help='Type of recurrent model.')
74 |     parser.add_argument('--img_first_size', default=1024, type=int,
75 |                         help='first img layer emb size')
76 |     parser.add_argument('--cap_first_size', default=1024, type=int,
77 |                         help='first cap layer emb size')
78 |     parser.add_argument('--img_first_dropout', default=0, type=float,
79 |                         help='first img layer dropout rate')
80 |     parser.add_argument('--cap_first_dropout', default=0, type=float,
81 |                         help='first cap layer dropout rate')
82 |
83 |     opt = parser.parse_args()
84 |     print(opt)
85 |
86 |     torch.cuda.set_device(opt.gpu_id)
87 |
88 |     logging.basicConfig(format='%(asctime)s %(message)s', level=logging.INFO)
89 |     tb_logger.configure(opt.logger_name, flush_secs=5)
90 |
91 |     # Load Vocabulary Wrapper
92 |     vocab = pickle.load(open(os.path.join(
93 |         opt.vocab_path, '%s_vocab.pkl' % opt.data_name), 'rb'))
94 |     opt.vocab_size = len(vocab)
95 |
96 |     # Load data loaders
97 |     train_loader, val_loader = data.get_loaders(
98 |         opt.data_name, vocab, opt.batch_size, opt.workers, opt)
99 |
100 |     # Construct the model
101 |     model = VSE(opt)
102 |
103 |     print('Print out models:')
104 |     print(model.img_enc)
105 |     print(model.txt_enc)
106 |     print(model.img_seq_enc)
107 |     print(model.txt_seq_enc)
108 |
109 |     # optionally resume from a checkpoint
110 |     if os.path.isfile(opt.resume):
111 |         print("=> loading checkpoint '{}'".format(opt.resume))
112 |         checkpoint = torch.load(opt.resume)
113 |         start_epoch = checkpoint['epoch']
114 |         best_rsum = checkpoint['best_rsum']
115 |         model.load_state_dict(checkpoint['model'])
116 |         # Eiters is used to show logs as the continuation of another
117 |         # training
118 |         model.Eiters = checkpoint['Eiters']
119 |         print("=> loaded checkpoint '{}' (epoch {}, best_rsum {})"
120 |               .format(opt.resume, start_epoch, best_rsum))
121 |         validate(opt, val_loader, model)
122 |     else:
123 |         print("=> no checkpoint found at '{}'".format(opt.resume))
124 |
125 | def validate(opt, val_loader, model, num_offsets=10):
126 |     # compute the encoding for all the validation images and captions
127 |     img_seq_embs, cap_seq_embs = encode_eval_data(
128 |         model, val_loader, opt.log_step, logging.info, num_offsets=num_offsets)
129 |
130 |     for _offset in range(num_offsets):
131 |         logging.info("Offset: %.1f" % _offset)
132 |
133 |         # caption retrieval
134 |         (seq_r1, seq_r5, seq_r10, seq_medr, seq_meanr) = i2t(
135 |             img_seq_embs[_offset], cap_seq_embs[_offset], measure=opt.measure)
136 |         logging.info("seq_Image to seq_text: %.1f, %.1f, %.1f, %.1f, %.1f" %
137 |                      (seq_r1, seq_r5, seq_r10, seq_medr, seq_meanr))
138 |         # image retrieval
139 |         (seq_r1i, seq_r5i, seq_r10i, seq_medri, seq_meanr) = t2i(
140 |             img_seq_embs[_offset], cap_seq_embs[_offset], measure=opt.measure)
141 |         logging.info("seq_Text to seq_image: %.1f, %.1f, %.1f, %.1f, %.1f" %
142 |                      (seq_r1i, seq_r5i, seq_r10i, seq_medri, seq_meanr))
143 |
144 | def save_checkpoint(state, is_best, filename='checkpoint.pth.tar', prefix=''):
145 |     torch.save(state, prefix + filename)
146 |     if is_best:
147 |         shutil.copyfile(prefix + filename, prefix + 'model_best.pth.tar')
148 |
149 | def accuracy(output, target, topk=(1,)):
150 |     """Computes the precision@k for the specified values of k"""
151 |     maxk = max(topk)
152 |     batch_size = target.size(0)
153 |
154 |     _, pred = output.topk(maxk, 1, True, True)
155 |     pred = pred.t()
156 |     correct = pred.eq(target.view(1, -1).expand_as(pred))
157 |
158 |     res = []
159 |     for k in topk:
160 |         correct_k = correct[:k].view(-1).float().sum(0)
161 |         res.append(correct_k.mul_(100.0 / batch_size))
162 |     return res
163 |
164 |
165 | if __name__ == '__main__':
166 |     main()
--------------------------------------------------------------------------------
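A toy check of the accuracy helper defined in analyze.py above (a sketch; it expects (batch, num_classes) logits and (batch,) integer labels):

    import torch

    logits = torch.tensor([[0.1, 0.7, 0.2],     # top-1 prediction: class 1 (correct)
                           [0.8, 0.15, 0.05]])  # top-1 prediction: class 0 (label is 1)
    labels = torch.tensor([1, 1])
    top1, top2 = accuracy(logits, labels, topk=(1, 2))
    print(top1.item(), top2.item())  # 50.0 100.0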
/anet_vocab.py:
--------------------------------------------------------------------------------
1 | # Create a vocabulary wrapper
2 | import nltk
3 | import pickle
4 | from collections import Counter
5 | import json
6 | import argparse
7 | import os
8 | from tqdm import tqdm
9 |
10 | annotations = {
11 |     'anet': ['/data2/bwzhang/anet_img/captions/train.json', '/data2/bwzhang/anet_img/captions/val_1.json', '/data2/bwzhang/anet_img/captions/val_2.json']
12 | }
13 |
14 |
15 | class Vocabulary(object):
16 |     """Simple vocabulary wrapper."""
17 |
18 |     def __init__(self):
19 |         self.word2idx = {}
20 |         self.idx2word = {}
21 |         self.idx = 0
22 |
23 |     def add_word(self, word):
24 |         if word not in self.word2idx:
25 |             if word in glove:
26 |                 self.word2idx[word] = self.idx
27 |                 self.idx2word[self.idx] = word
28 |                 self.idx += 1
29 |
30 |     def __call__(self, word):
31 |         if word not in self.word2idx:
32 |             return self.word2idx['UNK']
33 |         return self.word2idx[word]
34 |
35 |     def __len__(self):
36 |         return len(self.word2idx)
37 |
38 | def build_vocab(data_path, data_name, jsons, threshold):
39 |     """Build a simple vocabulary wrapper."""
40 |     global glove
41 |     counter = Counter()
42 |
43 |     f_glove = open('/data1/bwzhang/DataStorage/glove.840B.300d.txt', 'r')
44 |     glove = {}
45 |     for line in tqdm(f_glove):
46 |         line = line[0:-1]
47 |         line_split = line.split(' ')
48 |         word = line_split[0]
49 |         glove[word] = 1
50 |
51 |
52 |     for path in jsons[data_name]:
53 |         full_path = os.path.join(os.path.join(data_path, data_name), path)
54 |         dic = json.load(open(full_path, 'r'))
55 |         for i, video in enumerate(dic.keys()):
56 |             for sen in dic[video]['sentences']:
57 |                 tokens = nltk.tokenize.word_tokenize(sen.lower())
58 |                 counter.update(tokens)
59 |
60 |
61 |     # Discard a word if its occurrence count is less than the threshold.
62 |     words = [word for word, cnt in counter.items() if cnt >= threshold]
63 |
64 |     # Create a vocab wrapper and add some special tokens.
65 |     # vocab = pickle.load(open('vocab/coco_vocab.pkl','r'))
66 |     vocab = Vocabulary()
67 |     vocab.add_word('PAD')
68 |     vocab.add_word('BOS')
69 |     vocab.add_word('EOS')
70 |     vocab.add_word('UNK')
71 |
72 |     # Add words to the vocabulary.
73 |     for i, word in enumerate(words):
74 |         vocab.add_word(word)
75 |     print(len(vocab))
76 |     return vocab
77 |
78 |
79 | def main(data_path, data_name):
80 |     vocab = build_vocab(data_path, data_name, jsons=annotations, threshold=1)
81 |     with open('./vocab/%s_precomp_vocab_total.pkl' % data_name, 'wb') as f:
82 |         pickle.dump(vocab, f, pickle.HIGHEST_PROTOCOL)
83 |     print("Saved vocabulary file to ", './vocab/%s_precomp_vocab_total.pkl' % data_name)
84 |
85 |
86 | if __name__ == '__main__':
87 |     parser = argparse.ArgumentParser()
88 |     parser.add_argument('--data_path', default='/w/31/faghri/vsepp_data/')
89 |     parser.add_argument('--data_name', default='anet',
90 |                         help='coco|anet')
91 |     opt = parser.parse_args()
92 |     main(opt.data_path, opt.data_name)
--------------------------------------------------------------------------------
/data/anet_precomp/.gitignore:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/data/anet_precomp/.gitignore
--------------------------------------------------------------------------------
/data/captions/readme.txt:
--------------------------------------------------------------------------------
1 | Dense-Captioning Events in Videos
2 | ================================
3 | Citation:
4 | @inproceedings{krishna2017dense,
5 |   title={Dense-Captioning Events in Videos},
6 |   author={Krishna, Ranjay and Hata, Kenji and Ren, Frederic and Bernstein, Michael and Li, Fei-Fei, and Niebles, Juan Carlos},
7 |   booktitle={ArXiv},
8 |   year={2017}
9 | }
10 |
11 | zip file generated by
12 | Ranjay Krishna
13 | @RanjayKrishna
14 | ranjaykrishna@gmail.com
15 |
16 |
17 | Files
18 | ================================
19 | train_ids.json - Contains video ids of training set.
20 | val_ids.json - Contains video ids of validation set.
21 | test_ids.json - Contains video ids of testing set.
22 | train.json - Contains timestamped captions for videos in training set
23 | val_1.json - Contains timestamped captions for videos in validation set
24 | val_2.json - A second set of annotations for the same videos in validation set
25 |
26 | Format for {train/val/test}_ids.json
27 | ================================
28 | [v_4Lu8ECLHvK4, v_2D22fVcAcyo, ...]
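These files load with the standard json module; a minimal sketch (paths assume the data/captions folder above, the captions format is described next, and every id in val_ids.json is assumed to appear in val_1.json):

    import json

    val_ids = json.load(open('data/captions/val_ids.json'))  # list of video ids
    anns = json.load(open('data/captions/val_1.json'))       # dict keyed by video id
    ann = anns[val_ids[0]]
    for (start, end), sent in zip(ann['timestamps'], ann['sentences']):
        print('%.2f-%.2f s: %s' % (start, end, sent))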
29 | 30 | Format for {train/val_1/val_2}.json 31 | ================================ 32 | {"v_4Lu8ECLHvK4": { 33 | 'duration': 114.64, 34 | 'sentences': [ 35 | "A woman is seen speaking to ...", 36 | "...", 37 | ], 38 | 'timestamps': [ 39 | [1.15, 64.77], 40 | [38.4, 105.47] 41 | ], 42 | }, 43 | "v_2D22fVcAcyo": ..., 44 | } 45 | -------------------------------------------------------------------------------- /data/captions/val_ids.json: -------------------------------------------------------------------------------- 1 | ["v_JDg--pjY5gg", "v_PLek2e8NlKc", "v__uOfIm1tFcI", "v_0n3VRoYYYGU", "v_T_q3f10pkOg", "v_c5dvRUBZw2Q", "v_A1EflBqBv14", "v_R8vqzwGs6aE", "v_1RVu0qNtWCc", "v_mh_0QLZhrSY", "v_RTnNxbG2V5o", "v_UkA6pgt29VI", "v_3joaQzU05MY", "v_w2HnFjJei7k", "v_lW4OZ8eP3ns", "v_rWQz-EwA4EA", "v_Yk3pQ18So90", "v_M1-G6KEhY-M", "v_i0AsepC37Sk", "v_0VoNAs7Ia0A", "v_whJ6ESGNoyY", "v_xcBJP14YBvg", "v_jzNdWjZm92s", "v_j4Ru2L4u0Qk", "v_obVMUmZQW_M", "v_aoIGBV31OT4", "v_KyrDumISv4A", "v_Sd08rsPTroE", "v_XELYXH6fqeA", "v_TOP1Fwili-k", "v_uJZFC7gHZGI", "v_SokK_O2s9tQ", "v_k_ZXmr8pmrs", "v_SLisp6hn700", "v_V1SEaTS9hos", "v_TlDh_RZ3HDk", "v_jW1isCO6MYk", "v_nzjivjnk2Ac", "v_231pbDe3uQc", "v_hDPLy21Yyuk", "v_h-MWdTHW_Eg", "v_rDT4ngAfeHs", "v_q6sLCLnTuik", "v_zLZTqSaGxJo", "v_TRWDARS8lRE", "v_FLZPaPf027E", "v_WdCoVsU8Bbs", "v_C8Toxe4fE30", "v_zsw9WEsSowI", "v_2GCNxOKMtBo", "v_Ny49eEt1OJg", "v_Obj__zWaZqk", "v_puK4NxGKNdQ", "v_8FO4W-SBpxs", "v_-LtQMRfj0eM", "v_QeL3ScQVelo", "v_IoGpS8NQklE", "v_hz3n1wrXYAc", "v_nHwqBo0xvog", "v_FgRIl4bNl_M", "v_RjZ7jJBE1Qw", "v_1NAlbF88oUI", "v_ATk8OkvNHHQ", "v_RhEunVjB_Ns", "v_UBQfURrVB_Y", "v_dKiy-7TZqeI", "v_QN6YFgrx_Ig", "v_oVPFTkC4-Lc", "v_El4QfhJ6RvE", "v_o-aSCtwxsTw", "v_QASZ8CTxBSg", "v_uyGxlvak-Bg", "v__vUE7PhDBcA", "v_uPqp30C6MDE", "v_-MFzpFMdWZs", "v_pf49xhMRrgQ", "v_QQBmydn6--I", "v_FX4inHmWQtE", "v_QMm6gYzsMw8", "v_v2zVnmbPmeQ", "v_ObkyDlB5wvs", "v_oNpkjfX2rTc", "v_enASD1KDX24", "v_7ET-TtUVU7s", "v_Z1img-5JbDo", "v_fSUwyioi_ws", "v_STAvUAslEYM", "v_OOqGdga8t4s", "v_v-LmMLpvsbk", "v_cXfflEWa83E", "v_70rKlFJdkKw", "v_RrEJ2-TfWCI", "v_5O9myPtSriI", "v_9VWoQpg9wqE", "v_OuEQLjwBIPI", "v_4IC1_C_dtvk", "v_M6FdaEstXbI", "v_96c3BWVDoq4", "v_kBh_98QimD4", "v_eLJYFoCx-gc", "v_bodOObk5K00", "v_Ed7kAmkawTo", "v_BkjWeWUwG0A", "v_2PFU_Ee0x14", "v_zPu3JBSPa94", "v_qNHfEf72V3g", "v_wts5XRikF1Y", "v_BngR6rNiO_g", "v_UWgzslG97MQ", "v_1PQiq8zajCE", "v_pb0k7YrMwZY", "v_ABBQqwPOxw4", "v_JEvEoAESqJ0", "v_57buK1yvKPk", "v_puPMvwv2kmg", "v_2QydTDAYJsA", "v_hfBTv5b_Cok", "v_VRwI8Iydb_o", "v_kMRQmijCc5Y", "v_-oExUcmbTEE", "v_32z1yiC0Co0", "v_v7OluHKECRA", "v_8Kv7piYg9yc", "v_3lkZPJES45Q", "v_Adjpq4A5WtI", "v_uDpLB-JDjEA", "v_KEJP4Uxa5m0", "v_fErOJ98E15w", "v_HPrkxpOoep0", "v_ELlRh3gjpeE", "v_29UfCoftDkA", "v_EGrXaq213Oc", "v_GxSh-XQhIjU", "v_my9Z4bun_Dk", "v_xaCOYdzox0g", "v_T_5ANYuDWOA", "v_wvFJbY3SmXI", "v_Y5uVICaJU-0", "v_x_FAc0KqMVw", "v_teIE_kXbMiw", "v_35SpLMcN-m8", "v_0PS48XWOsKA", "v_nWpFumm3Z5g", "v_0AbJgWxIYVI", "v_evkiciK1nRc", "v_jQVT4u6NojM", "v_YzyCFfrX_4I", "v_5T_P4x0q0VM", "v_Vtsv9iPHDqg", "v_hGziyfXmotc", "v_ay_YB-S4qR0", "v_q2KR3lzTrq4", "v_e4AIrRnWakA", "v_Ttx3kt2fW1o", "v_1oM26-0yFcM", "v_Mm-bwu8Q2IU", "v_qXi05KUFOSk", "v_j89l589KFrg", "v_NH2TVi42xDE", "v_nKn2uQTVo-U", "v_LGS_yzsScfw", "v__R-jA9hOFCE", "v_HzyTD2uZ8jI", "v_AmWcQz_KJG4", "v_mE02JHvCEUM", "v_gM7JdDs5f4o", "v_gCDpUPvD3s4", "v_75xhANnCOEg", "v_BTUgRe4aSsg", "v_mucFmOzqWW8", "v_qjacthwabek", "v_p3vw2UJvLZE", "v_XxfatT0sWXw", "v_a2JBhm22-M4", "v_VYOKYSmoyk0", "v_1B3XsffrM4M", "v_HWuRcUpcsHY", "v_5g7bqiT7Y3c", 
"v_nLAm7USuYbA", "v_SeVftOMjNMM", "v_FmLxc-aNh88", "v_mbGpp_nDwI4", "v_gzs6VcYc0OI", "v_h2xV6mTpUCM", "v_4oZtb0kglx4", "v_QV4E2B0AdGw", "v_p3PEMCN4h_g", "v_QhiKgeJV3k0", "v_rYcac4QmSms", "v_xOmfJGR5fBw", "v_G6ayznrS0tY", "v_H3PWbSF9ax4", "v_gA7GpvB10UY", "v_n_sfeihU3f8", "v_G1hRHCymRGE", "v_9XjHgUP5QW0", "v_kt3hzGla8r4", "v_aCknCFmU0sA", "v_Dl0JNkGbZT4", "v_mNM01g9wLy4", "v_D4wcmmQsPng", "v_6J0IdWi4O0Q", "v_ZGK-w7-bkNw", "v_OCdmlTxq1Co", "v_uMGfCaGMnEE", "v_s0-xTG38cPw", "v_akUXL2VzFEs", "v_cW2R4AuUnK8", "v_h4Cf5u1j0TU", "v_JZz2O0y0ufY", "v_8UXuHMmOYGI", "v_M_WEOecjwLY", "v_GG_Bi89pNlg", "v_PVdd6E1S0Yc", "v_DzxPreFrmFE", "v_rdkrg8Bj9_I", "v_-5Q7iNtaWCU", "v_ZLmoqxkCJL0", "v_j9QPrMZuegY", "v_X5UoLcloHIM", "v_ymJTN8aKZEw", "v_HfN967uah8o", "v_9JMbahMzBjk", "v_ox2AGCcE9a0", "v_K3Z3z8t-RIQ", "v_9IwS7pfJXu4", "v_QXAs-KJj7K8", "v_DWsO49YhWUI", "v_tP0viuKibJU", "v_D0pVkTEYQg8", "v_CqscMsSNiNY", "v_0F8F-ON083s", "v_-DpnaHTk8PA", "v_-OH1BDqao9w", "v_tAgVokWkdnQ", "v_VIROYxBPp70", "v_60Y0DfZhlHM", "v_6y_gnZgf0N8", "v_TSO5Phe2ZM4", "v_BRuansCVV3U", "v_z9l32VOM6wY", "v__MWyhJS4KbM", "v_rdkPwRWW91s", "v_-ZBsdK10Trs", "v_P5Y-b-lcBs0", "v_1RKExOpIGas", "v_a68fUj833qg", "v_pIk9qMEyEd4", "v_ZTHsS5lQyvQ", "v_xf9iLflgRro", "v_1H2bRd91sZw", "v_7JoYkshshVI", "v_pYvqbfVY-s8", "v_MP31A6fHsh4", "v_xVPTVGpOkGE", "v_E15Q3Z9J-Zg", "v_5OWJ7WqKWMU", "v_uevUOX7Wpz4", "v_fxgbk_Kk4Rw", "v_0VVNybUx7DE", "v_5Foo5NSjEXQ", "v_YDSSJ6Tp47g", "v_2nPrH4Tv0yc", "v_DW7Zm9DzEDk", "v_JHuLY-ygFkc", "v_xQljKBB3498", "v_pwaSQyDNyWs", "v_7KEM_rbhASw", "v_yH018Jl5GMQ", "v_mo8CBVntUjE", "v_r-BJYixThME", "v_t1urvYx1X_w", "v_tjEMbP2SODQ", "v_Lr5GuPjfU7Q", "v_05BGDQvQ2YM", "v_HWcWElJfEjw", "v_OYAyb_Ire24", "v_bON69f83fSY", "v_ORKAMBnsX64", "v_jQHGyqk21GI", "v_Mv2uecqTSdY", "v_juIOpLYnW64", "v_RjFoJggnfj4", "v_NsqW8ZwYDEk", "v_uQDTcusxDCg", "v_e8gJpLlqzA8", "v__SzFi60-OGA", "v_T60xwc6nKJI", "v_mqwC7rqeXsk", "v_5yrLDF_ZmN0", "v_pHhcYS_wPys", "v_Cgquef_qgcs", "v_KRGiJIHSd9E", "v_al_NNsjwU-Q", "v_WGEKoGRIJGk", "v_6O5UcjQMwoQ", "v_7SxEQiFHGm8", "v_AOteP9srRpw", "v_6Ke30NtYOC0", "v_X6CpfuJLx0U", "v_GE2q5qDJ-xU", "v_A0XGYLim9IU", "v_-CEi03j4-Bw", "v_TOBHIXCu4Ic", "v_IfKGdI5egKc", "v_64UBH371Jj8", "v_HOTCR1uIaBM", "v_K98WGaMR4eM", "v_aljYWkDQzN8", "v_YRMbCxetWtg", "v_L_8Gyi8FMk4", "v_Y0G_wA38HkI", "v_u6f9COsww0w", "v_IROb83YwQ8Y", "v_Po8gmt7hVTY", "v_J2gGPC98yec", "v_44FeihJUKvM", "v_5co1E0umtJQ", "v_aPEqCGdCsp0", "v_mBnLy9ZgMkc", "v_wPCQfs0Rgx0", "v_FmaW2KK4wWU", "v_vth3IYGHu5k", "v_OwSdSL_4sxU", "v_s69uPXLvzIg", "v_TM4-Miytfv4", "v_UfjR8ewF8xo", "v_yfjnahzAPSc", "v_wUXpeZHrTWw", "v_5Q_FrGFVGNY", "v_FtbrPGaINt0", "v_P1gGM89_T2g", "v_QylENMzsW9w", "v_kUQ4bTeoG-Y", "v_5ayMRPi7Lg4", "v_SIf4H2dqbpg", "v_j8i-9T0UeRQ", "v_tFjGMdff3WM", "v__QyQSAtMdj8", "v_x18x9BKMAlk", "v_MmOVjM5-D-U", "v_LTBrHLqhRMs", "v_Qo3riKtRg2c", "v_eTVzSwuCfd8", "v_YODfHuzK2As", "v_Uw_0h2UrfyY", "v_orwTrxIwCpo", "v_lq20hEghHtU", "v_045Tkq12H_c", "v_95UgspVYJSM", "v_yzN9jN3qncA", "v_EwEV5_sHGJk", "v_UhNgPK81rKM", "v_Zkz4ef53YjA", "v_5QbiJmDyoM0", "v_dJgea9sOlBY", "v_4mzM3JjBJ74", "v_4zLTW7lT3fs", "v_rJpFVvho0o4", "v_euF5okzyaaA", "v_UajYunTsr70", "v_ux_qqONPSrc", "v_Ofnuo7FTHfM", "v_ho8cKYrtufU", "v_U5wliityRuU", "v_lU-PEm5L5EU", "v_TqcoukXhXeA", "v_GqkvSUNfZFk", "v_mg0n3DNtUZU", "v_D9hS68pULz0", "v_aj-klsonETc", "v_am4Z43QlUrg", "v_LTPrtyWIcA4", "v_Sd4C8_FMdjA", "v_Nj_rPQwzllA", "v_2r7qhNGm44I", "v_dbMPw8PfXHo", "v_fs8yU4pBNm4", "v_aWnpbk007cE", "v_iaKlx11RAiY", "v_i_16EfqIrFg", "v_le0tJsyuPks", "v_7RDn5qTQquE", "v_2WyRPSKFUi8", "v_KpmdpL5btYo", "v_d0FP6xp9O4c", "v_DvDfifKGXXg", 
"v_Usowsx0PDbA", "v_l_T3zfRQhic", "v_ZsicrMkZEN8", "v_IVFGb72s3oY", "v_LAicExwwM54", "v_0ZXc2fEDgg8", "v_Y7VWbYGI0Oc", "v_kWtY5wkkAMY", "v_KjkD7CZcXK8", "v_tZzse87ICr8", "v_kTBEGydNpgg", "v_5ry-UTd0y_o", "v_vJiOYQE9tts", "v__wITx73-BXw", "v__8KsVaJLOYI", "v_IhWxuvzIHkc", "v_hzuQYOG0a_g", "v_5F4jcV8dHVs", "v_nc4twXSueZo", "v_AXw2bkQyRPo", "v_VWpRBfhoFVg", "v_Pv6oIFroaCQ", "v_mC_8ckG6WpU", "v_-oJb3Acw-_s", "v_zMWhT5Rv6WE", "v_APAxAnwS9oM", "v_0pegrKSh4iw", "v_qhnWJ4G5JMA", "v_0gwhdJGq2eg", "v_SKMVLKmgxAw", "v_BioBrxuKOsw", "v_D5xp0LuEcKw", "v_CCRPXH8ui-s", "v_-UwqKYkkKlU", "v_eYgXvnnlPQA", "v_8P1vKpL3Zcs", "v_RNAUncQEASo", "v_-FWGLSfI13Q", "v_KoHzXi7Usl8", "v_y8RpTBtGG1g", "v_VwmYoF9Rh_8", "v_r5mwKEhEsHA", "v_m-B1tlnywNY", "v_g7glOdM6BYo", "v_CjPN7fw0B48", "v_1BUnQWRBpYg", "v_eCXiGAChev4", "v_ev-RTtbVjFI", "v_291szrilAVE", "v_siGEHA6fs80", "v_OAot8XBeLrs", "v_iXF01UxOtLI", "v_UE2mDvY9rew", "v_Qhs0AjFvcOA", "v_GAqzjkkb98Q", "v_YBK6SfHd-0Y", "v_FCiKVtVqTAA", "v_A8dBgZCuQow", "v_Ff8QLpH5T1c", "v_EfjzkyLrnDg", "v_-nlAKyoyIuU", "v_j7hSNqcWIO4", "v_mtU66vCjVVs", "v_V35ubrbe8gA", "v_5rw6n16ILgY", "v_altXks0a0qY", "v_xuoWaq6XPZo", "v_SndKvA_2DcE", "v_HZWdMK6zhec", "v_YTBmMSIczEc", "v_iiqaJGokpEw", "v_uID_HFDKFKw", "v_-TuxT19bogQ", "v_63HZk1SInLk", "v_97LW-ivu01A", "v_YozbZM_nA0c", "v_xCedPpnP6Wg", "v_2AE847UXu3Q", "v_t7WI6H6UVG4", "v_bZ4r3Y_qceE", "v_DxhdDYQkQU8", "v_6skP3w9WDIM", "v_9svdYGBSMvM", "v_52PO939EtGw", "v_PBzlHfEMU5s", "v_QlTddnlIJpA", "v_uKzelWWaYB4", "v_tfPm9xAZ5z0", "v_gpKYclCmQHM", "v_ubNDaGOws0E", "v_ERGoTBC8NkA", "v_uhiQp0GCeKg", "v_ko89yQozE-4", "v_sbr3HKm2Y9I", "v_WpKQV53ENHE", "v_UhgVO1QaP2s", "v__kj3B0T_TE0", "v__QTQEw1b_-U", "v_swkSdgwCxHs", "v_dIC0nm4nrI8", "v_h67ctuwV-Nw", "v_DeoqC3oVV38", "v_o0cVs7THLi8", "v_B_9S_qzlD38", "v_V0e5tItt1RM", "v_W84TQm1l90U", "v_A4L4ObzZ5VE", "v_ND9mMyNjm5M", "v_HzSCfBOefA4", "v_iLHVaeiPpuw", "v_sc_L4zUEb7E", "v_SrcZRhXkr2k", "v_CMTiL1ctmDs", "v_Ktun1-2Y540", "v_s4Ryxk3TxKA", "v_2oizmWFx4PA", "v_J3DxJ8gI95U", "v_E3dV8LdAPx8", "v_FCe1NVTbaZ4", "v_5Lv0g7ISQVU", "v_kSdWy3subNE", "v_91WRZuT4c6E", "v_GKBYgS99oiI", "v_na4vSFfVi2s", "v_GQdkuWJGYFg", "v_7B1FZR0IA6M", "v_0RIc6mwDRaQ", "v_Zzj03Cew2vk", "v_91XkPU8A5hs", "v_dGxJGvw_sUg", "v_5vv5e_E93gM", "v_owLv-_CPNJI", "v_mhYFpct97UE", "v_cnjaB6GFpSc", "v_mwUP1yZQsh0", "v_iGax3fokst8", "v_yxDsp8EBZtY", "v_wy_oDiDK6lk", "v_4CsTbXdERSU", "v_v6go4RA0ZB4", "v_xaicDAewb6o", "v_RgMAHuMVRcU", "v_69DNcmkoapw", "v_CjuTFlxFvH0", "v_dWJIJM3qmyQ", "v_cdcn6XP1N6A", "v_4W4mrswC2tA", "v_4YJ_L7jqgoA", "v_wHGDq_8dCuc", "v_eVTMUEYhwDE", "v_BLamvR0GIE8", "v_ne7uJQ0MUtE", "v_WzDnorAzWVU", "v__Zq8ugolzlA", "v_MT852hP9wVk", "v_KgM8_YBJbM4", "v_XH-YlSbgxkY", "v_2Tuht3F2uc8", "v_T97WL2cKD6M", "v_2zVpWu1i5qM", "v_9Bo7Hr77DgA", "v_cAhu8H9qsAI", "v_KhAtzEJxz9M", "v_IMto8gJvRek", "v_1jX8p54Dfjs", "v_C8IHSB9mfeE", "v_9cxGx2BsKkM", "v_HD_vpQCUSCQ", "v_T4ZeB_TvS68", "v_3bi2XM3scQA", "v_moGDCWEoaK8", "v_f_BzYUCp4J0", "v_VRRLOIP6EmA", "v_3SjuIcAfeWk", "v_UIPTzsWiGSU", "v_5X7zeOps9uA", "v_u3uYs6SZFKo", "v_NzxZdC-63LE", "v_KjUxjcpIG_Y", "v_CXbVcrVgNzQ", "v_FDvZUUc5tw4", "v_txsupdxCToQ", "v_1-Ud-q4y1oc", "v_fmtW5lcdT_0", "v_OXbfnzs-qUU", "v_WMVJqLMtaws", "v_0x4TP4MPelY", "v_rMy6sItJID0", "v_gA0m7YUH408", "v_gKLbdLKEG6U", "v_hiVs1hNyPpw", "v_mUsjm4oBBvw", "v_kpAwQpA1nPs", "v_lARaqx1e7wM", "v__1vYKA7mNLI", "v_gUQFX_IydG8", "v_FIaXCUPjFY0", "v_R37pbIySnjg", "v_eUCVKv4R-7A", "v_86hyAYM5d3E", "v_gdYr4E3qobI", "v_uG_G4g6ixms", "v_vgC8jB2FhAg", "v_ED7SKNfAKyI", "v_WrFNI5GQFPM", "v_Vh4TxFOCNM8", "v_LoBjzA2z2Ls", "v_3JcvtncHhLw", "v_vvHrSeomFtg", 
"v_Te9e32TDiZ8", "v_xIG7FQWBWZU", "v_sR3_5j8pUdI", "v_EWlt9TTOw30", "v_3jZq0UaDIks", "v_WDbG2_sDHow", "v_LQ7X62seYYI", "v_zL7Rz4I8UyA", "v_eS1r2Qi0qUM", "v_LYqq0dPB-U8", "v_rWdXyKZnL2U", "v_lKSlIMfWZXI", "v_qcsGJTJstZ4", "v_n1NqFiDdlEU", "v_--1DO2V4K74", "v_TUfYisuVrs0", "v_yUYTlwiP16E", "v_IxaoK4TbALQ", "v_IsHM24qWmpI", "v_ccKJg_f1UDo", "v_HgKZ4KAuhdI", "v_4mSPGxeKK2k", "v_Pt5jMqQXTZ8", "v_f3YyN44Dx8M", "v_Khxa5Ey3udM", "v_jqYzz6YoMEY", "v_idTzZaMtGy8", "v_2SYTRqm4Ym4", "v_yFJVEplkVHA", "v_2oNsMva04MM", "v_D4Y6DyRD0kY", "v_OMa1i3ITBbo", "v_I8nK8c7k9ko", "v_t6d__c9sIUU", "v_ksKlcjeIBi0", "v_lZ6zN5Q447M", "v_DjT4-5H3xDQ", "v_3TxZTZEEg44", "v_aFSaGCvYXXY", "v_LElk0AlBpbI", "v_j8NwT9JBQJk", "v_WglqrQ9uR-A", "v_gtAl_FkXdR8", "v_iFA1XhZ6VM8", "v_qeyFjCAA_dg", "v_Mno1JV_6y_M", "v_F1xZKduLnWg", "v_Q_W0GL3ljUY", "v_CJWSed5v4jE", "v_Jd5tpIdMGh8", "v_eRN5gqZFXHI", "v_qMJi2nXWOkg", "v_Jp8L9h4aaV4", "v_RpVkFIpEhIE", "v_vbHLA5l_BRc", "v_wVHD_Y5J3qE", "v_ssBiSN8XofA", "v_jdz4PzF1pO0", "v_Vl4gId1_zxo", "v_Ix8WFQ6-yx8", "v_od1jHUzgrAU", "v_kyafh7Ownao", "v_uJuGXnGqozs", "v_U0d68z5HTwE", "v_SO5KnbKienU", "v_gVixuVE0-ek", "v_06Eq9tgprBw", "v_isUCIXYjOXE", "v_5fW_2c_kKfc", "v_bCD6_kGsF9A", "v_CSruNOwxCRY", "v_jr7JA5eKkwY", "v_rBJBnf4F9sA", "v_f7ndXtwTep0", "v_FTmGHtBdWi0", "v_EpWZ_-hNKKs", "v_plMBtIbzX6w", "v_39dTxOhrW68", "v_Sx5MlpX6NIY", "v_ox6cIfguQ00", "v_tAEGMVLn0wk", "v_W6JJ1L_EEBY", "v_Y-lL9JiLhz0", "v_yH0xeA_OvWg", "v_OMRu1rPRBHo", "v_cIRMaWUTHuk", "v_jCB1EC3RzWI", "v_HuUIIKA3o_A", "v_UXi0Cy16-0Y", "v_K_IqYFJKIgk", "v_zRNS_ebpi7o", "v_CYPfbnL0bCI", "v_dFlmzpAb6AQ", "v_FBUtGL5_tto", "v_p1gH8y8X0kA", "v_CuZpm0Il6YM", "v_oAJlaJ8xcwY", "v_o-BGGr-DU5g", "v_aGKySEwCMnI", "v_EGLJPCJnG64", "v_t3eRbi1Uk5E", "v_DFAodsf1dWk", "v_1926p23ooUM", "v_2HmhRdKRVb4", "v_4OIkfJ_IkpA", "v_fh68-PXZ9Oo", "v_Y1UwPTU61uk", "v_q92zSoMudWU", "v_9FYVaOGQV6o", "v_60Fyun_Szw4", "v_TxgvL4ZJZbo", "v_q4QPF-qNBTY", "v_hHMqyl_Dugs", "v_2SKZB0bfqF8", "v_fqFqQjH8M20", "v_HCntSYltlmA", "v_iJqLgrShN-w", "v_4cd0sNdLmT4", "v_AK-9sj8btp8", "v_tCfu0LplM64", "v_q5tYHwZLRYU", "v_IQGg87yZZjs", "v_0yi-nkwLEnI", "v_bSZnvk2Cx28", "v_t9j3GNVm8jw", "v_g_Cz69Q5bKM", "v_GOxmnVFdMfY", "v_8iHklV25LaE", "v_-sd2XAFkeC0", "v_X8o3FbH0gyo", "v_kuMevlNUDCs", "v_sq0cKsoX7mg", "v_GavbA_SHlVM", "v_XBMiD_7fdF4", "v_FnbVnRX6WxQ", "v_uBls-XJdcBs", "v_eZbdiuUu0S8", "v_F67zl57FSXE", "v_-VcxQ6i6Ejk", "v_m3yLm_dJU94", "v_Ws5jA8cMKas", "v_fJ4xMCc5SKk", "v_h1t5QZjERms", "v_yA3AD9jU7QU", "v_L4rKeN_4CLk", "v_rCmpRDbS_O4", "v_Vvi0HQ6Pu7c", "v_sahQxLbmM0U", "v_4Oug7S32B-4", "v_sMITf5WBIxM", "v_ejIEsnkvLWY", "v_N-LaOcSqZaM", "v_HWV_ccmZVPA", "v_MR0vMF_5hp8", "v_R0YS8JS_0rk", "v_Nq3b9OReeEI", "v_j4-w606GnYw", "v_y0Kio7VOk5o", "v_MYi6p113py8", "v_5Yq5GMPBguI", "v_xunKd050v7U", "v_GJDl-whUpq4", "v_Iiwz1JtC7rk", "v_1U8y7e22SQg", "v_jmerKGN0VPs", "v_4fQUWOuFjwQ", "v_UPfQNZl0_dg", "v_PaAJ-6HT6bw", "v_ps0a-GGomX4", "v_CNdCnkKhitI", "v_xSWpGhhM1H8", "v_6TUA9ipKk9I", "v_NBXH7A2EO7Q", "v_WXEq3OeD68o", "v_2imjxY43yYM", "v_T-Ngg5bptUc", "v_xDD9rWISPpk", "v_dSww-S8qyCM", "v_pIk_bbjCNDo", "v_cT4EquMmRiw", "v_pZxteNqdweM", "v_sWtwatYMbX0", "v_VR19Scunfhg", "v_BLTOTjVYiuE", "v_OsrRpGbIpKA", "v_3hZjxdMcG6o", "v_8RMrbKCQheM", "v_wU-8acM-IUM", "v_0N8iIUS660o", "v_AUFI2wx5Z48", "v_GI8tylrKKlA", "v_UGjF8G0HLZA", "v_iiQQ8xZvZok", "v_Kkkrap77n5M", "v_YeZz5PZiiwU", "v_BP9MfTepAv4", "v_BSlVLi81VGM", "v_nkrA8sJydF0", "v_fUa3pwpNZ6I", "v_1wjnveHAhGE", "v_ZGL-PmMopeM", "v_mkwCGf92vqo", "v_eKiRykHu734", "v_9Zn0zErRckc", "v_r6z6Ct16I_8", "v_9I4H8O6B7yM", "v_FNlEHAIh6LQ", "v_chMp_uvII5g", "v_Cl96RZAFcZo", 
"v_8Q-P5KEvXN0", "v_KRSBbX-itrY", "v_Pu5p7SC3sqg", "v_B7Ddfw2PXOI", "v_TIEzvhv6xaI", "v_GsR4fagoV-Q", "v_5j5_YV25cFA", "v_T3bTwmccIEQ", "v_RAluocUocdw", "v_MsalIjwP3no", "v_7mDiIJ9r4EU", "v_-5c9WHk408g", "v_UTiSAR1o2nU", "v_4bUxtqX_oxM", "v_svWiQtzgtOc", "v_0r-_a6m5k-0", "v_qGID8CHyClA", "v_7ZbH4vHTmVs", "v_k3oPZS_Id3M", "v_8cbHNUbu3Tk", "v_66nA52ux2Sk", "v_w87EDMJo5NM", "v_n9TuUTNpKwg", "v_N5XBi-uPkAU", "v_yZErFOSkogc", "v_oqVNFPUANfs", "v_CQXhtaNkhrw", "v_ZpyCrs-q-so", "v_iubDO1DSMZk", "v_3O2acf8oRVA", "v_qRPq2PEiyM0", "v_Cy2wqpjppy8", "v_5hXH-TorJ6M", "v_U1nvAxorOPQ", "v_rfH9VLQAuwY", "v_bMJlN9iPpCI", "v_gxJeNdvNzhU", "v_t3wyR2VQy20", "v_4ImpZRtbzYw", "v_JkZZvDHTty4", "v_g_AwwSsBj0s", "v_m16Cn9VA3Lk", "v_l_KhWbeZeRA", "v_jl7aBkPfcS8", "v_O0-CRPl0TR0", "v_O62LVI0XNHo", "v_Fi2Al65EH0g", "v_BGHQbw5HZ9Y", "v_hYAE418i-ZY", "v_LZC9MLWo9bE", "v_cWpT8nb2a9s", "v_Cu-p0FZOqi8", "v_A4PdcfWqrN8", "v_OJiLPJkzel4", "v_1XtjXqqPvyQ", "v_aotVhoXjqS0", "v_sS-KyhAzeUY", "v_arnKDX_ToxE", "v_8-1h1YXYvhk", "v_vg-FrXO1coA", "v_2FIQwmB362w", "v_jDeBuorU4hY", "v_g4uvBcIE1Os", "v_uuhcDXyGrEI", "v_6NqS3vYvf6Q", "v_Q-879RNVOdg", "v_2oc0OBWkYfg", "v__7XW-BFK_ZY", "v_yE5whKJ-DE4", "v_7GOPv-XegSc", "v_1ioKX0iuico", "v_Z1N185E4gsk", "v_x44fn0snUvw", "v_7p99ez6MEeo", "v_i1CVl-0-gJE", "v_4Ex-sB0vtwk", "v_00KMCm2oGhk", "v_gXk9TiqGUHs", "v_XkzEXA4b20k", "v_RIr3Y2XS5NA", "v_iSH43hQoxio", "v_33eH3ozXLmU", "v_F3iZD7tm8Io", "v_TX8FGTL1flw", "v_krNVpENNPCM", "v_IaT8-cA_AVU", "v_E15z95ZcEYU", "v_jikOPvJPU-c", "v_ZoCRdAYWtKg", "v_9fw8ODTEso4", "v_4EloxAiCydc", "v_hr8zkCXbTTk", "v_immCYvN8pwQ", "v_g_bb4RSu6TQ", "v_atw5LkvnAyo", "v_YIEv6_HQtAc", "v_kqzIDPXbATw", "v_DTWZhe352y8", "v_86iCOCtA4Ww", "v_bqnRA6rZcqs", "v_cr9VTwfM_2w", "v_jLcYOkRvdic", "v_D5jLypnn6Ps", "v_Lm4oeMdqOgw", "v_7O9kkDxEvaY", "v_U7QjLGMeGOo", "v_7J6cZ_Gz8q4", "v_Y-2nhi8JdO8", "v_v9APkG4il4Q", "v_ez9pf35BMtc", "v_i_tMiGS11fs", "v_olBh9KMAHMQ", "v_QHmZWkRK528", "v_vdq_xoRyxCU", "v_b1wnLw3H1vo", "v_Lp3c3nwHrqM", "v_a39_RoOBkX0", "v_POvVSjY_8HU", "v_JoQywfQ6B-8", "v_Nx4rK_jvvR4", "v_JfF80Uho8U8", "v_H4spfNy_LG4", "v_fbIEeQknsuo", "v_42XFIWVIWpw", "v_8uV6u0QcTSs", "v_4aBJ_L0u7Lo", "v_iA2Q4t-o58w", "v_KNyWPCoHEng", "v_py26bxAfOEg", "v_rIr091-LMGY", "v_5koLOwu786I", "v_-doxoUNGLJE", "v_xb8iMASjw1A", "v_iEaiLh3GZA8", "v_gWz4P3Jnis8", "v_pSWcVR96xlc", "v_3q_MOQNfSmA", "v_g4G1gg-9y7w", "v_lCIJJgxTs2U", "v_GHBeLaysVaA", "v_L21zcZlFfIY", "v_TxYZLJQOHvY", "v_egT7FYHlWho", "v_MdOAr_4FJvc", "v_84OwFujqHyw", "v_cduejHfXPDc", "v_6cPXFUqRB1s", "v_J2gJYNO2qh8", "v_BJ9r8_JnG0k", "v_HZ0tf9Cp340", "v_SwbvD590YtQ", "v_VE9MAMmF1wc", "v_oUdEoaKDHpA", "v_1ebIpLiTCvw", "v_HktZZPJMU8s", "v_CKjHXMoXye4", "v_yw1IZdbEzck", "v_IBscTNN6qfY", "v_ItukN-TWrJM", "v_1T66cuSjizE", "v_nfIM66dU_J0", "v_tj0sI8M3tro", "v_jnOqi_9KJiE", "v_8BsIeOSzK_U", "v_Lf3oTCD4d08", "v_e4XYZAs7tcs", "v_zKHMKAOb1iw", "v_MWWDqMI-rxU", "v_Xu54UPG1cME", "v_yKLX0iXyLsQ", "v_81dGQTVec_s", "v_ow9bWn5gOvg", "v_nB0JECwGK0c", "v_n3MGZcDHr-U", "v_ykdRdg1XvFM", "v_MmOQhq95Z_g", "v_dRqbDamDLT0", "v_JhihdPxI_Xc", "v_FeWZkO6kZl0", "v_N2zoVF76Pgg", "v_ndJqptBTxAY", "v_E2nAOID5DLM", "v__3I4nm2zF5Y", "v_6UPfqdssD6g", "v_zSOK9jmWE1E", "v_VgQmPHpRFXQ", "v_EhjiQFHfDmY", "v_EbGq9gXcXLQ", "v_-zZJmRT9udU", "v_IkbEC202hYg", "v_KFS_lGlO-Ew", "v_TVmuh_sR1KI", "v_DAPX3S1Nmqg", "v_JTQsElq5UN4", "v_G3H3Gflf1SM", "v_jYyN-nJcm0M", "v_xBR7YEKPgDA", "v_mHVmDOxtVt0", "v_RPkLocpR8VQ", "v_vLL-voBPWM4", "v_EpV0Zmg50nQ", "v_l2drIA62T8w", "v_ADUmfTuiDH8", "v_701qhmCLPxU", "v_3G9zc_SEOHM", "v_A0LLegTPpWk", "v_IDVWoE02zjM", "v_InVpvGiubi0", "v_qlqF8K072UU", 
"v__bz66SOrklQ", "v_JMpwIWxoB5s", "v_Wyr2o0lsSTU", "v_3nX5ZwzHftM", "v_slHv7r8A4OI", "v_J-uW8raljqE", "v_sVma83g_wmg", "v_Cy3tUZIN8nk", "v_i3DJXbrg0vk", "v_zLeCGU8SVVc", "v_DOgmd5jNhXY", "v_LdDB7xXXHQM", "v_EXUKhI7WTqo", "v_6f1HnAlpphA", "v_ZMTi498qnPc", "v_xXXQyLS1uuY", "v_FRJLhGFpCGE", "v_eM2miz5uf8Q", "v_Qre7RVxEn78", "v_p42wxuN8MZE", "v_maE7PmL7Zjk", "v_JKa3jnnowNo", "v_lviFcaF4HUo", "v_L61Le9sOGK0", "v_1jl5qtS4mNQ", "v_EFEI0-awheU", "v_qeYKXF8tsp4", "v_DZx2G-OZAPk", "v_Doy6s1y58uc", "v_3ve9a8YKP90", "v_fOgfpA9MTOQ", "v_dGHryLMDBIU", "v_59R_1aBnFn4", "v_EVQlh2Et5tc", "v_C0t3fbC2RCg", "v_uE9MHR27_gc", "v_S0Kl5D5mrvQ", "v_uOk4EFDsDP4", "v_e-k2J91a954", "v_caPl3Aszru0", "v_UrQ7Jq1s95o", "v_DCjklOgbzGs", "v_1DmdX5QwqFI", "v_nZ40a3LSFeU", "v_BzxK6r4UG5k", "v_dID-dQpaLbc", "v_pTmlOZY0e4c", "v_mmRpNwb0NZ0", "v_czmYE1FzBXM", "v_-F7QWQA8Eh8", "v_U9Dcet1qdRE", "v_VFsRRXYbuHs", "v_45P3UDcb4Gc", "v_Nn-KZMYbOv4", "v_TtAEG3yXDnI", "v_BodF651KcIg", "v_AUHORHUgC-s", "v_LB9-RIKxk6E", "v_uSuHnQPWfNY", "v_SyOdA4ZKEtQ", "v_k_7hLIwul48", "v_lgu-DBDWlEs", "v_gMLA3a0FMS0", "v_s04x6lhUmtY", "v_DmAOCYuMgtE", "v_uDmEOkAXTfo", "v_nEj34gf508E", "v_ILIpCfCWyT0", "v_wyOf_L4cNHc", "v_Yzb_4XMgcM4", "v_StTr5O_wGXI", "v_uxMOn-NmmZo", "v_4WikrzXQ3Bo", "v_rczR9C00KOM", "v_bJ6SpcLM7GE", "v_U6-j4rUn3dk", "v_UgXPt2LydrY", "v_RKzwMrL5Th4", "v_PmeBYO3ARvk", "v__vK_sDOdgbM", "v_vGcH8N8sJlM", "v_ZojEQYIV_o8", "v_6fuOwhx91zM", "v_G77y1JRjZDU", "v_KbbEbeCJTJg", "v_IajP-SB2D5c", "v_KfzVxgHEyzI", "v_v0cihSAXQbI", "v_NjzUWVoc8rw", "v_tik7rHU_DM8", "v_l5QQ1vVctOo", "v_z08g5S7J-CY", "v_wqc2KnHfPHk", "v_WpQHQeY43zo", "v_LnDz1rvDaPY", "v_LV0nevBELso", "v_xAMZGWqRmqE", "v_j7Tk8I_DCtw", "v_S_1_ZSMxRfg", "v_N8BlpYSpgg4", "v_BH-kBRn84i8", "v_jpclX7wgcZU", "v_x8cuLOUppmU", "v_eI_LceS_qnQ", "v_FARJEomZRrc", "v_ZH8hnmjRDsI", "v_RMkaNGdydws", "v_4WUFEnFE5sY", "v_TdqEtrrPX_Q", "v_G71xFbDSSno", "v_Y7BBrdCwIJw", "v_hXSee4C6pyE", "v_3f6G-qzwzfg", "v_20ooSJixdyg", "v_kW4ajodPtWU", "v_smh90DBXsBg", "v_s_VFaQTlskE", "v_lBhNeACY8y4", "v_I52lhI6txNo", "v_NCvNIKw4EZ4", "v_QjMNQxu3Zf8", "v_dL-ybVv7Sgs", "v_EbBlHnunlSI", "v_0GWJ-VHFlTk", "v_eUecHAdv1uU", "v_coEvniePQLA", "v_MkKUQ4MMHd8", "v_9R2wP-iceaw", "v_SBj7yuFEwQI", "v_FEqLmpNzxdg", "v_I5F59PkcDWM", "v_pUIicfDCZC0", "v_U_ia-tINzpw", "v_WFqr6QPsszQ", "v_8QEG_1GhoEc", "v_70GQ4Nnrk4E", "v_CGIrDfEP5lE", "v_OHNH7IV0768", "v_W30cufYc_ZI", "v_krFle3KU4Ts", "v_WebWWFKJ4b0", "v_hokqvyeqhmg", "v_IAtxK0w_ybY", "v_7oeFpnRCJkY", "v__nGlzZystmo", "v_mbB7UFoTwpo", "v_IoJoUIxzdac", "v_6Y8wppTQFPo", "v_lNvX6h3o4EA", "v_i1PpX1IOcIs", "v_Hhc10CrukfA", "v_sEGceBU8icE", "v_oKauZV0DHHk", "v_2DwBXRhtX4s", "v_O_e0pqEMZMw", "v_mtSJG4q2vP4", "v_rze0k4LklN4", "v_3KmMvfdidvQ", "v_cRP9tyF1N4I", "v_O-YKLVm0ciI", "v_QZWyv6SShks", "v_yxcikJ3Hp8w", "v_ogOrhXUgna0", "v_dzR4voNDZ7g", "v_9iJ8snVY2s0", "v_VtIMPJjcdn4", "v_j5M9l0qxwnU", "v_QUJXOFPJ_YI", "v_B73wt5icB-c", "v_6UjZaj86bKs", "v_61wzTjdnXe4", "v_HgOHqD0lWTE", "v_hoYF0DhYVOI", "v_QVdsLRKpCT0", "v_XlEmG7nM0jw", "v_m_gr7WdjJmc", "v_cdufbM2OCwM", "v_4zYY4abpCgI", "v_J8pZtBhpqMI", "v_nCzB1iXKYk4", "v_X9Z9uqrb9EY", "v_ZdaS-WZHUZY", "v_nibek2g971I", "v_EQMDnhIKU4w", "v_kPM3RAn0Mk4", "v_g5CYoFJFkPk", "v_lztbD1NRU4M", "v_ybkcKusf-Kg", "v_MubE2kOK6z0", "v_7ghaFHKMUZ0", "v_lKi-hl_KGJ4", "v_TscC5kgurqY", "v_ID44l9VqqGQ", "v_86Yl3F2HSik", "v_clSku91LoQQ", "v_zfU85oBVpfA", "v_224E-VtB4k4", "v_UL4YwgCFrDI", "v_NNqghz7Fd0M", "v_nuaTROuaZPY", "v_45AIj4-_RBw", "v_99xnJSBRzkE", "v_bPZRYmr7p1k", "v_-mX18jJkPDk", "v_WQmJrfjOF7o", "v_JLDZdxTf5TA", "v_LbXhdPZakpo", "v_0zjA3KPnLK8", "v_IlKOWIBAEFE", 
"v_k1aFJ-F8xTs", "v_maHLwXvNN3w", "v_5JVHUcOW0GE", "v_aGlfi9PqRdY", "v_GbykXyc8LA8", "v_eA5ANAdLvFE", "v_axzmwzPQ134", "v_xuq9oRm8QZo", "v_V4cYhOQ6Pfs", "v_lE3Hs4bsPhg", "v_0bzSBV3jHIY", "v_1MBVaveQDd8", "v_ym_OhvcJ--w", "v_YrdpvaBDDlE", "v_XjV0D7nJx0Q", "v_3ImTO0bzXPA", "v_0fvL6IHKYF0", "v_KePjkCySBCs", "v_SaG9e90z1j8", "v_6DXH6kwMe-Q", "v_HlYwtqJALns", "v_8C1EFngZC3Q", "v_EF74-5YIhAk", "v_LaWlIUKH3PU", "v_spJaetMCD20", "v_-8awLlFLcQc", "v_NHznDFD3V3k", "v_B42CY1Z6eV8", "v_B5hzlU0OepQ", "v_5n8wY8hwy3Y", "v_vlqrUu4gi0Q", "v_MtFX7uTHwFQ", "v_fgIJnjuMyoc", "v_WaXfGbfUYJg", "v_xYeqvN8cihg", "v_F51cKkjt6tk", "v_bh6VHVHMoo8", "v_OUIS4bnEhU0", "v_AdP2aMo6OgY", "v_hPJw9_nPo_s", "v_CBoitanoH4s", "v_V1IHwwpyFUE", "v_079MEwdDNjg", "v_jelxK3R-heg", "v_IjwOh2YmT9U", "v_aAY_M6M26TI", "v_LLFhSU-XuTI", "v_0pcrpO0Gd8M", "v_cGoj8xGxrG0", "v__DlDtsPxdyY", "v_4qnrM4k6qN0", "v_HzAlvJ1fNWU", "v_otWTm1_aAqI", "v_aoY0XhAXm7M", "v_34KalqGygZ0", "v_ssktVpcv9WI", "v_rulzKikXMHo", "v_yu1XjQUctiM", "v_f07eWOCKLI8", "v_op58Lalekrk", "v_zy7rd78yBnY", "v_1RQ27XZKU1E", "v_E5FiPYZARLE", "v_3abD7z6vRPM", "v_bXApJtAf6Qo", "v_RPkH81M6-NE", "v_kgvbU_3jEy8", "v_qY7LG7r_IA8", "v_NbO4k5EtU4Y", "v_JHHHuKeA-WQ", "v_tOVv0cAyjcg", "v_rFTVKkMqpIQ", "v_fQB76oAKOQc", "v_t1-GV2bAL4I", "v_D_y9uXMbImA", "v_3Hbm8FdirRc", "v_AKShRE_4eTA", "v_mRbqt5ugQSE", "v_HVKveVRZ-JY", "v_NLdyQ1oMmAo", "v_euyYRNOSPE0", "v_6Ni6csyQbzw", "v_zh0haUMeZV0", "v_9PP5_HGpu4c", "v_CUG8vpMIFEQ", "v_qoVYcplxgFE", "v_SSJjjggYBxc", "v_7OYvyg32iqw", "v_ZJ6BFrKcRe0", "v_JcMOzfurtK4", "v_685wnEW1Uq4", "v_iXaA7PVRhIY", "v_SfQku6CicrU", "v_4j7sZBThR7s", "v_Y5zJT3BjIxM", "v_GxOjqC_IDX4", "v_8OEts-YLeW0", "v_V9MTU7xLukc", "v_rXgxlwrRFTg", "v_jM3Buw2Kidk", "v_SipyRTPgdfY", "v_PJysE5c1WDs", "v_6rOmYOU7748", "v_DfOqhNeHDgM", "v_Zr1xfVeUGeo", "v_50b9lVikSeo", "v_VUlsdTzaKV4", "v_kdfJW8YV378", "v_ywFa_D5QZ-k", "v_er6fi7nYsuw", "v_gB_xHRJY7sw", "v_75cjK13ylJM", "v_uIOIcv5MhuA", "v_IEPoIqIrprg", "v_j0JsoWxrGh0", "v_NLdhDlsMnxQ", "v_RxXBMgsu6uU", "v_D0RDF1ez-8Y", "v_tJ2xOG_EWOg", "v_QryL-hVKAOA", "v_Gg32cIypcdc", "v_J__1J4MmH4w", "v_lQWij22wbNU", "v_kTf-Id-lWX8", "v_Pr6zL1ToSC4", "v_9SY9ufDznFQ", "v_xv8OYJ7t7-E", "v_Aygp8JaMkqQ", "v_K0MzjnMzbj4", "v_9WOvWFdA7lY", "v_DXhVbxfmrYM", "v_8ZA8UGBEx74", "v_kmWf36zfL7o", "v_0e-qdFlRmPU", "v_tXLvsYsWCoY", "v_Wy0u0amd4Ko", "v_Gl6EMAgTNKo", "v_WXST-TXQjoU", "v_huUb8mM5fv4", "v_jDfTrTtPs5s", "v_2aw1pVJsnKs", "v_GvJxJf4m6_M", "v_r1mrueEHDDE", "v__tRAypMWUdc", "v_OPp3DqFq0O0", "v_5SNtTQZnN4g", "v_Ti1ZaH0VGfg", "v_wPYr19iFxhw", "v_fvslbZDJ3C8", "v_sRol1BJ9EUk", "v_8A7nbBMC4eA", "v_u1VIetb75rs", "v_jUmfhYsA5r0", "v_XQaaA2UZYh8", "v_jpWevi1HBYo", "v_1hiyhNqakMI", "v__i6yjCO8nzQ", "v_J8mSgO4r-kQ", "v_IAj0JIDDaOQ", "v_cfaBPxE-A5k", "v_ZYcZZJ0XItM", "v_UL_3QfD3ERM", "v_Ck-9AHZNkq4", "v_kNAgK0nC9Ig", "v_K7f0co9akMI", "v_UadYaZOC6B8", "v_xlr_sSnttZo", "v_yl37hI-Bgkk", "v_otq24Pdm3sc", "v_MtmQjudesdM", "v_G8OyFOhVGCI", "v_Z86tpjRaiK8", "v_0HhNhRExwSQ", "v_f6wAW0Jv2Eo", "v_i4SvqrGYH-Q", "v_9IvKkq9k81o", "v_cVuHOF56B64", "v_lgB0Ynn38-k", "v_CQ4dPckD_Xc", "v_GcE-0A4Titg", "v_RHtpBRwZ9hM", "v_PJgy8J1f3jg", "v_prqwtY9cn6s", "v_NpBZn7OHUKo", "v_H80bs53Arrw", "v_ke9gaIRnaEo", "v_OUpTPRtEITY", "v_K8XNOs0AwaE", "v_8E8bytYxwAA", "v_isGfZVCL4gE", "v_CHBpVOfPmRA", "v_PLnfT1PoVHw", "v_TGvY7GtyTK4", "v_6vylz7u_tHw", "v_VvD2fdPNWEg", "v_RblRzlmSFak", "v_lFlQ_xWVt8M", "v_MWnYL4JiMP0", "v_gjz9pSK0Y9I", "v_gUR1wXosHMo", "v_vrwJEvpeHyM", "v_JxbmHo84AC4", "v_8wB0BOjuyes", "v_c7HroaL0WDc", "v_9gU5be5YCVw", "v_Yu18MvEn-To", "v_yQ2AirKmnTM", "v_VlLq4bAHCXI", "v_8zq6C0SRyDQ", 
"v_-bqaXU4s8Qs", "v__jxpaVW4_cE", "v_kkIClKG5xY8", "v_jEppv00aBBc", "v_iWSKl7vOd2s", "v_qTAG23IVSeM", "v_sqcJOpPrexQ", "v_SL7iKDqir6g", "v_yt0K2HWC0WI", "v_PjcTk1hcf4k", "v_9_zC7CdvYu4", "v_mIC02-VKqUE", "v_djE5A2S1Ezg", "v_Likt_9dbMqE", "v_gbuRv8phs1Y", "v_kU2FVf0ldx4", "v_JWN0cMm-8ug", "v_p7j6yY99vEg", "v_TUPCQpyoSbI", "v_Fr7rhb2Vw_k", "v_sGwra7t-ARo", "v_nqbYEJlRwoQ", "v_y3Zq6RZZNtc", "v_dgRYwmcRpuo", "v_7o7hL0VccJQ", "v_xs5imfBbWmw", "v_z8VqGGu5vPc", "v_nGABbRHJ2Ug", "v_yISeNkFiVAg", "v_saB1t3Znhk0", "v_EqSXihtiv5g", "v_Cz5fahiO1AA", "v_7_HWPDDW7Cw", "v_EwjDShmfFHM", "v_KR1-rdV18pI", "v_iBefG1qFbsE", "v_Y9B22Ii7-eE", "v_Rai5nKbB6wU", "v_S5bjFaZUnOM", "v_hc4DBHpRuGM", "v_FP0tI2Tjigs", "v_gXvRxyT5rWw", "v_SdsoRu3953g", "v_dP2DgvNt12Y", "v_qHRCGBIiNFg", "v_lOCw2uO3UK8", "v_pWotXONgXtc", "v__CMIO5R_OGA", "v_1L_4N307nBk", "v_DgJ-GG1Agyw", "v_L67RSiR2X78", "v_PY6WgOIZlhw", "v__E_9te0nq3A", "v_6czh95dpwAA", "v_Jsx38_s3Mnc", "v_wkSm7bUCgGQ", "v_f4CSejhkTd8", "v_PjNF7HoQ6yY", "v_vgXU0u-rN9c", "v_T7Mg-Owb14Y", "v_Auy0KGsXAIg", "v_y3xcwZpcLvI", "v_dJO_4TrLr7E", "v_qI1ZayfiGHI", "v_M_E1i4S8Vp0", "v_GO_36Qd9bb0", "v_oRtMsdNQ1LE", "v_nB90Q8sTBgE", "v_a_c-FIC_W4A", "v_N49yT-kvXuw", "v_Hk-wwGuHuC0", "v_gJxR-KzawO4", "v_OQPBLjX1LHk", "v_oW0G_C86fz0", "v_PMdba6f_cho", "v_YZvdzvM-124", "v_ykov_joUUTk", "v_Lwy92HbuZII", "v_YTuQrhSKkNE", "v_ZU4Mgdd3omA", "v_KBG7wrKsZAI", "v_DrQgYA5_8VA", "v_IsHMvAfUOGs", "v_cr2lbZ6or1Q", "v_YveUW4bLL5A", "v_0EewuppFjEw", "v_JmtcnoHa66U", "v_lM4FQ_FqEhQ", "v_pMVo7PaXD1c", "v_aBdrTqSnWbw", "v_HEw5wIWVpWE", "v_EQNJfWiAS28", "v_DWvFgDSAUzE", "v_8onOVVuN_Is", "v_OjV4UScwkU0", "v_2D22fVcAcyo", "v_wBcP3SQ3Qg4", "v_rprQvEVVpIc", "v_PRT0Z9HPF4U", "v_rXgC48CLncg", "v_oS7Twj3Pou0", "v_8itO1pQI9ww", "v_9Zy5ylJYiA4", "v_jFZRNe7xFY8", "v_TP8lUusp66Y", "v_Z6l2Yu9Q0mU", "v_aYSJn94g_Io", "v_ennVaOEePHk", "v_mSonugqhYuE", "v_0h4UT-2XTAw", "v_GLL1vOrV5Qo", "v_elgmPvU19K8", "v_xynscQyItDQ", "v_s9MNW35YCMw", "v_lkC_md7KKq0", "v_w-KZEq6JhnQ", "v_qL7kMgxpFJY", "v_L963epA4MFU", "v_JjGhHZgdWVI", "v_S-DOW63629o", "v_ku65ME0vW8s", "v_b380n1dci9I", "v_qrvPTE0kb5U", "v_Dzj5X11anrk", "v_Q6KyDc24uSk", "v_9A3z0W8U124", "v_e8MK2naV6E8", "v_6YIZ00dNpMU", "v_2fndjkCHsEY", "v_QixK0AeqcsI", "v_b8eqn-GTdcc", "v_KNyM0KvDHMM", "v_DUb48prwNZk", "v_kkcTQHFNXAg", "v_1jjsTfZS5DY", "v_shpZ47Mvxfg", "v_DqsaFxxfONY", "v_obUkL-Ya8dE", "v_e1_oskOyQoU", "v_Q3tPDohXUYc", "v_EvJqfGXb5Fo", "v_aIXUWoP-L-E", "v_0hdwFR5qWz4", "v_UN0bAa_ko4I", "v_kQ4rE7o6rrg", "v_M-n0vW3p2sE", "v_JgAlMwG3fWw", "v__uKKSGTNJAY", "v_PCTqA_ov8RA", "v_XFijgUPprk4", "v_Fr9F2xRLd0A", "v_w--X02F3MHM", "v_o-RbNz6gD5k", "v_ZWlwKbuK2fM", "v_t7J7SugZPlE", "v_Et4GHTvGbg0", "v_tXuNa_h804c", "v_RX-9yj3PkYI", "v_KGTPkiDRpfE", "v_U0qUFAPUg_Y", "v_4lxS8OJRsa8", "v_XdqHO4x2FL0", "v_C-6kvesNmU4", "v_ncXZIWMNKZQ", "v_SgnBsgrqfj0", "v_iNs17kcwlDk", "v_i4yQ54eWfy4", "v_PwyvQ3BKziA", "v_cxDPCkefl1A", "v_1fbU_MkV7NE", "v_sfT9Siql3P8", "v_Zc8zn0sKfwo", "v_ZOKC86lF6E8", "v_cFCN9QE1M0c", "v_mdu7eHlbDwc", "v_RpH774VD6Hw", "v_R6kXT4Spiwo", "v_ZlwkO1oFBHw", "v_w0d32MVTY9Q", "v_aYtnkEWM_Cg", "v_cyznGwlE9hM", "v_9ZboVy59qrw", "v_YCqbvmEG-Uw", "v_SKdouCRLoKE", "v_S8oIiWRiIfE", "v_AAfFlwaXW3c", "v_ywWHBghVyJ4", "v_gzVpwbiB9fE", "v_qokr0bO828E", "v_ClW3USojCoE", "v_FiFzHgBjryA", "v_lTDkfbr7znU", "v_R7DhZaY3A08", "v_DV1ITGBfo5w", "v_yWCEDAQvhzA", "v_gLfvk2SSj1c", "v_dSdZz_Royyc", "v_3-KLYPzd1zU", "v_LCyLWiw7n5Q", "v_I2Y-5EEXAE0", "v_G0DPDo44wt8", "v_z-iuSgXKUcw", "v_GhXniQgRUTY", "v_sgUMHHuAhZg", "v_4OCbTYrThtc", "v_wKThOOUV6lY", "v_IlCsGkFnRkc", "v_wnnoaLzYqVQ", "v_Fyi7pbkKk7w", 
"v_n2L9F6cMNaE", "v_WtNvqSFTgxI", "v_F559bkkKSp8", "v_U37UAWdI-vY", "v_7x_1tRem1gA", "v_qA_KTu8oTW4", "v_s0N0PzdwVik", "v_6DYQHmsezUw", "v_HGduo1zU6Ok", "v_iUMDlxU14bM", "v_r1y_ASZDdEo", "v_MU2DUVy_wqQ", "v_y_Ak7a3oXRY", "v_lh8ths6sKAE", "v_LANB732DHbo", "v_R4oYA0Zu-m0", "v_aa0MLYA8F7s", "v_jzCnWUUUviE", "v_Rd18n3PeZvk", "v_hRuHqoXEvsI", "v_xf_exEkpJe8", "v_bDK-_jU_KzI", "v_hgLDMHCcw4k", "v_7o-2My6U3GU", "v_Ieb7EkMxpJk", "v_12v5k4Z8lAE", "v_NVR52Aed_7s", "v_M4npKXFKxPA", "v_5_M10vevgJg", "v_CRNycmwvGXs", "v_3TLhUYQ8geM", "v_Gfsk28SzgXk", "v_wD-9KvI1-AI", "v_AUPs7Ukfc1I", "v_ur873jaQO3A", "v_P3_YQbHXEIs", "v_rzm4V_McRhQ", "v_CArYinl5tFo", "v_WqDep-4l0yc", "v_8r167TmBebg", "v_Rvx2EoMScKg", "v_I8jhEprzTN8", "v_ukPz_13Agis", "v_xevpFDYTJ0U", "v_lVOBMs6op7o", "v_cUdIbmXb2yI", "v_fPCfTJLh46A", "v_zg6BRB4a3Fo", "v_xkIhTMJ_ThA", "v_-9l1Rh10bO8", "v_u9aFICSj7zw", "v_u7OvguFW-Hs", "v_wB9LBEHR5-c", "v_I4kjOE8HnU0", "v_ihdkXBpzKbE", "v_Z2KHO87wHzg", "v_W1krUTxgsMc", "v_15IRaGI4Ml0", "v_pmlK-IV4vko", "v_HnCUykqco5M", "v_8nQGd6hiduA", "v_QpJ5npI8qO0", "v_BZAzrFF5emE", "v_psgIH8U1adg", "v_8EfkFxoXI_4", "v_n25mDmcBC6E", "v_N-1b20gDnCM", "v_2fMpsSrmeIA", "v_c1tbdVxIhH4", "v_gVMG_FHDrvo", "v_9FK7tjzBKio", "v_FYhB2rQwfCc", "v_zmmiX3_TJ84", "v_Xf0c2abFH3Y", "v_UqU_mAjgknQ", "v_VxoBV76IkLM", "v_CRDBKk44RWg", "v_0gw1Qq3WRbU", "v_s1YjWVUu6pM", "v_cZWgq6ATrRI", "v_WoB4lSNBDww", "v_YS8swiRbbIE", "v_F8K9WQfHth4", "v_No5ZwqHdEQU", "v_Cqbs_wM3oc4", "v_jDL2tRtoxN4", "v_gIf0VWXI_DY", "v_tyeLIzY0MJ4", "v_nI_XzNfxjlY", "v_vjMuhHo6wMY", "v_9jivQgF6J1g", "v_kYkwA_lvqYc", "v_g0vAi9iuVPA", "v_Rq1MoqtH8fM", "v_BwR1DPCVsP8", "v_7NG6UrY2Foo", "v_UU8Xtm8Gl3I", "v_D2JvqkKa-qM", "v_nrvB8pcrY7o", "v_51wFW1g42VQ", "v_6HmKyms-U2s", "v_oNrWO_VQQbk", "v_el-ogdlS5nc", "v_HtCQ-OmHJl4", "v_FFyJjF4MjHM", "v_MMB5Cn3JCGI", "v_rzIaKwWJDZI", "v_HwTSF0VgmMU", "v_H_dERoTis5Y", "v_5kmGgH4xFW0", "v_0YHCiC7IIg8", "v_WuO75Sb0Kgg", "v_dUzqM38vwPI", "v_LdzaFk5VrD0", "v_7H5oYHs7EJw", "v_C0gGikr-Dw8", "v_xFIfGrhYpAg", "v_Xq2LIzE5eDs", "v_YeikEC85CGk", "v_CIgdBoHjGXU", "v_HsGz6S2MBU4", "v_LbS-C68GTX8", "v_v5patZyuYys", "v_kyvxaxRFLG8", "v_kuJO1VapxuQ", "v_xc0Wm-TH5K8", "v_VXCV3KUtCdk", "v_lfH_S2LTEXA", "v_3JHIcli-Wlg", "v_sCxGclun1E0", "v_4R37E4Kevs4", "v_4Q5YJKHa5W0", "v_H1cKUnazzFM", "v_VAG6ECk5WYo", "v_oy1XjDer7o4", "v_rFM3OUUL5fI", "v_jimvzigX1ak", "v_pouxwDABDrg", "v_aKacWW7Mn2c", "v_TjLoGNBzNRA", "v_rMZtiiLAqoY", "v_-7wfTI8Qv1Q", "v_wb8TkqxxEuo", "v_gdyEfPbUEjw", "v_LM8C4FSpN0w", "v_QGzrtgTrwiQ", "v_pwKZRo19Vf0", "v_len7R78v5NY", "v_4KE6dUAGZ94", "v_9pJBfTZOcxI", "v_tm_CL7A0W4M", "v_rRSTE1EsAUM", "v__KOVk8iGbrA", "v_JQpx7CcTstU", "v_unI7FhokvbM", "v_5rVXCKLihyg", "v_yX_DJiboktI", "v_xmSN6La-2vQ", "v_et029cxyEOs", "v_kCb2Km85Yn4", "v_T8ae3_Pm5eE", "v_BtrGC6PUPJk", "v_06xJ8-Dg_j8", "v_KgEHEyz3oKw", "v_gLfIPN_WM48", "v_2zohqWPmeQU", "v_asLRIsN6wLQ", "v_VcQHv5PHb-M", "v_GD9SfOn3irM", "v_jzVxdBzCuoM", "v_DmtaWx7QcZ8", "v_qZk7okgCU2M", "v_Oa26_SgrY8w", "v_A32TgJfp2z8", "v_UYGiq0CsYEs", "v_2Ks8gsK22PA", "v_WKoHUS5B2u4", "v_BD7txKlwoj8", "v_lVu-4SKcb4c", "v_qEU4vKowVo4", "v_tBGeBbO8gh0", "v_nEmuDmbOp1E", "v_rwxSphRRIL8", "v_feWO_gqAcGk", "v_oFc4uYTxEqs", "v_HGSZ9_CVuM4", "v_cyhWzLsM29E", "v_V4U5SaPDL0E", "v_4NSWcmO_u4I", "v_-E9YQ_Uhu50", "v_USWExMIMcik", "v_MXDeLfF5rok", "v_j05b3qqgRxw", "v_Re-SsHmajds", "v__aEHpGmhHe8", "v_Mpph0kFsyZ0", "v_kbdBKIWKOWk", "v_8jUdeuAOEJg", "v_Fok7z0mLNbU", "v_E9R1H8xRIW8", "v_bvnXdr-Hre4", "v_dea_92hDJnU", "v_-_gDSRlC1kg", "v_NIY1f2KcEe0", "v_kh42ufAYMZQ", "v_79FMLEeVp7Q", "v_hfk93bEIjwc", "v_shBiO7aGy6k", "v_LcXB-fSLTKY", 
"v_FQVs9_IbgOY", "v_hsPepNAzu_Q", "v_mYHezmI0U6U", "v_mpLYUgMhacA", "v_VXLyTLY1PAw", "v_9qVcdqGeAzE", "v_OD7lx6blG9M", "v_-01K1HxqPB8", "v_GyBIC-DBoss", "v_WhBnR7yIvJc", "v_pqVWGi0d4RU", "v_dAiqJJKezPE", "v_dMryzJswHY8", "v_KEMMmoIdT3g", "v_sC7xUkNTpP4", "v_ZUM89wyBcYY", "v_EwYgRPVDQWQ", "v_4At1Vd-0lWE", "v_dSF2i1OQtMc", "v_HCphw9_Jku8", "v_eUxFTEeNIGg", "v_EHianByJXXM", "v_Tv8r1w-rLME", "v_8xvoAyY70I8", "v_WPK5VeqNSh8", "v_bH6KL0ai3Ww", "v_DfFqlrv7F2g", "v_vrY1ZMqjMog", "v_bJ5YjjFLGyA", "v_V4S9ppnrXzc", "v_qNE6ju5dRc0", "v_duZnMXDWkGw", "v_aNE5ZWD5E34", "v_AeOUzM7nl5w", "v_E88Sr9H3Wi8", "v_nIymjHWIz7Y", "v_SGiMk9KdOQw", "v_X0UmqVLOAK0", "v_t0y6dkIwEvc", "v_NnMMEFglHBQ", "v_qRI4UJ2HR2g", "v_kWdIYqh6kEo", "v_87JvCGMC514", "v_Zo7oziWT-7o", "v_foZ88hBB77I", "v_swmNnPkPBek", "v_YlK_P4Ys6hE", "v_fnPX_0Rs4eE", "v_dBzWXTH5j00", "v_HytB88Fhqw8", "v_mjbzWcSeiwQ", "v_6hjRnngC73o", "v_Ny8NDMWfGJk", "v_pBaeRTgaNBM", "v_Zxi0V2pBPlA", "v_xpKAvKrrBDs", "v_3j2d27w3x5Q", "v_HrKO4BfXVbk", "v_jfhKC2WFDTo", "v_FmDGejzydo8", "v_lKDTjsH9XtU", "v_u_HDCcby_B0", "v_3MJQEQ98168", "v_8tlLBffNjf8", "v_1sTTv-XC-RA", "v_4o8MaHTb7E4", "v_xsdrqauYhJs", "v_tTkavaWq0QM", "v_Hj3kEemIPic", "v_5TjIJOFGupI", "v_ybAEMliC7p4", "v_vS0ppdYTwTc", "v_vTbeVoT1Gsg", "v_62BPME-ikJU", "v_j1XZ3FA8EYY", "v_8FSKFy1tPQc", "v_NcTZ3wgdNOQ", "v_4ACqWG_p1bI", "v_dcEdjqyHj8M", "v_ZFJkIiqOErk", "v_2uBPhFis_4Y", "v_YAiCO8en_ls", "v_PUWg7fXnCf0", "v_P_zz379qSuo", "v_R246xMs2aig", "v_FIw076A69Oc", "v_D-0MV6LRvbs", "v_NzrOOXRyDPM", "v_JDM9Akcs96g", "v_an5XI45pIl8", "v_6LLDsbc8XMM", "v_uBPWqgUiQWA", "v_ep2Kyk8CHT8", "v_qr5vqi5tTL8", "v_slQuWp_rMTE", "v_kyObhFkHrak", "v_Nsl_tnIRNEo", "v_RhokmoZJrco", "v_NhcOmldkGIo", "v_sVDRluetSyg", "v_aTQaYDmcMDY", "v_1a8PCm9e1YU", "v_4KMbeat6yoE", "v_onFddYAkyyc", "v_zemqddZ_YO4", "v_KNLGluuewIU", "v_gT_8511vwVE", "v_qJ4ObH27qjc", "v_QrQN-Hm5xew", "v_zhPqZtWuhow", "v_laeOL4ipHck", "v_6VT2jBflMAM", "v_xoSA8_kTiBY", "v_O1z0Q-3OUg0", "v_LYqfB7HsQwQ", "v_eic6dpU0ytM", "v_98OypfeTKEc", "v_Px08sPeSsG0", "v_0cscG-qOaQY", "v_iDhzxzLmwoI", "v_bY-4XBIGiwI", "v_PwOMgya8qYI", "v_zmaDLAZu4kA", "v_IXUh06YCtjw", "v_XZZRyOhxQBE", "v_L1B_cE8waag", "v_xzuQIbnXt2U", "v_LUDZ7e0RdEE", "v_LygR7ds26JY", "v_kzbQWKUMyS0", "v_OS-h1xzAZno", "v_E3IP4Y8e_ho", "v_ZlwU7HKcoYs", "v_9L-aeZsgwZs", "v_1rf7t4sYtIA", "v_A7ER02-zr54", "v_ZQSa_8wofFw", "v_Tab-dSCaMC8", "v_RTD_JWmhNkA", "v_QnQ2D-tJ9pM", "v_xV7uPiqNuwQ", "v_5zT1GWfmVLU", "v_TVPiI9551As", "v_t_Creyg6ANs", "v_CTWo9EfQ4Hc", "v_5WHnYEinw4A", "v_Sx3NHkPp3Jo", "v_u-yFENQQxAo", "v_RVbejE3s3m4", "v_vy91mJTl7rQ", "v_Zcq_xLi2NGo", "v_mDvWGOr_sws", "v_U7vWTmVzWSc", "v_y56qXoJh6U0", "v_bc7r5_gSAVg", "v_p1fpQ4yR1co", "v_QuEHZ2Y3H40", "v_5BbHu0WQZqw", "v_gN8F0o1baAo", "v_Ul8qLMmszx4", "v_pzkwJYJol7o", "v_IUnqrqZ_x_A", "v_te5xo60oVZM", "v_mwDQENGsvd8", "v_K5v9-h2S5pw", "v_ahpoDWYqtfw", "v_cICxG-28hK0", "v_7H4-gDM3r0w", "v_bb-DPA34qvw", "v_iRwRwpVLE_Y", "v_HdZjxdQhtZo", "v_qW926_opnTE", "v_WCCkmuFrSQ0", "v_zWiu-wdKeWs", "v_GSFyEkGCUVo", "v_ANwaFSIHdW0", "v_PFrFwE3CfjE", "v_vGKdr_au240", "v_ELiXlJUBzzw", "v_Q0FbJovQ0Lw", "v_r_jey4tT7zo", "v_zufK6CufVhA", "v_zYQ-WdosIwI", "v_hsJct3UsbAs", "v_3L0MnbQkLWM", "v_p-l6as8o1f4", "v_rQZIJBinOsw", "v_fjN9Qe237bw", "v_Ls-0SqAeXW0", "v_QTPz2j16KFk", "v_EWWCQH6WbtQ", "v_qKG1mU0Feug", "v_ngxs6ngJR4k", "v_spZ_RrpyNJw", "v_iyOyZJm7fVU", "v_dZZE8HI0OBE", "v_y-rgla4aNUo", "v_WqnnGmL-lmU", "v_lEGetBydfl4", "v_R1Q-KP8GHFE", "v_CsvEXvHlO3M", "v_A92F-HvSZx0", "v_s0swzu1jIpc", "v_lGESoAdgps8", "v_ETHVjrG7S4k", "v_vlX9sU9bM9s", "v_OabVylOVys4", "v_jRXF5_vNUWE", "v_S6VgTNGiIkg", 
"v_GaIvG8u1tzo", "v_y6sx0u3MYFo", "v_7toItxBIVtk", "v_WHYEBsWp5qY", "v_sZ95YHZtVCc", "v_x0PE_98UO3s", "v_vWde8sMxe1w", "v_OWyqpSBJH8M", "v_gwpQuO5DPOA", "v_vynLNpomc30", "v_KEXm-3H6eTg", "v_Tw1vg9qWLx0", "v_HCFF0svChQY", "v_h3qKte2gv14", "v_msiX-xky6Ac", "v_XvM1rCVQWWY", "v_wlcU-u-xsH4", "v_zuBJzdDI9MY", "v_U36rsW_WhUA", "v_HguqDEvSN68", "v_Vhf-vNRYQEg", "v_JyfelXz6GaA", "v_XFlKGUFgBnc", "v_R_YZNqP1gSE", "v_xMuC8lmVX3A", "v_sVT71OQjHE0", "v_8tddzer_NfY", "v_FExyWFc1nU0", "v_Zp9mSiw8Vkw", "v_nfjIQXyL7_Y", "v_JUfowIpmwaE", "v_RrVsNvO6Yd4", "v_3FAvxuTw4NI", "v_y9kk0ptXevk", "v_F7u4kpwhs5g", "v_S5MD51gg-vA", "v_bb9AIdvKkZU", "v_98YZQ0gNjpQ", "v_0y_5NIIvUzI", "v_pe0MhPhhVIk", "v_pXcFBfv5Sf4", "v_m--b-Ltjm_Y", "v_6kgJx6ahgq0", "v_Z0eBz6QsI-c", "v_ChH3zlLeWug", "v_1dvrNvxw43Q", "v_03JdaRepHkA", "v_UlWLcqIvLKk", "v_WcBB6DfMTWA", "v_xiICsWY0xOk", "v_ZeBrPKBGb_k", "v_BSKolF3MMe4", "v_x86YIU9TIPw", "v_eyBSKNXo6Vo", "v_-VexUX6OJBM", "v_a5Xc9ZgN2yo", "v_GlJ4DvArV6Q", "v_B8KJJecq2F0", "v_KRES3eBM2l4", "v_FMVECEaQ0Jo", "v_O_fdvOxYqiY", "v_m5NK0eErs90", "v_NYRlfaKwTag", "v_jPaeFy4Phz4", "v_-npRRmY2wBs", "v_ecUypvzBAOQ", "v_TFIlTCvL4oQ", "v_1lagsBNqNe4", "v_G5frRzhSNJ8", "v_ot-Y1sa-ujc", "v_35WvCw9Qcqk", "v_yBL1hCKmX7s", "v_Yl85vnsndx0", "v_dnQcp43wbRY", "v_OyKEEws65l8", "v_M7Lc8nh9auA", "v_BhAQhPasmhU", "v_RzFqIN5hWJQ", "v_Bh8RcPBQjxo", "v_z1tV0-C3IBw", "v_ruNII4WvE3k", "v_PxGggNnMGtQ", "v_ZBAQx9DxYTo", "v_hiYPv3MrrUw", "v_8inrvRctXQ0", "v_cIKAwgMLKw4", "v_lyjz4sNglQg", "v_YZQ_qh9wC4w", "v_8yeUJm0Pl24", "v_uOUjBTlwoxg", "v_VtS4vy8Z0RQ", "v_fwwo0GsYB7c", "v_F3jJVS3NHf8", "v_MRzsZN5p9QY", "v_uF9othvTXn8", "v_C3Mdjku7ZmM", "v_FcfoTk3UK5g", "v_PL1JmxPH7y4", "v_a0Zlu4AvdnI", "v_aVH9QsSATKM", "v_5c8HvpeRWrc", "v_NGiDXRIx1gk", "v_qy-LbstiMYg", "v_TomBet77rDc", "v_45WdXofnTkI", "v_yToUeIIlkOg", "v_J4rzLO4u_pI", "v_bjKd--KFl0E", "v_rO9SwC42Goo", "v_sIYRsGZm2XY", "v_zN9COeDCm9Q", "v_DhgdEfKAvO0", "v_JQcN61A1MEU", "v_hhQ1Xbytds4", "v_TFwELfVs19g", "v_6DzBNkTen1g", "v_GgiaxJ1JeSM", "v_zhH7wxXrGSY", "v_jherly5DNjg", "v_OFe8toY6Ch4", "v_1VwNfMlb4JU", "v_ujltXvkQK_g", "v_JSqJmZPqDy8", "v_furUOKw0Qzs", "v_92fD8Cy2zL0", "v_hiifjzLG8Io", "v_P6cR-26pTSY", "v_wyNM_7YDgfo", "v_q1yuDuO01tg", "v_JfifgnVgJEU", "v_f38Jt5D0z4A", "v_SvYeqLg4dQU", "v_fpIcr1RaEDc", "v_I5g6I-FOguQ", "v_1AiQt87brik", "v_RSyk6rS8ay0", "v_nw32dno_RcM", "v_lI6h3H4Zs98", "v_IJAR9ERJt4s", "v_LSaUJwsU4GQ", "v_9cJi1iD7Iyo", "v_5JG8Dc2wsdc", "v_LqCg09IRp-o", "v_ABQYqpWF1LA", "v_0nPeqy-DA2E", "v_63d_t0U1pXw", "v_MoSuxL57xRY", "v_LMMimz1-fa0", "v_mUyMYnGXKgk", "v_THOLslLjRqs", "v_RVHx_Otzcl0", "v_XBO6AIdaCzU", "v_NIJTz15ikgA", "v_rnhtmtW_a8o", "v_s2VpBgSWIPg", "v_Trzd5ijRN1A", "v_suyh4tGuScw", "v_IgyBIt3GTAU", "v_QacSWR8c-8Y", "v_vXcfhKnUjRc", "v__S6D21MV8Ks", "v_TokZDNwr664", "v_p0menuS7Mlk", "v_vvvjTjsXbzE", "v_iA8ylJWzzVc", "v_ZZ71FIfxX-c", "v_9PY28-zQhm4", "v_5MLEO5JWRYI", "v_LCe0toF3058", "v_KI6FNa3BwMM", "v_qN0a8-A-5Pg", "v_aqQ7-J9kbUE", "v_mlNP3uaTB3Q", "v_XeRiPVEZ6pY", "v_3CeZS6-0NfU", "v_2AQg1DDVYHI", "v_jWPr92KwXeY", "v_yDH9iAn82Q8", "v_7-_Nur_xiV4", "v_uLeJBFypCHE", "v_WPVb8fYLFUM", "v_LBh2kEwx2cQ", "v__cA6yS9SeEc", "v_k7MXH55q28U", "v_ZJk05q3y5iM", "v_DVXOr56dlKg", "v_CoP3xaSZt7A", "v_4SLvbRa2NI0", "v_ijChwOwYDWc", "v_R-RQx5pbMvo", "v_RTwa2d6Oqvo", "v_Jv-bPV8eswU", "v_C1IuvUSmcvA", "v_YSrnHPcdGL4", "v_pGKTRM1vcfw", "v_hi4aLY1ajTY", "v_8ma-p7ap2MQ", "v_IJ76Wtgg2g4", "v_RN2QwhcAsUY", "v_3TNDCTlLlGk", "v_s5QkiA-w5YE", "v_9xHLzVojpBc", "v_DgcoDX3HbKY", "v_dc8pLGl9Ccc", "v_KZLEUd2ALVI", "v_agM7yjqVKo0", "v_cinmiQ4tHYk", "v_XoFikALe8Q0", "v_msd9vrplD-I", 
"v_mO1T8zhIliY", "v_cWBbuw_DA2c", "v_uElCsF1fOgE", "v_pMmlJGSucss", "v_WJfMz7joX4s", "v_qmlohhdz784", "v_tIbSsad0z9U", "v_RfchfRzuV8I", "v_AZx_lm2XLHk", "v_4_wfCFTnExI", "v_EVtM8DKW4bc", "v_qaB0igbuKuQ", "v_ayLeSjJz53I", "v_N6ERAg1EKcc", "v_9fQ2wWFJJGo", "v_ArzhjEk4j_Y", "v_-GRvxWH4axc", "v_Yc9pZ8Vy-3s", "v_tZswexUR6Q0", "v_iN1DEIADG9o", "v_mua8hNPuQHw", "v_ZjvmWr5LoFw", "v_MnZ9L54twws", "v_lc-piYwzqsA", "v_moUL_qLnNDM", "v_mj0lRelI0xw", "v_vwpaEsh0-1U", "v_NK5FWZ2BOQs", "v_NiinNJg-uyg", "v_jkay2K3RA1M", "v_IlN_XipVf44", "v_yj2WJBqmEHk", "v_SrKGO2Xu670", "v_ICBrXUuwvgg", "v_MmipoQF8EJs", "v_oD0RWEO8D1g", "v_uptOE6bfBgA", "v_J27dBmSpRW4", "v_U7vH9pEfGVw", "v__QdPfYK9s6o", "v_XkkOVpXegS8", "v_z3-tII3XcUs", "v_f-JfdEfNQlE", "v_CRH5U5XKb2Q", "v_-0r0HEwAYiQ", "v_ZYwfvPJv4Rk", "v_hchuVbHYK_k", "v_tN1_lOJlUlc", "v_sMVf7HDvsEc", "v_h3thb-S-3L8", "v_ESlUzrtqC98", "v_O5vpeIfQxLQ", "v_Ce0t7gfJl5w", "v_rrKGM5hck1A", "v_7-u3OI6HDns", "v_jNPOEMYJlgc", "v_esuEWVNHfsM", "v_OqA83jGQtfg", "v_xXTfM9xXFQM", "v_cLTDcBhgRw8", "v_-02DygXbn6w", "v_ivkF2jbavhc", "v_p9hJmlWGvFI", "v_Hxgjh9Yb408", "v_xekPSA9h_jg", "v_Uc0Z2tuIJVA", "v_NiPqyUecGdc", "v_6gyD-Mte2ZM", "v_bpB0GiH6uDw", "v_6UqWORrn3KI", "v_dgas2Fku3No", "v_nX-GvQmf5Tc", "v_7s7YqryNMAE", "v_KzZlSbM16aY", "v_lmcBk-gqMzA", "v_SymvoBsqt3Y", "v_a2HjLtnVDaY", "v_Jd6dM5p91M4", "v_5AG9Q5bF4pM", "v_vc820BteGzY", "v_fk_hkHmnmJo", "v_uDlyfvy0NOs", "v_Ue90f5r-2Qw", "v_CSsilC4QbB4", "v_Mdt2E8KYpCg", "v_JKZ-3N1fYL8", "v_twJ2uE1GS2I", "v_R_TRpIHkgMs", "v_MC0L0ljTUiw", "v_bmIWsU8sNlw", "v_UcI4miTi0Cg", "v_e_0bMJEFiN8", "v_iBz_YrU-T80", "v_UWTpfygMUQw", "v_dnJLvsqqSgQ", "v_83oa1S0x9zI", "v_2Voht8wf3dQ", "v_kWmf0_XSfBU", "v_ibWb6iRQiD4", "v_p5Ynl_rGoEU", "v_0w7cO4tscBc", "v_qiT-OtAHtvk", "v_vCAGiXqYXBk", "v_AcLZk6JyXUM", "v_RrKCACSu9xU", "v_ke3R1rOeQzE", "v_OBfVj8mCVUw", "v_fCLnOf-YjEI", "v_pX-ik8n_eNQ", "v_LCLDhKiMAPA", "v_ml4aMGCJgP0", "v_sxQ9H3c5bRM", "v__8Zk9dfBgPg", "v_RUAWJc2OIJY", "v_aw4ehW-wTKA", "v_EXr5QXCpkYY", "v_suL3ZeuQ3DA", "v_ZVNRQ_MPZAs", "v_CgWVpLVd16o", "v_ycA2gqWhPGk", "v_wONwHYy59Tc", "v_QMHF20eV9N0", "v_rWDMssiL7hE", "v_c4ctwOucndQ", "v_MrVj3D-DuJI", "v_W_hux-Z6Ll0", "v_Ty0BvWyYPVA", "v_gFv3PrFkeL0", "v_N5x5VUK7Kx8", "v_3ohvA6Raf4w", "v_LIJBolW8k5o", "v_ksvK_P-Eas4", "v_3OXh6OV2Zrc", "v_aYrhuTGO440", "v_BcMHGhxdMl4", "v_tnB7LNIcXC0", "v_t-8wEopB3AQ", "v_U7k6GFEOt7g", "v_x6E92fGgdH4", "v_eFbZ0_TJLE0", "v_Z47QGlaQ1NE", "v_9RAW6QibWRs", "v_Jd0hvO7erXM", "v_m6H1tLAkyjQ", "v_2belnHaa36g", "v_4T8uFygBeNo", "v_-voGnJbk3CI", "v_WZUxscN9rW8", "v_scwBQj4GE7Y", "v_HFDsuGHojDU", "v_PCSlAOYPMOs", "v_w3N0Pyz2-m0", "v_XDOtHC4E6L8", "v_AR6_PW1um-I", "v_TNFoUBRsngY", "v_470dhR3Yrjs", "v_8XxsgEw49p0", "v_xwSHzGCP6iA", "v_RYl-eG9hasI", "v_IsM_xfhJzps", "v_WFL8DhccHr0", "v_mShwD_I43ao", "v_iAQY-FHckIM", "v_dWyE0o2NetQ", "v_j2ESEJmy7aA", "v_AO-0r8H2DOo", "v_2tf414bkudE", "v_vL8s-b4eJiU", "v_jmxzDxfSbZM", "v_hIJ6VTEKji8", "v_kbe4iowYMqM", "v_FrRKm_V0lZU", "v_hj7rkE0fPsE", "v_eMI2x3HFozQ", "v_yAkVtmP7654", "v_dNgXkPmvU-c", "v_b4b6YkxsHk4", "v_8YKUwWUU-O0", "v_SqEHpHNuy-w", "v_ZkIGGQ9iOSA", "v_gIhVeU8xbrs", "v_VhzPqd0Su5I", "v_N_um3L3w1uQ", "v_bG7hnpAeja0", "v_L149Uf5V7K0", "v_KlheP4IiS8w", "v_IZNrdIkMCoc", "v_xor90CAOc94", "v_sWQ65uwxXbA", "v_4x0LdQRN248", "v_wvmuUuLOoEQ", "v_WCChCrg9eZU", "v_ijSmiDjlmlU", "v_XASTWKClhPU", "v_YuuWL4EK7Q4", "v_2GEZgHcA7zU", "v_vFVg-ImCW9w", "v_ICMcCoyuBAw", "v_3iHHhCHcT8I", "v_fmRio4-6Xqc", "v_EN63ldqfGsI", "v_K6oVOQG0lOo", "v_mOISOUKHpNM", "v_QBI5ZH_cdik", "v_Rho3u46ZIEE", "v_GtCXZRGSaqk", "v_2ISOAmuzs24", "v_pKFBr0pMn7I", "v_RKDjetk5Kko", 
"v_B53aHHzgTzc", "v_ywSeEtroEXo", "v_CL6TbOgnLzA", "v_kGvs0Nv5zJo", "v_iB20nDf5yJs", "v_BgAiDS4fF_I", "v_VpZ3PaLi2RQ", "v_zgnBeiEB5pE", "v_trl-RCWyhb0", "v_gY0qgtM0Gt8", "v_8nyOw9vBh2E", "v_fWD0rL_72nw", "v_ijgLl3PHHE0", "v_Ule69iMpA3Y", "v_RFgusQogDyQ", "v_osjru9UsWsI", "v_2UfljrwzsLs", "v_AMMECm7Huhk", "v_dcsQy55tjw4", "v_myHHyzx6TPc", "v_AyicWbHhUWc", "v_MM2ZYfEWCQo", "v_F3FjEM9ls0o", "v_-A6e83tl4Y8", "v_3nzXMKByUnk", "v_AZn294ubbps", "v_lQUqzLT7bl0", "v_ZJGXWbt6cbU", "v_f11zga3X2L4", "v_apjGHMrnMV0", "v_3I4EzlMo124", "v_9JrRZ9i1sXo", "v_QCeGGnd4QB0", "v_oFh_AGspaEQ", "v_APlxSpTZVPI", "v_49drGj3JUg4", "v_21biKVGaY1Y", "v_Iq9cAZxki9Y", "v_gh0GD6OvLHE", "v_fid8KlncwTA", "v_Jh07fhoPWEI", "v_YfxK4HAp8jI", "v_pSp7zYRYjHE", "v_UuLBAMSmwgc", "v_mFWRIp164r4", "v_6hu3V1PS4vM", "v_iwhejKH3DSw", "v_sPSfixKrDc0", "v_3fyR5F18WKg", "v_bY2dgTJFWko", "v_dfjl7sS1IGo", "v_PfleCcLgZ7E", "v_JZN0L8pp5hY", "v_3SLaaTD8t3Q", "v_0qQvcJJekN8", "v_2tlLq9qvG-c", "v__4LZrf1GL1s", "v_pXyT_AybrQ8", "v_G72MBCYwT8E", "v_B8d9FYuZglQ", "v_ViCGpj478Ik", "v_juLxWt_3omw", "v_qdE6dbQOnt0", "v_sG3JpMuXFnU", "v_uqaSFllHrco", "v_0ZHZ1ZqmT7s", "v_qenGkKGoq6o", "v_weKPXw4nxKA", "v_i5-OVkjT0nM", "v_Zguc8yykcgk", "v_mK3keyPMe3o", "v_fWVUEOVUzS4", "v_tcGO-GHcQIQ", "v_MSSb3wPd5hM", "v_PvB98KAatK8", "v_Oc8ACBiwIyE", "v_T7kOKW76EsA", "v_N2hi_TNBk94", "v_KxJpfKZbNiI", "v_KyDcuYjDi_Y", "v_4DMnMu2Cb_c", "v_B1DNoole3Wo", "v_ma9R2AjCRZE", "v_5zYETEiYiCQ", "v_DbBqhlSvr-o", "v_oDZlW0OgEgg", "v_oSDHYvvYo5M", "v_MHhMO3yhcfU", "v_vh55SaEpuws", "v_aELu8QS8T54", "v_w7IeqGuuA7Q", "v_xqzsv8VpaNM", "v_dTwH5Fzu4eE", "v_6VygM9-XgAk", "v_A3a6MNgab0c", "v_x6Z0xTgWoVI", "v_Zc44Ddk2NG8", "v_wMwJObSq21Y", "v_TMGG5x-UQ2s", "v_4mlA78hn4mY", "v_Z4yZr5dIMec", "v_XxYoSn6NE_4", "v_OYIAhO9nJmk", "v_BtYKJOmw-aU", "v_85RJm2qymRY", "v_1Se1ZqCSQvk", "v_blOgPoTkhks", "v_0QNcOwi5bu8", "v_4x3dgSgXQ38", "v_dwCeFVAaP9c", "v_Pcro3S-4EnQ", "v_6r3qgd1y5KE", "v_8eUBLvj3veI", "v_Q8iXOTXdy2Y", "v_Ihmu18WVMpk", "v_jcQy1x8lDaQ", "v_riuJrZqkYYU", "v_Amgt0yzQido", "v_vgUSEkvJRlI", "v_p-vfyM7ew04", "v_UH9qJ4Y6ENA", "v_5JCqKshcfHE", "v_kShrO0yutUQ", "v_1qKXZ9fThTg", "v_OM58jhy61Mc", "v_3K_8CdJS9lE", "v_NKblxYCeetg", "v_nTkMD63Wj14", "v_uaBTWbu0jps", "v_umi5d_a6bfc", "v_i0Z8I2WCLNk", "v_vRNcq6nOk0E", "v_EQPiYEvFmSo", "v_UGujWA07GkM", "v_JeSxkw4ed-I", "v_O_L0CSZ7nnA", "v_d7gY7YJ3Fdc", "v_U7oDqpIYsxI", "v_waFqh-Qkafc", "v_cjUz6gVQPEs", "v_qVHazdU4_vY", "v_A8KtrGjBodw", "v_Ta_Kf0dCd3U", "v_o_JAjYZDs9Y", "v_LmioUbGNv04", "v_tMM166j4YEw", "v_ufBz1xfqQoM", "v_-erT3ckPkAg", "v_A7oh6l1AIvs", "v_Mx6Gt14tnmY", "v_FlLDPameKGM", "v_T9gKHEOvRKk", "v_ggyGuKFjdxk", "v_Lb43_7s9t7E", "v_6Kp_fvkZWTE", "v__MR8G1jwM4o", "v_yeWCfvmeUvM", "v_RjBXzs2XvbY", "v_RLMvrl_vaqc", "v__AKzq9X1Aik", "v_Srpn1NaBueI", "v_zrnxRV3yLR8", "v_lyJpgvmTOpo", "v_pznmOdbp7E0", "v_00SfeRtiM2o", "v_0drl-yrfBAA", "v_1Vu0bzAKL8Q", "v_dcMFJ-8Eo7g", "v_2icoQWmbocU", "v_kElViDpjunQ", "v_ubVPP8BVcfs", "v_jd609r5yKkI", "v_wDFpFJ1CP9g", "v_fJNauQt9Di0", "v_swbCsf51XVg", "v_f-Cf16fQTB4", "v_a2jpe1QfZdM", "v_Q9n6B1AVO4E", "v_bnQVFmXUx_U", "v_inFPa4wxOwQ", "v_ROMy00dG8Ds", "v_gyQ3NBwXhDU", "v_2_hcULoN4Ls", "v_a7YSE6dZ1yk", "v_-E2dqOULQgY", "v_gl_0jjJBUkc", "v_nMFEEBtIu-Y", "v_lfllVwgOWBk", "v_RPLbUeV3-o0", "v_j18sB8o2IQw", "v_re4vD9S8ThA", "v_oS4w7-0aH8Q", "v_pn41XETdQB4", "v_Vyj5eIh3jh0", "v_eU27exUJZSM", "v_vdTisVMhW7I", "v_0gLAhptj34w", "v_xbcP38aF5Ok", "v_MDucYea4ie8", "v_DepG0r3JiV4", "v_k85EQoiLckw", "v_j8lH0saRXl4", "v_F9Wv_Lxe_QM", "v_3MS3CAyl_YA", "v_JnpcJP82WLI", "v_9A9_sNvJ8zQ", "v_4R0fSNCWUo0", "v_P6Z-7k-erfc", "v_81k4vwur1Gk", 
"v_X4l1wbSYQFo", "v_f6j6lb0AaxM", "v_J76bFZWXHFY", "v_D2ggFcgEbFo", "v_IRvFx8K0gAI", "v_BWKKwqX62Y4", "v_AH4v5vqsUlc", "v_NcjQI0avKHE", "v_ULwdDmQ8Z_8", "v_tlgEi4bU9Fc", "v_dn1qrAHh7k0", "v_AFb77tjPuwQ", "v_G8dCenteoT0", "v_-2VzSMAdzl4", "v_SrA6k_iQNGA", "v_zPV8s8ZuLBY", "v_xsBFnpdLWkU", "v_MjRsR_7ECi8", "v_UNbC2c1C824", "v_Z7ZODw0C_hY", "v_EIibo7aTpys", "v_ASXqlsSfZ5E", "v_H0l29-F7Edg", "v_jprf0pE-4uI", "v_DvIng_zQPyY", "v_EYIYohKR0Qo", "v_I3BWhaDRxGk", "v_Hi0L9rcsXUI", "v_hjuvoK5En4s", "v_ntS2PA5YWuA", "v_6RePzOd3GvQ", "v_XEriJg8cW4g", "v_UUjTMDSUvs0", "v_r9eXOf4hvCE", "v_e51ld7ANyQg", "v_fgP2pf2rh4Q", "v_fd7VuzALBCM", "v_U6S8a3WI19w", "v_I6B4g85H2iI", "v_Zv78Or7fW5U", "v_9XmzbuByY_E", "v_igrjxhf0XyY", "v_0y4mO86t4Z0", "v_3dAJEnMn6QA", "v_uz04njTFKP8", "v_FklvvNrpsUk", "v_LDIemY9nO-4", "v_cZFThsHMC5w", "v_5oD3-y66g_8", "v_0gf3AgK1YLY", "v_XbkGlZTlixw", "v_b8pCuIPzb3o", "v_46YBNutTwKg", "v_sAAARH12tdc", "v_hvy_V1EWKEI", "v_rCSFBiXxbVs", "v_CdwgIN9FkdY", "v_4CSyAAoO18s", "v_gIwTydKpIe4", "v_Ujgmih4OtMs", "v_O1WvjCFqLz0", "v_Opqg11Nkb7c", "v_C_2EFIuyDSA", "v_fBlvOzfFq-k", "v_ave_VDl3LwE", "v_zLjAfrfqRcc", "v_AimG8xzchfI", "v_VthI1KPjEq8", "v_iq0h4m3I8hY", "v_y1IjkACdnfs", "v_-DzTAnE1t3w", "v_LkUnT9fMIXc", "v_5GFpN0YZEog", "v_iHO42zwYsu0", "v_RnwidjJiDEE", "v_GnLUmMkyvCo", "v_Ad9jrt2bP1o", "v_yVKmkR78Jn0", "v_nIfYhQHFWZI", "v_1scjpxusQx0", "v_RiF_iAc0keQ", "v_mRgS35iyhYE", "v_5I0K3y27EUM", "v_K8f4LNNiQy8", "v_I2w4N_GnyT8", "v_Snj5CuEUbPI", "v_iSIzuN9cEAs", "v__D9oML1HvVw", "v_DinaQYSgbtg", "v_kXvFkU7gQSM", "v_30Yk_1Yc7Vk", "v_Ez7s36AwgLk", "v_8bppcsg07Rc", "v_roavmdw1ORo", "v_MTJ1EtiizVQ", "v_RnN4BXyOtxU", "v_dPZfExDmX9Y", "v_LnMvFpR0xCY", "v_o86qcfpzO0g", "v_Q0U51Hqn21w", "v_oNo8ZpqE_6k", "v_PAGuZzrzSO4", "v_HPNZi_WsUeY", "v_B5VIJnAFlK8", "v_D3NZ45e9llI", "v_9xKOEE8Ni-Q", "v_j_YzK7aHTIA", "v_6Yn2U58qxPs", "v_5E2OdhrgG8s", "v_GKM-K8jbfyE", "v_FrkXeG1YoKg", "v_s60we-9PBhw", "v_W3ozAI2ozCs", "v_82cpSdoHdg4", "v_NURr5XJcwFs", "v_1VSqWp5DZiU", "v_kwaCAq-9LnM", "v_5pl_qttD8Fc", "v_U_FxyViYYBA", "v_ZefWc2tgltY", "v_UnOzWl0EGCA", "v_Ja8QImLWYII", "v_hz0W27EwjQ8", "v_z-_snl6eaPE", "v_hzpFVURhKwo", "v_JDn95TW9WoM", "v_TeLWp5sSxg0", "v_AeefhelpxGA", "v_FQEGKGn9vnU", "v_2Jr1K1wBKfQ", "v_WRX7aUqgZJ0", "v_9GwsrWUq7mY", "v_RVZprJDJz1U", "v_OhpaFQeQtKs", "v_4usf67inE3w", "v_8_XQPqLdblg", "v_31TT2oiYRO4", "v_WNd6SHMi30M", "v_Y8-7fr5bv24", "v_0uOMJSUza68", "v_n4apOkL24BE", "v_KMz8f9vDK38", "v_6it_yeIb_L0", "v_Nqh3RtLRleU", "v_pev7rvOE8eM", "v_fU4EgYmISro", "v_Czw85LWCGes", "v_sARnRvNdl-Y", "v_7eR0DyDg7wQ", "v_FfYNkePtHjo", "v_QBqfrJzcrns", "v_GGSY1Qvo990", "v_iUVz4A5oblQ", "v_ByF8Pg3xXNA", "v_S9alQwrQ-oo", "v_LACH47i14lY", "v_YqairWJU2Vw", "v_LA5UXJ_hVU4", "v_1cCRZztswFA", "v_aOTtBZynDOQ", "v_lZcTesK6CfA", "v_MBTSe-NHK-I", "v_YTdLk7Nsn_k", "v_-SCRtjT7dto", "v_Tc0nHNkf0KM", "v_plE3KNmuwj4", "v_KgccYb6ufPY", "v_5BCWB7Pf2Tk", "v_qgasVDGUw3E", "v_O-upcCp0jIs", "v_d79uK3AhtTU", "v_Zr8cz8QrBp4", "v_4mBVik8dq_w", "v_g-rw2Kyh9xo", "v_e5_lP2HgtSE", "v_6-beYw2R10s", "v_iY7bZQnHXlk", "v_mpFNy97oV0c", "v_I0yNAIWHcQQ", "v_UuJwtJBJ7oU", "v_PmmKHLmG5Ec", "v_DBGea9pST1A", "v_ak5mpw8komA", "v_3VAq3wYxnMs", "v_0iIY3HLF3lU", "v_R2EZlSlDCuE", "v_Zl3YebXhXC0", "v_YjE1by2PX08", "v_bNRE808ALfM", "v_iANrLcieixM", "v_-n0F3QTuxug", "v_wH-uaN8gL_k", "v_OszjSKHCvKI", "v_SGQQSH88isc", "v_I4y6q9oIIQo", "v_sf77PM1CtNQ", "v_bWdufJDosIo", "v_P2fUelA4BfA", "v_l4UJiGsZVfE", "v_RnRUwLtR33g", "v_Cx3QGeQu7xM", "v_MwQTeFD0OKQ", "v_hxbp-zM5JPQ", "v_b3mJ5rPzDv8", "v_GrACpo7aonA", "v_a_uamUiKq1o", "v_y8ENWnuzCIE", "v_hcFw88RcAbo", "v_5wQLpjdsRUg", 
"v_1U0VxGw1cdA", "v_BMxtjh9E7BY", "v_XyQSmMYbP6o", "v_i4SNM6xSLI8", "v_62Dwj4l7_qs", "v_q4Oy6EDTJiM", "v_37gHYr2uDZo", "v_2VTEseqA5SA", "v_at8e-jBBU5E", "v_tyjUDi3uLd0", "v_Dx4LpX-X9JY", "v_iosb2TdQ7yY", "v_TIAAUayALPI", "v_zE0vlPLBVJo", "v_HeOj7jZ0igI", "v_CaQkeVwKiUs", "v_nKa1e_CpvoY", "v_dFsFL_WJasg", "v_fD9JNH5FWCk", "v_B7t85SESTXI", "v_G00TjQ7JJ8Y", "v_747hJQNJpeg", "v_zyylgHTPUS8", "v_RHpigjSwhVM", "v_esNQZCjMZaM", "v_we6Ddq1ABcQ", "v_ewGW8hMlxnA", "v_M9Z2RKnwiz4", "v_1cU8sp05Bu0", "v_JE50XTpCN78", "v_IEqnfSiCIXc", "v_VbhW_K3NvmQ", "v_1zEcIngghq4", "v_FiqkrBh1VOI", "v_uz91AvGxjbw", "v_8QY00KU3gkw", "v_DSMSAIk_xhY", "v_HWgQhsTgj90", "v_8kO6A3W_kQ8", "v_2mXGnG6ZBDA", "v_Jd0KWW9LN4Q", "v_j6zAdpBqRu0", "v_yPA6klGWEsc", "v_h7iCyiNUxeE", "v_TK5FnYshy10", "v_iLaye6q55qk", "v__A5iOie5VkM", "v_jwndE_xn8sA", "v_nDo0nfs9Ee4", "v_6J45AbWiGIE", "v_yFPxSn69pcc", "v_dpS_S4Zi2Po", "v_CtQ25XC45As", "v_90SltIDizo8", "v_fZQclIXmRHE", "v_mr2wnh2GwL8", "v_KoqE2gPCLe4", "v_45gAK3x_0ds", "v_TotbMcWIoyE", "v_CN01Gm2Yc4k", "v_GMHzZXAQzIA", "v_iaXlCCgLBdo", "v_MFow119nrOk", "v_hwGvU9Csz98", "v_RpB5_XYoYhk", "v_xQH8YS2_NxY", "v_XLdqEn8pqis", "v_ah3tGziTbds", "v_pbz8c7TAlDs", "v_JoiZmVQCLCI", "v_DRSH-_Ye9eE", "v_wpxozv4Yois", "v_CocYQOgnegg", "v_zwa44U585FE", "v_mZ1Di2gg-I4", "v_2-1MNxfX5Bc", "v_oey6DFvL9Xk", "v_cIpBpGQ0XTI", "v_2ooY3GqZieg", "v_itgR5a-hH_o", "v_A_ndiCY-rDc", "v_WaVrNbTmbU4", "v_l4C-l6XeNRc", "v_RllrUfp2EIU", "v__ye90Ou8SnE", "v_j46ll2_jR7k", "v_NBawYEfglow", "v_bwRsZtPzipc", "v_YigV1ARspVU", "v_7RESODKApso", "v_69X7tP6p7E0", "v_gt2Sp_iG2hU", "v_mkEME_iWi9o", "v_4JnXF13ktSs", "v_90vop6PS2Y0", "v_UFVeN-ThOwU", "v_9GYLUAFgCXE", "v__xgGaxc1jNE", "v_UxlSiLBleX4", "v_vBOejU7dBzY", "v_Sjx7K9Ybx9Q", "v_RGMSc1tfkzA", "v_ToLMOwlrgm0", "v_GuwWFip-AF0", "v_9Xrw-WOipSI", "v_Dk3DiAp2yAU", "v_tScqYRQ7zyo", "v_vCeaFAiokrU", "v_ZblmMtkVXIc", "v_feY5JrgSpzE", "v_f4UdgFrorCo", "v_1nltPeGC5ZQ", "v_7phIVBx1BzQ", "v_ix40OdQd7iE", "v_c-C_9InvwKE", "v_fG7iLOObw30", "v_W50sQxSWDwM", "v_wuknZBoyMRE", "v_wZZ1W6D1nwA", "v_y9bLCC26MGQ", "v_cdpPn-7R3GQ", "v_bYUmtLBL7W4", "v_LNLsmdVMCmY", "v_74AJ-1e1qGA", "v_e2IL0BusPNM", "v_jRfTdoqG7Tw", "v_fU-kGMQ68jg", "v_kNUpypAppjk", "v_IeTMYNbQSp0", "v_MHo5kioyrFM", "v_u3XYsINR-y4", "v_ayXuNcjC8wk", "v_LRhkbJ9dcP8", "v_-qcPtBHelmc", "v_0KTued0g034", "v_Z_hwYD3_lBY", "v_ZcgahXg_ELw", "v_DQLotF3P9Fc", "v_xqI9M6QiHws", "v_Vn4wrgBpgP8", "v_shGGt9TRlkk", "v_Snq0l-gKpWo", "v_6-cHUULLVGQ", "v_F30odTEdsxo", "v__ZiTTLhXjZQ", "v_8_jbsmj5Z9w", "v_sF859t5osSg", "v_nS9PgniAQAE", "v_lnHdEtuXU8w", "v_QQe2n2yjJuc", "v_q81H-V1_gGo", "v_jNJnPpIvtTU", "v_pRkJ_9zq16A", "v_zwFxq1MnaO0", "v_ZY8UyWtoMWg", "v_9KPRS9y8Fvo", "v_r46Vy3p19a0", "v_odMI0DGsn7k", "v_sdR443ncw-I", "v_a4-5QFOiAiw", "v_9q6wWG6ql4E", "v_OFKGyZxazQk", "v_oUQPIZu5bVU", "v_5nOc03oiFvk", "v_cFzo-Zgxk1M", "v_G_US7iMc6Y4", "v_5y9Lw8--ulU", "v_8c-s3TKrtdE", "v_ywsH9kD033I", "v_UI98gtpg7FE", "v_--6bJUbfpnQ", "v_-u2zAMnrCC4", "v_B-6kP8M_GmM", "v_NQOPahBcpSE", "v_993xtlhuVII", "v_656VWQU5dgE", "v_55IErOrgQOA", "v_VdY1Shdks6o", "v_tGHLUWWm_zU", "v_K3sJnHGHQHM", "v_I4uZkBmE5eM", "v_T69Cadlc62E", "v_32H1n87WgCM", "v_gxILsv1RTEI", "v_LZEiFNEAyyw", "v_oc4v7GPk05c", "v_f6Id4KERnoI", "v_AT_pPlJTiyE", "v_-cJova7MiO8", "v_mSyfGQigb8U", "v_skIP_U4EYDQ", "v_tzwIHzuzG9c", "v_uG_hgODoDes", "v_mDaZqz7lB0o", "v_c6nEk5N4fSU", "v_j55LAXY-T0E", "v_sgFp3HCSgCo", "v_Mk1gOZ5EOUk", "v_S5zweEQSnho", "v_fdHpRUOSi28", "v_PDjtB578yRk", "v_QoTM5tmcJeI", "v_bfBTnUiGVUo", "v_B5Ea3Bs8hC4", "v_1UIathRb404", "v_mV5DfYFg4H0", "v_FJ7yrh2UiQ8", "v_5ssP_EapV9Q", "v_E0niuPtg16o", 
"v_39HCogCoD7Y", "v__RCe4Q0p1aA", "v_PEpfA3L4m20", "v_1gp-5iOIfVo", "v_DEduSDgovOQ", "v_TnvAN5iwpIw", "v_M2wdIwZMNm8", "v_RzMKERQ9vOU", "v_oA8ZUG1y4Lc", "v_bmoS216hsoc", "v_oO1g33vi4hg", "v_ZjYttT9itfY", "v_uFhZhnlYKRw", "v_d_JH9U-UI3c", "v_x6Gs4PINiiI", "v_Dh3bLRYJkiY", "v_mmgoptOJM0s", "v_9elfMU_LRKc", "v_cIaqen3kVIA", "v_AxaksczuL80", "v_0yGGccaHMnI", "v_PJ72Yl0B1rY", "v_DQ9EaCSFwGI", "v_qUFPq8D0jMc", "v_G8gTBLLf8Bo", "v_drzTgrfN19M", "v_6ChRD-1NwSg", "v_dDN37ufNu84", "v__3xMhj4mbsk", "v_pnFRC2_HPrE", "v_W1FmiUTYt3I", "v_qMj2sCoRHqY", "v_C0MIMsY6okw", "v_DvTZ5mmF8NM", "v_CAG75_XxmEE", "v_7tDDXbiQ8AI", "v_8Da6w-Eg3Ko", "v_v6Ui5kgi2OI", "v_MYFVsllwDnc", "v_iAPv-QSvZF4", "v_hsI_BHN5h_0", "v_E1AVyl1RwF8", "v_19LxLS1_Yn0", "v_xSIh6JjAR_Y", "v_BEtftLo6NKQ", "v_XCJ2StGMgW4", "v_MlnK2sa7mm4", "v_9aRUmbcYxUM", "v_5oy5Yi6fzJU", "v_SSTom962aPk", "v_2VoWT4gnQDg", "v_bOBQLGfEeyg", "v_PT4x_Y5lu_g", "v_24vWSTx6N5M", "v_AJ_a4fE-rR0", "v_e6mpdQ3BFhA", "v_GR6Ul2pD8_Y", "v_FGFPyp9nJug", "v_yG4C_s7ItA4", "v_VGvjsCblFY0", "v_lKKimizxQJM", "v_cHYZPYLwvks", "v_xyMCaug7LXM", "v__n9eNF1WaFU", "v_WaFDgdqY1DM", "v_mzbhfWgJ2sU", "v_mFDC1CLt6B4", "v_EbQJuDQdW8U", "v_iMATWwGyAUM", "v_IHpBwsyMT9Q", "v_5TMKHLOACYg", "v_QKEFacWrn_8", "v_sQwx_m8Vghw", "v_wHxyzVcKq0c", "v_q0KrlywYHM8", "v_l9HcwQPNvWo", "v_BSdXxBOJ12A", "v_K5wPwCFVkhU", "v_szl1InYab_k", "v_kl9xvnAKfdE", "v_2q_4I3ae0J4", "v_vYHtmvftHoU", "v_FZk40J_drws", "v_rmzMfd9ftU8", "v_-jNouTszLJ0", "v_Oent5pguFk4", "v_zRBspE-uJUo", "v_x768VAsOQSw", "v_GIy6ZbAooOQ", "v_NGBaYycOQT0", "v_WtWw-GNpr4E", "v_U9pnR51t6As", "v_dXONZBWOKHk", "v_WEjMCo8OfjE", "v_pLVCuSq560Y", "v_ZYPKueJon34", "v_2ahuZDlObAQ", "v_XX2sXEmR4BE", "v_o3yvGAz5IJ8", "v_Jz7bt59z6Qg", "v_EInkc1uEX3c", "v_kxfOrs5ZWkw", "v_BCdt22s9hlU", "v_I5Q0DcmTs9c", "v_n41Ypwpn-P8", "v_z-ttrQ38mOc", "v_n3v9Znovl98", "v_HQP20PGfwYM", "v_YYmx8EHIjAE", "v_qH3HnhEaeok", "v_aN9vCyXMbb8", "v_5C0G3BQ-Nds", "v_IK9kE9IrcOM", "v_15npAlupNU4", "v_Ir_Ul8FaXs4", "v_BQRidRi2V1c", "v_plhiqYw0P_g", "v_-UWE4jXuLoo", "v_yeLB4QXA3NQ", "v_LAZHNzFbDNY", "v_Boa880LnJ3w", "v_qBqUu4_qOnU", "v_dBCiKzkJogg", "v_VNUVKrN4ndc", "v_6xAe1YVbxuY", "v_yweAN9o4QYI", "v_TbLBu2TDey8", "v_Aa33vHLEXJA", "v_MVzypK0eMKc", "v_RTM6iJxc-G0", "v_4Lu8ECLHvK4", "v_IZx-EMbylmM", "v_Yi-0wjSu0E0", "v_dB50ZkOlDzY", "v_gXdFGYPKClE", "v_z2qG-TOSwqw", "v_TUMk0wpBiP0", "v_0LJ1mSpqGJg", "v_fklBsM-H7-Y", "v_Vkr3r1Cd0mI", "v_pk7LcugO3zg", "v_cc9iCNPSiKc", "v_T7fzZX0qKKQ", "v_Pfc7KbwqdYk", "v_MbEtgOmOY-4", "v_mk3srKjFB3A", "v_nQQ-tcG6wBA", "v_FLJzzot6F-s", "v_2ji02dSx1nM", "v_ZXlJIrRiXrA", "v_ivWTI2J_UnY", "v_e6J_ygZ779A", "v_KUejIghF6K4", "v_Zw4illqWzFI", "v_JSYv9uYZP2o", "v_lGWAepvduTI", "v_IWhEUNOUIyc", "v_BSsXKG9dFHI", "v_QJfuxpFMn8s", "v_nVHL9qP11aA", "v_td15Nx9J0a4", "v_lSgkR94_h8Q", "v_o9ghRI_Iddk", "v_fgoXpih2Kws", "v_D5A6eBnKmD8", "v_1y2aqd5HQlU", "v_m2hiQ9EOUUI", "v_a74RMGL_c8E", "v_cROJALtLB1k", "v_QWqEi91fWOQ", "v_cZZM3bgmXE4", "v_HM_rHjh-wqQ", "v_9VRLj4IfUzY", "v_hfZQBDePOOE", "v_YQfJWGJ75Pk", "v_KLr1ZVJDFDs", "v_mzGbmHjdCM8", "v_d-eoNpp8mNM", "v_0_-Q1zOC3Kw", "v_GOZ305xZvz8", "v__15t4WTR19s", "v_r015El3onHw", "v_bI1L2D_erOY", "v_hT_4wWPNYxo", "v_SvDnZ47J37U", "v_iAIl5eawd6I", "v_JFBd-R1YuXY", "v_doZb3RlLSts", "v_6jTH_gFx6Ik", "v_cbR34GknrBs", "v_YySTmiavdMc", "v_EYoyxe8hd3g", "v_C03QJbrKzaw", "v_Ji3qvOdmOZA", "v_akwJwcvfjLA", "v_Zt8zZhMs4Es", "v_ZOQSDsJYXIA", "v_zBm3FR-CCI0", "v_Q_LhL-t0Yls", "v_7A_NgDs7jZY", "v_quoyW7FZqdI", "v_x4Vk5wSH7xE", "v_vutxJfF0Rlg", "v_NvOo-wtEPPk", "v_Feo8xSjY5A8", "v_8IJJGK2td2c", "v_hCJTKVzkYFE", "v_HpJ2pr0ykqo", "v_CRdgzvZxB8A", 
"v_hBjVRKwCUNA", "v_73LjSLUZGZc", "v_qXD7myRvw0M", "v_L0Fdx2r3qA8", "v_FSe9tVYHgBc", "v_4fEMDQnD4Xg", "v_vL8Hy6lcnF8", "v_odbjmsyfJe4", "v_f0On10HA3HQ", "v_l9o9R7UcPuc", "v_nrh2jDsmeLQ", "v_IWdJF6lBSnM", "v_KiKZEKwn4Aw", "v_XJmBiSBx7Ss", "v_Rj_SwlpOhNk", "v_fgeW0L2acbI", "v_E-M2Cq0RNTs", "v_jA05XIX7Yh0", "v_gIgim1Dp8HU", "v_MRxC-Ygp4go", "v_41LaEr0i2Dc", "v_wnsy_i-IXpM", "v_fOuFF7dGPtI", "v_kFP91VjB1AI", "v__HMwzNA9DNY", "v_IQ4SUx8ythk", "v_y7i-jRmrwnI", "v_5JlwYD_GChY", "v_BBRNbo8c8gA", "v_a1WhnMcTbrY", "v_Fv1qhPABYk0", "v_lSTqYESahrY", "v_2lUqeOw61QY", "v_2wcD0wSzB5w", "v_tY6UFSLtIoE", "v_JHYMG87h3XI", "v_EolA3Rd_Vm4", "v_gSkE0KCvves", "v_E3UCEbGZmz0", "v_ncgzVLi_hlI", "v_UqSjGwxBuqA", "v_FPv0qnoQbq0", "v_Htp7EK8IB18", "v_IN4nGNF9gi8", "v_fDPNV463JuE", "v_kuv1yEeNQzQ", "v_kAQML4pRtck", "v_5ya20wcGE-8", "v_oobYvNJU5ko", "v_DAv8CEings8", "v_crbkEVcbF2M", "v_kl7qwEgYLZU", "v_bVAUJAAg3TM", "v_3cQg4XOkC5Y", "v_gVKgXyKh4BQ", "v_naCGjbEz1T8", "v_3OcAjx8e4LU", "v_-uR5-jYe0Ag", "v_s2ra7HNzIF0", "v_g0upuaWM74M", "v_zI6PsewSm7w", "v_Y9xPzIiy6mI", "v_lCX7y_KAihU", "v_sGGnEgCnEt8", "v_Xc70KHd4zhI", "v_1X4hgrBjw-U", "v_ZHOPn9lONHA", "v_yUCSKSMVrPo", "v_Fb-t6zr7K5c", "v_tF4Tl56ntnE", "v_Aj0Pd6snB-k", "v_R7iFa9OpoTY", "v_vvk6f13VO5c", "v_koEfnIoZB_4", "v_gd7SO0TQ-sY", "v_dTkMZlj7jFU", "v_uM3RiCL0g2U", "v_ez8ram5yd70", "v_7LimgSQsHm0", "v_x4c_wI6kQyE", "v_5qsXmDi8d74", "v_KzogfJrOqJE", "v_WROGzgOpPXc", "v_MVxXCu4zxSM", "v_0PmrImNqA2w", "v_hYRNSJwhVPw", "v_sa5ZuxFDZNw", "v_SvIUXZqy8Hs", "v_Wrbf7c58IuU", "v_phg81-nhqH4", "v_WRc1Jv1j3nk", "v_6F9C3dIU4kU", "v_1JKgr3KfoHo", "v_UALnEw4XhTY", "v_-U4lNtzVQ8s", "v_B67jaG6qKWE", "v_CvmhLCrOjhM", "v_NdnosxA2c5g", "v_ooCciCGrdcA", "v_z6U8CyJRNXw", "v_E22gU_8tafI", "v_ekbZecn088U", "v_eeoQE0dbA6U", "v_E5zIMqTj4nc", "v_HYAlS44yzdo", "v_ripbruSSD8w", "v_QHn9KyE-zZo", "v_9AOVI0OCZqg", "v_u9ec3Exc5mI", "v_9PxPcJS47js", "v_CnrvRF_N7fU", "v_b1QkoG9hxk8", "v_sf2zGT5nN04", "v_QzbZxKJ-YBY", "v_yVp99wxlW90", "v_ofZURf7w9wk", "v_Q2jdtN4-RE0", "v_u8ykXBc2Efs", "v_rSxO9uspxT8", "v_pniQHSjY7dc", "v_xXj-oQm-NbE", "v_qVqlImNflY8", "v_rxfkWIGZtlQ", "v_OzXD3WO6jrs", "v_OiL6Aj0gC14", "v_70bS0DkAeDo", "v_Yi3xUQcaOnE", "v_CMYeHWoB1FM", "v_oA_uJ9gLvUQ", "v_lbtW7nHTnwA", "v_yVE4t-X5b-M", "v_YDNgm6ufrJc", "v_Ih8bPM3p0rE", "v_jed5hUKCCk0", "v_nezTU6Bq5hM", "v_Yezk4k2E5s0", "v_7ih5UMIU7zE", "v_nXNczyQpljQ", "v_cEVHZc_uT7c", "v_VI2qAFwvPSc", "v_0czF2CCgq6I", "v_V9LudLaWGOM", "v_rZu5ZJmAlbI", "v_xJ23geP1Hss", "v_RHb_nF11Scc", "v_x2xC5lm0cZw", "v_YiBenqCKGcA", "v__8aVDfNQtq0", "v_kPnqo24kemc", "v_le1aEgEms9Y", "v_kj8L5yu-fGs", "v_knnQ99kDt8w", "v_LPV3n9LeQ80", "v_zto8JvkVLVw", "v_0fsMeZoZzJI", "v_OD4MrhX85-M", "v_Rd9TrjbCkAE", "v_E2Vd-sOC_ik", "v_Q_32kySHzCQ", "v_gwZleaX_ZR8", "v_FrV8r4l5ZUM", "v_u9YrRYp2t3I", "v_-TWiYyvt2Ec", "v_NLCNBK2YJQU", "v_vKNsvOvC5mA", "v_8VPjByN_v9w", "v_cTioh2vzxGE", "v_f4IL30BPe2w", "v_m9CbLJdYqHw", "v_1buoiCgXG1Q", "v_B_PhHrBEeNI", "v_RpyIg_j4I3E", "v_vBpYwyXfE0o", "v_48zOi9j1E0A", "v_dJknA-jTNGc", "v_jVC3DZdphYM", "v_wLKePf07V14", "v_zQVUXbyCV1o", "v_9-yueOtwiL8", "v_5GZNSTv1rVs", "v_PVed6JEd3ZM", "v_XnvaW1HQyg4", "v_J0-OVQ-JB5g", "v_2aHetC-N-P4", "v_9uxkazuxmDw", "v_aEpRYY_wi0M", "v_5Wp2dxIAocI", "v_73AGD3RWPEw", "v_2MRR5NxbO9k", "v_dFVX_2UQ2WY", "v_ccfffP3pXrc", "v_VSeBb4e9ysU", "v_mKm75VWThAI", "v_HkzMA1jrm00", "v_NWaMWZUuTZc", "v_XYW6F_4qKJU", "v_bZF4nakRNF4", "v_hW6aZXhKl9M", "v_5UlxCwq-LOs", "v_YjxjsP6A5H8", "v_6hNV9oxC51k", "v_KwBuRjh_v9M", "v_Ez5uEh7YyIM", "v_uuFJdgTT5kE", "v_1hB5jVAhSDE", "v_esTcWwmykKQ", "v_hPV-Z73KXak", "v_kQ7ensWEW08", "v_TfpCjzGqA7w", 
"v_n1iu-AlcS-Q", "v_tVC_5_SgseY", "v_aQulBdlcGNU", "v_wmrrBnxbHjk", "v_Lyaozxv4_qU", "v_yCcqJnlviQI", "v_YBrcJxnXuVU", "v_Xjw9vUwILOE", "v_ZPVrC5185NM", "v_m6yPz9fHJnY", "v_ZfXkzv-hNlg", "v_FQkvwPpDomw", "v_cBAlXvu38dg", "v_xabaKyhx7cg", "v_bG55LSFBA9M", "v_5wBo0Gd81-I", "v_2vAaAy_WC7Y", "v_K2Pws9z20Do", "v_lDJpGI4BZ8k", "v_XBBT8UvESiE", "v_ovq0Fqbxt1c", "v_bCEdkW675dQ", "v_hUynCsek8I0", "v_JHITVq5zJOM", "v_eRHbpYeYtxo", "v_Ey2SmPzJTKM", "v_LJdI1neOr2c", "v_foFFu7bY5ow", "v_oCicjtc1t9Y", "v_M0sa3xWhFGo", "v_OaFYMXKxTbk", "v_lt--z8nFIT0", "v_ksNvNH4fpdo", "v_VYuQAfG0gKw", "v_1IhbkbuDPpc", "v_W3TQnn0q9kc", "v_6j-H-tIjJvA", "v__Wag6CT_0j8", "v_gsJ953MHtpY", "v_-OLPVREPy6Y", "v_1ftLLKrC81s", "v_baSx0q9LKg0", "v_qx1FNJxiUuE", "v_firp_OhUMPc", "v_Ol7JKNItQC4", "v_wN2XnDS0aGc", "v_I6IfZiNmlWA", "v_0KqeKi2CBqg", "v_Sh8r9g_lp7U", "v_L1lXij7Fyvo", "v_hRk-3fep5WQ", "v_ScKbopywnvM", "v_ynUBEoobKW0", "v_hltWAq_Odxk", "v_MiOJxYa5Nt4", "v_GGv0sCOf_tM", "v_P3kWD8Oocio", "v_EQajiMQAW74", "v_00ZRoqhhb8g", "v_IqXaLlFSWwc", "v_cY541XSdz50", "v_hhk7A9gJcu8", "v_d3RF0qC6RJs", "v_geuUVSJyovM", "v_vopKTwCiHrA", "v_FU0EPNGKsv8", "v_Nt6cha3hK_s", "v_qkk2tK19sx8", "v_sRN_crwj3B4", "v_UJwWjTvDEpQ", "v_gSH5ya0pfko", "v_OwchMqCYaF4", "v_iddZ6YIWLWc", "v_4Sf9C_vtYIs", "v_BrgYIg6UXhU", "v_6SOluodeJ7s", "v_yjazHd6a5SQ", "v_FWbCX1wBVoE", "v_vD9oh7NZ2PA", "v_P2hrv6QzDPI", "v_NgG4AWP1F6Q", "v_0EdDWY0Zuqw", "v_633ZdPm_GjM", "v_J4hnBPgwDlw", "v_UgSLUt8X1Lc", "v_MOvLBw1EzmI", "v_K6Tm5xHkJ5c", "v_z_ExqQ80T5g", "v_3pjVV7A6Apw", "v_Xq9ueKle4fY", "v_VUvEWwghANE", "v_s5y4xXcphcc", "v_YfcxIgsqs5M", "v_yHaTlDD-qHA", "v_Vncj0EkAGio", "v_SSoHwNbASQQ", "v_tghS4UnuWzk", "v_Tzm6TEManmQ", "v_AauepSs1kUU", "v_Jx4GCjGARqs", "v_t2DdSm_MGXo", "v_yACg55C3IlM", "v_4yZ1agUX004", "v_7QxUtHqQdbY", "v_6LWkrN1qz8E", "v_x4QVVFhamJ4", "v_s_QH-5G33Fw", "v_uhnY3lZ9ZCI", "v_W_ZNdQLFmAA", "v_hmb86jpgWfE", "v_aPjbJ4ZNcVQ", "v_CfDdbeAk8LE", "v_PCoxnf59j5U", "v_noKDv_a8u-Y", "v_CHMk7efu1ro", "v_2rHsoF35eQw", "v_NLuNMeYBeoc", "v_Uqte3S_ErTM", "v_Otm5TV4XI7w", "v_t3dHI5TeY7I", "v_3JrxcNxNMU4", "v_mio5dnRbo4w", "v_UH_z4C6sv3E", "v_87pCIcWgwVM", "v_gmnwqOPcOo0", "v_SnZnAVuMn4M", "v_r8AXq1Q5bn0", "v_laKctaVegPg", "v_0JgcRWHCi4c", "v_CR_79ZjQG_w", "v_IEtCboPbTXI", "v_1rdecGieY-M", "v_3La7NPOBVN8", "v_HW9SFCj0dVU", "v_c_DQ7Y8ZRBQ", "v_wTBJ4PRnU4k", "v_hJJas1Zat1s", "v_ebmi7XJA8Oo", "v_WZeMQ-5dFlM", "v_qNxLTF4Q6yk", "v_c1RR1cmS9LU", "v_sfCfrWpHpu0", "v_oIEDMaMo7UE", "v_jmSrbVNKF6U", "v_Rokj1EIAHHk", "v_pPM1jC_NlzI", "v_N9xp9VbpklQ", "v_94bJbSWNw3o", "v_POYg9zju63U", "v_3TsNntqwbSQ", "v_62s1ZSNLJ6g", "v_2g9GrshWQrU", "v_7X_wgaRaJYQ", "v_Irg5qYkjJoY", "v_qiRrR2Nj2SQ", "v_v34qczSoYLo", "v_VLjfzOpn-AQ", "v_wZgBJlWqWWI", "v_keaMf0raxF8", "v_FOm0uKw7dXc", "v_xIld1Pt1QGs", "v_r-_JFgDJRrQ", "v_7mmXZeOJT8w", "v_niqc-dW54ic", "v_Fmr6mPyvE-g", "v_j6HDZh7W6Z4", "v_9VGbtQrlcN4", "v_ANeDHelwzK0", "v_3qkNnr1_78I", "v_wt0XC2EEh7Y", "v_iZk3PH8ghlI", "v_uBmUiouilQY", "v_tUCGJk6aSeg", "v_WV-Sf5-aCcc", "v_I6nuNE-Qibw", "v_2mAKLFVhV9Y", "v_xKePBw5XZHs", "v_ffUtqOyJ7fM", "v_VSONGdnvKiM", "v_VrNHEv6aR38", "v_4cqesj6HwTU", "v_5asz3rt3QyQ", "v_Kyo1nkGKRqw", "v__b_9BQvJ_v4", "v_hSlydQ9rJuk", "v_a1nRXQZ6-Fo", "v_3nrianTc060", "v_dMjOeGJBF9M", "v_5WWvCSCGXmc", "v_HImOluKZgp0", "v_cKMGacBQX0E", "v_a6kF1_4rs2E", "v_bUfhRJjHNoU", "v_BS9UPqgR89E", "v_7rpq2RXAoKE", "v_-faeAVsbBG0", "v_r8hXEpP7HH0", "v_9Rvz-oIAn50", "v_BWqsgYhgUbI", "v_Q2wd5aLtZ1E", "v_PsddM2OmOGo", "v_gEYutYXODs4", "v_8AQopjogplo", "v_Q-fUXywUo7o", "v_i2dFL7sGf9c", "v_0vQs3ztG7vg", "v_2dA1fAU3o6o", "v_R0B5bBr6t8w", "v_tnt6Wpv_kHc", 
"v_6PnPu_cLCvE", "v_yvOOFjG-FEo", "v_NjTk2naIaac", "v_BWsjIONsXlM", "v_lcEGoZAC7GI", "v_-e9e4ke_wJk", "v_mzVJHw9Jrb4", "v_Mk9PMED8K4g", "v_ZWudhOEyE_0", "v_7hfaWQgcDyo", "v_Rnux3rCLdmI", "v_Tsht1n005fI", "v_5WqnKjOz1z4", "v_fykq7xuc3zk", "v_47OMV7rZrQA", "v_xMEwcb1P6dQ", "v_J7JLo0nQ5pA", "v_mkK9iEzRrqQ", "v_6b8h8ztnj9Q", "v_pCUun9uE3h8", "v_j7fPZQE3-fQ", "v_rrc9Ph5juXM", "v_yRgei7gpr-I", "v_mfJj5gBQg-4", "v_ZwxvczODMbM", "v_JmL6BiuXr_g", "v_b5NP9oI-urM", "v_P7UbKv72LAs", "v_lGKUEUBeo8U", "v_EDkYPikPWW8", "v_qxmrH20IA2Q", "v_wJYsD3_CS6E", "v_1epGZvRN3Fw", "v_Y4IsLkxb5CI", "v_l2MB-KxbVEs", "v_bHAzuAnnvcU", "v_vigHVj40dO4", "v_VdGZfI-8RuA", "v_RseCMmSvcPY", "v_rvcSqYeUZ9s", "v_HUZ9PuMm8yM", "v_BOVYcAeBxyY", "v_z5xZrF421HE", "v_EiXW33yuAcw", "v_NJjoTu1vS7A", "v_g17h49EYsJY", "v_o3Nuqg4w_b8", "v_NSMAftE6fb4", "v_yWEFVfX-JoI", "v_T47mErD2KeA", "v_J1fcLhB-Slg", "v_DguywhRJ7ds", "v_6rfFmqz6s8M", "v_s_XdqaQj0uI", "v_vuntaZJBcfI", "v_MxKuqpxmKKk", "v_n-BJ753InB0", "v_Ub88_ql0B78", "v_biyf6Q-xF0M", "v_xfhwYTFCGYY", "v_D-wP7_1A_Kw", "v_Ujm7CiWkOBY", "v_Zomv5zlkkEc", "v_RnShLAifVno", "v_tFiXLhbKdnk", "v_AVIMCVsLrVw", "v_-wXbBZDSIa8", "v_x8AR0FD5Jqo", "v_ZWzPz-LX9Qg", "v_MpqXCbsqVNQ", "v_Ggtcmy29TxE", "v_gzdasX0KIVg", "v_kW63TeJo4JY", "v_eUKMPNZ3NI4", "v_KU8VVtam3ig", "v_dVcnkTR5EBE", "v_rcDw6If4hjc", "v_QwnUZ-5JaOM", "v_uwLM5n-rYmA", "v_2_KTq85YQcY", "v_yGwevg8vwuU", "v_UKo5IFacUyE", "v_qoS5nkk7Rgk", "v_I9NukwdINyY", "v_p8UOE62POAE", "v_nUoN18FTeug", "v_Lo848n58uoM", "v_tpDhYD9e_cU", "v_N2WxAkVh-C4", "v_Lzvtnr4gT8Y", "v_PAiJNr97C6g", "v_gqK_jApRT5E", "v_0V8mzi_89Fw", "v_fO8b3U8fuGo", "v_H91Dm6jaUPg", "v_jF33TElZc_Y", "v_nqB4Zn6UWdk", "v_WxlJBRUU1A0", "v_WFbUBMgOMn8", "v_CjoAnld43C4", "v_TDZsE3yValQ", "v_wPLEmDBfgok", "v_sP416nSD4xQ", "v_hghdjiQlYko", "v_IgAE9XJVIlk", "v_kXbc9D0sF5k", "v_l-gHWS0oXiw", "v_vqqoDYma9F8", "v_h4SzYWJUqVQ", "v_dR3hrw9dVdw", "v_n1yugby5jC0", "v_BkBbzC6nIvA", "v_rZGxJN2AOQY", "v_bQGegLwVc8I", "v_PLqTX6ij52U", "v_dnzcNZBtUG4", "v_vjUx3k63oZI", "v_xhBvsWa0PCs", "v_jeaaS1NK_d4", "v_cQYAi2drreo", "v_0NgQr2-AieQ", "v_z0tiCqKa4cs", "v_ibjvKk93__g", "v_QBXswoKU4S4", "v_x-rGfBaFQek", "v_FLImHIKzzm4", "v_cw0HRDIQ10I", "v_CPnLc0MtBYc", "v_WN5EWPfDbog", "v_Bule85koN3o", "v_yINX46xPRf0", "v_CG-itBlFOzc", "v_LURZ8QDfowU", "v_VWsyA_RJIzg", "v_E50qKeeMbgU", "v_ng14GLT_hHQ", "v_fKDl_CnA8nY", "v_-MB6Wxglgzw", "v_UMhZGJqeSuU", "v_gpJ7veSnhUs", "v_z3xkE5Ox-2A", "v_TspdPLMqTx0", "v_TjR436qaQw4", "v_6Z4Qg_fNo0Q", "v_H1bmoIihWwo", "v_ucHq8B0-1BA", "v_qAZStAHJ3CQ", "v__yFOkxb22RI", "v_E7C91KoML-o", "v_hL11sP4Hlrg", "v_vrXqd_Ct298", "v_8jyqeivzs2M", "v_jYU215e-dKg", "v_9UpVdljXQ4E", "v_thhFfqcOfJQ", "v_vKCxWIzJTm0", "v_XuG2V9gDD9M", "v_ynvCxrj1UNg", "v_l7gWFOa7FnI", "v_RPr1ZbIGLwU", "v_x4f4jp_eHHo", "v_id4XtnLsw7c", "v_xUUmAdQJgjg", "v_ll91M5topgU", "v_lRB6XvAm_FU", "v_ph7d2H77tks", "v_Ey-0Q6VNJaY", "v_akJbB6LWP34", "v_ibKFezOKsBQ", "v_l3EBfLkfAX8", "v_wtoKUYBw9f4", "v_w9CC0wf27zs", "v_E4n0KcS_zgI", "v_2Mh-OomUNpQ", "v_zzE2VrQMvbc", "v_YcjLd_XBK5Y", "v_43R60vMRook", "v_OuVncktxGw0", "v_z85nM9V4058", "v_bc-DycGxV9E", "v_qwxmpiaT-kk", "v_Liha_xwiwtc", "v_-r_bvqjYjYg", "v_dJ0kxnyVzFI", "v__GQaltSDMAk", "v_nMiXX2jqI40", "v_Vnj0j648Emw", "v_OMq736aZeV8", "v_6l0JqBhldeA", "v_8Ny9NjNpQQA", "v_dd1LE0m_KVg", "v_bDwGZOk7njI", "v_V3LvKGRzkeg", "v_A80eMz7rJUM", "v_Yz7FjWlA6U4", "v_wwh94C7NB1I", "v_ssHXm1LqovI", "v_LSFmrUdURCs", "v_7AsHuXeoSpA", "v_u-X4YO91V78", "v_EW3zRMVjkoU", "v_9abGikdleAU", "v_fE3j74_s4KY", "v_-VKGwqL83w8", "v_9ut_IDtfVzY", "v_FqiMsRnatP0", "v_3UrypnvwAOY", "v_ew7XlNRrKyM", "v_mYfo8LhPB5Y", 
"v_ZKkjR2VTb7Y", "v_VFqkLp5mzBM", "v_gYqXtgtyFnY", "v_8TGG-FZx0cc", "v_z7zj8stU-kw", "v_PagM71op4HU", "v_jNJg1TYq3c8", "v_0JHOEr3YdNM", "v_Sul7NDmB5HM", "v_w1FFMG52FZE", "v_D7Oc3SLX0wo", "v_nB50V0OBto0", "v_O9phka35v6I", "v_txyXUXWybt4", "v_r8DXz1FOb90", "v_PrR-kkpy1c8", "v_bH-S32gOlCA", "v_ISHKwbnOzXY", "v_cXY-ONmtylc", "v_MaJlWFemO68", "v_xKLnBh0zmL4", "v_7tlXgKBTD_0", "v_-l16smV_uYg", "v_V--Xz2FtJXA", "v_aUCdj7acYos", "v_-MldnTjJ-zE", "v_g49F9coR2VU", "v_FsS_NCZEfaI", "v_sY31L_r7dsk", "v_gXp3KSWhf1g", "v_JXMD8Obk0yg", "v_BQ_BJNFGmTg", "v_Ht9WSqhFD34", "v_sUL9HAplalo", "v_bOUtD3leN0E", "v_UcVbSLmILaY", "v_kfiF8A8g7UE", "v_wEn3nAJHhtw", "v_vzrZJX-Slzg", "v_sGFbsMKkoYs", "v_6qojVSLbyUU", "v_fZQS02Ypca4", "v_bQhCEXZwnMM", "v_JLA4Ck8_BRI", "v_GV_BDNmUiLY", "v_V4srMOGRlU8", "v_RAw8sshR51c", "v_E3UJv-NC1E8", "v_ISEbX4WvBW4", "v_6fWXqCWuU9Y", "v_bmWICdhvyJw", "v_zOGg5-Mll4o", "v_aDaazrgvjJg", "v_Br1Ty6PCrv8", "v_FNAt8Pew0HA", "v_7vgokK5_Pvc", "v_9WhPG89P-tg", "v__crwKCjKRjg", "v_YVxuIAwOyoE", "v_hSSHf_c1q5I", "v_oEC5UG-rBFc", "v_7-jcXxwqf5E", "v_JRs2MpyP0SQ", "v_OBb4013eIc8", "v_gSwjTXkXK3Q", "v_uqiMw7tQ1Cc", "v_-l18hJp8ShE", "v_41__Qick6tM", "v_iEZgExTrv70", "v_ShKrNPaSdhY", "v_UxR9fdD0Vzw", "v_37pnsj0hlZ4", "v_tfepV4CXF7c", "v_fPtKNj6jCPU", "v_PJgB6h-fImY", "v_hzeK-DdGOsc", "v_mNM-JUC7ZEA", "v_3gPjMvTmE2g", "v_-rCYwovSK4s", "v_Wzo3_EYrfAY", "v_GX1EjqXAszM", "v_q8c_0JTe5r8", "v_jExOw6W1I3E", "v_WU4ISFy651Y", "v_je5KvCND9xo", "v_8AsV0ojyUMU", "v_hmPeCPjaxAM", "v_nNldj5g7W5o", "v_4sm-tTbfamM", "v_iuQHLWWhSEY", "v_as7KugARkLE", "v_mpC_UTM1tWQ", "v_1uiEkwykOxo", "v_xDc407xoYUM", "v_i2X7z9ywHV8", "v_Rn5qprCWXFg", "v_Zc7uU4Qwolc", "v_FMYu8k1b_DM", "v_OlH5t7EKOKM", "v_iS_ms9ajumY", "v_u02UsNRxclU", "v_oY22VETX20w", "v__8HTgaTPFRo", "v_ai80XIxFqqg", "v_-Jp86pFKlsw", "v_FDLhpMkJwCM", "v_xmW27Mi-jbg", "v_movzxpiGX8k", "v_GoFV8lTD4ug", "v_ILF-93buuSY", "v_Jl2lDgcsvmA", "v_v-YKnFqX_L0", "v_r97vYbzloD8", "v_YNQphOFqDOA", "v_0Zg-7EgFiC8", "v_yeEe8-aYA2E", "v_s56ctLdnOdw", "v_R3MPcPKQYKE", "v_pCWlZ37fGEo", "v_cY3QbnSeu9k", "v_mjKcoY18QG0", "v_k0ruZZZ5Gxw", "v_94q8YdJoPUw", "v_7xpkFhlxo2Q", "v_RG98kemBdyg", "v_KzVRgHnpCOQ", "v_D2TQ_RR2Q50", "v_m49gj6Y6SDo", "v_AqTZd5HZKNI", "v_DN3v5LhGsx0", "v_CFBmZ1g16H8", "v_TIjwhYSIRgg", "v_o0gdMKlKLcU", "v_5QS_VBDwKzw", "v_mWNTl9Bh7kI", "v_r64pATF3vCI", "v_L36MIRUpcrI", "v_HtkuvF7VbSQ", "v_j7vUMNMB4Yo", "v_iSJ87SnNLPc", "v_suxZhXSVNKY", "v_HwRiUpC5mf4", "v_k_xDTGiDp9A", "v_rRkwB9EcEMs", "v_o67-Z8n-jEE", "v_dySzHZniFCo", "v_HpjomKhpIdk", "v_hFlDERq1ThU", "v_svG8RyP-OlU", "v_WPrlU-Im5Ko", "v_EI_6eT-0-X4", "v_tznMNEWglxY", "v_aYSm25veKTs", "v_AC9mml3mqps", "v_nKBjM-kdeeI", "v_8RntjHIwMNo", "v_IdEXShfpQHs", "v_B4qwjeJBk0s", "v_3X6eP273RoI", "v_45llr44Pu9g", "v_z1QgzOfUjow", "v_lPCl1ZYH2xI", "v_nLdRqOTb0Ik", "v_ymmBQHiNK24", "v_UjJ8yWaFNGg", "v_B0cb0B90Ubg", "v_optJ47P_5Ys", "v_ig867kFeLic", "v_En9FemmDusk", "v_QVe7NojAHjY", "v_QoRUUJz-PU0", "v_aS0wGPhD48o", "v_juiMCvZUYwk", "v_c-zbA4zixfE", "v_5GiIqXY__74", "v_ayDMt_8KajY", "v_WKXIl7wvlk0", "v_6fyIc1vrK4Q", "v_4SSbyJ6pMuE", "v_S8RXX1uOGgQ", "v_woUdHiRWKMg", "v_Mz-yz0fQ_Hk", "v_9dSOQrpovQI", "v_59NxymNdzBE", "v_URgF15eyQvg", "v_5VwGzOLPFAQ", "v_lG5d8bCHLM4", "v_ulV37d5wFaw", "v_s43eZJ0hy44", "v_0EepbsAtiDk", "v_ipcvgAb5y0U", "v_xftFhOCEqFs", "v_ChPzol03Hqs", "v_ekJtPwfLM-M", "v_pPrW3iW0DA8", "v_WT7ZtXsTslM", "v_kWN4zFblj6o", "v_Xt86M-mRxi8", "v_GkwkHQJifDU", "v_W8XwSNt8P5A", "v_u2329Chp6IY", "v_EFGtb9IDQao", "v_TpgtCuYz0RQ", "v_okSvWjK0okw", "v_iZUwLKd5TTk", "v_VuiuqKX8srs", "v_DZrCkQ2z-u4", "v_HatKNbfqL-k", "v_LN8UWHvoELs", 
"v_MOG4eTo4Q4Y", "v_hu714U34avg", "v_hJf7uOUiEFo", "v_ugDN2gDN99E", "v_JMlNfZlOyX8", "v_sSVG3g2iKL8", "v_eBlYGGmeBY0", "v_fgQ2HYMl3pA", "v_5PgDTLR7wFQ", "v_Mgym0F-T7Js", "v_Qu-Y2u1Xn_U", "v_Z9k8GiGjkZ8", "v_mi6f8kGVR70", "v_fs-goyuhTi8", "v_taOJ9kUiwgM", "v_2rgamh4uty8", "v_P9qhbSYblG4", "v_Yh6xzcNlAjo", "v_YAhHfaXnpKg", "v_ss6XN-JP_x8", "v_K1TizK5Sg78", "v_MzNI-qdQfQc", "v_IjULOynkK5I", "v_rNb4Jz_t9F4", "v_OvGxDaayPcw", "v_9nndNUHadcg", "v_od9EdcDcByA", "v_rAO-_VxIJng", "v_HlAjWgz7zZ4", "v_88TLZbT_KkE", "v_jAk-vBePtTU", "v__I1zlicAxpM", "v_cMuQUTKMc0k", "v_RZogaNvPuNs", "v_XKu57UKSqPc", "v_gwKy0W1xof4", "v_UPwDuuYlLfQ", "v_TcrLMpMA1WM", "v_lLHAzwAs_9I", "v_rMWCaPh9UqE", "v__roK9m9UOvM", "v_BOOX9aGlSEs", "v_e4bcTIoiMIk", "v_gGai6uu5Yjs", "v_kx0ZSPOOFJ0", "v_D_yO_40uREE", "v_vbnuIUgUVXA", "v_21qQL15lUNY", "v_NNfAlym-xh8", "v_0BHufmWSI6Y", "v_xnCw4tvy0uQ", "v_JGuVc7z_YOQ", "v_QCj7IGUGs2Y", "v_JOBSEatasv4", "v_QgjNH6sAziM", "v_NrlITLsd7Fk", "v_y-X0DjEHD_k", "v_p81NOkb2rww", "v_BR9dr2iOyNc", "v_nKPkHO9ajs8", "v_l-PDSOCk7z0", "v_R9qRR8CcSJA", "v_Y2UkP0rySHA", "v_fG0nn2IVdDM", "v_5SpWmZxECqc", "v_D1E_KJRxGvQ", "v_cCqjsuJa2vk", "v_Eb_9_Bcij0Q", "v_JcsnMUVBlac", "v_Fdzw3niNDYY", "v_LsK452h29ng", "v_CrWlXxqj4ac", "v_6QbIJ2pnXXo", "v_sxQbiXWFdKs", "v_lneRTkBTPwg", "v_jBFn08ZRKSE", "v_BSg989GP5ro", "v_djpr7UMlnSw", "v_Wt7Ca_mHbL0", "v_V6B8zFv1DdA", "v_OBDq689jDDY", "v_zqcJ0N_a6y8", "v_ABBA086Gmq0", "v_yl3bjdUZrmM", "v_mglEC2-MH14", "v_qs_VoH8fOhs", "v_OTm43dbEEuE", "v_seQE5VZt3K0", "v_dOUCAVnJLko", "v_A8xThM3onkc", "v_5vDPgcyRtOU", "v_rKtktLDSOpA", "v_CBW_uJJpmZY", "v_K2dU4-Rg354", "v_jVoj7XaUoU8", "v_b4DhjwkO-b4", "v_0w-3O0ZOQFQ", "v_KBMvitQaXzE", "v_vVvImml1A8g", "v_RaQE93FNLQI", "v_OaG9uH7BgjI", "v_6q3EIv2X8BQ", "v_g4OlXwjgwSs", "v_JE0xYYOp5_s", "v_qm8sJxsZ5VY", "v_xRuZMDClaQM", "v_Qci4EFEIZuo", "v_Y7gywSk5i0M", "v_1517CiM5c0A", "v_ccirM2NGwMA", "v_tBFX7g605Go", "v_Ovtfld_ZyCs", "v_4GrPMa_BE6M", "v_rgAALWYnRrg", "v_TDROfnEk0NQ", "v_rmoa-Ffel2k", "v_Wgh8e4V8hBc", "v_NFErgnaSRRY", "v_sCj-ME5RkLY", "v_dFgwKTH-FhY", "v_1jWMd8QaN5s", "v_aPzHheM0Egw", "v_WTfeKnRJ17g", "v_7kO_qcJEiu0", "v_Xj1R81SK_zs", "v_DfOiHMcrCbs", "v_HlhQ3-WOdgI", "v_pgff9mC5y3s", "v_zdMvd5Cr5jM", "v_zz3Mw8FMA70", "v_9pNfaRJ0K4o", "v_Gq8-XVrlAt4", "v_DYwF_1xX4dU", "v_HDVk1O78gwc", "v_ePaFTey15ho", "v_Lomlff9wClo", "v_fyxXJJhCGBQ", "v_dQR6VEemP24", "v_fU-OulK7lZs", "v_RYJ3yzxZB8k", "v_onBAyGhqubg", "v_q8-iXvYyCGg", "v_sGUkc9ajgiU", "v_snvSHNYvRks", "v_lL2XqxgNIeQ", "v_sZRUTtoxY_s", "v_P4PQ5tC3gX8", "v_W8ayZca_fAY", "v__Ew3g9PXhvo", "v_Si6LZFiQT3k", "v_GfiqDJA-qqU", "v_zzci2xZ011A", "v_H4wC2d_Vbog", "v_Xrjkjz1l4qw", "v_xe6-tTvxQxk", "v_Cg_jN5G1ZpY", "v_pMDFkrK0KRc", "v_fxbEiZrQQzM", "v_UYe6JGaUZzg", "v_r-iXUXMP4DY", "v_X82bc2v5kcM", "v_ranTpEJvqs8", "v_upoS4Jct7kE", "v_lrlUN65DM8c", "v_p3vqC_FFyyM", "v_KzxVQ19pRUU", "v_TaLEPzEyZ34", "v_Ey7w7pu5HZc", "v_gnmtsqvTO_c", "v_Upd7zpT6tuc", "v_oyLTgy93soQ", "v_2GACaR0GdD8", "v_MMVfzKCnpnI", "v_Eu3QFCldg0s", "v_kZB7yxzHOrA", "v_GHU3G24jFjI", "v_b6VAlwv45q4", "v_iChE4EoYG6k", "v_H-5nHSHwFOk", "v_jkaevzzYdP8", "v_oSoi5owiybU", "v_7DJDUzdw_I4", "v_6pnabYJdqxc", "v_sIzcPVbn0lg", "v_4QvpJ71d8Nk", "v_NGk3v4sKqdg", "v_fnKOW7tJA1A", "v_eSpPY2yMg70", "v_ddLFSNa3ci0", "v_nDJgThY8zi8", "v_tTEAlDsmZrA", "v_5KYUiMysyb0", "v_-7eQ2bHNPUw", "v_L5kxbN9wFAg", "v_4VAhZEpQsv8", "v_01_BrVxYsE0", "v_ICl9CT-9fKY", "v_94w7SEcPDho", "v_AFtFitXAFks", "v_xzmcOKHP-sM", "v_13Y47Uk_w1o", "v_S3EA0yDdaWY", "v_CQ0r8ldAKl8", "v_nEOpfvJ7g_g", "v_-TubttTNt90", "v_-ZDCHvzbnoU", "v_aObyxa8gdAo", "v_Rg9qviHZ3qc", "v_wd7W8NTi_58", 
"v_MjljlkQaHh4", "v_SibfKtVX3CQ", "v_KWhXvv1WtFM", "v_wu0G4yQIwKo", "v_t6f_O8a4sSg", "v_m4EcgRjCpi8", "v_ReKUs0km4X8", "v_fdd5ixvEXOE", "v_tqqWTxQ5-kY", "v_AGjhryYGVs4", "v_BfLrltipDDU", "v_O0KUnuhLwj0", "v_8e80cJTrJDs", "v_lol04SNoopE", "v_vBKIXqRd-eA", "v_HLZLkI1NYAs", "v_Hy8WbkpvUlA", "v_Ld2a5ogu9k8", "v_RihO8i98QJg", "v_lUds16WLsHI", "v_3-_Eld2NwJ0", "v_EZKrOWEKX_Q", "v_Q6uc1kl008o", "v_kRom61pt8zk", "v_XvFv0n2mJUk", "v_4SecbKo1iGE", "v_uTQyPHg8r0M", "v_ux3h_qEusvw", "v_rqnzzNYt2cE", "v_NHDjJ8auZQ0", "v_RL4V-Sx619M", "v_RD7AUdgtchE", "v_UodvUEkuVig", "v_vGZO5lM61D4", "v_Hn3-SRXssY4", "v_Ve37zGVerDU", "v_d7gbNqcKXps", "v_5-vAXCUN8X0", "v_Pp5DCsgaALg", "v_CdjU2OZri4c", "v_PyPu-6wATfw", "v_4MBGT228QiQ", "v_BDQHEemWnSk", "v_VcEW9F8TyqU", "v_LUGksGa4WJA", "v_7JXae2so5-E", "v_NNZKinEXYc4", "v_hrwcr7BxS5I", "v_XncWGxekE30", "v_w8kVVzMOC98", "v_Q7cgJD7-sEM", "v_AFdqkU6FyqY", "v_5vd8j0hKIgs", "v_hpZ5XnuiRPw", "v_PZ1FVhgTRWU", "v_a8dUtKcAunw", "v_y2jDV7tFUXg", "v_ZVIi4lPU6h0", "v_tAWTfutrwg0", "v_w1VJnYDYYY0", "v_rXwSSTGmvb8", "v_AzmaqkS88YM", "v_9bERRZ2eTbo", "v_Xxng1g1PrdE", "v_U40FhqwfBvs", "v_OqLUp37WKMA", "v_DVlMzGPhWO4", "v_Vre3tO7xV98", "v_2Iakg-Z-iXM", "v_lOtplLrtapE", "v_CSDApI2nHPU", "v_YzcgGHmfaKE", "v_MvZFYjs80Y4", "v_FbmK-7sZ3O4", "v_-hEr3ydGyoM", "v_HW5QhCSKTsw", "v_PKEw32TJRWs", "v_Eucw0oPrFUs", "v_hoyQ36EH1a8", "v_O_tZAD_opA4", "v_K9ccE4wrTts", "v_cQr-HSUKbsw", "v_-M-Dr6HqDhU", "v_ylo_0z8si1g", "v_JW0VZ5NoC8A", "v_EMDTvPUEr7E", "v_4ZoBfU4b5Ko", "v_2TEJnQzCPUM", "v_4r6fQ5RvuGE", "v_MAyYq3HilFc", "v_IqRN2sOQ7Mo", "v_j4iaeT5xIdw", "v_exCENNu1qBU", "v_DJE9nX2qKYs", "v_gvr1dpCpvhw", "v_AuVVP8q6tFY", "v_PSh-caJvSHU", "v_tz3zHV1Z5po", "v_RhOV_K2XzZA", "v_50nJ8UkOGwg", "v_j56eH9M0ObY", "v_hDb19ih3jAA", "v_El_q7DhzArg", "v_pOVICBn8QMw", "v_NDK0XQnsnmA", "v_mf6UsZuW9Nw", "v_XIMi2oydVB8", "v_Q78FBGHniCc", "v_Bh35Q9vNsSA", "v_kfwwya1qzXM", "v_c-8GvZKndyQ", "v_iId8WcbiKZI", "v_FUvUDCZxAO8", "v_D7WhCBcddSA", "v_NDyc4PZE954", "v_ML0XZMcKk_E", "v_cgPt46YiXNo", "v__WMRdq7yFpA", "v_e2QVdX-JdIg", "v_5dXi-tAGqbs", "v_5ptxyeHlcwM", "v_2-SPZIF5lPY", "v_ESecNZbZgug", "v_unFlcSwdDFc", "v_jHXqbgeq83Y", "v_f1kY1-9XR1k", "v_udpVICVTQrQ", "v_uyr3E9ZReAw", "v_Keuj_3QyLq0", "v_UR1e1MIRvvc", "v_J8B2dX3FLTo", "v_ZncidS9kQ-g", "v_D0aZaiBAHxg", "v_Uqs8NaPzHKU", "v_VDYSVR0HbpM", "v_n2wq_9TeNYM", "v_OXTQsO5abO4", "v_HNR_HofJ_Fs", "v_Zn84iOuIkDs", "v_afqUOlnLHX0", "v_p8MvTi8hJdE", "v_06r6DtoTtSQ", "v_GP2S0V5NiPs", "v_-DGsqL65o4k", "v_iMB_mb11KWM", "v_1UgjxeAPq_A", "v_o8qR72Ymru8", "v_fKy5rh-SoTM", "v_X9AnhFjdiXA", "v_fnf7FbZkL6k", "v_aLv03Fznf5A", "v_-lEsnrNNZFU", "v_-TddN8oBvhQ", "v_Zt9nALIsHPc", "v_tyuyI30cZ00", "v_UHNUmpx0nww", "v_vw065HaGq3I", "v_3FZ47muWIYA", "v_25Wxe9TQzY8", "v_t0XM3ivJYUo", "v_QOMvNgo6CQ4", "v_JTrwGfPJNzU", "v_3RTmWrwgKek", "v_gOniW-yEZ0k", "v_pQsk5XPTLoY", "v_sAi1aMHR89A", "v_LZxTeIeuqT8", "v_94wjthSzsSQ", "v_xYu5luMTycc", "v_yJSQmNSFlNI", "v_PKYg6_rs3LQ", "v_TRXLUcm2CuQ", "v_9hE6VRD3qXQ", "v_o_-a7AMw74M", "v_m1aF1CVo-s8", "v_tOCFOu8eOkU", "v_PllZQ09sBuI", "v_0_PdI-5l62o", "v_4qZckue0QU4", "v_u5ri43qbi1A", "v_gdmGZK_vFAc", "v_okvQJRTfGHk", "v_bJ695Pp7Vng", "v_iJWmjVjBNzE", "v_EXxckPa76vc", "v_38ZxXyECPPU", "v_o8ja3mhecQI", "v_nbcRj00xCKM", "v_vKC23-I4pBc", "v_zSeLjjo3KF0", "v_tMTvOaUYNeg", "v_GgAXP4FTFnA", "v_vmUbGiOyUbU", "v_qVuRcevXgMk", "v_bfk3xsTt0XA", "v_V4wwal5FQZE", "v_173d8EtsIpE", "v_-Z98HU6T7J8", "v_pnxgTQofPQo", "v_8gUKEh27AFM", "v_pnEYhDVXVJ0", "v_hbHkS0GAOLE", "v_EKfhRuD3x9s", "v_HcZ3irBAcE0", "v_9btLaLqX-Zk", "v_BCRFFkvfB_Q", "v_AY6QSTuHGRc", "v_t97xM9sY2yg", "v_bcXc6mKSEEM", 
"v_cyXWvxVt8qE", "v_JJyV1AIQj4M", "v_Z0mxEFOm_Wc", "v_7hvq4VqEGCE", "v_gXAMD_KxXII", "v_izZqZFVpW4c", "v_VeWdsZb5tog", "v_PoamN_DEInI", "v_MSPslSgkp60", "v_SwIxaPdYIJE", "v_Zjfw0n32DBA", "v_6EWzgWd72Cs", "v_C10_qXWxpsk", "v_jxk6KOLu5kU", "v_zAr9k1-umvY", "v_bXdq2zI1Ms0", "v_MdrK2uQ-GvA", "v_VvlJjaLwGqY", "v_CgaWju3yGc4", "v_yc9Bc8G7Y_Q", "v_QSV7f5XHohE", "v_QokthYjtPzM", "v_zGTqXydTuQs", "v_xAoQ6JisbhI", "v_uJ_QCxMDfag", "v_y_bXP4NtAw0", "v_WHchTZ61VT4", "v_-79MZQX4CEA", "v_2NMTArm9IkA", "v_9ukVV07rszg", "v_VaT3qsoHPQ8", "v_EEMGyhO3OVI", "v_4eHP5IvDl6o", "v_Czd1PFeumIo", "v_TEh6gfRUFZQ", "v_9njq_aC4AS4", "v_frePM0YGtQE", "v_v1Vmf5s42No", "v_lAN2pe1lW-o", "v_SLdf2ZUdgEQ", "v_gXffXyAkcHM", "v_0dkIbKXXFzI", "v_twrPZghmNtA", "v_DDh5-FjIegY", "v_iFJaqDgYsp0", "v_5Y1AJsAE9UE", "v_pHiulmPx7ek", "v_Uv_6SJlvCl0", "v_eH8PT9fzbqU", "v_GvP6gZbHn30", "v_Ydep68S6ViE", "v_je6wJ_Ky5wg", "v_eXMF6Skt2To", "v_zb6WUBWwXfk", "v_E2NKQZNMAO0", "v_aDWrPrNFdR0", "v_xrl3oxTa6sQ", "v_A3160tXXLGg", "v_dRAn_gsx9Wc", "v_z48kSSKMoXo", "v_q_nBBJS-eJo", "v_DFJBJkCR0Bk", "v_HNvolNt5RU0", "v_8-QcL1k5n6k", "v_n_9skH6xGeM", "v_3HYQV_zu2RA", "v_IiNf2F4P5sE", "v_1VBg21aaiKM", "v_Wzg4d-3ym1E", "v_hryx3zm06U8", "v_NNOsdZr802w", "v_KbEoaYhMZ6c", "v_kHBTnFweJfw", "v_HaGLPOqibaM", "v_OBua42LRiF8", "v_kaRZaCGzNzw", "v_SoWow2cxfac", "v_-HZtgP41I_o", "v_iDMzTPfELoc", "v_dfgwl-_IMic", "v_2-S2fehRKVc", "v__XRJk2oFwZw", "v_7FPvAakfM9Y", "v_sODu6d-3zAQ", "v_UGKGBBAckJw", "v_CzyMYAvKE2E", "v_01vNlQLepsE", "v_yUbdrBSmUHE", "v_qsYElirHVUU", "v_-g-qMUjVA-s", "v_3AsQjx1lxLU", "v_TAC-5hXVLPY", "v_TDfWOcKi684", "v_4OfhHE72V8c", "v_iMwLP3y0VcQ", "v_v5i_NAlJX1Y", "v_xzQRc682Isc", "v_CRzaKuaCXr8", "v_BKdKbFPerGo", "v_iwMXYbYyJy4", "v_4XTJzFjjFp0", "v_qhYQd9nwOts", "v_bBtzyRzk0UM", "v_6nMQRUhOcwM", "v_Z2QA7dUVwMM", "v_XDBugI_CcYs", "v_iL5abexk3vQ", "v_4OeZViscNp4", "v_ZGsYV0KDB-4", "v_SI8HO5-e24c", "v_stVRtmxHVaE", "v_R8WbSI3m1lI", "v_JXyi7hFT26w", "v_ieWgalZPc2g", "v_2Sev8z4P7pE", "v_tBNOJJx4Z9k", "v_IU6LVYI0FZM", "v_JgDfOMDfNZs", "v_Flh6nxGkf74", "v_PYNTOqgOXWc", "v_rbKPBMRj9jY", "v_SF3pw17yBB4", "v_wGEaIInAtT4", "v_7X3wPRKuAsU", "v_CMGjxw3X1dI", "v_ZF4oT2P0a54", "v_ucsAN6pGv6w", "v_9Hxcuf80TK0", "v_sOMA_oI7dgk", "v_-NM-0NZXRNw", "v_zPDbMflNURc", "v_WOZbWqJMkRg", "v_Fky1ioAUt38", "v_OEQM6wYtYlk", "v_FsS8cQbfKTQ", "v_YWfLZFXwjTE", "v_iIVOAvu3qtM", "v_Dt2KQcKR4T8", "v_TbxVdELEiO8", "v_AJ15GW-sS5M", "v_JXazqQitVdQ", "v_L54gbbqtxOg", "v_YNnyUVFE4uM", "v_SFfB6qvT5FI", "v_Qu3_80O0j5w", "v_XuYmybr9uDE", "v_iGxMm7C1q48", "v_zpBZ7HMNO34", "v_tCQiu-qY9XA", "v_eaI8My4pGq4", "v_GNg5kjnJlOE", "v_Z_YXWLkRmjQ", "v_8nhuvbFSSmw", "v_7S15OsGinjw", "v_lETAKUG4pQw", "v_6WQSZekz8vQ", "v_VceicZDzH3U", "v_ZhUC4qTGdHY", "v_unE-vkRljRs", "v_8v2ewQE-QK0", "v_dZyb8t-4ATQ", "v_ye7e0mitDdU", "v_Dydb923dXss", "v_WPM0vuERyfc", "v_SMNXIkCGh_0", "v_1cLxW-FhgpA", "v_KZyg_UYyL0s", "v_xbQQhK7wQZQ", "v_m5PO3T2uGzs", "v_-76d-7Ju7L0", "v_BhCNHWQhhEw", "v__VPf75tGIHQ", "v_iUGuDzgow2I", "v_frbNKAZALzI", "v_5oPGbuL8G5Y", "v_hlvs-e3bCq0", "v_QDTo_ss6INM", "v_pc5_pexVob8", "v_eCh_SqpkjtA", "v_aSxSgymPOBw", "v_oEDBkmmVKM0", "v_un6VqJYUpDo", "v_2tO1ApNwXpQ", "v_dvHj856L8zY", "v_a0YyuiZVtFU", "v_tb7s5a1H-IU", "v_I91LmNcwN4Y", "v_cz2ESqP3PDk", "v_qXNYHbnGvto", "v_l0btLzdAeuM", "v_wdD-UHM8rTg", "v_pem8BpCspUM", "v_GBFRHM7i-NQ", "v_1imA9vLRd3k", "v_SAaqnGbci6Y", "v_e9bdQGmyrKA", "v_rqmi-DjYp0U", "v_E2yPoqpNVdM", "v_Uj1QtIM8500", "v_JJ811udnROI", "v_aa5jHg4E3O0", "v_HQk5hngL4Us", "v_J_CJSmMFWlg", "v_lAsPxkZD6Xc", "v_tAbB24pczrs", "v_jQR4Hhaf8o8", "v_3G1T_V102GA", "v_t1U8fJVEztQ", "v_JuD1OdoXe9Q", 
"v__i6kvwg1Oyo", "v_2XOTxAZZhsQ", "v_QOuNt8YH3Rk", "v_dDmc6n79ek0", "v_JlCQlNjvXzA", "v_P-04xkAdWSY", "v_KxAxMZ6dYa4", "v_O3HFalRZVts", "v_hZ0jI9U5Nws", "v_d950IKYTYY0", "v_GSo0lqq5zmM", "v_fvroOk6TpKk", "v_Vg043D46E7Q", "v_2iW1Eq9SDW4", "v_-Lxv663IEaI", "v_CvsFEsXakwo", "v_9T1C2CW_P0A", "v_-WrOnvkUTXg", "v_1mYtNMDFyXQ", "v_xIhTY02lRSE", "v_kl4vLrvGAmM", "v_9wRQsxVFwkE", "v_oZTFplEHVDo", "v_o4z1nEiyr4E", "v_sy-xNiKnfBU", "v_4HxmQpkryjA", "v_dAcdSkaoK64", "v_6QrVxwNUbBk", "v_v621l04N1QQ", "v_mgNfayAiTQc", "v_wq4H7L15NMA", "v_RpgTxW7lYJM", "v_iHm8ZXs2XdY", "v_vSV7arHrH5k", "v_Yd3G3732WbI", "v_x7-2_HigN8c", "v_dEG-OgH9zmU", "v_ByIIq3jFOKo", "v_6U081DbNJIY", "v_Ii3jLIcf92s", "v_CQocaUwWcQI", "v_R_AsoAmxd4o", "v_1qU2CdUQbw0", "v_ZBoa0UN86Qw", "v_Cm8hWFFA16I", "v_b7B0NRizzYo", "v_rDxEl9bPodU", "v_tS2d90ZGmeA", "v_IiCN1md2MV4", "v_iOa_svsqGxQ", "v_HMIv7qpDmH0", "v_V9_mEvC24nk", "v_L2FgftH2VD8", "v_QMCHIR3nDLs", "v_v3t4Z5cEgZM", "v_RF0ChBe9HHI", "v_ujS0VNOXeVg", "v_DZBu_U_Jt4c", "v_7fwrkFHTm-Q", "v_oQuAwR_t5Ig", "v_AISkvED80lU", "v_outMi06JZss", "v_sbIh_M0oGs8", "v_fJCkM6secVM", "v_wcmO0R3Kqzo", "v_LGt_KpgXymU", "v_ULH_AqrP3to", "v_qblFXnyqf1o", "v_m6C4SOxfNGQ", "v_lL-YnWr815o", "v_cSwDKlxiqXQ", "v_l4LFSd-7hxU", "v_7qBA7XPDsC4", "v_4G2jW3hbiO4", "v_YNo7-L8VQWw", "v_j5mhELw7XaM", "v_ywJQotAB3dw", "v_87hsTxVtn-A", "v_aW8LjbEpY1c", "v_j3QSVh_AhDc", "v_YSnCGTXJtig", "v_QjaEDlh805g", "v_Qnr73D2zIjU", "v_wDw3i5ODGWA", "v_yyCsQ7QzAJ8", "v_A8NAj6NQ5vM", "v_dBNZf90PLJ0", "v_e9AsyRGUzTc", "v_wZJeEV6sZXE", "v_Uc1_7BXtXZs", "v_3Y_4Azzta6Q", "v_mzewLmZSCMU", "v_B8imoIn6NUE", "v_21Pz1cjdd2I", "v_Cc_DmDsXm6M", "v_1cWWCiNIYnc", "v_8fZbv6OUEm8", "v_uyzQkTArIwU", "v_UmH4VPH0KG4", "v_o8-v0rPP06U", "v_3iLo6lxAarc", "v_vgdcVhRSa9E", "v_E9HbfcT1ZWM", "v_TyHLBe6__rc", "v_SqIVJrXxO3g", "v_TqO5Ddh5Lp4", "v_j73Wh1olDsA", "v_P2Fcv3cC8bI", "v_Xz3F4x70qjQ", "v_uzUVSpklbRs", "v_8ulb1O_5gRs", "v_AmhfmeKk6Bg", "v_3KsOJiA_uak", "v_DP9hfhq8sro", "v_7Eh6c1eYMFk", "v_v9bcQsDl-yk", "v_SKtUq_1cOSs", "v_wAAu-2U5Pso", "v_6jgWCFWtCfU", "v_msELZwMnoFo", "v_O2Y6rn4gFd4", "v_SBn1i9YqN1k", "v_JQavlg895jU", "v_MjmDj36sVxM", "v_6g80a1NnftU", "v_DQVkDzj4cPE", "v_ulopyhvgyQg", "v_2sbF8W0_bbg", "v_R3HC-IAZVZg", "v_HHG1kCydLYU", "v_bEcSrzeCGyA", "v_9I42aiA-UcY", "v_wUvC0TXK1PM", "v_8iTz6Jy3lJg", "v_UCzKdpP9sLE", "v_qmKSDwVvxVk", "v_4vOxhqUbHL8", "v_8s3b1f6OMw0", "v_MVUqd8iVUEk", "v_6VUsbs84lCc", "v_i49blayQ93Q", "v_pi6sBUrSNGk", "v_PUJYZEq8H64", "v_2UbwK1Qtveg", "v_MIQiVsnwcWE", "v_PqcdYoa--8g", "v_b-3l2qIHL5w", "v_BFxxrjqgF0w", "v_DkiJwIJQKaM", "v_t0ZuC58UIOM"] -------------------------------------------------------------------------------- /data/didemo_precomp/.gitignore: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/data/didemo_precomp/.gitignore -------------------------------------------------------------------------------- /decoder/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/decoder/__init__.py -------------------------------------------------------------------------------- /decoder/layers.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.init as init 4 | import torchvision.models as models 5 | from torch.autograd import Variable 6 | from torch.nn.utils.rnn import pack_padded_sequence, 
pad_packed_sequence 7 | import torch.backends.cudnn as cudnn 8 | import numpy as np 9 | import torch.nn.functional as F 10 | from IPython import embed 11 | 12 | class Seq2Seq_Decode(nn.Module): 13 | def __init__(self, embedding_features, rnn_features, rnn_bidirectional=False): 14 | super(Seq2Seq_Decode, self).__init__() 15 | self.bidirectional = rnn_bidirectional 16 | print(embedding_features, rnn_features) 17 | 18 | self.rnn = nn.GRU(input_size=embedding_features, 19 | hidden_size=rnn_features, 20 | num_layers=1, batch_first=True, 21 | bidirectional=rnn_bidirectional) 22 | 23 | self.features = rnn_features 24 | 25 | self._init_rnn(self.rnn.weight_ih_l0) 26 | self._init_rnn(self.rnn.weight_hh_l0) 27 | self.rnn.bias_ih_l0.data.zero_() 28 | self.rnn.bias_hh_l0.data.zero_() 29 | 30 | def _init_rnn(self, weight): 31 | for w in weight.chunk(3, 0): 32 | init.xavier_uniform(w) 33 | 34 | def forward(self, q_emb, q_len, hidden=None): 35 | lengths = q_len.long() 36 | lens, indices = torch.sort(lengths, 0, True) 37 | 38 | packed_batch = pack_padded_sequence(q_emb[indices.cuda()], lens.tolist(), batch_first=True) 39 | if hidden is not None: 40 | N_, H_ = hidden.size() 41 | hidden, _ = self.rnn(packed_batch, hidden[indices.cuda()].view(1, N_, H_)) 42 | else: 43 | hidden, _ = self.rnn(packed_batch) 44 | hidden_depack = pad_packed_sequence(hidden, True)[0] 45 | 46 | if self.bidirectional: 47 | raise NotImplementedError('bidirectional decoding is not implemented') 48 | 49 | _, _indices = torch.sort(indices, 0) 50 | hidden_depack = hidden_depack[_indices.cuda()] 51 | 52 | return hidden_depack 53 | 54 | 55 | -------------------------------------------------------------------------------- /decoder/loss.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.init as init 4 | import torchvision.models as models 5 | from torch.autograd import Variable 6 | import torch.backends.cudnn as cudnn 7 | import numpy as np 8 | from collections import OrderedDict 9 | import torch.nn.functional as F 10 | from IPython import embed 11 | 12 | class EuclideanLoss(nn.Module): 13 | def __init__(self, norm=True): 14 | super(EuclideanLoss, self).__init__() 15 | self.norm = norm 16 | 17 | def forward_loss(self, clip_remap, clip_emb): 18 | # per-sample Euclidean distance between reconstructed and original clip embeddings 19 | score = clip_remap - clip_emb 20 | 21 | score_sub = torch.sqrt((score**2).sum(dim=1)) 22 | 23 | if self.norm: 24 | return score_sub.mean() 25 | else: 26 | return score_sub.sum() 27 | 28 | def forward(self, clip_remap, clip_emb): 29 | return self.forward_loss(clip_remap, clip_emb) 30 | -------------------------------------------------------------------------------- /decoder/model.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.init as init 4 | import torchvision.models as models 5 | from torch.autograd import Variable 6 | from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence 7 | import torch.backends.cudnn as cudnn 8 | from torch.nn.utils.clip_grad import clip_grad_norm 9 | import numpy as np 10 | from collections import OrderedDict 11 | import torch.nn.functional as F 12 | from IPython import embed 13 | 14 | from layers import * 15 | 16 | class DecoderSequence(nn.Module): 17 | def __init__(self, img_dim, embed_size, dropout=0, 18 | no_imgnorm=False, bidirectional=False, rnn_type='seq2seq'): 19 | super(DecoderSequence, self).__init__() 20 | self.embed_size = embed_size 21 | self.no_imgnorm = no_imgnorm 
22 | self.bidirectional = bidirectional 23 | self.img_dim = img_dim 24 | 25 | num_layers = 1 26 | if dropout > 0: 27 | self.dropout = nn.Dropout(dropout) 28 | if rnn_type == 'seq2seq': 29 | self.rnn = Seq2Seq_Decode(img_dim, embed_size, rnn_bidirectional=bidirectional) 30 | else: 31 | raise ValueError('Unsupported RNN type') 32 | 33 | def forward(self, x, lengths): 34 | """Decode embedded vectors back into sequences of per-step features.""" 35 | outputs = self.rnn(x, lengths) 36 | lengths = lengths.numpy().astype(int) 37 | sum_total = sum(lengths) 38 | # print (x.shape, sum_total) 39 | outputs_reshape = Variable(torch.zeros(sum_total,outputs.shape[2])).cuda() 40 | # print outputs_reshape.shape 41 | pos = 0 42 | for i,leng in enumerate(lengths): 43 | outputs_reshape[pos:pos+leng,:] = outputs[i,0:leng,:] 44 | pos = pos + leng 45 | 46 | # flattened (sum of lengths) x dim outputs; no normalization is applied here 47 | return outputs_reshape 48 | 49 | -------------------------------------------------------------------------------- /didemo_dev/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/didemo_dev/__init__.py -------------------------------------------------------------------------------- /didemo_dev/data.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.utils.data as data 3 | import torchvision.transforms as transforms 4 | import os 5 | import os.path as osp 6 | import nltk 7 | from PIL import Image 8 | import numpy as np 9 | import json as jsonmod 10 | import h5py 11 | import copy 12 | from IPython import embed 13 | 14 | this_dir = osp.abspath(osp.join(osp.dirname(__file__), '..')) 15 | class PrecompDataset(data.Dataset): 16 | """ 17 | Load precomputed captions and video features 18 | for the DiDeMo dataset 19 | """ 20 | 21 | def __init__(self, data_path, data_split, vocab, opt): 22 | self.vocab = vocab 23 | json_path = osp.join(this_dir, 'didemo_dev', '{}.json'.format(data_split) ) 24 | 25 | # Captions 26 | self.jsondict = jsonmod.load(open(json_path, 'r')) 27 | self.ann_id = {} 28 | for i, keys in enumerate(self.jsondict.keys()): 29 | self.ann_id[i] = str(keys) 30 | 31 | # Video features 32 | #self.video_emb = h5py.File(osp.join(this_dir, 'data', 'didemo_incep_v3.h5'+str(opt.data_switch)),'r') 33 | self.video_emb = h5py.File(osp.join(this_dir, 'data', 'didemo_precomp', 'didemo_incep_v3.h5'),'r') 34 | 35 | self.length = len(self.ann_id) 36 | 37 | def img_cap_feat_combine(self, video_feat, caption_feat, cur_vid): 38 | 39 | #### Videos #### 40 | segment_number = len(self.jsondict[cur_vid]['times']) 41 | video_feat_len = video_feat.shape[0] 42 | 43 | clips = [] 44 | 45 | prev_time = None 46 | groups = [] 47 | group_count = 1 48 | 49 | for seg_id in range(segment_number): 50 | picked_segment = self.jsondict[cur_vid]['times'][seg_id] 51 | start_frame = picked_segment[0] 52 | end_frame = picked_segment[1] 53 | 54 | if start_frame > video_feat_len or end_frame > video_feat_len: 55 | end_frame=video_feat.shape[0] 56 | start_frame=0 57 | 58 | if prev_time is not None: 59 | if start_frame == prev_time[0] and end_frame == prev_time[1]: 60 | group_count = group_count + 1 61 | else: 62 | groups.append(group_count) 63 | group_count = 1 64 | prev_time = [start_frame, end_frame] 65 | else: 66 | group_count = 1 67 | prev_time = [start_frame, end_frame] 68 | 69 | current_feat = video_feat[start_frame: end_frame+1, :] 70 | 71 | max_frames = 80.0 72 | if current_feat.shape[0] > max_frames: 
73 | ind = np.arange(0, current_feat.shape[0], current_feat.shape[0]/max_frames).astype(int).tolist() 74 | current_feat = current_feat[ind,:] 75 | 76 | clips.append(current_feat) 77 | 78 | groups.append(group_count) 79 | 80 | video = video_feat 81 | if video.shape[0] > max_frames: 82 | ind = np.arange(0, video.shape[0], video.shape[0]/max_frames).astype(int).tolist() 83 | video = video[ind,:] 84 | 85 | #### Captions #### 86 | vocab = self.vocab 87 | 88 | captions = [] 89 | length_cap = [] 90 | for cap in caption_feat: 91 | tokens_cap = nltk.tokenize.word_tokenize(cap.lower()) 92 | _cap = [] 93 | _cap.extend([vocab(token) for token in tokens_cap]) 94 | target_cap = torch.Tensor(_cap) 95 | captions.append(target_cap) 96 | 97 | paragraph = torch.cat(captions, 0) 98 | 99 | lengths_clip = [len(vid) for vid in clips] 100 | lengths_cap = [len(cap) for cap in captions] 101 | 102 | num_clip = len(clips) 103 | num_caption = len(captions) 104 | assert num_clip == num_caption, 'something is wrong: {} vs. {}'.format(num_clip, num_caption) 105 | 106 | lengths_clip = torch.Tensor(lengths_clip).long() 107 | lengths_cap = torch.Tensor(lengths_cap).long() 108 | 109 | groups = torch.from_numpy(np.array(groups)).long() 110 | 111 | return clips, captions, video, paragraph, lengths_clip, lengths_cap, num_clip, num_caption, groups 112 | 113 | def __getitem__(self, index): 114 | # fetch the precomputed video features and per-segment captions 115 | cur_vid = self.ann_id[index] 116 | image_data = self.video_emb[cur_vid+'.npz'].value 117 | image = torch.Tensor(image_data) 118 | caption_json = self.jsondict[cur_vid]['description'] 119 | 120 | clips, captions, video, paragraph, lengths_clip, lengths_cap, \ 121 | num_clip, num_caption, groups = self.img_cap_feat_combine(image, caption_json, cur_vid) 122 | 123 | return clips, captions, video, paragraph, lengths_clip, lengths_cap, num_clip, num_caption, index, groups 124 | 125 | def __len__(self): 126 | return self.length 127 | 128 | def collate_fn(data_batch): 129 | _clips, _captions, _video, _paragraph, _lengths_clip, _lengths_cap, _num_clip, _num_caption, _index, groups = zip(*data_batch) 130 | 131 | # Merge images 132 | lengths_clip = torch.cat(_lengths_clip, 0) 133 | clips = torch.zeros(len(lengths_clip), lengths_clip.max(), 2048) 134 | _cur_ind = 0 135 | for i, _vid_seg in enumerate(_clips): 136 | for j, vid in enumerate(_vid_seg): 137 | end = lengths_clip[_cur_ind] 138 | clips[_cur_ind, :end, :] = vid[:end, :] 139 | _cur_ind += 1 140 | 141 | lengths_video = torch.Tensor([len(x) for x in _video]).long() 142 | videos = torch.zeros(len(_video), lengths_video.max(), 2048) 143 | for i, vid in enumerate(_video): 144 | end = lengths_video[i] 145 | videos[i, :end, :] = vid[:end, :] 146 | 147 | # Merge captions 148 | lengths_cap = torch.cat(_lengths_cap, 0) 149 | captions = torch.zeros(len(lengths_cap), lengths_cap.max()).long() 150 | _cur_ind = 0 151 | for i, _cap_seg in enumerate(_captions): 152 | for j, cap in enumerate(_cap_seg): 153 | end = lengths_cap[_cur_ind] 154 | captions[_cur_ind, :end] = cap[:end] 155 | _cur_ind += 1 156 | 157 | lengths_paragraph = torch.Tensor([len(x) for x in _paragraph]).long() 158 | paragraphs = torch.zeros(len(_paragraph), lengths_paragraph.max()).long() 159 | for i, cap in enumerate(_paragraph): 160 | end = lengths_paragraph[i] 161 | paragraphs[i, :end] = cap[:end] 162 | 163 | groups = torch.cat(groups) 164 | 165 | return clips, captions, videos, paragraphs, lengths_clip, lengths_cap, lengths_video, lengths_paragraph, _num_clip, _num_caption, _index, groups 
_num_caption, _index, groups
166 | 
167 | def get_precomp_loader(data_path, data_split, vocab, opt, batch_size=100,
168 |                        shuffle=True, num_workers=2):
169 |     """Returns torch.utils.data.DataLoader for custom coco dataset."""
170 |     dset = PrecompDataset(data_path, data_split, vocab, opt)
171 | 
172 |     data_loader = torch.utils.data.DataLoader(dataset=dset,
173 |                                               batch_size=batch_size,
174 |                                               shuffle=shuffle, num_workers=num_workers,
175 |                                               pin_memory=True,
176 |                                               collate_fn=collate_fn)
177 |     return data_loader
178 | 
179 | def get_loaders(data_name, vocab, batch_size, workers, opt):
180 |     dpath = os.path.join(opt.data_path, data_name)
181 |     if opt.data_name.endswith('_precomp'):
182 |         train_loader = get_precomp_loader(dpath, 'train_data_bwzhang', vocab, opt,
183 |                                           batch_size, True, workers)
184 |         val_loader = get_precomp_loader(dpath, 'test_data_bwzhang', vocab, opt,
185 |                                         batch_size, False, workers)
186 |     return train_loader, val_loader
187 | 
188 | 
189 | def get_test_loader(split_name, data_name, vocab, batch_size,
190 |                     workers, opt):
191 |     dpath = os.path.join(opt.data_path, data_name)
192 |     if opt.data_name.endswith('_precomp'):
193 |         test_loader = get_precomp_loader(dpath, split_name, vocab, opt,
194 |                                          batch_size, False, workers)
195 |     return test_loader
196 | 
-------------------------------------------------------------------------------- /didemo_vocab.py: --------------------------------------------------------------------------------
 1 | # Create a vocabulary wrapper
 2 | import nltk
 3 | import pickle
 4 | from collections import Counter
 5 | import json
 6 | import argparse
 7 | import os
 8 | from tqdm import tqdm
 9 | 
10 | annotations = {
11 |     'didemo': ['../../../playground/vsepp-develop-develop/data/data/train_data.json', '../../../playground/vsepp-develop-develop/data/data/val_data.json', '../../../playground/vsepp-develop-develop/data/data/test_data.json']
12 | }
13 | 
14 | 
15 | class Vocabulary(object):
16 |     """Simple vocabulary wrapper."""
17 | 
18 |     def __init__(self):
19 |         self.word2idx = {}
20 |         self.idx2word = {}
21 |         self.idx = 0
22 | 
23 |     def add_word(self, word):
24 |         if word not in self.word2idx:
25 |             if word in glove:
26 |                 self.word2idx[word] = self.idx
27 |                 self.idx2word[self.idx] = word
28 |                 self.idx += 1
29 | 
30 |     def __call__(self, word):
31 |         if word not in self.word2idx:
32 |             return self.word2idx['UNK']
33 |         return self.word2idx[word]
34 | 
35 |     def __len__(self):
36 |         return len(self.word2idx)
37 | 
38 | 
39 | 
40 | def build_vocab(data_path, data_name, jsons, threshold):
41 |     """Build a simple vocabulary wrapper."""
42 |     global glove
43 |     counter = Counter()
44 | 
45 |     f_glove = open('/data1/bwzhang/glove.840B.300d.txt','r')
46 |     glove = {}
47 |     for line in tqdm(f_glove):
48 |         line = line[0:-1]
49 |         line_split = line.split(' ')
50 |         word = line_split[0]
51 |         glove[word] = 1
52 | 
53 | 
54 |     for path in jsons[data_name]:
55 |         full_path = os.path.join(path)
56 |         print(full_path)
57 |         dic = json.load(open(full_path,'r'))
58 |         for i in range(len(dic)):
59 |             sen = dic[i]['description']
60 |             tokens = nltk.tokenize.word_tokenize(sen.lower())
61 |             counter.update(tokens)
62 | 
63 | 
64 |     # Discard if the occurrence of the word is less than min_word_cnt.
65 |     words = [word for word, cnt in counter.items() if cnt >= threshold]
66 | 
67 |     # Create a vocab wrapper and add some special tokens.
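As a quick reference, the Vocabulary wrapper above only admits words present in the loaded GloVe keyset and maps anything unseen to UNK at lookup time. A toy usage sketch (the stand-in `glove` dict replaces the real glove.840B.300d.txt keyset that build_vocab loads from disk):

```python
import didemo_vocab as dv

# stand-in for the GloVe keyset that build_vocab normally reads from disk
dv.glove = {t: 1 for t in ['PAD', 'BOS', 'EOS', 'UNK', 'a', 'man', 'is', 'running']}

vocab = dv.Vocabulary()
for tok in ['PAD', 'BOS', 'EOS', 'UNK', 'a', 'man', 'is', 'running']:
    vocab.add_word(tok)

print(len(vocab))                                  # 8
print([vocab(w) for w in ['a', 'man', 'flies']])   # unseen 'flies' falls back to UNK: [4, 5, 3]
```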
68 | # vocab = pickle.load(open('vocab/coco_vocab.pkl','r')) 69 | vocab = Vocabulary() 70 | vocab.add_word('PAD') 71 | vocab.add_word('BOS') 72 | vocab.add_word('EOS') 73 | vocab.add_word('UNK') 74 | 75 | # Add words to the vocabulary. 76 | for i, word in enumerate(words): 77 | vocab.add_word(word) 78 | print (len(vocab)) 79 | return vocab 80 | 81 | 82 | def main(data_path, data_name): 83 | vocab = build_vocab(data_path, data_name, jsons=annotations, threshold=1) 84 | with open('./vocab/%s_precomp_vocab_total.pkl' % data_name, 'wb') as f: 85 | pickle.dump(vocab, f, pickle.HIGHEST_PROTOCOL) 86 | print("Saved vocabulary file to ", './vocab/%s_precomp_vocab_total.pkl' % data_name) 87 | 88 | 89 | if __name__ == '__main__': 90 | parser = argparse.ArgumentParser() 91 | parser.add_argument('--data_path', default='/w/31/faghri/vsepp_data/') 92 | parser.add_argument('--data_name', default='didemo', 93 | help='coco|anet|didemo') 94 | opt = parser.parse_args() 95 | main(opt.data_path, opt.data_name) 96 | -------------------------------------------------------------------------------- /evaluation.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | import os 3 | import pickle 4 | 5 | import numpy 6 | from activity_net.data import get_test_loader 7 | import time 8 | import numpy as np 9 | from anet_vocab import Vocabulary # NOQA 10 | import torch 11 | from model import VSE 12 | from collections import OrderedDict 13 | from IPython import embed 14 | import torch.nn.functional as F 15 | 16 | 17 | 18 | class AverageMeter(object): 19 | """Computes and stores the average and current value""" 20 | 21 | def __init__(self): 22 | self.reset() 23 | 24 | def reset(self): 25 | self.val = 0 26 | self.avg = 0 27 | self.sum = 0 28 | self.count = 0 29 | 30 | def update(self, val, n=0): 31 | self.val = val 32 | self.sum += val * n 33 | self.count += n 34 | self.avg = self.sum / (.0001 + self.count) 35 | 36 | def __str__(self): 37 | """String representation for logging 38 | """ 39 | # for values that should be recorded exactly e.g. 
iteration number
 40 |         if self.count == 0:
 41 |             return str(self.val)
 42 |         # for stats
 43 |         return '%.4f (%.4f)' % (self.val, self.avg)
 44 | 
 45 | 
 46 | class LogCollector(object):
 47 |     """A collection of logging objects that can change from train to val"""
 48 | 
 49 |     def __init__(self):
 50 |         # to keep the order of logged variables deterministic
 51 |         self.meters = OrderedDict()
 52 | 
 53 |     def update(self, k, v, n=0):
 54 |         # create a new meter if previously not recorded
 55 |         if k not in self.meters:
 56 |             self.meters[k] = AverageMeter()
 57 |         self.meters[k].update(v, n)
 58 | 
 59 |     def __str__(self):
 60 |         """Concatenate the meters in one log line
 61 |         """
 62 |         s = ''
 63 |         for i, (k, v) in enumerate(self.meters.items()):
 64 |             if i > 0:
 65 |                 s += ' '
 66 |             s += k + ' ' + str(v)
 67 |         return s
 68 | 
 69 |     def tb_log(self, tb_logger, prefix='', step=None):
 70 |         """Log using tensorboard
 71 |         """
 72 |         for k, v in self.meters.items():
 73 |             tb_logger.log_value(prefix + k, v.val, step=step)
 74 | 
 75 | def LogReporter(tb_logger, result, epoch, name):
 76 |     for key in result:
 77 |         tb_logger.log_value(name+key, result[key], step=epoch)
 78 |     return
 79 | 
 80 | def encode_data(opt, model, data_loader, log_step=10, logging=print, contextual_model=True):
 81 |     """Encode all images and captions loadable by `data_loader`
 82 |     """
 83 |     batch_time = AverageMeter()
 84 |     val_logger = LogCollector()
 85 | 
 86 |     # switch to evaluate mode
 87 |     model.val_start(opt)
 88 | 
 89 |     end = time.time()
 90 | 
 91 |     # numpy array to keep all the embeddings
 92 |     clip_embs, cap_embs = [], []
 93 |     vid_embs, para_embs = [], []
 94 |     vid_contexts, para_contexts = [], []
 95 |     num_clips_total = []
 96 |     cur_vid_total = []
 97 |     for i, (clips, captions, videos, paragraphs, lengths_clip, lengths_cap, lengths_video, lengths_paragraph, num_clips, num_caps, ind, cur_vid) in enumerate(data_loader):
 98 |         # make sure val logger is used
 99 |         model.logger = val_logger
100 |         num_clips_total.extend(num_clips)
101 | 
102 |         # compute the embeddings
103 |         clip_emb, cap_emb = model.forward_emb(clips, captions, lengths_clip, lengths_cap)
104 |         vid_context, para_context = model.forward_emb(videos, paragraphs, lengths_video, lengths_paragraph)
105 |         if contextual_model:
106 |             vid_emb, para_emb = model.structure_emb(clip_emb, cap_emb, num_clips, num_caps, vid_context, para_context)
107 |         else:
108 |             vid_emb, para_emb = model.structure_emb(clip_emb, cap_emb, num_clips, num_caps)
109 | 
110 | 
111 |         clip_emb = F.normalize(clip_emb)
112 |         cap_emb = F.normalize(cap_emb)
113 |         vid_emb = F.normalize(vid_emb)
114 |         para_emb = F.normalize(para_emb)
115 |         vid_context = F.normalize(vid_context)
116 |         para_context = F.normalize(para_context)
117 | 
118 | 
119 |         # initialize the numpy arrays given the size of the embeddings
120 |         clip_embs.extend(clip_emb.data.cpu())
121 |         cap_embs.extend(cap_emb.data.cpu())
122 |         vid_embs.extend(vid_emb.data.cpu())
123 |         para_embs.extend(para_emb.data.cpu())
124 |         vid_contexts.extend(vid_context.data.cpu())
125 |         para_contexts.extend(para_context.data.cpu())
126 |         cur_vid_total.extend(cur_vid)
127 | 
128 |         # measure accuracy and record loss
129 |         model.forward_loss(vid_emb, para_emb, 'test')
130 | 
131 |         # measure elapsed time
132 |         batch_time.update(time.time() - end)
133 |         end = time.time()
134 | 
135 |         if i % log_step == 0:
136 |             logging('Test: [{0}/{1}]\t'
137 |                     '{e_log}\t'
138 |                     'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
139 |                     .format(
140 |                         i, len(data_loader), batch_time=batch_time,
141 |                         e_log=str(model.logger)))
142 | 
143 |     vid_embs =
torch.stack(vid_embs, 0)
144 |     para_embs = torch.stack(para_embs, 0)
145 |     vid_embs = vid_embs.numpy()
146 |     para_embs = para_embs.numpy()
147 | 
148 |     clip_embs = torch.stack(clip_embs, 0)
149 |     cap_embs = torch.stack(cap_embs, 0)
150 |     clip_embs = clip_embs.numpy()
151 |     cap_embs = cap_embs.numpy()
152 | 
153 |     vid_contexts = torch.stack(vid_contexts, 0)
154 |     para_contexts = torch.stack(para_contexts, 0)
155 |     vid_contexts = vid_contexts.numpy()
156 |     para_contexts = para_contexts.numpy()
157 | 
158 |     return vid_embs, para_embs, clip_embs, cap_embs, vid_contexts, para_contexts, num_clips_total, cur_vid_total
159 | 
160 | def i2t(images, captions, npts=None, measure='cosine'):
161 |     npts = images.shape[0]
162 |     ranks = numpy.zeros(npts)
163 |     top1 = numpy.zeros(npts)
164 |     d = numpy.dot(images, captions.T)
165 | 
166 |     for index in range(npts):
167 |         inds = numpy.argsort(d[index])[::-1]
168 | 
169 |         rank = numpy.where(inds == index)[0][0]
170 |         ranks[index] = rank
171 |         top1[index] = inds[0]
172 | 
173 |     r1 = 100.0 * len(numpy.where(ranks < 1)[0]) / len(ranks)
174 |     r5 = 100.0 * len(numpy.where(ranks < 5)[0]) / len(ranks)
175 |     r10 = 100.0 * len(numpy.where(ranks < 50)[0]) / len(ranks)  # NOTE: threshold 50 -- this is Recall@50 despite the 'r10' key
176 |     medr = numpy.floor(numpy.median(ranks)) + 1
177 |     meanr = ranks.mean() + 1
178 |     report_dict = dict()
179 |     report_dict['r1'] = r1
180 |     report_dict['r5'] = r5
181 |     report_dict['r10'] = r10
182 |     report_dict['medr'] = medr
183 |     report_dict['meanr'] = meanr
184 |     report_dict['sum'] = r1+r5+r10
185 |     return report_dict, top1, ranks
186 | 
187 | 
188 | def t2i(images, captions, npts=None, measure='cosine'):
189 |     npts = captions.shape[0]
190 |     ranks = numpy.zeros(npts)
191 |     top1 = numpy.zeros(npts)
192 |     d = numpy.dot(captions, images.T)
193 | 
194 |     for index in range(npts):
195 |         inds = numpy.argsort(d[index])[::-1]
196 | 
197 |         rank = numpy.where(inds == index)[0][0]
198 |         ranks[index] = rank
199 |         top1[index] = inds[0]
200 | 
201 |     r1 = 100.0 * len(numpy.where(ranks < 1)[0]) / len(ranks)
202 |     r5 = 100.0 * len(numpy.where(ranks < 5)[0]) / len(ranks)
203 |     r10 = 100.0 * len(numpy.where(ranks < 50)[0]) / len(ranks)  # NOTE: Recall@50 here as well, mirroring i2t
204 |     medr = numpy.floor(numpy.median(ranks)) + 1
205 |     meanr = ranks.mean() + 1
206 |     report_dict = dict()
207 |     report_dict['r1'] = r1
208 |     report_dict['r5'] = r5
209 |     report_dict['r10'] = r10
210 |     report_dict['medr'] = medr
211 |     report_dict['meanr'] = meanr
212 |     report_dict['sum'] = r1+r5+r10
213 |     return report_dict, top1, ranks
214 | 
-------------------------------------------------------------------------------- /gen_vocab.py: --------------------------------------------------------------------------------
 1 | import pickle as pl
 2 | from anet_vocab import Vocabulary
 3 | import argparse
 4 | from tqdm import tqdm
 5 | import numpy as np
 6 | 
 7 | parser = argparse.ArgumentParser(description="using vocab to get glove")
 8 | parser.add_argument('vocab_path', help='path for vocab')
 9 | parser.add_argument('glove_path', help='path for glove')
10 | parser.add_argument('output_path', help='path for output')
11 | args = parser.parse_args()
12 | print (args)
13 | 
14 | 
15 | def main():
16 |     vocab = pl.load(open(args.vocab_path,'r'))
17 |     f_glove = open(args.glove_path,'r')
18 |     glove = {}
19 |     output_glove = []
20 | 
21 |     for line in tqdm(f_glove):
22 |         line = line[0:-1]
23 |         line_split = line.split(' ')
24 |         word = line_split[0]
25 |         vector = line_split[1:]
26 |         glove[word] = vector
27 | 
28 |     for ind in tqdm(range(len(vocab))):
29 |         cur_word = vocab.idx2word[ind]
30 |         if cur_word in glove:
31 |
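Because encode_data L2-normalizes every embedding, the `numpy.dot` inside i2t/t2i is exactly cosine similarity, and as noted in the code the 'r10' entry is really Recall@50. A quick sanity check on synthetic, nearly-matching embeddings (a sketch assuming the repo root is on PYTHONPATH with its dependencies installed):

```python
import numpy as np
from evaluation import i2t

rng = np.random.RandomState(0)
vids = rng.randn(64, 1024).astype(np.float32)
paras = vids + 0.05 * rng.randn(64, 1024).astype(np.float32)  # noisy copies = easy matches
vids /= np.linalg.norm(vids, axis=1, keepdims=True)
paras /= np.linalg.norm(paras, axis=1, keepdims=True)

rep, top1, ranks = i2t(vids, paras)
print(rep['r1'], rep['r5'], rep['r10'])  # near-perfect recall; 'r10' is Recall@50
```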
output_glove.append(glove[cur_word]) 32 | np.savez(args.output_path, output_glove) 33 | 34 | 35 | 36 | if __name__ == "__main__": 37 | main() 38 | -------------------------------------------------------------------------------- /layers.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.init as init 4 | import torchvision.models as models 5 | from torch.autograd import Variable 6 | from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence 7 | import torch.backends.cudnn as cudnn 8 | import numpy as np 9 | import torch.nn.functional as F 10 | from IPython import embed 11 | 12 | class GroupMLP(nn.Module): 13 | def __init__(self, in_features, mid_features, out_features, drop=0.5, groups=1): 14 | super(GroupMLP, self).__init__() 15 | 16 | self.conv1 = nn.Conv1d(in_features, mid_features, 1) 17 | self.drop = nn.Dropout(p=drop) 18 | self.relu = nn.ReLU() 19 | self.conv2 = nn.Conv1d(mid_features, out_features, 1, groups=groups) 20 | 21 | def forward(self, a): 22 | N, C = a.size() 23 | h = self.relu(self.conv1(a.view(N, C, 1))) 24 | return self.conv2(self.drop(h)).view(N, -1) 25 | 26 | class Seq2Seq(nn.Module): 27 | def __init__(self, embedding_features, rnn_features, rnn_bidirectional=False): 28 | super(Seq2Seq, self).__init__() 29 | self.bidirectional = rnn_bidirectional 30 | 31 | self.rnn = nn.GRU(input_size=embedding_features, 32 | hidden_size=rnn_features, 33 | num_layers=1, batch_first=True, 34 | bidirectional=rnn_bidirectional) 35 | 36 | self.features = rnn_features 37 | 38 | self._init_rnn(self.rnn.weight_ih_l0) 39 | self._init_rnn(self.rnn.weight_hh_l0) 40 | self.rnn.bias_ih_l0.data.zero_() 41 | self.rnn.bias_hh_l0.data.zero_() 42 | 43 | def _init_rnn(self, weight): 44 | for w in weight.chunk(3, 0): 45 | init.xavier_uniform(w) 46 | 47 | def forward(self, q_emb, q_len, hidden=None): 48 | lengths = q_len.long() 49 | lens, indices = torch.sort(lengths, 0, True) 50 | 51 | packed_batch = pack_padded_sequence(q_emb[indices.cuda()], lens.tolist(), batch_first=True) 52 | if hidden is not None: 53 | N_, H_ = hidden.size() 54 | _, outputs = self.rnn(packed_batch, hidden[indices.cuda()].view(1, N_, H_)) 55 | else: 56 | _, outputs = self.rnn(packed_batch) 57 | 58 | if self.bidirectional: 59 | outputs = torch.cat([ outputs[0, :, :], outputs[1, :, :] ], dim=1) 60 | else: 61 | outputs = outputs.squeeze(0) 62 | 63 | _, _indices = torch.sort(indices, 0) 64 | outputs = outputs[_indices.cuda()] 65 | 66 | return outputs 67 | 68 | 69 | class Attention(nn.Module): 70 | def __init__(self, embedding_features, rnn_features, rnn_bidirectional=False): 71 | super(Attention, self).__init__() 72 | hid_size = rnn_features 73 | natt = rnn_features 74 | 75 | self.rnn = nn.GRU(input_size=embedding_features, 76 | hidden_size=rnn_features, 77 | num_layers=1, 78 | batch_first=True, 79 | bidirectional=rnn_bidirectional) 80 | self.lin = nn.Linear(hid_size,natt) 81 | self.att_w = nn.Linear(natt,1,bias=False) 82 | self.tanh = nn.Tanh() 83 | 84 | self._init_rnn(self.rnn.weight_ih_l0) 85 | self._init_rnn(self.rnn.weight_hh_l0) 86 | self.rnn.bias_ih_l0.data.zero_() 87 | self.rnn.bias_hh_l0.data.zero_() 88 | 89 | def _init_rnn(self, weight): 90 | for w in weight.chunk(3, 0): 91 | init.xavier_uniform(w) 92 | 93 | def forward(self, q_emb, q_len, hidden=None): 94 | lengths = q_len.long() 95 | lens, indices = torch.sort(lengths, 0, True) 96 | 97 | packed_batch = pack_padded_sequence(q_emb[indices.cuda()], lens.tolist(), 
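All the encoders in layers.py repeat the same pattern that PyTorch-0.4-era pack_padded_sequence forces: sort by length, pack, run the GRU, then invert the sort to restore batch order. A standalone sketch of that round trip with toy sizes:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

rnn = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(3, 7, 8)                 # zero-padded batch
lengths = torch.tensor([5, 7, 3])

lens, idx = torch.sort(lengths, 0, descending=True)   # packing requires descending lengths
packed = pack_padded_sequence(x[idx], lens.tolist(), batch_first=True)
hs, _ = rnn(packed)
outputs, _ = pad_packed_sequence(hs, batch_first=True)

_, inv = torch.sort(idx, 0)              # the argsort of the argsort undoes the permutation
outputs = outputs[inv]                   # back to the original batch order
print(outputs.shape)                     # torch.Size([3, 7, 16])
```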
batch_first=True)
 98 |         if hidden is not None:
 99 |             N_, H_ = hidden.size()
100 |             hs, _ = self.rnn(packed_batch, hidden[indices.cuda()].view(1, N_, H_))
101 |         else:
102 |             hs, _ = self.rnn(packed_batch)
103 |         enc_sents, len_s = pad_packed_sequence(hs, batch_first=True)
104 | 
105 |         emb_h = self.tanh(self.lin(enc_sents.contiguous().view(enc_sents.size(0)*enc_sents.size(1),-1))) # Nwords * Emb
106 |         attend = self.att_w(emb_h).view(enc_sents.size(0),
107 |                                         enc_sents.size(1))
108 |         mask = self._list_to_bytemask(list(len_s))
109 |         all_att = self._masked_softmax(attend, mask)
110 |         try:
111 |             attended = all_att.unsqueeze(2).expand_as(enc_sents) * enc_sents
112 |         except:
113 |             embed()
114 |             raise
115 | 
116 |         _, _indices = torch.sort(indices, 0)
117 |         outputs = attended.sum(1,True).squeeze(1)[_indices.cuda()]
118 | 
119 |         return outputs
120 | 
121 |     def forward_att(self, q_emb, q_len, hidden=None):
122 |         lengths = q_len.long()
123 |         lens, indices = torch.sort(lengths, 0, True)
124 | 
125 |         packed_batch = pack_padded_sequence(q_emb[indices.cuda()], lens.tolist(), batch_first=True)
126 |         if hidden is not None:
127 |             N_, H_ = hidden.size()
128 |             hs, _ = self.rnn(packed_batch, hidden[indices.cuda()].view(1, N_, H_))
129 |         else:
130 |             hs, _ = self.rnn(packed_batch)
131 |         enc_sents, len_s = pad_packed_sequence(hs, batch_first=True)
132 | 
133 |         # note: attention is computed in length-sorted order, as in forward();
134 |         # the batch is restored to its original order once, at the return
135 | 
136 |         emb_h = self.tanh(self.lin(enc_sents.contiguous().view(enc_sents.size(0)*enc_sents.size(1),-1))) # Nwords * Emb
137 |         attend = self.att_w(emb_h).view(enc_sents.size(0),
138 |                                         enc_sents.size(1))
139 |         mask = self._list_to_bytemask(list(len_s))
140 |         all_att = self._masked_softmax(attend, mask)
141 |         try:
142 |             attended = all_att.unsqueeze(2).expand_as(enc_sents) * enc_sents
143 |         except:
144 |             embed()
145 |             raise
146 | 
147 |         _, _indices = torch.sort(indices, 0)
148 |         return attended.sum(1,True).squeeze(1)[_indices.cuda()], all_att[_indices.cuda()]
149 | 
150 |     def _list_to_bytemask(self,l):
151 |         mask = torch.FloatTensor(len(l),l[0]).fill_(1)
152 | 
153 |         for i,j in enumerate(l):
154 |             if j != l[0]: mask[i,j:l[0]] = 0
155 | 
156 |         return mask.cuda()
157 | 
158 |     def _masked_softmax(self,mat,mask):
159 |         exp = torch.exp(mat) * Variable(mask,requires_grad=False)
160 |         sum_exp = exp.sum(1,True)+0.0001
161 | 
162 |         return exp/sum_exp.expand_as(exp)
163 | 
164 | class Maxout(nn.Module):
165 |     def __init__(self, embedding_features, rnn_features, rnn_bidirectional=False):
166 |         super(Maxout, self).__init__()
167 |         self.bidirectional = rnn_bidirectional
168 | 
169 |         self.rnn = nn.GRU(input_size=embedding_features,
170 |                           hidden_size=rnn_features,
171 |                           num_layers=1, batch_first=True,
172 |                           bidirectional=False)  # note: rnn_bidirectional is accepted but not wired to the GRU here
173 | 
174 |         self.features = rnn_features
175 | 
176 |         self._init_rnn(self.rnn.weight_ih_l0)
177 |         self._init_rnn(self.rnn.weight_hh_l0)
178 |         self.rnn.bias_ih_l0.data.zero_()
179 |         self.rnn.bias_hh_l0.data.zero_()
180 | 
181 |     def _init_rnn(self, weight):
182 |         for w in weight.chunk(3, 0):
183 |             init.xavier_uniform(w)
184 | 
185 |     def forward(self, q_emb, q_len, hidden=None):
186 |         lengths = q_len.long()
187 |         lens, indices = torch.sort(lengths, 0, True)
188 | 
189 |         packed_batch = pack_padded_sequence(q_emb[indices.cuda()], lens.tolist(), batch_first=True)
190 |         if hidden is not None:
191 |             N_, H_ = hidden.size()
192 |             hs, _ = self.rnn(packed_batch, hidden[indices.cuda()].view(1, N_, H_))
193 |         else:
194 |             hs, _ = self.rnn(packed_batch)
195 |         outputs, _ = pad_packed_sequence(hs, batch_first=True)
196 | 
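The `_masked_softmax` used by the attention encoder above zeroes the weight of padded timesteps before renormalizing. A minimal CPU-only sketch of the same idea (the repo version builds the mask from the sorted lengths, adds a 1e-4 epsilon, and runs on CUDA):

```python
import torch

def masked_softmax(scores, lengths):
    # scores: (N, L) attention logits; lengths[i] = number of valid timesteps in row i
    N, L = scores.shape
    mask = torch.zeros(N, L)
    for i, l in enumerate(lengths):
        mask[i, :l] = 1
    exp = torch.exp(scores) * mask             # padded positions contribute nothing
    return exp / (exp.sum(1, keepdim=True) + 1e-4)

w = masked_softmax(torch.randn(2, 5), [5, 3])
print(w[1, 3:])   # the padded tail of row 1 has zero attention weight
```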
197 |         _, _indices = torch.sort(indices, 0)
198 |         outputs = outputs[_indices.cuda()]
199 |         N, L, H = outputs.size()
200 |         maxout = []
201 |         for batch_ind, length in enumerate(lengths.tolist()):
202 |             maxout.append( F.max_pool1d(outputs[batch_ind, :length, :].view(1, length, H).transpose(1, 2), length).squeeze().view(1, -1) )
203 | 
204 |         return torch.cat(maxout)
205 | 
-------------------------------------------------------------------------------- /loss.py: --------------------------------------------------------------------------------
 1 | import torch
 2 | import torch.nn as nn
 3 | import torch.nn.init as init
 4 | import torchvision.models as models
 5 | from torch.autograd import Variable
 6 | import torch.backends.cudnn as cudnn
 7 | import numpy as np
 8 | from collections import OrderedDict
 9 | import torch.nn.functional as F
10 | from IPython import embed
11 | 
12 | def cosine_sim(im, s):
13 |     return im.mm(s.t())
14 | 
15 | class GroupWiseContrastiveLoss(nn.Module):
16 |     def __init__(self, margin=0, measure=False, max_violation=False, norm=True):
17 |         super(GroupWiseContrastiveLoss, self).__init__()
18 |         self.margin = margin
19 |         if measure == 'order':
20 |             raise NotImplementedError('order similarity is not supported')
21 |         else:
22 |             self.sim = cosine_sim
23 | 
24 |         self.norm = norm
25 |         self.max_violation = max_violation
26 | 
27 |     def forward(self, im, s, num_clips, num_caps):
28 |         # compute image-sentence score matrix
29 |         scores = self.sim(im, s)
30 | 
31 |         # generate mask
32 |         N_ = len(num_clips)
33 |         scores_reduced = Variable(torch.zeros(N_, N_).cuda())
34 |         assert N_ == len(num_caps)
35 |         for i in range(N_):
36 |             clip_start, clip_end = sum(num_clips[0:i]), sum(num_clips[0:i+1])
37 |             for j in range(N_):
38 |                 cap_start, cap_end = sum(num_caps[0:j]), sum(num_caps[0:j+1])
39 |                 if self.max_violation:
40 |                     scores_reduced[i, j] = scores[clip_start:clip_end, cap_start:cap_end].max()
41 |                 else:
42 |                     scores_reduced[i, j] = scores[clip_start:clip_end, cap_start:cap_end].mean()
43 | 
44 |         diagonal = scores_reduced.diag().view(N_, 1)
45 |         d1 = diagonal.expand_as(scores_reduced)
46 |         d2 = diagonal.t().expand_as(scores_reduced)
47 | 
48 |         # compare every diagonal score to scores in its column
49 |         # caption retrieval
50 |         cost_s = (self.margin + scores_reduced - d1).clamp(min=0)
51 |         # compare every diagonal score to scores in its row
52 |         # image retrieval
53 |         cost_im = (self.margin + scores_reduced - d2).clamp(min=0)
54 | 
55 |         mask = torch.eye(scores_reduced.size(0)) > .5
56 |         I = Variable(mask)
57 |         if torch.cuda.is_available():
58 |             I = I.cuda()
59 |         cost_s = cost_s.masked_fill_(I, 0)
60 |         cost_im = cost_im.masked_fill_(I, 0)
61 | 
62 |         # keep the maximum violating negative for each query
63 |         if self.max_violation:
64 |             cost_s = cost_s.max(1)[0]
65 |             cost_im = cost_im.max(0)[0]
66 | 
67 |         #embed()
68 |         if self.norm:
69 |             return (cost_s.sum() + cost_im.sum()).div(len(num_clips) * len(num_caps))
70 |         else:
71 |             return cost_s.sum() + cost_im.sum()
72 |         #return cost_s.sum() + cost_im.sum()
73 | 
74 | class ContrastiveLoss(nn.Module):
75 |     def __init__(self, margin=0, measure=False, max_violation=False, norm=True):
76 |         super(ContrastiveLoss, self).__init__()
77 |         self.margin = margin
78 |         if measure == 'order':
79 |             raise NotImplementedError('order similarity is not supported')
80 |         else:
81 |             self.sim = cosine_sim
82 | 
83 |         self.norm = norm
84 |         self.max_violation = max_violation
85 | 
86 |     def forward(self, im, s):
87 |         # compute image-sentence score matrix
88 |         scores = self.sim(im, s)
89 |         diagonal = scores.diag().view(im.size(0), 1)
90 |         d1 = diagonal.expand_as(scores)
91 |         d2 =
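GroupWiseContrastiveLoss first collapses the dense clip-sentence score matrix into a video-paragraph matrix, taking the mean (or the max, under --max_violation) over each video x paragraph block, and only then applies the ranking loss. A sketch of that block reduction on a toy matrix (`reduce_blocks` is an illustrative helper, not the repo's API):

```python
import torch

def reduce_blocks(scores, num_clips, num_caps, use_max=False):
    N = len(num_clips)
    out = torch.zeros(N, N)
    for i in range(N):
        r0, r1 = sum(num_clips[:i]), sum(num_clips[:i + 1])
        for j in range(N):
            c0, c1 = sum(num_caps[:j]), sum(num_caps[:j + 1])
            block = scores[r0:r1, c0:c1]   # all clip-sentence pairs of video i vs. paragraph j
            out[i, j] = block.max() if use_max else block.mean()
    return out

scores = torch.randn(5, 5)   # 2 videos with 3 + 2 clips vs. 2 paragraphs with 3 + 2 sentences
print(reduce_blocks(scores, [3, 2], [3, 2]))   # a 2 x 2 video-paragraph score matrix
```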
diagonal.t().expand_as(scores) 92 | 93 | # compare every diagonal score to scores in its column 94 | # caption retrieval 95 | cost_s = (self.margin + scores - d1).clamp(min=0) 96 | # compare every diagonal score to scores in its row 97 | # image retrieval 98 | cost_im = (self.margin + scores - d2).clamp(min=0) 99 | 100 | # clear diagonals 101 | mask = torch.eye(scores.size(0)) > .5 102 | I = Variable(mask) 103 | if torch.cuda.is_available(): 104 | I = I.cuda() 105 | cost_s = cost_s.masked_fill_(I, 0) 106 | cost_im = cost_im.masked_fill_(I, 0) 107 | 108 | # keep the maximum violating negative for each query 109 | if self.max_violation: 110 | cost_s = cost_s.max(1)[0] 111 | cost_im = cost_im.max(0)[0] 112 | 113 | loss = cost_s.sum() + cost_im.sum() 114 | if self.norm: 115 | return (cost_s.sum() + cost_im.sum()).div(im.shape[0] * s.shape[0]) 116 | else: 117 | return cost_s.sum() + cost_im.sum() 118 | # return cost_s.sum() 119 | 120 | 121 | class CenterLoss(nn.Module): 122 | def __init__(self, margin=0, measure=False, max_violation=False, tune_center=False): 123 | super(CenterLoss, self).__init__() 124 | self.margin = margin 125 | self.sim = cosine_sim 126 | self.max_violation = max_violation 127 | self.tune_center=tune_center 128 | 129 | def forward_loss(self, im, vid, seg_num): 130 | # compute image-sentence score matrix 131 | if self.tune_center: 132 | pass 133 | else: 134 | vid = vid.detach() 135 | scores = self.sim(im, vid) 136 | 137 | middle_block = Variable(torch.zeros(scores.shape[0])).cuda() 138 | 139 | mask = torch.zeros(scores.shape) 140 | for i in range(len(seg_num)): 141 | cur_block = scores[sum(seg_num[0:i]):sum(seg_num[0:i+1]), i] 142 | middle_block[sum(seg_num[0:i]):sum(seg_num[0:i+1])] = cur_block 143 | mask[sum(seg_num[0:i]):sum(seg_num[0:i+1]), i] = 1 144 | middle_block_reshape = middle_block.view(middle_block.shape[0],1).expand_as(scores) 145 | 146 | # compare every diagonal score to scores in its column 147 | # caption retrieval 148 | cost_s = (self.margin + scores - middle_block_reshape).clamp(min=0) 149 | 150 | # clear diagonals 151 | mask = mask > .5 152 | I = Variable(mask) 153 | if torch.cuda.is_available(): 154 | I = I.cuda() 155 | cost_s = cost_s.masked_fill_(I, 0) 156 | 157 | # keep the maximum violating negative for each query 158 | if self.max_violation: 159 | cost_s = cost_s.max(1)[0] 160 | 161 | return cost_s.sum() 162 | 163 | def forward(self, clips, videos, caps, paragraphs, num_clips, num_caps): 164 | return self.forward_loss(clips, videos, num_clips) + self.forward_loss(caps, paragraphs, num_caps) 165 | 166 | -------------------------------------------------------------------------------- /model.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.init as init 4 | import torchvision.models as models 5 | from torch.autograd import Variable 6 | from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence 7 | import torch.backends.cudnn as cudnn 8 | from torch.nn.utils.clip_grad import clip_grad_norm 9 | import numpy as np 10 | from collections import OrderedDict 11 | import torch.nn.functional as F 12 | from IPython import embed 13 | 14 | from layers import * 15 | from loss import * 16 | from decoder.loss import * 17 | from decoder.model import * 18 | from decoder.layers import * 19 | import time 20 | 21 | class EncoderImage(nn.Module): 22 | def __init__(self, img_dim, embed_size, bidirectional=False, rnn_type='maxout'): 23 | super(EncoderImage, 
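ContrastiveLoss above is the bidirectional hinge ranking loss of VSE++: each off-diagonal score must stay at least `margin` below the diagonal (matching-pair) score of its row and column. A worked toy example with illustrative numbers:

```python
import torch

scores = torch.tensor([[0.9, 0.2],
                       [0.6, 0.7]])   # rows: videos, cols: paragraphs; diagonal = true pairs
margin = 0.2
d = scores.diag().view(-1, 1)

cost_s = (margin + scores - d).clamp(min=0)       # video -> text direction
cost_im = (margin + scores - d.t()).clamp(min=0)  # text -> video direction
eye = torch.eye(2) > .5
cost_s = cost_s.masked_fill(eye, 0)
cost_im = cost_im.masked_fill(eye, 0)
print(cost_s.sum() + cost_im.sum())  # tensor(0.1000): only 0.6 vs. the 0.7 diagonal violates the margin
```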
self).__init__() 24 | self.embed_size = embed_size 25 | self.bidirectional = bidirectional 26 | 27 | if rnn_type == 'attention': 28 | self.rnn = Attention(img_dim, embed_size, rnn_bidirectional=bidirectional) 29 | elif rnn_type == 'seq2seq': 30 | self.rnn = Seq2Seq(img_dim, embed_size, rnn_bidirectional=bidirectional) 31 | elif rnn_type == 'maxout': 32 | self.rnn = Maxout(img_dim, embed_size, rnn_bidirectional=bidirectional) 33 | else: 34 | raise ValueError('Unsupported RNN type') 35 | 36 | def forward(self, x, lengths): 37 | """Extract image feature vectors.""" 38 | outputs = self.rnn(x, lengths) 39 | 40 | # normalization in the joint embedding space 41 | # return F.normalize(outputs) 42 | return outputs 43 | 44 | class EncoderSequence(nn.Module): 45 | def __init__(self, img_dim, embed_size, bidirectional=False, rnn_type='maxout'): 46 | super(EncoderSequence, self).__init__() 47 | self.embed_size = embed_size 48 | self.bidirectional = bidirectional 49 | 50 | if rnn_type == 'attention': 51 | self.rnn = Attention(img_dim, embed_size, rnn_bidirectional=bidirectional) 52 | elif rnn_type == 'seq2seq': 53 | self.rnn = Seq2Seq(img_dim, embed_size, rnn_bidirectional=bidirectional) 54 | elif rnn_type == 'maxout': 55 | self.rnn = Maxout(img_dim, embed_size, rnn_bidirectional=bidirectional) 56 | else: 57 | raise ValueError('Unsupported RNN type') 58 | 59 | def forward(self, x, lengths, hidden=None): 60 | """Extract image feature vectors.""" 61 | outputs = self.rnn(x, lengths, hidden) 62 | 63 | # normalization in the joint embedding space 64 | # return F.normalize(outputs) 65 | return outputs 66 | 67 | class EncoderText(nn.Module): 68 | def __init__(self, vocab_size, word_dim, embed_size, 69 | bidirectional=False, rnn_type='maxout', data_name='anet_precomp'): 70 | super(EncoderText, self).__init__() 71 | self.embed_size = embed_size 72 | self.bidirectional = bidirectional 73 | 74 | # word embedding 75 | self.embed = nn.Embedding(vocab_size, word_dim) 76 | 77 | # caption embedding 78 | if rnn_type == 'attention': 79 | self.rnn = Attention(word_dim, embed_size, rnn_bidirectional=bidirectional) 80 | elif rnn_type == 'seq2seq': 81 | self.rnn = Seq2Seq(word_dim, embed_size, rnn_bidirectional=bidirectional) 82 | elif rnn_type == 'maxout': 83 | self.rnn = Maxout(word_dim, embed_size, rnn_bidirectional=bidirectional) 84 | else: 85 | raise ValueError('Unsupported RNN type') 86 | 87 | self.init_weights(data_name) 88 | 89 | def init_weights(self, data_name): 90 | self.embed.weight.data = torch.from_numpy(np.load('vocab/{}_w2v_total.npz'.format(data_name))['arr_0'].astype(float)).float() 91 | 92 | def forward(self, x, lengths): 93 | # Embed word ids to vectors 94 | cap_emb = self.embed(x) 95 | outputs = self.rnn(cap_emb, lengths) 96 | 97 | # normalization in the joint embedding space 98 | # return F.normalize(outputs), cap_emb 99 | return outputs, cap_emb 100 | 101 | 102 | class VSE(object): 103 | def __init__(self, opt): 104 | self.norm = opt.norm 105 | self.grad_clip = opt.grad_clip 106 | self.clip_enc = EncoderImage(opt.img_dim, opt.img_first_size, 107 | rnn_type=opt.rnn_type) 108 | self.txt_enc = EncoderText(opt.vocab_size, opt.word_dim, opt.cap_first_size, 109 | rnn_type=opt.rnn_type, data_name = opt.data_name) 110 | self.vid_seq_enc = EncoderSequence(opt.img_first_size, opt.embed_size, 111 | rnn_type=opt.rnn_type) 112 | self.txt_seq_enc = EncoderSequence(opt.cap_first_size, opt.embed_size, 113 | rnn_type=opt.rnn_type) 114 | 115 | if torch.cuda.is_available(): 116 | self.clip_enc.cuda() 117 | 
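EncoderText.init_weights above copies the GloVe matrix that gen_vocab.py saved (vocab/{data_name}_w2v_total.npz, stored under numpy's default key 'arr_0') straight into the embedding table. A sketch of that round trip with a tiny random matrix standing in for GloVe (the /tmp path is illustrative):

```python
import numpy as np
import torch
import torch.nn as nn

w2v = np.random.randn(10, 300).astype(np.float32)   # (vocab_size, word_dim) stand-in
np.savez('/tmp/toy_w2v.npz', w2v)                   # positional arrays are stored as 'arr_0'

embed = nn.Embedding(10, 300)
embed.weight.data = torch.from_numpy(
    np.load('/tmp/toy_w2v.npz')['arr_0'].astype(np.float32))
print(embed(torch.tensor([1, 4, 7])).shape)         # torch.Size([3, 300])
```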
self.txt_enc.cuda() 118 | self.vid_seq_enc.cuda() 119 | self.txt_seq_enc.cuda() 120 | cudnn.benchmark = True 121 | 122 | # Loss and Optimizer 123 | self.criterion = ContrastiveLoss(margin=opt.margin, 124 | measure=opt.measure, 125 | max_violation=opt.max_violation, norm=self.norm) 126 | 127 | self.weak_criterion = GroupWiseContrastiveLoss(margin=opt.margin, 128 | measure=opt.measure, 129 | max_violation=opt.max_violation, norm=self.norm) 130 | 131 | 132 | params = list(self.txt_enc.parameters()) 133 | params += list(self.clip_enc.parameters()) 134 | params += list(self.vid_seq_enc.parameters()) 135 | params += list(self.txt_seq_enc.parameters()) 136 | 137 | 138 | if opt.reconstruct_loss: 139 | self.vid_seq_dec = DecoderSequence(opt.embed_size, opt.img_first_size, 140 | rnn_type=opt.decode_rnn_type) 141 | self.txt_seq_dec = DecoderSequence(opt.embed_size, opt.cap_first_size, 142 | rnn_type=opt.decode_rnn_type) 143 | self.vid_seq_dec.cuda() 144 | self.txt_seq_dec.cuda() 145 | 146 | self.criterion_Euclid_Distance = EuclideanLoss(norm=self.norm) 147 | 148 | params += list(self.vid_seq_dec.parameters()) 149 | params += list(self.txt_seq_dec.parameters()) 150 | 151 | if opt.lowest_reconstruct_loss: 152 | self.clip_seq_dec = DecoderSequence(opt.embed_size, opt.img_dim, rnn_type=opt.decode_rnn_type) 153 | self.sent_seq_dec = DecoderSequence(opt.embed_size, opt.word_dim, rnn_type=opt.decode_rnn_type) 154 | self.clip_seq_dec.cuda() 155 | self.sent_seq_dec.cuda() 156 | 157 | params += list(self.clip_seq_dec.parameters()) 158 | params += list(self.sent_seq_dec.parameters()) 159 | 160 | self.params = params 161 | 162 | self.optimizer = torch.optim.Adam(params, lr=opt.learning_rate) 163 | 164 | self.Eiters = 0 165 | 166 | def state_dict(self, opt): 167 | state_dict = [self.clip_enc.state_dict(), self.txt_enc.state_dict(), \ 168 | self.vid_seq_enc.state_dict(), self.txt_seq_enc.state_dict()] 169 | if opt.reconstruct_loss: 170 | state_dict = [self.clip_enc.state_dict(), self.txt_enc.state_dict(), \ 171 | self.vid_seq_enc.state_dict(), self.txt_seq_enc.state_dict(), \ 172 | self.vid_seq_dec.state_dict(), self.txt_seq_dec.state_dict()] 173 | if opt.lowest_reconstruct_loss: 174 | state_dict = [self.clip_enc.state_dict(), self.txt_enc.state_dict(), \ 175 | self.vid_seq_enc.state_dict(), self.txt_seq_enc.state_dict(), \ 176 | self.vid_seq_dec.state_dict(), self.txt_seq_dec.state_dict(), \ 177 | self.clip_seq_dec.state_dict(), self.sent_seq_dec.state_dict()] 178 | 179 | return state_dict 180 | 181 | def load_state_dict(self, state_dict, opt): 182 | self.clip_enc.load_state_dict(state_dict[0]) 183 | self.txt_enc.load_state_dict(state_dict[1]) 184 | self.vid_seq_enc.load_state_dict(state_dict[2]) 185 | self.txt_seq_enc.load_state_dict(state_dict[3]) 186 | if opt.reconstruct_loss: 187 | self.vid_seq_dec.load_state_dict(state_dict[4]) 188 | self.txt_seq_dec.load_state_dict(state_dict[5]) 189 | if opt.lowest_reconstruct_loss: 190 | self.clip_seq_dec.load_state_dict(state_dict[6]) 191 | self.sent_seq_dec.load_state_dict(state_dict[7]) 192 | 193 | def train_start(self, opt): 194 | """switch to train mode 195 | """ 196 | self.clip_enc.train() 197 | self.txt_enc.train() 198 | self.vid_seq_enc.train() 199 | self.txt_seq_enc.train() 200 | if opt.reconstruct_loss: 201 | self.vid_seq_dec.train() 202 | self.txt_seq_dec.train() 203 | if opt.lowest_reconstruct_loss: 204 | self.clip_seq_dec.train() 205 | self.sent_seq_dec.train() 206 | 207 | def val_start(self, opt): 208 | """switch to evaluate mode 209 | """ 210 | 
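Note that VSE.state_dict here returns a plain Python list of per-module state dicts whose length and order depend on --reconstruct_loss / --lowest_reconstruct_loss, and load_state_dict indexes into that list the same way, so a checkpoint must be reloaded with the flags it was trained with. A toy sketch of the convention (stand-in modules and path):

```python
import torch
import torch.nn as nn

enc_a, enc_b = nn.Linear(4, 4), nn.Linear(4, 4)

# the repo packs module states into an ordered list, not one flat dict
state = [enc_a.state_dict(), enc_b.state_dict()]
torch.save({'model': state, 'epoch': 3}, '/tmp/toy_ckpt.pth.tar')

ckpt = torch.load('/tmp/toy_ckpt.pth.tar')
enc_a.load_state_dict(ckpt['model'][0])   # list indices must match the saving order,
enc_b.load_state_dict(ckpt['model'][1])   # i.e. the same loss flags at train and eval time
```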
self.clip_enc.eval() 211 | self.txt_enc.eval() 212 | self.vid_seq_enc.eval() 213 | self.txt_seq_enc.eval() 214 | if opt.reconstruct_loss: 215 | self.vid_seq_dec.eval() 216 | self.txt_seq_dec.eval() 217 | if opt.lowest_reconstruct_loss: 218 | self.clip_seq_dec.eval() 219 | self.sent_seq_dec.eval() 220 | 221 | 222 | def forward_emb(self, clips, captions, lengths_clip, lengths_cap, return_word=False): 223 | clips = Variable(clips) 224 | captions = Variable(captions) 225 | if torch.cuda.is_available(): 226 | clips = clips.cuda() 227 | captions = captions.cuda() 228 | 229 | # Forward 230 | clip_emb = self.clip_enc(clips, Variable(lengths_clip)) 231 | cap_emb, word = self.txt_enc(captions, Variable(lengths_cap)) 232 | 233 | if return_word: 234 | return clip_emb, cap_emb, word 235 | else: 236 | return clip_emb, cap_emb 237 | 238 | def structure_emb(self, clip_emb, cap_emb, num_clips, num_caps, vid_context=None, para_context=None): 239 | img_reshape_emb = Variable(torch.zeros(len(num_clips), max(num_clips), clip_emb.shape[1])).cuda() 240 | cap_reshape_emb = Variable(torch.zeros(len(num_caps), max(num_caps), cap_emb.shape[1])).cuda() 241 | 242 | cur_displace = 0 243 | for i, end_place in enumerate(num_clips): 244 | img_reshape_emb[i, 0:end_place, :] = clip_emb[cur_displace : cur_displace + end_place, :] 245 | cur_displace = cur_displace + end_place 246 | 247 | cur_displace = 0 248 | for i, end_place in enumerate(num_caps): 249 | cap_reshape_emb[i, 0:end_place, :] = cap_emb[cur_displace : cur_displace + end_place, :] 250 | cur_displace = cur_displace + end_place 251 | 252 | vid_emb = self.vid_seq_enc(img_reshape_emb, Variable(torch.Tensor(num_clips).long()), vid_context) 253 | para_emb = self.txt_seq_enc(cap_reshape_emb, Variable(torch.Tensor(num_caps).long()), para_context) 254 | 255 | return vid_emb, para_emb 256 | 257 | def reconstruct_emb(self, vid_emb, para_emb, num_clips, num_caps): 258 | vid_reshape_emb = Variable(torch.zeros(len(num_clips), max(num_clips), vid_emb.shape[1])).cuda() 259 | para_reshape_emb = Variable(torch.zeros(len(num_caps), max(num_caps), para_emb.shape[1])).cuda() 260 | 261 | for i, end_place in enumerate(num_clips): 262 | vid_reshape_emb[i, :end_place, :] = vid_emb[i].expand(1, end_place, vid_emb.shape[1]) 263 | 264 | for i, end_place in enumerate(num_caps): 265 | para_reshape_emb[i, :end_place, :] = para_emb[i,:].expand(1, end_place, para_emb.shape[1]) 266 | 267 | clip_emb = self.vid_seq_dec(vid_reshape_emb, Variable(torch.Tensor(num_clips))) 268 | sent_emb = self.txt_seq_dec(para_reshape_emb, Variable(torch.Tensor(num_caps))) 269 | 270 | return clip_emb, sent_emb 271 | 272 | def lowest_reconstruct_emb(self, vid_emb, para_emb, num_clips, num_caps): 273 | vid_reshape_emb = Variable(torch.zeros(len(num_clips), max(num_clips), vid_emb.shape[1])).cuda() 274 | para_reshape_emb = Variable(torch.zeros(len(num_caps), max(num_caps), para_emb.shape[1])).cuda() 275 | 276 | for i, end_place in enumerate(num_clips): 277 | vid_reshape_emb[i, :end_place, :] = vid_emb[i].view(1,1,-1).expand(1, end_place, vid_emb.shape[1]) 278 | 279 | for i, end_place in enumerate(num_caps): 280 | para_reshape_emb[i, :end_place, :] = para_emb[i,:].view(1,1,-1).expand(1, end_place, para_emb.shape[1]) 281 | 282 | frame_emb = self.clip_seq_dec(vid_reshape_emb, Variable(torch.Tensor(num_clips))) 283 | word_emb = self.sent_seq_dec(para_reshape_emb, Variable(torch.Tensor(num_caps))) 284 | 285 | return frame_emb, word_emb 286 | 287 | def forward_loss(self, clip_emb, cap_emb, name, **kwargs): 288 | """Compute 
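structure_emb regroups the flat stack of clip embeddings (all clips of the batch on dim 0) into a zero-padded (num_videos, max_clips, D) tensor before the second-level encoder runs; reconstruct_emb then broadcasts each video embedding back over its clip slots for the decoder. A sketch of the regrouping step (`regroup` is an illustrative helper):

```python
import torch

def regroup(flat, nums):
    # flat: (sum(nums), D) low-level embeddings; nums[i] = items belonging to sample i
    out = torch.zeros(len(nums), max(nums), flat.shape[1])
    pos = 0
    for i, n in enumerate(nums):
        out[i, :n, :] = flat[pos:pos + n, :]
        pos += n
    return out

clip_emb = torch.randn(5, 8)              # 5 clips total, from videos with 3 and 2 clips
print(regroup(clip_emb, [3, 2]).shape)    # torch.Size([2, 3, 8])
```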
the loss given pairs of image and caption embeddings 289 | """ 290 | loss = self.criterion(clip_emb, cap_emb) 291 | self.logger.update('Le'+name, loss.item(), clip_emb.size(0)) 292 | return loss 293 | 294 | def forward_weak_loss(self, clip_emb, cap_emb, num_clips, num_caps, name, **kwargs): 295 | """Compute the loss given pairs of image and caption embeddings 296 | """ 297 | loss = self.weak_criterion(clip_emb, cap_emb, num_clips, num_caps) 298 | self.logger.update('Le'+name, loss.item(), clip_emb.size(0)) 299 | return loss 300 | 301 | def forward_reconstruct_loss(self, clip_recon, clip_emb, name, **kwargs): 302 | """Compute the loss given pairs of image and caption embeddings 303 | """ 304 | loss = self.criterion_Euclid_Distance(clip_recon, clip_emb) 305 | self.logger.update('Le'+name, loss.item(), clip_emb.size(0)) 306 | return loss 307 | 308 | 309 | def train_emb(self, opts, clips, captions, videos, paragraphs, 310 | lengths_clip, lengths_cap, lengths_video, lengths_paragraph, 311 | num_clips, num_caps, ind, cur_vid, *args): 312 | """One training step given clips and captions. 313 | """ 314 | self.Eiters += 1 315 | self.logger.update('Eit', self.Eiters) 316 | self.logger.update('lr', self.optimizer.param_groups[0]['lr']) 317 | 318 | # compute the embeddings 319 | clip_emb, cap_emb, word = self.forward_emb(clips, captions, lengths_clip, lengths_cap, return_word=True) 320 | vid_context, para_context = self.forward_emb(videos, paragraphs, lengths_video, lengths_paragraph) 321 | vid_emb, para_emb = self.structure_emb(clip_emb, cap_emb, num_clips, num_caps, vid_context, para_context) 322 | 323 | if opts.reconstruct_loss: 324 | clip_recon, cap_recon = self.reconstruct_emb(vid_emb, para_emb, num_clips, num_caps) 325 | if opts.lowest_reconstruct_loss: 326 | frame_recon, sent_recon = self.lowest_reconstruct_emb(clip_recon, cap_recon, lengths_clip.numpy(), lengths_cap.numpy()) 327 | 328 | # measure accuracy and record loss 329 | self.optimizer.zero_grad() 330 | 331 | loss = 0 332 | 333 | loss_1 = self.forward_loss(F.normalize(vid_emb), F.normalize(para_emb), '_vid') 334 | loss_3 = self.forward_loss(F.normalize(vid_context), F.normalize(para_context), '_ctx_low_lvel') 335 | loss_5 = (self.forward_loss(F.normalize(vid_emb), F.normalize(vid_emb), '_vid_inloss') + self.forward_loss(F.normalize(para_emb), F.normalize(para_emb), '_para_inloss'))/2 336 | loss = loss + loss_1 + loss_3 + loss_5 337 | 338 | if opts.low_level_loss: 339 | if opts.weak_low_level_loss: 340 | loss_2 = self.forward_weak_loss(F.normalize(clip_emb), F.normalize(cap_emb), num_clips, num_caps, '_wlow_lvel') 341 | else: 342 | loss_2 = self.forward_loss(F.normalize(clip_emb), F.normalize(cap_emb), '_low_lvel') 343 | loss_6 = (self.forward_loss(F.normalize(clip_emb), F.normalize(clip_emb), '_clip_inloss') + self.forward_loss(F.normalize(cap_emb), F.normalize(cap_emb), '_cap_inloss'))/2 344 | loss = loss + loss_2 + loss_6 345 | 346 | if opts.reconstruct_loss: 347 | loss_recon = (self.forward_reconstruct_loss(clip_recon, clip_emb.detach(), '_clip_recon') + self.forward_reconstruct_loss(cap_recon, cap_emb.detach(), '_cap_recon')) 348 | loss = loss + loss_recon * opts.weight_recon 349 | 350 | if opts.lowest_reconstruct_loss: 351 | clips_var = torch.zeros(lengths_clip.sum().item(), opts.img_dim) 352 | curpos = 0 353 | for i in range(clips.shape[0]): 354 | clips_var[curpos: curpos+lengths_clip[i],:] = clips[i,0:lengths_clip[i],:] 355 | curpos = curpos + lengths_clip[i] 356 | 357 | words_var = Variable(torch.zeros(lengths_cap.sum().item(), 
300)).cuda() 358 | curpos = 0 359 | for i in range(captions.shape[0]): 360 | words_var[curpos: curpos+lengths_cap[i],:] = word[i,0:lengths_cap[i],:] 361 | curpos = curpos + lengths_cap[i] 362 | 363 | loss_lowest_recon = self.forward_reconstruct_loss(frame_recon, Variable(clips_var).cuda().detach(), '_reconstruct_frame_hier') + self.forward_reconstruct_loss(sent_recon, words_var.detach(), '_reconstruct_word_hier') 364 | loss = loss + loss_lowest_recon * opts.lowest_weight_recon 365 | 366 | # compute gradient and do SGD step 367 | loss.backward() 368 | if self.grad_clip > 0: clip_grad_norm(self.params, self.grad_clip) 369 | self.optimizer.step() 370 | -------------------------------------------------------------------------------- /train.py: -------------------------------------------------------------------------------- 1 | import pickle 2 | import os 3 | import time 4 | import shutil 5 | 6 | import torch 7 | 8 | from anet_vocab import Vocabulary 9 | from model import VSE 10 | from evaluation import i2t, t2i, AverageMeter, LogCollector, encode_data, LogReporter 11 | 12 | import logging 13 | import tensorboard_logger as tb_logger 14 | 15 | import argparse 16 | 17 | from IPython import embed 18 | 19 | parser = argparse.ArgumentParser() 20 | parser.add_argument('--data_path', default='/data2/bwzhang/anet_img/captions/', 21 | help='path to datasets') 22 | parser.add_argument('data_name', default='anet_precomp', 23 | help='anet_precomp') 24 | parser.add_argument('--feat_name', default='c3d', 25 | help='c3d or icep') 26 | parser.add_argument('--vocab_path', default='./vocab/', 27 | help='Path to saved vocabulary pickle files.') 28 | parser.add_argument('--margin', default=0.2, type=float, 29 | help='Rank loss margin.') 30 | parser.add_argument('--num_epochs', default=30, type=int, 31 | help='Number of training epochs.') 32 | parser.add_argument('--batch_size', default=64, type=int, 33 | help='Size of a training mini-batch.') 34 | parser.add_argument('--word_dim', default=300, type=int, 35 | help='Dimensionality of the word embedding.') 36 | parser.add_argument('--embed_size', default=1024, type=int, 37 | help='Dimensionality of the joint embedding.') 38 | parser.add_argument('--grad_clip', default=0., type=float, 39 | help='Gradient clipping threshold.') 40 | parser.add_argument('--num_layers', default=1, type=int, 41 | help='Number of GRU layers.') 42 | parser.add_argument('--learning_rate', default=.001, type=float, 43 | help='Initial learning rate.') 44 | parser.add_argument('--lr_update', default=10, type=int, 45 | help='Number of epochs to update the learning rate.') 46 | parser.add_argument('--workers', default=10, type=int, 47 | help='Number of data loader workers.') 48 | parser.add_argument('--log_step', default=10, type=int, 49 | help='Number of steps to print and record the log.') 50 | parser.add_argument('--val_step', default=500, type=int, 51 | help='Number of steps to run validation.') 52 | parser.add_argument('--logger_name', default='runs/runX', 53 | help='Path to save the model and Tensorboard log.') 54 | parser.add_argument('--resume', default='', type=str, metavar='PATH', 55 | help='path to latest checkpoint (default: none)') 56 | parser.add_argument('--max_violation', action='store_true', 57 | help='Use max instead of sum in the rank loss.') 58 | parser.add_argument('--img_dim', default=500, type=int, 59 | help='Dimensionality of the image embedding.') 60 | parser.add_argument('--measure', default='cosine', 61 | help='Similarity measure used (cosine|order)') 62 | 
parser.add_argument('--use_abs', action='store_true', 63 | help='Take the absolute value of embedding vectors.') 64 | parser.add_argument('--no_imgnorm', action='store_true', 65 | help='Do not normalize the image embeddings.') 66 | parser.add_argument('--gpu_id', default=0, type=int, 67 | help='GPU to use.') 68 | parser.add_argument('--rnn_type', default='maxout', choices=['maxout', 'seq2seq', 'attention'], 69 | help='Type of recurrent model.') 70 | parser.add_argument('--img_first_size', default=1024, type=int, 71 | help='first img layer emb size') 72 | parser.add_argument('--cap_first_size', default=1024, type=int, 73 | help='first cap layer emb size') 74 | parser.add_argument('--img_first_dropout', default=0, type=float, 75 | help='first img layer emb size') 76 | parser.add_argument('--cap_first_dropout', default=0, type=float, 77 | help='first cap layer emb size') 78 | 79 | parser.add_argument('--weight_recon', default=0.0005, type=float) 80 | parser.add_argument('--lowest_weight_recon', default=0.0001, type=float) 81 | parser.add_argument('--decode_rnn_type', default='seq2seq') 82 | 83 | parser.add_argument('--low_level_loss', action='store_true') 84 | parser.add_argument('--weak_low_level_loss', action='store_true') 85 | parser.add_argument('--reconstruct_loss', action='store_true') 86 | parser.add_argument('--lowest_reconstruct_loss', action='store_true') 87 | parser.add_argument('--norm', action='store_true') 88 | parser.add_argument('--eval_only', action='store_true') 89 | 90 | opt = parser.parse_args() 91 | print (opt) 92 | 93 | if opt.data_name == 'anet_precomp': 94 | import activity_net.data as data 95 | if opt.data_name == 'didemo_precomp': 96 | import didemo_dev.data as data 97 | 98 | def main(): 99 | # Hyper Parameters 100 | 101 | torch.cuda.set_device(opt.gpu_id) 102 | 103 | tb_logger.configure(opt.logger_name, flush_secs=5) 104 | logging.basicConfig(format='%(asctime)s %(message)s', level=logging.INFO, filename=opt.logger_name+'/log.log') 105 | console = logging.StreamHandler() 106 | console.setLevel(logging.INFO) 107 | formatter = logging.Formatter('%(asctime)s %(message)s') 108 | console.setFormatter(formatter) 109 | logging.getLogger('').addHandler(console) 110 | 111 | logging.info(opt) 112 | 113 | # Load Vocabulary Wrapper 114 | vocab_path = os.path.join(opt.vocab_path, '%s_vocab_total.pkl' % opt.data_name) 115 | print (vocab_path) 116 | vocab = pickle.load(open(vocab_path, 'rb')) 117 | opt.vocab_size = len(vocab) 118 | 119 | # Load data loaders 120 | train_loader, val_loader = data.get_loaders( 121 | opt.data_name, vocab, opt.batch_size, opt.workers, opt) 122 | 123 | # Construct the model 124 | model = VSE(opt) 125 | 126 | print('Print out models:') 127 | print(model.clip_enc) 128 | print(model.txt_enc) 129 | print(model.vid_seq_enc) 130 | print(model.txt_seq_enc) 131 | 132 | start_epoch = 0 133 | # optionally resume from a checkpoint 134 | if opt.resume: 135 | if os.path.isfile(opt.resume): 136 | print("=> loading checkpoint '{}'".format(opt.resume)) 137 | checkpoint = torch.load(opt.resume) 138 | start_epoch = checkpoint['epoch'] 139 | best_rsum = checkpoint['best_rsum'] 140 | model.load_state_dict(checkpoint['model'], opt) 141 | # Eiters is used to show logs as the continuation of another 142 | # training 143 | model.Eiters = checkpoint['Eiters'] 144 | print("=> loaded checkpoint '{}' (epoch {}, best_rsum {})" 145 | .format(opt.resume, start_epoch, best_rsum)) 146 | validate(opt, val_loader, model) 147 | if opt.eval_only: 148 | return 149 | else: 150 | print("=> no 
checkpoint found at '{}'".format(opt.resume)) 151 | 152 | # Train the Model 153 | best_rsum = 0 154 | for epoch in range(start_epoch, opt.num_epochs): 155 | adjust_learning_rate(opt, model.optimizer, epoch) 156 | 157 | # train for one epoch 158 | train(opt, train_loader, model, epoch, val_loader) 159 | 160 | # evaluate on validation set 161 | rsum = validate(opt, val_loader, model) 162 | 163 | # remember best R@ sum and save checkpoint 164 | is_best = rsum > best_rsum 165 | best_rsum = max(rsum, best_rsum) 166 | save_checkpoint({ 167 | 'epoch': epoch + 1, 168 | 'model': model.state_dict(opt), 169 | 'best_rsum': best_rsum, 170 | 'opt': opt, 171 | 'Eiters': model.Eiters, 172 | }, is_best, prefix=opt.logger_name + '/', epoch=epoch) 173 | 174 | 175 | def train(opt, train_loader, model, epoch, val_loader): 176 | # average meters to record the training statistics 177 | batch_time = AverageMeter() 178 | data_time = AverageMeter() 179 | train_logger = LogCollector() 180 | 181 | # switch to train mode 182 | model.train_start(opt) 183 | 184 | end = time.time() 185 | for i, train_data in enumerate(train_loader): 186 | # measure data loading time 187 | data_time.update(time.time() - end) 188 | 189 | # make sure train logger is used 190 | model.logger = train_logger 191 | 192 | # Update the model 193 | model.train_emb(opt, *train_data) 194 | 195 | # measure elapsed time 196 | batch_time.update(time.time() - end) 197 | end = time.time() 198 | 199 | # Print log info 200 | if model.Eiters % opt.log_step == 0: 201 | logging.info( 202 | 'Epoch: [{0}][{1}/{2}]\t' 203 | '{e_log}\t' 204 | 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t' 205 | 'Data {data_time.val:.3f} ({data_time.avg:.3f})\t' 206 | .format( 207 | epoch, i, len(train_loader), batch_time=batch_time, 208 | data_time=data_time, e_log=str(model.logger))) 209 | 210 | # Record logs in tensorboard 211 | tb_logger.log_value('epoch', epoch, step=model.Eiters) 212 | tb_logger.log_value('step', i, step=model.Eiters) 213 | tb_logger.log_value('batch_time', batch_time.val, step=model.Eiters) 214 | tb_logger.log_value('data_time', data_time.val, step=model.Eiters) 215 | model.logger.tb_log(tb_logger, step=model.Eiters) 216 | 217 | # validate at every val_step 218 | if model.Eiters % opt.val_step == 0: 219 | validate(opt, val_loader, model) 220 | model.train_start(opt) 221 | 222 | 223 | def validate(opt, val_loader, model): 224 | # compute the encoding for all the validation images and captions 225 | vid_seq_embs, para_seq_embs, clip_embs, cap_embs, _, _, num_clips, cur_vid_total = encode_data( 226 | opt, model, val_loader, opt.log_step, logging.info, contextual_model=True) 227 | 228 | # caption retrieval 229 | # vid_clip_rep, _, _ = i2t(clip_embs, cap_embs, measure=opt.measure) 230 | # image retrieval 231 | # cap_clip_rep, _, _ = t2i(clip_embs, cap_embs, measure=opt.measure) 232 | 233 | # caption retrieval 234 | vid_seq_rep, top1_v2p, rank_vid_v2p = i2t(vid_seq_embs, para_seq_embs, measure=opt.measure) 235 | # image retrieval 236 | para_seq_rep, top1_p2v, rank_para_p2v = t2i(vid_seq_embs, para_seq_embs, measure=opt.measure) 237 | 238 | currscore = vid_seq_rep['sum'] + para_seq_rep['sum'] 239 | 240 | # logging.info("Clip to Sent: %.1f, %.1f, %.1f, %.1f, %.1f" % 241 | # (vid_clip_rep['r1'], vid_clip_rep['r5'], vid_clip_rep['r10'], vid_clip_rep['medr'], vid_clip_rep['meanr'])) 242 | # logging.info("Sent to Clip: %.1f, %.1f, %.1f, %.1f, %.1f" % 243 | # (cap_clip_rep['r1'], cap_clip_rep['r5'], cap_clip_rep['r10'], cap_clip_rep['medr'], cap_clip_rep['meanr'])) 
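For reference, adjust_learning_rate at the bottom of train.py is plain step decay: the learning rate drops by 10x every --lr_update epochs. With the defaults (lr 0.001, lr_update 10) the schedule looks like this:

```python
# lr = base_lr * 0.1 ** (epoch // lr_update), as in adjust_learning_rate
base_lr, lr_update = 0.001, 10
for epoch in [0, 9, 10, 20, 29]:
    print(epoch, base_lr * (0.1 ** (epoch // lr_update)))
# 0 0.001 | 9 0.001 | 10 0.0001 | 20 1e-05 | 29 1e-05
```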
244 | logging.info("Video to Paragraph: %.1f, %.1f, %.1f, %.1f, %.1f" % 245 | (vid_seq_rep['r1'], vid_seq_rep['r5'], vid_seq_rep['r10'], vid_seq_rep['medr'], vid_seq_rep['meanr'])) 246 | logging.info("Paragraph to Video: %.1f, %.1f, %.1f, %.1f, %.1f" % 247 | (para_seq_rep['r1'], para_seq_rep['r5'], para_seq_rep['r10'], para_seq_rep['medr'], para_seq_rep['meanr'])) 248 | logging.info("Currscore: %.1f" % (currscore)) 249 | 250 | # record metrics in tensorboard 251 | # LogReporter(tb_logger, vid_clip_rep, model.Eiters, 'clip') 252 | # LogReporter(tb_logger, cap_clip_rep, model.Eiters, 'clipi') 253 | LogReporter(tb_logger, vid_seq_rep, model.Eiters, 'seq') 254 | LogReporter(tb_logger, para_seq_rep, model.Eiters, 'seqi') 255 | tb_logger.log_value('rsum', currscore, step=model.Eiters) 256 | 257 | return currscore 258 | 259 | def save_checkpoint(state, is_best, filename='checkpoint.pth.tar', epoch=0, prefix=''): 260 | torch.save(state, prefix + str(epoch) + filename) 261 | if is_best: 262 | shutil.copyfile(prefix + str(epoch) + filename, prefix + 'model_best.pth.tar') 263 | 264 | 265 | def adjust_learning_rate(opt, optimizer, epoch): 266 | """Sets the learning rate to the initial LR 267 | decayed by 10 every 30 epochs""" 268 | lr = opt.learning_rate * (0.1 ** (epoch // opt.lr_update)) 269 | for param_group in optimizer.param_groups: 270 | param_group['lr'] = lr 271 | 272 | 273 | 274 | if __name__ == '__main__': 275 | main() 276 | -------------------------------------------------------------------------------- /vocab/anet_precomp_vocab.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/vocab/anet_precomp_vocab.pkl -------------------------------------------------------------------------------- /vocab/anet_precomp_vocab_no_emb.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/vocab/anet_precomp_vocab_no_emb.pkl -------------------------------------------------------------------------------- /vocab/anet_precomp_vocab_total.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/vocab/anet_precomp_vocab_total.pkl -------------------------------------------------------------------------------- /vocab/anet_precomp_w2v.npz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/vocab/anet_precomp_w2v.npz -------------------------------------------------------------------------------- /vocab/anet_precomp_w2v_total.npz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/vocab/anet_precomp_w2v_total.npz -------------------------------------------------------------------------------- /vocab/didemo_precomp_vocab.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/vocab/didemo_precomp_vocab.pkl -------------------------------------------------------------------------------- /vocab/didemo_precomp_vocab_total.pkl: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/vocab/didemo_precomp_vocab_total.pkl -------------------------------------------------------------------------------- /vocab/didemo_precomp_w2v.npz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/vocab/didemo_precomp_w2v.npz -------------------------------------------------------------------------------- /vocab/didemo_precomp_w2v_total.npz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Sha-Lab/CMHSE/2c4f8d44afcc0870ccb0c44a673b37d42cb81636/vocab/didemo_precomp_w2v_total.npz --------------------------------------------------------------------------------