├── README.md
├── config
│   ├── kitti.yaml
│   ├── nyu.yaml
│   ├── nyu_192.yaml
│   ├── nyu_posenet_192.yaml
│   ├── odo.yaml
│   └── sintel.yaml
├── core
│   ├── config
│   │   ├── __init__.py
│   │   └── config_utils.py
│   ├── dataset
│   │   ├── __init__.py
│   │   ├── kitti_2012.py
│   │   ├── kitti_2015.py
│   │   ├── kitti_odo.py
│   │   ├── kitti_prepared.py
│   │   ├── kitti_raw.py
│   │   ├── nyu_v2.py
│   │   ├── sintel.py
│   │   ├── sintel_prepared.py
│   │   └── sintel_raw.py
│   ├── evaluation
│   │   ├── __init__.py
│   │   ├── eval_odom.py
│   │   ├── evaluate_depth.py
│   │   ├── evaluate_flow.py
│   │   ├── evaluate_mask.py
│   │   ├── evaluation_utils.py
│   │   └── flowlib.py
│   ├── networks
│   │   ├── __init__.py
│   │   ├── model_flow_paper.py
│   │   ├── pytorch_ssim
│   │   │   ├── __init__.py
│   │   │   └── ssim.py
│   │   └── structures
│   │       ├── __init__.py
│   │       ├── feature_pyramid.py
│   │       ├── inverse_warp.py
│   │       ├── net_utils.py
│   │       └── pwc_tf.py
│   └── visualize
│       ├── __init__.py
│       ├── profiler.py
│       └── visualizer.py
├── data
│   └── eigen
│       ├── static_frames.txt
│       ├── test_files.txt
│       └── test_scenes.txt
├── requirements.txt
├── test.py
└── train.py

/README.md:
--------------------------------------------------------------------------------
## [Occlusion Aware Unsupervised Learning of Optical Flow from Video](https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11605/116050T/Occlusion-aware-unsupervised-learning-of-optical-flow-from-video/10.1117/12.2588381.short)

## Introduction
We propose a new method for dealing with the occlusion problem in unsupervised learning of optical flow by calculating an occlusion mask.
Compared with UnFlow (AAAI 2018) and OAFlow (CVPR 2018), our method achieves more accurate results on the KITTI benchmarks.

| Method | KITTI 2012 (EPE) | KITTI 2015 train set (EPE) | KITTI 2015 test set (outlier %) |
|--------|------------------|----------------------------|---------------------------------|
| UnFlow | 3.78             | 8.80                       | 23.27%                          |
| OAFlow | 3.55             | 8.88                       | 31.2%                           |
| Ours   | **2.67**         | **7.1**                    | 22%                             |

## Installation
The code is based on Python 3.6. You can use either virtualenv or conda to set up a dedicated environment, and then run:
```
pip install -r requirements.txt
```

## Run experiments

### Prepare training data:
1. Download the KITTI raw dataset using the script provided on the official website. You also need to download the KITTI 2015 dataset to evaluate the predicted optical flow.

### Training:
1. Modify the configuration file in the ./config directory to set up your paths. The config file contains the important paths and the default hyper-parameters used in the training process.

```bash
python train.py --config_file ./config/kitti.yaml --gpu [gpu_id] --mode flow --prepared_save_dir [name_of_your_prepared_dataset] --model_dir [your/directory/to/save/training/models]
```
If you are running experiments on the dataset for the first time, the script will first process the data and save it in the [prepared_base_dir] path defined in your config file.
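For reference, the loss-related entries of the config file are combined into per-term weights by `generate_loss_weights_dict` in `core/config/config_utils.py` (shown later in this listing). A minimal sketch of that mapping, assuming PyYAML is installed and using `argparse.Namespace` as a stand-in for the project's config object:

```python
import yaml
from argparse import Namespace

# Parse the YAML config into an attribute-style object;
# generate_loss_weights_dict reads options as attributes (cfg.w_ssim, ...).
with open('./config/kitti.yaml', 'r') as f:
    cfg = Namespace(**yaml.safe_load(f))

# Same mapping as core/config/config_utils.py: the pixel and SSIM photometric
# terms are complementary, and the flow smoothness/consistency terms use
# their own weights.
weight_dict = {
    'loss_pixel': 1 - cfg.w_ssim,           # ~0.15 for w_ssim = 0.85
    'loss_ssim': cfg.w_ssim,                # 0.85
    'loss_flow_smooth': cfg.w_flow_smooth,  # 10.0
    'loss_flow_consis': cfg.w_flow_consis,  # 0.01
}
print(weight_dict)
```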
### Evaluation:
The network weights trained on the KITTI raw dataset are available [here](https://drive.google.com/file/d/1_sXqSUysOy56JiUjVmwdM1Z-1D3B54HH/view?usp=sharing).

1. To evaluate the optical flow estimation on KITTI 2015, run:
```bash
python test.py --config_file ./config/kitti.yaml --gpu [gpu_id] --mode flow --task kitti_flow --pretrained_model [path/to/your/model]
```

### Acknowledgement
We implemented our idea based on TrianFlow.

### Cite
```
@inproceedings{10.1117/12.2588381,
  author = {Jianfeng Li and Junqiao Zhao and Shuangfu Song and Tiantian Feng},
  title = {{Occlusion aware unsupervised learning of optical flow from video}},
  volume = {11605},
  booktitle = {Thirteenth International Conference on Machine Vision},
  editor = {Wolfgang Osten and Dmitry P. Nikolaev and Jianhong Zhou},
  organization = {International Society for Optics and Photonics},
  publisher = {SPIE},
  pages = {224 -- 231},
  year = {2021},
  doi = {10.1117/12.2588381},
  URL = {https://doi.org/10.1117/12.2588381}
}
```

### One more thing
This unsupervised optical flow estimation project is integrated and updated in https://github.com/jianfenglihg/Unsupervised_depth_flow_egomotion.
--------------------------------------------------------------------------------
/config/kitti.yaml:
--------------------------------------------------------------------------------
cfg_name: 'default'

# dataset
raw_base_dir: '/media/ljf/Data/kitti/kitti_raw'
prepared_base_dir: '/home/ljf/Dataset/kitti_release'
gt_2012_dir: '/media/ljf/Work/DATASET/kitti-flow/kitti2012/training'
gt_2015_dir: '/media/ljf/Work/DATASET/kitti-flow/kitti2015/data_scene_flow/training'
static_frames_txt: '/home/ljf/TrianFlow/data/eigen/static_frames.txt'
test_scenes_txt: '/home/ljf/TrianFlow/data/eigen/test_scenes.txt'
dataset: 'kitti_depth'
num_scales: 3

# training
num_iterations: 200000

# loss hyperparameters
w_ssim: 0.85 # w_pixel = 1 - w_ssim
w_flow_smooth: 10.0
w_flow_consis: 0.01
w_geo: 1.0
w_pt_depth: 1.0
w_pj_depth: 0.1
w_flow_error: 0.0
w_depth_smooth: 0.001

h_flow_consist_alpha: 3.0
h_flow_consist_beta: 0.05

ransac_iters: 100
ransac_points: 6000

# Depth Setting
depth_match_num: 6000
depth_sample_ratio: 0.20
depth_scale: 1

# basic info
img_hw: [256, 832]
use_svd_gpu: False

--------------------------------------------------------------------------------
/config/nyu.yaml:
--------------------------------------------------------------------------------
cfg_name: 'default'

# dataset
raw_base_dir: '/home5/zhaow/data/nyuv2'
prepared_base_dir: '/home5/zhaow/data/nyu_seq_release'
nyu_test_dir: '/home5/zhaow/data/nyuv2_test'
dataset: 'nyuv2'
num_scales: 3

# training
num_iterations: 400000

# loss hyperparameters
w_ssim: 0.85 # w_pixel = 1 - w_ssim
w_flow_smooth: 10.0
w_flow_consis: 0.01
w_geo: 0.0
w_pt_depth: 1.0
w_pj_depth: 0.1
w_flow_error: 0.00
w_depth_smooth: 0.0001

h_flow_consist_alpha: 3.0
h_flow_consist_beta: 0.05

ransac_iters: 100
ransac_points: 6000

# Depth Setting
depth_match_num: 6000
depth_sample_ratio: 0.20
depth_scale: 1

# basic info
img_hw: [448, 576]
#img_hw: [192, 256]
block_tri_grad: False
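The `num_scales` and `img_hw` entries above drive a multi-scale image pyramid. A minimal sketch of how the camera intrinsics are rescaled per pyramid level, mirroring `get_intrinsics_per_scale` in `core/dataset/kitti_prepared.py` later in this listing (the example intrinsic matrix is made up):

```python
import copy
import numpy as np

def get_intrinsics_per_scale(K, scale):
    # Each pyramid level halves the image resolution, so the focal lengths
    # and the principal point shrink by the same factor of 2**scale.
    K_new = copy.deepcopy(K)
    K_new[0, :] = K_new[0, :] / (2 ** scale)
    K_new[1, :] = K_new[1, :] / (2 ** scale)
    return K_new, np.linalg.inv(K_new)

# Made-up intrinsics for a 256x832 input (see img_hw in the configs above).
K = np.array([[240.0, 0.0, 416.0],
              [0.0, 240.0, 128.0],
              [0.0, 0.0, 1.0]])
for s in range(3):  # num_scales: 3
    K_s, K_s_inv = get_intrinsics_per_scale(K, s)
    print(s, K_s[0, 0], K_s[0, 2])  # focal length and cx halve at each scale
```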
--------------------------------------------------------------------------------
/config/nyu_192.yaml:
--------------------------------------------------------------------------------
cfg_name: 'default'

# dataset
raw_base_dir: '/home5/zhaow/data/nyuv2'
prepared_base_dir: '/home5/zhaow/data/nyu_seq_release'
nyu_test_dir: '/home5/zhaow/data/nyuv2_test'
dataset: 'nyuv2'
num_scales: 3

# training
num_iterations: 400000

# loss hyperparameters
w_ssim: 0.85 # w_pixel = 1 - w_ssim
w_flow_smooth: 10.0
w_flow_consis: 0.01
w_geo: 0.0
w_pt_depth: 1.0
w_pj_depth: 0.1
w_flow_error: 0.00
w_depth_smooth: 0.0001

h_flow_consist_alpha: 3.0
h_flow_consist_beta: 0.05

ransac_iters: 100
ransac_points: 6000

# Depth Setting
depth_match_num: 6000
depth_sample_ratio: 0.20
depth_scale: 1

# basic info
#img_hw: [448, 576]
img_hw: [192, 256]
block_tri_grad: False

--------------------------------------------------------------------------------
/config/nyu_posenet_192.yaml:
--------------------------------------------------------------------------------
cfg_name: 'default'

# dataset
raw_base_dir: '/home5/zhaow/data/nyuv2_sub2'
prepared_base_dir: '/home5/zhaow/data/nyu_seq_release'
nyu_test_dir: '/home5/zhaow/data/nyuv2_test'
dataset: 'nyuv2'
num_scales: 3

# training
num_iterations: 500000 # set -1 to use num_epochs
num_epochs: 0

# loss hyperparameters
w_ssim: 0.85 # w_pixel = 1 - w_ssim
w_flow_smooth: 10.0
w_flow_consis: 0.01
w_geo: 0.0
w_pt_depth: 0.0
w_pj_depth: 0.5
w_flow_error: 10.0
w_depth_smooth: 0.00001

h_flow_consist_alpha: 3.0
h_flow_consist_beta: 0.05

ransac_iters: 100
ransac_points: 6000

# Depth Setting
depth_match_num: 6000
depth_scale: 1

# basic info
img_hw: [192,256]
block_tri_grad: False

--------------------------------------------------------------------------------
/config/odo.yaml:
--------------------------------------------------------------------------------
cfg_name: 'default'

# dataset
raw_base_dir: '/home4/zhaow/data/kitti_odometry/sequences'
prepared_base_dir: '/home5/zhaow/data/kitti_odo_release/'
gt_2012_dir: '/home4/zhaow/data/kitti_stereo/kitti_2012/training'
gt_2015_dir: '/home4/zhaow/data/kitti_stereo/kitti_2015/training'
dataset: 'kitti_odo'
num_scales: 3

# training
num_iterations: 200000

# loss hyperparameters
w_ssim: 0.85 # w_pixel = 1 - w_ssim
w_flow_smooth: 10.0
w_flow_consis: 0.01
w_geo: 0.1
w_pt_depth: 1.0
w_pj_depth: 0.1
w_flow_error: 0.0
w_depth_smooth: 0.0001

h_flow_consist_alpha: 3.0
h_flow_consist_beta: 0.05

ransac_iters: 100
ransac_points: 6000

# Depth Setting
depth_match_num: 6000
depth_sample_ratio: 0.20
depth_scale: 1

# basic info
img_hw: [256, 832]
block_tri_grad: False
--------------------------------------------------------------------------------
/config/sintel.yaml:
--------------------------------------------------------------------------------
cfg_name: 'default'

# dataset
raw_base_dir: '/home/ljf/Dataset/Sintel/scene'
prepared_base_dir: '/home/ljf/Dataset/sintel_release'
gt_2012_dir: '/media/ljf/Work/DATASET/kitti-flow/kitti2012/training'
gt_2015_dir: '/media/ljf/Work/DATASET/kitti-flow/kitti2015/data_scene_flow/training'
static_frames_txt: '/home/ljf/TrianFlow/data/eigen/static_frames.txt'
test_scenes_txt: '/home/ljf/TrianFlow/data/eigen/test_scenes.txt'
dataset: 'sintel_raw'
num_scales: 3
stride: 2

# training
num_iterations: 200000

# loss hyperparameters
w_ssim: 0.85 # w_pixel = 1 - w_ssim
w_flow_smooth: 6.0
w_flow_consis: 0.01

h_flow_consist_alpha: 3.0
h_flow_consist_beta: 0.05

# basic info
# img_hw: [448, 1024]
img_hw: [384, 832]
use_svd_gpu: False

--------------------------------------------------------------------------------
/core/config/__init__.py:
--------------------------------------------------------------------------------
import os, sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
from config_utils import generate_loss_weights_dict

--------------------------------------------------------------------------------
/core/config/config_utils.py:
--------------------------------------------------------------------------------
import os, sys

def generate_loss_weights_dict(cfg):
    weight_dict = {}
    weight_dict['loss_pixel'] = 1 - cfg.w_ssim
    weight_dict['loss_ssim'] = cfg.w_ssim
    weight_dict['loss_flow_smooth'] = cfg.w_flow_smooth
    weight_dict['loss_flow_consis'] = cfg.w_flow_consis
    return weight_dict

--------------------------------------------------------------------------------
/core/dataset/__init__.py:
--------------------------------------------------------------------------------
import os, sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
from kitti_raw import KITTI_RAW
from kitti_prepared import KITTI_Prepared
from sintel_raw import SINTEL_RAW
from sintel_prepared import SINTEL_Prepared
from kitti_2012 import KITTI_2012
from kitti_2015 import KITTI_2015
from nyu_v2 import NYU_Prepare, NYU_v2
from kitti_odo import KITTI_Odo
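The dataset classes exported above are consumed through a standard PyTorch DataLoader. A minimal usage sketch, assuming the repo root is on PYTHONPATH and a prepared KITTI directory exists (the path below is a placeholder), using the `KITTI_Prepared` class defined later in this listing:

```python
import torch.utils.data

from core.dataset import KITTI_Prepared  # re-exported by core/dataset/__init__.py

# KITTI_Prepared reads <data_dir>/train.txt and returns each sample as a
# float tensor holding the three vertically stacked frames of a triplet.
dataset = KITTI_Prepared('/path/to/prepared/kitti',  # placeholder path
                         num_scales=3, img_hw=(256, 832))
loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=True)

for img in loader:
    print(img.shape)  # (4, 3, 3 * 256, 832): three 256x832 frames stacked along H
    break
```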
-------------------------------------------------------------------------------- /core/dataset/kitti_2012.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 3 | from kitti_prepared import KITTI_Prepared 4 | sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', 'evaluation')) 5 | from evaluate_flow import get_scaled_intrinsic_matrix, eval_flow_avg 6 | import numpy as np 7 | import cv2 8 | import copy 9 | 10 | import torch 11 | import pdb 12 | 13 | class KITTI_2012(KITTI_Prepared): 14 | def __init__(self, data_dir, img_hw=(256, 832), init=True): 15 | self.data_dir = data_dir 16 | self.img_hw = img_hw 17 | self.num_total = 194 18 | if init: 19 | self.data_list = self.get_data_list() 20 | 21 | def get_data_list(self): 22 | data_list = [] 23 | for i in range(self.num_total): 24 | data = {} 25 | data['img1_dir'] = os.path.join(self.data_dir, 'image_2', str(i).zfill(6) + '_10.png') 26 | data['img2_dir'] = os.path.join(self.data_dir, 'image_2', str(i).zfill(6) + '_11.png') 27 | data['calib_file_dir'] = os.path.join(self.data_dir, 'calib_cam_to_cam', str(i).zfill(6) + '.txt') 28 | data_list.append(data) 29 | return data_list 30 | 31 | def __len__(self): 32 | return len(self.data_list) 33 | 34 | def read_cam_intrinsic(self, calib_file): 35 | input_intrinsic = get_scaled_intrinsic_matrix(calib_file, zoom_x=1.0, zoom_y=1.0) 36 | return input_intrinsic 37 | 38 | def __getitem__(self, idx): 39 | ''' 40 | Returns: 41 | - img torch.Tensor (N * H, W, 3) 42 | - K torch.Tensor (num_scales, 3, 3) 43 | - K_inv torch.Tensor (num_scales, 3, 3) 44 | ''' 45 | data = self.data_list[idx] 46 | # load img 47 | img1 = cv2.imread(data['img1_dir']) 48 | img2 = cv2.imread(data['img2_dir']) 49 | img_hw_orig = (img1.shape[0], img1.shape[1]) 50 | img = np.concatenate([img1, img2], 0) 51 | #img = self.preprocess_img(img, self.img_hw, is_test=True) 52 | img = self.preprocess_img_origin(img, self.img_hw, is_test=True) 53 | img = img.transpose(2,0,1) 54 | 55 | return torch.from_numpy(img).float() 56 | 57 | if __name__ == '__main__': 58 | pass 59 | 60 | -------------------------------------------------------------------------------- /core/dataset/kitti_2015.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 3 | from kitti_2012 import KITTI_2012 4 | 5 | class KITTI_2015(KITTI_2012): 6 | def __init__(self, data_dir, img_hw=(256, 832)): 7 | super(KITTI_2015, self).__init__(data_dir, img_hw, init=False) 8 | self.num_total = 200 9 | 10 | self.data_list = self.get_data_list() 11 | 12 | if __name__ == '__main__': 13 | pass 14 | 15 | -------------------------------------------------------------------------------- /core/dataset/kitti_odo.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | import numpy as np 3 | import imageio 4 | from tqdm import tqdm 5 | import torch.multiprocessing as mp 6 | 7 | def process_folder(q, data_dir, output_dir, stride=1): 8 | while True: 9 | if q.empty(): 10 | break 11 | folder = q.get() 12 | image_path = os.path.join(data_dir, folder, 'image_2/') 13 | dump_image_path = os.path.join(output_dir, folder) 14 | if not os.path.isdir(dump_image_path): 15 | os.makedirs(dump_image_path) 16 | f = open(os.path.join(dump_image_path, 'train.txt'), 'w') 17 | 18 | # Note. the os.listdir method returns an arbitrary order of files. We need the correct order.
19 | numbers = len(os.listdir(image_path)) 20 | for n in range(numbers - stride): 21 | s_idx = n 22 | e_idx = s_idx + stride 23 | curr_image = imageio.imread(os.path.join(image_path, '%.6d'%s_idx)+'.png') 24 | next_image = imageio.imread(os.path.join(image_path, '%.6d'%e_idx)+'.png') 25 | seq_images = np.concatenate([curr_image, next_image], axis=0) 26 | imageio.imsave(os.path.join(dump_image_path, '%.6d'%s_idx)+'.png', seq_images.astype('uint8')) 27 | 28 | # Write training files 29 | f.write('%s %s\n' % (os.path.join(folder, '%.6d'%s_idx)+'.png', os.path.join(folder, 'calib.txt'))) 30 | print(folder) 31 | 32 | 33 | class KITTI_Odo(object): 34 | def __init__(self, data_dir): 35 | self.data_dir = data_dir 36 | self.train_seqs = ['00','01','02','03','04','05','06','07','08'] 37 | 38 | def __len__(self): 39 | raise NotImplementedError 40 | 41 | def prepare_data_mp(self, output_dir, stride=1): 42 | num_processes = 16 43 | processes = [] 44 | q = mp.Queue() 45 | if not os.path.isfile(os.path.join(output_dir, 'train.txt')): 46 | os.makedirs(output_dir) 47 | #f = open(os.path.join(output_dir, 'train.txt'), 'w') 48 | print('Preparing sequence data....') 49 | if not os.path.isdir(self.data_dir): 50 | raise 51 | dirlist = os.listdir(self.data_dir) 52 | total_dirlist = [] 53 | # Get the different folders of images 54 | for d in dirlist: 55 | if d in self.train_seqs: 56 | q.put(d) 57 | # Process every folder 58 | for rank in range(num_processes): 59 | p = mp.Process(target=process_folder, args=(q, self.data_dir, output_dir, stride)) 60 | p.start() 61 | processes.append(p) 62 | for p in processes: 63 | p.join() 64 | 65 | f = open(os.path.join(output_dir, 'train.txt'), 'w') 66 | for d in self.train_seqs: 67 | train_file = open(os.path.join(output_dir, d, 'train.txt'), 'r') 68 | for l in train_file.readlines(): 69 | f.write(l) 70 | 71 | command = 'cp ' + os.path.join(self.data_dir, d, 'calib.txt') + ' ' + os.path.join(output_dir, d, 'calib.txt') 72 | os.system(command) 73 | 74 | print('Data Preparation Finished.') 75 | 76 | def __getitem__(self, idx): 77 | raise NotImplementedError 78 | 79 | 80 | if __name__ == '__main__': 81 | data_dir = '/home4/zhaow/data/kitti' 82 | dirlist = os.listdir('/home4/zhaow/data/kitti') 83 | output_dir = '/home4/zhaow/data/kitti_seq/data_generated_s2' 84 | total_dirlist = [] 85 | # Get the different folders of images 86 | for d in dirlist: 87 | seclist = os.listdir(os.path.join(data_dir, d)) 88 | for s in seclist: 89 | if os.path.isdir(os.path.join(data_dir, d, s)): 90 | total_dirlist.append(os.path.join(d, s)) 91 | 92 | F = open(os.path.join(output_dir, 'train.txt'), 'w') 93 | for p in total_dirlist: 94 | traintxt = os.path.join(os.path.join(output_dir, p), 'train.txt') 95 | f = open(traintxt, 'r') 96 | for line in f.readlines(): 97 | F.write(line) 98 | print(traintxt) -------------------------------------------------------------------------------- /core/dataset/kitti_prepared.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | import numpy as np 3 | import cv2 4 | import copy 5 | 6 | import torch 7 | import torch.utils.data 8 | import pdb 9 | 10 | class KITTI_Prepared(torch.utils.data.Dataset): 11 | def __init__(self, data_dir, num_scales=3, img_hw=(256, 832), num_iterations=None): 12 | super(KITTI_Prepared, self).__init__() 13 | self.data_dir = data_dir 14 | self.num_scales = num_scales 15 | self.img_hw = img_hw 16 | self.num_iterations = num_iterations 17 | 18 | info_file = os.path.join(self.data_dir, 'train.txt') 
19 | #info_file = os.path.join(self.data_dir, 'train_flow.txt') 20 | self.data_list = self.get_data_list(info_file) 21 | 22 | def get_data_list(self, info_file): 23 | with open(info_file, 'r') as f: 24 | lines = f.readlines() 25 | data_list = [] 26 | for line in lines: 27 | k = line.strip('\n').split() 28 | data = {} 29 | data['image_file'] = os.path.join(self.data_dir, k[0]) 30 | data['cam_intrinsic_file'] = os.path.join(self.data_dir, k[1]) 31 | data_list.append(data) 32 | print('A total of {} image pairs found'.format(len(data_list))) 33 | return data_list 34 | 35 | def count(self): 36 | return len(self.data_list) 37 | 38 | def rand_num(self, idx): 39 | num_total = self.count() 40 | np.random.seed(idx) 41 | num = np.random.randint(num_total) 42 | return num 43 | 44 | def __len__(self): 45 | if self.num_iterations is None: 46 | return self.count() 47 | else: 48 | return self.num_iterations 49 | 50 | def resize_img_origin(self, img, img_hw): 51 | ''' 52 | Input size (N*H, W, 3) 53 | Output size (N*H', W', 3), where (H', W') == self.img_hw 54 | ''' 55 | img_h, img_w = img.shape[0], img.shape[1] 56 | img_hw_orig = (int(img_h / 2), img_w) 57 | img1, img2 = img[:img_hw_orig[0], :, :], img[img_hw_orig[0]:, :, :] 58 | img1_new = cv2.resize(img1, (img_hw[1], img_hw[0])) 59 | img2_new = cv2.resize(img2, (img_hw[1], img_hw[0])) 60 | img_new = np.concatenate([img1_new, img2_new], 0) 61 | return img_new 62 | 63 | def resize_img(self, img, img_hw): 64 | ''' 65 | Input size (N*H, W, 3) 66 | Output size (N*H', W', 3), where (H', W') == self.img_hw 67 | ''' 68 | img_h, img_w = img.shape[0], img.shape[1] 69 | img_hw_orig = (int(img_h / 3), img_w) 70 | img1, img2, img3 = img[:img_hw_orig[0], :, :], img[img_hw_orig[0]:2*img_hw_orig[0], :, :], img[2*img_hw_orig[0]:3*img_hw_orig[0], :, :] 71 | img1_new = cv2.resize(img1, (img_hw[1], img_hw[0])) 72 | img2_new = cv2.resize(img2, (img_hw[1], img_hw[0])) 73 | img3_new = cv2.resize(img3, (img_hw[1], img_hw[0])) 74 | img_new = np.concatenate([img1_new, img2_new, img3_new], 0) 75 | return img_new 76 | 77 | def random_flip_img(self, img): 78 | is_flip = (np.random.rand() > 0.5) 79 | if is_flip: 80 | img = cv2.flip(img, 1) 81 | return img 82 | 83 | def preprocess_img(self, img, img_hw=None, is_test=False): 84 | if img_hw is None: 85 | img_hw = self.img_hw 86 | img = self.resize_img(img, img_hw) 87 | if not is_test: 88 | img = self.random_flip_img(img) 89 | img = img / 255.0 90 | return img 91 | 92 | 93 | def preprocess_img_origin(self, img, img_hw=None, is_test=False): 94 | if img_hw is None: 95 | img_hw = self.img_hw 96 | img = self.resize_img_origin(img, img_hw) 97 | if not is_test: 98 | img = self.random_flip_img(img) 99 | img = img / 255.0 100 | return img 101 | 102 | def read_cam_intrinsic(self, fname): 103 | with open(fname, 'r') as f: 104 | lines = f.readlines() 105 | data = lines[-1].strip('\n').split(' ')[1:] 106 | data = [float(k) for k in data] 107 | data = np.array(data).reshape(3,4) 108 | cam_intrinsics = data[:3,:3] 109 | return cam_intrinsics 110 | 111 | def rescale_intrinsics(self, K, img_hw_orig, img_hw_new): 112 | K[0,:] = K[0,:] * img_hw_new[0] / img_hw_orig[0] 113 | K[1,:] = K[1,:] * img_hw_new[1] / img_hw_orig[1] 114 | return K 115 | 116 | def get_intrinsics_per_scale(self, K, scale): 117 | K_new = copy.deepcopy(K) 118 | K_new[0,:] = K_new[0,:] / (2**scale) 119 | K_new[1,:] = K_new[1,:] / (2**scale) 120 | K_new_inv = np.linalg.inv(K_new) 121 | return K_new, K_new_inv 122 | 123 | def get_multiscale_intrinsics(self, K, num_scales): 124 | K_ms, 
K_inv_ms = [], [] 125 | for s in range(num_scales): 126 | K_new, K_new_inv = self.get_intrinsics_per_scale(K, s) 127 | K_ms.append(K_new[None,:,:]) 128 | K_inv_ms.append(K_new_inv[None,:,:]) 129 | K_ms = np.concatenate(K_ms, 0) 130 | K_inv_ms = np.concatenate(K_inv_ms, 0) 131 | return K_ms, K_inv_ms 132 | 133 | def __getitem__(self, idx): 134 | ''' 135 | Returns: 136 | - img torch.Tensor (N * H, W, 3) 137 | - K torch.Tensor (num_scales, 3, 3) 138 | - K_inv torch.Tensor (num_scales, 3, 3) 139 | ''' 140 | if self.num_iterations is not None: 141 | idx = self.rand_num(idx) 142 | data = self.data_list[idx] 143 | # load img 144 | img = cv2.imread(data['image_file']) 145 | img_hw_orig = (int(img.shape[0] / 3), img.shape[1]) 146 | img = self.preprocess_img(img, self.img_hw) # (img_h * 3, img_w, 3) 147 | img = img.transpose(2,0,1) 148 | 149 | # load intrinsic 150 | cam_intrinsic = self.read_cam_intrinsic(data['cam_intrinsic_file']) 151 | cam_intrinsic = self.rescale_intrinsics(cam_intrinsic, img_hw_orig, self.img_hw) 152 | K_ms, K_inv_ms = self.get_multiscale_intrinsics(cam_intrinsic, self.num_scales) # (num_scales, 3, 3), (num_scales, 3, 3) 153 | return torch.from_numpy(img).float() 154 | 155 | if __name__ == '__main__': 156 | pass 157 | 158 | -------------------------------------------------------------------------------- /core/dataset/kitti_raw.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | import numpy as np 3 | import cv2 4 | from tqdm import tqdm 5 | import torch.multiprocessing as mp 6 | import pdb 7 | 8 | def process_folder(q, static_frames, test_scenes, data_dir, output_dir, stride=1): 9 | while True: 10 | if q.empty(): 11 | break 12 | folder = q.get() 13 | if folder in static_frames.keys(): 14 | static_ids = static_frames[folder] 15 | else: 16 | static_ids = [] 17 | scene = folder.split('/')[1] 18 | if scene[:-5] in test_scenes: 19 | continue 20 | image_path = os.path.join(data_dir, folder, 'image_02/data') 21 | dump_image_path = os.path.join(output_dir, folder) 22 | if not os.path.isdir(dump_image_path): 23 | os.makedirs(dump_image_path) 24 | f = open(os.path.join(dump_image_path, 'train.txt'), 'w') 25 | 26 | # Note. the os.listdir method returns arbitary order of list. We need correct order. 
27 | numbers = len(os.listdir(image_path)) 28 | if numbers < 3: 29 | print("this folder do not have enough image, numbers < 3!") 30 | for n in range(numbers - 2*stride): 31 | s_idx = n 32 | m_idx = s_idx + stride 33 | e_idx = s_idx + 2*stride 34 | if '%.10d'%s_idx in static_ids or '%.10d'%e_idx in static_ids or '%.10d'%m_idx in static_ids: 35 | #print('%.10d'%s_idx) 36 | continue 37 | curr_image = cv2.imread(os.path.join(image_path, '%.10d'%s_idx)+'.png') 38 | middle_image = cv2.imread(os.path.join(image_path, '%.10d'%m_idx)+'.png') 39 | next_image = cv2.imread(os.path.join(image_path, '%.10d'%e_idx)+'.png') 40 | 41 | if curr_image is None: 42 | print(os.path.join(image_path, '%.10d'%s_idx)+'.png') 43 | continue 44 | 45 | if middle_image is None: 46 | print(os.path.join(image_path, '%.10d'%m_idx)+'.png') 47 | continue 48 | 49 | if next_image is None: 50 | print(os.path.join(image_path, '%.10d'%e_idx)+'.png') 51 | continue 52 | 53 | seq_images = np.concatenate([curr_image, middle_image, next_image], axis=0) 54 | # seq_images = np.concatenate([seq_images, next_image], axis=0) 55 | cv2.imwrite(os.path.join(dump_image_path, '%.10d'%s_idx)+'.png', seq_images.astype('uint8')) 56 | # cv2.imwrite(os.path.join(dump_image_path, '%.10d'%s_idx)+'.png', seq_images.astype('uint8')) 57 | 58 | # Write training files 59 | date = folder.split('/')[0] 60 | f.write('%s %s\n' % (os.path.join(folder, '%.10d'%s_idx)+'.png', os.path.join(date, 'calib_cam_to_cam.txt'))) 61 | print(folder) 62 | 63 | 64 | class KITTI_RAW(object): 65 | def __init__(self, data_dir, static_frames_txt, test_scenes_txt): 66 | self.data_dir = data_dir 67 | self.static_frames_txt = static_frames_txt 68 | self.test_scenes_txt = test_scenes_txt 69 | 70 | def __len__(self): 71 | raise NotImplementedError 72 | 73 | def collect_static_frame(self): 74 | f = open(self.static_frames_txt) 75 | static_frames = {} 76 | for line in f.readlines(): 77 | line = line.strip() 78 | date, drive, frame_id = line.split(' ') 79 | curr_fid = '%.10d' % (np.int(frame_id)) 80 | if os.path.join(date, drive) not in static_frames.keys(): 81 | static_frames[os.path.join(date, drive)] = [] 82 | static_frames[os.path.join(date, drive)].append(curr_fid) 83 | return static_frames 84 | 85 | def collect_test_scenes(self): 86 | f = open(self.test_scenes_txt) 87 | test_scenes = [] 88 | for line in f.readlines(): 89 | line = line.strip() 90 | test_scenes.append(line) 91 | return test_scenes 92 | 93 | def prepare_data_mp(self, output_dir, stride=1): 94 | num_processes = 16 95 | processes = [] 96 | q = mp.Queue() 97 | static_frames = self.collect_static_frame() 98 | test_scenes = self.collect_test_scenes() 99 | if not os.path.isfile(os.path.join(output_dir, 'train.txt')): 100 | os.makedirs(output_dir) 101 | #f = open(os.path.join(output_dir, 'train.txt'), 'w') 102 | print('Preparing sequence data....') 103 | if not os.path.isdir(self.data_dir): 104 | raise 105 | dirlist = os.listdir(self.data_dir) 106 | total_dirlist = [] 107 | # Get the different folders of images 108 | for d in dirlist: 109 | seclist = os.listdir(os.path.join(self.data_dir, d)) 110 | for s in seclist: 111 | if os.path.isdir(os.path.join(self.data_dir, d, s)): 112 | total_dirlist.append(os.path.join(d, s)) 113 | q.put(os.path.join(d, s)) 114 | # Process every folder 115 | for rank in range(num_processes): 116 | p = mp.Process(target=process_folder, args=(q, static_frames, test_scenes, self.data_dir, output_dir, stride)) 117 | p.start() 118 | processes.append(p) 119 | for p in processes: 120 | p.join() 121 | 122 
| # Collect the training frames. 123 | f = open(os.path.join(output_dir, 'train.txt'), 'w') 124 | for date in os.listdir(output_dir): 125 | if os.path.isdir(os.path.join(output_dir, date)): 126 | drives = os.listdir(os.path.join(output_dir, date)) 127 | for d in drives: 128 | train_file = open(os.path.join(output_dir, date, d, 'train.txt'), 'r') 129 | for l in train_file.readlines(): 130 | f.write(l) 131 | 132 | # Get calib files 133 | for date in os.listdir(self.data_dir): 134 | command = 'cp ' + os.path.join(self.data_dir, date, 'calib_cam_to_cam.txt') + ' ' + os.path.join(output_dir, date, 'calib_cam_to_cam.txt') 135 | os.system(command) 136 | 137 | print('Data Preparation Finished.') 138 | 139 | 140 | 141 | def prepare_data(self, output_dir): 142 | static_frames = self.collect_static_frame() 143 | test_scenes = self.collect_test_scenes() 144 | if not os.path.isfile(os.path.join(output_dir, 'train.txt')): 145 | os.makedirs(output_dir) 146 | f = open(os.path.join(output_dir, 'train.txt'), 'w') 147 | print('Preparing sequence data....') 148 | if not os.path.isdir(self.data_dir): 149 | raise 150 | dirlist = os.listdir(self.data_dir) 151 | total_dirlist = [] 152 | # Get the different folders of images 153 | for d in dirlist: 154 | seclist = os.listdir(os.path.join(self.data_dir, d)) 155 | for s in seclist: 156 | if os.path.isdir(os.path.join(self.data_dir, d, s)): 157 | total_dirlist.append(os.path.join(d, s)) 158 | # Process every folder 159 | for folder in tqdm(total_dirlist): 160 | if folder in static_frames.keys(): 161 | static_ids = static_frames[folder] 162 | else: 163 | static_ids = [] 164 | scene = folder.split('/')[1] 165 | if scene in test_scenes: 166 | continue 167 | image_path = os.path.join(self.data_dir, folder, 'image_02/data') 168 | dump_image_path = os.path.join(output_dir, folder) 169 | if not os.path.isdir(dump_image_path): 170 | os.makedirs(dump_image_path) 171 | # Note. the os.listdir method returns arbitary order of list. We need correct order. 
172 | numbers = len(os.listdir(image_path)) 173 | if numbers < 3: 174 | print("this folder do not have enough image, numbers < 3!") 175 | 176 | for n in range(numbers - 2): 177 | s_idx = n 178 | m_idx = s_idx + 1 179 | e_idx = s_idx + 2 180 | if '%.10d'%s_idx in static_ids or '%.10d'%e_idx in static_ids: 181 | print('%.10d'%s_idx) 182 | continue 183 | curr_image = cv2.imread(os.path.join(image_path, '%.10d'%s_idx)+'.png') 184 | middle_image = cv2.imread(os.path.join(image_path, '%.10d'%m_idx)+'.png') 185 | next_image = cv2.imread(os.path.join(image_path, '%.10d'%e_idx)+'.png') 186 | 187 | 188 | if curr_image is None: 189 | print(os.path.join(image_path, '%.10d'%s_idx)+'.png') 190 | continue 191 | 192 | if middle_image is None: 193 | print(os.path.join(image_path, '%.10d'%m_idx)+'.png') 194 | continue 195 | 196 | if next_image is None: 197 | print(os.path.join(image_path, '%.10d'%e_idx)+'.png') 198 | continue 199 | 200 | 201 | seq_images = np.concatenate([curr_image, middle_image, next_image], axis=0) 202 | # seq_images = np.concatenate([seq_images, next_image], axis=0) 203 | cv2.imwrite(os.path.join(dump_image_path, '%.10d'%s_idx)+'.png', seq_images.astype('uint8')) 204 | # cv2.imwrite(os.path.join(dump_image_path, '%.10d'%s_idx)+'.png', seq_images.astype('uint8')) 205 | 206 | # Write training files 207 | date = folder.split('/')[0] 208 | f.write('%s %s\n' % (os.path.join(folder, '%.10d'%s_idx)+'.png', os.path.join(date, 'calib_cam_to_cam.txt'))) 209 | print(folder) 210 | 211 | # Get calib files 212 | for date in os.listdir(self.data_dir): 213 | command = 'cp ' + os.path.join(self.data_dir, date, 'calib_cam_to_cam.txt') + ' ' + os.path.join(output_dir, date, 'calib_cam_to_cam.txt') 214 | os.system(command) 215 | 216 | return os.path.join(output_dir, 'train.txt') 217 | 218 | def __getitem__(self, idx): 219 | raise NotImplementedError 220 | 221 | 222 | if __name__ == '__main__': 223 | data_dir = '/home4/zhaow/data/kitti' 224 | dirlist = os.listdir('/home4/zhaow/data/kitti') 225 | output_dir = '/home4/zhaow/data/kitti_seq/data_generated_s2' 226 | total_dirlist = [] 227 | # Get the different folders of images 228 | for d in dirlist: 229 | seclist = os.listdir(os.path.join(data_dir, d)) 230 | for s in seclist: 231 | if os.path.isdir(os.path.join(data_dir, d, s)): 232 | total_dirlist.append(os.path.join(d, s)) 233 | 234 | F = open(os.path.join(output_dir, 'train.txt'), 'w') 235 | for p in total_dirlist: 236 | traintxt = os.path.join(os.path.join(output_dir, p), 'train.txt') 237 | f = open(traintxt, 'r') 238 | for line in f.readlines(): 239 | F.write(line) 240 | print(traintxt) 241 | 242 | 243 | 244 | 245 | 246 | -------------------------------------------------------------------------------- /core/dataset/nyu_v2.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | import numpy as np 3 | import imageio 4 | import cv2 5 | import copy 6 | import h5py 7 | import scipy.io as sio 8 | import torch 9 | import torch.utils.data 10 | import pdb 11 | from tqdm import tqdm 12 | import torch.multiprocessing as mp 13 | 14 | def collect_image_list(path): 15 | # Get ppm images list of a folder. 16 | files = os.listdir(path) 17 | sorted_file = sorted([f for f in files]) 18 | image_list = [] 19 | for l in sorted_file: 20 | if l.split('.')[-1] == 'ppm': 21 | image_list.append(l) 22 | return image_list 23 | 24 | def process_folder(q, data_dir, output_dir, stride, train_scenes): 25 | # Directly process the original nyu v2 depth dataset. 
26 | while True: 27 | if q.empty(): 28 | break 29 | folder = q.get() 30 | scene_name = folder.split('/')[-1] 31 | s1,s2 = scene_name.split('_')[:-1], scene_name.split('_')[-1] 32 | scene_name_full = '' 33 | for j in s1: 34 | scene_name_full = scene_name_full + j + '_' 35 | scene_name_full = scene_name_full + s2[:4] 36 | 37 | if scene_name_full not in train_scenes: 38 | continue 39 | image_path = os.path.join(data_dir, folder) 40 | dump_image_path = os.path.join(output_dir, folder) 41 | if not os.path.isdir(dump_image_path): 42 | os.makedirs(dump_image_path) 43 | f = open(os.path.join(dump_image_path, 'train.txt'), 'w') 44 | 45 | # Note. the os.listdir method returns arbitary order of list. We need correct order. 46 | image_list = collect_image_list(image_path) 47 | #image_list = open(os.path.join(image_path, 'index.txt')).readlines() 48 | numbers = len(image_list) - 1 # The last ppm file seems truncated. 49 | for n in range(numbers - stride): 50 | s_idx = n 51 | e_idx = s_idx + stride 52 | s_name = image_list[s_idx].strip() 53 | e_name = image_list[e_idx].strip() 54 | 55 | curr_image = imageio.imread(os.path.join(image_path, s_name)) 56 | next_image = imageio.imread(os.path.join(image_path, e_name)) 57 | #curr_image = cv2.imread(os.path.join(image_path, s_name)) 58 | #next_image = cv2.imread(os.path.join(image_path, e_name)) 59 | seq_images = np.concatenate([curr_image, next_image], axis=0) 60 | imageio.imsave(os.path.join(dump_image_path, os.path.splitext(s_name)[0]+'.png'), seq_images.astype('uint8')) 61 | #cv2.imwrite(os.path.join(dump_image_path, os.path.splitext(s_name)[0]+'.png'), seq_images.astype('uint8')) 62 | 63 | # Write training files 64 | #date = folder.split('_')[2] 65 | f.write('%s %s\n' % (os.path.join(folder, os.path.splitext(s_name)[0]+'.png'), 'calib_cam_to_cam.txt')) 66 | print(folder) 67 | 68 | class NYU_Prepare(object): 69 | def __init__(self, data_dir, test_dir): 70 | self.data_dir = data_dir 71 | self.test_data = os.path.join(test_dir, 'nyu_depth_v2_labeled.mat') 72 | self.splits = os.path.join(test_dir, 'splits.mat') 73 | self.get_all_scenes() 74 | self.get_test_scenes() 75 | self.get_train_scenes() 76 | 77 | 78 | def __len__(self): 79 | raise NotImplementedError 80 | 81 | def get_all_scenes(self): 82 | self.all_scenes = [] 83 | paths = os.listdir(self.data_dir) 84 | for p in paths: 85 | if os.path.isdir(os.path.join(self.data_dir, p)): 86 | pp = os.listdir(os.path.join(self.data_dir, p)) 87 | for path in pp: 88 | self.all_scenes.append(path) 89 | 90 | def get_test_scenes(self): 91 | self.test_scenes = [] 92 | test_data = h5py.File(self.test_data, 'r') 93 | test_split = sio.loadmat(self.splits)['testNdxs'] 94 | test_split = np.array(test_split).squeeze(1) 95 | 96 | test_scenes = test_data['scenes'][0][test_split-1] 97 | for i in range(len(test_scenes)): 98 | obj = test_data[test_scenes[i]] 99 | name = "".join(chr(j) for j in obj[:]) 100 | if name not in self.test_scenes: 101 | self.test_scenes.append(name) 102 | #pdb.set_trace() 103 | 104 | def get_train_scenes(self): 105 | self.train_scenes = [] 106 | train_data = h5py.File(self.test_data, 'r') 107 | train_split = sio.loadmat(self.splits)['trainNdxs'] 108 | train_split = np.array(train_split).squeeze(1) 109 | 110 | train_scenes = train_data['scenes'][0][train_split-1] 111 | for i in range(len(train_scenes)): 112 | obj = train_data[train_scenes[i]] 113 | name = "".join(chr(j) for j in obj[:]) 114 | if name not in self.train_scenes: 115 | self.train_scenes.append(name) 116 | 117 | 118 | def prepare_data_mp(self, 
output_dir, stride=1): 119 | num_processes = 32 120 | processes = [] 121 | q = mp.Queue() 122 | if not os.path.isfile(os.path.join(output_dir, 'train.txt')): 123 | os.makedirs(output_dir) 124 | #f = open(os.path.join(output_dir, 'train.txt'), 'w') 125 | print('Preparing sequence data....') 126 | if not os.path.isdir(self.data_dir): 127 | raise 128 | dirlist = os.listdir(self.data_dir) 129 | total_dirlist = [] 130 | # Get the different folders of images 131 | for d in dirlist: 132 | if not os.path.isdir(os.path.join(self.data_dir, d)): 133 | continue 134 | seclist = os.listdir(os.path.join(self.data_dir, d)) 135 | for s in seclist: 136 | if os.path.isdir(os.path.join(self.data_dir, d, s)): 137 | total_dirlist.append(os.path.join(d, s)) 138 | q.put(os.path.join(d, s)) 139 | # Process every folder 140 | for rank in range(num_processes): 141 | p = mp.Process(target=process_folder, args=(q, self.data_dir, output_dir, stride, self.train_scenes)) 142 | p.start() 143 | processes.append(p) 144 | for p in processes: 145 | p.join() 146 | 147 | # Collect the training frames. 148 | f = open(os.path.join(output_dir, 'train.txt'), 'w') 149 | for dirlist in os.listdir(output_dir): 150 | if os.path.isdir(os.path.join(output_dir, dirlist)): 151 | seclists = os.listdir(os.path.join(output_dir, dirlist)) 152 | for s in seclists: 153 | train_file = open(os.path.join(output_dir, dirlist, s, 'train.txt'), 'r') 154 | for l in train_file.readlines(): 155 | f.write(l) 156 | f.close() 157 | 158 | f = open(os.path.join(output_dir, 'calib_cam_to_cam.txt'), 'w') 159 | f.write('P_rect: 5.1885790117450188e+02 0.0 3.2558244941119034e+02 0.0 0.0 5.1946961112127485e+02 2.5373616633400465e+02 0.0 0.0 0.0 1.0 0.0') 160 | f.close() 161 | print('Data Preparation Finished.') 162 | 163 | def __getitem__(self, idx): 164 | raise NotImplementedError 165 | 166 | 167 | 168 | class NYU_v2(torch.utils.data.Dataset): 169 | def __init__(self, data_dir, num_scales=3, img_hw=(448, 576), num_iterations=None): 170 | super(NYU_v2, self).__init__() 171 | self.data_dir = data_dir 172 | self.num_scales = num_scales 173 | self.img_hw = img_hw 174 | self.num_iterations = num_iterations 175 | self.undist_coeff = np.array([2.07966153e-01, -5.8613825e-01, 7.223136313e-04, 1.047962719e-03, 4.98569866e-01]) 176 | self.mapx, self.mapy = None, None 177 | self.roi = None 178 | 179 | info_file = os.path.join(self.data_dir, 'train.txt') 180 | self.data_list = self.get_data_list(info_file) 181 | 182 | def get_data_list(self, info_file): 183 | with open(info_file, 'r') as f: 184 | lines = f.readlines() 185 | data_list = [] 186 | for line in lines: 187 | k = line.strip('\n').split() 188 | data = {} 189 | data['image_file'] = os.path.join(self.data_dir, k[0]) 190 | data['cam_intrinsic_file'] = os.path.join(self.data_dir, k[1]) 191 | data_list.append(data) 192 | print('A total of {} image pairs found'.format(len(data_list))) 193 | return data_list 194 | 195 | def count(self): 196 | return len(self.data_list) 197 | 198 | def rand_num(self, idx): 199 | num_total = self.count() 200 | np.random.seed(idx) 201 | num = np.random.randint(num_total) 202 | return num 203 | 204 | def __len__(self): 205 | if self.num_iterations is None: 206 | return self.count() 207 | else: 208 | return self.num_iterations 209 | 210 | def resize_img(self, img, img_hw): 211 | ''' 212 | Input size (N*H, W, 3) 213 | Output size (N*H', W', 3), where (H', W') == self.img_hw 214 | ''' 215 | img_h, img_w = img.shape[0], img.shape[1] 216 | img_hw_orig = (int(img_h / 2), img_w) 217 | img1, img2 = 
img[:img_hw_orig[0], :, :], img[img_hw_orig[0]:, :, :] 218 | img1_new = cv2.resize(img1, (img_hw[1], img_hw[0])) 219 | img2_new = cv2.resize(img2, (img_hw[1], img_hw[0])) 220 | img_new = np.concatenate([img1_new, img2_new], 0) 221 | return img_new 222 | 223 | def random_flip_img(self, img): 224 | is_flip = (np.random.rand() > 0.5) 225 | if is_flip: 226 | img = cv2.flip(img, 1) 227 | return img 228 | 229 | def undistort_img(self, img, K): 230 | img_h, img_w = img.shape[0], img.shape[1] 231 | img_hw_orig = (int(img_h / 2), img_w) 232 | img1, img2 = img[:img_hw_orig[0], :, :], img[img_hw_orig[0]:, :, :] 233 | 234 | h, w = img_hw_orig 235 | if self.mapx is None: 236 | newcameramtx, self.roi = cv2.getOptimalNewCameraMatrix(K, self.undist_coeff, (w,h), 1, (w,h)) 237 | self.mapx, self.mapy = cv2.initUndistortRectifyMap(K, self.undist_coeff, None, newcameramtx, (w,h), 5) 238 | 239 | img1_undist = cv2.remap(img1, self.mapx, self.mapy, cv2.INTER_LINEAR) 240 | img2_undist = cv2.remap(img2, self.mapx, self.mapy, cv2.INTER_LINEAR) 241 | x,y,w,h = self.roi 242 | img1_undist = img1_undist[y:y+h, x:x+w] 243 | img2_undist = img2_undist[y:y+h, x:x+w] 244 | img_undist = np.concatenate([img1_undist, img2_undist], 0) 245 | #cv2.imwrite('./test.png', img) 246 | #cv2.imwrite('./test_undist.png', img_undist) 247 | #pdb.set_trace() 248 | return img_undist 249 | 250 | def preprocess_img(self, img, K, img_hw=None, is_test=False): 251 | if img_hw is None: 252 | img_hw = self.img_hw 253 | if not is_test: 254 | #img = img 255 | img = self.undistort_img(img, K) 256 | #img = self.random_flip_img(img) 257 | 258 | img = self.resize_img(img, img_hw) 259 | img = img / 255.0 260 | return img 261 | 262 | def read_cam_intrinsic(self, fname): 263 | with open(fname, 'r') as f: 264 | lines = f.readlines() 265 | data = lines[-1].strip('\n').split(' ')[1:] 266 | data = [float(k) for k in data] 267 | data = np.array(data).reshape(3,4) 268 | cam_intrinsics = data[:3,:3] 269 | return cam_intrinsics 270 | 271 | def rescale_intrinsics(self, K, img_hw_orig, img_hw_new): 272 | K_new = copy.deepcopy(K) 273 | K_new[0,:] = K_new[0,:] * img_hw_new[0] / img_hw_orig[0] 274 | K_new[1,:] = K_new[1,:] * img_hw_new[1] / img_hw_orig[1] 275 | return K_new 276 | 277 | def get_intrinsics_per_scale(self, K, scale): 278 | K_new = copy.deepcopy(K) 279 | K_new[0,:] = K_new[0,:] / (2**scale) 280 | K_new[1,:] = K_new[1,:] / (2**scale) 281 | K_new_inv = np.linalg.inv(K_new) 282 | return K_new, K_new_inv 283 | 284 | def get_multiscale_intrinsics(self, K, num_scales): 285 | K_ms, K_inv_ms = [], [] 286 | for s in range(num_scales): 287 | K_new, K_new_inv = self.get_intrinsics_per_scale(K, s) 288 | K_ms.append(K_new[None,:,:]) 289 | K_inv_ms.append(K_new_inv[None,:,:]) 290 | K_ms = np.concatenate(K_ms, 0) 291 | K_inv_ms = np.concatenate(K_inv_ms, 0) 292 | return K_ms, K_inv_ms 293 | 294 | def __getitem__(self, idx): 295 | ''' 296 | Returns: 297 | - img torch.Tensor (N * H, W, 3) 298 | - K torch.Tensor (num_scales, 3, 3) 299 | - K_inv torch.Tensor (num_scales, 3, 3) 300 | ''' 301 | if idx >= self.num_iterations: 302 | raise IndexError 303 | if self.num_iterations is not None: 304 | idx = self.rand_num(idx) 305 | data = self.data_list[idx] 306 | # load img 307 | img = cv2.imread(data['image_file']) 308 | img_hw_orig = (int(img.shape[0] / 2), img.shape[1]) 309 | 310 | # load intrinsic 311 | cam_intrinsic_orig = self.read_cam_intrinsic(data['cam_intrinsic_file']) 312 | cam_intrinsic = self.rescale_intrinsics(cam_intrinsic_orig, img_hw_orig, self.img_hw) 313 | K_ms, 
K_inv_ms = self.get_multiscale_intrinsics(cam_intrinsic, self.num_scales) # (num_scales, 3, 3), (num_scales, 3, 3) 314 | 315 | # image preprocessing 316 | img = self.preprocess_img(img, cam_intrinsic_orig, self.img_hw) # (img_h * 2, img_w, 3) 317 | img = img.transpose(2,0,1) 318 | 319 | 320 | return torch.from_numpy(img).float(), torch.from_numpy(K_ms).float(), torch.from_numpy(K_inv_ms).float() 321 | 322 | 323 | 324 | 325 | 326 | if __name__ == '__main__': 327 | data_dir = '/home4/zhaow/data/kitti' 328 | dirlist = os.listdir('/home4/zhaow/data/kitti') 329 | output_dir = '/home4/zhaow/data/kitti_seq/data_generated_s2' 330 | total_dirlist = [] 331 | # Get the different folders of images 332 | for d in dirlist: 333 | seclist = os.listdir(os.path.join(data_dir, d)) 334 | for s in seclist: 335 | if os.path.isdir(os.path.join(data_dir, d, s)): 336 | total_dirlist.append(os.path.join(d, s)) 337 | 338 | F = open(os.path.join(output_dir, 'train.txt'), 'w') 339 | for p in total_dirlist: 340 | traintxt = os.path.join(os.path.join(output_dir, p), 'train.txt') 341 | f = open(traintxt, 'r') 342 | for line in f.readlines(): 343 | F.write(line) 344 | print(traintxt) 345 | 346 | 347 | 348 | 349 | 350 | -------------------------------------------------------------------------------- /core/dataset/sintel.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | import numpy as np 3 | import cv2 4 | from tqdm import tqdm 5 | import torch.multiprocessing as mp 6 | import pdb 7 | 8 | def process_folder(q, data_dir, output_dir, stride=1): 9 | while True: 10 | if q.empty(): 11 | break 12 | folder = q.get() 13 | image_path = os.path.join(data_dir, folder) 14 | dump_image_path = os.path.join(output_dir, folder) 15 | if not os.path.isdir(dump_image_path): 16 | os.makedirs(dump_image_path) 17 | f = open(os.path.join(dump_image_path, 'train.txt'), 'w') 18 | 19 | # Note. the os.listdir method returns arbitary order of list. We need correct order. 
20 | numbers = len(os.listdir(image_path)) 21 | names = list(os.listdir(image_path)) 22 | names.sort() 23 | if numbers < 3: 24 | print("this folder do not have enough image, numbers < 3!") 25 | for n in range(numbers - 2*stride): 26 | s_idx = n 27 | m_idx = s_idx + stride 28 | e_idx = s_idx + 2*stride 29 | 30 | #curr_image = cv2.imread(os.path.join(image_path, '%.5d'%s_idx)+'.png') 31 | #middle_image = cv2.imread(os.path.join(image_path, '%.5d'%m_idx)+'.png') 32 | #next_image = cv2.imread(os.path.join(image_path, '%.5d'%e_idx)+'.png') 33 | curr_image = cv2.imread(os.path.join(image_path, names[s_idx])) 34 | middle_image = cv2.imread(os.path.join(image_path, names[m_idx])) 35 | next_image = cv2.imread(os.path.join(image_path, names[e_idx])) 36 | 37 | if curr_image is None: 38 | print(os.path.join(image_path, '%.5d'%s_idx)+'.png') 39 | continue 40 | 41 | if middle_image is None: 42 | print(os.path.join(image_path, '%.5d'%m_idx)+'.png') 43 | continue 44 | 45 | if next_image is None: 46 | print(os.path.join(image_path, '%.5d'%e_idx)+'.png') 47 | continue 48 | 49 | seq_images = np.concatenate([curr_image, middle_image, next_image], axis=0) 50 | cv2.imwrite(os.path.join(dump_image_path, '%.10d'%s_idx)+'.png', seq_images.astype('uint8')) 51 | 52 | # Write training files 53 | f.write('%s\n' % (os.path.join(folder, '%.10d'%s_idx)+'.png')) 54 | print(folder) 55 | 56 | 57 | class SINTEL(object): 58 | def __init__(self, data_dir): 59 | self.data_dir = data_dir 60 | 61 | def __len__(self): 62 | raise NotImplementedError 63 | 64 | 65 | def prepare_data_mp(self, output_dir, stride=1): 66 | num_processes = 8 67 | processes = [] 68 | q = mp.Queue() 69 | if not os.path.isfile(os.path.join(output_dir, 'train.txt')): 70 | os.makedirs(output_dir) 71 | #f = open(os.path.join(output_dir, 'train.txt'), 'w') 72 | print('Preparing sequence data....') 73 | if not os.path.isdir(self.data_dir): 74 | raise NotImplementedError 75 | dirlist = os.listdir(self.data_dir) 76 | total_dirlist = [] 77 | # Get the different folders of images 78 | for d in dirlist: 79 | if os.path.isdir(os.path.join(self.data_dir, d)): 80 | total_dirlist.append(d) 81 | q.put(d) 82 | # Process every folder 83 | for rank in range(num_processes): 84 | p = mp.Process(target=process_folder, args=(q, self.data_dir, output_dir, stride)) 85 | p.start() 86 | processes.append(p) 87 | for p in processes: 88 | p.join() 89 | 90 | # Collect the training frames. 
91 | f = open(os.path.join(output_dir, 'train.txt'), 'w') 92 | for date in os.listdir(output_dir): 93 | if os.path.isdir(os.path.join(output_dir, date)): 94 | train_file = open(os.path.join(output_dir, date, 'train.txt'), 'r') 95 | for l in train_file.readlines(): 96 | f.write(l) 97 | 98 | 99 | print('Data Preparation Finished.') 100 | 101 | 102 | def __getitem__(self, idx): 103 | raise NotImplementedError 104 | 105 | 106 | if __name__ == '__main__': 107 | data_dir = '/home/ljf/Dataset/Sintel/scene' 108 | dirlist = os.listdir('/home4/zhaow/data/kitti') 109 | output_dir = '/home4/zhaow/data/kitti_seq/data_generated_s2' 110 | total_dirlist = [] 111 | # Get the different folders of images 112 | for d in dirlist: 113 | seclist = os.listdir(os.path.join(data_dir, d)) 114 | for s in seclist: 115 | if os.path.isdir(os.path.join(data_dir, d, s)): 116 | total_dirlist.append(os.path.join(d, s)) 117 | 118 | F = open(os.path.join(output_dir, 'train.txt'), 'w') 119 | for p in total_dirlist: 120 | traintxt = os.path.join(os.path.join(output_dir, p), 'train.txt') 121 | f = open(traintxt, 'r') 122 | for line in f.readlines(): 123 | F.write(line) 124 | print(traintxt) 125 | 126 | 127 | 128 | 129 | 130 | -------------------------------------------------------------------------------- /core/dataset/sintel_prepared.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | import numpy as np 3 | import cv2 4 | import copy 5 | 6 | import torch 7 | import torch.utils.data 8 | import pdb 9 | 10 | class SINTEL_Prepared(torch.utils.data.Dataset): 11 | def __init__(self, data_dir, num_scales=3, img_hw=(256, 832), num_iterations=None): 12 | super(SINTEL_Prepared, self).__init__() 13 | self.data_dir = data_dir 14 | self.num_scales = num_scales 15 | self.img_hw = img_hw 16 | self.num_iterations = num_iterations 17 | 18 | info_file = os.path.join(self.data_dir, 'train.txt') 19 | #info_file = os.path.join(self.data_dir, 'train_flow.txt') 20 | self.data_list = self.get_data_list(info_file) 21 | 22 | def get_data_list(self, info_file): 23 | with open(info_file, 'r') as f: 24 | lines = f.readlines() 25 | data_list = [] 26 | for line in lines: 27 | k = line.strip('\n').split() 28 | data = {} 29 | data['image_file'] = os.path.join(self.data_dir, k[0]) 30 | data_list.append(data) 31 | print('A total of {} image pairs found'.format(len(data_list))) 32 | return data_list 33 | 34 | def count(self): 35 | return len(self.data_list) 36 | 37 | def rand_num(self, idx): 38 | num_total = self.count() 39 | np.random.seed(idx) 40 | num = np.random.randint(num_total) 41 | return num 42 | 43 | def __len__(self): 44 | if self.num_iterations is None: 45 | return self.count() 46 | else: 47 | return self.num_iterations 48 | 49 | def resize_img_origin(self, img, img_hw): 50 | ''' 51 | Input size (N*H, W, 3) 52 | Output size (N*H', W', 3), where (H', W') == self.img_hw 53 | ''' 54 | img_h, img_w = img.shape[0], img.shape[1] 55 | img_hw_orig = (int(img_h / 2), img_w) 56 | img1, img2 = img[:img_hw_orig[0], :, :], img[img_hw_orig[0]:, :, :] 57 | img1_new = cv2.resize(img1, (img_hw[1], img_hw[0])) 58 | img2_new = cv2.resize(img2, (img_hw[1], img_hw[0])) 59 | img_new = np.concatenate([img1_new, img2_new], 0) 60 | return img_new 61 | 62 | def resize_img(self, img, img_hw): 63 | ''' 64 | Input size (N*H, W, 3) 65 | Output size (N*H', W', 3), where (H', W') == self.img_hw 66 | ''' 67 | img_h, img_w = img.shape[0], img.shape[1] 68 | img_hw_orig = (int(img_h / 3), img_w) 69 | img1, img2, img3 = 
img[:img_hw_orig[0], :, :], img[img_hw_orig[0]:2*img_hw_orig[0], :, :], img[2*img_hw_orig[0]:3*img_hw_orig[0], :, :] 70 | img1_new = cv2.resize(img1, (img_hw[1], img_hw[0])) 71 | img2_new = cv2.resize(img2, (img_hw[1], img_hw[0])) 72 | img3_new = cv2.resize(img3, (img_hw[1], img_hw[0])) 73 | img_new = np.concatenate([img1_new, img2_new, img3_new], 0) 74 | return img_new 75 | 76 | def random_flip_img(self, img): 77 | is_flip = (np.random.rand() > 0.5) 78 | if is_flip: 79 | img = cv2.flip(img, 1) 80 | return img 81 | 82 | def preprocess_img(self, img, img_hw=None, is_test=False): 83 | if img_hw is None: 84 | img_hw = self.img_hw 85 | img = self.resize_img(img, img_hw) 86 | if not is_test: 87 | img = self.random_flip_img(img) 88 | img = img / 255.0 89 | return img 90 | 91 | def preprocess_img_origin(self, img, img_hw=None, is_test=False): 92 | if img_hw is None: 93 | img_hw = self.img_hw 94 | img = self.resize_img_origin(img, img_hw) 95 | if not is_test: 96 | img = self.random_flip_img(img) 97 | img = img / 255.0 98 | return img 99 | 100 | def __getitem__(self, idx): 101 | ''' 102 | Returns: 103 | - img torch.Tensor (N * H, W, 3) 104 | - K torch.Tensor (num_scales, 3, 3) 105 | - K_inv torch.Tensor (num_scales, 3, 3) 106 | ''' 107 | if self.num_iterations is not None: 108 | idx = self.rand_num(idx) 109 | data = self.data_list[idx] 110 | # load img 111 | img = cv2.imread(data['image_file']) 112 | #img_hw_orig = (int(img.shape[0] / 3), img.shape[1]) 113 | img = self.preprocess_img(img, self.img_hw) # (img_h * 3, img_w, 3) 114 | img = img.transpose(2,0,1) 115 | 116 | return torch.from_numpy(img).float() 117 | 118 | if __name__ == '__main__': 119 | pass 120 | 121 | -------------------------------------------------------------------------------- /core/dataset/sintel_raw.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | import numpy as np 3 | import cv2 4 | from tqdm import tqdm 5 | import torch.multiprocessing as mp 6 | import pdb 7 | 8 | def process_folder(q, data_dir, output_dir, stride=1): 9 | while True: 10 | if q.empty(): 11 | break 12 | folder = q.get() 13 | image_path = os.path.join(data_dir, folder) 14 | dump_image_path = os.path.join(output_dir, folder) 15 | if not os.path.isdir(dump_image_path): 16 | os.makedirs(dump_image_path) 17 | f = open(os.path.join(dump_image_path, 'train.txt'), 'w') 18 | 19 | # Note. the os.listdir method returns arbitary order of list. We need correct order. 
20 | numbers = len(os.listdir(image_path)) 21 | names = list(os.listdir(image_path)) 22 | names.sort() 23 | if numbers < 3: 24 | print("this folder do not have enough image, numbers < 3!") 25 | for n in range(numbers - 2*stride): 26 | s_idx = n 27 | m_idx = s_idx + stride 28 | e_idx = s_idx + 2*stride 29 | 30 | #curr_image = cv2.imread(os.path.join(image_path, '%.5d'%s_idx)+'.png') 31 | #middle_image = cv2.imread(os.path.join(image_path, '%.5d'%m_idx)+'.png') 32 | #next_image = cv2.imread(os.path.join(image_path, '%.5d'%e_idx)+'.png') 33 | curr_image = cv2.imread(os.path.join(image_path, names[s_idx])) 34 | middle_image = cv2.imread(os.path.join(image_path, names[m_idx])) 35 | next_image = cv2.imread(os.path.join(image_path, names[e_idx])) 36 | 37 | if curr_image is None: 38 | print(os.path.join(image_path, '%.5d'%s_idx)+'.png') 39 | continue 40 | 41 | if middle_image is None: 42 | print(os.path.join(image_path, '%.5d'%m_idx)+'.png') 43 | continue 44 | 45 | if next_image is None: 46 | print(os.path.join(image_path, '%.5d'%e_idx)+'.png') 47 | continue 48 | 49 | seq_images = np.concatenate([curr_image, middle_image, next_image], axis=0) 50 | cv2.imwrite(os.path.join(dump_image_path, '%.10d'%s_idx)+'.png', seq_images.astype('uint8')) 51 | 52 | # Write training files 53 | f.write('%s\n' % (os.path.join(folder, '%.10d'%s_idx)+'.png')) 54 | print(folder) 55 | 56 | 57 | class SINTEL_RAW(object): 58 | def __init__(self, data_dir): 59 | self.data_dir = data_dir 60 | 61 | def __len__(self): 62 | raise NotImplementedError 63 | 64 | 65 | def prepare_data_mp(self, output_dir, stride=1): 66 | num_processes = 8 67 | processes = [] 68 | q = mp.Queue() 69 | if not os.path.isfile(os.path.join(output_dir, 'train.txt')): 70 | os.makedirs(output_dir) 71 | #f = open(os.path.join(output_dir, 'train.txt'), 'w') 72 | print('Preparing sequence data....') 73 | if not os.path.isdir(self.data_dir): 74 | raise NotImplementedError 75 | dirlist = os.listdir(self.data_dir) 76 | total_dirlist = [] 77 | # Get the different folders of images 78 | for d in dirlist: 79 | if os.path.isdir(os.path.join(self.data_dir, d)): 80 | total_dirlist.append(d) 81 | q.put(d) 82 | # Process every folder 83 | for rank in range(num_processes): 84 | p = mp.Process(target=process_folder, args=(q, self.data_dir, output_dir, stride)) 85 | p.start() 86 | processes.append(p) 87 | for p in processes: 88 | p.join() 89 | 90 | # Collect the training frames. 
91 | f = open(os.path.join(output_dir, 'train.txt'), 'w') 92 | for date in os.listdir(output_dir): 93 | if os.path.isdir(os.path.join(output_dir, date)): 94 | train_file = open(os.path.join(output_dir, date, 'train.txt'), 'r') 95 | for l in train_file.readlines(): 96 | f.write(l) 97 | 98 | 99 | print('Data Preparation Finished.') 100 | 101 | 102 | def __getitem__(self, idx): 103 | raise NotImplementedError 104 | 105 | 106 | if __name__ == '__main__': 107 | data_dir = '/home/ljf/Dataset/Sintel/scene' 108 | dirlist = os.listdir('/home4/zhaow/data/kitti') 109 | output_dir = '/home4/zhaow/data/kitti_seq/data_generated_s2' 110 | total_dirlist = [] 111 | # Get the different folders of images 112 | for d in dirlist: 113 | seclist = os.listdir(os.path.join(data_dir, d)) 114 | for s in seclist: 115 | if os.path.isdir(os.path.join(data_dir, d, s)): 116 | total_dirlist.append(os.path.join(d, s)) 117 | 118 | F = open(os.path.join(output_dir, 'train.txt'), 'w') 119 | for p in total_dirlist: 120 | traintxt = os.path.join(os.path.join(output_dir, p), 'train.txt') 121 | f = open(traintxt, 'r') 122 | for line in f.readlines(): 123 | F.write(line) 124 | print(traintxt) 125 | 126 | 127 | 128 | 129 | 130 | -------------------------------------------------------------------------------- /core/evaluation/__init__.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 3 | from evaluate_flow import eval_flow_avg, load_gt_flow_kitti 4 | from evaluate_mask import load_gt_mask 5 | from evaluate_depth import eval_depth 6 | -------------------------------------------------------------------------------- /core/evaluation/__pycache__/__init__.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jianfenglihg/UnOpticalFlow/4feefe8ce94e68fa8c6cbc873e12b712b313049d/core/evaluation/__pycache__/__init__.cpython-35.pyc -------------------------------------------------------------------------------- /core/evaluation/__pycache__/evaluate_depth.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jianfenglihg/UnOpticalFlow/4feefe8ce94e68fa8c6cbc873e12b712b313049d/core/evaluation/__pycache__/evaluate_depth.cpython-35.pyc -------------------------------------------------------------------------------- /core/evaluation/__pycache__/evaluate_flow.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jianfenglihg/UnOpticalFlow/4feefe8ce94e68fa8c6cbc873e12b712b313049d/core/evaluation/__pycache__/evaluate_flow.cpython-35.pyc -------------------------------------------------------------------------------- /core/evaluation/__pycache__/evaluate_mask.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jianfenglihg/UnOpticalFlow/4feefe8ce94e68fa8c6cbc873e12b712b313049d/core/evaluation/__pycache__/evaluate_mask.cpython-35.pyc -------------------------------------------------------------------------------- /core/evaluation/__pycache__/evaluation_utils.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jianfenglihg/UnOpticalFlow/4feefe8ce94e68fa8c6cbc873e12b712b313049d/core/evaluation/__pycache__/evaluation_utils.cpython-35.pyc 
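Stepping back from the cached binaries: `core/evaluation/__init__.py` above exposes the package's public API. A minimal, hedged usage sketch follows; the paths, the `cfg` fields and `pred_flows` are placeholders (`test.py` builds the real config from the YAML files):

```python
# Hedged usage sketch of the core.evaluation API; paths and cfg are placeholders.
from types import SimpleNamespace
from core.evaluation import load_gt_flow_kitti, eval_flow_avg

cfg = SimpleNamespace(img_hw=[256, 832], model_dir='./results')  # fields read by eval_flow_avg
gt_flows, noc_masks = load_gt_flow_kitti('/path/to/kitti2015/training', 'kitti_2015')
pred_flows = [...]  # one (H, W, 2) numpy array per pair, e.g. Model_flow.inference_flow outputs
print(eval_flow_avg(gt_flows, noc_masks, pred_flows, cfg, write_img=False))
```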
-------------------------------------------------------------------------------- /core/evaluation/__pycache__/flowlib.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jianfenglihg/UnOpticalFlow/4feefe8ce94e68fa8c6cbc873e12b712b313049d/core/evaluation/__pycache__/flowlib.cpython-35.pyc -------------------------------------------------------------------------------- /core/evaluation/eval_odom.py: -------------------------------------------------------------------------------- 1 | import copy 2 | from matplotlib import pyplot as plt 3 | import numpy as np 4 | import os 5 | from glob import glob 6 | import pdb 7 | 8 | 9 | def scale_lse_solver(X, Y): 10 | """Least-square-error solver 11 | Compute the optimal scaling factor s so that s*X - Y is minimal in the least-squares sense 12 | Args: 13 | X (KxN array): current data 14 | Y (KxN array): reference data 15 | Returns: 16 | scale (float): scaling factor 17 | """ 18 | scale = np.sum(X * Y)/np.sum(X ** 2) 19 | return scale 20 | 21 | 22 | def umeyama_alignment(x, y, with_scale=False): 23 | """ 24 | Computes the least-squares parameters of a Sim(m) transformation 25 | that minimizes the distance between a set of registered points. 26 | Umeyama, Shinji: Least-squares estimation of transformation parameters 27 | between two point patterns. IEEE PAMI, 1991 28 | :param x: mxn matrix of points, m = dimension, n = nr. of data points 29 | :param y: mxn matrix of points, m = dimension, n = nr. of data points 30 | :param with_scale: set to True to align also the scale (default: 1.0 scale) 31 | :return: r, t, c - rotation matrix, translation vector and scale factor 32 | """ 33 | if x.shape != y.shape: 34 | assert False, "x.shape not equal to y.shape" 35 | 36 | # m = dimension, n = nr. of data points 37 | m, n = x.shape 38 | 39 | # means, eq. 34 and 35 40 | mean_x = x.mean(axis=1) 41 | mean_y = y.mean(axis=1) 42 | 43 | # variance, eq. 36 44 | # "transpose" for column subtraction 45 | sigma_x = 1.0 / n * (np.linalg.norm(x - mean_x[:, np.newaxis])**2) 46 | 47 | # covariance matrix, eq. 38 48 | outer_sum = np.zeros((m, m)) 49 | for i in range(n): 50 | outer_sum += np.outer((y[:, i] - mean_y), (x[:, i] - mean_x)) 51 | cov_xy = np.multiply(1.0 / n, outer_sum) 52 | 53 | # SVD (text betw. eq. 38 and 39) 54 | u, d, v = np.linalg.svd(cov_xy) 55 | 56 | # S matrix, eq. 43 57 | s = np.eye(m) 58 | if np.linalg.det(u) * np.linalg.det(v) < 0.0: 59 | # Ensure a RHS coordinate system (Kabsch algorithm). 60 | s[m - 1, m - 1] = -1 61 | 62 | # rotation, eq. 40 63 | r = u.dot(s).dot(v) 64 | 65 | # scale & translation, eq.
42 and 41 66 | c = 1 / sigma_x * np.trace(np.diag(d).dot(s)) if with_scale else 1.0 67 | t = mean_y - np.multiply(c, r.dot(mean_x)) 68 | 69 | return r, t, c 70 | 71 | 72 | class KittiEvalOdom(): 73 | # ---------------------------------------------------------------------- 74 | # poses: N,4,4 75 | # pose: 4,4 76 | # ---------------------------------------------------------------------- 77 | def __init__(self): 78 | self.lengths = [100, 200, 300, 400, 500, 600, 700, 800] 79 | self.num_lengths = len(self.lengths) 80 | 81 | def loadPoses(self, file_name): 82 | # ---------------------------------------------------------------------- 83 | # Each line in the file should follow one of the following structures 84 | # (1) idx pose(3x4 matrix in terms of 12 numbers) 85 | # (2) pose(3x4 matrix in terms of 12 numbers) 86 | # ---------------------------------------------------------------------- 87 | f = open(file_name, 'r') 88 | s = f.readlines() 89 | f.close() 90 | file_len = len(s) 91 | poses = {} 92 | for cnt, line in enumerate(s): 93 | P = np.eye(4) 94 | line_split = [float(i) for i in line.split(" ")] 95 | withIdx = int(len(line_split) == 13) 96 | for row in range(3): 97 | for col in range(4): 98 | P[row, col] = line_split[row*4 + col + withIdx] 99 | if withIdx: 100 | frame_idx = line_split[0] 101 | else: 102 | frame_idx = cnt 103 | poses[frame_idx] = P 104 | return poses 105 | 106 | def trajectory_distances(self, poses): 107 | # ---------------------------------------------------------------------- 108 | # poses: dictionary: [frame_idx: pose] 109 | # ---------------------------------------------------------------------- 110 | dist = [0] 111 | sort_frame_idx = sorted(poses.keys()) 112 | for i in range(len(sort_frame_idx)-1): 113 | cur_frame_idx = sort_frame_idx[i] 114 | next_frame_idx = sort_frame_idx[i+1] 115 | P1 = poses[cur_frame_idx] 116 | P2 = poses[next_frame_idx] 117 | dx = P1[0, 3] - P2[0, 3] 118 | dy = P1[1, 3] - P2[1, 3] 119 | dz = P1[2, 3] - P2[2, 3] 120 | dist.append(dist[i]+np.sqrt(dx**2+dy**2+dz**2)) 121 | return dist 122 | 123 | def rotation_error(self, pose_error): 124 | a = pose_error[0, 0] 125 | b = pose_error[1, 1] 126 | c = pose_error[2, 2] 127 | d = 0.5*(a+b+c-1.0) 128 | rot_error = np.arccos(max(min(d, 1.0), -1.0)) 129 | return rot_error 130 | 131 | def translation_error(self, pose_error): 132 | dx = pose_error[0, 3] 133 | dy = pose_error[1, 3] 134 | dz = pose_error[2, 3] 135 | return np.sqrt(dx**2+dy**2+dz**2) 136 | 137 | def last_frame_from_segment_length(self, dist, first_frame, len_): 138 | for i in range(first_frame, len(dist), 1): 139 | if dist[i] > (dist[first_frame] + len_): 140 | return i 141 | return -1 142 | 143 | def calc_sequence_errors(self, poses_gt, poses_result): 144 | err = [] 145 | dist = self.trajectory_distances(poses_gt) 146 | self.step_size = 10 147 | 148 | for first_frame in range(0, len(poses_gt), self.step_size): 149 | for i in range(self.num_lengths): 150 | len_ = self.lengths[i] 151 | last_frame = self.last_frame_from_segment_length(dist, first_frame, len_) 152 | 153 | # ---------------------------------------------------------------------- 154 | # Continue if sequence not long enough 155 | # ---------------------------------------------------------------------- 156 | if last_frame == -1 or not(last_frame in poses_result.keys()) or not(first_frame in poses_result.keys()): 157 | continue 158 | 159 | # ---------------------------------------------------------------------- 160 | # compute rotational and translational errors 161 | # 
---------------------------------------------------------------------- 162 | pose_delta_gt = np.dot(np.linalg.inv(poses_gt[first_frame]), poses_gt[last_frame]) 163 | pose_delta_result = np.dot(np.linalg.inv(poses_result[first_frame]), poses_result[last_frame]) 164 | pose_error = np.dot(np.linalg.inv(pose_delta_result), pose_delta_gt) 165 | 166 | r_err = self.rotation_error(pose_error) 167 | t_err = self.translation_error(pose_error) 168 | 169 | # ---------------------------------------------------------------------- 170 | # compute speed 171 | # ---------------------------------------------------------------------- 172 | num_frames = last_frame - first_frame + 1.0 173 | speed = len_/(0.1*num_frames) 174 | 175 | err.append([first_frame, r_err/len_, t_err/len_, len_, speed]) 176 | return err 177 | 178 | def save_sequence_errors(self, err, file_name): 179 | fp = open(file_name, 'w') 180 | for i in err: 181 | line_to_write = " ".join([str(j) for j in i]) 182 | fp.writelines(line_to_write+"\n") 183 | fp.close() 184 | 185 | def compute_overall_err(self, seq_err): 186 | t_err = 0 187 | r_err = 0 188 | 189 | seq_len = len(seq_err) 190 | 191 | for item in seq_err: 192 | r_err += item[1] 193 | t_err += item[2] 194 | ave_t_err = t_err / seq_len 195 | ave_r_err = r_err / seq_len 196 | return ave_t_err, ave_r_err 197 | 198 | def plotPath(self, seq, poses_gt, poses_result): 199 | plot_keys = ["Ground Truth", "Ours"] 200 | fontsize_ = 20 201 | plot_num =-1 202 | 203 | poses_dict = {} 204 | poses_dict["Ground Truth"] = poses_gt 205 | poses_dict["Ours"] = poses_result 206 | 207 | fig = plt.figure() 208 | ax = plt.gca() 209 | ax.set_aspect('equal') 210 | 211 | for key in plot_keys: 212 | pos_xz = [] 213 | # for pose in poses_dict[key]: 214 | for frame_idx in sorted(poses_dict[key].keys()): 215 | pose = poses_dict[key][frame_idx] 216 | pos_xz.append([pose[0,3], pose[2,3]]) 217 | pos_xz = np.asarray(pos_xz) 218 | plt.plot(pos_xz[:,0], pos_xz[:,1], label = key) 219 | 220 | plt.legend(loc="upper right", prop={'size': fontsize_}) 221 | plt.xticks(fontsize=fontsize_) 222 | plt.yticks(fontsize=fontsize_) 223 | plt.xlabel('x (m)', fontsize=fontsize_) 224 | plt.ylabel('z (m)', fontsize=fontsize_) 225 | fig.set_size_inches(10, 10) 226 | png_title = "sequence_"+(seq) 227 | plt.savefig(self.plot_path_dir + "/" + png_title + ".pdf", bbox_inches='tight', pad_inches=0) 228 | # plt.show() 229 | 230 | def compute_segment_error(self, seq_errs): 231 | # ---------------------------------------------------------------------- 232 | # This function calculates average errors for different segment. 
233 | # ---------------------------------------------------------------------- 234 | 235 | segment_errs = {} 236 | avg_segment_errs = {} 237 | for len_ in self.lengths: 238 | segment_errs[len_] = [] 239 | # ---------------------------------------------------------------------- 240 | # Get errors 241 | # ---------------------------------------------------------------------- 242 | for err in seq_errs: 243 | len_ = err[3] 244 | t_err = err[2] 245 | r_err = err[1] 246 | segment_errs[len_].append([t_err, r_err]) 247 | # ---------------------------------------------------------------------- 248 | # Compute average 249 | # ---------------------------------------------------------------------- 250 | for len_ in self.lengths: 251 | if segment_errs[len_] != []: 252 | avg_t_err = np.mean(np.asarray(segment_errs[len_])[:, 0]) 253 | avg_r_err = np.mean(np.asarray(segment_errs[len_])[:, 1]) 254 | avg_segment_errs[len_] = [avg_t_err, avg_r_err] 255 | else: 256 | avg_segment_errs[len_] = [] 257 | return avg_segment_errs 258 | 259 | def scale_optimization(self, gt, pred): 260 | """ Optimize scaling factor 261 | Args: 262 | gt (4x4 array dict): ground-truth poses 263 | pred (4x4 array dict): predicted poses 264 | Returns: 265 | new_pred (4x4 array dict): predicted poses after optimization 266 | """ 267 | pred_updated = copy.deepcopy(pred) 268 | xyz_pred = [] 269 | xyz_ref = [] 270 | for i in pred: 271 | pose_pred = pred[i] 272 | pose_ref = gt[i] 273 | xyz_pred.append(pose_pred[:3, 3]) 274 | xyz_ref.append(pose_ref[:3, 3]) 275 | xyz_pred = np.asarray(xyz_pred) 276 | xyz_ref = np.asarray(xyz_ref) 277 | scale = scale_lse_solver(xyz_pred, xyz_ref) 278 | for i in pred_updated: 279 | pred_updated[i][:3, 3] *= scale 280 | return pred_updated 281 | 282 | def eval(self, gt_txt, result_txt, seq=None): 283 | # gt_dir: the directory of groundtruth poses txt 284 | # results_dir: the directory of predicted poses txt 285 | self.plot_path_dir = os.path.dirname(result_txt) + "/plot_path" 286 | if not os.path.exists(self.plot_path_dir): 287 | os.makedirs(self.plot_path_dir) 288 | 289 | self.gt_txt = gt_txt 290 | 291 | ave_t_errs = [] 292 | ave_r_errs = [] 293 | 294 | poses_result = self.loadPoses(result_txt) 295 | poses_gt = self.loadPoses(self.gt_txt) 296 | 297 | # Pose alignment to first frame 298 | idx_0 = sorted(list(poses_result.keys()))[0] 299 | pred_0 = poses_result[idx_0] 300 | gt_0 = poses_gt[idx_0] 301 | for cnt in poses_result: 302 | poses_result[cnt] = np.linalg.inv(pred_0) @ poses_result[cnt] 303 | poses_gt[cnt] = np.linalg.inv(gt_0) @ poses_gt[cnt] 304 | 305 | # get XYZ 306 | xyz_gt = [] 307 | xyz_result = [] 308 | for cnt in poses_result: 309 | xyz_gt.append([poses_gt[cnt][0, 3], poses_gt[cnt][1, 3], poses_gt[cnt][2, 3]]) 310 | xyz_result.append([poses_result[cnt][0, 3], poses_result[cnt][1, 3], poses_result[cnt][2, 3]]) 311 | xyz_gt = np.asarray(xyz_gt).transpose(1, 0) 312 | xyz_result = np.asarray(xyz_result).transpose(1, 0) 313 | 314 | r, t, scale = umeyama_alignment(xyz_result, xyz_gt, True) 315 | 316 | align_transformation = np.eye(4) 317 | align_transformation[:3:, :3] = r 318 | align_transformation[:3, 3] = t 319 | 320 | for cnt in poses_result: 321 | poses_result[cnt][:3, 3] *= scale 322 | poses_result[cnt] = align_transformation @ poses_result[cnt] 323 | 324 | # ---------------------------------------------------------------------- 325 | # compute sequence errors 326 | # ---------------------------------------------------------------------- 327 | seq_err = self.calc_sequence_errors(poses_gt, 
poses_result) 328 | 329 | # ---------------------------------------------------------------------- 330 | # Compute segment errors 331 | # ---------------------------------------------------------------------- 332 | avg_segment_errs = self.compute_segment_error(seq_err) 333 | 334 | # ---------------------------------------------------------------------- 335 | # compute overall error 336 | # ---------------------------------------------------------------------- 337 | ave_t_err, ave_r_err = self.compute_overall_err(seq_err) 338 | print("Sequence: " + seq) 339 | print("Translational error (%): ", ave_t_err*100) 340 | print("Rotational error (deg/100m): ", ave_r_err/np.pi*180*100) 341 | ave_t_errs.append(ave_t_err) 342 | ave_r_errs.append(ave_r_err) 343 | 344 | # Plotting 345 | self.plotPath(seq, poses_gt, poses_result) 346 | 347 | print("-------------------- For Copying ------------------------------") 348 | for i in range(len(ave_t_errs)): 349 | print("{0:.2f}".format(ave_t_errs[i]*100)) 350 | print("{0:.2f}".format(ave_r_errs[i]/np.pi*180*100)) 351 | 352 | 353 | 354 | if __name__ == '__main__': 355 | import argparse 356 | parser = argparse.ArgumentParser(description='KITTI evaluation') 357 | parser.add_argument('--gt_txt', type=str, required=True, help="Groundtruth directory") 358 | parser.add_argument('--result_txt', type=str, required=True, help="Result directory") 359 | parser.add_argument('--seq', type=str, help="sequences to be evaluated", default='09') 360 | args = parser.parse_args() 361 | 362 | eval_tool = KittiEvalOdom() 363 | eval_tool.eval(args.gt_txt, args.result_txt, seq=args.seq) 364 | -------------------------------------------------------------------------------- /core/evaluation/evaluate_depth.py: -------------------------------------------------------------------------------- 1 | from evaluation_utils import * 2 | 3 | def process_depth(gt_depth, pred_depth, min_depth, max_depth): 4 | mask = gt_depth > 0 5 | pred_depth[pred_depth < min_depth] = min_depth 6 | pred_depth[pred_depth > max_depth] = max_depth 7 | gt_depth[gt_depth < min_depth] = min_depth 8 | gt_depth[gt_depth > max_depth] = max_depth 9 | 10 | return gt_depth, pred_depth, mask 11 | 12 | 13 | def eval_depth(gt_depths, 14 | pred_depths, 15 | min_depth=1e-3, 16 | max_depth=80, nyu=False): 17 | num_samples = len(pred_depths) 18 | rms = np.zeros(num_samples, np.float32) 19 | log_rms = np.zeros(num_samples, np.float32) 20 | abs_rel = np.zeros(num_samples, np.float32) 21 | sq_rel = np.zeros(num_samples, np.float32) 22 | d1_all = np.zeros(num_samples, np.float32) 23 | a1 = np.zeros(num_samples, np.float32) 24 | a2 = np.zeros(num_samples, np.float32) 25 | a3 = np.zeros(num_samples, np.float32) 26 | 27 | for i in range(num_samples): 28 | gt_depth = gt_depths[i] 29 | pred_depth = pred_depths[i] 30 | mask = np.logical_and(gt_depth > min_depth, gt_depth < max_depth) 31 | 32 | if not nyu: 33 | gt_height, gt_width = gt_depth.shape 34 | crop = np.array([0.40810811 * gt_height, 0.99189189 * gt_height, 35 | 0.03594771 * gt_width, 0.96405229 * gt_width]).astype(np.int32) 36 | crop_mask = np.zeros(mask.shape) 37 | crop_mask[crop[0]:crop[1], crop[2]:crop[3]] = 1 38 | mask = np.logical_and(mask, crop_mask) 39 | 40 | gt_depth = gt_depth[mask] 41 | pred_depth = pred_depth[mask] 42 | scale = np.median(gt_depth) / np.median(pred_depth) 43 | pred_depth *= scale 44 | 45 | gt_depth, pred_depth, mask = process_depth( 46 | gt_depth, pred_depth, min_depth, max_depth) 47 | 48 | abs_rel[i], sq_rel[i], rms[i], log_rms[i], a1[i], a2[i], a3[ 49 | i] 
= compute_errors(gt_depth, pred_depth, nyu=nyu) 50 | 51 | 52 | return [abs_rel.mean(), sq_rel.mean(), rms.mean(), log_rms.mean(), a1.mean(), a2.mean(), a3.mean()] 53 | 54 | -------------------------------------------------------------------------------- /core/evaluation/evaluate_flow.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 3 | import numpy as np 4 | from flowlib import read_flow_png, flow_to_image 5 | import cv2 6 | import multiprocessing 7 | import functools 8 | 9 | def get_scaled_intrinsic_matrix(calib_file, zoom_x, zoom_y): 10 | intrinsics = load_intrinsics_raw(calib_file) 11 | intrinsics = scale_intrinsics(intrinsics, zoom_x, zoom_y) 12 | 13 | intrinsics[0, 1] = 0.0 14 | intrinsics[1, 0] = 0.0 15 | intrinsics[2, 0] = 0.0 16 | intrinsics[2, 1] = 0.0 17 | return intrinsics 18 | 19 | def load_intrinsics_raw(calib_file): 20 | filedata = read_raw_calib_file(calib_file) 21 | if "P_rect_02" in filedata: 22 | P_rect = filedata['P_rect_02'] 23 | else: 24 | P_rect = filedata['P2'] 25 | P_rect = np.reshape(P_rect, (3, 4)) 26 | intrinsics = P_rect[:3, :3] 27 | return intrinsics 28 | 29 | def read_raw_calib_file(filepath): 30 | # From https://github.com/utiasSTARS/pykitti/blob/master/pykitti/utils.py 31 | """Read in a calibration file and parse into a dictionary.""" 32 | data = {} 33 | 34 | with open(filepath, 'r') as f: 35 | for line in f.readlines(): 36 | key, value = line.split(':', 1) 37 | # The only non-float values in these files are dates, which 38 | # we don't care about anyway 39 | try: 40 | data[key] = np.array([float(x) for x in value.split()]) 41 | except ValueError: 42 | pass 43 | return data 44 | 45 | def scale_intrinsics(mat, sx, sy): 46 | out = np.copy(mat) 47 | out[0, 0] *= sx 48 | out[0, 2] *= sx 49 | out[1, 1] *= sy 50 | out[1, 2] *= sy 51 | return out 52 | 53 | def read_flow_gt_worker(dir_gt, i): 54 | flow_true = read_flow_png( 55 | os.path.join(dir_gt, "flow_occ", str(i).zfill(6) + "_10.png")) 56 | flow_noc_true = read_flow_png( 57 | os.path.join(dir_gt, "flow_noc", str(i).zfill(6) + "_10.png")) 58 | return flow_true, flow_noc_true[:, :, 2] 59 | 60 | def load_gt_flow_kitti(gt_dataset_dir, mode): 61 | gt_flows = [] 62 | noc_masks = [] 63 | if mode == "kitti_2012": 64 | num_gt = 194 65 | dir_gt = gt_dataset_dir 66 | elif mode == "kitti_2015": 67 | num_gt = 200 68 | dir_gt = gt_dataset_dir 69 | else: 70 | num_gt = None 71 | dir_gt = None 72 | raise ValueError('Mode {} not found.'.format(mode)) 73 | 74 | fun = functools.partial(read_flow_gt_worker, dir_gt) 75 | pool = multiprocessing.Pool(5) 76 | results = pool.imap(fun, range(num_gt), chunksize=10) 77 | pool.close() 78 | pool.join() 79 | 80 | for result in results: 81 | gt_flows.append(result[0]) 82 | noc_masks.append(result[1]) 83 | return gt_flows, noc_masks 84 | 85 | def calculate_error_rate(epe_map, gt_flow, mask): 86 | bad_pixels = np.logical_and( 87 | epe_map * mask > 3, 88 | epe_map * mask / np.maximum( 89 | np.sqrt(np.sum(np.square(gt_flow), axis=2)), 1e-10) > 0.05) 90 | return bad_pixels.sum() / mask.sum() 91 | 92 | 93 | def eval_flow_avg(gt_flows, 94 | noc_masks, 95 | pred_flows, 96 | cfg, 97 | moving_masks=None, 98 | write_img=False): 99 | error, error_noc, error_occ, error_move, error_static, error_rate = 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 100 | error_move_rate, error_static_rate = 0.0, 0.0 101 | 102 | num = len(gt_flows) 103 | for gt_flow, noc_mask, pred_flow, i in zip(gt_flows, noc_masks, 
pred_flows, 104 | range(len(gt_flows))): 105 | H, W = gt_flow.shape[0:2] 106 | 107 | pred_flow = np.copy(pred_flow) 108 | pred_flow[:, :, 0] = pred_flow[:, :, 0] / cfg.img_hw[1] * W 109 | pred_flow[:, :, 1] = pred_flow[:, :, 1] / cfg.img_hw[0] * H 110 | 111 | flo_pred = cv2.resize( 112 | pred_flow, (W, H), interpolation=cv2.INTER_LINEAR) 113 | 114 | if write_img: 115 | if not os.path.exists(os.path.join(cfg.model_dir, "pred_flow")): 116 | os.mkdir(os.path.join(cfg.model_dir, "pred_flow")) 117 | cv2.imwrite( 118 | os.path.join(cfg.model_dir, "pred_flow", 119 | str(i).zfill(6) + "_10.png"), 120 | flow_to_image(flo_pred)) 121 | cv2.imwrite( 122 | os.path.join(cfg.model_dir, "pred_flow", 123 | str(i).zfill(6) + "_10_gt.png"), 124 | flow_to_image(gt_flow[:, :, 0:2])) 125 | cv2.imwrite( 126 | os.path.join(cfg.model_dir, "pred_flow", 127 | str(i).zfill(6) + "_10_err.png"), 128 | flow_to_image( 129 | (flo_pred - gt_flow[:, :, 0:2]) * gt_flow[:, :, 2:3])) 130 | 131 | epe_map = np.sqrt( 132 | np.sum(np.square(flo_pred[:, :, 0:2] - gt_flow[:, :, 0:2]), 133 | axis=2)) 134 | error += np.sum(epe_map * gt_flow[:, :, 2]) / np.sum(gt_flow[:, :, 2]) 135 | 136 | error_noc += np.sum(epe_map * noc_mask) / np.sum(noc_mask) 137 | 138 | error_occ += np.sum(epe_map * (gt_flow[:, :, 2] - noc_mask)) / max( 139 | np.sum(gt_flow[:, :, 2] - noc_mask), 1.0) 140 | 141 | error_rate += calculate_error_rate(epe_map, gt_flow[:, :, 0:2], 142 | gt_flow[:, :, 2]) 143 | 144 | if moving_masks: 145 | move_mask = moving_masks[i] 146 | 147 | error_move_rate += calculate_error_rate( 148 | epe_map, gt_flow[:, :, 0:2], gt_flow[:, :, 2] * move_mask) 149 | error_static_rate += calculate_error_rate( 150 | epe_map, gt_flow[:, :, 0:2], 151 | gt_flow[:, :, 2] * (1.0 - move_mask)) 152 | 153 | error_move += np.sum(epe_map * gt_flow[:, :, 2] * 154 | move_mask) / np.sum(gt_flow[:, :, 2] * 155 | move_mask) 156 | error_static += np.sum(epe_map * gt_flow[:, :, 2] * ( 157 | 1.0 - move_mask)) / np.sum(gt_flow[:, :, 2] * 158 | (1.0 - move_mask)) 159 | 160 | if moving_masks: 161 | result = "{:>10}, {:>10}, {:>10}, {:>10}, {:>10}, {:>10}, {:>10}, {:>10} \n".format( 162 | 'epe', 'epe_noc', 'epe_occ', 'epe_move', 'epe_static', 163 | 'move_err_rate', 'static_err_rate', 'err_rate') 164 | result += "{:10.4f}, {:10.4f}, {:10.4f}, {:10.4f}, {:10.4f}, {:10.4f}, {:10.4f}, {:10.4f} \n".format( 165 | error / num, error_noc / num, error_occ / num, error_move / num, 166 | error_static / num, error_move_rate / num, error_static_rate / num, 167 | error_rate / num) 168 | return result 169 | else: 170 | result = "{:>10}, {:>10}, {:>10}, {:>10} \n".format( 171 | 'epe', 'epe_noc', 'epe_occ', 'err_rate') 172 | result += "{:10.4f}, {:10.4f}, {:10.4f}, {:10.4f} \n".format( 173 | error / num, error_noc / num, error_occ / num, error_rate / num) 174 | return result 175 | -------------------------------------------------------------------------------- /core/evaluation/evaluate_mask.py: -------------------------------------------------------------------------------- 1 | import os 2 | import numpy as np 3 | import cv2 4 | import functools 5 | import matplotlib.pyplot as plt 6 | import multiprocessing 7 | 8 | """ 9 | Adopted from https://github.com/martinkersner/py_img_seg_eval 10 | """ 11 | 12 | class EvalSegErr(Exception): 13 | def __init__(self, value): 14 | self.value = value 15 | 16 | def __str__(self): 17 | return repr(self.value) 18 | 19 | 20 | def pixel_accuracy(eval_segm, gt_segm): 21 | ''' 22 | sum_i(n_ii) / sum_i(t_i) 23 | ''' 24 | 25 | check_size(eval_segm, gt_segm) 26 
| 27 | cl, n_cl = extract_classes(gt_segm) 28 | eval_mask, gt_mask = extract_both_masks(eval_segm, gt_segm, cl, n_cl) 29 | 30 | sum_n_ii = 0 31 | sum_t_i = 0 32 | 33 | for i, c in enumerate(cl): 34 | curr_eval_mask = eval_mask[i, :, :] 35 | curr_gt_mask = gt_mask[i, :, :] 36 | 37 | sum_n_ii += np.sum(np.logical_and(curr_eval_mask, curr_gt_mask)) 38 | sum_t_i += np.sum(curr_gt_mask) 39 | 40 | if (sum_t_i == 0): 41 | pixel_accuracy_ = 0 42 | else: 43 | pixel_accuracy_ = sum_n_ii / sum_t_i 44 | 45 | return pixel_accuracy_ 46 | 47 | 48 | def mean_accuracy(eval_segm, gt_segm): 49 | ''' 50 | (1/n_cl) sum_i(n_ii/t_i) 51 | ''' 52 | 53 | check_size(eval_segm, gt_segm) 54 | 55 | cl, n_cl = extract_classes(gt_segm) 56 | eval_mask, gt_mask = extract_both_masks(eval_segm, gt_segm, cl, n_cl) 57 | 58 | accuracy = list([0]) * n_cl 59 | 60 | for i, c in enumerate(cl): 61 | curr_eval_mask = eval_mask[i, :, :] 62 | curr_gt_mask = gt_mask[i, :, :] 63 | 64 | n_ii = np.sum(np.logical_and(curr_eval_mask, curr_gt_mask)) 65 | t_i = np.sum(curr_gt_mask) 66 | 67 | if (t_i != 0): 68 | accuracy[i] = n_ii / t_i 69 | 70 | mean_accuracy_ = np.mean(accuracy) 71 | return mean_accuracy_ 72 | 73 | 74 | def mean_IU(eval_segm, gt_segm): 75 | ''' 76 | (1/n_cl) * sum_i(n_ii / (t_i + sum_j(n_ji) - n_ii)) 77 | ''' 78 | 79 | check_size(eval_segm, gt_segm) 80 | 81 | cl, n_cl = union_classes(eval_segm, gt_segm) 82 | _, n_cl_gt = extract_classes(gt_segm) 83 | eval_mask, gt_mask = extract_both_masks(eval_segm, gt_segm, cl, n_cl) 84 | 85 | IU = list([0]) * n_cl 86 | 87 | for i, c in enumerate(cl): 88 | curr_eval_mask = eval_mask[i, :, :] 89 | curr_gt_mask = gt_mask[i, :, :] 90 | 91 | if (np.sum(curr_eval_mask) == 0) or (np.sum(curr_gt_mask) == 0): 92 | continue 93 | 94 | n_ii = np.sum(np.logical_and(curr_eval_mask, curr_gt_mask)) 95 | t_i = np.sum(curr_gt_mask) 96 | n_ij = np.sum(curr_eval_mask) 97 | 98 | IU[i] = n_ii / (t_i + n_ij - n_ii) 99 | 100 | mean_IU_ = np.sum(IU) / n_cl_gt 101 | return mean_IU_, np.array(IU) 102 | 103 | 104 | def frequency_weighted_IU(eval_segm, gt_segm): 105 | ''' 106 | sum_k(t_k)^(-1) * sum_i((t_i*n_ii)/(t_i + sum_j(n_ji) - n_ii)) 107 | ''' 108 | 109 | check_size(eval_segm, gt_segm) 110 | 111 | cl, n_cl = union_classes(eval_segm, gt_segm) 112 | eval_mask, gt_mask = extract_both_masks(eval_segm, gt_segm, cl, n_cl) 113 | 114 | frequency_weighted_IU_ = list([0]) * n_cl 115 | 116 | for i, c in enumerate(cl): 117 | curr_eval_mask = eval_mask[i, :, :] 118 | curr_gt_mask = gt_mask[i, :, :] 119 | 120 | if (np.sum(curr_eval_mask) == 0) or (np.sum(curr_gt_mask) == 0): 121 | continue 122 | 123 | n_ii = np.sum(np.logical_and(curr_eval_mask, curr_gt_mask)) 124 | t_i = np.sum(curr_gt_mask) 125 | n_ij = np.sum(curr_eval_mask) 126 | 127 | frequency_weighted_IU_[i] = (t_i * n_ii) / (t_i + n_ij - n_ii) 128 | 129 | sum_k_t_k = get_pixel_area(eval_segm) 130 | 131 | frequency_weighted_IU_ = np.sum(frequency_weighted_IU_) / sum_k_t_k 132 | return frequency_weighted_IU_ 133 | 134 | 135 | ''' 136 | Auxiliary functions used during evaluation. 
137 | ''' 138 | 139 | 140 | def get_pixel_area(segm): 141 | return segm.shape[0] * segm.shape[1] 142 | 143 | 144 | def extract_both_masks(eval_segm, gt_segm, cl, n_cl): 145 | eval_mask = extract_masks(eval_segm, cl, n_cl) 146 | gt_mask = extract_masks(gt_segm, cl, n_cl) 147 | 148 | return eval_mask, gt_mask 149 | 150 | 151 | def extract_classes(segm): 152 | cl = np.unique(segm) 153 | n_cl = len(cl) 154 | 155 | return cl, n_cl 156 | 157 | 158 | def union_classes(eval_segm, gt_segm): 159 | eval_cl, _ = extract_classes(eval_segm) 160 | gt_cl, _ = extract_classes(gt_segm) 161 | 162 | cl = np.union1d(eval_cl, gt_cl) 163 | n_cl = len(cl) 164 | 165 | return cl, n_cl 166 | 167 | 168 | def extract_masks(segm, cl, n_cl): 169 | h, w = segm_size(segm) 170 | masks = np.zeros((n_cl, h, w)) 171 | 172 | for i, c in enumerate(cl): 173 | masks[i, :, :] = segm == c 174 | 175 | return masks 176 | 177 | 178 | def segm_size(segm): 179 | try: 180 | height = segm.shape[0] 181 | width = segm.shape[1] 182 | except IndexError: 183 | raise 184 | 185 | return height, width 186 | 187 | 188 | def check_size(eval_segm, gt_segm): 189 | h_e, w_e = segm_size(eval_segm) 190 | h_g, w_g = segm_size(gt_segm) 191 | 192 | if (h_e != h_g) or (w_e != w_g): 193 | raise EvalSegErr("DiffDim: Different dimensions of matrices!") 194 | 195 | def read_mask_gt_worker(gt_dataset_dir, idx): 196 | return cv2.imread( 197 | gt_dataset_dir + "/obj_map/" + str(idx).zfill(6) + "_10.png", -1) 198 | 199 | def load_gt_mask(gt_dataset_dir): 200 | num_gt = 200 201 | 202 | # the dataset dir should be the directory of kitti-2015. 203 | fun = functools.partial(read_mask_gt_worker, gt_dataset_dir) 204 | pool = multiprocessing.Pool(5) 205 | results = pool.imap(fun, range(num_gt), chunksize=10) 206 | pool.close() 207 | pool.join() 208 | 209 | gt_masks = [] 210 | for m in results: 211 | m[m > 0.0] = 1.0 212 | gt_masks.append(m) 213 | return gt_masks 214 | 215 | 216 | def eval_mask(pred_masks, gt_masks, opt): 217 | grey_cmap = plt.get_cmap("Greys") 218 | if not os.path.exists(os.path.join(opt.trace, "pred_mask")): 219 | os.mkdir(os.path.join(opt.trace, "pred_mask")) 220 | 221 | pa_res, ma_res, mIU_res, fwIU_res = 0.0, 0.0, 0.0, 0.0 222 | IU_res = np.array([0.0, 0.0]) 223 | 224 | num_total = len(gt_masks) 225 | for i in range(num_total): 226 | gt_mask = gt_masks[i] 227 | H, W = gt_mask.shape[0:2] 228 | 229 | pred_mask = cv2.resize( 230 | pred_masks[i], (W, H), interpolation=cv2.INTER_LINEAR) 231 | 232 | pred_mask[pred_mask >= 0.5] = 1.0 233 | pred_mask[pred_mask < 0.5] = 0.0 234 | 235 | cv2.imwrite( 236 | os.path.join(opt.trace, "pred_mask", 237 | str(i).zfill(6) + "_10_plot.png"), 238 | grey_cmap(pred_mask)) 239 | cv2.imwrite( 240 | os.path.join(opt.trace, "pred_mask", str(i).zfill(6) + "_10.png"), 241 | pred_mask) 242 | 243 | pa_res += pixel_accuracy(pred_mask, gt_mask) 244 | ma_res += mean_accuracy(pred_mask, gt_mask) 245 | 246 | mIU, IU = mean_IU(pred_mask, gt_mask) 247 | mIU_res += mIU 248 | IU_res += IU 249 | 250 | fwIU_res += frequency_weighted_IU(pred_mask, gt_mask) 251 | 252 | return pa_res / 200., ma_res / 200., mIU_res / 200., fwIU_res / 200., IU_res / 200. 
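# Note: the divisors above hard-code the 200 image pairs of the KITTI 2015
# training set; a dataset-agnostic variant would reuse num_total instead:
#   return pa_res / num_total, ma_res / num_total, mIU_res / num_total, \
#          fwIU_res / num_total, IU_res / num_total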
253 | -------------------------------------------------------------------------------- /core/evaluation/evaluation_utils.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import os, sys 3 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 4 | import cv2, skimage 5 | import skimage.io 6 | #import scipy.misc as sm 7 | import imageio as sm 8 | 9 | 10 | # Adapted from https://github.com/mrharicot/monodepth 11 | def compute_errors(gt, pred, nyu=False): 12 | thresh = np.maximum((gt / pred), (pred / gt)) 13 | a1 = (thresh < 1.25).mean() 14 | a2 = (thresh < 1.25**2).mean() 15 | a3 = (thresh < 1.25**3).mean() 16 | 17 | rmse = (gt - pred)**2 18 | rmse = np.sqrt(rmse.mean()) 19 | 20 | rmse_log = (np.log(gt) - np.log(pred))**2 21 | rmse_log = np.sqrt(rmse_log.mean()) 22 | 23 | log10 = np.mean(np.abs((np.log10(gt) - np.log10(pred)))) 24 | 25 | abs_rel = np.mean(np.abs(gt - pred) / (gt)) 26 | 27 | sq_rel = np.mean(((gt - pred)**2) / (gt)) 28 | 29 | if nyu: 30 | return abs_rel, sq_rel, rmse, log10, a1, a2, a3 31 | else: 32 | return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3 33 | 34 | -------------------------------------------------------------------------------- /core/evaluation/flowlib.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | """ 3 | Adapted from https://github.com/liruoteng/OpticalFlowToolkit 4 | # ============================== 5 | # flowlib.py 6 | # library for optical flow processing 7 | # Author: Ruoteng Li 8 | # Date: 6th Aug 2016 9 | # ============================== 10 | """ 11 | import png 12 | import scipy.interpolate  # needed for scipy.interpolate.griddata in warp_image below 13 | import numpy as np 14 | import matplotlib.colors as cl 15 | import matplotlib.pyplot as plt 16 | from PIL import Image 17 | #import cv2 18 | 19 | UNKNOWN_FLOW_THRESH = 1e7 20 | SMALLFLOW = 0.0 21 | LARGEFLOW = 1e8 22 | """ 23 | ============= 24 | Flow Section 25 | ============= 26 | """ 27 | 28 | 29 | def show_flow(filename): 30 | """ 31 | visualize optical flow map using matplotlib 32 | :param filename: optical flow file 33 | :return: None 34 | """ 35 | flow = read_flow(filename) 36 | img = flow_to_image(flow) 37 | plt.imshow(img) 38 | plt.show() 39 | 40 | 41 | def visualize_flow(flow, mode='Y'): 42 | """ 43 | visualize the input flow 44 | :param flow: input flow in array 45 | :param mode: choose which color mode to visualize the flow (Y: YCbCr, RGB: RGB color) 46 | :return: None 47 | """ 48 | if mode == 'Y': 49 | # YCbCr color wheel 50 | img = flow_to_image(flow) 51 | plt.imshow(img) 52 | plt.show() 53 | elif mode == 'RGB': 54 | (h, w) = flow.shape[0:2] 55 | du = flow[:, :, 0] 56 | dv = flow[:, :, 1] 57 | valid = flow[:, :, 2] 58 | max_flow = max(np.max(du), np.max(dv)) 59 | img = np.zeros((h, w, 3), dtype=np.float64) 60 | # angle layer 61 | img[:, :, 0] = np.arctan2(dv, du) / (2 * np.pi) 62 | # magnitude layer, normalized to 1 63 | img[:, :, 1] = np.sqrt(du * du + dv * dv) * 8 / max_flow 64 | # phase layer 65 | img[:, :, 2] = 8 - img[:, :, 1] 66 | # clip to [0,1] 67 | small_idx = img[:, :, 0:3] < 0 68 | large_idx = img[:, :, 0:3] > 1 69 | img[small_idx] = 0 70 | img[large_idx] = 1 71 | # convert to rgb 72 | img = cl.hsv_to_rgb(img) 73 | # remove invalid point 74 | img[:, :, 0] = img[:, :, 0] * valid 75 | img[:, :, 1] = img[:, :, 1] * valid 76 | img[:, :, 2] = img[:, :, 2] * valid 77 | # show 78 | plt.imshow(img) 79 | plt.show() 80 | 81 | return None 82 | 83 | 84 | def read_flow(filename): 85 | """ 86 | read optical flow from
Middlebury .flo file 87 | :param filename: name of the flow file 88 | :return: optical flow data in matrix 89 | """ 90 | f = open(filename, 'rb') 91 | magic = np.fromfile(f, np.float32, count=1) 92 | data2d = None 93 | 94 | if 202021.25 != magic: 95 | print('Magic number incorrect. Invalid .flo file') 96 | else: 97 | w = np.fromfile(f, np.int32, count=1)[0] 98 | h = np.fromfile(f, np.int32, count=1)[0] 99 | print("Reading %d x %d flo file" % (h, w)) 100 | data2d = np.fromfile(f, np.float32, count=2 * w * h) 101 | # reshape data into 3D array (columns, rows, channels) 102 | data2d = np.resize(data2d, (h, w, 2)) 103 | f.close() 104 | return data2d 105 | 106 | 107 | def read_flow_png(flow_file): 108 | """ 109 | Read optical flow from KITTI .png file 110 | :param flow_file: name of the flow file 111 | :return: optical flow data in matrix 112 | """ 113 | flow_object = png.Reader(filename=flow_file) 114 | flow_direct = flow_object.asDirect() 115 | flow_data = list(flow_direct[2]) 116 | (w, h) = flow_direct[3]['size'] 117 | flow = np.zeros((h, w, 3), dtype=np.float64) 118 | for i in range(len(flow_data)): 119 | flow[i, :, 0] = flow_data[i][0::3] 120 | flow[i, :, 1] = flow_data[i][1::3] 121 | flow[i, :, 2] = flow_data[i][2::3] 122 | 123 | invalid_idx = (flow[:, :, 2] == 0) 124 | flow[:, :, 0:2] = (flow[:, :, 0:2] - 2**15) / 64.0 125 | flow[invalid_idx, 0] = 0 126 | flow[invalid_idx, 1] = 0 127 | return flow 128 | 129 | 130 | def write_flow_png(flo, flow_file): 131 | h, w, _ = flo.shape 132 | out_flo = np.ones((h, w, 3), dtype=np.float32) 133 | out_flo[:, :, 0] = np.maximum( 134 | np.minimum(flo[:, :, 0] * 64.0 + 2**15, 2**16 - 1), 0) 135 | out_flo[:, :, 1] = np.maximum( 136 | np.minimum(flo[:, :, 1] * 64.0 + 2**15, 2**16 - 1), 0) 137 | out_flo = out_flo.astype(np.uint16) 138 | 139 | with open(flow_file, 'wb') as f: 140 | writer = png.Writer(width=w, height=h, bitdepth=16) 141 | # Convert z to the Python list of lists expected by 142 | # the png writer. 
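# (This is the inverse of read_flow_png above: u and v are stored as uint16
# via v_png = v * 64 + 2**15, so decoding is v = (v_png - 2**15) / 64; the
# third channel is the validity mask, which this writer leaves at 1 everywhere.)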
143 | z2list = out_flo.reshape(-1, w * 3).tolist() 144 | writer.write(f, z2list) 145 | 146 | 147 | def write_flow(flow, filename): 148 | """ 149 | write optical flow in Middlebury .flo format 150 | :param flow: optical flow map 151 | :param filename: optical flow file path to be saved 152 | :return: None 153 | """ 154 | f = open(filename, 'wb') 155 | magic = np.array([202021.25], dtype=np.float32) 156 | (height, width) = flow.shape[0:2] 157 | w = np.array([width], dtype=np.int32) 158 | h = np.array([height], dtype=np.int32) 159 | magic.tofile(f) 160 | w.tofile(f) 161 | h.tofile(f) 162 | flow.tofile(f) 163 | f.close() 164 | 165 | 166 | def segment_flow(flow): 167 | h = flow.shape[0] 168 | w = flow.shape[1] 169 | u = flow[:, :, 0] 170 | v = flow[:, :, 1] 171 | 172 | idx = ((abs(u) > LARGEFLOW) | (abs(v) > LARGEFLOW)) 173 | idx2 = (abs(u) == SMALLFLOW) 174 | class0 = (v == 0) & (u == 0) 175 | u[idx2] = 0.00001 176 | tan_value = v / u 177 | 178 | class1 = (tan_value < 1) & (tan_value >= 0) & (u > 0) & (v >= 0) 179 | class2 = (tan_value >= 1) & (u >= 0) & (v >= 0) 180 | class3 = (tan_value < -1) & (u <= 0) & (v >= 0) 181 | class4 = (tan_value < 0) & (tan_value >= -1) & (u < 0) & (v >= 0) 182 | class8 = (tan_value >= -1) & (tan_value < 0) & (u > 0) & (v <= 0) 183 | class7 = (tan_value < -1) & (u >= 0) & (v <= 0) 184 | class6 = (tan_value >= 1) & (u <= 0) & (v <= 0) 185 | class5 = (tan_value >= 0) & (tan_value < 1) & (u < 0) & (v <= 0) 186 | 187 | seg = np.zeros((h, w)) 188 | 189 | seg[class1] = 1 190 | seg[class2] = 2 191 | seg[class3] = 3 192 | seg[class4] = 4 193 | seg[class5] = 5 194 | seg[class6] = 6 195 | seg[class7] = 7 196 | seg[class8] = 8 197 | seg[class0] = 0 198 | seg[idx] = 0 199 | 200 | return seg 201 | 202 | 203 | def flow_error(tu, tv, u, v): 204 | """ 205 | Calculate average end point error 206 | :param tu: ground-truth horizontal flow map 207 | :param tv: ground-truth vertical flow map 208 | :param u: estimated horizontal flow map 209 | :param v: estimated vertical flow map 210 | :return: End point error of the estimated flow 211 | """ 212 | smallflow = 0.0 213 | ''' 214 | stu = tu[bord+1:end-bord,bord+1:end-bord] 215 | stv = tv[bord+1:end-bord,bord+1:end-bord] 216 | su = u[bord+1:end-bord,bord+1:end-bord] 217 | sv = v[bord+1:end-bord,bord+1:end-bord] 218 | ''' 219 | stu = tu.copy() # copy; tu[:] would alias and silently mutate the caller's arrays below 220 | stv = tv.copy() 221 | su = u.copy() 222 | sv = v.copy() 223 | 224 | idxUnknow = (abs(stu) > UNKNOWN_FLOW_THRESH) | ( 225 | abs(stv) > UNKNOWN_FLOW_THRESH) 226 | stu[idxUnknow] = 0 227 | stv[idxUnknow] = 0 228 | su[idxUnknow] = 0 229 | sv[idxUnknow] = 0 230 | 231 | ind2 = (np.absolute(stu) > smallflow) | (np.absolute(stv) > smallflow) # plain boolean mask (the list wrapper used here before is deprecated indexing) 232 | index_su = su[ind2] 233 | index_sv = sv[ind2] 234 | an = 1.0 / np.sqrt(index_su**2 + index_sv**2 + 1) 235 | un = index_su * an 236 | vn = index_sv * an 237 | 238 | index_stu = stu[ind2] 239 | index_stv = stv[ind2] 240 | tn = 1.0 / np.sqrt(index_stu**2 + index_stv**2 + 1) 241 | tun = index_stu * tn 242 | tvn = index_stv * tn 243 | ''' 244 | angle = un * tun + vn * tvn + (an * tn) 245 | index = [angle == 1.0] 246 | angle[index] = 0.999 247 | ang = np.arccos(angle) 248 | mang = np.mean(ang) 249 | mang = mang * 180 / np.pi 250 | ''' 251 | 252 | epe = np.sqrt((stu - su)**2 + (stv - sv)**2) 253 | epe = epe[ind2] 254 | mepe = np.mean(epe) 255 | return mepe 256 | 257 | 258 | def flow_to_image(flow): 259 | """ 260 | Convert flow into middlebury color code image 261 | :param flow: optical flow map 262 | :return: optical flow image in middlebury color 263 | """ 264 | u = flow[:, :,
0] 265 | v = flow[:, :, 1] 266 | 267 | maxu = -999. 268 | maxv = -999. 269 | minu = 999. 270 | minv = 999. 271 | 272 | idxUnknow = (abs(u) > UNKNOWN_FLOW_THRESH) | (abs(v) > UNKNOWN_FLOW_THRESH) 273 | u[idxUnknow] = 0 274 | v[idxUnknow] = 0 275 | 276 | maxu = max(maxu, np.max(u)) 277 | minu = min(minu, np.min(u)) 278 | 279 | maxv = max(maxv, np.max(v)) 280 | minv = min(minv, np.min(v)) 281 | 282 | rad = np.sqrt(u**2 + v**2) 283 | maxrad = max(-1, np.max(rad)) 284 | 285 | print("max flow: %.4f\nflow range:\nu = %.3f .. %.3f\nv = %.3f .. %.3f" % ( 286 | maxrad, minu, maxu, minv, maxv)) 287 | 288 | u = u / (maxrad + np.finfo(float).eps) 289 | v = v / (maxrad + np.finfo(float).eps) 290 | 291 | img = compute_color(u, v) 292 | 293 | idx = np.repeat(idxUnknow[:, :, np.newaxis], 3, axis=2) 294 | img[idx] = 0 295 | 296 | return np.uint8(img) 297 | 298 | 299 | def evaluate_flow_file(gt, pred): 300 | """ 301 | evaluate the estimated optical flow end point error according to ground truth provided 302 | :param gt: ground truth file path 303 | :param pred: estimated optical flow file path 304 | :return: end point error, float32 305 | """ 306 | # Read flow files and calculate the errors 307 | gt_flow = read_flow(gt) # ground truth flow 308 | eva_flow = read_flow(pred) # predicted flow 309 | # Calculate errors 310 | average_pe = flow_error(gt_flow[:, :, 0], gt_flow[:, :, 1], 311 | eva_flow[:, :, 0], eva_flow[:, :, 1]) 312 | return average_pe 313 | 314 | 315 | def evaluate_flow(gt_flow, pred_flow): 316 | """ 317 | gt: ground-truth flow 318 | pred: estimated flow 319 | """ 320 | average_pe = flow_error(gt_flow[:, :, 0], gt_flow[:, :, 1], 321 | pred_flow[:, :, 0], pred_flow[:, :, 1]) 322 | return average_pe 323 | 324 | 325 | """ 326 | ============== 327 | Disparity Section 328 | ============== 329 | """ 330 | 331 | 332 | def read_disp_png(file_name): 333 | """ 334 | Read a KITTI disparity map from a 16-bit .png file 335 | :param file_name: name of the disparity file 336 | :return: disparity data in matrix 337 | """ 338 | image_object = png.Reader(filename=file_name) 339 | image_direct = image_object.asDirect() 340 | image_data = list(image_direct[2]) 341 | (w, h) = image_direct[3]['size'] 342 | channel = len(image_data[0]) // w # integer division: channels per pixel, used as an array dimension below 343 | flow = np.zeros((h, w, channel), dtype=np.uint16) 344 | for i in range(len(image_data)): 345 | for j in range(channel): 346 | flow[i, :, j] = image_data[i][j::channel] 347 | return flow[:, :, 0] / 256 348 | 349 | 350 | def disp_to_flowfile(disp, filename): 351 | """ 352 | Write a KITTI disparity map into a Middlebury .flo file (second channel left empty) 353 | :param disp: disparity matrix 354 | :param filename: the flow file name to save 355 | :return: None 356 | """ 357 | f = open(filename, 'wb') 358 | magic = np.array([202021.25], dtype=np.float32) 359 | (height, width) = disp.shape[0:2] 360 | w = np.array([width], dtype=np.int32) 361 | h = np.array([height], dtype=np.int32) 362 | empty_map = np.zeros((height, width), dtype=np.float32) 363 | data = np.dstack((disp, empty_map)) 364 | magic.tofile(f) 365 | w.tofile(f) 366 | h.tofile(f) 367 | data.tofile(f) 368 | f.close() 369 | 370 | 371 | """ 372 | ============== 373 | Image Section 374 | ============== 375 | """ 376 | 377 | 378 | def read_image(filename): 379 | """ 380 | Read normal image of any format 381 | :param filename: name of the image file 382 | :return: image data in matrix uint8 type 383 | """ 384 | img = Image.open(filename) 385 | im = np.array(img) 386 | return im 387 | 388 | 389 | def warp_image(im, flow): 390 | """ 391 | Use optical flow to warp image to the next 392 |
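frame (backward warping: the output at each pixel samples `im` at that pixel's flow-displaced location).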
:param im: image to warp 393 | :param flow: optical flow 394 | :return: warped image 395 | """ 396 | image_height = im.shape[0] 397 | image_width = im.shape[1] 398 | flow_height = flow.shape[0] 399 | flow_width = flow.shape[1] 400 | n = image_height * image_width 401 | (iy, ix) = np.mgrid[0:image_height, 0:image_width] 402 | (fy, fx) = np.mgrid[0:flow_height, 0:flow_width] 403 | fx = fx + flow[:, :, 0] # out-of-place add promotes the integer mgrid to float 404 | fy = fy + flow[:, :, 1] 405 | mask = (fx < 0) | (fx > flow_width) | (fy < 0) | (fy > flow_height) # parenthesized: | binds tighter than the comparisons 406 | fx = np.minimum(np.maximum(fx, 0), flow_width) # element-wise clamp to the image bounds 407 | fy = np.minimum(np.maximum(fy, 0), flow_height) 408 | points = np.concatenate((ix.reshape(n, 1), iy.reshape(n, 1)), axis=1) 409 | xi = np.concatenate((fx.reshape(n, 1), fy.reshape(n, 1)), axis=1) 410 | warp = np.zeros((image_height, image_width, im.shape[2])) 411 | for i in range(im.shape[2]): 412 | channel = im[:, :, i] 413 | values = channel.reshape(n) 414 | new_channel = scipy.interpolate.griddata( 415 | points, values, xi, method='cubic').reshape(image_height, image_width) 416 | new_channel[mask] = 1 # out-of-range samples set to 1 (flow and image assumed same size) 417 | warp[:, :, i] = new_channel 418 | return warp 419 | 420 | 421 | """ 422 | ============== 423 | Others 424 | ============== 425 | """ 426 | 427 | 428 | def scale_image(image, new_range): 429 | """ 430 | Linearly scale the image into desired range 431 | :param image: input image 432 | :param new_range: the new range to be aligned 433 | :return: image normalized in new range 434 | """ 435 | min_val = np.min(image).astype(np.float32) 436 | max_val = np.max(image).astype(np.float32) 437 | min_val_new = np.array(min(new_range), dtype=np.float32) 438 | max_val_new = np.array(max(new_range), dtype=np.float32) 439 | scaled_image = (image - min_val) / (max_val - min_val) * ( 440 | max_val_new - min_val_new) + min_val_new 441 | return scaled_image.astype(np.uint8) 442 | 443 | 444 | def compute_color(u, v): 445 | """ 446 | compute optical flow color map 447 | :param u: optical flow horizontal map 448 | :param v: optical flow vertical map 449 | :return: optical flow in color code 450 | """ 451 | [h, w] = u.shape 452 | img = np.zeros([h, w, 3]) 453 | nanIdx = np.isnan(u) | np.isnan(v) 454 | u[nanIdx] = 0 455 | v[nanIdx] = 0 456 | 457 | colorwheel = make_color_wheel() 458 | ncols = np.size(colorwheel, 0) 459 | 460 | rad = np.sqrt(u**2 + v**2) 461 | 462 | a = np.arctan2(-v, -u) / np.pi 463 | 464 | fk = (a + 1) / 2 * (ncols - 1) + 1 465 | 466 | k0 = np.floor(fk).astype(int) 467 | 468 | k1 = k0 + 1 469 | k1[k1 == ncols + 1] = 1 470 | f = fk - k0 471 | 472 | for i in range(0, np.size(colorwheel, 1)): 473 | tmp = colorwheel[:, i] 474 | col0 = tmp[k0 - 1] / 255 475 | col1 = tmp[k1 - 1] / 255 476 | col = (1 - f) * col0 + f * col1 477 | 478 | idx = rad <= 1 479 | col[idx] = 1 - rad[idx] * (1 - col[idx]) 480 | notidx = np.logical_not(idx) 481 | 482 | col[notidx] *= 0.75 483 | img[:, :, i] = np.uint8(np.floor(255 * col * (1 - nanIdx))) 484 | 485 | return img 486 | 487 | 488 | def make_color_wheel(): 489 | """ 490 | Generate color wheel according to the Middlebury color code 491 | :return: Color wheel 492 | """ 493 | RY = 15 494 | YG = 6 495 | GC = 4 496 | CB = 11 497 | BM = 13 498 | MR = 6 499 | 500 | ncols = RY + YG + GC + CB + BM + MR 501 | 502 | colorwheel = np.zeros([ncols, 3]) 503 | 504 | col = 0 505 | 506 | # RY 507 | colorwheel[0:RY, 0] = 255 508 | colorwheel[0:RY, 1] = np.transpose(np.floor(255 * np.arange(0, RY) / RY)) 509 | col += RY 510 | 511 | # YG 512 | colorwheel[col:col + YG, 0] = 255 - np.transpose( 513 | np.floor(255 * np.arange(0, YG) / YG)) 514 | colorwheel[col:col + YG, 1] = 255 515 | col += YG 516 |
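# (The six blocks in this function concatenate RY+YG+GC+CB+BM+MR = 55 hue
# bins; each block linearly ramps one RGB channel while holding a neighboring
# channel at 255, following the Middlebury convention.)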
517 | # GC 518 | colorwheel[col:col + GC, 1] = 255 519 | colorwheel[col:col + GC, 2] = np.transpose( 520 | np.floor(255 * np.arange(0, GC) / GC)) 521 | col += GC 522 | 523 | # CB 524 | colorwheel[col:col + CB, 1] = 255 - np.transpose( 525 | np.floor(255 * np.arange(0, CB) / CB)) 526 | colorwheel[col:col + CB, 2] = 255 527 | col += CB 528 | 529 | # BM 530 | colorwheel[col:col + BM, 2] = 255 531 | colorwheel[col:col + BM, 0] = np.transpose( 532 | np.floor(255 * np.arange(0, BM) / BM)) 533 | col += +BM 534 | 535 | # MR 536 | colorwheel[col:col + MR, 2] = 255 - np.transpose( 537 | np.floor(255 * np.arange(0, MR) / MR)) 538 | colorwheel[col:col + MR, 0] = 255 539 | 540 | return colorwheel 541 | -------------------------------------------------------------------------------- /core/networks/__init__.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 3 | from model_flow_paper import Model_flow 4 | 5 | def get_model(mode): 6 | if mode == 'flow': 7 | return Model_flow 8 | else: 9 | raise ValueError('Mode {} not found.'.format(mode)) 10 | -------------------------------------------------------------------------------- /core/networks/model_flow_paper.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 3 | from structures import * 4 | from pytorch_ssim import SSIM 5 | import torch 6 | import torch.nn as nn 7 | import torch.nn.functional as F 8 | import numpy as np 9 | import pdb 10 | import cv2 11 | from torch.autograd import Variable 12 | 13 | 14 | class Model_flow(nn.Module): 15 | def __init__(self, cfg): 16 | super(Model_flow, self).__init__() 17 | self.fpyramid = FeaturePyramid() 18 | self.pwc_model = PWC_tf() 19 | if cfg.mode == 'depth' or cfg.mode == 'flowposenet': 20 | # Stage 2 training 21 | for param in self.fpyramid.parameters(): 22 | param.requires_grad = False 23 | for param in self.pwc_model.parameters(): 24 | param.requires_grad = False 25 | 26 | # hyperparameters 27 | self.dataset = cfg.dataset 28 | self.num_scales = cfg.num_scales 29 | self.flow_consist_alpha = cfg.h_flow_consist_alpha 30 | self.flow_consist_beta = cfg.h_flow_consist_beta 31 | 32 | print("this is paper method.") 33 | 34 | 35 | 36 | def get_flow_norm(self, flow, p=2): 37 | ''' 38 | Inputs: 39 | flow (bs, 2, H, W) 40 | ''' 41 | flow_norm = torch.norm(flow, p=p, dim=1).unsqueeze(1) + 1e-12 42 | return flow_norm 43 | 44 | def get_flow_normalization(self, flow, p=2): 45 | ''' 46 | Inputs: 47 | flow (bs, 2, H, W) 48 | ''' 49 | flow_norm = torch.norm(flow, p=p, dim=1).unsqueeze(1) + 1e-12 50 | flow_normalization = flow / flow_norm.repeat(1,2,1,1) 51 | return flow_normalization 52 | 53 | 54 | def generate_img_pyramid(self, img, num_pyramid): 55 | img_h, img_w = img.shape[2], img.shape[3] 56 | img_pyramid = [] 57 | for s in range(num_pyramid): 58 | img_new = F.adaptive_avg_pool2d(img, [int(img_h / (2**s)), int(img_w / (2**s))]).data 59 | img_pyramid.append(img_new) 60 | return img_pyramid 61 | 62 | def warp_flow_pyramid(self, img_pyramid, flow_pyramid): 63 | img_warped_pyramid = [] 64 | for img, flow in zip(img_pyramid, flow_pyramid): 65 | img_warped_pyramid.append(warp_flow(img, flow, use_mask=True)) 66 | return img_warped_pyramid 67 | 68 | def compute_loss_pixel(self, img_pyramid, img_warped_pyramid, occ_mask_list): 69 | loss_list = [] 70 | for scale in range(self.num_scales): 71 | img, img_warped, 
occ_mask = img_pyramid[scale], img_warped_pyramid[scale], occ_mask_list[scale] 72 | divider = occ_mask.mean((1,2,3)) 73 | img_diff = torch.abs((img - img_warped)) * occ_mask.repeat(1,3,1,1) 74 | loss_pixel = img_diff.mean((1,2,3)) / (divider + 1e-12) # (B) 75 | loss_list.append(loss_pixel[:,None]) 76 | loss = torch.cat(loss_list, 1).sum(1) # (B) 77 | return loss 78 | 79 | def compute_loss_pixel_without_mask(self, img_pyramid, img_warped_pyramid): 80 | loss_list = [] 81 | for scale in range(self.num_scales): 82 | img, img_warped = img_pyramid[scale], img_warped_pyramid[scale] 83 | img_diff = torch.abs((img - img_warped)) 84 | loss_pixel = img_diff.mean((1,2,3)) # (B) 85 | loss_list.append(loss_pixel[:,None]) 86 | loss = torch.cat(loss_list, 1).sum(1) # (B) 87 | return loss 88 | 89 | 90 | def compute_loss_with_mask(self, diff_list, occ_mask_list): 91 | loss_list = [] 92 | for scale in range(self.num_scales): 93 | diff, occ_mask = diff_list[scale], occ_mask_list[scale] 94 | divider = occ_mask.mean((1,2,3)) 95 | img_diff = diff * occ_mask.repeat(1,3,1,1) 96 | loss_pixel = img_diff.mean((1,2,3)) / (divider + 1e-12) # (B) 97 | loss_list.append(loss_pixel[:,None]) 98 | loss = torch.cat(loss_list, 1).sum(1) # (B) 99 | return loss 100 | 101 | def compute_diff_weight(self, img_pyramid_from_l, img_pyramid, img_pyramid_from_r): 102 | diff_fwd = [] 103 | diff_bwd = [] 104 | weight_fwd = [] 105 | weight_bwd = [] 106 | valid_bwd = [] 107 | valid_fwd = [] 108 | for scale in range(self.num_scales): 109 | img_from_l, img, img_from_r = img_pyramid_from_l[scale], img_pyramid[scale], img_pyramid_from_r[scale] 110 | 111 | valid_pixels_fwd = 1 - (img_from_r == 0).prod(1, keepdim=True).type_as(img_from_r) 112 | valid_pixels_bwd = 1 - (img_from_l == 0).prod(1, keepdim=True).type_as(img_from_l) 113 | 114 | valid_bwd.append(valid_pixels_bwd) 115 | valid_fwd.append(valid_pixels_fwd) 116 | 117 | img_diff_l = torch.abs((img-img_from_l)).mean(1, True) 118 | img_diff_r = torch.abs((img-img_from_r)).mean(1, True) 119 | 120 | diff_cat = torch.cat((img_diff_l, img_diff_r),1) 121 | weight = 1 - nn.functional.softmax(diff_cat,1) 122 | weight = Variable(weight.data,requires_grad=False) 123 | 124 | # weight = (weight > 0.48).float() 125 | 126 | weight = 2*torch.exp(-(weight-0.5)**2/0.03) 127 | 128 | weight_bwd.append(torch.unsqueeze(weight[:,0,:,:],1) * valid_pixels_bwd) 129 | weight_fwd.append(torch.unsqueeze(weight[:,1,:,:],1) * valid_pixels_fwd) 130 | 131 | diff_fwd.append(img_diff_r) 132 | diff_bwd.append(img_diff_l) 133 | 134 | return diff_bwd, diff_fwd, weight_bwd, weight_fwd 135 | 136 | 137 | def compute_loss_ssim(self, img_pyramid, img_warped_pyramid, occ_mask_list): 138 | loss_list = [] 139 | for scale in range(self.num_scales): 140 | img, img_warped, occ_mask = img_pyramid[scale], img_warped_pyramid[scale], occ_mask_list[scale] 141 | divider = occ_mask.mean((1,2,3)) 142 | occ_mask_pad = occ_mask.repeat(1,3,1,1) 143 | ssim = SSIM(img * occ_mask_pad, img_warped * occ_mask_pad) 144 | loss_ssim = torch.clamp((1.0 - ssim) / 2.0, 0, 1).mean((1,2,3)) 145 | loss_ssim = loss_ssim / (divider + 1e-12) 146 | loss_list.append(loss_ssim[:,None]) 147 | loss = torch.cat(loss_list, 1).sum(1) 148 | return loss 149 | 150 | 151 | 152 | def gradients(self, img): 153 | dy = img[:,:,1:,:] - img[:,:,:-1,:] 154 | dx = img[:,:,:,1:] - img[:,:,:,:-1] 155 | return dx, dy 156 | 157 | def cal_grad2_error(self, flow, img): 158 | img_grad_x, img_grad_y = self.gradients(img) 159 | w_x = torch.exp(-10.0 * torch.abs(img_grad_x).mean(1).unsqueeze(1)) 
160 | w_y = torch.exp(-10.0 * torch.abs(img_grad_y).mean(1).unsqueeze(1)) 161 | 162 | dx, dy = self.gradients(flow) 163 | dx2, _ = self.gradients(dx) 164 | _, dy2 = self.gradients(dy) 165 | error = (w_x[:,:,:,1:] * torch.abs(dx2)).mean((1,2,3)) + (w_y[:,:,1:,:] * torch.abs(dy2)).mean((1,2,3)) 166 | #error = (w_x * torch.abs(dx)).mean((1,2,3)) + (w_y * torch.abs(dy)).mean((1,2,3)) 167 | return error / 2.0 168 | 169 | def compute_loss_flow_smooth(self, optical_flows, img_pyramid): 170 | loss_list = [] 171 | for scale in range(self.num_scales): 172 | flow, img = optical_flows[scale], img_pyramid[scale] 173 | #error = self.cal_grad2_error(flow, img) 174 | error = self.cal_grad2_error(flow/20.0, img) 175 | loss_list.append(error[:,None]) 176 | loss = torch.cat(loss_list, 1).sum(1) 177 | return loss 178 | 179 | 180 | def compute_loss_flow_consis(self, fwd_flow_pyramid, bwd_flow_pyramid, occ_mask_list): 181 | loss_list = [] 182 | for scale in range(self.num_scales): 183 | fwd_flow, bwd_flow, occ_mask = fwd_flow_pyramid[scale], bwd_flow_pyramid[scale], occ_mask_list[scale] 184 | fwd_flow_norm = self.get_flow_normalization(fwd_flow) 185 | bwd_flow_norm = self.get_flow_normalization(bwd_flow) 186 | bwd_flow_norm = Variable(bwd_flow_norm.data,requires_grad=False) 187 | occ_mask = 1-occ_mask 188 | 189 | divider = occ_mask.mean((1,2,3)) 190 | 191 | loss_consis = (torch.abs(fwd_flow_norm+bwd_flow_norm) * occ_mask).mean((1,2,3)) 192 | loss_consis = loss_consis / (divider + 1e-12) 193 | loss_list.append(loss_consis[:,None]) 194 | loss = torch.cat(loss_list, 1).sum(1) 195 | return loss 196 | 197 | 198 | def inference_flow(self, img1, img2): 199 | img_hw = [img1.shape[2], img1.shape[3]] 200 | feature_list_1, feature_list_2 = self.fpyramid(img1), self.fpyramid(img2) 201 | optical_flow = self.pwc_model(feature_list_1, feature_list_2, img_hw)[0] 202 | return optical_flow 203 | 204 | 205 | def forward(self, inputs, output_flow=False, use_flow_loss=True, is_second_phase=False): 206 | images = inputs 207 | assert (images.shape[1] == 3) 208 | img_h, img_w = int(images.shape[2] / 3), images.shape[3] 209 | imgl, img, imgr = images[:,:,:img_h,:], images[:,:,img_h:2*img_h,:], images[:,:,2*img_h:3*img_h,:] 210 | batch_size = imgl.shape[0] 211 | 212 | #pdb.set_trace() 213 | # get the optical flows and reverse optical flows for each pair of adjacent images 214 | feature_list_l, feature_list, feature_list_r = self.fpyramid(imgl), self.fpyramid(img), self.fpyramid(imgr) 215 | 216 | optical_flows_bwd = self.pwc_model(feature_list, feature_list_l, [img_h, img_w]) 217 | #optical_flows_bwd_rev = self.pwc_model(feature_list_l, feature_list, [img_h, img_w]) 218 | optical_flows_fwd = self.pwc_model(feature_list, feature_list_r, [img_h, img_w]) 219 | #optical_flows_fwd_rev = self.pwc_model(feature_list_r, feature_list, [img_h, img_w]) 220 | 221 | 222 | #cv2.imwrite('./meta/imgl.png', np.transpose(255*imgl[0].cpu().detach().numpy(), [1,2,0]).astype(np.uint8)) 223 | #cv2.imwrite('./meta/img.png', np.transpose(255*img[0].cpu().detach().numpy(), [1,2,0]).astype(np.uint8)) 224 | #cv2.imwrite('./meta/imgr.png', np.transpose(255*imgr[0].cpu().detach().numpy(), [1,2,0]).astype(np.uint8)) 225 | 226 | 227 | loss_pack = {} 228 | # warp images 229 | imgl_pyramid = self.generate_img_pyramid(imgl, len(optical_flows_fwd)) 230 | img_pyramid = self.generate_img_pyramid(img, len(optical_flows_fwd)) 231 | imgr_pyramid = self.generate_img_pyramid(imgr, len(optical_flows_fwd)) 232 | 233 | img_warped_pyramid_from_l = 
self.warp_flow_pyramid(imgl_pyramid, optical_flows_bwd) 234 | #imgl_warped_pyramid_from_ = self.warp_flow_pyramid(img_pyramid, optical_flows_bwd_rev) 235 | img_warped_pyramid_from_r = self.warp_flow_pyramid(imgr_pyramid, optical_flows_fwd) 236 | #imgr_warped_pyramid_from_ = self.warp_flow_pyramid(img_pyramid, optical_flows_fwd_rev) 237 | 238 | 239 | 240 | diff_bwd, diff_fwd, weight_bwd, weight_fwd = self.compute_diff_weight(img_warped_pyramid_from_l, img_pyramid, img_warped_pyramid_from_r) 241 | loss_pack['loss_pixel'] = self.compute_loss_with_mask(diff_fwd, weight_fwd) + \ 242 | self.compute_loss_with_mask(diff_bwd, weight_bwd) 243 | 244 | loss_pack['loss_ssim'] = self.compute_loss_ssim(img_pyramid, img_warped_pyramid_from_r, weight_fwd) + \ 245 | self.compute_loss_ssim(img_pyramid, img_warped_pyramid_from_l,weight_bwd) 246 | #loss_pack['loss_ssim'] = torch.zeros([2]).to(imgl.get_device()).requires_grad_() 247 | 248 | loss_pack['loss_flow_smooth'] = self.compute_loss_flow_smooth(optical_flows_fwd, img_pyramid) + \ 249 | self.compute_loss_flow_smooth(optical_flows_bwd, img_pyramid) 250 | 251 | loss_pack['loss_flow_consis'] = self.compute_loss_flow_consis(optical_flows_fwd, optical_flows_bwd, weight_fwd) 252 | # loss_pack['loss_flow_consis'] = torch.zeros([2]).to(imgl.get_device()).requires_grad_() 253 | 254 | 255 | return loss_pack 256 | -------------------------------------------------------------------------------- /core/networks/pytorch_ssim/__init__.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 3 | from ssim import SSIM 4 | 5 | -------------------------------------------------------------------------------- /core/networks/pytorch_ssim/ssim.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | 4 | def SSIM(x, y): 5 | C1 = 0.01 ** 2 6 | C2 = 0.03 ** 2 7 | 8 | mu_x = nn.AvgPool2d(3, 1, padding=1)(x) 9 | mu_y = nn.AvgPool2d(3, 1, padding=1)(y) 10 | 11 | sigma_x = nn.AvgPool2d(3, 1, padding=1)(x**2) - mu_x**2 12 | sigma_y = nn.AvgPool2d(3, 1, padding=1)(y**2) - mu_y**2 13 | sigma_xy = nn.AvgPool2d(3, 1, padding=1)(x * y) - mu_x * mu_y 14 | 15 | SSIM_n = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2) 16 | SSIM_d = (mu_x**2 + mu_y**2 + C1) * (sigma_x + sigma_y + C2) 17 | 18 | SSIM = SSIM_n / SSIM_d 19 | return SSIM 20 | 21 | -------------------------------------------------------------------------------- /core/networks/structures/__init__.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 3 | from feature_pyramid import FeaturePyramid 4 | from pwc_tf import PWC_tf 5 | from net_utils import conv, deconv, warp_flow 6 | from inverse_warp import inverse_warp2 7 | -------------------------------------------------------------------------------- /core/networks/structures/feature_pyramid.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 3 | from net_utils import conv 4 | import torch 5 | import torch.nn as nn 6 | 7 | class FeaturePyramid(nn.Module): 8 | def __init__(self): 9 | super(FeaturePyramid, self).__init__() 10 | self.conv1 = conv(3, 16, kernel_size=3, stride=2) 11 | self.conv2 = conv(16, 16, kernel_size=3, stride=1) 12 | self.conv3 = conv(16, 32, kernel_size=3, 
stride=2) 13 | self.conv4 = conv(32, 32, kernel_size=3, stride=1) 14 | self.conv5 = conv(32, 64, kernel_size=3, stride=2) 15 | self.conv6 = conv(64, 64, kernel_size=3, stride=1) 16 | self.conv7 = conv(64, 96, kernel_size=3, stride=2) 17 | self.conv8 = conv(96, 96, kernel_size=3, stride=1) 18 | self.conv9 = conv(96, 128, kernel_size=3, stride=2) 19 | self.conv10 = conv(128, 128, kernel_size=3, stride=1) 20 | self.conv11 = conv(128, 196, kernel_size=3, stride=2) 21 | self.conv12 = conv(196, 196, kernel_size=3, stride=1) 22 | ''' 23 | for m in self.modules(): 24 | if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d): 25 | nn.init.constant_(m.weight.data, 0.0) 26 | if m.bias is not None: 27 | m.bias.data.zero_() 28 | ''' 29 | def forward(self, img): 30 | cnv2 = self.conv2(self.conv1(img)) 31 | cnv4 = self.conv4(self.conv3(cnv2)) 32 | cnv6 = self.conv6(self.conv5(cnv4)) 33 | cnv8 = self.conv8(self.conv7(cnv6)) 34 | cnv10 = self.conv10(self.conv9(cnv8)) 35 | cnv12 = self.conv12(self.conv11(cnv10)) 36 | return cnv2, cnv4, cnv6, cnv8, cnv10, cnv12 37 | 38 | -------------------------------------------------------------------------------- /core/networks/structures/inverse_warp.py: -------------------------------------------------------------------------------- 1 | from __future__ import division 2 | import torch 3 | import torch.nn.functional as F 4 | 5 | pixel_coords = None 6 | 7 | 8 | def set_id_grid(depth): 9 | global pixel_coords 10 | b, h, w = depth.size() 11 | i_range = torch.arange(0, h).view(1, h, 1).expand( 12 | 1, h, w).type_as(depth) # [1, H, W] 13 | j_range = torch.arange(0, w).view(1, 1, w).expand( 14 | 1, h, w).type_as(depth) # [1, H, W] 15 | ones = torch.ones(1, h, w).type_as(depth) 16 | 17 | pixel_coords = torch.stack((j_range, i_range, ones), dim=1) # [1, 3, H, W] 18 | 19 | 20 | def check_sizes(input, input_name, expected): 21 | condition = [input.ndimension() == len(expected)] 22 | for i, size in enumerate(expected): 23 | if size.isdigit(): 24 | condition.append(input.size(i) == int(size)) 25 | assert(all(condition)), "wrong size for {}, expected {}, got {}".format( 26 | input_name, 'x'.join(expected), list(input.size())) 27 | 28 | 29 | def pixel2cam(depth, intrinsics_inv): 30 | global pixel_coords 31 | """Transform coordinates in the pixel frame to the camera frame. 32 | Args: 33 | depth: depth maps -- [B, H, W] 34 | intrinsics_inv: intrinsics_inv matrix for each element of batch -- [B, 3, 3] 35 | Returns: 36 | array of (u,v,1) cam coordinates -- [B, 3, H, W] 37 | """ 38 | b, h, w = depth.size() 39 | if (pixel_coords is None) or pixel_coords.size(2) < h: 40 | set_id_grid(depth) 41 | current_pixel_coords = pixel_coords[:, :, :h, :w].expand( 42 | b, 3, h, w).reshape(b, 3, -1) # [B, 3, H*W] 43 | cam_coords = (intrinsics_inv @ current_pixel_coords).reshape(b, 3, h, w) 44 | return cam_coords * depth.unsqueeze(1) 45 | 46 | 47 | def cam2pixel(cam_coords, proj_c2p_rot, proj_c2p_tr, padding_mode): 48 | """Transform coordinates in the camera frame to the pixel frame. 
49 | Args: 50 | cam_coords: 3D points in the first camera's coordinate frame -- [B, 3, H, W] 51 | proj_c2p_rot: rotation matrix of cameras -- [B, 3, 4] 52 | proj_c2p_tr: translation vectors of cameras -- [B, 3, 1] 53 | Returns: 54 | array of [-1,1] coordinates -- [B, 2, H, W] 55 | """ 56 | b, _, h, w = cam_coords.size() 57 | cam_coords_flat = cam_coords.reshape(b, 3, -1) # [B, 3, H*W] 58 | if proj_c2p_rot is not None: 59 | pcoords = proj_c2p_rot @ cam_coords_flat 60 | else: 61 | pcoords = cam_coords_flat 62 | 63 | if proj_c2p_tr is not None: 64 | pcoords = pcoords + proj_c2p_tr # [B, 3, H*W] 65 | X = pcoords[:, 0] 66 | Y = pcoords[:, 1] 67 | Z = pcoords[:, 2].clamp(min=1e-3) 68 | 69 | # Normalized, -1 if on extreme left, 1 if on extreme right (x = w-1) [B, H*W] 70 | X_norm = 2*(X / Z)/(w-1) - 1 71 | Y_norm = 2*(Y / Z)/(h-1) - 1 # Idem [B, H*W] 72 | 73 | pixel_coords = torch.stack([X_norm, Y_norm], dim=2) # [B, H*W, 2] 74 | return pixel_coords.reshape(b, h, w, 2) 75 | 76 | 77 | def euler2mat(angle): 78 | """Convert Euler angles to rotation matrix. 79 | Reference: https://github.com/pulkitag/pycaffe-utils/blob/master/rot_utils.py#L174 80 | Args: 81 | angle: rotation angles about the 3 axes (in radians) -- size = [B, 3] 82 | Returns: 83 | Rotation matrix corresponding to the Euler angles -- size = [B, 3, 3] 84 | """ 85 | B = angle.size(0) 86 | x, y, z = angle[:, 0], angle[:, 1], angle[:, 2] 87 | 88 | cosz = torch.cos(z) 89 | sinz = torch.sin(z) 90 | 91 | zeros = z.detach()*0 92 | ones = zeros.detach()+1 93 | zmat = torch.stack([cosz, -sinz, zeros, 94 | sinz, cosz, zeros, 95 | zeros, zeros, ones], dim=1).reshape(B, 3, 3) 96 | 97 | cosy = torch.cos(y) 98 | siny = torch.sin(y) 99 | 100 | ymat = torch.stack([cosy, zeros, siny, 101 | zeros, ones, zeros, 102 | -siny, zeros, cosy], dim=1).reshape(B, 3, 3) 103 | 104 | cosx = torch.cos(x) 105 | sinx = torch.sin(x) 106 | 107 | xmat = torch.stack([ones, zeros, zeros, 108 | zeros, cosx, -sinx, 109 | zeros, sinx, cosx], dim=1).reshape(B, 3, 3) 110 | 111 | rotMat = xmat @ ymat @ zmat 112 | return rotMat 113 | 114 | 115 | def quat2mat(quat): 116 | """Convert quaternion coefficients to rotation matrix. 117 | Args: 118 | quat: first three coefficients of the quaternion of rotation; the fourth is computed so that the quaternion has unit norm -- size = [B, 3] 119 | Returns: 120 | Rotation matrix corresponding to the quaternion -- size = [B, 3, 3] 121 | """ 122 | norm_quat = torch.cat([quat[:, :1].detach()*0 + 1, quat], dim=1) 123 | norm_quat = norm_quat/norm_quat.norm(p=2, dim=1, keepdim=True) 124 | w, x, y, z = norm_quat[:, 0], norm_quat[:, 125 | 1], norm_quat[:, 2], norm_quat[:, 3] 126 | 127 | B = quat.size(0) 128 | 129 | w2, x2, y2, z2 = w.pow(2), x.pow(2), y.pow(2), z.pow(2) 130 | wx, wy, wz = w*x, w*y, w*z 131 | xy, xz, yz = x*y, x*z, y*z 132 | 133 | rotMat = torch.stack([w2 + x2 - y2 - z2, 2*xy - 2*wz, 2*wy + 2*xz, 134 | 2*wz + 2*xy, w2 - x2 + y2 - z2, 2*yz - 2*wx, 135 | 2*xz - 2*wy, 2*wx + 2*yz, w2 - x2 - y2 + z2], dim=1).reshape(B, 3, 3) 136 | return rotMat 137 | 138 | 139 | def pose_vec2mat(vec, rotation_mode='euler'): 140 | """ 141 | Convert 6DoF parameters to transformation matrix. 
142 | Args: 143 | vec: 6DoF parameters in the order of tx, ty, tz, rx, ry, rz -- [B, 6] 144 | Returns: 145 | A transformation matrix -- [B, 3, 4] 146 | """ 147 | translation = vec[:, :3].unsqueeze(-1) # [B, 3, 1] 148 | rot = vec[:, 3:] 149 | if rotation_mode == 'euler': 150 | rot_mat = euler2mat(rot) # [B, 3, 3] 151 | elif rotation_mode == 'quat': 152 | rot_mat = quat2mat(rot) # [B, 3, 3] 153 | transform_mat = torch.cat([rot_mat, translation], dim=2) # [B, 3, 4] 154 | return transform_mat 155 | 156 | 157 | def inverse_warp(img, depth, pose, intrinsics, rotation_mode='euler', padding_mode='zeros'): 158 | """ 159 | Inverse warp a source image to the target image plane. 160 | Args: 161 | img: the source image (where to sample pixels) -- [B, 3, H, W] 162 | depth: depth map of the target image -- [B, H, W] 163 | pose: 6DoF pose parameters from target to source -- [B, 6] 164 | intrinsics: camera intrinsic matrix -- [B, 3, 3] 165 | Returns: 166 | projected_img: Source image warped to the target image plane 167 | valid_points: Boolean array indicating point validity 168 | """ 169 | check_sizes(img, 'img', 'B3HW') 170 | check_sizes(depth, 'depth', 'BHW') 171 | check_sizes(pose, 'pose', 'B6') 172 | check_sizes(intrinsics, 'intrinsics', 'B33') 173 | 174 | batch_size, _, img_height, img_width = img.size() 175 | 176 | cam_coords = pixel2cam(depth, intrinsics.inverse()) # [B,3,H,W] 177 | 178 | pose_mat = pose_vec2mat(pose, rotation_mode) # [B,3,4] 179 | 180 | # Get projection matrix for tgt camera frame to source pixel frame 181 | proj_cam_to_src_pixel = intrinsics @ pose_mat # [B, 3, 4] 182 | 183 | rot, tr = proj_cam_to_src_pixel[:, :, :3], proj_cam_to_src_pixel[:, :, -1:] 184 | src_pixel_coords = cam2pixel( 185 | cam_coords, rot, tr, padding_mode) # [B,H,W,2] 186 | projected_img = F.grid_sample( 187 | img, src_pixel_coords, padding_mode=padding_mode) 188 | 189 | valid_points = src_pixel_coords.abs().max(dim=-1)[0] <= 1 190 | 191 | return projected_img, valid_points 192 | 193 | 194 | def cam2pixel2(cam_coords, proj_c2p_rot, proj_c2p_tr, padding_mode): 195 | """Transform coordinates in the camera frame to the pixel frame. 
196 | Args: 197 | cam_coords: 3D points in the first camera's coordinate frame -- [B, 3, H, W] 198 | proj_c2p_rot: rotation matrix of cameras -- [B, 3, 4] 199 | proj_c2p_tr: translation vectors of cameras -- [B, 3, 1] 200 | Returns: 201 | array of [-1,1] coordinates -- [B, 2, H, W] 202 | """ 203 | b, _, h, w = cam_coords.size() 204 | cam_coords_flat = cam_coords.reshape(b, 3, -1) # [B, 3, H*W] 205 | if proj_c2p_rot is not None: 206 | pcoords = proj_c2p_rot @ cam_coords_flat 207 | else: 208 | pcoords = cam_coords_flat 209 | 210 | if proj_c2p_tr is not None: 211 | pcoords = pcoords + proj_c2p_tr # [B, 3, H*W] 212 | X = pcoords[:, 0] 213 | Y = pcoords[:, 1] 214 | Z = pcoords[:, 2].clamp(min=1e-3) 215 | 216 | # Normalized, -1 if on extreme left, 1 if on extreme right (x = w-1) [B, H*W] 217 | X_norm = 2*(X / Z)/(w-1) - 1 218 | Y_norm = 2*(Y / Z)/(h-1) - 1 # Idem [B, H*W] 219 | if padding_mode == 'zeros': 220 | X_mask = ((X_norm > 1)+(X_norm < -1)).detach() 221 | # make sure that no point in the warped image is a combination of image and gray values 222 | X_norm[X_mask] = 2 223 | Y_mask = ((Y_norm > 1)+(Y_norm < -1)).detach() 224 | Y_norm[Y_mask] = 2 225 | 226 | pixel_coords = torch.stack([X_norm, Y_norm], dim=2) # [B, H*W, 2] 227 | return pixel_coords.reshape(b, h, w, 2), Z.reshape(b, 1, h, w) 228 | 229 | 230 | def inverse_warp2(img, depth, ref_depth, pose, intrinsics, padding_mode='zeros'): 231 | """ 232 | Inverse warp a source image to the target image plane. 233 | Args: 234 | img: the source image (where to sample pixels) -- [B, 3, H, W] 235 | depth: depth map of the target image -- [B, 1, H, W] 236 | ref_depth: the source depth map (where to sample depth) -- [B, 1, H, W] 237 | pose: 6DoF pose parameters from target to source -- [B, 6] 238 | intrinsics: camera intrinsic matrix -- [B, 3, 3] 239 | Returns: 240 | projected_img: Source image warped to the target image plane 241 | valid_mask: Float array indicating point validity 242 | """ 243 | check_sizes(img, 'img', 'B3HW') 244 | check_sizes(depth, 'depth', 'B1HW') 245 | check_sizes(ref_depth, 'ref_depth', 'B1HW') 246 | check_sizes(pose, 'pose', 'B6') 247 | check_sizes(intrinsics, 'intrinsics', 'B33') 248 | 249 | batch_size, _, img_height, img_width = img.size() 250 | 251 | cam_coords = pixel2cam(depth.squeeze(1), intrinsics.inverse()) # [B,3,H,W] 252 | 253 | pose_mat = pose_vec2mat(pose) # [B,3,4] 254 | 255 | # Get projection matrix for tgt camera frame to source pixel frame 256 | proj_cam_to_src_pixel = intrinsics @ pose_mat # [B, 3, 4] 257 | 258 | rot, tr = proj_cam_to_src_pixel[:, :, :3], proj_cam_to_src_pixel[:, :, -1:] 259 | src_pixel_coords, computed_depth = cam2pixel2( 260 | cam_coords, rot, tr, padding_mode) # [B,H,W,2] 261 | projected_img = F.grid_sample( 262 | img, src_pixel_coords, padding_mode=padding_mode) 263 | 264 | valid_points = src_pixel_coords.abs().max(dim=-1)[0] <= 1 265 | valid_mask = valid_points.unsqueeze(1).float() 266 | 267 | projected_depth = F.grid_sample( 268 | ref_depth, src_pixel_coords, padding_mode=padding_mode).clamp(min=1e-3) 269 | 270 | return projected_img, valid_mask, projected_depth, computed_depth -------------------------------------------------------------------------------- /core/networks/structures/net_utils.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from torch.autograd import Variable 4 | import pdb 5 | import numpy as np 6 | 7 | def conv(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1): 
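# basic building block shared by FeaturePyramid and PWC_tf: a padded Conv2d followed by LeakyReLU(0.1)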
8 | return nn.Sequential( 9 | nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, 10 | padding=padding, dilation=dilation, bias=True), 11 | nn.LeakyReLU(0.1)) 12 | 13 | def deconv(in_planes, out_planes, kernel_size=4, stride=2, padding=1): 14 | return nn.ConvTranspose2d(in_planes, out_planes, kernel_size, stride, padding, bias=True) 15 | 16 | def warp_flow(x, flow, use_mask=False): 17 | """ 18 | warp an image/tensor (im2) back to im1, according to the optical flow 19 | 20 | Inputs: 21 | x: [B, C, H, W] (im2) 22 | flow: [B, 2, H, W] flow 23 | 24 | Returns: 25 | output: [B, C, H, W] 26 | """ 27 | B, C, H, W = x.size() 28 | # mesh grid 29 | xx = torch.arange(0, W).view(1,-1).repeat(H,1) 30 | yy = torch.arange(0, H).view(-1,1).repeat(1,W) 31 | xx = xx.view(1,1,H,W).repeat(B,1,1,1) 32 | yy = yy.view(1,1,H,W).repeat(B,1,1,1) 33 | grid = torch.cat((xx,yy),1).float() 34 | 35 | if grid.shape != flow.shape: 36 | raise ValueError('the shape of grid {0} is not equal to the shape of flow {1}.'.format(grid.shape, flow.shape)) 37 | if x.is_cuda: 38 | grid = grid.to(x.get_device()) 39 | vgrid = grid + flow 40 | 41 | # scale grid to [-1,1] 42 | vgrid[:,0,:,:] = 2.0*vgrid[:,0,:,:].clone() / max(W-1,1)-1.0 43 | vgrid[:,1,:,:] = 2.0*vgrid[:,1,:,:].clone() / max(H-1,1)-1.0 44 | 45 | vgrid = vgrid.permute(0,2,3,1) 46 | output = nn.functional.grid_sample(x, vgrid) 47 | if use_mask: 48 | mask = torch.autograd.Variable(torch.ones(x.size())).to(x.get_device()) 49 | mask = nn.functional.grid_sample(mask, vgrid) 50 | mask[mask < 0.9999] = 0 51 | mask[mask > 0] = 1 52 | return output * mask 53 | else: 54 | return output 55 | 56 | if __name__ == '__main__': 57 | x = np.ones([1,1,10,10]) 58 | flow = np.stack([np.ones([1,10,10])*3.0, np.zeros([1,10,10])], axis=1) 59 | y = warp_flow(torch.from_numpy(x).cuda().float(),torch.from_numpy(flow).cuda().float()).cpu().detach().numpy() 60 | print(y) 61 | 62 | -------------------------------------------------------------------------------- /core/networks/structures/pwc_tf.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 3 | from net_utils import conv, deconv, warp_flow 4 | sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..', 'external')) 5 | # from correlation_package.correlation import Correlation 6 | # from spatial_correlation_sampler import SpatialCorrelationSampler as Correlation 7 | 8 | import torch 9 | import torch.nn as nn 10 | from torch.autograd import Variable 11 | import numpy as np 12 | import pdb 13 | import torch.nn.functional as F 14 | #from spatial_correlation_sampler import spatial_correlation_sample 15 | 16 | class PWC_tf(nn.Module): 17 | def __init__(self, md=4): 18 | super(PWC_tf, self).__init__() 19 | self.corr = self.corr_naive 20 | # self.corr = self.correlate 21 | self.leakyRELU = nn.LeakyReLU(0.1) 22 | 23 | nd = (2*md+1)**2 24 | #dd = np.cumsum([128,128,96,64,32]) 25 | dd = np.array([128,128,96,64,32]) 26 | 27 | od = nd 28 | self.conv6_0 = conv(od, 128, kernel_size=3, stride=1) 29 | self.conv6_1 = conv(dd[0], 128, kernel_size=3, stride=1) 30 | self.conv6_2 = conv(dd[0]+dd[1],96, kernel_size=3, stride=1) 31 | self.conv6_3 = conv(dd[1]+dd[2],64, kernel_size=3, stride=1) 32 | self.conv6_4 = conv(dd[2]+dd[3],32, kernel_size=3, stride=1) 33 | self.predict_flow6 = self.predict_flow(dd[3]+dd[4]) 34 | #self.deconv6 = deconv(2, 2, kernel_size=4, stride=2, padding=1) 35 | #self.upfeat6 = 
deconv(od+dd[4], 2, kernel_size=4, stride=2, padding=1) 36 | 37 | od = nd+128+2 38 | self.conv5_0 = conv(od, 128, kernel_size=3, stride=1) 39 | self.conv5_1 = conv(dd[0], 128, kernel_size=3, stride=1) 40 | self.conv5_2 = conv(dd[0]+dd[1],96, kernel_size=3, stride=1) 41 | self.conv5_3 = conv(dd[1]+dd[2],64, kernel_size=3, stride=1) 42 | self.conv5_4 = conv(dd[2]+dd[3],32, kernel_size=3, stride=1) 43 | self.predict_flow5 = self.predict_flow(dd[3]+dd[4]) 44 | #self.deconv5 = deconv(2, 2, kernel_size=4, stride=2, padding=1) 45 | #self.upfeat5 = deconv(od+dd[4], 2, kernel_size=4, stride=2, padding=1) 46 | 47 | od = nd+96+2 48 | self.conv4_0 = conv(od, 128, kernel_size=3, stride=1) 49 | self.conv4_1 = conv(dd[0], 128, kernel_size=3, stride=1) 50 | self.conv4_2 = conv(dd[0]+dd[1],96, kernel_size=3, stride=1) 51 | self.conv4_3 = conv(dd[1]+dd[2],64, kernel_size=3, stride=1) 52 | self.conv4_4 = conv(dd[2]+dd[3],32, kernel_size=3, stride=1) 53 | self.predict_flow4 = self.predict_flow(dd[3]+dd[4]) 54 | #self.deconv4 = deconv(2, 2, kernel_size=4, stride=2, padding=1) 55 | #self.upfeat4 = deconv(od+dd[4], 2, kernel_size=4, stride=2, padding=1) 56 | 57 | od = nd+64+2 58 | self.conv3_0 = conv(od, 128, kernel_size=3, stride=1) 59 | self.conv3_1 = conv(dd[0], 128, kernel_size=3, stride=1) 60 | self.conv3_2 = conv(dd[0]+dd[1],96, kernel_size=3, stride=1) 61 | self.conv3_3 = conv(dd[1]+dd[2],64, kernel_size=3, stride=1) 62 | self.conv3_4 = conv(dd[2]+dd[3],32, kernel_size=3, stride=1) 63 | self.predict_flow3 = self.predict_flow(dd[3]+dd[4]) 64 | #self.deconv3 = deconv(2, 2, kernel_size=4, stride=2, padding=1) 65 | #self.upfeat3 = deconv(od+dd[4], 2, kernel_size=4, stride=2, padding=1) 66 | 67 | od = nd+32+2 68 | self.conv2_0 = conv(od, 128, kernel_size=3, stride=1) 69 | self.conv2_1 = conv(dd[0], 128, kernel_size=3, stride=1) 70 | self.conv2_2 = conv(dd[0]+dd[1],96, kernel_size=3, stride=1) 71 | self.conv2_3 = conv(dd[1]+dd[2],64, kernel_size=3, stride=1) 72 | self.conv2_4 = conv(dd[2]+dd[3],32, kernel_size=3, stride=1) 73 | self.predict_flow2 = self.predict_flow(dd[3]+dd[4]) 74 | #self.deconv2 = deconv(2, 2, kernel_size=4, stride=2, padding=1) 75 | 76 | self.dc_conv1 = conv(dd[4]+2, 128, kernel_size=3, stride=1, padding=1, dilation=1) 77 | self.dc_conv2 = conv(128, 128, kernel_size=3, stride=1, padding=2, dilation=2) 78 | self.dc_conv3 = conv(128, 128, kernel_size=3, stride=1, padding=4, dilation=4) 79 | self.dc_conv4 = conv(128, 96, kernel_size=3, stride=1, padding=8, dilation=8) 80 | self.dc_conv5 = conv(96, 64, kernel_size=3, stride=1, padding=16, dilation=16) 81 | self.dc_conv6 = conv(64, 32, kernel_size=3, stride=1, padding=1, dilation=1) 82 | self.dc_conv7 = self.predict_flow(32) 83 | 84 | ''' 85 | for m in self.modules(): 86 | if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d): 87 | nn.init.kaiming_zeros_(m.weight.data) 88 | if m.bias is not None: 89 | m.bias.data.zero_() 90 | ''' 91 | def predict_flow(self, in_planes): 92 | return nn.Conv2d(in_planes,2,kernel_size=3,stride=1,padding=1,bias=True) 93 | 94 | def warp(self, x, flow): 95 | return warp_flow(x, flow, use_mask=False) 96 | 97 | def corr_naive(self, input1, input2, d=4): 98 | # naive pytorch implementation of the correlation layer. 
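# input2 is zero-padded by d on each side; for every displacement (i, j) in [-d, d]^2 the channel-wise mean of input1 * shifted(input2) is taken, stacking into a (2*d+1)**2-channel cost volume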
99 | assert (input1.shape == input2.shape) 100 | batch_size, feature_num, H, W = input1.shape[0:4] 101 | input2 = F.pad(input2, (d,d,d,d), value=0) 102 | cv = [] 103 | for i in range(2 * d + 1): 104 | for j in range(2 * d + 1): 105 | cv.append((input1 * input2[:, :, i:(i + H), j:(j + W)]).mean(1).unsqueeze(1)) 106 | return torch.cat(cv, 1) 107 | 108 | def forward(self, feature_list_1, feature_list_2, img_hw): 109 | c11, c12, c13, c14, c15, c16 = feature_list_1 110 | c21, c22, c23, c24, c25, c26 = feature_list_2 111 | 112 | corr6 = self.corr(c16, c26) 113 | x0 = self.conv6_0(corr6) 114 | x1 = self.conv6_1(x0) 115 | x2 = self.conv6_2(torch.cat((x0,x1),1)) 116 | x3 = self.conv6_3(torch.cat((x1,x2),1)) 117 | x4 = self.conv6_4(torch.cat((x2,x3),1)) 118 | flow6 = self.predict_flow6(torch.cat((x3,x4),1)) 119 | up_flow6 = F.interpolate(flow6, scale_factor=2.0, mode='bilinear')*2.0 120 | 121 | warp5 = self.warp(c25, up_flow6) 122 | corr5 = self.corr(c15, warp5) 123 | x = torch.cat((corr5, c15, up_flow6), 1) 124 | x0 = self.conv5_0(x) 125 | x1 = self.conv5_1(x0) 126 | x2 = self.conv5_2(torch.cat((x0,x1),1)) 127 | x3 = self.conv5_3(torch.cat((x1,x2),1)) 128 | x4 = self.conv5_4(torch.cat((x2,x3),1)) 129 | flow5 = self.predict_flow5(torch.cat((x3,x4),1)) 130 | flow5 = flow5 + up_flow6 131 | up_flow5 = F.interpolate(flow5, scale_factor=2.0, mode='bilinear')*2.0 132 | 133 | 134 | warp4 = self.warp(c24, up_flow5) 135 | corr4 = self.corr(c14, warp4) 136 | x = torch.cat((corr4, c14, up_flow5), 1) 137 | x0 = self.conv4_0(x) 138 | x1 = self.conv4_1(x0) 139 | x2 = self.conv4_2(torch.cat((x0,x1),1)) 140 | x3 = self.conv4_3(torch.cat((x1,x2),1)) 141 | x4 = self.conv4_4(torch.cat((x2,x3),1)) 142 | flow4 = self.predict_flow4(torch.cat((x3,x4),1)) 143 | flow4 = flow4 + up_flow5 144 | up_flow4 = F.interpolate(flow4, scale_factor=2.0, mode='bilinear')*2.0 145 | 146 | warp3 = self.warp(c23, up_flow4) 147 | corr3 = self.corr(c13, warp3) 148 | x = torch.cat((corr3, c13, up_flow4), 1) 149 | x0 = self.conv3_0(x) 150 | x1 = self.conv3_1(x0) 151 | x2 = self.conv3_2(torch.cat((x0,x1),1)) 152 | x3 = self.conv3_3(torch.cat((x1,x2),1)) 153 | x4 = self.conv3_4(torch.cat((x2,x3),1)) 154 | flow3 = self.predict_flow3(torch.cat((x3,x4),1)) 155 | flow3 = flow3 + up_flow4 156 | up_flow3 = F.interpolate(flow3, scale_factor=2.0, mode='bilinear')*2.0 157 | 158 | 159 | warp2 = self.warp(c22, up_flow3) 160 | corr2 = self.corr(c12, warp2) 161 | x = torch.cat((corr2, c12, up_flow3), 1) 162 | x0 = self.conv2_0(x) 163 | x1 = self.conv2_1(x0) 164 | x2 = self.conv2_2(torch.cat((x0,x1),1)) 165 | x3 = self.conv2_3(torch.cat((x1,x2),1)) 166 | x4 = self.conv2_4(torch.cat((x2,x3),1)) 167 | flow2 = self.predict_flow2(torch.cat((x3,x4),1)) 168 | flow2 = flow2 + up_flow3 169 | 170 | x = self.dc_conv4(self.dc_conv3(self.dc_conv2(self.dc_conv1(torch.cat([flow2, x4], 1))))) 171 | flow2 = flow2 + self.dc_conv7(self.dc_conv6(self.dc_conv5(x))) 172 | 173 | img_h, img_w = img_hw[0], img_hw[1] 174 | flow2 = F.interpolate(flow2 * 4.0, [img_h, img_w], mode='bilinear') 175 | flow3 = F.interpolate(flow3 * 4.0, [img_h // 2, img_w // 2], mode='bilinear') 176 | flow4 = F.interpolate(flow4 * 4.0, [img_h // 4, img_w // 4], mode='bilinear') 177 | flow5 = F.interpolate(flow5 * 4.0, [img_h // 8, img_w // 8], mode='bilinear') 178 | 179 | return [flow2, flow3, flow4, flow5] 180 | 181 | -------------------------------------------------------------------------------- /core/visualize/__init__.py: -------------------------------------------------------------------------------- 1 
| import os, sys 2 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 3 | from visualizer import Visualizer 4 | from visualizer import Visualizer_debug 5 | from profiler import Profiler 6 | -------------------------------------------------------------------------------- /core/visualize/profiler.py: -------------------------------------------------------------------------------- 1 | import os 2 | import time 3 | import torch 4 | import pdb 5 | 6 | class Profiler(object): 7 | def __init__(self, silent=False): 8 | self.silent = silent 9 | torch.cuda.synchronize() 10 | self.start = time.time() 11 | self.cache_time = self.start 12 | 13 | def reset(self, silent=None): 14 | if silent is None: 15 | silent = self.silent 16 | self.__init__(silent=silent) 17 | 18 | def report_process(self, process_name): 19 | if self.silent: 20 | return None 21 | torch.cuda.synchronize() 22 | now = time.time() 23 | print('{0}\t: {1:.4f}'.format(process_name, now - self.cache_time)) 24 | self.cache_time = now 25 | 26 | def report_all(self, whole_process_name): 27 | if self.silent: 28 | return None 29 | torch.cuda.synchronize() 30 | now = time.time() 31 | print('{0}\t: {1:.4f}'.format(whole_process_name, now - self.start)) 32 | # pdb.set_trace() # debug breakpoint, commented out so report_all does not halt the run 33 | 34 | -------------------------------------------------------------------------------- /core/visualize/visualizer.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | import numpy as np 3 | import cv2 4 | import pdb 5 | import pickle 6 | from mpl_toolkits import mplot3d 7 | import matplotlib.pyplot as plt 8 | import PIL.Image as pil 9 | import matplotlib as mpl 10 | import matplotlib.cm as cm 11 | 12 | 13 | colorlib = [(0,0,255),(255,0,0),(0,255,0),(255,255,0),(0,255,255),(255,0,255),(0,0,0),(255,255,255)] 14 | 15 | class Visualizer(object): 16 | def __init__(self, loss_weights_dict, dump_dir=None): 17 | self.loss_weights_dict = loss_weights_dict 18 | #self.use_flow_error = (self.loss_weights_dict['flow_error'] > 0) 19 | self.dump_dir = dump_dir 20 | 21 | self.log_list = [] 22 | 23 | def add_log_pack(self, log_pack): 24 | self.log_list.append(log_pack) 25 | 26 | def dump_log(self, fname=None): 27 | if fname is None: 28 | fname = self.dump_dir 29 | with open(fname, 'wb') as f: 30 | pickle.dump(self.log_list, f) 31 | 32 | def print_loss(self, loss_pack, iter_=None): 33 | loss_pixel = loss_pack['loss_pixel'].mean().detach().cpu().numpy() 34 | loss_ssim = loss_pack['loss_ssim'].mean().detach().cpu().numpy() 35 | loss_flow_smooth = loss_pack['loss_flow_smooth'].mean().detach().cpu().numpy() 36 | loss_flow_consis = loss_pack['loss_flow_consis'].mean().detach().cpu().numpy() 37 | if 'pt_depth_loss' in loss_pack.keys(): 38 | loss_pt_depth = loss_pack['pt_depth_loss'].mean().detach().cpu().numpy() 39 | loss_pj_depth = loss_pack['pj_depth_loss'].mean().detach().cpu().numpy() 40 | loss_depth_smooth = loss_pack['depth_smooth_loss'].mean().detach().cpu().numpy() 41 | str_= ('iter: {0}, loss_pixel: {1:.6f}, loss_ssim: {2:.6f}, loss_pt_depth: {3:.6f}, loss_pj_depth: {4:.6f}, loss_depth_smooth: {5:.6f}'.format(\ 42 | iter_, loss_pixel, loss_ssim, loss_pt_depth, loss_pj_depth, loss_depth_smooth)) 43 | #if self.use_flow_error: 44 | # loss_flow_error = loss_pack['flow_error'].mean().detach().cpu().numpy() 45 | # str_ = str_ + ', loss_flow_error: {0:.6f}'.format(loss_flow_error) 46 | print(str_) 47 | else: 48 | print('iter: {4}, loss_pixel: {0:.6f}, loss_ssim: {1:.6f}, loss_flow_smooth: {2:.6f}, loss_flow_consis: 
{3:.6f}'.format(loss_pixel, loss_ssim, loss_flow_smooth, loss_flow_consis, iter_)) 49 | 50 | class Visualizer_debug(): 51 | def __init__(self, dump_dir=None, img1=None, img2=None): 52 | self.dump_dir = dump_dir 53 | self.img1 = img1 54 | self.img2 = img2 55 | 56 | def draw_point_corres(self, batch_idx, match, name): 57 | img1 = self.img1[batch_idx] 58 | img2 = self.img2[batch_idx] 59 | self.show_corres(img1, img2, match, name) 60 | print("Correspondence Saved in " + self.dump_dir + '/' + name) 61 | 62 | def draw_invalid_corres_ray(self, img1, img2, depth_match, point2d_1_coord, point2d_2_coord, point2d_1_depth, point2d_2_depth, P1, P2): 63 | # img: [H, W, 3] match: [4, n] point2d_coord: [n, 2] P: [3, 4] 64 | idx = np.where(point2d_1_depth < 0)[0] 65 | select_match = depth_match[:, idx] 66 | self.show_corres(img1, img2, select_match) 67 | pdb.set_trace() 68 | 69 | def draw_epipolar_line(self, batch_idx, match, F, name): 70 | # img: [H, W, 3] match: [4,n] F: [3,3] 71 | img1 = self.img1[batch_idx] 72 | img2 = self.img2[batch_idx] 73 | self.show_epipolar_line(img1, img2, match, F, name) 74 | print("Epipolar Lines Saved in " + self.dump_dir + '/' + name) 75 | 76 | def show_corres(self, img1, img2, match, name): 77 | # img: [H, W, 3] match: [4, n] 78 | cv2.imwrite(os.path.join(self.dump_dir, name+'_img1_cor.png'), img1) 79 | cv2.imwrite(os.path.join(self.dump_dir, name+'_img2_cor.png'), img2) 80 | img1 = cv2.imread(os.path.join(self.dump_dir, name+'_img1_cor.png')) 81 | img2 = cv2.imread(os.path.join(self.dump_dir, name+'_img2_cor.png')) 82 | n = np.shape(match)[1] 83 | for i in range(n): 84 | x1,y1 = match[:2,i] 85 | x2,y2 = match[2:,i] 86 | #print((x1, y1)) 87 | #print((x2, y2)) 88 | cv2.circle(img1, (x1,y1), radius=1, color=colorlib[i%len(colorlib)], thickness=2) 89 | cv2.circle(img2, (x2,y2), radius=1, color=colorlib[i%len(colorlib)], thickness=2) 90 | cv2.imwrite(os.path.join(self.dump_dir, name+'_img1_cor.png'), img1) 91 | cv2.imwrite(os.path.join(self.dump_dir, name+'_img2_cor.png'), img2) 92 | 93 | def show_mask(self, mask, name): 94 | # mask: [H, W, 1] 95 | mask = mask / np.max(mask) * 255.0 96 | cv2.imwrite(os.path.join(self.dump_dir, name+'.png'), mask) 97 | 98 | def save_img(self, img, name): 99 | cv2.imwrite(os.path.join(self.dump_dir, name+'.png'), img) 100 | 101 | def save_depth_img(self, depth, name): 102 | # depth: [h,w,1] 103 | minddepth = np.min(depth) 104 | maxdepth = np.max(depth) 105 | depth_nor = (depth-minddepth) / (maxdepth-minddepth) * 255.0 106 | depth_nor = depth_nor.astype(np.uint8) 107 | cv2.imwrite(os.path.join(self.dump_dir, name+'_depth.png'), depth_nor) 108 | 109 | def save_disp_color_img(self, disp, name): 110 | vmax = np.percentile(disp, 95) 111 | normalizer = mpl.colors.Normalize(vmin=disp.min(), vmax=vmax) 112 | mapper = cm.ScalarMappable(norm=normalizer, cmap='magma') 113 | colormapped_im = (mapper.to_rgba(disp)[:,:,:3] * 255).astype(np.uint8) 114 | im = pil.fromarray(colormapped_im) 115 | 116 | name_dest_im = os.path.join(self.dump_dir, name + '_depth.jpg') 117 | im.save(name_dest_im) 118 | 119 | 120 | def drawlines(self, img1, img2, lines, pts1, pts2): 121 | ''' img1 - image on which we draw the epilines for the points in img2 122 | lines - corresponding epilines ''' 123 | r,c, _ = img1.shape 124 | for r,pt1,pt2 in zip(lines,pts1,pts2): 125 | color = tuple(np.random.randint(0,255,3).tolist()) 126 | x0,y0 = map(int, [0, -r[2]/r[1] ]) 127 | x1,y1 = map(int, [c, -(r[2]+r[0]*c)/r[1] ]) 128 | img1 = cv2.line(img1, (x0,y0), (x1,y1), color,1) 129 | img1 = 
cv2.circle(img1,tuple(pt1),3,color,-1) 130 | img2 = cv2.circle(img2,tuple(pt2),3,color,-1) 131 | return img1,img2 132 | 133 | def show_epipolar_line(self, img1, img2, match, F, name): 134 | # img: [H,W,3] match: [4,n] F: [3,3] 135 | pts1 = np.transpose(match[:2,:], [1,0]) 136 | pts2 = np.transpose(match[2:,:], [1,0]) 137 | lines1 = cv2.computeCorrespondEpilines(pts2.reshape(-1,1,2), 2,F) 138 | lines1 = lines1.reshape(-1,3) 139 | img5,img6 = self.drawlines(img1,img2,lines1,pts1,pts2) 140 | 141 | # Find epilines corresponding to points in left image (first image) and 142 | # drawing its lines on right image 143 | lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1,1,2), 1,F) 144 | lines2 = lines2.reshape(-1,3) 145 | img3,img4 = self.drawlines(img2,img1,lines2,pts2,pts1) 146 | 147 | cv2.imwrite(os.path.join(self.dump_dir, name+'_1eline.png'), img5) 148 | cv2.imwrite(os.path.join(self.dump_dir, name+'_2eline.png'), img3) 149 | 150 | return None 151 | 152 | 153 | def show_ray(self, ax, K, RT, point2d, cmap='Greens'): 154 | K_inv = np.linalg.inv(K) 155 | R, T = RT[:,:3], RT[:,3] 156 | ray_direction = np.matmul(np.matmul(R.T, K_inv), np.array([point2d[0], point2d[1], 1])) 157 | ray_direction = ray_direction / (np.linalg.norm(ray_direction, ord=2) + 1e-12) 158 | ray_origin = (-1) * np.matmul(R.T, T) 159 | 160 | scatters = [ray_origin + t * ray_direction for t in np.linspace(0.0, 100.0, 1000)] 161 | scatters = np.stack(scatters, axis=0) 162 | self.visualize_points(ax, scatters, cmap=cmap) 163 | self.scatter_3d(ax, scatters[0], scatter_color='r') 164 | return ray_direction 165 | 166 | def visualize_points(self, ax, points, cmap=None): 167 | # ax.plot3D(points[:,0], points[:,1], points[:,2], c=points[:,2], cmap=cmap) 168 | # ax.plot3D(points[:,0], points[:,1], points[:,2], c=points[:,2]) 169 | ax.plot3D(points[:,0], points[:,1], points[:,2]) 170 | 171 | def scatter_3d(self, ax, point, scatter_color='r'): 172 | ax.scatter(point[0], point[1], point[2], c=scatter_color) 173 | 174 | def visualize_two_rays(self, ax, match, P1, P2): 175 | # match: [4] P: [3,4] 176 | K = P1[:,:3] # the first P1 has identity rotation matrix and zero translation. 
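# recover [R|T] = K^-1 P for each view, then show_ray casts a ray per matched pixel: direction = R^T K^-1 [x, y, 1]^T (normalized), origin = -R^T T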
177 | K_inv = np.linalg.inv(K) 178 | RT1, RT2 = np.matmul(K_inv, P1), np.matmul(K_inv, P2) 179 | x1, y1, x2, y2 = match 180 | d1 = self.show_ray(ax, K, RT1, [x1, y1], cmap='Greens') 181 | d2 = self.show_ray(ax, K, RT2, [x2, y2], cmap='Reds') 182 | print(np.dot(d1.squeeze(), d2.squeeze())) 183 | 184 | if __name__ == '__main__': 185 | img1 = cv2.imread('./vis/ga.png') 186 | img2 = cv2.imread('./vis/gb.png') 187 | match = np.load('./vis/gmatch.npy') 188 | print(np.shape(img1)) 189 | match = np.reshape(match, [4,-1]) 190 | select_match = match[:,np.random.randint(200000, size=100)] 191 | Visualizer_debug(dump_dir='./vis').show_corres(img1, img2, select_match, 'g') # show_corres is a Visualizer_debug method; dump_dir/name here are illustrative, matching the './vis/g*.png' demo paths 192 | -------------------------------------------------------------------------------- /data/eigen/test_files.txt: -------------------------------------------------------------------------------- 1 | 2011_09_26/2011_09_26_drive_0002_sync 0000000069 l 2 | 2011_09_26/2011_09_26_drive_0002_sync 0000000054 l 3 | 2011_09_26/2011_09_26_drive_0002_sync 0000000042 l 4 | 2011_09_26/2011_09_26_drive_0002_sync 0000000057 l 5 | 2011_09_26/2011_09_26_drive_0002_sync 0000000030 l 6 | 2011_09_26/2011_09_26_drive_0002_sync 0000000027 l 7 | 2011_09_26/2011_09_26_drive_0002_sync 0000000012 l 8 | 2011_09_26/2011_09_26_drive_0002_sync 0000000075 l 9 | 2011_09_26/2011_09_26_drive_0002_sync 0000000036 l 10 | 2011_09_26/2011_09_26_drive_0002_sync 0000000033 l 11 | 2011_09_26/2011_09_26_drive_0002_sync 0000000015 l 12 | 2011_09_26/2011_09_26_drive_0002_sync 0000000072 l 13 | 2011_09_26/2011_09_26_drive_0002_sync 0000000003 l 14 | 2011_09_26/2011_09_26_drive_0002_sync 0000000039 l 15 | 2011_09_26/2011_09_26_drive_0002_sync 0000000009 l 16 | 2011_09_26/2011_09_26_drive_0002_sync 0000000051 l 17 | 2011_09_26/2011_09_26_drive_0002_sync 0000000060 l 18 | 2011_09_26/2011_09_26_drive_0002_sync 0000000021 l 19 | 2011_09_26/2011_09_26_drive_0002_sync 0000000000 l 20 | 2011_09_26/2011_09_26_drive_0002_sync 0000000024 l 21 | 2011_09_26/2011_09_26_drive_0002_sync 0000000045 l 22 | 2011_09_26/2011_09_26_drive_0002_sync 0000000018 l 23 | 2011_09_26/2011_09_26_drive_0002_sync 0000000048 l 24 | 2011_09_26/2011_09_26_drive_0002_sync 0000000006 l 25 | 2011_09_26/2011_09_26_drive_0002_sync 0000000063 l 26 | 2011_09_26/2011_09_26_drive_0009_sync 0000000000 l 27 | 2011_09_26/2011_09_26_drive_0009_sync 0000000016 l 28 | 2011_09_26/2011_09_26_drive_0009_sync 0000000032 l 29 | 2011_09_26/2011_09_26_drive_0009_sync 0000000048 l 30 | 2011_09_26/2011_09_26_drive_0009_sync 0000000064 l 31 | 2011_09_26/2011_09_26_drive_0009_sync 0000000080 l 32 | 2011_09_26/2011_09_26_drive_0009_sync 0000000096 l 33 | 2011_09_26/2011_09_26_drive_0009_sync 0000000112 l 34 | 2011_09_26/2011_09_26_drive_0009_sync 0000000128 l 35 | 2011_09_26/2011_09_26_drive_0009_sync 0000000144 l 36 | 2011_09_26/2011_09_26_drive_0009_sync 0000000160 l 37 | 2011_09_26/2011_09_26_drive_0009_sync 0000000176 l 38 | 2011_09_26/2011_09_26_drive_0009_sync 0000000196 l 39 | 2011_09_26/2011_09_26_drive_0009_sync 0000000212 l 40 | 2011_09_26/2011_09_26_drive_0009_sync 0000000228 l 41 | 2011_09_26/2011_09_26_drive_0009_sync 0000000244 l 42 | 2011_09_26/2011_09_26_drive_0009_sync 0000000260 l 43 | 2011_09_26/2011_09_26_drive_0009_sync 0000000276 l 44 | 2011_09_26/2011_09_26_drive_0009_sync 0000000292 l 45 | 2011_09_26/2011_09_26_drive_0009_sync 0000000308 l 46 | 2011_09_26/2011_09_26_drive_0009_sync 0000000324 l 47 | 2011_09_26/2011_09_26_drive_0009_sync 0000000340 l 48 | 2011_09_26/2011_09_26_drive_0009_sync 0000000356 l 49 | 2011_09_26/2011_09_26_drive_0009_sync 
0000000372 l 50 | 2011_09_26/2011_09_26_drive_0009_sync 0000000388 l 51 | 2011_09_26/2011_09_26_drive_0013_sync 0000000090 l 52 | 2011_09_26/2011_09_26_drive_0013_sync 0000000050 l 53 | 2011_09_26/2011_09_26_drive_0013_sync 0000000110 l 54 | 2011_09_26/2011_09_26_drive_0013_sync 0000000115 l 55 | 2011_09_26/2011_09_26_drive_0013_sync 0000000060 l 56 | 2011_09_26/2011_09_26_drive_0013_sync 0000000105 l 57 | 2011_09_26/2011_09_26_drive_0013_sync 0000000125 l 58 | 2011_09_26/2011_09_26_drive_0013_sync 0000000020 l 59 | 2011_09_26/2011_09_26_drive_0013_sync 0000000140 l 60 | 2011_09_26/2011_09_26_drive_0013_sync 0000000085 l 61 | 2011_09_26/2011_09_26_drive_0013_sync 0000000070 l 62 | 2011_09_26/2011_09_26_drive_0013_sync 0000000080 l 63 | 2011_09_26/2011_09_26_drive_0013_sync 0000000065 l 64 | 2011_09_26/2011_09_26_drive_0013_sync 0000000095 l 65 | 2011_09_26/2011_09_26_drive_0013_sync 0000000130 l 66 | 2011_09_26/2011_09_26_drive_0013_sync 0000000100 l 67 | 2011_09_26/2011_09_26_drive_0013_sync 0000000010 l 68 | 2011_09_26/2011_09_26_drive_0013_sync 0000000030 l 69 | 2011_09_26/2011_09_26_drive_0013_sync 0000000000 l 70 | 2011_09_26/2011_09_26_drive_0013_sync 0000000135 l 71 | 2011_09_26/2011_09_26_drive_0013_sync 0000000040 l 72 | 2011_09_26/2011_09_26_drive_0013_sync 0000000005 l 73 | 2011_09_26/2011_09_26_drive_0013_sync 0000000120 l 74 | 2011_09_26/2011_09_26_drive_0013_sync 0000000045 l 75 | 2011_09_26/2011_09_26_drive_0013_sync 0000000035 l 76 | 2011_09_26/2011_09_26_drive_0020_sync 0000000003 l 77 | 2011_09_26/2011_09_26_drive_0020_sync 0000000069 l 78 | 2011_09_26/2011_09_26_drive_0020_sync 0000000057 l 79 | 2011_09_26/2011_09_26_drive_0020_sync 0000000012 l 80 | 2011_09_26/2011_09_26_drive_0020_sync 0000000072 l 81 | 2011_09_26/2011_09_26_drive_0020_sync 0000000018 l 82 | 2011_09_26/2011_09_26_drive_0020_sync 0000000063 l 83 | 2011_09_26/2011_09_26_drive_0020_sync 0000000000 l 84 | 2011_09_26/2011_09_26_drive_0020_sync 0000000084 l 85 | 2011_09_26/2011_09_26_drive_0020_sync 0000000015 l 86 | 2011_09_26/2011_09_26_drive_0020_sync 0000000066 l 87 | 2011_09_26/2011_09_26_drive_0020_sync 0000000006 l 88 | 2011_09_26/2011_09_26_drive_0020_sync 0000000048 l 89 | 2011_09_26/2011_09_26_drive_0020_sync 0000000060 l 90 | 2011_09_26/2011_09_26_drive_0020_sync 0000000009 l 91 | 2011_09_26/2011_09_26_drive_0020_sync 0000000033 l 92 | 2011_09_26/2011_09_26_drive_0020_sync 0000000021 l 93 | 2011_09_26/2011_09_26_drive_0020_sync 0000000075 l 94 | 2011_09_26/2011_09_26_drive_0020_sync 0000000027 l 95 | 2011_09_26/2011_09_26_drive_0020_sync 0000000045 l 96 | 2011_09_26/2011_09_26_drive_0020_sync 0000000078 l 97 | 2011_09_26/2011_09_26_drive_0020_sync 0000000036 l 98 | 2011_09_26/2011_09_26_drive_0020_sync 0000000051 l 99 | 2011_09_26/2011_09_26_drive_0020_sync 0000000054 l 100 | 2011_09_26/2011_09_26_drive_0020_sync 0000000042 l 101 | 2011_09_26/2011_09_26_drive_0023_sync 0000000018 l 102 | 2011_09_26/2011_09_26_drive_0023_sync 0000000090 l 103 | 2011_09_26/2011_09_26_drive_0023_sync 0000000126 l 104 | 2011_09_26/2011_09_26_drive_0023_sync 0000000378 l 105 | 2011_09_26/2011_09_26_drive_0023_sync 0000000036 l 106 | 2011_09_26/2011_09_26_drive_0023_sync 0000000288 l 107 | 2011_09_26/2011_09_26_drive_0023_sync 0000000198 l 108 | 2011_09_26/2011_09_26_drive_0023_sync 0000000450 l 109 | 2011_09_26/2011_09_26_drive_0023_sync 0000000144 l 110 | 2011_09_26/2011_09_26_drive_0023_sync 0000000072 l 111 | 2011_09_26/2011_09_26_drive_0023_sync 0000000252 l 112 | 2011_09_26/2011_09_26_drive_0023_sync 0000000180 l 
113 | 2011_09_26/2011_09_26_drive_0023_sync 0000000432 l 114 | 2011_09_26/2011_09_26_drive_0023_sync 0000000396 l 115 | 2011_09_26/2011_09_26_drive_0023_sync 0000000054 l 116 | 2011_09_26/2011_09_26_drive_0023_sync 0000000468 l 117 | 2011_09_26/2011_09_26_drive_0023_sync 0000000306 l 118 | 2011_09_26/2011_09_26_drive_0023_sync 0000000108 l 119 | 2011_09_26/2011_09_26_drive_0023_sync 0000000162 l 120 | 2011_09_26/2011_09_26_drive_0023_sync 0000000342 l 121 | 2011_09_26/2011_09_26_drive_0023_sync 0000000270 l 122 | 2011_09_26/2011_09_26_drive_0023_sync 0000000414 l 123 | 2011_09_26/2011_09_26_drive_0023_sync 0000000216 l 124 | 2011_09_26/2011_09_26_drive_0023_sync 0000000360 l 125 | 2011_09_26/2011_09_26_drive_0023_sync 0000000324 l 126 | 2011_09_26/2011_09_26_drive_0027_sync 0000000077 l 127 | 2011_09_26/2011_09_26_drive_0027_sync 0000000035 l 128 | 2011_09_26/2011_09_26_drive_0027_sync 0000000091 l 129 | 2011_09_26/2011_09_26_drive_0027_sync 0000000112 l 130 | 2011_09_26/2011_09_26_drive_0027_sync 0000000007 l 131 | 2011_09_26/2011_09_26_drive_0027_sync 0000000175 l 132 | 2011_09_26/2011_09_26_drive_0027_sync 0000000042 l 133 | 2011_09_26/2011_09_26_drive_0027_sync 0000000098 l 134 | 2011_09_26/2011_09_26_drive_0027_sync 0000000133 l 135 | 2011_09_26/2011_09_26_drive_0027_sync 0000000161 l 136 | 2011_09_26/2011_09_26_drive_0027_sync 0000000014 l 137 | 2011_09_26/2011_09_26_drive_0027_sync 0000000126 l 138 | 2011_09_26/2011_09_26_drive_0027_sync 0000000168 l 139 | 2011_09_26/2011_09_26_drive_0027_sync 0000000070 l 140 | 2011_09_26/2011_09_26_drive_0027_sync 0000000084 l 141 | 2011_09_26/2011_09_26_drive_0027_sync 0000000140 l 142 | 2011_09_26/2011_09_26_drive_0027_sync 0000000049 l 143 | 2011_09_26/2011_09_26_drive_0027_sync 0000000000 l 144 | 2011_09_26/2011_09_26_drive_0027_sync 0000000182 l 145 | 2011_09_26/2011_09_26_drive_0027_sync 0000000147 l 146 | 2011_09_26/2011_09_26_drive_0027_sync 0000000056 l 147 | 2011_09_26/2011_09_26_drive_0027_sync 0000000063 l 148 | 2011_09_26/2011_09_26_drive_0027_sync 0000000021 l 149 | 2011_09_26/2011_09_26_drive_0027_sync 0000000119 l 150 | 2011_09_26/2011_09_26_drive_0027_sync 0000000028 l 151 | 2011_09_26/2011_09_26_drive_0029_sync 0000000380 l 152 | 2011_09_26/2011_09_26_drive_0029_sync 0000000394 l 153 | 2011_09_26/2011_09_26_drive_0029_sync 0000000324 l 154 | 2011_09_26/2011_09_26_drive_0029_sync 0000000000 l 155 | 2011_09_26/2011_09_26_drive_0029_sync 0000000268 l 156 | 2011_09_26/2011_09_26_drive_0029_sync 0000000366 l 157 | 2011_09_26/2011_09_26_drive_0029_sync 0000000296 l 158 | 2011_09_26/2011_09_26_drive_0029_sync 0000000014 l 159 | 2011_09_26/2011_09_26_drive_0029_sync 0000000028 l 160 | 2011_09_26/2011_09_26_drive_0029_sync 0000000182 l 161 | 2011_09_26/2011_09_26_drive_0029_sync 0000000168 l 162 | 2011_09_26/2011_09_26_drive_0029_sync 0000000196 l 163 | 2011_09_26/2011_09_26_drive_0029_sync 0000000140 l 164 | 2011_09_26/2011_09_26_drive_0029_sync 0000000084 l 165 | 2011_09_26/2011_09_26_drive_0029_sync 0000000056 l 166 | 2011_09_26/2011_09_26_drive_0029_sync 0000000112 l 167 | 2011_09_26/2011_09_26_drive_0029_sync 0000000352 l 168 | 2011_09_26/2011_09_26_drive_0029_sync 0000000126 l 169 | 2011_09_26/2011_09_26_drive_0029_sync 0000000070 l 170 | 2011_09_26/2011_09_26_drive_0029_sync 0000000310 l 171 | 2011_09_26/2011_09_26_drive_0029_sync 0000000154 l 172 | 2011_09_26/2011_09_26_drive_0029_sync 0000000098 l 173 | 2011_09_26/2011_09_26_drive_0029_sync 0000000408 l 174 | 2011_09_26/2011_09_26_drive_0029_sync 0000000042 l 175 | 
2011_09_26/2011_09_26_drive_0029_sync 0000000338 l 176 | 2011_09_26/2011_09_26_drive_0036_sync 0000000000 l 177 | 2011_09_26/2011_09_26_drive_0036_sync 0000000128 l 178 | 2011_09_26/2011_09_26_drive_0036_sync 0000000192 l 179 | 2011_09_26/2011_09_26_drive_0036_sync 0000000032 l 180 | 2011_09_26/2011_09_26_drive_0036_sync 0000000352 l 181 | 2011_09_26/2011_09_26_drive_0036_sync 0000000608 l 182 | 2011_09_26/2011_09_26_drive_0036_sync 0000000224 l 183 | 2011_09_26/2011_09_26_drive_0036_sync 0000000576 l 184 | 2011_09_26/2011_09_26_drive_0036_sync 0000000672 l 185 | 2011_09_26/2011_09_26_drive_0036_sync 0000000064 l 186 | 2011_09_26/2011_09_26_drive_0036_sync 0000000448 l 187 | 2011_09_26/2011_09_26_drive_0036_sync 0000000704 l 188 | 2011_09_26/2011_09_26_drive_0036_sync 0000000640 l 189 | 2011_09_26/2011_09_26_drive_0036_sync 0000000512 l 190 | 2011_09_26/2011_09_26_drive_0036_sync 0000000768 l 191 | 2011_09_26/2011_09_26_drive_0036_sync 0000000160 l 192 | 2011_09_26/2011_09_26_drive_0036_sync 0000000416 l 193 | 2011_09_26/2011_09_26_drive_0036_sync 0000000480 l 194 | 2011_09_26/2011_09_26_drive_0036_sync 0000000800 l 195 | 2011_09_26/2011_09_26_drive_0036_sync 0000000288 l 196 | 2011_09_26/2011_09_26_drive_0036_sync 0000000544 l 197 | 2011_09_26/2011_09_26_drive_0036_sync 0000000096 l 198 | 2011_09_26/2011_09_26_drive_0036_sync 0000000384 l 199 | 2011_09_26/2011_09_26_drive_0036_sync 0000000256 l 200 | 2011_09_26/2011_09_26_drive_0036_sync 0000000320 l 201 | 2011_09_26/2011_09_26_drive_0046_sync 0000000000 l 202 | 2011_09_26/2011_09_26_drive_0046_sync 0000000005 l 203 | 2011_09_26/2011_09_26_drive_0046_sync 0000000010 l 204 | 2011_09_26/2011_09_26_drive_0046_sync 0000000015 l 205 | 2011_09_26/2011_09_26_drive_0046_sync 0000000020 l 206 | 2011_09_26/2011_09_26_drive_0046_sync 0000000025 l 207 | 2011_09_26/2011_09_26_drive_0046_sync 0000000030 l 208 | 2011_09_26/2011_09_26_drive_0046_sync 0000000035 l 209 | 2011_09_26/2011_09_26_drive_0046_sync 0000000040 l 210 | 2011_09_26/2011_09_26_drive_0046_sync 0000000045 l 211 | 2011_09_26/2011_09_26_drive_0046_sync 0000000050 l 212 | 2011_09_26/2011_09_26_drive_0046_sync 0000000055 l 213 | 2011_09_26/2011_09_26_drive_0046_sync 0000000060 l 214 | 2011_09_26/2011_09_26_drive_0046_sync 0000000065 l 215 | 2011_09_26/2011_09_26_drive_0046_sync 0000000070 l 216 | 2011_09_26/2011_09_26_drive_0046_sync 0000000075 l 217 | 2011_09_26/2011_09_26_drive_0046_sync 0000000080 l 218 | 2011_09_26/2011_09_26_drive_0046_sync 0000000085 l 219 | 2011_09_26/2011_09_26_drive_0046_sync 0000000090 l 220 | 2011_09_26/2011_09_26_drive_0046_sync 0000000095 l 221 | 2011_09_26/2011_09_26_drive_0046_sync 0000000100 l 222 | 2011_09_26/2011_09_26_drive_0046_sync 0000000105 l 223 | 2011_09_26/2011_09_26_drive_0046_sync 0000000110 l 224 | 2011_09_26/2011_09_26_drive_0046_sync 0000000115 l 225 | 2011_09_26/2011_09_26_drive_0046_sync 0000000120 l 226 | 2011_09_26/2011_09_26_drive_0048_sync 0000000000 l 227 | 2011_09_26/2011_09_26_drive_0048_sync 0000000001 l 228 | 2011_09_26/2011_09_26_drive_0048_sync 0000000002 l 229 | 2011_09_26/2011_09_26_drive_0048_sync 0000000003 l 230 | 2011_09_26/2011_09_26_drive_0048_sync 0000000004 l 231 | 2011_09_26/2011_09_26_drive_0048_sync 0000000005 l 232 | 2011_09_26/2011_09_26_drive_0048_sync 0000000006 l 233 | 2011_09_26/2011_09_26_drive_0048_sync 0000000007 l 234 | 2011_09_26/2011_09_26_drive_0048_sync 0000000008 l 235 | 2011_09_26/2011_09_26_drive_0048_sync 0000000009 l 236 | 2011_09_26/2011_09_26_drive_0048_sync 0000000010 l 237 | 
2011_09_26/2011_09_26_drive_0048_sync 0000000011 l 238 | 2011_09_26/2011_09_26_drive_0048_sync 0000000012 l 239 | 2011_09_26/2011_09_26_drive_0048_sync 0000000013 l 240 | 2011_09_26/2011_09_26_drive_0048_sync 0000000014 l 241 | 2011_09_26/2011_09_26_drive_0048_sync 0000000015 l 242 | 2011_09_26/2011_09_26_drive_0048_sync 0000000016 l 243 | 2011_09_26/2011_09_26_drive_0048_sync 0000000017 l 244 | 2011_09_26/2011_09_26_drive_0048_sync 0000000018 l 245 | 2011_09_26/2011_09_26_drive_0048_sync 0000000019 l 246 | 2011_09_26/2011_09_26_drive_0048_sync 0000000020 l 247 | 2011_09_26/2011_09_26_drive_0048_sync 0000000021 l 248 | 2011_09_26/2011_09_26_drive_0052_sync 0000000046 l 249 | 2011_09_26/2011_09_26_drive_0052_sync 0000000014 l 250 | 2011_09_26/2011_09_26_drive_0052_sync 0000000036 l 251 | 2011_09_26/2011_09_26_drive_0052_sync 0000000028 l 252 | 2011_09_26/2011_09_26_drive_0052_sync 0000000026 l 253 | 2011_09_26/2011_09_26_drive_0052_sync 0000000050 l 254 | 2011_09_26/2011_09_26_drive_0052_sync 0000000040 l 255 | 2011_09_26/2011_09_26_drive_0052_sync 0000000008 l 256 | 2011_09_26/2011_09_26_drive_0052_sync 0000000016 l 257 | 2011_09_26/2011_09_26_drive_0052_sync 0000000044 l 258 | 2011_09_26/2011_09_26_drive_0052_sync 0000000018 l 259 | 2011_09_26/2011_09_26_drive_0052_sync 0000000032 l 260 | 2011_09_26/2011_09_26_drive_0052_sync 0000000042 l 261 | 2011_09_26/2011_09_26_drive_0052_sync 0000000010 l 262 | 2011_09_26/2011_09_26_drive_0052_sync 0000000020 l 263 | 2011_09_26/2011_09_26_drive_0052_sync 0000000048 l 264 | 2011_09_26/2011_09_26_drive_0052_sync 0000000052 l 265 | 2011_09_26/2011_09_26_drive_0052_sync 0000000006 l 266 | 2011_09_26/2011_09_26_drive_0052_sync 0000000030 l 267 | 2011_09_26/2011_09_26_drive_0052_sync 0000000012 l 268 | 2011_09_26/2011_09_26_drive_0052_sync 0000000038 l 269 | 2011_09_26/2011_09_26_drive_0052_sync 0000000000 l 270 | 2011_09_26/2011_09_26_drive_0052_sync 0000000002 l 271 | 2011_09_26/2011_09_26_drive_0052_sync 0000000004 l 272 | 2011_09_26/2011_09_26_drive_0052_sync 0000000022 l 273 | 2011_09_26/2011_09_26_drive_0056_sync 0000000011 l 274 | 2011_09_26/2011_09_26_drive_0056_sync 0000000033 l 275 | 2011_09_26/2011_09_26_drive_0056_sync 0000000242 l 276 | 2011_09_26/2011_09_26_drive_0056_sync 0000000253 l 277 | 2011_09_26/2011_09_26_drive_0056_sync 0000000286 l 278 | 2011_09_26/2011_09_26_drive_0056_sync 0000000154 l 279 | 2011_09_26/2011_09_26_drive_0056_sync 0000000099 l 280 | 2011_09_26/2011_09_26_drive_0056_sync 0000000220 l 281 | 2011_09_26/2011_09_26_drive_0056_sync 0000000022 l 282 | 2011_09_26/2011_09_26_drive_0056_sync 0000000077 l 283 | 2011_09_26/2011_09_26_drive_0056_sync 0000000187 l 284 | 2011_09_26/2011_09_26_drive_0056_sync 0000000143 l 285 | 2011_09_26/2011_09_26_drive_0056_sync 0000000066 l 286 | 2011_09_26/2011_09_26_drive_0056_sync 0000000176 l 287 | 2011_09_26/2011_09_26_drive_0056_sync 0000000110 l 288 | 2011_09_26/2011_09_26_drive_0056_sync 0000000275 l 289 | 2011_09_26/2011_09_26_drive_0056_sync 0000000264 l 290 | 2011_09_26/2011_09_26_drive_0056_sync 0000000198 l 291 | 2011_09_26/2011_09_26_drive_0056_sync 0000000055 l 292 | 2011_09_26/2011_09_26_drive_0056_sync 0000000088 l 293 | 2011_09_26/2011_09_26_drive_0056_sync 0000000121 l 294 | 2011_09_26/2011_09_26_drive_0056_sync 0000000209 l 295 | 2011_09_26/2011_09_26_drive_0056_sync 0000000165 l 296 | 2011_09_26/2011_09_26_drive_0056_sync 0000000231 l 297 | 2011_09_26/2011_09_26_drive_0056_sync 0000000044 l 298 | 2011_09_26/2011_09_26_drive_0059_sync 0000000056 l 299 | 
2011_09_26/2011_09_26_drive_0059_sync 0000000000 l 300 | 2011_09_26/2011_09_26_drive_0059_sync 0000000344 l 301 | 2011_09_26/2011_09_26_drive_0059_sync 0000000358 l 302 | 2011_09_26/2011_09_26_drive_0059_sync 0000000316 l 303 | 2011_09_26/2011_09_26_drive_0059_sync 0000000238 l 304 | 2011_09_26/2011_09_26_drive_0059_sync 0000000098 l 305 | 2011_09_26/2011_09_26_drive_0059_sync 0000000112 l 306 | 2011_09_26/2011_09_26_drive_0059_sync 0000000028 l 307 | 2011_09_26/2011_09_26_drive_0059_sync 0000000014 l 308 | 2011_09_26/2011_09_26_drive_0059_sync 0000000330 l 309 | 2011_09_26/2011_09_26_drive_0059_sync 0000000154 l 310 | 2011_09_26/2011_09_26_drive_0059_sync 0000000042 l 311 | 2011_09_26/2011_09_26_drive_0059_sync 0000000302 l 312 | 2011_09_26/2011_09_26_drive_0059_sync 0000000182 l 313 | 2011_09_26/2011_09_26_drive_0059_sync 0000000288 l 314 | 2011_09_26/2011_09_26_drive_0059_sync 0000000140 l 315 | 2011_09_26/2011_09_26_drive_0059_sync 0000000274 l 316 | 2011_09_26/2011_09_26_drive_0059_sync 0000000224 l 317 | 2011_09_26/2011_09_26_drive_0059_sync 0000000372 l 318 | 2011_09_26/2011_09_26_drive_0059_sync 0000000196 l 319 | 2011_09_26/2011_09_26_drive_0059_sync 0000000126 l 320 | 2011_09_26/2011_09_26_drive_0059_sync 0000000084 l 321 | 2011_09_26/2011_09_26_drive_0059_sync 0000000210 l 322 | 2011_09_26/2011_09_26_drive_0059_sync 0000000070 l 323 | 2011_09_26/2011_09_26_drive_0064_sync 0000000528 l 324 | 2011_09_26/2011_09_26_drive_0064_sync 0000000308 l 325 | 2011_09_26/2011_09_26_drive_0064_sync 0000000044 l 326 | 2011_09_26/2011_09_26_drive_0064_sync 0000000352 l 327 | 2011_09_26/2011_09_26_drive_0064_sync 0000000066 l 328 | 2011_09_26/2011_09_26_drive_0064_sync 0000000000 l 329 | 2011_09_26/2011_09_26_drive_0064_sync 0000000506 l 330 | 2011_09_26/2011_09_26_drive_0064_sync 0000000176 l 331 | 2011_09_26/2011_09_26_drive_0064_sync 0000000022 l 332 | 2011_09_26/2011_09_26_drive_0064_sync 0000000242 l 333 | 2011_09_26/2011_09_26_drive_0064_sync 0000000462 l 334 | 2011_09_26/2011_09_26_drive_0064_sync 0000000418 l 335 | 2011_09_26/2011_09_26_drive_0064_sync 0000000110 l 336 | 2011_09_26/2011_09_26_drive_0064_sync 0000000440 l 337 | 2011_09_26/2011_09_26_drive_0064_sync 0000000396 l 338 | 2011_09_26/2011_09_26_drive_0064_sync 0000000154 l 339 | 2011_09_26/2011_09_26_drive_0064_sync 0000000374 l 340 | 2011_09_26/2011_09_26_drive_0064_sync 0000000088 l 341 | 2011_09_26/2011_09_26_drive_0064_sync 0000000286 l 342 | 2011_09_26/2011_09_26_drive_0064_sync 0000000550 l 343 | 2011_09_26/2011_09_26_drive_0064_sync 0000000264 l 344 | 2011_09_26/2011_09_26_drive_0064_sync 0000000220 l 345 | 2011_09_26/2011_09_26_drive_0064_sync 0000000330 l 346 | 2011_09_26/2011_09_26_drive_0064_sync 0000000484 l 347 | 2011_09_26/2011_09_26_drive_0064_sync 0000000198 l 348 | 2011_09_26/2011_09_26_drive_0084_sync 0000000283 l 349 | 2011_09_26/2011_09_26_drive_0084_sync 0000000361 l 350 | 2011_09_26/2011_09_26_drive_0084_sync 0000000270 l 351 | 2011_09_26/2011_09_26_drive_0084_sync 0000000127 l 352 | 2011_09_26/2011_09_26_drive_0084_sync 0000000205 l 353 | 2011_09_26/2011_09_26_drive_0084_sync 0000000218 l 354 | 2011_09_26/2011_09_26_drive_0084_sync 0000000153 l 355 | 2011_09_26/2011_09_26_drive_0084_sync 0000000335 l 356 | 2011_09_26/2011_09_26_drive_0084_sync 0000000192 l 357 | 2011_09_26/2011_09_26_drive_0084_sync 0000000348 l 358 | 2011_09_26/2011_09_26_drive_0084_sync 0000000101 l 359 | 2011_09_26/2011_09_26_drive_0084_sync 0000000049 l 360 | 2011_09_26/2011_09_26_drive_0084_sync 0000000179 l 361 | 
2011_09_26/2011_09_26_drive_0084_sync 0000000140 l 362 | 2011_09_26/2011_09_26_drive_0084_sync 0000000374 l 363 | 2011_09_26/2011_09_26_drive_0084_sync 0000000322 l 364 | 2011_09_26/2011_09_26_drive_0084_sync 0000000309 l 365 | 2011_09_26/2011_09_26_drive_0084_sync 0000000244 l 366 | 2011_09_26/2011_09_26_drive_0084_sync 0000000062 l 367 | 2011_09_26/2011_09_26_drive_0084_sync 0000000257 l 368 | 2011_09_26/2011_09_26_drive_0084_sync 0000000088 l 369 | 2011_09_26/2011_09_26_drive_0084_sync 0000000114 l 370 | 2011_09_26/2011_09_26_drive_0084_sync 0000000075 l 371 | 2011_09_26/2011_09_26_drive_0084_sync 0000000296 l 372 | 2011_09_26/2011_09_26_drive_0084_sync 0000000231 l 373 | 2011_09_26/2011_09_26_drive_0086_sync 0000000007 l 374 | 2011_09_26/2011_09_26_drive_0086_sync 0000000196 l 375 | 2011_09_26/2011_09_26_drive_0086_sync 0000000439 l 376 | 2011_09_26/2011_09_26_drive_0086_sync 0000000169 l 377 | 2011_09_26/2011_09_26_drive_0086_sync 0000000115 l 378 | 2011_09_26/2011_09_26_drive_0086_sync 0000000034 l 379 | 2011_09_26/2011_09_26_drive_0086_sync 0000000304 l 380 | 2011_09_26/2011_09_26_drive_0086_sync 0000000331 l 381 | 2011_09_26/2011_09_26_drive_0086_sync 0000000277 l 382 | 2011_09_26/2011_09_26_drive_0086_sync 0000000520 l 383 | 2011_09_26/2011_09_26_drive_0086_sync 0000000682 l 384 | 2011_09_26/2011_09_26_drive_0086_sync 0000000628 l 385 | 2011_09_26/2011_09_26_drive_0086_sync 0000000088 l 386 | 2011_09_26/2011_09_26_drive_0086_sync 0000000601 l 387 | 2011_09_26/2011_09_26_drive_0086_sync 0000000574 l 388 | 2011_09_26/2011_09_26_drive_0086_sync 0000000223 l 389 | 2011_09_26/2011_09_26_drive_0086_sync 0000000655 l 390 | 2011_09_26/2011_09_26_drive_0086_sync 0000000358 l 391 | 2011_09_26/2011_09_26_drive_0086_sync 0000000412 l 392 | 2011_09_26/2011_09_26_drive_0086_sync 0000000142 l 393 | 2011_09_26/2011_09_26_drive_0086_sync 0000000385 l 394 | 2011_09_26/2011_09_26_drive_0086_sync 0000000061 l 395 | 2011_09_26/2011_09_26_drive_0086_sync 0000000493 l 396 | 2011_09_26/2011_09_26_drive_0086_sync 0000000466 l 397 | 2011_09_26/2011_09_26_drive_0086_sync 0000000250 l 398 | 2011_09_26/2011_09_26_drive_0093_sync 0000000000 l 399 | 2011_09_26/2011_09_26_drive_0093_sync 0000000016 l 400 | 2011_09_26/2011_09_26_drive_0093_sync 0000000032 l 401 | 2011_09_26/2011_09_26_drive_0093_sync 0000000048 l 402 | 2011_09_26/2011_09_26_drive_0093_sync 0000000064 l 403 | 2011_09_26/2011_09_26_drive_0093_sync 0000000080 l 404 | 2011_09_26/2011_09_26_drive_0093_sync 0000000096 l 405 | 2011_09_26/2011_09_26_drive_0093_sync 0000000112 l 406 | 2011_09_26/2011_09_26_drive_0093_sync 0000000128 l 407 | 2011_09_26/2011_09_26_drive_0093_sync 0000000144 l 408 | 2011_09_26/2011_09_26_drive_0093_sync 0000000160 l 409 | 2011_09_26/2011_09_26_drive_0093_sync 0000000176 l 410 | 2011_09_26/2011_09_26_drive_0093_sync 0000000192 l 411 | 2011_09_26/2011_09_26_drive_0093_sync 0000000208 l 412 | 2011_09_26/2011_09_26_drive_0093_sync 0000000224 l 413 | 2011_09_26/2011_09_26_drive_0093_sync 0000000240 l 414 | 2011_09_26/2011_09_26_drive_0093_sync 0000000256 l 415 | 2011_09_26/2011_09_26_drive_0093_sync 0000000305 l 416 | 2011_09_26/2011_09_26_drive_0093_sync 0000000321 l 417 | 2011_09_26/2011_09_26_drive_0093_sync 0000000337 l 418 | 2011_09_26/2011_09_26_drive_0093_sync 0000000353 l 419 | 2011_09_26/2011_09_26_drive_0093_sync 0000000369 l 420 | 2011_09_26/2011_09_26_drive_0093_sync 0000000385 l 421 | 2011_09_26/2011_09_26_drive_0093_sync 0000000401 l 422 | 2011_09_26/2011_09_26_drive_0093_sync 0000000417 l 423 | 
2011_09_26/2011_09_26_drive_0096_sync 0000000000 l 424 | 2011_09_26/2011_09_26_drive_0096_sync 0000000019 l 425 | 2011_09_26/2011_09_26_drive_0096_sync 0000000038 l 426 | 2011_09_26/2011_09_26_drive_0096_sync 0000000057 l 427 | 2011_09_26/2011_09_26_drive_0096_sync 0000000076 l 428 | 2011_09_26/2011_09_26_drive_0096_sync 0000000095 l 429 | 2011_09_26/2011_09_26_drive_0096_sync 0000000114 l 430 | 2011_09_26/2011_09_26_drive_0096_sync 0000000133 l 431 | 2011_09_26/2011_09_26_drive_0096_sync 0000000152 l 432 | 2011_09_26/2011_09_26_drive_0096_sync 0000000171 l 433 | 2011_09_26/2011_09_26_drive_0096_sync 0000000190 l 434 | 2011_09_26/2011_09_26_drive_0096_sync 0000000209 l 435 | 2011_09_26/2011_09_26_drive_0096_sync 0000000228 l 436 | 2011_09_26/2011_09_26_drive_0096_sync 0000000247 l 437 | 2011_09_26/2011_09_26_drive_0096_sync 0000000266 l 438 | 2011_09_26/2011_09_26_drive_0096_sync 0000000285 l 439 | 2011_09_26/2011_09_26_drive_0096_sync 0000000304 l 440 | 2011_09_26/2011_09_26_drive_0096_sync 0000000323 l 441 | 2011_09_26/2011_09_26_drive_0096_sync 0000000342 l 442 | 2011_09_26/2011_09_26_drive_0096_sync 0000000361 l 443 | 2011_09_26/2011_09_26_drive_0096_sync 0000000380 l 444 | 2011_09_26/2011_09_26_drive_0096_sync 0000000399 l 445 | 2011_09_26/2011_09_26_drive_0096_sync 0000000418 l 446 | 2011_09_26/2011_09_26_drive_0096_sync 0000000437 l 447 | 2011_09_26/2011_09_26_drive_0096_sync 0000000456 l 448 | 2011_09_26/2011_09_26_drive_0101_sync 0000000692 l 449 | 2011_09_26/2011_09_26_drive_0101_sync 0000000930 l 450 | 2011_09_26/2011_09_26_drive_0101_sync 0000000760 l 451 | 2011_09_26/2011_09_26_drive_0101_sync 0000000896 l 452 | 2011_09_26/2011_09_26_drive_0101_sync 0000000284 l 453 | 2011_09_26/2011_09_26_drive_0101_sync 0000000148 l 454 | 2011_09_26/2011_09_26_drive_0101_sync 0000000522 l 455 | 2011_09_26/2011_09_26_drive_0101_sync 0000000794 l 456 | 2011_09_26/2011_09_26_drive_0101_sync 0000000624 l 457 | 2011_09_26/2011_09_26_drive_0101_sync 0000000726 l 458 | 2011_09_26/2011_09_26_drive_0101_sync 0000000216 l 459 | 2011_09_26/2011_09_26_drive_0101_sync 0000000318 l 460 | 2011_09_26/2011_09_26_drive_0101_sync 0000000488 l 461 | 2011_09_26/2011_09_26_drive_0101_sync 0000000590 l 462 | 2011_09_26/2011_09_26_drive_0101_sync 0000000454 l 463 | 2011_09_26/2011_09_26_drive_0101_sync 0000000862 l 464 | 2011_09_26/2011_09_26_drive_0101_sync 0000000386 l 465 | 2011_09_26/2011_09_26_drive_0101_sync 0000000352 l 466 | 2011_09_26/2011_09_26_drive_0101_sync 0000000420 l 467 | 2011_09_26/2011_09_26_drive_0101_sync 0000000658 l 468 | 2011_09_26/2011_09_26_drive_0101_sync 0000000828 l 469 | 2011_09_26/2011_09_26_drive_0101_sync 0000000556 l 470 | 2011_09_26/2011_09_26_drive_0101_sync 0000000114 l 471 | 2011_09_26/2011_09_26_drive_0101_sync 0000000182 l 472 | 2011_09_26/2011_09_26_drive_0101_sync 0000000080 l 473 | 2011_09_26/2011_09_26_drive_0106_sync 0000000015 l 474 | 2011_09_26/2011_09_26_drive_0106_sync 0000000035 l 475 | 2011_09_26/2011_09_26_drive_0106_sync 0000000043 l 476 | 2011_09_26/2011_09_26_drive_0106_sync 0000000051 l 477 | 2011_09_26/2011_09_26_drive_0106_sync 0000000059 l 478 | 2011_09_26/2011_09_26_drive_0106_sync 0000000067 l 479 | 2011_09_26/2011_09_26_drive_0106_sync 0000000075 l 480 | 2011_09_26/2011_09_26_drive_0106_sync 0000000083 l 481 | 2011_09_26/2011_09_26_drive_0106_sync 0000000091 l 482 | 2011_09_26/2011_09_26_drive_0106_sync 0000000099 l 483 | 2011_09_26/2011_09_26_drive_0106_sync 0000000107 l 484 | 2011_09_26/2011_09_26_drive_0106_sync 0000000115 l 485 | 
2011_09_26/2011_09_26_drive_0106_sync 0000000123 l 486 | 2011_09_26/2011_09_26_drive_0106_sync 0000000131 l 487 | 2011_09_26/2011_09_26_drive_0106_sync 0000000139 l 488 | 2011_09_26/2011_09_26_drive_0106_sync 0000000147 l 489 | 2011_09_26/2011_09_26_drive_0106_sync 0000000155 l 490 | 2011_09_26/2011_09_26_drive_0106_sync 0000000163 l 491 | 2011_09_26/2011_09_26_drive_0106_sync 0000000171 l 492 | 2011_09_26/2011_09_26_drive_0106_sync 0000000179 l 493 | 2011_09_26/2011_09_26_drive_0106_sync 0000000187 l 494 | 2011_09_26/2011_09_26_drive_0106_sync 0000000195 l 495 | 2011_09_26/2011_09_26_drive_0106_sync 0000000203 l 496 | 2011_09_26/2011_09_26_drive_0106_sync 0000000211 l 497 | 2011_09_26/2011_09_26_drive_0106_sync 0000000219 l 498 | 2011_09_26/2011_09_26_drive_0117_sync 0000000312 l 499 | 2011_09_26/2011_09_26_drive_0117_sync 0000000494 l 500 | 2011_09_26/2011_09_26_drive_0117_sync 0000000104 l 501 | 2011_09_26/2011_09_26_drive_0117_sync 0000000130 l 502 | 2011_09_26/2011_09_26_drive_0117_sync 0000000156 l 503 | 2011_09_26/2011_09_26_drive_0117_sync 0000000182 l 504 | 2011_09_26/2011_09_26_drive_0117_sync 0000000598 l 505 | 2011_09_26/2011_09_26_drive_0117_sync 0000000416 l 506 | 2011_09_26/2011_09_26_drive_0117_sync 0000000364 l 507 | 2011_09_26/2011_09_26_drive_0117_sync 0000000026 l 508 | 2011_09_26/2011_09_26_drive_0117_sync 0000000078 l 509 | 2011_09_26/2011_09_26_drive_0117_sync 0000000572 l 510 | 2011_09_26/2011_09_26_drive_0117_sync 0000000468 l 511 | 2011_09_26/2011_09_26_drive_0117_sync 0000000260 l 512 | 2011_09_26/2011_09_26_drive_0117_sync 0000000624 l 513 | 2011_09_26/2011_09_26_drive_0117_sync 0000000234 l 514 | 2011_09_26/2011_09_26_drive_0117_sync 0000000442 l 515 | 2011_09_26/2011_09_26_drive_0117_sync 0000000390 l 516 | 2011_09_26/2011_09_26_drive_0117_sync 0000000546 l 517 | 2011_09_26/2011_09_26_drive_0117_sync 0000000286 l 518 | 2011_09_26/2011_09_26_drive_0117_sync 0000000000 l 519 | 2011_09_26/2011_09_26_drive_0117_sync 0000000338 l 520 | 2011_09_26/2011_09_26_drive_0117_sync 0000000208 l 521 | 2011_09_26/2011_09_26_drive_0117_sync 0000000650 l 522 | 2011_09_26/2011_09_26_drive_0117_sync 0000000052 l 523 | 2011_09_28/2011_09_28_drive_0002_sync 0000000024 l 524 | 2011_09_28/2011_09_28_drive_0002_sync 0000000021 l 525 | 2011_09_28/2011_09_28_drive_0002_sync 0000000036 l 526 | 2011_09_28/2011_09_28_drive_0002_sync 0000000000 l 527 | 2011_09_28/2011_09_28_drive_0002_sync 0000000051 l 528 | 2011_09_28/2011_09_28_drive_0002_sync 0000000018 l 529 | 2011_09_28/2011_09_28_drive_0002_sync 0000000033 l 530 | 2011_09_28/2011_09_28_drive_0002_sync 0000000090 l 531 | 2011_09_28/2011_09_28_drive_0002_sync 0000000045 l 532 | 2011_09_28/2011_09_28_drive_0002_sync 0000000054 l 533 | 2011_09_28/2011_09_28_drive_0002_sync 0000000012 l 534 | 2011_09_28/2011_09_28_drive_0002_sync 0000000039 l 535 | 2011_09_28/2011_09_28_drive_0002_sync 0000000009 l 536 | 2011_09_28/2011_09_28_drive_0002_sync 0000000003 l 537 | 2011_09_28/2011_09_28_drive_0002_sync 0000000030 l 538 | 2011_09_28/2011_09_28_drive_0002_sync 0000000078 l 539 | 2011_09_28/2011_09_28_drive_0002_sync 0000000060 l 540 | 2011_09_28/2011_09_28_drive_0002_sync 0000000048 l 541 | 2011_09_28/2011_09_28_drive_0002_sync 0000000084 l 542 | 2011_09_28/2011_09_28_drive_0002_sync 0000000081 l 543 | 2011_09_28/2011_09_28_drive_0002_sync 0000000006 l 544 | 2011_09_28/2011_09_28_drive_0002_sync 0000000057 l 545 | 2011_09_28/2011_09_28_drive_0002_sync 0000000072 l 546 | 2011_09_28/2011_09_28_drive_0002_sync 0000000087 l 547 | 
2011_09_28/2011_09_28_drive_0002_sync 0000000063 l 548 | 2011_09_29/2011_09_29_drive_0071_sync 0000000252 l 549 | 2011_09_29/2011_09_29_drive_0071_sync 0000000540 l 550 | 2011_09_29/2011_09_29_drive_0071_sync 0000001054 l 551 | 2011_09_29/2011_09_29_drive_0071_sync 0000000036 l 552 | 2011_09_29/2011_09_29_drive_0071_sync 0000000360 l 553 | 2011_09_29/2011_09_29_drive_0071_sync 0000000807 l 554 | 2011_09_29/2011_09_29_drive_0071_sync 0000000879 l 555 | 2011_09_29/2011_09_29_drive_0071_sync 0000000288 l 556 | 2011_09_29/2011_09_29_drive_0071_sync 0000000771 l 557 | 2011_09_29/2011_09_29_drive_0071_sync 0000000000 l 558 | 2011_09_29/2011_09_29_drive_0071_sync 0000000216 l 559 | 2011_09_29/2011_09_29_drive_0071_sync 0000000951 l 560 | 2011_09_29/2011_09_29_drive_0071_sync 0000000324 l 561 | 2011_09_29/2011_09_29_drive_0071_sync 0000000432 l 562 | 2011_09_29/2011_09_29_drive_0071_sync 0000000504 l 563 | 2011_09_29/2011_09_29_drive_0071_sync 0000000576 l 564 | 2011_09_29/2011_09_29_drive_0071_sync 0000000108 l 565 | 2011_09_29/2011_09_29_drive_0071_sync 0000000180 l 566 | 2011_09_29/2011_09_29_drive_0071_sync 0000000072 l 567 | 2011_09_29/2011_09_29_drive_0071_sync 0000000612 l 568 | 2011_09_29/2011_09_29_drive_0071_sync 0000000915 l 569 | 2011_09_29/2011_09_29_drive_0071_sync 0000000735 l 570 | 2011_09_29/2011_09_29_drive_0071_sync 0000000144 l 571 | 2011_09_29/2011_09_29_drive_0071_sync 0000000396 l 572 | 2011_09_29/2011_09_29_drive_0071_sync 0000000468 l 573 | 2011_09_30/2011_09_30_drive_0016_sync 0000000132 l 574 | 2011_09_30/2011_09_30_drive_0016_sync 0000000011 l 575 | 2011_09_30/2011_09_30_drive_0016_sync 0000000154 l 576 | 2011_09_30/2011_09_30_drive_0016_sync 0000000022 l 577 | 2011_09_30/2011_09_30_drive_0016_sync 0000000242 l 578 | 2011_09_30/2011_09_30_drive_0016_sync 0000000198 l 579 | 2011_09_30/2011_09_30_drive_0016_sync 0000000176 l 580 | 2011_09_30/2011_09_30_drive_0016_sync 0000000231 l 581 | 2011_09_30/2011_09_30_drive_0016_sync 0000000275 l 582 | 2011_09_30/2011_09_30_drive_0016_sync 0000000220 l 583 | 2011_09_30/2011_09_30_drive_0016_sync 0000000088 l 584 | 2011_09_30/2011_09_30_drive_0016_sync 0000000143 l 585 | 2011_09_30/2011_09_30_drive_0016_sync 0000000055 l 586 | 2011_09_30/2011_09_30_drive_0016_sync 0000000033 l 587 | 2011_09_30/2011_09_30_drive_0016_sync 0000000187 l 588 | 2011_09_30/2011_09_30_drive_0016_sync 0000000110 l 589 | 2011_09_30/2011_09_30_drive_0016_sync 0000000044 l 590 | 2011_09_30/2011_09_30_drive_0016_sync 0000000077 l 591 | 2011_09_30/2011_09_30_drive_0016_sync 0000000066 l 592 | 2011_09_30/2011_09_30_drive_0016_sync 0000000000 l 593 | 2011_09_30/2011_09_30_drive_0016_sync 0000000165 l 594 | 2011_09_30/2011_09_30_drive_0016_sync 0000000264 l 595 | 2011_09_30/2011_09_30_drive_0016_sync 0000000253 l 596 | 2011_09_30/2011_09_30_drive_0016_sync 0000000209 l 597 | 2011_09_30/2011_09_30_drive_0016_sync 0000000121 l 598 | 2011_09_30/2011_09_30_drive_0018_sync 0000000107 l 599 | 2011_09_30/2011_09_30_drive_0018_sync 0000002247 l 600 | 2011_09_30/2011_09_30_drive_0018_sync 0000001391 l 601 | 2011_09_30/2011_09_30_drive_0018_sync 0000000535 l 602 | 2011_09_30/2011_09_30_drive_0018_sync 0000001819 l 603 | 2011_09_30/2011_09_30_drive_0018_sync 0000001177 l 604 | 2011_09_30/2011_09_30_drive_0018_sync 0000000428 l 605 | 2011_09_30/2011_09_30_drive_0018_sync 0000001926 l 606 | 2011_09_30/2011_09_30_drive_0018_sync 0000000749 l 607 | 2011_09_30/2011_09_30_drive_0018_sync 0000001284 l 608 | 2011_09_30/2011_09_30_drive_0018_sync 0000002140 l 609 | 
2011_09_30/2011_09_30_drive_0018_sync 0000001605 l 610 | 2011_09_30/2011_09_30_drive_0018_sync 0000001498 l 611 | 2011_09_30/2011_09_30_drive_0018_sync 0000000642 l 612 | 2011_09_30/2011_09_30_drive_0018_sync 0000002740 l 613 | 2011_09_30/2011_09_30_drive_0018_sync 0000002419 l 614 | 2011_09_30/2011_09_30_drive_0018_sync 0000000856 l 615 | 2011_09_30/2011_09_30_drive_0018_sync 0000002526 l 616 | 2011_09_30/2011_09_30_drive_0018_sync 0000001712 l 617 | 2011_09_30/2011_09_30_drive_0018_sync 0000001070 l 618 | 2011_09_30/2011_09_30_drive_0018_sync 0000000000 l 619 | 2011_09_30/2011_09_30_drive_0018_sync 0000002033 l 620 | 2011_09_30/2011_09_30_drive_0018_sync 0000000214 l 621 | 2011_09_30/2011_09_30_drive_0018_sync 0000000963 l 622 | 2011_09_30/2011_09_30_drive_0018_sync 0000002633 l 623 | 2011_09_30/2011_09_30_drive_0027_sync 0000000533 l 624 | 2011_09_30/2011_09_30_drive_0027_sync 0000001040 l 625 | 2011_09_30/2011_09_30_drive_0027_sync 0000000082 l 626 | 2011_09_30/2011_09_30_drive_0027_sync 0000000205 l 627 | 2011_09_30/2011_09_30_drive_0027_sync 0000000835 l 628 | 2011_09_30/2011_09_30_drive_0027_sync 0000000451 l 629 | 2011_09_30/2011_09_30_drive_0027_sync 0000000164 l 630 | 2011_09_30/2011_09_30_drive_0027_sync 0000000794 l 631 | 2011_09_30/2011_09_30_drive_0027_sync 0000000328 l 632 | 2011_09_30/2011_09_30_drive_0027_sync 0000000615 l 633 | 2011_09_30/2011_09_30_drive_0027_sync 0000000917 l 634 | 2011_09_30/2011_09_30_drive_0027_sync 0000000369 l 635 | 2011_09_30/2011_09_30_drive_0027_sync 0000000287 l 636 | 2011_09_30/2011_09_30_drive_0027_sync 0000000123 l 637 | 2011_09_30/2011_09_30_drive_0027_sync 0000000876 l 638 | 2011_09_30/2011_09_30_drive_0027_sync 0000000410 l 639 | 2011_09_30/2011_09_30_drive_0027_sync 0000000492 l 640 | 2011_09_30/2011_09_30_drive_0027_sync 0000000958 l 641 | 2011_09_30/2011_09_30_drive_0027_sync 0000000656 l 642 | 2011_09_30/2011_09_30_drive_0027_sync 0000000000 l 643 | 2011_09_30/2011_09_30_drive_0027_sync 0000000753 l 644 | 2011_09_30/2011_09_30_drive_0027_sync 0000000574 l 645 | 2011_09_30/2011_09_30_drive_0027_sync 0000001081 l 646 | 2011_09_30/2011_09_30_drive_0027_sync 0000000041 l 647 | 2011_09_30/2011_09_30_drive_0027_sync 0000000246 l 648 | 2011_10_03/2011_10_03_drive_0027_sync 0000002906 l 649 | 2011_10_03/2011_10_03_drive_0027_sync 0000002544 l 650 | 2011_10_03/2011_10_03_drive_0027_sync 0000000362 l 651 | 2011_10_03/2011_10_03_drive_0027_sync 0000004535 l 652 | 2011_10_03/2011_10_03_drive_0027_sync 0000000734 l 653 | 2011_10_03/2011_10_03_drive_0027_sync 0000001096 l 654 | 2011_10_03/2011_10_03_drive_0027_sync 0000004173 l 655 | 2011_10_03/2011_10_03_drive_0027_sync 0000000543 l 656 | 2011_10_03/2011_10_03_drive_0027_sync 0000001277 l 657 | 2011_10_03/2011_10_03_drive_0027_sync 0000004354 l 658 | 2011_10_03/2011_10_03_drive_0027_sync 0000001458 l 659 | 2011_10_03/2011_10_03_drive_0027_sync 0000001820 l 660 | 2011_10_03/2011_10_03_drive_0027_sync 0000003449 l 661 | 2011_10_03/2011_10_03_drive_0027_sync 0000003268 l 662 | 2011_10_03/2011_10_03_drive_0027_sync 0000000915 l 663 | 2011_10_03/2011_10_03_drive_0027_sync 0000002363 l 664 | 2011_10_03/2011_10_03_drive_0027_sync 0000002725 l 665 | 2011_10_03/2011_10_03_drive_0027_sync 0000000181 l 666 | 2011_10_03/2011_10_03_drive_0027_sync 0000001639 l 667 | 2011_10_03/2011_10_03_drive_0027_sync 0000003992 l 668 | 2011_10_03/2011_10_03_drive_0027_sync 0000003087 l 669 | 2011_10_03/2011_10_03_drive_0027_sync 0000002001 l 670 | 2011_10_03/2011_10_03_drive_0027_sync 0000003811 l 671 | 
2011_10_03/2011_10_03_drive_0027_sync 0000003630 l 672 | 2011_10_03/2011_10_03_drive_0027_sync 0000000000 l 673 | 2011_10_03/2011_10_03_drive_0047_sync 0000000096 l 674 | 2011_10_03/2011_10_03_drive_0047_sync 0000000800 l 675 | 2011_10_03/2011_10_03_drive_0047_sync 0000000320 l 676 | 2011_10_03/2011_10_03_drive_0047_sync 0000000576 l 677 | 2011_10_03/2011_10_03_drive_0047_sync 0000000000 l 678 | 2011_10_03/2011_10_03_drive_0047_sync 0000000480 l 679 | 2011_10_03/2011_10_03_drive_0047_sync 0000000640 l 680 | 2011_10_03/2011_10_03_drive_0047_sync 0000000032 l 681 | 2011_10_03/2011_10_03_drive_0047_sync 0000000384 l 682 | 2011_10_03/2011_10_03_drive_0047_sync 0000000160 l 683 | 2011_10_03/2011_10_03_drive_0047_sync 0000000704 l 684 | 2011_10_03/2011_10_03_drive_0047_sync 0000000736 l 685 | 2011_10_03/2011_10_03_drive_0047_sync 0000000672 l 686 | 2011_10_03/2011_10_03_drive_0047_sync 0000000064 l 687 | 2011_10_03/2011_10_03_drive_0047_sync 0000000288 l 688 | 2011_10_03/2011_10_03_drive_0047_sync 0000000352 l 689 | 2011_10_03/2011_10_03_drive_0047_sync 0000000512 l 690 | 2011_10_03/2011_10_03_drive_0047_sync 0000000544 l 691 | 2011_10_03/2011_10_03_drive_0047_sync 0000000608 l 692 | 2011_10_03/2011_10_03_drive_0047_sync 0000000128 l 693 | 2011_10_03/2011_10_03_drive_0047_sync 0000000224 l 694 | 2011_10_03/2011_10_03_drive_0047_sync 0000000416 l 695 | 2011_10_03/2011_10_03_drive_0047_sync 0000000192 l 696 | 2011_10_03/2011_10_03_drive_0047_sync 0000000448 l 697 | 2011_10_03/2011_10_03_drive_0047_sync 0000000768 l 698 | -------------------------------------------------------------------------------- /data/eigen/test_scenes.txt: -------------------------------------------------------------------------------- 1 | 2011_09_26_drive_0117 2 | 2011_09_28_drive_0002 3 | 2011_09_26_drive_0052 4 | 2011_09_30_drive_0016 5 | 2011_09_26_drive_0059 6 | 2011_09_26_drive_0027 7 | 2011_09_26_drive_0020 8 | 2011_09_26_drive_0009 9 | 2011_09_26_drive_0013 10 | 2011_09_26_drive_0101 11 | 2011_09_26_drive_0046 12 | 2011_09_26_drive_0029 13 | 2011_09_26_drive_0064 14 | 2011_09_26_drive_0048 15 | 2011_10_03_drive_0027 16 | 2011_09_26_drive_0002 17 | 2011_09_26_drive_0036 18 | 2011_09_29_drive_0071 19 | 2011_10_03_drive_0047 20 | 2011_09_30_drive_0027 21 | 2011_09_26_drive_0086 22 | 2011_09_26_drive_0084 23 | 2011_09_26_drive_0096 24 | 2011_09_30_drive_0018 25 | 2011_09_26_drive_0106 26 | 2011_09_26_drive_0056 27 | 2011_09_26_drive_0023 28 | 2011_09_26_drive_0093 -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | cffi==1.12.3 2 | cycler==0.10.0 3 | Cython==0.29.13 4 | decorator==4.4.0 5 | easydict==1.9 6 | h5py==2.10.0 7 | imageio==2.5.0 8 | joblib==0.14.0 9 | kiwisolver==1.1.0 10 | matplotlib==3.1.1 11 | networkx==2.3 12 | numpy==1.17.2 13 | opencv-python==4.1.1.26 14 | Pillow==6.1.0 15 | protobuf==3.11.2 16 | pycparser==2.19 17 | pyparsing==2.4.2 18 | pypng==0.0.20 19 | python-dateutil==2.8.0 20 | PyWavelets==1.0.3 21 | PyYAML==5.1.2 22 | scikit-image==0.15.0 23 | scikit-learn==0.21.3 24 | scipy==1.3.1 25 | six==1.12.0 26 | sklearn 27 | tensorboardX==2.0 28 | torch==1.2.0 29 | torchvision==0.4.0 30 | tqdm==4.36.1 31 | -------------------------------------------------------------------------------- /test.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 3 | from 
core.dataset import KITTI_2012, KITTI_2015
4 | from core.evaluation import eval_flow_avg, load_gt_flow_kitti
5 | from core.evaluation import eval_depth
6 | from core.visualize import Visualizer_debug
7 | from core.networks import Model_flow, get_model  # get_model resolves a mode name to its model class (see train.py)
8 | from core.evaluation import load_gt_mask  # load_gt_flow_kitti is already imported above
9 | import torch
10 | from tqdm import tqdm
11 | import pdb
12 | import cv2
13 | import numpy as np
14 | import yaml
15 | 
16 | def test_kitti_2012(cfg, model, gt_flows, noc_masks):
17 |     dataset = KITTI_2012(cfg.gt_2012_dir)
18 |     flow_list = []
19 |     for idx, inputs in enumerate(tqdm(dataset)):
20 |         # img, K, K_inv = inputs
21 |         img = inputs
22 |         img = img[None,:,:,:]
23 |         # K = K[None,:,:]
24 |         # K_inv = K_inv[None,:,:]
25 |         img_h = int(img.shape[2] / 2)  # each sample stacks the two frames vertically; split them apart
26 |         img1, img2 = img[:,:,:img_h,:], img[:,:,img_h:,:]
27 |         img1, img2 = img1.cuda(), img2.cuda()
28 |         if cfg.mode == 'flow' or cfg.mode == 'flowposenet':
29 |             flow = model.inference_flow(img1, img2)
30 | 
31 |         #pdb.set_trace()
32 |         flow = flow[0].detach().cpu().numpy()
33 |         flow = flow.transpose(1,2,0)
34 |         flow_list.append(flow)
35 | 
36 |     eval_flow_res = eval_flow_avg(gt_flows, noc_masks, flow_list, cfg, write_img=False)
37 | 
38 |     print('CONFIG: {0}, mode: {1}'.format(cfg.config_file, cfg.mode))
39 |     print('[EVAL] [KITTI 2012]')
40 |     print(eval_flow_res)
41 |     return eval_flow_res
42 | 
43 | def test_kitti_2015(cfg, model, gt_flows, noc_masks, gt_masks, depth_save_dir=None):
44 |     dataset = KITTI_2015(cfg.gt_2015_dir)
45 |     visualizer = Visualizer_debug(depth_save_dir)
46 |     pred_flow_list = []
47 |     pred_disp_list = []
48 |     img_list = []
49 |     for idx, inputs in enumerate(tqdm(dataset)):
50 |         # img, K, K_inv = inputs
51 |         img = inputs
52 |         img = img[None,:,:,:]
53 | 
54 |         img_h = int(img.shape[2] / 2)
55 |         img1, img2 = img[:,:,:img_h,:], img[:,:,img_h:,:]
56 |         img_list.append(img1)
57 |         img1, img2 = img1.cuda(), img2.cuda()
58 |         if cfg.mode == 'flow' or cfg.mode == 'flowposenet':
59 |             flow = model.inference_flow(img1, img2)
60 |         # else:
61 |         #     flow, disp1, disp2, Rt, _, _ = model.inference(img1, img2, K, K_inv)
62 |         #     disp = disp1[0].detach().cpu().numpy()
63 |         #     disp = disp.transpose(1,2,0)
64 |         #     pred_disp_list.append(disp)
65 | 
66 |         flow = flow[0].detach().cpu().numpy()
67 |         flow = flow.transpose(1,2,0)
68 |         pred_flow_list.append(flow)
69 | 
70 |     #pdb.set_trace()
71 |     eval_flow_res = eval_flow_avg(gt_flows, noc_masks, pred_flow_list, cfg, moving_masks=gt_masks, write_img=False)
72 |     print('CONFIG: {0}, mode: {1}'.format(cfg.config_file, cfg.mode))
73 |     print('[EVAL] [KITTI 2015]')
74 |     print(eval_flow_res)
75 |     ## depth evaluation
76 |     return eval_flow_res
77 | 
78 | def disp2depth(disp, min_depth=0.001, max_depth=80.0):
79 |     min_disp = 1 / max_depth
80 |     max_disp = 1 / min_depth
81 |     scaled_disp = min_disp + (max_disp - min_disp) * disp  # map disp in [0,1] to [1/max_depth, 1/min_depth]
82 |     depth = 1 / scaled_disp  # so depth lands in [min_depth, max_depth]
83 |     return scaled_disp, depth
84 | 
85 | def resize_depths(gt_depth_list, pred_disp_list):
86 |     gt_disp_list = []
87 |     pred_depth_list = []
88 |     pred_disp_resized = []
89 |     for i in range(len(pred_disp_list)):
90 |         h, w = gt_depth_list[i].shape
91 |         pred_disp = cv2.resize(pred_disp_list[i], (w,h))
92 |         pred_depth = 1.0 / (pred_disp + 1e-4)  # small epsilon guards against division by zero
93 |         pred_depth_list.append(pred_depth)
94 |         pred_disp_resized.append(pred_disp)
95 | 
96 |     return pred_depth_list, pred_disp_resized
97 | 
98 | 
99 | def test_eigen_depth(cfg, model):
100 |     print('Evaluate depth using eigen split. Using model in ' + cfg.model_dir)
101 |     filenames = open('./data/eigen/test_files.txt').readlines()
102 |     pred_disp_list = []
103 |     for i in range(len(filenames)):
104 |         path1, idx, _ = filenames[i].strip().split(' ')
105 |         img = cv2.imread(os.path.join(os.path.join(cfg.raw_base_dir, path1), 'image_02/data/'+str(idx)+'.png'))
106 |         #img_resize = cv2.resize(img, (832,256))
107 |         img_resize = cv2.resize(img, (cfg.img_hw[1], cfg.img_hw[0]))
108 |         img_input = torch.from_numpy(img_resize / 255.0).float().cuda().unsqueeze(0).permute(0,3,1,2)
109 |         disp = model.infer_depth(img_input)
110 |         disp = disp[0].detach().cpu().numpy()
111 |         disp = disp.transpose(1,2,0)
112 |         pred_disp_list.append(disp)
113 |         #print(i)
114 | 
115 |     gt_depths = np.load('./data/eigen/gt_depths.npz', allow_pickle=True)['data']  # not shipped in data/eigen; generate it beforehand
116 |     pred_depths, pred_disp_resized = resize_depths(gt_depths, pred_disp_list)
117 |     eval_depth_res = eval_depth(gt_depths, pred_depths)
118 |     abs_rel, sq_rel, rms, log_rms, a1, a2, a3 = eval_depth_res
119 |     sys.stderr.write(
120 |         "{:>10}, {:>10}, {:>10}, {:>10}, {:>10}, {:>10}, {:>10} \n".
121 |         format('abs_rel', 'sq_rel', 'rms', 'log_rms',
122 |                'a1', 'a2', 'a3'))
123 |     sys.stderr.write(
124 |         "{:10.4f}, {:10.4f}, {:10.3f}, {:10.3f}, {:10.3f}, {:10.3f}, {:10.3f} \n".
125 |         format(abs_rel, sq_rel, rms, log_rms, a1, a2, a3))
126 | 
127 |     return eval_depth_res
128 | 
129 | 
130 | def resize_disp(pred_disp_list, gt_depths):
131 |     pred_depths = []
132 |     h, w = gt_depths[0].shape[0], gt_depths[0].shape[1]
133 |     for i in range(len(pred_disp_list)):
134 |         disp = pred_disp_list[i]
135 |         disp_resized = cv2.resize(disp, (w,h))  # renamed from resize_disp, which shadowed this function's name
136 |         depth = 1.0 / (disp_resized + 1e-4)  # epsilon matches resize_depths and avoids division by zero
137 |         pred_depths.append(depth)
138 | 
139 |     return pred_depths
140 | 
141 | import h5py
142 | import scipy.io as sio
143 | def load_nyu_test_data(data_dir):
144 |     data = h5py.File(os.path.join(data_dir, 'nyu_depth_v2_labeled.mat'), 'r')
145 |     splits = sio.loadmat(os.path.join(data_dir, 'splits.mat'))
146 |     test = np.array(splits['testNdxs']).squeeze(1)
147 |     images = np.transpose(data['images'], [0,1,3,2])
148 |     depths = np.transpose(data['depths'], [0,2,1])
149 |     images = images[test-1]  # testNdxs is 1-based
150 |     depths = depths[test-1]
151 |     return images, depths
152 | 
153 | def test_nyu(cfg, model, test_images, test_gt_depths):
154 |     leng = test_images.shape[0]
155 |     print('Test nyu depth on '+str(leng)+' images. Using depth model in '+cfg.model_dir)
156 |     pred_disp_list = []
157 |     crop_imgs = []
158 |     crop_gt_depths = []
159 |     for i in range(leng):
160 |         img = test_images[i]
161 |         img_crop = img[:,45:472,41:602]  # crop the NYUv2 image border before evaluation
162 |         crop_imgs.append(img_crop)
163 |         gt_depth_crop = test_gt_depths[i][45:472,41:602]
164 |         crop_gt_depths.append(gt_depth_crop)
165 |         #img = np.transpose(cv2.resize(np.transpose(img_crop, [1,2,0]), (576,448)), [2,0,1])
166 |         img = np.transpose(cv2.resize(np.transpose(img_crop, [1,2,0]), (cfg.img_hw[1],cfg.img_hw[0])), [2,0,1])
167 |         img_t = torch.from_numpy(img).float().cuda().unsqueeze(0) / 255.0
168 |         disp = model.infer_depth(img_t)
169 |         disp = np.transpose(disp[0].cpu().detach().numpy(), [1,2,0])
170 |         pred_disp_list.append(disp)
171 | 
172 |     pred_depths = resize_disp(pred_disp_list, crop_gt_depths)
173 |     eval_depth_res = eval_depth(crop_gt_depths, pred_depths, nyu=True)
174 |     abs_rel, sq_rel, rms, log_rms, a1, a2, a3 = eval_depth_res
175 |     sys.stderr.write(
176 |         "{:>10}, {:>10}, {:>10}, {:>10}, {:>10}, {:>10}, {:>10} \n".
177 |         format('abs_rel', 'sq_rel', 'rms', 'log10',
178 |                'a1', 'a2', 'a3'))
179 |     sys.stderr.write(
180 |         "{:10.4f}, {:10.4f}, {:10.3f}, {:10.3f}, {:10.3f}, {:10.3f}, {:10.3f} \n".
181 |         format(abs_rel, sq_rel, rms, log_rms, a1, a2, a3))
182 | 
183 |     return eval_depth_res
184 | 
185 | def test_single_image(img_path, model, training_hw, save_dir='./'):
186 |     img = cv2.imread(img_path)
187 |     h, w = img.shape[0:2]
188 |     img_resized = cv2.resize(img, (training_hw[1], training_hw[0]))
189 |     img_t = torch.from_numpy(np.transpose(img_resized, [2,0,1])).float().cuda().unsqueeze(0) / 255.0
190 |     disp = model.infer_depth(img_t)
191 |     disp = np.transpose(disp[0].cpu().detach().numpy(), [1,2,0])
192 |     disp_resized = cv2.resize(disp, (w,h))
193 |     depth = 1.0 / (1e-6 + disp_resized)
194 | 
195 |     visualizer = Visualizer_debug(dump_dir=save_dir)
196 |     visualizer.save_disp_color_img(disp_resized, name='demo')
197 |     print('Depth prediction saved in ' + save_dir)
198 | 
199 | 
200 | if __name__ == '__main__':
201 |     import argparse
202 |     arg_parser = argparse.ArgumentParser(
203 |         description="TrianFlow testing."
204 |     )
205 |     arg_parser.add_argument('-c', '--config_file', default=None, help='config file.')
206 |     arg_parser.add_argument('-g', '--gpu', type=str, default='0', help='gpu id.')
207 |     arg_parser.add_argument('--mode', type=str, default='depth', help='mode for testing.')
208 |     arg_parser.add_argument('--task', type=str, default='kitti_depth', help='To test on which task, kitti_depth or kitti_flow or nyuv2 or demo')
209 |     arg_parser.add_argument('--image_path', type=str, default=None, help='Set this only when task==demo. Depth demo for single image.')
210 |     arg_parser.add_argument('--pretrained_model', type=str, default=None, help='directory for loading pretrained models')
211 |     arg_parser.add_argument('--result_dir', type=str, default=None, help='directory for saving predictions')
212 | 
213 |     args = arg_parser.parse_args()
214 |     if not os.path.exists(args.config_file):
215 |         raise ValueError('config file not found.')
216 |     with open(args.config_file, 'r') as f:
217 |         cfg = yaml.safe_load(f)
218 |     cfg['img_hw'] = (cfg['img_hw'][0], cfg['img_hw'][1])
219 |     #cfg['log_dump_dir'] = os.path.join(args.model_dir, 'log.pkl')
220 |     cfg['model_dir'] = args.result_dir
221 | 
222 |     # copy attr into cfg
223 |     for attr in dir(args):
224 |         if attr[:2] != '__':
225 |             cfg[attr] = getattr(args, attr)
226 | 
227 |     class pObject(object):
228 |         def __init__(self):
229 |             pass
230 |     cfg_new = pObject()
231 |     for attr in list(cfg.keys()):
232 |         setattr(cfg_new, attr, cfg[attr])
233 | 
234 |     if args.mode == 'flow':
235 |         model = Model_flow(cfg_new)
236 |     elif args.mode == 'depth' or args.mode == 'flow_3stage':
237 |         model = get_model(args.mode)(cfg_new)  # was Model_depth_pose, which is never imported in this file; resolve via the registry as train.py does
238 |     elif args.mode == 'flowposenet':
239 |         model = get_model(args.mode)(cfg_new)  # was Model_flowposenet, likewise unimported here
240 | 
241 |     if args.task == 'demo':
242 |         model = get_model('depth')(cfg_new)  # the single-image demo needs a depth-capable model; assumes a 'depth' entry in the registry
243 | 
244 |     model.cuda()
245 |     weights = torch.load(args.pretrained_model)
246 |     model.load_state_dict(weights['model_state_dict'])
247 |     model.eval()
248 |     print('Model Loaded.')
249 | 
250 |     if args.task == 'kitti_depth':
251 |         depth_res = test_eigen_depth(cfg_new, model)
252 |     elif args.task == 'kitti_flow':
253 |         gt_flows_2015, noc_masks_2015 = load_gt_flow_kitti(cfg_new.gt_2015_dir, 'kitti_2015')
254 |         gt_masks_2015 = load_gt_mask(cfg_new.gt_2015_dir)
255 |         flow_res = test_kitti_2015(cfg_new, model, gt_flows_2015, noc_masks_2015, gt_masks_2015)
256 |     elif args.task == 'nyuv2':
257 |         test_images, test_gt_depths = load_nyu_test_data(cfg_new.nyu_test_dir)
258 |         depth_res = test_nyu(cfg_new, model, test_images, test_gt_depths)
259 |     elif args.task == 'demo':
260 |         test_single_image(args.image_path, model, training_hw=cfg['img_hw'], save_dir=args.result_dir)
261 | 
262 | 
--------------------------------------------------------------------------------
/train.py:
--------------------------------------------------------------------------------
1 | import os, sys
2 | import yaml
3 | sys.path.append(os.path.dirname(os.path.abspath(__file__)))
4 | from core.dataset import KITTI_RAW, KITTI_Prepared, SINTEL_RAW, SINTEL_Prepared, NYU_Prepare, NYU_v2, KITTI_Odo
5 | from core.networks import get_model
6 | from core.config import generate_loss_weights_dict
7 | from core.visualize import Visualizer
8 | from core.evaluation import load_gt_flow_kitti, load_gt_mask
9 | from test import test_kitti_2012, test_kitti_2015, test_eigen_depth, test_nyu, load_nyu_test_data
10 | 
11 | from collections import OrderedDict
12 | import torch
13 | import torch.utils.data
14 | from tqdm import tqdm
15 | import shutil
16 | import pickle
17 | import pdb
18 | import random
19 | import numpy as np
20 | import torch.backends.cudnn as cudnn
21 | 
22 | 
23 | def save_model(iter_, model_dir, filename, model, optimizer):
24 |     torch.save({"iteration": iter_, "model_state_dict": model.state_dict(), 'optimizer_state_dict': optimizer.state_dict()}, os.path.join(model_dir, filename))
25 | 
26 | def load_model(model_dir, filename, model, optimizer):
27 |     data = torch.load(os.path.join(model_dir, filename))
28 |     iter_ = data['iteration']
29 |     model.load_state_dict(data['model_state_dict'])
30 |     optimizer.load_state_dict(data['optimizer_state_dict'])
31 |     return iter_, model, optimizer
32 | 
33 | def train(cfg):
34 |     # load model and optimizer
35 |     model = get_model(cfg.mode)(cfg)  # the mode string (e.g. 'flow') selects the model class
36 |     if cfg.multi_gpu:
37 |         model = torch.nn.DataParallel(model)
38 |     model = model.cuda()
39 |     optimizer = torch.optim.Adam([{'params': filter(lambda p: p.requires_grad, model.parameters()), 'lr': cfg.lr}])
40 | 
41 |     # Load Pretrained Models
42 |     if cfg.resume:
43 |         if cfg.iter_start > 0:
44 |             cfg.iter_start, model, optimizer = load_model(cfg.model_dir, 'iter_{}.pth'.format(cfg.iter_start), model, optimizer)
45 |         else:
46 |             cfg.iter_start, model, optimizer = load_model(cfg.model_dir, 'last.pth', model, optimizer)
47 |     elif cfg.flow_pretrained_model:
48 |         data = torch.load(cfg.flow_pretrained_model)['model_state_dict']
49 |         renamed_dict = OrderedDict()
50 |         for k, v in data.items():
51 |             if cfg.multi_gpu:
52 |                 name = 'module.model_flow.' + k
53 |             elif cfg.mode == 'flowposenet':
54 |                 name = 'model_flow.' + k
55 |             else:
56 |                 name = 'model_pose.model_flow.' + k
57 |             renamed_dict[name] = v
58 |         missing_keys, unexp_keys = model.load_state_dict(renamed_dict, strict=False)
59 |         print(missing_keys)
60 |         print(unexp_keys)
61 |         print('Load Flow Pretrained Model from ' + cfg.flow_pretrained_model)
62 |     if cfg.depth_pretrained_model and not cfg.resume:
63 |         data = torch.load(cfg.depth_pretrained_model)['model_state_dict']
64 |         if cfg.multi_gpu:
65 |             renamed_dict = OrderedDict()
66 |             for k, v in data.items():
67 |                 name = 'module.' + k
68 |                 renamed_dict[name] = v
69 |             missing_keys, unexp_keys = model.load_state_dict(renamed_dict, strict=False)
70 |         else:
71 |             missing_keys, unexp_keys = model.load_state_dict(data, strict=False)
72 |         print(missing_keys)
73 |         print('##############')
74 |         print(unexp_keys)
75 |         print('Load Depth Pretrained Model from ' + cfg.depth_pretrained_model)
76 | 
77 |     loss_weights_dict = generate_loss_weights_dict(cfg)
78 |     visualizer = Visualizer(loss_weights_dict, cfg.log_dump_dir)
79 | 
80 |     # load dataset; prepare it first if this is the first run on this data
81 |     data_dir = os.path.join(cfg.prepared_base_dir, cfg.prepared_save_dir)
82 |     if not os.path.exists(os.path.join(data_dir, 'train.txt')):
83 |         if cfg.dataset == 'kitti_depth':
84 |             kitti_raw_dataset = KITTI_RAW(cfg.raw_base_dir, cfg.static_frames_txt, cfg.test_scenes_txt)
85 |             kitti_raw_dataset.prepare_data_mp(data_dir, stride=1)
86 |         elif cfg.dataset == 'sintel_raw':
87 |             sintel_raw_dataset = SINTEL_RAW(cfg.raw_base_dir)
88 |             sintel_raw_dataset.prepare_data_mp(data_dir, cfg.stride)
89 |         elif cfg.dataset == 'kitti_odo':
90 |             kitti_raw_dataset = KITTI_Odo(cfg.raw_base_dir)
91 |             kitti_raw_dataset.prepare_data_mp(data_dir, stride=1)
92 |         elif cfg.dataset == 'nyuv2':
93 |             nyu_raw_dataset = NYU_Prepare(cfg.raw_base_dir, cfg.nyu_test_dir)
94 |             nyu_raw_dataset.prepare_data_mp(data_dir, stride=10)
95 |         else:
96 |             raise NotImplementedError
97 | 
98 | 
99 |     if cfg.dataset == 'kitti_depth':
100 |         dataset = KITTI_Prepared(data_dir, num_scales=cfg.num_scales, img_hw=cfg.img_hw, num_iterations=(cfg.num_iterations - cfg.iter_start) * cfg.batch_size)
101 |     elif cfg.dataset == 'sintel_raw':
102 |         dataset = SINTEL_Prepared(data_dir, num_scales=cfg.num_scales, img_hw=cfg.img_hw, num_iterations=(cfg.num_iterations - cfg.iter_start) * cfg.batch_size)
103 |     elif cfg.dataset == 'kitti_odo':
104 |         dataset = KITTI_Prepared(data_dir, num_scales=cfg.num_scales, img_hw=cfg.img_hw, num_iterations=(cfg.num_iterations - cfg.iter_start) * cfg.batch_size)
105 |     elif cfg.dataset == 'nyuv2':
106 |         dataset = NYU_v2(data_dir, num_scales=cfg.num_scales, img_hw=cfg.img_hw, num_iterations=(cfg.num_iterations - cfg.iter_start) * cfg.batch_size)
107 |     else:
108 |         raise NotImplementedError
109 | 
110 |     dataloader = torch.utils.data.DataLoader(dataset, batch_size=cfg.batch_size, shuffle=True, num_workers=cfg.num_workers, drop_last=False)
111 |     if cfg.dataset == 'kitti_depth' or cfg.dataset == 'kitti_odo' or cfg.dataset == 'sintel_raw':
112 |         gt_flows_2012, noc_masks_2012 = load_gt_flow_kitti(cfg.gt_2012_dir, 'kitti_2012')
113 |         gt_flows_2015, noc_masks_2015 = load_gt_flow_kitti(cfg.gt_2015_dir, 'kitti_2015')
114 |         gt_masks_2015 = load_gt_mask(cfg.gt_2015_dir)
115 |     elif cfg.dataset == 'nyuv2':
116 |         test_images, test_gt_depths = load_nyu_test_data(cfg.nyu_test_dir)
117 | 
118 |     # training
119 |     print('starting iteration: {}.'.format(cfg.iter_start))
120 |     for iter_, inputs in enumerate(tqdm(dataloader)):
121 |         if (iter_ + 1) % cfg.test_interval == 0 and (not cfg.no_test):
122 |             model.eval()
123 |             if cfg.multi_gpu:  # was args.multi_gpu, which reached outside train(); read the cfg instead
124 |                 model_eval = model.module
125 |             else:
126 |                 model_eval = model
127 |             if cfg.dataset == 'kitti_depth' or cfg.dataset == 'kitti_odo' or cfg.dataset == 'sintel_raw':
128 |                 if not (cfg.mode == 'depth' or cfg.mode == 'flowposenet'):
129 |                     eval_2012_res = test_kitti_2012(cfg, model_eval, gt_flows_2012, noc_masks_2012)
130 |                     eval_2015_res = test_kitti_2015(cfg, model_eval, gt_flows_2015, noc_masks_2015, gt_masks_2015, depth_save_dir=os.path.join(cfg.model_dir, 'results'))
131 |                     visualizer.add_log_pack({'eval_2012_res': eval_2012_res, 'eval_2015_res': eval_2015_res})
132 |             elif cfg.dataset == 'nyuv2':
133 |                 if not cfg.mode == 'flow':
134 |                     eval_nyu_res = test_nyu(cfg, model_eval, test_images, test_gt_depths)
135 |                     visualizer.add_log_pack({'eval_nyu_res': eval_nyu_res})
136 |             visualizer.dump_log(os.path.join(cfg.model_dir, 'log.pkl'))
137 |             model.train()
138 |         iter_ = iter_ + cfg.iter_start  # offset by the resume start so checkpoints keep global numbering
139 |         optimizer.zero_grad()
140 |         inputs = inputs.cuda()
141 |         #inputs = [k.cuda() for k in inputs]
142 |         loss_pack = model(inputs)  # forward pass returns the unweighted loss terms by name
143 | 
144 |         if iter_ % cfg.log_interval == 0:
145 |             visualizer.print_loss(loss_pack, iter_=iter_)
146 | 
147 |         loss_list = []
148 |         for key in list(loss_pack.keys()):
149 |             loss_list.append((loss_weights_dict[key] * loss_pack[key].mean()).unsqueeze(0))
150 |         loss = torch.cat(loss_list, 0).sum()  # total loss is the weighted sum of all terms
151 |         loss.backward()
152 |         optimizer.step()
153 |         if (iter_ + 1) % cfg.save_interval == 0:
154 |             save_model(iter_, cfg.model_dir, 'iter_{}.pth'.format(iter_), model, optimizer)
155 |             save_model(iter_, cfg.model_dir, 'last.pth', model, optimizer)  # the stray .format(iter_) on 'last.pth' was a no-op
156 | 
157 |     if cfg.dataset == 'kitti_depth':
158 |         if cfg.mode == 'depth' or cfg.mode == 'depth_pose':
159 |             eval_depth_res = test_eigen_depth(cfg, model.module if cfg.multi_gpu else model)  # model_eval may be undefined when --no_test is set
160 | 
161 | if __name__ == '__main__':
162 |     import argparse
163 |     arg_parser = argparse.ArgumentParser(
164 |         description="TrianFlow training pipeline."
165 |     )
166 |     arg_parser.add_argument('-c', '--config_file', default=None, help='config file.')
167 |     arg_parser.add_argument('-g', '--gpu', type=str, default='0', help='gpu id.')
168 |     arg_parser.add_argument('--batch_size', type=int, default=8, help='batch size.')
169 |     arg_parser.add_argument('--iter_start', type=int, default=0, help='starting iteration.')
170 |     arg_parser.add_argument('--lr', type=float, default=0.0001, help='learning rate')
171 |     arg_parser.add_argument('--num_workers', type=int, default=4, help='number of workers.')
172 |     arg_parser.add_argument('--log_interval', type=int, default=100, help='interval for printing loss.')
173 |     arg_parser.add_argument('--test_interval', type=int, default=2000, help='interval for evaluation.')
174 |     arg_parser.add_argument('--save_interval', type=int, default=2000, help='interval for saving models.')
175 |     arg_parser.add_argument('--mode', type=str, default='flow', help='training mode.')
176 |     arg_parser.add_argument('--model_dir', type=str, default=None, help='directory for saving models')
177 |     arg_parser.add_argument('--prepared_save_dir', type=str, default='data_s1', help='directory name for generated training dataset')
178 |     arg_parser.add_argument('--flow_pretrained_model', type=str, default=None, help='directory for loading flow pretrained models')
179 |     arg_parser.add_argument('--depth_pretrained_model', type=str, default=None, help='directory for loading depth pretrained models')
180 |     arg_parser.add_argument('--resume', action='store_true', help='to resume training.')
181 |     arg_parser.add_argument('--multi_gpu', action='store_true', help='to use multiple gpu for training.')
182 |     arg_parser.add_argument('--no_test', action='store_true', help='without evaluation.')
183 |     args = arg_parser.parse_args()
184 |     #args.config_file = 'config/debug.yaml'
185 |     if args.config_file is None:
186 |         raise ValueError('config file needed: use -c --config_file.')
187 | 
188 |     # set model
189 |     if args.model_dir is None:
190 |         args.model_dir = os.path.join('models', os.path.splitext(os.path.split(args.config_file)[1])[0])
191 |     args.model_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), args.model_dir, args.mode)
192 |     if not os.path.exists(args.model_dir):
193 |         os.makedirs(args.model_dir)
194 |     if not os.path.exists(args.config_file):
195 |         raise ValueError('config file not found.')
196 |     with open(args.config_file, 'r') as f:
197 |         cfg = yaml.safe_load(f)
198 |     cfg['img_hw'] = (cfg['img_hw'][0], cfg['img_hw'][1])
199 |     cfg['log_dump_dir'] = os.path.join(args.model_dir, 'log.pkl')
200 |     shutil.copy(args.config_file, args.model_dir)
201 | 
202 |     # copy attr into cfg
203 |     for attr in dir(args):
204 |         if attr[:2] != '__':
205 |             cfg[attr] = getattr(args, attr)
206 | 
207 |     # set gpu
208 |     num_gpus = len(args.gpu.split(','))
209 |     if (args.multi_gpu and num_gpus <= 1) or ((not args.multi_gpu) and num_gpus > 1):
210 |         raise ValueError('The number of GPUs listed in --gpu does not match the --multi_gpu flag.')
211 |     if args.multi_gpu:
212 |         cfg['batch_size'] = cfg['batch_size'] * num_gpus
213 |         cfg['num_iterations'] = int(cfg['num_iterations'] / num_gpus)
214 |     os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu)  # must be set before the first CUDA call inside train()
215 | 
216 |     class pObject(object):
217 |         def __init__(self):
218 |             pass
219 |     cfg_new = pObject()
220 |     for attr in list(cfg.keys()):
221 |         setattr(cfg_new, attr, cfg[attr])
222 |     with open(os.path.join(args.model_dir, 'config.pkl'), 'wb') as f:
223 |         pickle.dump(cfg_new, f)
224 | 
225 |     # main function
226 |     train(cfg_new)
227 | 
228 | 
--------------------------------------------------------------------------------
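
For quick sanity checks outside of test.py, the pieces above compose into a short standalone script. The sketch below is not part of the repository; it assumes the conventions visible in test.py: an attribute-style config object built from the YAML file, `Model_flow(cfg)`, checkpoints that store weights under `'model_state_dict'`, and `inference_flow` on NCHW tensors scaled to [0, 1] (the same `/255.0` preprocessing test.py applies on its depth paths). The checkpoint and frame paths are placeholders.

```python
# Minimal flow-inference sketch, assuming test.py's conventions; paths are placeholders.
import cv2
import numpy as np
import torch
import yaml

from core.networks import Model_flow

with open('./config/kitti.yaml', 'r') as f:
    cfg = yaml.safe_load(f)

class pObject(object):  # same attribute-style config wrapper that test.py builds
    pass

cfg_new = pObject()
for k, v in cfg.items():
    setattr(cfg_new, k, v)
cfg_new.mode = 'flow'

model = Model_flow(cfg_new).cuda()
weights = torch.load('path/to/model.pth')  # placeholder checkpoint path
model.load_state_dict(weights['model_state_dict'])
model.eval()

def load_img(path, hw):
    # read, resize to the training resolution, and convert to a [1, 3, H, W] tensor in [0, 1]
    img = cv2.resize(cv2.imread(path), (hw[1], hw[0]))
    return torch.from_numpy(np.transpose(img, [2, 0, 1])).float().cuda().unsqueeze(0) / 255.0

img1 = load_img('frame_0.png', cfg_new.img_hw)  # placeholder frame paths
img2 = load_img('frame_1.png', cfg_new.img_hw)
with torch.no_grad():
    flow = model.inference_flow(img1, img2)  # [1, 2, H, W], as consumed in test.py
print(flow.shape)
```

If the KITTI dataloader applies a normalization other than the `/255.0` used elsewhere in test.py, `load_img` should be adjusted to match it.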