├── LICENSE
├── README.md
├── datasets
│   ├── __init__.py
│   ├── kitti_dataset.py
│   └── mono_dataset.py
├── evaluate_depth.py
├── export_gt_depth.py
├── images
│   ├── Quantitative_result1.png
│   ├── Quantitative_result_lite.png
│   └── hr_depth.gif
├── kitti_utils.py
├── layers.py
├── networks
│   ├── HR_Depth_Decoder.py
│   ├── __init__.py
│   ├── depth_decoder.py
│   ├── mobilenetV3_encoder.py
│   └── resnet_encoder.py
├── options.py
├── splits
│   ├── eigen
│   │   └── test_files.txt
│   └── kitti_archives_to_download.txt
├── test_single_image.ipynb
└── utils.py
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2020 shawlyu
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # HR-Depth: High Resolution Self-Supervised Monocular Depth Estimation
2 |
3 | This is the official implementation for training and testing depth estimation using the model proposed in
4 |
5 | >HR-Depth: High Resolution Self-Supervised Monocular Depth Estimation
6 | >
7 | >Xiaoyang Lyu, Liang Liu, Mengmeng Wang, Xin Kong, Lina Liu, Yong Liu*, Xinxin Chen and Yi Yuan.
8 |
9 | This paper has been accepted by AAAI 2021.
10 |
11 | ![HR-Depth qualitative results](./images/hr_depth.gif)
12 |
13 | **Note:** For now we are releasing the evaluation code and some pretrained models from our paper. The training code is adapted from [Monodepth2](https://github.com/nianticlabs/monodepth2) and will be released soon.
14 |
15 | # Update
16 | **2021.1.27**
17 | 1. The training code will be released around the beginning of March.
18 | 2. To re-implement HR-Depth, you can clone [Monodepth2](https://github.com/nianticlabs/monodepth2) and simply replace the `DepthDecoder` with `HRDepthDecoder` (see the sketch below). Our parameter settings are exactly the same as Monodepth2's.
19 | 3. In our paper we stated the initial learning rate incorrectly. It should be **1e-4**, not **1e-3**. We will fix this mistake in the final version; thanks to those who pointed it out.
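
As a rough illustration of point 2, the swap looks like the following minimal sketch (this is **not** the released training code; `networks` is this repository's package, the dummy batch and `pretrained=False` are only for the example, and the learning rate is the corrected **1e-4**):

```python
import torch
import networks  # run from the repository root

# Same ResNet-18 encoder as Monodepth2 (pass True to start from ImageNet weights).
encoder = networks.ResnetEncoder(18, False)
# HRDepthDecoder is the drop-in replacement for Monodepth2's DepthDecoder.
decoder = networks.HRDepthDecoder(encoder.num_ch_enc)

params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)  # corrected initial learning rate

images = torch.randn(2, 3, 192, 640)           # dummy batch at 640x192
outputs = decoder(encoder(images))
disp = outputs[("disparity", "Scale0")]        # full-resolution sigmoid disparity
print(disp.shape)                              # torch.Size([2, 1, 192, 640])
```

Note that the decoder returns its predictions under keys such as `("disparity", "Scale0")`, so a Monodepth2-style trainer has to read the outputs under these names when computing the losses.
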
20 | # Quantitative Results
21 |
22 | ## HR-Depth Results
23 |
24 | ![HR-Depth quantitative results](./images/Quantitative_result1.png)
25 |
26 | ## Lite-HR-Depth Results
27 |
28 | ![Lite-HR-Depth quantitative results](./images/Quantitative_result_lite.png)
29 |
30 | # Usage
31 |
32 | ## Requirements
33 |
34 | Assuming a fresh Anaconda distribution, you can install the dependencies with:
35 |
36 | ```shell
37 | conda install pytorch=1.5.0 torchvision=0.6.0 -c pytorch
38 | conda install opencv=4.2
39 | pip install scipy==1.4.1
40 | ```
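
To confirm the environment matches these versions before running evaluation, a quick check like the following can help (a small sketch, not part of the repository; note that the evaluation code calls `.cuda()`, so a GPU build of PyTorch is required):

```python
import torch, torchvision, cv2, scipy

print(torch.__version__)          # expect 1.5.0
print(torchvision.__version__)    # expect 0.6.0
print(cv2.__version__)            # expect 4.2.x
print(scipy.__version__)          # expect 1.4.1
print(torch.cuda.is_available())  # must be True for evaluate_depth.py
```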
41 |
42 | ## Pretrained Model
43 |
44 | We provide the following pretrained models:
45 |
46 | | Model Name | Resolution | Dataset | Supervision | Abs_Rel | $\delta<1.25$ | $\delta<1.25^2$ | $\delta<1.25^3$ |
47 | | ------------------------------------------------------------ | --------------- | ------- | ----------- | ------- | ------------- | --------------- | --------------- |
48 | | [HR_Depth_CS_K_MS_$640\times192$](http://hr-depth-pretrain-model.s3.amazonaws.com/HR_Depth_CS_K_MS_640x192.zip) | $640\times192$ | CS+K | MS | 0.104 | 0.893 | 0.964 | 0.983 |
49 | | [HR_Depth_K_MS_$1024\times320$](http://hr-depth-pretrain-model.s3.amazonaws.com/HR_Depth_K_MS_1024x320.zip) | $1024\times320$ | K | MS | 0.101 | 0.899 | 0.966 | 0.983 |
50 | | [HR_Depth_K_M_$1280\times384$](http://hr-depth-pretrain-model.s3.amazonaws.com/HR_Depth_K_M_1280x384.zip) | $1280\times384$ | K | M | 0.104 | 0.894 | 0.966 | 0.984 |
51 | | [Lite_HR_Depth_K_T_$1280\times384$](http://hr-depth-pretrain-model.s3.amazonaws.com/Lite_HR_Depth_K_T_1280x384.zip) | $1280\times384$ | K | T | 0.104 | 0.893 | 0.967 | 0.985 |
52 |
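For example, a checkpoint can be downloaded and unpacked with a short standard-library script like the sketch below (the URL is the `HR_Depth_K_M_1280x384` link from the table; the `./models/<model_name>` layout assumed here is the one used by the evaluation commands further down, so move the extracted `encoder.pth` and `depth.pth` into such a folder if the archive is laid out differently):

```python
import os
import urllib.request
import zipfile

url = "http://hr-depth-pretrain-model.s3.amazonaws.com/HR_Depth_K_M_1280x384.zip"
os.makedirs("models", exist_ok=True)
zip_path = os.path.join("models", "HR_Depth_K_M_1280x384.zip")

urllib.request.urlretrieve(url, zip_path)   # fetch the checkpoint archive
with zipfile.ZipFile(zip_path) as archive:  # unpack it next to the other models
    archive.extractall("models")
```
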
53 | ## KITTI training data
54 |
55 | You can download the entire KITTI raw dataset by running:
56 |
57 | ```shell
58 | wget -i splits/kitti_archives_to_download.txt -P kitti_data/
59 | ```
60 |
61 | Then unzip with
62 |
63 | ```shell
64 | cd kitti_data
65 | unzip "*.zip"
66 | cd ..
67 | ```
68 |
69 | **Warning:** The size of this dataset is about 175GB, so make sure you have enough space to unzip it.
70 |
71 | ## KITTI evaluation
72 |
73 | `--data_path`: **path to the KITTI dataset**
74 | `--load_weights_folder`: **path to the model weights folder**
75 | `--HR_Depth`: **run inference with HR-Depth**
76 | `--Lite_HR_Depth`: **run inference with Lite-HR-Depth**
77 |
78 | To prepare the ground truth depth maps, run:
79 |
80 | ```shell
81 | python export_gt_depth.py --data_path ./kitti_RAW
82 | ```
83 |
84 | assuming that you have placed the KITTI raw dataset at the path passed to `--data_path` (`./kitti_RAW` in these examples).
85 |
86 | For HR-Depth:
87 |
88 | ```shell
89 | python evaluate_depth.py --data_path ./kitti_RAW --load_weights_folder ./models/HR_Depth_CS_K_MS_640x192 --HR_Depth
90 |
91 | python evaluate_depth.py --data_path ./kitti_RAW --load_weights_folder ./models/HR_Depth_K_M_1280x384 --HR_Depth
92 | ```
93 |
94 | For Lite-HR-Depth:
95 |
96 | ```shell
97 | python evaluate_depth.py --data_path ./kitti_RAW --load_weights_folder ./models/Lite_HR_Depth_K_T_1280x384 --Lite_HR_Depth
98 | ```
99 |
100 |
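For quick qualitative tests there is also `test_single_image.ipynb`. The core inference step it relies on looks roughly like the sketch below (the weights folder and the image name are placeholders; everything else mirrors `evaluate_depth.py`):

```python
import torch
from PIL import Image
from torchvision import transforms

import networks
from layers import disp_to_depth

weights_folder = "./models/HR_Depth_K_M_1280x384"   # placeholder: any unzipped HR-Depth model folder
encoder_dict = torch.load(weights_folder + "/encoder.pth", map_location="cpu")

encoder = networks.ResnetEncoder(18, False)
encoder.load_state_dict({k: v for k, v in encoder_dict.items() if k in encoder.state_dict()})
decoder = networks.HRDepthDecoder(encoder.num_ch_enc)
decoder.load_state_dict(torch.load(weights_folder + "/depth.pth", map_location="cpu"))
encoder.eval()
decoder.eval()

image = Image.open("example.jpg").convert("RGB")     # placeholder input image
image = image.resize((encoder_dict["width"], encoder_dict["height"]), Image.ANTIALIAS)
inputs = transforms.ToTensor()(image).unsqueeze(0)

with torch.no_grad():
    outputs = decoder(encoder(inputs))
    disp = outputs[("disparity", "Scale0")]
    scaled_disp, depth = disp_to_depth(disp, 0.1, 100.0)  # same depth range as evaluate_depth.py

print(depth.shape)  # 1 x 1 x height x width
```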
--------------------------------------------------------------------------------
/datasets/__init__.py:
--------------------------------------------------------------------------------
1 | from .kitti_dataset import KITTIRAWDataset, KITTIOdomDataset, KITTIDepthDataset
2 |
--------------------------------------------------------------------------------
/datasets/kitti_dataset.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import, division, print_function
2 |
3 | import os
4 | import skimage.transform
5 | import numpy as np
6 | import PIL.Image as pil
7 |
8 | from kitti_utils import generate_depth_map
9 | from .mono_dataset import MonoDataset
10 |
11 |
12 | class KITTIDataset(MonoDataset):
13 | """Superclass for different types of KITTI dataset loaders
14 | """
15 | def __init__(self, *args, **kwargs):
16 | super(KITTIDataset, self).__init__(*args, **kwargs)
17 |
18 | self.K = np.array([[0.58, 0, 0.5, 0],
19 | [0, 1.92, 0.5, 0],
20 | [0, 0, 1, 0],
21 | [0, 0, 0, 1]], dtype=np.float32)
22 |
23 | self.full_res_shape = (1242, 375)
24 | self.side_map = {"2": 2, "3": 3, "l": 2, "r": 3}
25 |
26 | def check_depth(self):
27 | line = self.filenames[0].split()
28 | scene_name = line[0]
29 | frame_index = int(line[1])
30 |
31 | velo_filename = os.path.join(
32 | self.data_path,
33 | scene_name,
34 | "velodyne_points/data/{:010d}.bin".format(int(frame_index)))
35 |
36 | return os.path.isfile(velo_filename)
37 |
38 | def get_color(self, folder, frame_index, side, do_flip):
39 | color = self.loader(self.get_image_path(folder, frame_index, side))
40 |
41 | if do_flip:
42 | color = color.transpose(pil.FLIP_LEFT_RIGHT)
43 |
44 | return color
45 |
46 |
47 | class KITTIRAWDataset(KITTIDataset):
48 | """KITTI dataset which loads the original velodyne depth maps for ground truth
49 | """
50 | def __init__(self, *args, **kwargs):
51 | super(KITTIRAWDataset, self).__init__(*args, **kwargs)
52 |
53 | def get_image_path(self, folder, frame_index, side):
54 | f_str = "{:010d}{}".format(frame_index, self.img_ext)
55 | image_path = os.path.join(
56 | self.data_path, folder, "image_0{}/data".format(self.side_map[side]), f_str)
57 | return image_path
58 |
59 | def get_depth(self, folder, frame_index, side, do_flip):
60 | calib_path = os.path.join(self.data_path, folder.split("/")[0])
61 |
62 | velo_filename = os.path.join(
63 | self.data_path,
64 | folder,
65 | "velodyne_points/data/{:010d}.bin".format(int(frame_index)))
66 |
67 | depth_gt = generate_depth_map(calib_path, velo_filename, self.side_map[side])
68 | depth_gt = skimage.transform.resize(
69 | depth_gt, self.full_res_shape[::-1], order=0, preserve_range=True, mode='constant')
70 |
71 | if do_flip:
72 | depth_gt = np.fliplr(depth_gt)
73 |
74 | return depth_gt
75 |
76 |
77 | class KITTIOdomDataset(KITTIDataset):
78 | """KITTI dataset for odometry training and testing
79 | """
80 | def __init__(self, *args, **kwargs):
81 | super(KITTIOdomDataset, self).__init__(*args, **kwargs)
82 |
83 | def get_image_path(self, folder, frame_index, side):
84 | f_str = "{:06d}{}".format(frame_index, self.img_ext)
85 | image_path = os.path.join(
86 | self.data_path,
87 | "sequences/{:02d}".format(int(folder)),
88 | "image_{}".format(self.side_map[side]),
89 | f_str)
90 | return image_path
91 |
92 |
93 | class KITTIDepthDataset(KITTIDataset):
94 | """KITTI dataset which uses the updated ground truth depth maps
95 | """
96 | def __init__(self, *args, **kwargs):
97 | super(KITTIDepthDataset, self).__init__(*args, **kwargs)
98 |
99 | def get_image_path(self, folder, frame_index, side):
100 | f_str = "{:010d}{}".format(frame_index, self.img_ext)
101 | image_path = os.path.join(
102 | self.data_path,
103 | folder,
104 | "image_0{}/data".format(self.side_map[side]),
105 | f_str)
106 | return image_path
107 |
108 | def get_depth(self, folder, frame_index, side, do_flip):
109 | f_str = "{:010d}.png".format(frame_index)
110 | depth_path = os.path.join(
111 | self.data_path,
112 | folder,
113 | "proj_depth/groundtruth/image_0{}".format(self.side_map[side]),
114 | f_str)
115 |
116 | depth_gt = pil.open(depth_path)
117 | depth_gt = depth_gt.resize(self.full_res_shape, pil.NEAREST)
118 | depth_gt = np.array(depth_gt).astype(np.float32) / 256
119 |
120 | if do_flip:
121 | depth_gt = np.fliplr(depth_gt)
122 |
123 | return depth_gt
124 |
--------------------------------------------------------------------------------
/datasets/mono_dataset.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import, division, print_function
2 |
3 | import os
4 | import random
5 | import numpy as np
6 | import copy
7 | from PIL import Image # using pillow-simd for increased speed
8 |
9 | import torch
10 | import torch.utils.data as data
11 | from torchvision import transforms
12 |
13 |
14 | def pil_loader(path):
15 | # open path as file to avoid ResourceWarning
16 | # (https://github.com/python-pillow/Pillow/issues/835)
17 | with open(path, 'rb') as f:
18 | with Image.open(f) as img:
19 | return img.convert('RGB')
20 |
21 |
22 | class MonoDataset(data.Dataset):
23 | """Superclass for monocular dataloaders
24 |
25 | Args:
26 | data_path
27 | filenames
28 | height
29 | width
30 | frame_idxs
31 | num_scales
32 | is_train
33 | img_ext
34 | """
35 | def __init__(self,
36 | data_path,
37 | filenames,
38 | height,
39 | width,
40 | frame_idxs,
41 | num_scales,
42 | is_train=False,
43 | img_ext='.png',
44 | vertical_flip=False):
45 | super(MonoDataset, self).__init__()
46 |
47 | self.data_path = data_path
48 | self.filenames = filenames
49 | self.height = height
50 | self.width = width
51 | self.num_scales = num_scales
52 | self.interp = Image.ANTIALIAS
53 |
54 | self.frame_idxs = frame_idxs
55 |
56 | self.is_train = is_train
57 | self.img_ext = img_ext
58 |
59 | self.loader = pil_loader
60 | self.to_tensor = transforms.ToTensor()
61 |
62 | # We need to specify augmentations differently in newer versions of torchvision.
63 | # We first try the newer tuple version; if this fails we fall back to scalars
64 | try:
65 | self.brightness = (0.8, 1.2)
66 | self.contrast = (0.8, 1.2)
67 | self.saturation = (0.8, 1.2)
68 | self.hue = (-0.1, 0.1)
69 | transforms.ColorJitter.get_params(
70 | self.brightness, self.contrast, self.saturation, self.hue)
71 | except TypeError:
72 | self.brightness = 0.2
73 | self.contrast = 0.2
74 | self.saturation = 0.2
75 | self.hue = 0.1
76 |
77 | self.resize = {}
78 | for i in range(self.num_scales):
79 | s = 2 ** i
80 | self.resize[i] = transforms.Resize((self.height // s, self.width // s),
81 | interpolation=self.interp)
82 |
83 | self.load_depth = self.check_depth()
84 |
85 | def preprocess(self, inputs, color_aug):
86 | """Resize colour images to the required scales and augment if required
87 |
88 | We create the color_aug object in advance and apply the same augmentation to all
89 | images in this item. This ensures that all images input to the pose network receive the
90 | same augmentation.
91 | """
92 | for k in list(inputs):
93 | frame = inputs[k]
94 | if "color" in k:
95 | n, im, i = k
96 | for i in range(self.num_scales):
97 | inputs[(n, im, i)] = self.resize[i](inputs[(n, im, i - 1)])
98 |
99 | for k in list(inputs):
100 | f = inputs[k]
101 | if "color" in k:
102 | n, im, i = k
103 | inputs[(n, im, i)] = self.to_tensor(f)
104 | inputs[(n + "_aug", im, i)] = self.to_tensor(color_aug(f))
105 |
106 | def __len__(self):
107 | return len(self.filenames)
108 |
109 | def __getitem__(self, index):
110 | """Returns a single training item from the dataset as a dictionary.
111 |
112 | Values correspond to torch tensors.
113 | Keys in the dictionary are either strings or tuples:
114 |
115 | ("color", , ) for raw colour images,
116 | ("color_aug", , ) for augmented colour images,
117 | ("K", scale) or ("inv_K", scale) for camera intrinsics,
118 | "stereo_T" for camera extrinsics, and
119 | "depth_gt" for ground truth depth maps.
120 |
121 | is either:
122 | an integer (e.g. 0, -1, or 1) representing the temporal step relative to 'index',
123 | or
124 | "s" for the opposite image in the stereo pair.
125 |
126 | is an integer representing the scale of the image relative to the fullsize image:
127 | -1 images at native resolution as loaded from disk
128 | 0 images resized to (self.width, self.height )
129 | 1 images resized to (self.width // 2, self.height // 2)
130 | 2 images resized to (self.width // 4, self.height // 4)
131 | 3 images resized to (self.width // 8, self.height // 8)
132 | """
133 | inputs = {}
134 |
135 | do_color_aug = self.is_train and random.random() > 0.5
136 | do_flip = self.is_train and random.random() > 0.5
137 |
138 | # inputs["vertical_flip"] = do_vertical_flip and (not do_flip)
139 | line = self.filenames[index].split()
140 | folder = line[0]
141 |
142 | if len(line) == 3:
143 | frame_index = int(line[1])
144 | else:
145 | frame_index = 0
146 |
147 | if len(line) == 3:
148 | side = line[2]
149 | else:
150 | side = None
151 |
152 | for i in self.frame_idxs:
153 | if i == "s":
154 | other_side = {"r": "l", "l": "r"}[side]
155 | inputs[("color", i, -1)] = self.get_color(folder, frame_index, other_side, do_flip)
156 | else:
157 | inputs[("color", i, -1)] = self.get_color(folder, frame_index + i, side, do_flip)
158 |
159 | # adjusting intrinsics to match each scale in the pyramid
160 | for scale in range(self.num_scales):
161 | K = self.K.copy()
162 |
163 | K[0, :] *= self.width // (2 ** scale)
164 | K[1, :] *= self.height // (2 ** scale)
165 |
166 | inv_K = np.linalg.pinv(K)
167 |
168 | inputs[("K", scale)] = torch.from_numpy(K)
169 | inputs[("inv_K", scale)] = torch.from_numpy(inv_K)
170 |
171 | if do_color_aug:
172 | color_aug = transforms.ColorJitter.get_params(
173 | self.brightness, self.contrast, self.saturation, self.hue)
174 | else:
175 | color_aug = (lambda x: x)
176 |
177 | self.preprocess(inputs, color_aug)
178 |
179 | for i in self.frame_idxs:
180 | del inputs[("color", i, -1)]
181 | del inputs[("color_aug", i, -1)]
182 |
183 | if self.load_depth:
184 | depth_gt = self.get_depth(folder, frame_index, side, do_flip)
185 | inputs["depth_gt"] = np.expand_dims(depth_gt, 0)
186 | inputs["depth_gt"] = torch.from_numpy(inputs["depth_gt"].astype(np.float32))
187 | if "s" in self.frame_idxs:
188 | stereo_T = np.eye(4, dtype=np.float32)
189 | baseline_sign = -1 if do_flip else 1
190 | side_sign = -1 if side == "l" else 1
191 | stereo_T[0, 3] = side_sign * baseline_sign * 0.1
192 |
193 | inputs["stereo_T"] = torch.from_numpy(stereo_T)
194 |
195 | return inputs
196 |
197 | def get_color(self, folder, frame_index, side, do_flip):
198 | raise NotImplementedError
199 |
200 | def check_depth(self):
201 | raise NotImplementedError
202 |
203 | def get_depth(self, folder, frame_index, side, do_flip):
204 | raise NotImplementedError
205 |
--------------------------------------------------------------------------------
/evaluate_depth.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import, division, print_function
2 |
3 | import os
4 | import cv2
5 | import numpy as np
6 |
7 | import torch
8 | from torch.utils.data import DataLoader
9 |
10 | from layers import disp_to_depth
11 | from utils import readlines
12 | from options import HRDepthOptions
13 | import datasets
14 | import networks
15 |
16 | cv2.setNumThreads(0) # This speeds up evaluation 5x on our unix systems (OpenCV 3.3.1)
17 |
18 |
19 | splits_dir = os.path.join(os.path.dirname(__file__), "splits")
20 |
21 |
22 | def compute_errors(gt, pred):
23 | """
24 | Computation of error metrics between predicted and ground truth depths
25 | """
26 | thresh = np.maximum((gt / pred), (pred / gt))
27 | a1 = (thresh < 1.25 ).mean()
28 | a2 = (thresh < 1.25 ** 2).mean()
29 | a3 = (thresh < 1.25 ** 3).mean()
30 |
31 | rmse = (gt - pred) ** 2
32 | rmse = np.sqrt(rmse.mean())
33 |
34 | rmse_log = (np.log(gt) - np.log(pred)) ** 2
35 | rmse_log = np.sqrt(rmse_log.mean())
36 |
37 | abs_rel = np.mean(np.abs(gt - pred) / gt)
38 |
39 | sq_rel = np.mean(((gt - pred) ** 2) / gt)
40 |
41 | return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3
42 |
43 | def evaluate(opt):
44 | """Evaluates a pretrained model using a specified test set
45 | """
46 | MIN_DEPTH = 1e-3
47 | MAX_DEPTH = 80
48 |
49 | opt.load_weights_folder = os.path.expanduser(opt.load_weights_folder)
50 |
51 | assert os.path.isdir(opt.load_weights_folder), \
52 | "Cannot find a folder at {}".format(opt.load_weights_folder)
53 |
54 | print("-> Loading weights from {}".format(opt.load_weights_folder))
55 |
56 | filenames = readlines(os.path.join(splits_dir, opt.eval_split, "test_files.txt"))
57 | encoder_path = os.path.join(opt.load_weights_folder, "encoder.pth")
58 | decoder_path = os.path.join(opt.load_weights_folder, "depth.pth")
59 |
60 | encoder_dict = torch.load(encoder_path)
61 |
62 | dataset = datasets.KITTIRAWDataset(opt.data_path, filenames,
63 | encoder_dict['height'], encoder_dict['width'],
64 | [0], 4, is_train=False)
65 | dataloader = DataLoader(dataset, 16, shuffle=False, num_workers=opt.num_workers,
66 | pin_memory=True, drop_last=False)
67 | if opt.Lite_HR_Depth:
68 | encoder = networks.MobileEncoder(pretrained=None)
69 | elif opt.HR_Depth:
70 | encoder = networks.ResnetEncoder(18, False)
71 | else:
72 |         assert False, "Please choose HR-Depth or Lite-HR-Depth"
73 | depth_decoder = networks.HRDepthDecoder(encoder.num_ch_enc, mobile_encoder=opt.Lite_HR_Depth)
74 |
75 | model_dict = encoder.state_dict()
76 | encoder.load_state_dict({k: v for k, v in encoder_dict.items() if k in model_dict})
77 | depth_decoder.load_state_dict(torch.load(decoder_path))
78 |
79 | encoder.cuda()
80 | encoder.eval()
81 | depth_decoder.cuda()
82 | depth_decoder.eval()
83 |
84 | pred_disps = []
85 |
86 | print("-> Computing predictions with size {}x{}".format(
87 | encoder_dict['width'], encoder_dict['height']))
88 |
89 | with torch.no_grad():
90 | for data in dataloader:
91 | input_color = data[("color", 0, 0)].cuda()
92 |
93 | output = depth_decoder(encoder(input_color))
94 | pred_disp, _ = disp_to_depth(output[("disparity", "Scale0")], 0.1, 100.0)
95 | pred_disp = pred_disp.cpu()[:, 0].numpy()
96 |
97 | pred_disps.append(pred_disp)
98 |
99 | pred_disps = np.concatenate(pred_disps)
100 |
101 | gt_path = os.path.join(splits_dir, opt.eval_split, "gt_depths.npz")
102 | gt_depths = np.load(gt_path, fix_imports=True, encoding='latin1', allow_pickle=True)["data"]
103 |
104 | print("-> Evaluating")
105 | print(" Using median scaling")
106 |
107 | errors = []
108 | ratios = []
109 |
110 | for i in range(pred_disps.shape[0]):
111 |
112 | gt_depth = gt_depths[i]
113 | gt_height, gt_width = gt_depth.shape[:2]
114 |
115 | pred_disp = pred_disps[i]
116 | pred_disp = cv2.resize(pred_disp, (gt_width, gt_height))
117 | pred_depth = 1 / pred_disp
118 |
119 | # Apply the mask proposed by Eigen
120 | mask = np.logical_and(gt_depth > MIN_DEPTH, gt_depth < MAX_DEPTH)
121 |
122 | crop = np.array([0.40810811 * gt_height, 0.99189189 * gt_height,
123 | 0.03594771 * gt_width, 0.96405229 * gt_width]).astype(np.int32)
124 | crop_mask = np.zeros(mask.shape)
125 | crop_mask[crop[0]:crop[1], crop[2]:crop[3]] = 1
126 | mask = np.logical_and(mask, crop_mask)
127 |
128 |
129 | pred_depth = pred_depth[mask]
130 | gt_depth = gt_depth[mask]
131 |
132 | ratio = np.median(gt_depth) / np.median(pred_depth)
133 | ratios.append(ratio)
134 | pred_depth *= ratio
135 |
136 | pred_depth[pred_depth < MIN_DEPTH] = MIN_DEPTH
137 | pred_depth[pred_depth > MAX_DEPTH] = MAX_DEPTH
138 |
139 | errors.append(compute_errors(gt_depth, pred_depth))
140 |
141 | ratios = np.array(ratios)
142 | med = np.median(ratios)
143 | print(" Scaling ratios | med: {:0.3f} | std: {:0.3f}".format(med, np.std(ratios / med)))
144 |
145 | mean_errors = np.array(errors).mean(0)
146 |
147 | print("\n " + ("{:>8} | " * 7).format("abs_rel", "sq_rel", "rmse", "rmse_log", "a1", "a2", "a3"))
148 | print(("&{: 8.3f} " * 7).format(*mean_errors.tolist()) + "\\\\")
149 | print("\n-> Done!")
150 |
151 |
152 | if __name__ == "__main__":
153 | options = HRDepthOptions()
154 | evaluate(options.parse())
155 |
--------------------------------------------------------------------------------
/export_gt_depth.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import, division, print_function
2 |
3 | import os
4 |
5 | import argparse
6 | import numpy as np
7 | import PIL.Image as pil
8 |
9 | from utils import readlines
10 | from kitti_utils import generate_depth_map
11 |
12 |
13 | def export_gt_depths_kitti():
14 |
15 | parser = argparse.ArgumentParser(description='export_gt_depth')
16 |
17 | parser.add_argument('--data_path',
18 | type=str,
19 | help='path to the root of the KITTI data',
20 | required=True)
21 | parser.add_argument('--split',
22 | type=str,
23 | help='which split to export gt from',
24 | default="eigen",
25 | choices=["eigen", "eigen_benchmark"])
26 | opt = parser.parse_args()
27 |
28 | split_folder = os.path.join(os.path.dirname(__file__), "splits", opt.split)
29 | lines = readlines(os.path.join(split_folder, "test_files.txt"))
30 |
31 | print("Exporting ground truth depths for {}".format(opt.split))
32 |
33 | gt_depths = []
34 | for line in lines:
35 |
36 | folder, frame_id, _ = line.split()
37 | frame_id = int(frame_id)
38 |
39 | if opt.split == "eigen":
40 | calib_dir = os.path.join(opt.data_path, folder.split("/")[0])
41 | velo_filename = os.path.join(opt.data_path, folder,
42 | "velodyne_points/data", "{:010d}.bin".format(frame_id))
43 | gt_depth = generate_depth_map(calib_dir, velo_filename, 2, True)
44 | elif opt.split == "eigen_benchmark":
45 | gt_depth_path = os.path.join(opt.data_path, folder, "proj_depth",
46 | "groundtruth", "image_02", "{:010d}.png".format(frame_id))
47 | gt_depth = np.array(pil.open(gt_depth_path)).astype(np.float32) / 256
48 |
49 | gt_depths.append(gt_depth.astype(np.float32))
50 |
51 | output_path = os.path.join(split_folder, "gt_depths.npz")
52 |
53 | print("Saving to {}".format(opt.split))
54 |
55 | np.savez_compressed(output_path, data=np.array(gt_depths))
56 |
57 |
58 | if __name__ == "__main__":
59 | export_gt_depths_kitti()
60 |
--------------------------------------------------------------------------------
/images/Quantitative_result1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shawLyu/HR-Depth/1804ddd5d900949c2d4ac6eb28a25c86efb231d5/images/Quantitative_result1.png
--------------------------------------------------------------------------------
/images/Quantitative_result_lite.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shawLyu/HR-Depth/1804ddd5d900949c2d4ac6eb28a25c86efb231d5/images/Quantitative_result_lite.png
--------------------------------------------------------------------------------
/images/hr_depth.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shawLyu/HR-Depth/1804ddd5d900949c2d4ac6eb28a25c86efb231d5/images/hr_depth.gif
--------------------------------------------------------------------------------
/kitti_utils.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import, division, print_function
2 |
3 | import os
4 | import numpy as np
5 | from collections import Counter
6 |
7 |
8 | def load_velodyne_points(filename):
9 | """Load 3D point cloud from KITTI file format
10 | (adapted from https://github.com/hunse/kitti)
11 | """
12 | points = np.fromfile(filename, dtype=np.float32).reshape(-1, 4)
13 | points[:, 3] = 1.0 # homogeneous
14 | return points
15 |
16 |
17 | def read_calib_file(path):
18 | """Read KITTI calibration file
19 | (from https://github.com/hunse/kitti)
20 | """
21 | float_chars = set("0123456789.e+- ")
22 | data = {}
23 | with open(path, 'r') as f:
24 | for line in f.readlines():
25 | key, value = line.split(':', 1)
26 | value = value.strip()
27 | data[key] = value
28 | if float_chars.issuperset(value):
29 | # try to cast to float array
30 | try:
31 | data[key] = np.array(list(map(float, value.split(' '))))
32 | except ValueError:
33 | # casting error: data[key] already eq. value, so pass
34 | pass
35 |
36 | return data
37 |
38 |
39 | def sub2ind(matrixSize, rowSub, colSub):
40 | """Convert row, col matrix subscripts to linear indices
41 | """
42 | m, n = matrixSize
43 | return rowSub * (n-1) + colSub - 1
44 |
45 |
46 | def generate_depth_map(calib_dir, velo_filename, cam=2, vel_depth=False):
47 | """Generate a depth map from velodyne data
48 | """
49 | # load calibration files
50 | cam2cam = read_calib_file(os.path.join(calib_dir, 'calib_cam_to_cam.txt'))
51 | velo2cam = read_calib_file(os.path.join(calib_dir, 'calib_velo_to_cam.txt'))
52 | velo2cam = np.hstack((velo2cam['R'].reshape(3, 3), velo2cam['T'][..., np.newaxis]))
53 | velo2cam = np.vstack((velo2cam, np.array([0, 0, 0, 1.0])))
54 |
55 | # get image shape
56 | im_shape = cam2cam["S_rect_02"][::-1].astype(np.int32)
57 |
58 | # compute projection matrix velodyne->image plane
59 | R_cam2rect = np.eye(4)
60 | R_cam2rect[:3, :3] = cam2cam['R_rect_00'].reshape(3, 3)
61 | P_rect = cam2cam['P_rect_0'+str(cam)].reshape(3, 4)
62 | P_velo2im = np.dot(np.dot(P_rect, R_cam2rect), velo2cam)
63 |
64 | # load velodyne points and remove all behind image plane (approximation)
65 | # each row of the velodyne data is forward, left, up, reflectance
66 | velo = load_velodyne_points(velo_filename)
67 | velo = velo[velo[:, 0] >= 0, :]
68 |
69 | # project the points to the camera
70 | velo_pts_im = np.dot(P_velo2im, velo.T).T
71 | velo_pts_im[:, :2] = velo_pts_im[:, :2] / velo_pts_im[:, 2][..., np.newaxis]
72 |
73 | if vel_depth:
74 | velo_pts_im[:, 2] = velo[:, 0]
75 |
76 | # check if in bounds
77 | # use minus 1 to get the exact same value as KITTI matlab code
78 | velo_pts_im[:, 0] = np.round(velo_pts_im[:, 0]) - 1
79 | velo_pts_im[:, 1] = np.round(velo_pts_im[:, 1]) - 1
80 | val_inds = (velo_pts_im[:, 0] >= 0) & (velo_pts_im[:, 1] >= 0)
81 | val_inds = val_inds & (velo_pts_im[:, 0] < im_shape[1]) & (velo_pts_im[:, 1] < im_shape[0])
82 | velo_pts_im = velo_pts_im[val_inds, :]
83 |
84 | # project to image
85 | depth = np.zeros((im_shape[:2]))
86 | depth[velo_pts_im[:, 1].astype(np.int), velo_pts_im[:, 0].astype(np.int)] = velo_pts_im[:, 2]
87 |
88 | # find the duplicate points and choose the closest depth
89 | inds = sub2ind(depth.shape, velo_pts_im[:, 1], velo_pts_im[:, 0])
90 | dupe_inds = [item for item, count in Counter(inds).items() if count > 1]
91 | for dd in dupe_inds:
92 | pts = np.where(inds == dd)[0]
93 | x_loc = int(velo_pts_im[pts[0], 0])
94 | y_loc = int(velo_pts_im[pts[0], 1])
95 | depth[y_loc, x_loc] = velo_pts_im[pts, 2].min()
96 | depth[depth < 0] = 0
97 |
98 | return depth
99 |
--------------------------------------------------------------------------------
/layers.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import, division, print_function
2 |
3 | import numpy as np
4 | import math
5 |
6 | import torch
7 | import torch.nn as nn
8 | import torch.nn.functional as F
9 |
10 | def depth_to_disp(depth, min_depth, max_depth):
11 | min_disp = 1 / max_depth
12 | max_disp = 1 / min_depth
13 | disp = 1 / depth - min_disp
14 | return disp / (max_disp - min_disp)
15 |
16 | def disp_to_depth(disp, min_depth, max_depth):
17 | """Convert network's sigmoid output into depth prediction
18 | The formula for this conversion is given in the 'additional considerations'
19 | section of the paper.
20 | """
21 | min_disp = 1 / max_depth
22 | max_disp = 1 / min_depth
23 | scaled_disp = min_disp + (max_disp - min_disp) * disp
24 | depth = 1 / scaled_disp
25 | return scaled_disp, depth
26 |
27 |
28 | def transformation_from_parameters(axisangle, translation, invert=False):
29 | """Convert the network's (axisangle, translation) output into a 4x4 matrix
30 | """
31 | R = rot_from_axisangle(axisangle)
32 | t = translation.clone()
33 |
34 | if invert:
35 | R = R.transpose(1, 2)
36 | t *= -1
37 |
38 | T = get_translation_matrix(t)
39 |
40 | if invert:
41 | M = torch.matmul(R, T)
42 | else:
43 | M = torch.matmul(T, R)
44 |
45 | return M
46 |
47 |
48 | def get_translation_matrix(translation_vector):
49 | """Convert a translation vector into a 4x4 transformation matrix
50 | """
51 | T = torch.zeros(translation_vector.shape[0], 4, 4).to(device=translation_vector.device)
52 |
53 | t = translation_vector.contiguous().view(-1, 3, 1)
54 |
55 | T[:, 0, 0] = 1
56 | T[:, 1, 1] = 1
57 | T[:, 2, 2] = 1
58 | T[:, 3, 3] = 1
59 | T[:, :3, 3, None] = t
60 |
61 | return T
62 |
63 |
64 | def rot_from_axisangle(vec):
65 | """Convert an axisangle rotation into a 4x4 transformation matrix
66 | (adapted from https://github.com/Wallacoloo/printipi)
67 | Input 'vec' has to be Bx1x3
68 | """
69 | angle = torch.norm(vec, 2, 2, True)
70 | axis = vec / (angle + 1e-7)
71 |
72 | ca = torch.cos(angle)
73 | sa = torch.sin(angle)
74 | C = 1 - ca
75 |
76 | x = axis[..., 0].unsqueeze(1)
77 | y = axis[..., 1].unsqueeze(1)
78 | z = axis[..., 2].unsqueeze(1)
79 |
80 | xs = x * sa
81 | ys = y * sa
82 | zs = z * sa
83 | xC = x * C
84 | yC = y * C
85 | zC = z * C
86 | xyC = x * yC
87 | yzC = y * zC
88 | zxC = z * xC
89 |
90 | rot = torch.zeros((vec.shape[0], 4, 4)).to(device=vec.device)
91 |
92 | rot[:, 0, 0] = torch.squeeze(x * xC + ca)
93 | rot[:, 0, 1] = torch.squeeze(xyC - zs)
94 | rot[:, 0, 2] = torch.squeeze(zxC + ys)
95 | rot[:, 1, 0] = torch.squeeze(xyC + zs)
96 | rot[:, 1, 1] = torch.squeeze(y * yC + ca)
97 | rot[:, 1, 2] = torch.squeeze(yzC - xs)
98 | rot[:, 2, 0] = torch.squeeze(zxC - ys)
99 | rot[:, 2, 1] = torch.squeeze(yzC + xs)
100 | rot[:, 2, 2] = torch.squeeze(z * zC + ca)
101 | rot[:, 3, 3] = 1
102 |
103 | return rot
104 |
105 | class ConvBlock(nn.Module):
106 | """Layer to perform a convolution followed by ELU
107 | """
108 | def __init__(self, in_channels, out_channels):
109 | super(ConvBlock, self).__init__()
110 |
111 | self.conv = Conv3x3(in_channels, out_channels)
112 | self.nonlin = nn.ELU(inplace=True)
113 |
114 | def forward(self, x):
115 | out = self.conv(x)
116 | out = self.nonlin(out)
117 | return out
118 |
119 |
120 | class Conv3x3(nn.Module):
121 | """Layer to pad and convolve input
122 | """
123 | def __init__(self, in_channels, out_channels, use_refl=True):
124 | super(Conv3x3, self).__init__()
125 |
126 | if use_refl:
127 | self.pad = nn.ReflectionPad2d(1)
128 | else:
129 | self.pad = nn.ZeroPad2d(1)
130 | self.conv = nn.Conv2d(int(in_channels), int(out_channels), 3)
131 |
132 | def forward(self, x):
133 | out = self.pad(x)
134 | out = self.conv(out)
135 | return out
136 |
137 | class Conv1x1(nn.Module):
138 | def __init__(self, in_channels, out_channels):
139 | super(Conv1x1, self).__init__()
140 |
141 | self.conv = nn.Conv2d(in_channels, out_channels, 1, stride=1, bias=False)
142 |
143 | def forward(self, x):
144 | return self.conv(x)
145 |
146 | class ASPP(nn.Module):
147 | def __init__(self, in_channels, out_channels):
148 | super(ASPP, self).__init__()
149 |
150 | self.atrous_block1 = nn.Conv2d(in_channels, out_channels, 1, 1)
151 | self.atrous_block6 = nn.Conv2d(in_channels, out_channels, 3, 1, padding=6, dilation=6)
152 | self.atrous_block12 = nn.Conv2d(in_channels, out_channels, 3, 1, padding=12, dilation=12)
153 | self.atrous_block18 = nn.Conv2d(in_channels, out_channels, 3, 1, padding=18, dilation=18)
154 |
155 | self.conv1x1 = nn.Conv2d(out_channels*4, out_channels, 1, 1)
156 |
157 | def forward(self, features):
158 | features_1 = self.atrous_block18(features[0])
159 | features_2 = self.atrous_block12(features[1])
160 | features_3 = self.atrous_block6(features[2])
161 | features_4 = self.atrous_block1(features[3])
162 |
163 | output_feature = [features_1, features_2, features_3, features_4]
164 | output_feature = torch.cat(output_feature, 1)
165 |
166 | return self.conv1x1(output_feature)
167 |
168 | class BackprojectDepth(nn.Module):
169 | """Layer to transform a depth image into a point cloud
170 | """
171 | def __init__(self, batch_size, height, width):
172 | super(BackprojectDepth, self).__init__()
173 |
174 | self.batch_size = batch_size
175 | self.height = height
176 | self.width = width
177 |
178 | # Prepare Coordinates shape [b,3,h*w]
179 | meshgrid = np.meshgrid(range(self.width), range(self.height), indexing='xy')
180 | self.id_coords = np.stack(meshgrid, axis=0).astype(np.float32)
181 | self.id_coords = nn.Parameter(torch.from_numpy(self.id_coords),
182 | requires_grad=False)
183 |
184 | self.ones = nn.Parameter(torch.ones(self.batch_size, 1, self.height * self.width),
185 | requires_grad=False)
186 |
187 | self.pix_coords = torch.unsqueeze(torch.stack(
188 | [self.id_coords[0].view(-1), self.id_coords[1].view(-1)], 0), 0)
189 | self.pix_coords = self.pix_coords.repeat(batch_size, 1, 1)
190 | self.pix_coords = nn.Parameter(torch.cat([self.pix_coords, self.ones], 1),
191 | requires_grad=False)
192 |
193 | def forward(self, depth, inv_K):
194 | cam_points = torch.matmul(inv_K[:, :3, :3], self.pix_coords)
195 | cam_points = depth.view(self.batch_size, 1, -1) * cam_points
196 | cam_points = torch.cat([cam_points, self.ones], 1)
197 |
198 | return cam_points
199 |
200 |
201 | class Project3D(nn.Module):
202 | """Layer which projects 3D points into a camera with intrinsics K and at position T
203 | """
204 | def __init__(self, batch_size, height, width, eps=1e-7):
205 | super(Project3D, self).__init__()
206 |
207 | self.batch_size = batch_size
208 | self.height = height
209 | self.width = width
210 | self.eps = eps
211 |
212 | def forward(self, points, K, T):
213 | P = torch.matmul(K, T)[:, :3, :]
214 |
215 | cam_points = torch.matmul(P, points)
216 |
217 | pix_coords = cam_points[:, :2, :] / (cam_points[:, 2, :].unsqueeze(1) + self.eps)
218 | pix_coords = pix_coords.view(self.batch_size, 2, self.height, self.width)
219 | pix_coords = pix_coords.permute(0, 2, 3, 1)
220 | # normalize
221 | pix_coords[..., 0] /= self.width - 1
222 | pix_coords[..., 1] /= self.height - 1
223 | pix_coords = (pix_coords - 0.5) * 2
224 | return pix_coords
225 |
226 |
227 | def upsample(x):
228 | """Upsample input tensor by a factor of 2
229 | """
230 | return F.interpolate(x, scale_factor=2, mode="nearest")
231 |
232 | def get_smooth_loss(disp, img):
233 | """Computes the smoothness loss for a disparity image
234 | The color image is used for edge-aware smoothness
235 | """
236 | grad_disp_x = torch.abs(disp[:, :, :, :-1] - disp[:, :, :, 1:])
237 | grad_disp_y = torch.abs(disp[:, :, :-1, :] - disp[:, :, 1:, :])
238 |
239 | grad_img_x = torch.mean(torch.abs(img[:, :, :, :-1] - img[:, :, :, 1:]), 1, keepdim=True)
240 | grad_img_y = torch.mean(torch.abs(img[:, :, :-1, :] - img[:, :, 1:, :]), 1, keepdim=True)
241 |
242 | grad_disp_x *= torch.exp(-grad_img_x)
243 | grad_disp_y *= torch.exp(-grad_img_y)
244 |
245 | return grad_disp_x.mean() + grad_disp_y.mean()
246 |
247 |
248 | class SSIM(nn.Module):
249 | """Layer to compute the SSIM loss between a pair of images
250 | """
251 | def __init__(self):
252 | super(SSIM, self).__init__()
253 | self.mu_x_pool = nn.AvgPool2d(3, 1)
254 | self.mu_y_pool = nn.AvgPool2d(3, 1)
255 | self.sig_x_pool = nn.AvgPool2d(3, 1)
256 | self.sig_y_pool = nn.AvgPool2d(3, 1)
257 | self.sig_xy_pool = nn.AvgPool2d(3, 1)
258 |
259 | self.refl = nn.ReflectionPad2d(1)
260 |
261 | self.C1 = 0.01 ** 2
262 | self.C2 = 0.03 ** 2
263 |
264 | def forward(self, x, y):
265 | x = self.refl(x)
266 | y = self.refl(y)
267 |
268 | mu_x = self.mu_x_pool(x)
269 | mu_y = self.mu_y_pool(y)
270 |
271 | sigma_x = self.sig_x_pool(x ** 2) - mu_x ** 2
272 | sigma_y = self.sig_y_pool(y ** 2) - mu_y ** 2
273 | sigma_xy = self.sig_xy_pool(x * y) - mu_x * mu_y
274 |
275 | SSIM_n = (2 * mu_x * mu_y + self.C1) * (2 * sigma_xy + self.C2)
276 | SSIM_d = (mu_x ** 2 + mu_y ** 2 + self.C1) * (sigma_x + sigma_y + self.C2)
277 |
278 | return torch.clamp((1 - SSIM_n / SSIM_d) / 2, 0, 1)
279 |
280 |
281 | def compute_depth_errors(gt, pred):
282 | """Computation of error metrics between predicted and ground truth depths
283 | """
284 | thresh = torch.max((gt / pred), (pred / gt))
285 | a1 = (thresh < 1.25 ).float().mean()
286 | a2 = (thresh < 1.25 ** 2).float().mean()
287 | a3 = (thresh < 1.25 ** 3).float().mean()
288 |
289 | rmse = (gt - pred) ** 2
290 | rmse = torch.sqrt(rmse.mean())
291 |
292 | rmse_log = (torch.log(gt) - torch.log(pred)) ** 2
293 | rmse_log = torch.sqrt(rmse_log.mean())
294 |
295 | abs_rel = torch.mean(torch.abs(gt - pred) / gt)
296 |
297 | sq_rel = torch.mean((gt - pred) ** 2 / gt)
298 |
299 | return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3
300 |
301 |
302 | class fSEModule(nn.Module):
303 | def __init__(self, high_feature_channel, low_feature_channels, output_channel=None):
304 | super(fSEModule, self).__init__()
305 | in_channel = high_feature_channel + low_feature_channels
306 | out_channel = high_feature_channel
307 | if output_channel is not None:
308 | out_channel = output_channel
309 | reduction = 16
310 | channel = in_channel
311 | self.avg_pool = nn.AdaptiveAvgPool2d(1)
312 |
313 | self.fc = nn.Sequential(
314 | nn.Linear(channel, channel // reduction, bias=False),
315 | nn.ReLU(inplace=True),
316 | nn.Linear(channel // reduction, channel, bias=False)
317 | )
318 |
319 | self.sigmoid = nn.Sigmoid()
320 |
321 | self.conv_se = nn.Conv2d(in_channels=in_channel, out_channels=out_channel, kernel_size=1, stride=1)
322 | self.relu = nn.ReLU(inplace=True)
323 |
324 | def forward(self, high_features, low_features):
325 | features = [upsample(high_features)]
326 | features += low_features
327 | features = torch.cat(features, 1)
328 |
329 | b, c, _, _ = features.size()
330 | y = self.avg_pool(features).view(b, c)
331 | y = self.fc(y).view(b, c, 1, 1)
332 |
333 | y = self.sigmoid(y)
334 | features = features * y.expand_as(features)
335 |
336 | return self.relu(self.conv_se(features))
--------------------------------------------------------------------------------
/networks/HR_Depth_Decoder.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import, division, print_function
2 |
3 | from layers import *
4 |
5 | class HRDepthDecoder(nn.Module):
6 | def __init__(self, num_ch_enc, scales=range(4), num_output_channels=1, mobile_encoder=False):
7 | super(HRDepthDecoder, self).__init__()
8 |
9 | self.num_output_channels = num_output_channels
10 | self.num_ch_enc = num_ch_enc
11 | self.scales = scales
12 | self.mobile_encoder = mobile_encoder
13 | if mobile_encoder:
14 | self.num_ch_dec = np.array([4, 12, 20, 40, 80])
15 | else:
16 | self.num_ch_dec = np.array([16, 32, 64, 128, 256])
17 |
18 | self.all_position = ["01", "11", "21", "31", "02", "12", "22", "03", "13", "04"]
19 | self.attention_position = ["31", "22", "13", "04"]
20 | self.non_attention_position = ["01", "11", "21", "02", "12", "03"]
21 |
22 | self.convs = nn.ModuleDict()
23 | for j in range(5):
24 | for i in range(5 - j):
25 | # upconv 0
26 | num_ch_in = num_ch_enc[i]
27 | if i == 0 and j != 0:
28 | num_ch_in /= 2
29 | num_ch_out = num_ch_in / 2
30 | self.convs["X_{}{}_Conv_0".format(i, j)] = ConvBlock(num_ch_in, num_ch_out)
31 |
32 | # X_04 upconv 1, only add X_04 convolution
33 | if i == 0 and j == 4:
34 | num_ch_in = num_ch_out
35 | num_ch_out = self.num_ch_dec[i]
36 | self.convs["X_{}{}_Conv_1".format(i, j)] = ConvBlock(num_ch_in, num_ch_out)
37 |
38 | # declare fSEModule and original module
39 | for index in self.attention_position:
40 | row = int(index[0])
41 | col = int(index[1])
42 | if mobile_encoder:
43 | self.convs["X_" + index + "_attention"] = fSEModule(num_ch_enc[row + 1] // 2, self.num_ch_enc[row]
44 | + self.num_ch_dec[row]*2*(col-1),
45 | output_channel=self.num_ch_dec[row] * 2)
46 | else:
47 | self.convs["X_" + index + "_attention"] = fSEModule(num_ch_enc[row + 1] // 2, self.num_ch_enc[row]
48 | + self.num_ch_dec[row + 1] * (col - 1))
49 | for index in self.non_attention_position:
50 | row = int(index[0])
51 | col = int(index[1])
52 | if mobile_encoder:
53 | self.convs["X_{}{}_Conv_1".format(row + 1, col - 1)] = ConvBlock(
54 | self.num_ch_enc[row]+ self.num_ch_enc[row + 1] // 2 +
55 | self.num_ch_dec[row]*2*(col-1), self.num_ch_dec[row] * 2)
56 | else:
57 | if col == 1:
58 | self.convs["X_{}{}_Conv_1".format(row + 1, col - 1)] = ConvBlock(num_ch_enc[row + 1] // 2 +
59 | self.num_ch_enc[row], self.num_ch_dec[row + 1])
60 | else:
61 | self.convs["X_"+index+"_downsample"] = Conv1x1(num_ch_enc[row+1] // 2 + self.num_ch_enc[row]
62 | + self.num_ch_dec[row+1]*(col-1), self.num_ch_dec[row + 1] * 2)
63 | self.convs["X_{}{}_Conv_1".format(row + 1, col - 1)] = ConvBlock(self.num_ch_dec[row + 1] * 2, self.num_ch_dec[row + 1])
64 |
65 | if self.mobile_encoder:
66 | self.convs["dispConvScale0"] = Conv3x3(4, self.num_output_channels)
67 | self.convs["dispConvScale1"] = Conv3x3(8, self.num_output_channels)
68 | self.convs["dispConvScale2"] = Conv3x3(24, self.num_output_channels)
69 | self.convs["dispConvScale3"] = Conv3x3(40, self.num_output_channels)
70 | else:
71 | for i in range(4):
72 | self.convs["dispConvScale{}".format(i)] = Conv3x3(self.num_ch_dec[i], self.num_output_channels)
73 |
74 | self.decoder = nn.ModuleList(list(self.convs.values()))
75 | self.sigmoid = nn.Sigmoid()
76 |
77 | def nestConv(self, conv, high_feature, low_features):
78 | conv_0 = conv[0]
79 | conv_1 = conv[1]
80 | assert isinstance(low_features, list)
81 | high_features = [upsample(conv_0(high_feature))]
82 | for feature in low_features:
83 | high_features.append(feature)
84 | high_features = torch.cat(high_features, 1)
85 | if len(conv) == 3:
86 | high_features = conv[2](high_features)
87 | return conv_1(high_features)
88 |
89 | def forward(self, input_features):
90 | outputs = {}
91 | features = {}
92 | for i in range(5):
93 | features["X_{}0".format(i)] = input_features[i]
94 | # Network architecture
95 | for index in self.all_position:
96 | row = int(index[0])
97 | col = int(index[1])
98 |
99 | low_features = []
100 | for i in range(col):
101 | low_features.append(features["X_{}{}".format(row, i)])
102 |
103 | # add fSE block to decoder
104 | if index in self.attention_position:
105 | features["X_"+index] = self.convs["X_" + index + "_attention"](
106 | self.convs["X_{}{}_Conv_0".format(row+1, col-1)](features["X_{}{}".format(row+1, col-1)]), low_features)
107 | elif index in self.non_attention_position:
108 | conv = [self.convs["X_{}{}_Conv_0".format(row + 1, col - 1)],
109 | self.convs["X_{}{}_Conv_1".format(row + 1, col - 1)]]
110 | if col != 1 and not self.mobile_encoder:
111 | conv.append(self.convs["X_" + index + "_downsample"])
112 | features["X_" + index] = self.nestConv(conv, features["X_{}{}".format(row+1, col-1)], low_features)
113 |
114 | x = features["X_04"]
115 | x = self.convs["X_04_Conv_0"](x)
116 | x = self.convs["X_04_Conv_1"](upsample(x))
117 | outputs[("disparity", "Scale0")] = self.sigmoid(self.convs["dispConvScale0"](x))
118 | outputs[("disparity", "Scale1")] = self.sigmoid(self.convs["dispConvScale1"](features["X_04"]))
119 | outputs[("disparity", "Scale2")] = self.sigmoid(self.convs["dispConvScale2"](features["X_13"]))
120 | outputs[("disparity", "Scale3")] = self.sigmoid(self.convs["dispConvScale3"](features["X_22"]))
121 | return outputs
--------------------------------------------------------------------------------
/networks/__init__.py:
--------------------------------------------------------------------------------
1 | from .resnet_encoder import ResnetEncoder
2 | from .depth_decoder import DepthDecoder
3 | from .HR_Depth_Decoder import HRDepthDecoder
4 | from .mobilenetV3_encoder import MobileEncoder
--------------------------------------------------------------------------------
/networks/depth_decoder.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import, division, print_function
2 |
3 | from collections import OrderedDict
4 | from layers import *
5 |
6 |
7 | class DepthDecoder(nn.Module):
8 |     def __init__(self, num_ch_enc, scales=range(4), num_output_channels=1, use_skips=True, attention_method="None", mobile_encoder=False):
9 |         super(DepthDecoder, self).__init__()
10 |         self.num_output_channels = num_output_channels
11 |         self.attention_method = attention_method  # "None" keeps plain Monodepth2-style skips (no fSE attention)
12 |         self.use_skips = use_skips
13 |         self.upsample_mode = 'nearest'
14 |         self.scales = scales
15 |         self.num_ch_enc = num_ch_enc
15 | if mobile_encoder:
16 | self.num_ch_dec = np.array([4, 12, 20, 40, 80])
17 | else:
18 | self.num_ch_dec = np.array([16, 32, 64, 128, 256])
19 |
20 | # decoder
21 | self.convs = OrderedDict()
22 | for i in range(4, -1, -1):
23 | # upconv_0
24 | num_ch_in = self.num_ch_enc[-1] if i == 4 else self.num_ch_dec[i + 1]
25 | num_ch_out = self.num_ch_dec[i]
26 | self.convs[("upconv", i, 0)] = ConvBlock(num_ch_in, num_ch_out)
27 |
28 | if self.attention_method == "None" or i == 0:
29 | # upconv_1
30 | num_ch_in = self.num_ch_dec[i]
31 | if self.use_skips and i > 0:
32 | num_ch_in += self.num_ch_enc[i - 1]
33 | num_ch_out = self.num_ch_dec[i]
34 | self.convs[("upconv", i, 1)] = ConvBlock(num_ch_in, num_ch_out)
35 | else:
36 | self.convs[("attentionConv", i)] = fSEModule(self.num_ch_dec[i], self.num_ch_enc[i - 1])
37 |
38 | for s in self.scales:
39 | self.convs[("dispconv", s)] = Conv3x3(self.num_ch_dec[s], self.num_output_channels)
40 |
41 | self.decoder = nn.ModuleList(list(self.convs.values()))
42 | self.sigmoid = nn.Sigmoid()
43 |
44 | def forward(self, input_features):
45 | self.outputs = {}
46 | middle_features = []
47 | # decoder
48 | x = input_features[-1]
49 | for i in range(4, -1, -1):
50 | x = self.convs[("upconv", i, 0)](x)
51 |
52 | if self.attention_method != "None" and i != 0:
53 | x = self.convs[("attentionConv", i)](x, [input_features[i - 1]])
54 | else:
55 | middle_features.append(x)
56 | x = [upsample(x)]
57 | if self.use_skips and i > 0:
58 | x += [input_features[i - 1]]
59 | x = torch.cat(x, 1)
60 | x = self.convs[("upconv", i, 1)](x)
61 | if i in self.scales:
62 | self.outputs[("disparity", "Scale{}".format(i))] = self.sigmoid(self.convs[("dispconv", i)](x))
63 |
64 | return self.outputs
65 |
66 |
--------------------------------------------------------------------------------
/networks/mobilenetV3_encoder.py:
--------------------------------------------------------------------------------
1 | """
2 | Creates a MobileNetV3 Model as defined in:
3 | Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, Hartwig Adam. (2019).
4 | Searching for MobileNetV3
5 | arXiv preprint arXiv:1905.02244.
6 | """
7 |
8 | import torch
9 | import torch.nn as nn
10 | import numpy as np
11 | import math
12 |
13 |
14 | def _make_divisible(v, divisor, min_value=None):
15 | """
16 | This function is taken from the original tf repo.
17 | It ensures that all layers have a channel number that is divisible by 8
18 | It can be seen here:
19 | https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
20 | :param v:
21 | :param divisor:
22 | :param min_value:
23 | :return:
24 | """
25 | if min_value is None:
26 | min_value = divisor
27 | new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
28 | # Make sure that round down does not go down by more than 10%.
29 | if new_v < 0.9 * v:
30 | new_v += divisor
31 | return new_v
32 |
33 |
34 | class h_sigmoid(nn.Module):
35 | def __init__(self, inplace=True):
36 | super(h_sigmoid, self).__init__()
37 | self.relu = nn.ReLU6(inplace=inplace)
38 |
39 | def forward(self, x):
40 | return self.relu(x + 3) / 6
41 |
42 |
43 | class h_swish(nn.Module):
44 | def __init__(self, inplace=True):
45 | super(h_swish, self).__init__()
46 | self.sigmoid = h_sigmoid(inplace=inplace)
47 |
48 | def forward(self, x):
49 | return x * self.sigmoid(x)
50 |
51 |
52 | class SELayer(nn.Module):
53 | def __init__(self, channel, reduction=4):
54 | super(SELayer, self).__init__()
55 | self.avg_pool = nn.AdaptiveAvgPool2d(1)
56 | self.fc = nn.Sequential(
57 | nn.Linear(channel, _make_divisible(channel // reduction, 8)),
58 | nn.ReLU(inplace=True),
59 | nn.Linear(_make_divisible(channel // reduction, 8), channel),
60 | h_sigmoid()
61 | )
62 |
63 | def forward(self, x):
64 | b, c, _, _ = x.size()
65 | y = self.avg_pool(x).view(b, c)
66 | y = self.fc(y).view(b, c, 1, 1)
67 | return x * y
68 |
69 |
70 | def conv_3x3_bn(inp, oup, stride):
71 | return nn.Sequential(
72 | nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
73 | nn.BatchNorm2d(oup),
74 | h_swish()
75 | )
76 |
77 |
78 | def conv_1x1_bn(inp, oup):
79 | return nn.Sequential(
80 | nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
81 | nn.BatchNorm2d(oup),
82 | h_swish()
83 | )
84 |
85 |
86 | class InvertedResidual(nn.Module):
87 | def __init__(self, inp, hidden_dim, oup, kernel_size, stride, use_se, use_hs):
88 | super(InvertedResidual, self).__init__()
89 | assert stride in [1, 2]
90 |
91 | self.identity = stride == 1 and inp == oup
92 |
93 | if inp == hidden_dim:
94 | self.conv = nn.Sequential(
95 | # dw
96 | nn.Conv2d(hidden_dim, hidden_dim, kernel_size, stride, (kernel_size - 1) // 2, groups=hidden_dim,
97 | bias=False),
98 | nn.BatchNorm2d(hidden_dim),
99 | h_swish() if use_hs else nn.ReLU(inplace=True),
100 | # Squeeze-and-Excite
101 | SELayer(hidden_dim) if use_se else nn.Identity(),
102 | # pw-linear
103 | nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
104 | nn.BatchNorm2d(oup),
105 | )
106 | else:
107 | self.conv = nn.Sequential(
108 | # pw
109 | nn.Conv2d(inp, hidden_dim, 1, 1, 0, bias=False),
110 | nn.BatchNorm2d(hidden_dim),
111 | h_swish() if use_hs else nn.ReLU(inplace=True),
112 | # dw
113 | nn.Conv2d(hidden_dim, hidden_dim, kernel_size, stride, (kernel_size - 1) // 2, groups=hidden_dim,
114 | bias=False),
115 | nn.BatchNorm2d(hidden_dim),
116 | # Squeeze-and-Excite
117 | SELayer(hidden_dim) if use_se else nn.Identity(),
118 | h_swish() if use_hs else nn.ReLU(inplace=True),
119 | # pw-linear
120 | nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
121 | nn.BatchNorm2d(oup),
122 | )
123 |
124 | def forward(self, x):
125 | if self.identity:
126 | return x + self.conv(x)
127 | else:
128 | return self.conv(x)
129 |
130 |
131 | class MobileNetV3(nn.Module):
132 | def __init__(self, width_mult=1.):
133 | super(MobileNetV3, self).__init__()
134 | # setting of inverted residual blocks
135 | cfgs = [
136 | # k, t, c, SE, HS, s
137 | [3, 1, 16, 0, 0, 1],
138 | [3, 4, 24, 0, 0, 2], # feature 2
139 | [3, 3, 24, 0, 0, 1],
140 | [5, 3, 40, 1, 0, 2], # feature 3
141 | [5, 3, 40, 1, 0, 1],
142 | [5, 3, 40, 1, 0, 1],
143 | [3, 6, 80, 0, 1, 2], # feature 4
144 | [3, 2.5, 80, 0, 1, 1],
145 | [3, 2.3, 80, 0, 1, 1],
146 | [3, 2.3, 80, 0, 1, 1],
147 | [3, 6, 112, 1, 1, 1],
148 | [3, 6, 112, 1, 1, 1],
149 | [5, 6, 160, 1, 1, 2],
150 | [5, 6, 160, 1, 1, 1],
151 | [5, 6, 160, 1, 1, 1] # feature 5
152 | ]
153 |
154 | # building first layer
155 | input_channel = _make_divisible(16 * width_mult, 8)
156 | layers = [conv_3x3_bn(3, input_channel, 2)]
157 | # building inverted residual blocks
158 | block = InvertedResidual
159 | for k, t, c, use_se, use_hs, s in cfgs:
160 | output_channel = _make_divisible(c * width_mult, 8)
161 | exp_size = _make_divisible(input_channel * t, 8)
162 | layers.append(block(input_channel, exp_size, output_channel, k, s, use_se, use_hs))
163 | input_channel = output_channel
164 | self.features = nn.Sequential(*layers)
165 |
166 | class MobileEncoder(nn.Module):
167 | def __init__(self, pretrained):
168 | super(MobileEncoder, self).__init__()
169 |
170 | self.num_ch_enc = np.array([16, 24, 40, 80, 160])
171 | self.encoder = MobileNetV3()
172 | if pretrained:
173 | state_dict = torch.load("pretrain_model/mobilenetV3/mobilenetv3-large-1cd25616.pth")
174 | filter_dict_enc = {k: v for k, v in state_dict.items() if k in self.encoder.state_dict()}
175 | self.encoder.load_state_dict(filter_dict_enc)
176 |
177 | def forward(self, input_image):
178 | return_features = []
179 | x = (input_image - 0.45) / 0.225
180 | return_features.append(self.encoder.features[0](x))
181 | return_features.append(self.encoder.features[1:3](return_features[-1]))
182 | return_features.append(self.encoder.features[3:5](return_features[-1]))
183 | return_features.append(self.encoder.features[5:8](return_features[-1]))
184 | return_features.append(self.encoder.features[8:](return_features[-1]))
185 | return return_features
--------------------------------------------------------------------------------
/networks/resnet_encoder.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import, division, print_function
2 |
3 | import numpy as np
4 |
5 | import torch
6 | import torch.nn as nn
7 | import torchvision.models as models
8 | import torch.utils.model_zoo as model_zoo
9 |
10 |
11 | class ResNetMultiImageInput(models.ResNet):
12 | """Constructs a resnet model with varying number of input images.
13 | Adapted from https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py
14 | """
15 | def __init__(self, block, layers, num_classes=1000, num_input_images=1):
16 | super(ResNetMultiImageInput, self).__init__(block, layers)
17 | self.inplanes = 64
18 | self.conv1 = nn.Conv2d(
19 | num_input_images * 3, 64, kernel_size=7, stride=2, padding=3, bias=False)
20 | self.bn1 = nn.BatchNorm2d(64)
21 | self.relu = nn.ReLU(inplace=True)
22 | self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
23 | self.layer1 = self._make_layer(block, 64, layers[0])
24 | self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
25 | self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
26 | self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
27 |
28 | for m in self.modules():
29 | if isinstance(m, nn.Conv2d):
30 | nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
31 | elif isinstance(m, nn.BatchNorm2d):
32 | nn.init.constant_(m.weight, 1)
33 | nn.init.constant_(m.bias, 0)
34 |
35 |
36 | def resnet_multiimage_input(num_layers, pretrained=False, num_input_images=1):
37 | """Constructs a ResNet model.
38 | Args:
39 | num_layers (int): Number of resnet layers. Must be 18 or 50
40 | pretrained (bool): If True, returns a model pre-trained on ImageNet
41 | num_input_images (int): Number of frames stacked as input
42 | """
43 | assert num_layers in [18, 50], "Can only run with 18 or 50 layer resnet"
44 | blocks = {18: [2, 2, 2, 2], 50: [3, 4, 6, 3]}[num_layers]
45 | block_type = {18: models.resnet.BasicBlock, 50: models.resnet.Bottleneck}[num_layers]
46 | model = ResNetMultiImageInput(block_type, blocks, num_input_images=num_input_images)
47 |
48 | if pretrained:
49 | loaded = model_zoo.load_url(models.resnet.model_urls['resnet{}'.format(num_layers)])
50 | loaded['conv1.weight'] = torch.cat(
51 | [loaded['conv1.weight']] * num_input_images, 1) / num_input_images
52 | model.load_state_dict(loaded)
53 | return model
54 |
55 |
56 | class ResnetEncoder(nn.Module):
57 | """Pytorch module for a resnet encoder
58 | """
59 | def __init__(self, num_layers, pretrained, num_input_images=1):
60 | super(ResnetEncoder, self).__init__()
61 |
62 | self.num_ch_enc = np.array([64, 64, 128, 256, 512])
63 |
64 | resnets = {18: models.resnet18,
65 | 34: models.resnet34,
66 | 50: models.resnet50,
67 | 101: models.resnet101,
68 | 152: models.resnet152}
69 |
70 | if num_layers not in resnets:
71 | raise ValueError("{} is not a valid number of resnet layers".format(num_layers))
72 |
73 | if num_input_images > 1:
74 | self.encoder = resnet_multiimage_input(num_layers, pretrained, num_input_images)
75 | else:
76 | self.encoder = resnets[num_layers](pretrained)
77 |
78 | if num_layers > 34:
79 | self.num_ch_enc[1:] *= 4
80 |
81 | def forward(self, input_image):
82 | features = []
83 | x = (input_image - 0.45) / 0.225
84 | x = self.encoder.conv1(x)
85 | x = self.encoder.bn1(x)
86 | features.append(self.encoder.relu(x))
87 | features.append(self.encoder.layer1(self.encoder.maxpool(features[-1])))
88 | features.append(self.encoder.layer2(features[-1]))
89 | features.append(self.encoder.layer3(features[-1]))
90 | features.append(self.encoder.layer4(features[-1]))
91 |
92 | return features
93 |
--------------------------------------------------------------------------------
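
`ResnetEncoder` exposes the same five-scale interface as the MobileNetV3 encoder, with `num_ch_enc` telling the decoder how wide each scale is. A minimal sketch, assuming an 18-layer backbone with random weights and a 192x640 input:

```python
import torch
from networks.resnet_encoder import ResnetEncoder

encoder = ResnetEncoder(num_layers=18, pretrained=False)
dummy = torch.rand(1, 3, 192, 640)
features = encoder(dummy)

# With num_layers <= 34 the per-scale channel widths are [64, 64, 128, 256, 512];
# for deeper (Bottleneck) variants num_ch_enc[1:] is multiplied by 4 in __init__.
print(encoder.num_ch_enc, [tuple(f.shape) for f in features])

# resnet_multiimage_input is reached via num_input_images > 1, e.g. a stacked
# pair of frames (6 input channels), as a pose encoder would typically use.
pose_encoder = ResnetEncoder(num_layers=18, pretrained=False, num_input_images=2)
```
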
/options.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import, division, print_function
2 |
3 | import os
4 | import argparse
5 |
6 | file_dir = os.path.dirname(__file__) # the directory that options.py resides in
7 |
8 |
9 | class HRDepthOptions:
10 | def __init__(self):
11 | self.parser = argparse.ArgumentParser(description="HR-Depth options")
12 |
13 | # PATHS
14 | self.parser.add_argument("--data_path",
15 | type=str,
16 | help="path to the training data",
17 | default=os.path.join(file_dir, "kitti_data"))
18 | self.parser.add_argument("--load_weights_folder",
19 | type=str,
20 |                                  help="path to a folder of model weights to load")
21 |
22 | # SYSTEM options
23 | self.parser.add_argument("--num_workers",
24 | type=int,
25 | help="number of dataloader workers",
26 | default=12)
27 |
28 | # ABLATION options
29 | self.parser.add_argument("--HR_Depth",
30 |                                  help="if set, uses the HR-Depth network",
31 | action="store_true")
32 | self.parser.add_argument("--Lite_HR_Depth",
33 |                                  help="if set, uses the Lite-HR-Depth network",
34 | action="store_true")
35 |
36 | # EVALUATION options
37 | self.parser.add_argument("--eval_split",
38 | type=str,
39 | default="eigen",
40 | choices=[
41 | "eigen", "eigen_benchmark", "benchmark", "odom_9", "odom_10"],
42 | help="which split to run eval on")
43 |
44 | def parse(self):
45 | options = self.parser.parse_args()
46 | return options
47 |
--------------------------------------------------------------------------------
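
The options class is a thin argparse wrapper, so scripts obtain their configuration by instantiating it and calling `parse()`. A minimal sketch is shown below; the example command in the comment assumes `evaluate_depth.py` consumes these options, and the weights path is a placeholder, not a recommendation.

```python
from options import HRDepthOptions

# parse() reads sys.argv, e.g.
#   python evaluate_depth.py --data_path kitti_data --load_weights_folder models/HR_Depth --HR_Depth
opts = HRDepthOptions().parse()

print(opts.data_path)                    # defaults to <repo>/kitti_data
print(opts.eval_split)                   # "eigen" unless overridden
print(opts.HR_Depth, opts.Lite_HR_Depth)
```
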
/splits/eigen/test_files.txt:
--------------------------------------------------------------------------------
1 | 2011_09_26/2011_09_26_drive_0002_sync 0000000069 l
2 | 2011_09_26/2011_09_26_drive_0002_sync 0000000054 l
3 | 2011_09_26/2011_09_26_drive_0002_sync 0000000042 l
4 | 2011_09_26/2011_09_26_drive_0002_sync 0000000057 l
5 | 2011_09_26/2011_09_26_drive_0002_sync 0000000030 l
6 | 2011_09_26/2011_09_26_drive_0002_sync 0000000027 l
7 | 2011_09_26/2011_09_26_drive_0002_sync 0000000012 l
8 | 2011_09_26/2011_09_26_drive_0002_sync 0000000075 l
9 | 2011_09_26/2011_09_26_drive_0002_sync 0000000036 l
10 | 2011_09_26/2011_09_26_drive_0002_sync 0000000033 l
11 | 2011_09_26/2011_09_26_drive_0002_sync 0000000015 l
12 | 2011_09_26/2011_09_26_drive_0002_sync 0000000072 l
13 | 2011_09_26/2011_09_26_drive_0002_sync 0000000003 l
14 | 2011_09_26/2011_09_26_drive_0002_sync 0000000039 l
15 | 2011_09_26/2011_09_26_drive_0002_sync 0000000009 l
16 | 2011_09_26/2011_09_26_drive_0002_sync 0000000051 l
17 | 2011_09_26/2011_09_26_drive_0002_sync 0000000060 l
18 | 2011_09_26/2011_09_26_drive_0002_sync 0000000021 l
19 | 2011_09_26/2011_09_26_drive_0002_sync 0000000000 l
20 | 2011_09_26/2011_09_26_drive_0002_sync 0000000024 l
21 | 2011_09_26/2011_09_26_drive_0002_sync 0000000045 l
22 | 2011_09_26/2011_09_26_drive_0002_sync 0000000018 l
23 | 2011_09_26/2011_09_26_drive_0002_sync 0000000048 l
24 | 2011_09_26/2011_09_26_drive_0002_sync 0000000006 l
25 | 2011_09_26/2011_09_26_drive_0002_sync 0000000063 l
26 | 2011_09_26/2011_09_26_drive_0009_sync 0000000000 l
27 | 2011_09_26/2011_09_26_drive_0009_sync 0000000016 l
28 | 2011_09_26/2011_09_26_drive_0009_sync 0000000032 l
29 | 2011_09_26/2011_09_26_drive_0009_sync 0000000048 l
30 | 2011_09_26/2011_09_26_drive_0009_sync 0000000064 l
31 | 2011_09_26/2011_09_26_drive_0009_sync 0000000080 l
32 | 2011_09_26/2011_09_26_drive_0009_sync 0000000096 l
33 | 2011_09_26/2011_09_26_drive_0009_sync 0000000112 l
34 | 2011_09_26/2011_09_26_drive_0009_sync 0000000128 l
35 | 2011_09_26/2011_09_26_drive_0009_sync 0000000144 l
36 | 2011_09_26/2011_09_26_drive_0009_sync 0000000160 l
37 | 2011_09_26/2011_09_26_drive_0009_sync 0000000176 l
38 | 2011_09_26/2011_09_26_drive_0009_sync 0000000196 l
39 | 2011_09_26/2011_09_26_drive_0009_sync 0000000212 l
40 | 2011_09_26/2011_09_26_drive_0009_sync 0000000228 l
41 | 2011_09_26/2011_09_26_drive_0009_sync 0000000244 l
42 | 2011_09_26/2011_09_26_drive_0009_sync 0000000260 l
43 | 2011_09_26/2011_09_26_drive_0009_sync 0000000276 l
44 | 2011_09_26/2011_09_26_drive_0009_sync 0000000292 l
45 | 2011_09_26/2011_09_26_drive_0009_sync 0000000308 l
46 | 2011_09_26/2011_09_26_drive_0009_sync 0000000324 l
47 | 2011_09_26/2011_09_26_drive_0009_sync 0000000340 l
48 | 2011_09_26/2011_09_26_drive_0009_sync 0000000356 l
49 | 2011_09_26/2011_09_26_drive_0009_sync 0000000372 l
50 | 2011_09_26/2011_09_26_drive_0009_sync 0000000388 l
51 | 2011_09_26/2011_09_26_drive_0013_sync 0000000090 l
52 | 2011_09_26/2011_09_26_drive_0013_sync 0000000050 l
53 | 2011_09_26/2011_09_26_drive_0013_sync 0000000110 l
54 | 2011_09_26/2011_09_26_drive_0013_sync 0000000115 l
55 | 2011_09_26/2011_09_26_drive_0013_sync 0000000060 l
56 | 2011_09_26/2011_09_26_drive_0013_sync 0000000105 l
57 | 2011_09_26/2011_09_26_drive_0013_sync 0000000125 l
58 | 2011_09_26/2011_09_26_drive_0013_sync 0000000020 l
59 | 2011_09_26/2011_09_26_drive_0013_sync 0000000140 l
60 | 2011_09_26/2011_09_26_drive_0013_sync 0000000085 l
61 | 2011_09_26/2011_09_26_drive_0013_sync 0000000070 l
62 | 2011_09_26/2011_09_26_drive_0013_sync 0000000080 l
63 | 2011_09_26/2011_09_26_drive_0013_sync 0000000065 l
64 | 2011_09_26/2011_09_26_drive_0013_sync 0000000095 l
65 | 2011_09_26/2011_09_26_drive_0013_sync 0000000130 l
66 | 2011_09_26/2011_09_26_drive_0013_sync 0000000100 l
67 | 2011_09_26/2011_09_26_drive_0013_sync 0000000010 l
68 | 2011_09_26/2011_09_26_drive_0013_sync 0000000030 l
69 | 2011_09_26/2011_09_26_drive_0013_sync 0000000000 l
70 | 2011_09_26/2011_09_26_drive_0013_sync 0000000135 l
71 | 2011_09_26/2011_09_26_drive_0013_sync 0000000040 l
72 | 2011_09_26/2011_09_26_drive_0013_sync 0000000005 l
73 | 2011_09_26/2011_09_26_drive_0013_sync 0000000120 l
74 | 2011_09_26/2011_09_26_drive_0013_sync 0000000045 l
75 | 2011_09_26/2011_09_26_drive_0013_sync 0000000035 l
76 | 2011_09_26/2011_09_26_drive_0020_sync 0000000003 l
77 | 2011_09_26/2011_09_26_drive_0020_sync 0000000069 l
78 | 2011_09_26/2011_09_26_drive_0020_sync 0000000057 l
79 | 2011_09_26/2011_09_26_drive_0020_sync 0000000012 l
80 | 2011_09_26/2011_09_26_drive_0020_sync 0000000072 l
81 | 2011_09_26/2011_09_26_drive_0020_sync 0000000018 l
82 | 2011_09_26/2011_09_26_drive_0020_sync 0000000063 l
83 | 2011_09_26/2011_09_26_drive_0020_sync 0000000000 l
84 | 2011_09_26/2011_09_26_drive_0020_sync 0000000084 l
85 | 2011_09_26/2011_09_26_drive_0020_sync 0000000015 l
86 | 2011_09_26/2011_09_26_drive_0020_sync 0000000066 l
87 | 2011_09_26/2011_09_26_drive_0020_sync 0000000006 l
88 | 2011_09_26/2011_09_26_drive_0020_sync 0000000048 l
89 | 2011_09_26/2011_09_26_drive_0020_sync 0000000060 l
90 | 2011_09_26/2011_09_26_drive_0020_sync 0000000009 l
91 | 2011_09_26/2011_09_26_drive_0020_sync 0000000033 l
92 | 2011_09_26/2011_09_26_drive_0020_sync 0000000021 l
93 | 2011_09_26/2011_09_26_drive_0020_sync 0000000075 l
94 | 2011_09_26/2011_09_26_drive_0020_sync 0000000027 l
95 | 2011_09_26/2011_09_26_drive_0020_sync 0000000045 l
96 | 2011_09_26/2011_09_26_drive_0020_sync 0000000078 l
97 | 2011_09_26/2011_09_26_drive_0020_sync 0000000036 l
98 | 2011_09_26/2011_09_26_drive_0020_sync 0000000051 l
99 | 2011_09_26/2011_09_26_drive_0020_sync 0000000054 l
100 | 2011_09_26/2011_09_26_drive_0020_sync 0000000042 l
101 | 2011_09_26/2011_09_26_drive_0023_sync 0000000018 l
102 | 2011_09_26/2011_09_26_drive_0023_sync 0000000090 l
103 | 2011_09_26/2011_09_26_drive_0023_sync 0000000126 l
104 | 2011_09_26/2011_09_26_drive_0023_sync 0000000378 l
105 | 2011_09_26/2011_09_26_drive_0023_sync 0000000036 l
106 | 2011_09_26/2011_09_26_drive_0023_sync 0000000288 l
107 | 2011_09_26/2011_09_26_drive_0023_sync 0000000198 l
108 | 2011_09_26/2011_09_26_drive_0023_sync 0000000450 l
109 | 2011_09_26/2011_09_26_drive_0023_sync 0000000144 l
110 | 2011_09_26/2011_09_26_drive_0023_sync 0000000072 l
111 | 2011_09_26/2011_09_26_drive_0023_sync 0000000252 l
112 | 2011_09_26/2011_09_26_drive_0023_sync 0000000180 l
113 | 2011_09_26/2011_09_26_drive_0023_sync 0000000432 l
114 | 2011_09_26/2011_09_26_drive_0023_sync 0000000396 l
115 | 2011_09_26/2011_09_26_drive_0023_sync 0000000054 l
116 | 2011_09_26/2011_09_26_drive_0023_sync 0000000468 l
117 | 2011_09_26/2011_09_26_drive_0023_sync 0000000306 l
118 | 2011_09_26/2011_09_26_drive_0023_sync 0000000108 l
119 | 2011_09_26/2011_09_26_drive_0023_sync 0000000162 l
120 | 2011_09_26/2011_09_26_drive_0023_sync 0000000342 l
121 | 2011_09_26/2011_09_26_drive_0023_sync 0000000270 l
122 | 2011_09_26/2011_09_26_drive_0023_sync 0000000414 l
123 | 2011_09_26/2011_09_26_drive_0023_sync 0000000216 l
124 | 2011_09_26/2011_09_26_drive_0023_sync 0000000360 l
125 | 2011_09_26/2011_09_26_drive_0023_sync 0000000324 l
126 | 2011_09_26/2011_09_26_drive_0027_sync 0000000077 l
127 | 2011_09_26/2011_09_26_drive_0027_sync 0000000035 l
128 | 2011_09_26/2011_09_26_drive_0027_sync 0000000091 l
129 | 2011_09_26/2011_09_26_drive_0027_sync 0000000112 l
130 | 2011_09_26/2011_09_26_drive_0027_sync 0000000007 l
131 | 2011_09_26/2011_09_26_drive_0027_sync 0000000175 l
132 | 2011_09_26/2011_09_26_drive_0027_sync 0000000042 l
133 | 2011_09_26/2011_09_26_drive_0027_sync 0000000098 l
134 | 2011_09_26/2011_09_26_drive_0027_sync 0000000133 l
135 | 2011_09_26/2011_09_26_drive_0027_sync 0000000161 l
136 | 2011_09_26/2011_09_26_drive_0027_sync 0000000014 l
137 | 2011_09_26/2011_09_26_drive_0027_sync 0000000126 l
138 | 2011_09_26/2011_09_26_drive_0027_sync 0000000168 l
139 | 2011_09_26/2011_09_26_drive_0027_sync 0000000070 l
140 | 2011_09_26/2011_09_26_drive_0027_sync 0000000084 l
141 | 2011_09_26/2011_09_26_drive_0027_sync 0000000140 l
142 | 2011_09_26/2011_09_26_drive_0027_sync 0000000049 l
143 | 2011_09_26/2011_09_26_drive_0027_sync 0000000000 l
144 | 2011_09_26/2011_09_26_drive_0027_sync 0000000182 l
145 | 2011_09_26/2011_09_26_drive_0027_sync 0000000147 l
146 | 2011_09_26/2011_09_26_drive_0027_sync 0000000056 l
147 | 2011_09_26/2011_09_26_drive_0027_sync 0000000063 l
148 | 2011_09_26/2011_09_26_drive_0027_sync 0000000021 l
149 | 2011_09_26/2011_09_26_drive_0027_sync 0000000119 l
150 | 2011_09_26/2011_09_26_drive_0027_sync 0000000028 l
151 | 2011_09_26/2011_09_26_drive_0029_sync 0000000380 l
152 | 2011_09_26/2011_09_26_drive_0029_sync 0000000394 l
153 | 2011_09_26/2011_09_26_drive_0029_sync 0000000324 l
154 | 2011_09_26/2011_09_26_drive_0029_sync 0000000000 l
155 | 2011_09_26/2011_09_26_drive_0029_sync 0000000268 l
156 | 2011_09_26/2011_09_26_drive_0029_sync 0000000366 l
157 | 2011_09_26/2011_09_26_drive_0029_sync 0000000296 l
158 | 2011_09_26/2011_09_26_drive_0029_sync 0000000014 l
159 | 2011_09_26/2011_09_26_drive_0029_sync 0000000028 l
160 | 2011_09_26/2011_09_26_drive_0029_sync 0000000182 l
161 | 2011_09_26/2011_09_26_drive_0029_sync 0000000168 l
162 | 2011_09_26/2011_09_26_drive_0029_sync 0000000196 l
163 | 2011_09_26/2011_09_26_drive_0029_sync 0000000140 l
164 | 2011_09_26/2011_09_26_drive_0029_sync 0000000084 l
165 | 2011_09_26/2011_09_26_drive_0029_sync 0000000056 l
166 | 2011_09_26/2011_09_26_drive_0029_sync 0000000112 l
167 | 2011_09_26/2011_09_26_drive_0029_sync 0000000352 l
168 | 2011_09_26/2011_09_26_drive_0029_sync 0000000126 l
169 | 2011_09_26/2011_09_26_drive_0029_sync 0000000070 l
170 | 2011_09_26/2011_09_26_drive_0029_sync 0000000310 l
171 | 2011_09_26/2011_09_26_drive_0029_sync 0000000154 l
172 | 2011_09_26/2011_09_26_drive_0029_sync 0000000098 l
173 | 2011_09_26/2011_09_26_drive_0029_sync 0000000408 l
174 | 2011_09_26/2011_09_26_drive_0029_sync 0000000042 l
175 | 2011_09_26/2011_09_26_drive_0029_sync 0000000338 l
176 | 2011_09_26/2011_09_26_drive_0036_sync 0000000000 l
177 | 2011_09_26/2011_09_26_drive_0036_sync 0000000128 l
178 | 2011_09_26/2011_09_26_drive_0036_sync 0000000192 l
179 | 2011_09_26/2011_09_26_drive_0036_sync 0000000032 l
180 | 2011_09_26/2011_09_26_drive_0036_sync 0000000352 l
181 | 2011_09_26/2011_09_26_drive_0036_sync 0000000608 l
182 | 2011_09_26/2011_09_26_drive_0036_sync 0000000224 l
183 | 2011_09_26/2011_09_26_drive_0036_sync 0000000576 l
184 | 2011_09_26/2011_09_26_drive_0036_sync 0000000672 l
185 | 2011_09_26/2011_09_26_drive_0036_sync 0000000064 l
186 | 2011_09_26/2011_09_26_drive_0036_sync 0000000448 l
187 | 2011_09_26/2011_09_26_drive_0036_sync 0000000704 l
188 | 2011_09_26/2011_09_26_drive_0036_sync 0000000640 l
189 | 2011_09_26/2011_09_26_drive_0036_sync 0000000512 l
190 | 2011_09_26/2011_09_26_drive_0036_sync 0000000768 l
191 | 2011_09_26/2011_09_26_drive_0036_sync 0000000160 l
192 | 2011_09_26/2011_09_26_drive_0036_sync 0000000416 l
193 | 2011_09_26/2011_09_26_drive_0036_sync 0000000480 l
194 | 2011_09_26/2011_09_26_drive_0036_sync 0000000800 l
195 | 2011_09_26/2011_09_26_drive_0036_sync 0000000288 l
196 | 2011_09_26/2011_09_26_drive_0036_sync 0000000544 l
197 | 2011_09_26/2011_09_26_drive_0036_sync 0000000096 l
198 | 2011_09_26/2011_09_26_drive_0036_sync 0000000384 l
199 | 2011_09_26/2011_09_26_drive_0036_sync 0000000256 l
200 | 2011_09_26/2011_09_26_drive_0036_sync 0000000320 l
201 | 2011_09_26/2011_09_26_drive_0046_sync 0000000000 l
202 | 2011_09_26/2011_09_26_drive_0046_sync 0000000005 l
203 | 2011_09_26/2011_09_26_drive_0046_sync 0000000010 l
204 | 2011_09_26/2011_09_26_drive_0046_sync 0000000015 l
205 | 2011_09_26/2011_09_26_drive_0046_sync 0000000020 l
206 | 2011_09_26/2011_09_26_drive_0046_sync 0000000025 l
207 | 2011_09_26/2011_09_26_drive_0046_sync 0000000030 l
208 | 2011_09_26/2011_09_26_drive_0046_sync 0000000035 l
209 | 2011_09_26/2011_09_26_drive_0046_sync 0000000040 l
210 | 2011_09_26/2011_09_26_drive_0046_sync 0000000045 l
211 | 2011_09_26/2011_09_26_drive_0046_sync 0000000050 l
212 | 2011_09_26/2011_09_26_drive_0046_sync 0000000055 l
213 | 2011_09_26/2011_09_26_drive_0046_sync 0000000060 l
214 | 2011_09_26/2011_09_26_drive_0046_sync 0000000065 l
215 | 2011_09_26/2011_09_26_drive_0046_sync 0000000070 l
216 | 2011_09_26/2011_09_26_drive_0046_sync 0000000075 l
217 | 2011_09_26/2011_09_26_drive_0046_sync 0000000080 l
218 | 2011_09_26/2011_09_26_drive_0046_sync 0000000085 l
219 | 2011_09_26/2011_09_26_drive_0046_sync 0000000090 l
220 | 2011_09_26/2011_09_26_drive_0046_sync 0000000095 l
221 | 2011_09_26/2011_09_26_drive_0046_sync 0000000100 l
222 | 2011_09_26/2011_09_26_drive_0046_sync 0000000105 l
223 | 2011_09_26/2011_09_26_drive_0046_sync 0000000110 l
224 | 2011_09_26/2011_09_26_drive_0046_sync 0000000115 l
225 | 2011_09_26/2011_09_26_drive_0046_sync 0000000120 l
226 | 2011_09_26/2011_09_26_drive_0048_sync 0000000000 l
227 | 2011_09_26/2011_09_26_drive_0048_sync 0000000001 l
228 | 2011_09_26/2011_09_26_drive_0048_sync 0000000002 l
229 | 2011_09_26/2011_09_26_drive_0048_sync 0000000003 l
230 | 2011_09_26/2011_09_26_drive_0048_sync 0000000004 l
231 | 2011_09_26/2011_09_26_drive_0048_sync 0000000005 l
232 | 2011_09_26/2011_09_26_drive_0048_sync 0000000006 l
233 | 2011_09_26/2011_09_26_drive_0048_sync 0000000007 l
234 | 2011_09_26/2011_09_26_drive_0048_sync 0000000008 l
235 | 2011_09_26/2011_09_26_drive_0048_sync 0000000009 l
236 | 2011_09_26/2011_09_26_drive_0048_sync 0000000010 l
237 | 2011_09_26/2011_09_26_drive_0048_sync 0000000011 l
238 | 2011_09_26/2011_09_26_drive_0048_sync 0000000012 l
239 | 2011_09_26/2011_09_26_drive_0048_sync 0000000013 l
240 | 2011_09_26/2011_09_26_drive_0048_sync 0000000014 l
241 | 2011_09_26/2011_09_26_drive_0048_sync 0000000015 l
242 | 2011_09_26/2011_09_26_drive_0048_sync 0000000016 l
243 | 2011_09_26/2011_09_26_drive_0048_sync 0000000017 l
244 | 2011_09_26/2011_09_26_drive_0048_sync 0000000018 l
245 | 2011_09_26/2011_09_26_drive_0048_sync 0000000019 l
246 | 2011_09_26/2011_09_26_drive_0048_sync 0000000020 l
247 | 2011_09_26/2011_09_26_drive_0048_sync 0000000021 l
248 | 2011_09_26/2011_09_26_drive_0052_sync 0000000046 l
249 | 2011_09_26/2011_09_26_drive_0052_sync 0000000014 l
250 | 2011_09_26/2011_09_26_drive_0052_sync 0000000036 l
251 | 2011_09_26/2011_09_26_drive_0052_sync 0000000028 l
252 | 2011_09_26/2011_09_26_drive_0052_sync 0000000026 l
253 | 2011_09_26/2011_09_26_drive_0052_sync 0000000050 l
254 | 2011_09_26/2011_09_26_drive_0052_sync 0000000040 l
255 | 2011_09_26/2011_09_26_drive_0052_sync 0000000008 l
256 | 2011_09_26/2011_09_26_drive_0052_sync 0000000016 l
257 | 2011_09_26/2011_09_26_drive_0052_sync 0000000044 l
258 | 2011_09_26/2011_09_26_drive_0052_sync 0000000018 l
259 | 2011_09_26/2011_09_26_drive_0052_sync 0000000032 l
260 | 2011_09_26/2011_09_26_drive_0052_sync 0000000042 l
261 | 2011_09_26/2011_09_26_drive_0052_sync 0000000010 l
262 | 2011_09_26/2011_09_26_drive_0052_sync 0000000020 l
263 | 2011_09_26/2011_09_26_drive_0052_sync 0000000048 l
264 | 2011_09_26/2011_09_26_drive_0052_sync 0000000052 l
265 | 2011_09_26/2011_09_26_drive_0052_sync 0000000006 l
266 | 2011_09_26/2011_09_26_drive_0052_sync 0000000030 l
267 | 2011_09_26/2011_09_26_drive_0052_sync 0000000012 l
268 | 2011_09_26/2011_09_26_drive_0052_sync 0000000038 l
269 | 2011_09_26/2011_09_26_drive_0052_sync 0000000000 l
270 | 2011_09_26/2011_09_26_drive_0052_sync 0000000002 l
271 | 2011_09_26/2011_09_26_drive_0052_sync 0000000004 l
272 | 2011_09_26/2011_09_26_drive_0052_sync 0000000022 l
273 | 2011_09_26/2011_09_26_drive_0056_sync 0000000011 l
274 | 2011_09_26/2011_09_26_drive_0056_sync 0000000033 l
275 | 2011_09_26/2011_09_26_drive_0056_sync 0000000242 l
276 | 2011_09_26/2011_09_26_drive_0056_sync 0000000253 l
277 | 2011_09_26/2011_09_26_drive_0056_sync 0000000286 l
278 | 2011_09_26/2011_09_26_drive_0056_sync 0000000154 l
279 | 2011_09_26/2011_09_26_drive_0056_sync 0000000099 l
280 | 2011_09_26/2011_09_26_drive_0056_sync 0000000220 l
281 | 2011_09_26/2011_09_26_drive_0056_sync 0000000022 l
282 | 2011_09_26/2011_09_26_drive_0056_sync 0000000077 l
283 | 2011_09_26/2011_09_26_drive_0056_sync 0000000187 l
284 | 2011_09_26/2011_09_26_drive_0056_sync 0000000143 l
285 | 2011_09_26/2011_09_26_drive_0056_sync 0000000066 l
286 | 2011_09_26/2011_09_26_drive_0056_sync 0000000176 l
287 | 2011_09_26/2011_09_26_drive_0056_sync 0000000110 l
288 | 2011_09_26/2011_09_26_drive_0056_sync 0000000275 l
289 | 2011_09_26/2011_09_26_drive_0056_sync 0000000264 l
290 | 2011_09_26/2011_09_26_drive_0056_sync 0000000198 l
291 | 2011_09_26/2011_09_26_drive_0056_sync 0000000055 l
292 | 2011_09_26/2011_09_26_drive_0056_sync 0000000088 l
293 | 2011_09_26/2011_09_26_drive_0056_sync 0000000121 l
294 | 2011_09_26/2011_09_26_drive_0056_sync 0000000209 l
295 | 2011_09_26/2011_09_26_drive_0056_sync 0000000165 l
296 | 2011_09_26/2011_09_26_drive_0056_sync 0000000231 l
297 | 2011_09_26/2011_09_26_drive_0056_sync 0000000044 l
298 | 2011_09_26/2011_09_26_drive_0059_sync 0000000056 l
299 | 2011_09_26/2011_09_26_drive_0059_sync 0000000000 l
300 | 2011_09_26/2011_09_26_drive_0059_sync 0000000344 l
301 | 2011_09_26/2011_09_26_drive_0059_sync 0000000358 l
302 | 2011_09_26/2011_09_26_drive_0059_sync 0000000316 l
303 | 2011_09_26/2011_09_26_drive_0059_sync 0000000238 l
304 | 2011_09_26/2011_09_26_drive_0059_sync 0000000098 l
305 | 2011_09_26/2011_09_26_drive_0059_sync 0000000112 l
306 | 2011_09_26/2011_09_26_drive_0059_sync 0000000028 l
307 | 2011_09_26/2011_09_26_drive_0059_sync 0000000014 l
308 | 2011_09_26/2011_09_26_drive_0059_sync 0000000330 l
309 | 2011_09_26/2011_09_26_drive_0059_sync 0000000154 l
310 | 2011_09_26/2011_09_26_drive_0059_sync 0000000042 l
311 | 2011_09_26/2011_09_26_drive_0059_sync 0000000302 l
312 | 2011_09_26/2011_09_26_drive_0059_sync 0000000182 l
313 | 2011_09_26/2011_09_26_drive_0059_sync 0000000288 l
314 | 2011_09_26/2011_09_26_drive_0059_sync 0000000140 l
315 | 2011_09_26/2011_09_26_drive_0059_sync 0000000274 l
316 | 2011_09_26/2011_09_26_drive_0059_sync 0000000224 l
317 | 2011_09_26/2011_09_26_drive_0059_sync 0000000372 l
318 | 2011_09_26/2011_09_26_drive_0059_sync 0000000196 l
319 | 2011_09_26/2011_09_26_drive_0059_sync 0000000126 l
320 | 2011_09_26/2011_09_26_drive_0059_sync 0000000084 l
321 | 2011_09_26/2011_09_26_drive_0059_sync 0000000210 l
322 | 2011_09_26/2011_09_26_drive_0059_sync 0000000070 l
323 | 2011_09_26/2011_09_26_drive_0064_sync 0000000528 l
324 | 2011_09_26/2011_09_26_drive_0064_sync 0000000308 l
325 | 2011_09_26/2011_09_26_drive_0064_sync 0000000044 l
326 | 2011_09_26/2011_09_26_drive_0064_sync 0000000352 l
327 | 2011_09_26/2011_09_26_drive_0064_sync 0000000066 l
328 | 2011_09_26/2011_09_26_drive_0064_sync 0000000000 l
329 | 2011_09_26/2011_09_26_drive_0064_sync 0000000506 l
330 | 2011_09_26/2011_09_26_drive_0064_sync 0000000176 l
331 | 2011_09_26/2011_09_26_drive_0064_sync 0000000022 l
332 | 2011_09_26/2011_09_26_drive_0064_sync 0000000242 l
333 | 2011_09_26/2011_09_26_drive_0064_sync 0000000462 l
334 | 2011_09_26/2011_09_26_drive_0064_sync 0000000418 l
335 | 2011_09_26/2011_09_26_drive_0064_sync 0000000110 l
336 | 2011_09_26/2011_09_26_drive_0064_sync 0000000440 l
337 | 2011_09_26/2011_09_26_drive_0064_sync 0000000396 l
338 | 2011_09_26/2011_09_26_drive_0064_sync 0000000154 l
339 | 2011_09_26/2011_09_26_drive_0064_sync 0000000374 l
340 | 2011_09_26/2011_09_26_drive_0064_sync 0000000088 l
341 | 2011_09_26/2011_09_26_drive_0064_sync 0000000286 l
342 | 2011_09_26/2011_09_26_drive_0064_sync 0000000550 l
343 | 2011_09_26/2011_09_26_drive_0064_sync 0000000264 l
344 | 2011_09_26/2011_09_26_drive_0064_sync 0000000220 l
345 | 2011_09_26/2011_09_26_drive_0064_sync 0000000330 l
346 | 2011_09_26/2011_09_26_drive_0064_sync 0000000484 l
347 | 2011_09_26/2011_09_26_drive_0064_sync 0000000198 l
348 | 2011_09_26/2011_09_26_drive_0084_sync 0000000283 l
349 | 2011_09_26/2011_09_26_drive_0084_sync 0000000361 l
350 | 2011_09_26/2011_09_26_drive_0084_sync 0000000270 l
351 | 2011_09_26/2011_09_26_drive_0084_sync 0000000127 l
352 | 2011_09_26/2011_09_26_drive_0084_sync 0000000205 l
353 | 2011_09_26/2011_09_26_drive_0084_sync 0000000218 l
354 | 2011_09_26/2011_09_26_drive_0084_sync 0000000153 l
355 | 2011_09_26/2011_09_26_drive_0084_sync 0000000335 l
356 | 2011_09_26/2011_09_26_drive_0084_sync 0000000192 l
357 | 2011_09_26/2011_09_26_drive_0084_sync 0000000348 l
358 | 2011_09_26/2011_09_26_drive_0084_sync 0000000101 l
359 | 2011_09_26/2011_09_26_drive_0084_sync 0000000049 l
360 | 2011_09_26/2011_09_26_drive_0084_sync 0000000179 l
361 | 2011_09_26/2011_09_26_drive_0084_sync 0000000140 l
362 | 2011_09_26/2011_09_26_drive_0084_sync 0000000374 l
363 | 2011_09_26/2011_09_26_drive_0084_sync 0000000322 l
364 | 2011_09_26/2011_09_26_drive_0084_sync 0000000309 l
365 | 2011_09_26/2011_09_26_drive_0084_sync 0000000244 l
366 | 2011_09_26/2011_09_26_drive_0084_sync 0000000062 l
367 | 2011_09_26/2011_09_26_drive_0084_sync 0000000257 l
368 | 2011_09_26/2011_09_26_drive_0084_sync 0000000088 l
369 | 2011_09_26/2011_09_26_drive_0084_sync 0000000114 l
370 | 2011_09_26/2011_09_26_drive_0084_sync 0000000075 l
371 | 2011_09_26/2011_09_26_drive_0084_sync 0000000296 l
372 | 2011_09_26/2011_09_26_drive_0084_sync 0000000231 l
373 | 2011_09_26/2011_09_26_drive_0086_sync 0000000007 l
374 | 2011_09_26/2011_09_26_drive_0086_sync 0000000196 l
375 | 2011_09_26/2011_09_26_drive_0086_sync 0000000439 l
376 | 2011_09_26/2011_09_26_drive_0086_sync 0000000169 l
377 | 2011_09_26/2011_09_26_drive_0086_sync 0000000115 l
378 | 2011_09_26/2011_09_26_drive_0086_sync 0000000034 l
379 | 2011_09_26/2011_09_26_drive_0086_sync 0000000304 l
380 | 2011_09_26/2011_09_26_drive_0086_sync 0000000331 l
381 | 2011_09_26/2011_09_26_drive_0086_sync 0000000277 l
382 | 2011_09_26/2011_09_26_drive_0086_sync 0000000520 l
383 | 2011_09_26/2011_09_26_drive_0086_sync 0000000682 l
384 | 2011_09_26/2011_09_26_drive_0086_sync 0000000628 l
385 | 2011_09_26/2011_09_26_drive_0086_sync 0000000088 l
386 | 2011_09_26/2011_09_26_drive_0086_sync 0000000601 l
387 | 2011_09_26/2011_09_26_drive_0086_sync 0000000574 l
388 | 2011_09_26/2011_09_26_drive_0086_sync 0000000223 l
389 | 2011_09_26/2011_09_26_drive_0086_sync 0000000655 l
390 | 2011_09_26/2011_09_26_drive_0086_sync 0000000358 l
391 | 2011_09_26/2011_09_26_drive_0086_sync 0000000412 l
392 | 2011_09_26/2011_09_26_drive_0086_sync 0000000142 l
393 | 2011_09_26/2011_09_26_drive_0086_sync 0000000385 l
394 | 2011_09_26/2011_09_26_drive_0086_sync 0000000061 l
395 | 2011_09_26/2011_09_26_drive_0086_sync 0000000493 l
396 | 2011_09_26/2011_09_26_drive_0086_sync 0000000466 l
397 | 2011_09_26/2011_09_26_drive_0086_sync 0000000250 l
398 | 2011_09_26/2011_09_26_drive_0093_sync 0000000000 l
399 | 2011_09_26/2011_09_26_drive_0093_sync 0000000016 l
400 | 2011_09_26/2011_09_26_drive_0093_sync 0000000032 l
401 | 2011_09_26/2011_09_26_drive_0093_sync 0000000048 l
402 | 2011_09_26/2011_09_26_drive_0093_sync 0000000064 l
403 | 2011_09_26/2011_09_26_drive_0093_sync 0000000080 l
404 | 2011_09_26/2011_09_26_drive_0093_sync 0000000096 l
405 | 2011_09_26/2011_09_26_drive_0093_sync 0000000112 l
406 | 2011_09_26/2011_09_26_drive_0093_sync 0000000128 l
407 | 2011_09_26/2011_09_26_drive_0093_sync 0000000144 l
408 | 2011_09_26/2011_09_26_drive_0093_sync 0000000160 l
409 | 2011_09_26/2011_09_26_drive_0093_sync 0000000176 l
410 | 2011_09_26/2011_09_26_drive_0093_sync 0000000192 l
411 | 2011_09_26/2011_09_26_drive_0093_sync 0000000208 l
412 | 2011_09_26/2011_09_26_drive_0093_sync 0000000224 l
413 | 2011_09_26/2011_09_26_drive_0093_sync 0000000240 l
414 | 2011_09_26/2011_09_26_drive_0093_sync 0000000256 l
415 | 2011_09_26/2011_09_26_drive_0093_sync 0000000305 l
416 | 2011_09_26/2011_09_26_drive_0093_sync 0000000321 l
417 | 2011_09_26/2011_09_26_drive_0093_sync 0000000337 l
418 | 2011_09_26/2011_09_26_drive_0093_sync 0000000353 l
419 | 2011_09_26/2011_09_26_drive_0093_sync 0000000369 l
420 | 2011_09_26/2011_09_26_drive_0093_sync 0000000385 l
421 | 2011_09_26/2011_09_26_drive_0093_sync 0000000401 l
422 | 2011_09_26/2011_09_26_drive_0093_sync 0000000417 l
423 | 2011_09_26/2011_09_26_drive_0096_sync 0000000000 l
424 | 2011_09_26/2011_09_26_drive_0096_sync 0000000019 l
425 | 2011_09_26/2011_09_26_drive_0096_sync 0000000038 l
426 | 2011_09_26/2011_09_26_drive_0096_sync 0000000057 l
427 | 2011_09_26/2011_09_26_drive_0096_sync 0000000076 l
428 | 2011_09_26/2011_09_26_drive_0096_sync 0000000095 l
429 | 2011_09_26/2011_09_26_drive_0096_sync 0000000114 l
430 | 2011_09_26/2011_09_26_drive_0096_sync 0000000133 l
431 | 2011_09_26/2011_09_26_drive_0096_sync 0000000152 l
432 | 2011_09_26/2011_09_26_drive_0096_sync 0000000171 l
433 | 2011_09_26/2011_09_26_drive_0096_sync 0000000190 l
434 | 2011_09_26/2011_09_26_drive_0096_sync 0000000209 l
435 | 2011_09_26/2011_09_26_drive_0096_sync 0000000228 l
436 | 2011_09_26/2011_09_26_drive_0096_sync 0000000247 l
437 | 2011_09_26/2011_09_26_drive_0096_sync 0000000266 l
438 | 2011_09_26/2011_09_26_drive_0096_sync 0000000285 l
439 | 2011_09_26/2011_09_26_drive_0096_sync 0000000304 l
440 | 2011_09_26/2011_09_26_drive_0096_sync 0000000323 l
441 | 2011_09_26/2011_09_26_drive_0096_sync 0000000342 l
442 | 2011_09_26/2011_09_26_drive_0096_sync 0000000361 l
443 | 2011_09_26/2011_09_26_drive_0096_sync 0000000380 l
444 | 2011_09_26/2011_09_26_drive_0096_sync 0000000399 l
445 | 2011_09_26/2011_09_26_drive_0096_sync 0000000418 l
446 | 2011_09_26/2011_09_26_drive_0096_sync 0000000437 l
447 | 2011_09_26/2011_09_26_drive_0096_sync 0000000456 l
448 | 2011_09_26/2011_09_26_drive_0101_sync 0000000692 l
449 | 2011_09_26/2011_09_26_drive_0101_sync 0000000930 l
450 | 2011_09_26/2011_09_26_drive_0101_sync 0000000760 l
451 | 2011_09_26/2011_09_26_drive_0101_sync 0000000896 l
452 | 2011_09_26/2011_09_26_drive_0101_sync 0000000284 l
453 | 2011_09_26/2011_09_26_drive_0101_sync 0000000148 l
454 | 2011_09_26/2011_09_26_drive_0101_sync 0000000522 l
455 | 2011_09_26/2011_09_26_drive_0101_sync 0000000794 l
456 | 2011_09_26/2011_09_26_drive_0101_sync 0000000624 l
457 | 2011_09_26/2011_09_26_drive_0101_sync 0000000726 l
458 | 2011_09_26/2011_09_26_drive_0101_sync 0000000216 l
459 | 2011_09_26/2011_09_26_drive_0101_sync 0000000318 l
460 | 2011_09_26/2011_09_26_drive_0101_sync 0000000488 l
461 | 2011_09_26/2011_09_26_drive_0101_sync 0000000590 l
462 | 2011_09_26/2011_09_26_drive_0101_sync 0000000454 l
463 | 2011_09_26/2011_09_26_drive_0101_sync 0000000862 l
464 | 2011_09_26/2011_09_26_drive_0101_sync 0000000386 l
465 | 2011_09_26/2011_09_26_drive_0101_sync 0000000352 l
466 | 2011_09_26/2011_09_26_drive_0101_sync 0000000420 l
467 | 2011_09_26/2011_09_26_drive_0101_sync 0000000658 l
468 | 2011_09_26/2011_09_26_drive_0101_sync 0000000828 l
469 | 2011_09_26/2011_09_26_drive_0101_sync 0000000556 l
470 | 2011_09_26/2011_09_26_drive_0101_sync 0000000114 l
471 | 2011_09_26/2011_09_26_drive_0101_sync 0000000182 l
472 | 2011_09_26/2011_09_26_drive_0101_sync 0000000080 l
473 | 2011_09_26/2011_09_26_drive_0106_sync 0000000015 l
474 | 2011_09_26/2011_09_26_drive_0106_sync 0000000035 l
475 | 2011_09_26/2011_09_26_drive_0106_sync 0000000043 l
476 | 2011_09_26/2011_09_26_drive_0106_sync 0000000051 l
477 | 2011_09_26/2011_09_26_drive_0106_sync 0000000059 l
478 | 2011_09_26/2011_09_26_drive_0106_sync 0000000067 l
479 | 2011_09_26/2011_09_26_drive_0106_sync 0000000075 l
480 | 2011_09_26/2011_09_26_drive_0106_sync 0000000083 l
481 | 2011_09_26/2011_09_26_drive_0106_sync 0000000091 l
482 | 2011_09_26/2011_09_26_drive_0106_sync 0000000099 l
483 | 2011_09_26/2011_09_26_drive_0106_sync 0000000107 l
484 | 2011_09_26/2011_09_26_drive_0106_sync 0000000115 l
485 | 2011_09_26/2011_09_26_drive_0106_sync 0000000123 l
486 | 2011_09_26/2011_09_26_drive_0106_sync 0000000131 l
487 | 2011_09_26/2011_09_26_drive_0106_sync 0000000139 l
488 | 2011_09_26/2011_09_26_drive_0106_sync 0000000147 l
489 | 2011_09_26/2011_09_26_drive_0106_sync 0000000155 l
490 | 2011_09_26/2011_09_26_drive_0106_sync 0000000163 l
491 | 2011_09_26/2011_09_26_drive_0106_sync 0000000171 l
492 | 2011_09_26/2011_09_26_drive_0106_sync 0000000179 l
493 | 2011_09_26/2011_09_26_drive_0106_sync 0000000187 l
494 | 2011_09_26/2011_09_26_drive_0106_sync 0000000195 l
495 | 2011_09_26/2011_09_26_drive_0106_sync 0000000203 l
496 | 2011_09_26/2011_09_26_drive_0106_sync 0000000211 l
497 | 2011_09_26/2011_09_26_drive_0106_sync 0000000219 l
498 | 2011_09_26/2011_09_26_drive_0117_sync 0000000312 l
499 | 2011_09_26/2011_09_26_drive_0117_sync 0000000494 l
500 | 2011_09_26/2011_09_26_drive_0117_sync 0000000104 l
501 | 2011_09_26/2011_09_26_drive_0117_sync 0000000130 l
502 | 2011_09_26/2011_09_26_drive_0117_sync 0000000156 l
503 | 2011_09_26/2011_09_26_drive_0117_sync 0000000182 l
504 | 2011_09_26/2011_09_26_drive_0117_sync 0000000598 l
505 | 2011_09_26/2011_09_26_drive_0117_sync 0000000416 l
506 | 2011_09_26/2011_09_26_drive_0117_sync 0000000364 l
507 | 2011_09_26/2011_09_26_drive_0117_sync 0000000026 l
508 | 2011_09_26/2011_09_26_drive_0117_sync 0000000078 l
509 | 2011_09_26/2011_09_26_drive_0117_sync 0000000572 l
510 | 2011_09_26/2011_09_26_drive_0117_sync 0000000468 l
511 | 2011_09_26/2011_09_26_drive_0117_sync 0000000260 l
512 | 2011_09_26/2011_09_26_drive_0117_sync 0000000624 l
513 | 2011_09_26/2011_09_26_drive_0117_sync 0000000234 l
514 | 2011_09_26/2011_09_26_drive_0117_sync 0000000442 l
515 | 2011_09_26/2011_09_26_drive_0117_sync 0000000390 l
516 | 2011_09_26/2011_09_26_drive_0117_sync 0000000546 l
517 | 2011_09_26/2011_09_26_drive_0117_sync 0000000286 l
518 | 2011_09_26/2011_09_26_drive_0117_sync 0000000000 l
519 | 2011_09_26/2011_09_26_drive_0117_sync 0000000338 l
520 | 2011_09_26/2011_09_26_drive_0117_sync 0000000208 l
521 | 2011_09_26/2011_09_26_drive_0117_sync 0000000650 l
522 | 2011_09_26/2011_09_26_drive_0117_sync 0000000052 l
523 | 2011_09_28/2011_09_28_drive_0002_sync 0000000024 l
524 | 2011_09_28/2011_09_28_drive_0002_sync 0000000021 l
525 | 2011_09_28/2011_09_28_drive_0002_sync 0000000036 l
526 | 2011_09_28/2011_09_28_drive_0002_sync 0000000000 l
527 | 2011_09_28/2011_09_28_drive_0002_sync 0000000051 l
528 | 2011_09_28/2011_09_28_drive_0002_sync 0000000018 l
529 | 2011_09_28/2011_09_28_drive_0002_sync 0000000033 l
530 | 2011_09_28/2011_09_28_drive_0002_sync 0000000090 l
531 | 2011_09_28/2011_09_28_drive_0002_sync 0000000045 l
532 | 2011_09_28/2011_09_28_drive_0002_sync 0000000054 l
533 | 2011_09_28/2011_09_28_drive_0002_sync 0000000012 l
534 | 2011_09_28/2011_09_28_drive_0002_sync 0000000039 l
535 | 2011_09_28/2011_09_28_drive_0002_sync 0000000009 l
536 | 2011_09_28/2011_09_28_drive_0002_sync 0000000003 l
537 | 2011_09_28/2011_09_28_drive_0002_sync 0000000030 l
538 | 2011_09_28/2011_09_28_drive_0002_sync 0000000078 l
539 | 2011_09_28/2011_09_28_drive_0002_sync 0000000060 l
540 | 2011_09_28/2011_09_28_drive_0002_sync 0000000048 l
541 | 2011_09_28/2011_09_28_drive_0002_sync 0000000084 l
542 | 2011_09_28/2011_09_28_drive_0002_sync 0000000081 l
543 | 2011_09_28/2011_09_28_drive_0002_sync 0000000006 l
544 | 2011_09_28/2011_09_28_drive_0002_sync 0000000057 l
545 | 2011_09_28/2011_09_28_drive_0002_sync 0000000072 l
546 | 2011_09_28/2011_09_28_drive_0002_sync 0000000087 l
547 | 2011_09_28/2011_09_28_drive_0002_sync 0000000063 l
548 | 2011_09_29/2011_09_29_drive_0071_sync 0000000252 l
549 | 2011_09_29/2011_09_29_drive_0071_sync 0000000540 l
550 | 2011_09_29/2011_09_29_drive_0071_sync 0000001054 l
551 | 2011_09_29/2011_09_29_drive_0071_sync 0000000036 l
552 | 2011_09_29/2011_09_29_drive_0071_sync 0000000360 l
553 | 2011_09_29/2011_09_29_drive_0071_sync 0000000807 l
554 | 2011_09_29/2011_09_29_drive_0071_sync 0000000879 l
555 | 2011_09_29/2011_09_29_drive_0071_sync 0000000288 l
556 | 2011_09_29/2011_09_29_drive_0071_sync 0000000771 l
557 | 2011_09_29/2011_09_29_drive_0071_sync 0000000000 l
558 | 2011_09_29/2011_09_29_drive_0071_sync 0000000216 l
559 | 2011_09_29/2011_09_29_drive_0071_sync 0000000951 l
560 | 2011_09_29/2011_09_29_drive_0071_sync 0000000324 l
561 | 2011_09_29/2011_09_29_drive_0071_sync 0000000432 l
562 | 2011_09_29/2011_09_29_drive_0071_sync 0000000504 l
563 | 2011_09_29/2011_09_29_drive_0071_sync 0000000576 l
564 | 2011_09_29/2011_09_29_drive_0071_sync 0000000108 l
565 | 2011_09_29/2011_09_29_drive_0071_sync 0000000180 l
566 | 2011_09_29/2011_09_29_drive_0071_sync 0000000072 l
567 | 2011_09_29/2011_09_29_drive_0071_sync 0000000612 l
568 | 2011_09_29/2011_09_29_drive_0071_sync 0000000915 l
569 | 2011_09_29/2011_09_29_drive_0071_sync 0000000735 l
570 | 2011_09_29/2011_09_29_drive_0071_sync 0000000144 l
571 | 2011_09_29/2011_09_29_drive_0071_sync 0000000396 l
572 | 2011_09_29/2011_09_29_drive_0071_sync 0000000468 l
573 | 2011_09_30/2011_09_30_drive_0016_sync 0000000132 l
574 | 2011_09_30/2011_09_30_drive_0016_sync 0000000011 l
575 | 2011_09_30/2011_09_30_drive_0016_sync 0000000154 l
576 | 2011_09_30/2011_09_30_drive_0016_sync 0000000022 l
577 | 2011_09_30/2011_09_30_drive_0016_sync 0000000242 l
578 | 2011_09_30/2011_09_30_drive_0016_sync 0000000198 l
579 | 2011_09_30/2011_09_30_drive_0016_sync 0000000176 l
580 | 2011_09_30/2011_09_30_drive_0016_sync 0000000231 l
581 | 2011_09_30/2011_09_30_drive_0016_sync 0000000275 l
582 | 2011_09_30/2011_09_30_drive_0016_sync 0000000220 l
583 | 2011_09_30/2011_09_30_drive_0016_sync 0000000088 l
584 | 2011_09_30/2011_09_30_drive_0016_sync 0000000143 l
585 | 2011_09_30/2011_09_30_drive_0016_sync 0000000055 l
586 | 2011_09_30/2011_09_30_drive_0016_sync 0000000033 l
587 | 2011_09_30/2011_09_30_drive_0016_sync 0000000187 l
588 | 2011_09_30/2011_09_30_drive_0016_sync 0000000110 l
589 | 2011_09_30/2011_09_30_drive_0016_sync 0000000044 l
590 | 2011_09_30/2011_09_30_drive_0016_sync 0000000077 l
591 | 2011_09_30/2011_09_30_drive_0016_sync 0000000066 l
592 | 2011_09_30/2011_09_30_drive_0016_sync 0000000000 l
593 | 2011_09_30/2011_09_30_drive_0016_sync 0000000165 l
594 | 2011_09_30/2011_09_30_drive_0016_sync 0000000264 l
595 | 2011_09_30/2011_09_30_drive_0016_sync 0000000253 l
596 | 2011_09_30/2011_09_30_drive_0016_sync 0000000209 l
597 | 2011_09_30/2011_09_30_drive_0016_sync 0000000121 l
598 | 2011_09_30/2011_09_30_drive_0018_sync 0000000107 l
599 | 2011_09_30/2011_09_30_drive_0018_sync 0000002247 l
600 | 2011_09_30/2011_09_30_drive_0018_sync 0000001391 l
601 | 2011_09_30/2011_09_30_drive_0018_sync 0000000535 l
602 | 2011_09_30/2011_09_30_drive_0018_sync 0000001819 l
603 | 2011_09_30/2011_09_30_drive_0018_sync 0000001177 l
604 | 2011_09_30/2011_09_30_drive_0018_sync 0000000428 l
605 | 2011_09_30/2011_09_30_drive_0018_sync 0000001926 l
606 | 2011_09_30/2011_09_30_drive_0018_sync 0000000749 l
607 | 2011_09_30/2011_09_30_drive_0018_sync 0000001284 l
608 | 2011_09_30/2011_09_30_drive_0018_sync 0000002140 l
609 | 2011_09_30/2011_09_30_drive_0018_sync 0000001605 l
610 | 2011_09_30/2011_09_30_drive_0018_sync 0000001498 l
611 | 2011_09_30/2011_09_30_drive_0018_sync 0000000642 l
612 | 2011_09_30/2011_09_30_drive_0018_sync 0000002740 l
613 | 2011_09_30/2011_09_30_drive_0018_sync 0000002419 l
614 | 2011_09_30/2011_09_30_drive_0018_sync 0000000856 l
615 | 2011_09_30/2011_09_30_drive_0018_sync 0000002526 l
616 | 2011_09_30/2011_09_30_drive_0018_sync 0000001712 l
617 | 2011_09_30/2011_09_30_drive_0018_sync 0000001070 l
618 | 2011_09_30/2011_09_30_drive_0018_sync 0000000000 l
619 | 2011_09_30/2011_09_30_drive_0018_sync 0000002033 l
620 | 2011_09_30/2011_09_30_drive_0018_sync 0000000214 l
621 | 2011_09_30/2011_09_30_drive_0018_sync 0000000963 l
622 | 2011_09_30/2011_09_30_drive_0018_sync 0000002633 l
623 | 2011_09_30/2011_09_30_drive_0027_sync 0000000533 l
624 | 2011_09_30/2011_09_30_drive_0027_sync 0000001040 l
625 | 2011_09_30/2011_09_30_drive_0027_sync 0000000082 l
626 | 2011_09_30/2011_09_30_drive_0027_sync 0000000205 l
627 | 2011_09_30/2011_09_30_drive_0027_sync 0000000835 l
628 | 2011_09_30/2011_09_30_drive_0027_sync 0000000451 l
629 | 2011_09_30/2011_09_30_drive_0027_sync 0000000164 l
630 | 2011_09_30/2011_09_30_drive_0027_sync 0000000794 l
631 | 2011_09_30/2011_09_30_drive_0027_sync 0000000328 l
632 | 2011_09_30/2011_09_30_drive_0027_sync 0000000615 l
633 | 2011_09_30/2011_09_30_drive_0027_sync 0000000917 l
634 | 2011_09_30/2011_09_30_drive_0027_sync 0000000369 l
635 | 2011_09_30/2011_09_30_drive_0027_sync 0000000287 l
636 | 2011_09_30/2011_09_30_drive_0027_sync 0000000123 l
637 | 2011_09_30/2011_09_30_drive_0027_sync 0000000876 l
638 | 2011_09_30/2011_09_30_drive_0027_sync 0000000410 l
639 | 2011_09_30/2011_09_30_drive_0027_sync 0000000492 l
640 | 2011_09_30/2011_09_30_drive_0027_sync 0000000958 l
641 | 2011_09_30/2011_09_30_drive_0027_sync 0000000656 l
642 | 2011_09_30/2011_09_30_drive_0027_sync 0000000000 l
643 | 2011_09_30/2011_09_30_drive_0027_sync 0000000753 l
644 | 2011_09_30/2011_09_30_drive_0027_sync 0000000574 l
645 | 2011_09_30/2011_09_30_drive_0027_sync 0000001081 l
646 | 2011_09_30/2011_09_30_drive_0027_sync 0000000041 l
647 | 2011_09_30/2011_09_30_drive_0027_sync 0000000246 l
648 | 2011_10_03/2011_10_03_drive_0027_sync 0000002906 l
649 | 2011_10_03/2011_10_03_drive_0027_sync 0000002544 l
650 | 2011_10_03/2011_10_03_drive_0027_sync 0000000362 l
651 | 2011_10_03/2011_10_03_drive_0027_sync 0000004535 l
652 | 2011_10_03/2011_10_03_drive_0027_sync 0000000734 l
653 | 2011_10_03/2011_10_03_drive_0027_sync 0000001096 l
654 | 2011_10_03/2011_10_03_drive_0027_sync 0000004173 l
655 | 2011_10_03/2011_10_03_drive_0027_sync 0000000543 l
656 | 2011_10_03/2011_10_03_drive_0027_sync 0000001277 l
657 | 2011_10_03/2011_10_03_drive_0027_sync 0000004354 l
658 | 2011_10_03/2011_10_03_drive_0027_sync 0000001458 l
659 | 2011_10_03/2011_10_03_drive_0027_sync 0000001820 l
660 | 2011_10_03/2011_10_03_drive_0027_sync 0000003449 l
661 | 2011_10_03/2011_10_03_drive_0027_sync 0000003268 l
662 | 2011_10_03/2011_10_03_drive_0027_sync 0000000915 l
663 | 2011_10_03/2011_10_03_drive_0027_sync 0000002363 l
664 | 2011_10_03/2011_10_03_drive_0027_sync 0000002725 l
665 | 2011_10_03/2011_10_03_drive_0027_sync 0000000181 l
666 | 2011_10_03/2011_10_03_drive_0027_sync 0000001639 l
667 | 2011_10_03/2011_10_03_drive_0027_sync 0000003992 l
668 | 2011_10_03/2011_10_03_drive_0027_sync 0000003087 l
669 | 2011_10_03/2011_10_03_drive_0027_sync 0000002001 l
670 | 2011_10_03/2011_10_03_drive_0027_sync 0000003811 l
671 | 2011_10_03/2011_10_03_drive_0027_sync 0000003630 l
672 | 2011_10_03/2011_10_03_drive_0027_sync 0000000000 l
673 | 2011_10_03/2011_10_03_drive_0047_sync 0000000096 l
674 | 2011_10_03/2011_10_03_drive_0047_sync 0000000800 l
675 | 2011_10_03/2011_10_03_drive_0047_sync 0000000320 l
676 | 2011_10_03/2011_10_03_drive_0047_sync 0000000576 l
677 | 2011_10_03/2011_10_03_drive_0047_sync 0000000000 l
678 | 2011_10_03/2011_10_03_drive_0047_sync 0000000480 l
679 | 2011_10_03/2011_10_03_drive_0047_sync 0000000640 l
680 | 2011_10_03/2011_10_03_drive_0047_sync 0000000032 l
681 | 2011_10_03/2011_10_03_drive_0047_sync 0000000384 l
682 | 2011_10_03/2011_10_03_drive_0047_sync 0000000160 l
683 | 2011_10_03/2011_10_03_drive_0047_sync 0000000704 l
684 | 2011_10_03/2011_10_03_drive_0047_sync 0000000736 l
685 | 2011_10_03/2011_10_03_drive_0047_sync 0000000672 l
686 | 2011_10_03/2011_10_03_drive_0047_sync 0000000064 l
687 | 2011_10_03/2011_10_03_drive_0047_sync 0000000288 l
688 | 2011_10_03/2011_10_03_drive_0047_sync 0000000352 l
689 | 2011_10_03/2011_10_03_drive_0047_sync 0000000512 l
690 | 2011_10_03/2011_10_03_drive_0047_sync 0000000544 l
691 | 2011_10_03/2011_10_03_drive_0047_sync 0000000608 l
692 | 2011_10_03/2011_10_03_drive_0047_sync 0000000128 l
693 | 2011_10_03/2011_10_03_drive_0047_sync 0000000224 l
694 | 2011_10_03/2011_10_03_drive_0047_sync 0000000416 l
695 | 2011_10_03/2011_10_03_drive_0047_sync 0000000192 l
696 | 2011_10_03/2011_10_03_drive_0047_sync 0000000448 l
697 | 2011_10_03/2011_10_03_drive_0047_sync 0000000768 l
698 |
--------------------------------------------------------------------------------
/splits/kitti_archives_to_download.txt:
--------------------------------------------------------------------------------
1 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_calib.zip
2 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0001/2011_09_26_drive_0001_sync.zip
3 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0002/2011_09_26_drive_0002_sync.zip
4 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0005/2011_09_26_drive_0005_sync.zip
5 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0009/2011_09_26_drive_0009_sync.zip
6 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0011/2011_09_26_drive_0011_sync.zip
7 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0013/2011_09_26_drive_0013_sync.zip
8 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0014/2011_09_26_drive_0014_sync.zip
9 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0015/2011_09_26_drive_0015_sync.zip
10 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0017/2011_09_26_drive_0017_sync.zip
11 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0018/2011_09_26_drive_0018_sync.zip
12 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0019/2011_09_26_drive_0019_sync.zip
13 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0020/2011_09_26_drive_0020_sync.zip
14 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0022/2011_09_26_drive_0022_sync.zip
15 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0023/2011_09_26_drive_0023_sync.zip
16 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0027/2011_09_26_drive_0027_sync.zip
17 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0028/2011_09_26_drive_0028_sync.zip
18 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0029/2011_09_26_drive_0029_sync.zip
19 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0032/2011_09_26_drive_0032_sync.zip
20 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0035/2011_09_26_drive_0035_sync.zip
21 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0036/2011_09_26_drive_0036_sync.zip
22 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0039/2011_09_26_drive_0039_sync.zip
23 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0046/2011_09_26_drive_0046_sync.zip
24 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0048/2011_09_26_drive_0048_sync.zip
25 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0051/2011_09_26_drive_0051_sync.zip
26 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0052/2011_09_26_drive_0052_sync.zip
27 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0056/2011_09_26_drive_0056_sync.zip
28 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0057/2011_09_26_drive_0057_sync.zip
29 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0059/2011_09_26_drive_0059_sync.zip
30 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0060/2011_09_26_drive_0060_sync.zip
31 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0061/2011_09_26_drive_0061_sync.zip
32 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0064/2011_09_26_drive_0064_sync.zip
33 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0070/2011_09_26_drive_0070_sync.zip
34 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0079/2011_09_26_drive_0079_sync.zip
35 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0084/2011_09_26_drive_0084_sync.zip
36 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0086/2011_09_26_drive_0086_sync.zip
37 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0087/2011_09_26_drive_0087_sync.zip
38 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0091/2011_09_26_drive_0091_sync.zip
39 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0093/2011_09_26_drive_0093_sync.zip
40 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0095/2011_09_26_drive_0095_sync.zip
41 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0096/2011_09_26_drive_0096_sync.zip
42 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0101/2011_09_26_drive_0101_sync.zip
43 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0104/2011_09_26_drive_0104_sync.zip
44 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0106/2011_09_26_drive_0106_sync.zip
45 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0113/2011_09_26_drive_0113_sync.zip
46 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0117/2011_09_26_drive_0117_sync.zip
47 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_28_calib.zip
48 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_28_drive_0001/2011_09_28_drive_0001_sync.zip
49 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_28_drive_0002/2011_09_28_drive_0002_sync.zip
50 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_29_calib.zip
51 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_29_drive_0004/2011_09_29_drive_0004_sync.zip
52 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_29_drive_0026/2011_09_29_drive_0026_sync.zip
53 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_29_drive_0071/2011_09_29_drive_0071_sync.zip
54 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_calib.zip
55 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0016/2011_09_30_drive_0016_sync.zip
56 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0018/2011_09_30_drive_0018_sync.zip
57 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0020/2011_09_30_drive_0020_sync.zip
58 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0027/2011_09_30_drive_0027_sync.zip
59 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0028/2011_09_30_drive_0028_sync.zip
60 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0033/2011_09_30_drive_0033_sync.zip
61 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0034/2011_09_30_drive_0034_sync.zip
62 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_10_03_calib.zip
63 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_10_03_drive_0027/2011_10_03_drive_0027_sync.zip
64 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_10_03_drive_0034/2011_10_03_drive_0034_sync.zip
65 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_10_03_drive_0042/2011_10_03_drive_0042_sync.zip
66 | https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_10_03_drive_0047/2011_10_03_drive_0047_sync.zip
67 |
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import, division, print_function
2 | from matplotlib import cm
3 | from matplotlib.colors import ListedColormap, LinearSegmentedColormap
4 | import numpy as np
5 |
6 |
7 | def readlines(filename):
8 | """Read all the lines in a text file and return as a list
9 | """
10 | with open(filename, 'r') as f:
11 | lines = f.read().splitlines()
12 | return lines
13 |
14 |
15 | def normalize_image(x):
16 | """Rescale image pixels to span range [0, 1]
17 | """
18 | ma = float(x.max().cpu().data)
19 | mi = float(x.min().cpu().data)
20 | d = ma - mi if ma != mi else 1e5
21 | return (x - mi) / d
22 |
23 |
24 | def sec_to_hm(t):
25 | """Convert time in seconds to time in hours, minutes and seconds
26 | e.g. 10239 -> (2, 50, 39)
27 | """
28 | t = int(t)
29 | s = t % 60
30 | t //= 60
31 | m = t % 60
32 | t //= 60
33 | return t, m, s
34 |
35 |
36 | def sec_to_hm_str(t):
37 | """Convert time in seconds to a nice string
38 | e.g. 10239 -> '02h50m39s'
39 | """
40 | h, m, s = sec_to_hm(t)
41 | return "{:02d}h{:02d}m{:02d}s".format(h, m, s)
42 |
43 | def high_res_colormap(low_res_cmap, resolution=1000, max_value=1):
44 |     # Construct a ListedColormap with interpolated values for higher resolution.
45 |     # For a linear segmented colormap, you can just specify the number of points in
46 |     #     cm.get_cmap(name, lutsize) with the parameter lutsize
47 | x = np.linspace(0,1,low_res_cmap.N)
48 | low_res = low_res_cmap(x)
49 | new_x = np.linspace(0,max_value,resolution)
50 | high_res = np.stack([np.interp(new_x, x, low_res[:,i]) for i in range(low_res.shape[1])], axis=1)
51 | return ListedColormap(high_res)
52 |
53 |
54 | def opencv_rainbow(resolution=1000):
55 | # Construct the opencv equivalent of Rainbow
56 | opencv_rainbow_data = (
57 | (0.000, (1.00, 0.00, 0.00)),
58 | (0.400, (1.00, 1.00, 0.00)),
59 | (0.600, (0.00, 1.00, 0.00)),
60 | (0.800, (0.00, 0.00, 1.00)),
61 | (1.000, (0.60, 0.00, 1.00))
62 | )
63 |
64 | return LinearSegmentedColormap.from_list('opencv_rainbow', opencv_rainbow_data, resolution)
65 |
66 |
67 | COLORMAPS = {'rainbow': opencv_rainbow(),
68 | 'magma': high_res_colormap(cm.get_cmap('magma')),
69 | 'plasma': high_res_colormap(cm.get_cmap('plasma')),
70 | 'viridis': high_res_colormap(cm.get_cmap('viridis')),
71 | 'bone': cm.get_cmap('bone', 10000),
72 | 'bwr': high_res_colormap(cm.get_cmap('bwr'))}
73 |
74 | def tensor2array(tensor, max_value=None, colormap='rainbow'):
75 | tensor = tensor.detach().cpu()
76 | if max_value is None:
77 | max_value = tensor.max().item()
78 | if tensor.ndimension() == 2 or tensor.size(0) == 1:
79 | norm_array = tensor.squeeze().numpy()/max_value
80 | array = COLORMAPS[colormap](norm_array).astype(np.float32)
81 | array = array.transpose(2, 0, 1)[:3]
82 |
83 | elif tensor.ndimension() == 3:
84 | assert(tensor.size(0) == 3)
85 | array = 0.5 + tensor.numpy()*0.5
86 | return array
--------------------------------------------------------------------------------
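
The colormap helpers above are used to visualise single-channel tensors such as disparity maps. A minimal sketch (illustrative only; the tensor shape is arbitrary):

```python
import torch
from utils import tensor2array, sec_to_hm_str

disp = torch.rand(1, 96, 320)                  # fake single-channel disparity map
rgb = tensor2array(disp, colormap='magma')     # float array of shape (3, 96, 320), values in [0, 1]
print(rgb.shape)

print(sec_to_hm_str(10239))                    # -> '02h50m39s', matching the docstring example
```
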