├── misc
│   ├── arch.png
│   └── Arch_final.png
├── models
│   ├── __init__.py
│   ├── ObjPoseNet.py
│   ├── EgoPoseNet.py
│   ├── resnet_encoder.py
│   └── DispResNet.py
├── kitti_eval
│   ├── __pycache__
│   │   └── depth_evaluation_utils.cpython-37.pyc
│   ├── eval_depth.py
│   ├── save_depth.py
│   ├── depth_evaluation_utils.py
│   └── test_files_eigen.txt
├── requirements.txt
├── scripts
│   ├── train_cs.sh
│   ├── train_kt.sh
│   └── run_eigen_test.sh
├── README.md
├── flow_io.py
├── datasets
│   ├── validation_folders.py
│   └── sequence_folders.py
├── logger.py
├── custom_transforms_val.py
├── custom_transforms.py
├── flow_reversal.py
├── drawRobotics.py
├── utils.py
├── loss_functions.py
├── rigid_warp.py
└── train.py
/misc/arch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kieran514/Dyna-DM/HEAD/misc/arch.png
--------------------------------------------------------------------------------
/misc/Arch_final.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kieran514/Dyna-DM/HEAD/misc/Arch_final.png
--------------------------------------------------------------------------------
/models/__init__.py:
--------------------------------------------------------------------------------
1 | from .DispResNet import DispResNet
2 | from .EgoPoseNet import EgoPoseNet
3 | from .ObjPoseNet import ObjPoseNet
4 |
--------------------------------------------------------------------------------
/kitti_eval/__pycache__/depth_evaluation_utils.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kieran514/Dyna-DM/HEAD/kitti_eval/__pycache__/depth_evaluation_utils.cpython-37.pyc
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | opencv-python
2 | imageio
3 | matplotlib
4 | scipy==1.1.0
5 | scikit-image
6 | argparse
7 | tensorboardX
8 | blessings
9 | progressbar2
10 | path
11 | tqdm
12 | pypng
13 | open3d==0.8.0.0
--------------------------------------------------------------------------------
/scripts/train_cs.sh:
--------------------------------------------------------------------------------
1 | # Dataset directory
2 | TRAIN_SET=datasets/CityScape/cityscapes_256
3 |
4 |
5 | # Cityscapes model
6 | PRETRAINED=checkpoints/pretrained/CS
7 |
8 |
9 | # For training
10 | CUDA_VISIBLE_DEVICES=0 python train.py $TRAIN_SET \
11 | --pretrained-disp $PRETRAINED/resnet18_disp_cs.tar \
12 | --pretrained-ego-pose $PRETRAINED/resnet18_ego_cs.tar \
13 | --pretrained-obj-pose $PRETRAINED/resnet18_obj_cs.tar \
14 | -b 1 -p 2.0 -c 1.0 -s 0.3 -o 0.02 -mc 0.1 -hp 0.2 -dm 0 -mni 20 -dmni 20 -objsmall 0.75 -maxtheta 0.9 \
15 | --epoch-size 1000 \
16 | --epochs 40 \
17 | --with-ssim --with-mask --with-auto-mask \
18 | --name final_cs \
19 | --seed 42
20 | # --debug
21 |
22 |
23 |
--------------------------------------------------------------------------------
/scripts/train_kt.sh:
--------------------------------------------------------------------------------
1 | # Dataset directory
2 | TRAIN_SET=datasets/KITTI/kitti_256
3 |
4 |
5 | # KITTI model
6 | PRETRAINED=checkpoints/pretrained/CS+KITTI
7 |
8 |
9 | # For training
10 | CUDA_VISIBLE_DEVICES=0 python train.py $TRAIN_SET \
11 | --pretrained-disp $PRETRAINED/resnet18_disp_cs+kt.tar \
12 | --pretrained-ego-pose $PRETRAINED/resnet18_ego_cs+kt.tar \
13 | --pretrained-obj-pose $PRETRAINED/resnet18_obj_cs+kt.tar \
14 | -b 1 -p 2.0 -c 1.0 -s 0.3 -o 0.02 -mc 0.1 -hp 0.2 -dm 0 -mni 20 -dmni 20 -objsmall 0.75 -maxtheta 0.9 \
15 | --epoch-size 1000 \
16 | --epochs 40 \
17 | --with-ssim --with-mask --with-auto-mask \
18 | --with-gt \
19 | --name final_kt \
20 | --seed 42
21 | # --debug \
22 |
23 |
--------------------------------------------------------------------------------
/scripts/run_eigen_test.sh:
--------------------------------------------------------------------------------
1 | DATA_ROOT=datasets/RAW/
2 | TEST_FILE=kitti_eval/test_files_eigen.txt
3 | RESULTS_DIR=outputs/eigen_test/
4 | PRED_FILE=outputs/eigen_test/predictions.npy
5 | DISP_NET=checkpoints/pretrained/CS+KITTI/resnet18_disp_cs+kt.tar
6 |
7 | ### (1) Predict depth and save results to "$RESULTS_DIR/predictions.npy" ###
8 | CUDA_VISIBLE_DEVICES=0 python ./kitti_eval/save_depth.py --img-height 256 --img-width 832 \
9 | --pretrained-dispnet $DISP_NET --dataset-dir $DATA_ROOT --dataset-list $TEST_FILE --output-dir $RESULTS_DIR
10 |
11 |
12 | ### (2) Evaluate depth with GT ###
13 | python ./kitti_eval/eval_depth.py --kitti_dir $DATA_ROOT --pred_file $PRED_FILE --test_file_list $TEST_FILE
14 |
15 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Dyna-DM: Dynamic Object-aware Self-supervised Monocular Depth Maps
2 |
3 |
4 | >**Dyna-DM: Dynamic Object-aware Self-supervised Monocular Depth Maps**
5 | >
6 | >[[PDF](https://arxiv.org/pdf/2206.03799.pdf)]
7 |
8 |
9 |
10 |
11 |
12 |
13 | ## Install
14 |
15 | The models were trained using CUDA 11.1, Python 3.7.x (conda environment), and PyTorch 1.8.0.
16 |
17 | Create a conda environment with the PyTorch library:
18 |
19 | ```bash
20 | conda create -n my_env python=3.7.4 pytorch=1.8.0 torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
21 | conda activate my_env
22 | ```
23 |
24 | Install prerequisite packages listed in requirements.txt:
25 |
26 | ```bash
27 | pip3 install -r requirements.txt
28 | ```
29 |
30 | Also, be sure to install torch-scatter and torch-sparse:
31 | ```bash
32 | pip3 install torch-scatter==2.0.8 torch-sparse==0.6.12 -f https://pytorch-geometric.com/whl/torch-1.8.0+cu111.html
33 | ```
34 |
35 | ## Datasets
36 |
37 | We use the datasets provided by [Insta-DM](https://github.com/SeokjuLee/Insta-DM) and evaluate the model with the [KITTI Eigen Split](https://arxiv.org/abs/1406.2283) using the raw [KITTI dataset](http://www.cvlibs.net/download.php?file=raw_data_downloader.zip).
38 |
39 | ## Models
40 |
41 | Pretrained models for CityScape and KITTI+CityScape are provided [here](https://drive.google.com/drive/folders/1xY7n3kNhpoy1VM4ohmHYN1Oc_SVWwAWY?usp=sharing); the KITTI+CityScape model is trained on both CityScape and KITTI and yields the best depth estimates.
42 |
43 | ## Training
44 |
45 | The models can be trained on the KITTI dataset by running:
46 |
47 | ```bash
48 | bash scripts/train_kt.sh
49 | ```
50 |
51 | The models can also be trained on the CityScape dataset by running:
52 |
53 | ```bash
54 | bash scripts/train_cs.sh
55 | ```
56 |
57 | The hyperparameters are defined in each script file and set to the defaults stated in the paper.
58 |
59 | ## Evaluation
60 |
61 | We evaluate the models by running:
62 |
63 | ```bash
64 | bash scripts/run_eigen_test.sh
65 | ```
66 |
67 | ## References
68 |
69 | * [Insta-DM](https://github.com/SeokjuLee/Insta-DM) (AAAI 2021, our baseline framework)
70 |
71 | * [Struct2Depth](https://github.com/tensorflow/models/blob/archive/research/struct2depth) (AAAI 2019, object scale loss)
72 |
73 | * [SC-SfMLearner](https://github.com/JiawangBian/SC-SfMLearner-Release) (NeurIPS 2019)
74 |
75 |
76 |
77 |
--------------------------------------------------------------------------------
/flow_io.py:
--------------------------------------------------------------------------------
1 | #! /usr/bin/env python3
2 |
3 | """
4 | I/O script to save and load the data coming with the MPI-Sintel low-level
5 | computer vision benchmark.
6 |
7 | For more details about the benchmark, please visit www.mpi-sintel.de
8 |
9 | CHANGELOG:
10 | v1.0 (2015/02/03): First release
11 |
12 | Copyright (c) 2015 Jonas Wulff
13 | Max Planck Institute for Intelligent Systems, Tuebingen, Germany
14 |
15 | """
16 |
17 | # Requirements: Numpy and PIL/Pillow
18 | import numpy as np
19 | from PIL import Image
20 |
21 | # Check for endianness, based on Daniel Scharstein's optical flow code.
22 | # Using little-endian architecture, these two should be equal.
23 | TAG_FLOAT = 202021.25
24 | TAG_CHAR = 'PIEH'
25 |
26 | def flow_read(filename):
27 | """ Read optical flow from file, return (U,V) tuple.
28 |
29 | Original code by Deqing Sun, adapted from Daniel Scharstein.
30 | """
31 | f = open(filename,'rb')
32 | check = np.fromfile(f,dtype=np.float32,count=1)[0]
33 | assert check == TAG_FLOAT, ' flow_read:: Wrong tag in flow file (should be: {0}, is: {1}). Big-endian machine? '.format(TAG_FLOAT,check)
34 | width = np.fromfile(f,dtype=np.int32,count=1)[0]
35 | height = np.fromfile(f,dtype=np.int32,count=1)[0]
36 | size = width*height
37 | assert width > 0 and height > 0 and size > 1 and size < 100000000, ' flow_read:: Wrong input size (width = {0}, height = {1}).'.format(width,height)
38 | tmp = np.fromfile(f,dtype=np.float32,count=-1).reshape((height,width*2))
39 | u = tmp[:,np.arange(width)*2]
40 | v = tmp[:,np.arange(width)*2 + 1]
41 | return u,v
42 |
43 | def flow_write(filename,uv,v=None):
44 | """ Write optical flow to file.
45 |
46 | If v is None, uv is assumed to contain both u and v channels,
47 | stacked in depth.
48 |
49 | Original code by Deqing Sun, adapted from Daniel Scharstein.
50 | """
51 | nBands = 2
52 |
53 | if v is None:
54 | assert(uv.ndim == 3)
55 | assert(uv.shape[2] == 2)
56 | u = uv[:,:,0]
57 | v = uv[:,:,1]
58 | else:
59 | u = uv
60 |
61 | assert(u.shape == v.shape)
62 | height,width = u.shape
63 | f = open(filename,'wb')
64 | # write the header
65 |     f.write(TAG_CHAR.encode('utf-8'))  # the file is opened in binary mode, so the tag must be written as bytes
66 | np.array(width).astype(np.int32).tofile(f)
67 | np.array(height).astype(np.int32).tofile(f)
68 | # arrange into matrix form
69 | tmp = np.zeros((height, width*nBands))
70 | tmp[:,np.arange(width)*2] = u
71 | tmp[:,np.arange(width)*2 + 1] = v
72 | tmp.astype(np.float32).tofile(f)
73 | f.close()
74 |
75 |
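76 | # Minimal round-trip sketch (illustration only; not part of the original
77 | # MPI-Sintel release): write a random flow field to a temporary .flo file
78 | # and read it back to sanity-check the I/O; the /tmp path is arbitrary.
79 | if __name__ == '__main__':
80 |     u0 = np.random.rand(4, 5).astype(np.float32)
81 |     v0 = np.random.rand(4, 5).astype(np.float32)
82 |     flow_write('/tmp/_roundtrip.flo', u0, v0)
83 |     u1, v1 = flow_read('/tmp/_roundtrip.flo')
84 |     assert np.allclose(u0, u1) and np.allclose(v0, v1)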
--------------------------------------------------------------------------------
/models/ObjPoseNet.py:
--------------------------------------------------------------------------------
1 | '''
2 | Reference: https://github.com/SeokjuLee/Insta-DM/blob/master/models/ObjPoseNet.py
3 | '''
4 |
5 | from __future__ import absolute_import, division, print_function
6 | import torch
7 | import torch.nn as nn
8 | from collections import OrderedDict
9 | from .resnet_encoder import *
10 | import numpy as np
11 | import pdb
12 |
13 |
14 |
15 | class PoseDecoder(nn.Module):
16 | def __init__(self, num_ch_enc, num_input_features=1, num_frames_to_predict_for=1, stride=1):
17 | super(PoseDecoder, self).__init__()
18 |
19 | self.num_ch_enc = num_ch_enc
20 | self.num_input_features = num_input_features
21 |
22 | if num_frames_to_predict_for is None:
23 | num_frames_to_predict_for = num_input_features - 1
24 | self.num_frames_to_predict_for = num_frames_to_predict_for
25 |
26 | self.conv_squeeze = nn.Conv2d(self.num_ch_enc[-1], 256, 1)
27 |
28 | self.convs_pose = []
29 | self.convs_pose.append(nn.Conv2d(num_input_features * 256, 256, 3, stride, 1))
30 | self.convs_pose.append(nn.Conv2d(256, 256, 3, stride, 1))
31 | self.convs_pose.append(nn.Conv2d(256, 3 * num_frames_to_predict_for, 1))
32 |
33 | self.relu = nn.ReLU()
34 |
35 | self.convs_pose = nn.ModuleList(list(self.convs_pose))
36 |
37 | def forward(self, input_features):
38 | last_features = [f[-1] for f in input_features]
39 |
40 | cat_features = [self.relu(self.conv_squeeze(f)) for f in last_features]
41 | cat_features = torch.cat(cat_features, 1)
42 |
43 | out = cat_features
44 | for i in range(3):
45 | out = self.convs_pose[i](out)
46 | if i != 2:
47 | out = self.relu(out)
48 |
49 | out = out.mean(dim=[2,3])
50 |
51 | pose = 0.01 * out.view(-1, 3)
52 |
53 | return pose
54 |
55 |
56 |
57 | class ObjPoseNet(nn.Module):
58 |
59 | def __init__(self, num_layers=18, pretrained=True):
60 | super(ObjPoseNet, self).__init__()
61 | self.encoder = ResnetEncoder(num_layers = num_layers, pretrained = pretrained, num_input_images=2)
62 | self.decoder = PoseDecoder(self.encoder.num_ch_enc)
63 |
64 | def init_weights(self):
65 | pass
66 |
67 | def forward(self, img1, img2):
68 | x = torch.cat([img1,img2],1)
69 | features = self.encoder(x)
70 | pose = self.decoder([features])
71 | return pose
72 |
73 | if __name__ == "__main__":
74 |
75 | torch.backends.cudnn.benchmark = True
76 |
77 | model = ObjPoseNet().cuda()
78 | model.train()
79 |
80 | tgt_img = torch.randn(4, 3, 256, 832).cuda()
81 | ref_imgs = [torch.randn(4, 3, 256, 832).cuda() for i in range(2)]
82 |
83 | pose = model(tgt_img, ref_imgs[0])
84 |
85 | print(pose.size())
--------------------------------------------------------------------------------
/models/EgoPoseNet.py:
--------------------------------------------------------------------------------
1 | '''
2 | Reference: https://github.com/SeokjuLee/Insta-DM/blob/master/models/EgoPoseNet.py
3 |
4 | '''
5 |
6 | from __future__ import absolute_import, division, print_function
7 | import torch
8 | import torch.nn as nn
9 | from collections import OrderedDict
10 | from .resnet_encoder import *
11 | import numpy as np
12 | import pdb
13 |
14 |
15 |
16 | class PoseDecoder(nn.Module):
17 | def __init__(self, num_ch_enc, num_input_features=1, num_frames_to_predict_for=1, stride=1):
18 | super(PoseDecoder, self).__init__()
19 |
20 | self.num_ch_enc = num_ch_enc
21 | self.num_input_features = num_input_features
22 |
23 | if num_frames_to_predict_for is None:
24 | num_frames_to_predict_for = num_input_features - 1
25 | self.num_frames_to_predict_for = num_frames_to_predict_for
26 |
27 | self.conv_squeeze = nn.Conv2d(self.num_ch_enc[-1], 256, 1)
28 |
29 | self.convs_pose = []
30 | self.convs_pose.append(nn.Conv2d(num_input_features * 256, 256, 3, stride, 1))
31 | self.convs_pose.append(nn.Conv2d(256, 256, 3, stride, 1))
32 | self.convs_pose.append(nn.Conv2d(256, 6 * num_frames_to_predict_for, 1))
33 |
34 | self.relu = nn.ReLU()
35 |
36 | self.convs_pose = nn.ModuleList(list(self.convs_pose))
37 |
38 | def forward(self, input_features):
39 | last_features = [f[-1] for f in input_features]
40 |
41 | cat_features = [self.relu(self.conv_squeeze(f)) for f in last_features]
42 | cat_features = torch.cat(cat_features, 1)
43 |
44 | out = cat_features
45 | for i in range(3):
46 | out = self.convs_pose[i](out)
47 | if i != 2:
48 | out = self.relu(out)
49 |
50 | out = out.mean(dim=[2,3])
51 |
52 | pose = 0.01 * out.view(-1, 6)
53 |
54 | return pose
55 |
56 |
57 |
58 | class EgoPoseNet(nn.Module):
59 |
60 | def __init__(self, num_layers=18, pretrained=True):
61 | super(EgoPoseNet, self).__init__()
62 | self.encoder = ResnetEncoder(num_layers = num_layers, pretrained = pretrained, num_input_images=2)
63 | self.decoder = PoseDecoder(self.encoder.num_ch_enc)
64 |
65 | def init_weights(self):
66 | pass
67 |
68 | def forward(self, img1, img2):
69 | x = torch.cat([img1,img2],1)
70 | features = self.encoder(x)
71 | pose = self.decoder([features])
72 | return pose
73 |
74 | if __name__ == "__main__":
75 |
76 | torch.backends.cudnn.benchmark = True
77 |
78 | model = EgoPoseNet().cuda()
79 | model.train()
80 |
81 | tgt_img = torch.randn(4, 3, 256, 832).cuda()
82 | ref_imgs = [torch.randn(4, 3, 256, 832).cuda() for i in range(2)]
83 |
84 | pose = model(tgt_img, ref_imgs[0])
85 |
86 | print(pose.size())
--------------------------------------------------------------------------------
/datasets/validation_folders.py:
--------------------------------------------------------------------------------
1 | '''
2 | Reference: https://github.com/SeokjuLee/Insta-DM/blob/master/datasets/validation_folders.py
3 | '''
4 |
5 | import torch
6 | import torch.utils.data as data
7 | import numpy as np
8 | from imageio import imread
9 | from path import Path
10 | import cv2
11 |
12 | from matplotlib import pyplot as plt
13 | import pdb
14 |
15 | def crawl_folders(folders_list):
16 | imgs = []
17 | depth = []
18 | segs = []
19 | for folder in folders_list:
20 | current_imgs = sorted(folder.files('*.jpg'))
21 | imgs.extend(current_imgs)
22 | for img in current_imgs:
23 | # Fetch depth file
24 | dd = img.dirname()/(img.name[:-4] + '.npy')
25 | assert(dd.isfile()), "depth file {} not found".format(str(dd))
26 | depth.append(dd)
27 | # Fetch segmentation file
28 | ss = folder.dirname().parent/'segmentation'/folder.basename()/(img.name[:-4] + '.npy')
29 | assert(ss.isfile()), "segmentation file {} not found".format(str(ss))
30 | segs.append(ss)
31 |
32 | return imgs, depth, segs
33 |
34 |
35 | def load_as_float(path):
36 | return imread(path).astype(np.float32)
37 |
38 |
39 | class ValidationSet(data.Dataset):
40 | """A sequence data loader where the files are arranged in this way:
41 | root/scene_1/0000000.jpg
42 | root/scene_1/0000000.npy
43 | root/scene_1/0000001.jpg
44 | root/scene_1/0000001.npy
45 | ..
46 | root/scene_2/0000000.jpg
47 | root/scene_2/0000000.npy
48 | .
49 |
50 |     transform functions must take in a list of images and a numpy array which can be None
51 | """
52 |
53 | def __init__(self, root, transform=None):
54 | self.root = Path(root)
55 | scene_list_path = self.root/'val.txt'
56 | self.scenes = [self.root/'image'/folder[:-1] for folder in open(scene_list_path)]
57 | self.imgs, self.depth, self.segs = crawl_folders(self.scenes)
58 | self.transform = transform
59 |
60 | def __getitem__(self, index):
61 | img = load_as_float(self.imgs[index]) # H x W x 3
62 | depth = np.load(self.depth[index]).astype(np.float32) # H x W
63 | seg = torch.from_numpy(np.load(self.segs[index]).astype(np.float32)) # N x H X W
64 |
65 | # # Re-ordering segmentation by each mask size
66 | # seg_sort = torch.cat([torch.zeros(1).long(), seg.sum(dim=(1,2)).argsort(descending=True)[:-1]], dim=0)
67 | # seg = seg[seg_sort]
68 |
69 | # Sum segmentation for every mask
70 | seg = seg.sum(dim=0, keepdim=False).clamp(min=0.0, max=1.0) # H x W
71 |
72 | if self.transform is not None:
73 | img, _ = self.transform([img], None)
74 | img = img[0]
75 | return img, depth, seg
76 |
77 | def __len__(self):
78 | return len(self.imgs)
79 |
--------------------------------------------------------------------------------
/logger.py:
--------------------------------------------------------------------------------
1 | from blessings import Terminal
2 | import progressbar
3 | import sys
4 |
5 |
6 | class TermLogger(object):
7 | def __init__(self, n_epochs, train_size, valid_size):
8 | self.n_epochs = n_epochs
9 | self.train_size = train_size
10 | self.valid_size = valid_size
11 | self.t = Terminal()
12 | s = 10
13 | e = 1 # epoch bar position
14 | tr = 3 # train bar position
15 | ts = 6 # valid bar position
16 | value = self.t.height
17 | h = int(0 if value is None else value)
18 |
19 | for i in range(10):
20 | print('')
21 | self.epoch_bar = progressbar.ProgressBar(max_value=n_epochs, fd=Writer(self.t, (0, h-s+e)))
22 |
23 | self.train_writer = Writer(self.t, (0, h-s+tr))
24 | self.train_bar_writer = Writer(self.t, (0, h-s+tr+1))
25 |
26 | self.valid_writer = Writer(self.t, (0, h-s+ts))
27 | self.valid_bar_writer = Writer(self.t, (0, h-s+ts+1))
28 |
29 | self.reset_train_bar()
30 | self.reset_valid_bar()
31 |
32 | def reset_train_bar(self):
33 | self.train_bar = progressbar.ProgressBar(max_value=self.train_size, fd=self.train_bar_writer)
34 |
35 | def reset_valid_bar(self):
36 | self.valid_bar = progressbar.ProgressBar(max_value=self.valid_size, fd=self.valid_bar_writer)
37 |
38 |
39 | class Writer(object):
40 | """Create an object with a write method that writes to a
41 | specific place on the screen, defined at instantiation.
42 |
43 | This is the glue between blessings and progressbar.
44 | """
45 |
46 | def __init__(self, t, location):
47 | """
48 | Input: location - tuple of ints (x, y), the position
49 | of the bar in the terminal
50 | """
51 | self.location = location
52 | self.t = t
53 |
54 | def write(self, string):
55 | with self.t.location(*self.location):
56 | sys.stdout.write("\033[K")
57 | print(string)
58 |
59 | def flush(self):
60 | return
61 |
62 |
63 | class AverageMeter(object):
64 | """Computes and stores the average and current value"""
65 |
66 | def __init__(self, i=1, precision=3):
67 | self.meters = i
68 | self.precision = precision
69 | self.reset(self.meters)
70 |
71 | def reset(self, i):
72 | self.val = [0]*i
73 | self.avg = [0]*i
74 | self.sum = [0]*i
75 | self.count = 0
76 |
77 | def update(self, val, n=1):
78 | if not isinstance(val, list):
79 | val = [val]
80 | assert(len(val) == self.meters)
81 | self.count += n
82 | for i,v in enumerate(val):
83 | self.val[i] = v
84 | self.sum[i] += v * n
85 | self.avg[i] = self.sum[i] / self.count
86 |
87 | def __repr__(self):
88 | val = ' '.join(['{:.{}f}'.format(v, self.precision) for v in self.val])
89 | avg = ' '.join(['{:.{}f}'.format(a, self.precision) for a in self.avg])
90 | return '{} ({})'.format(val, avg)
91 |
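92 | if __name__ == '__main__':
93 |     # Minimal sketch (illustration only): track two running losses with
94 |     # AverageMeter; __repr__ prints current values followed by running averages.
95 |     meter = AverageMeter(i=2, precision=4)
96 |     meter.update([0.5, 1.0])
97 |     meter.update([0.3, 0.8])
98 |     print(meter)  # 0.3000 0.8000 (0.4000 0.9000)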
--------------------------------------------------------------------------------
/custom_transforms_val.py:
--------------------------------------------------------------------------------
1 | from __future__ import division
2 | import torch
3 | import random
4 | import numpy as np
5 | from scipy.misc import imresize
6 | import cv2
7 |
8 | from matplotlib import pyplot as plt
9 | import pdb
10 |
11 | '''Set of random transform routines that take lists of inputs as arguments,
12 | in order to apply random but coherent transformations.'''
13 |
14 |
15 | class Compose(object):
16 | def __init__(self, transforms):
17 | self.transforms = transforms
18 |
19 | def __call__(self, images, intrinsics):
20 | for t in self.transforms:
21 | images, intrinsics = t(images, intrinsics)
22 | return images, intrinsics
23 |
24 |
25 | class Normalize(object):
26 | def __init__(self, mean, std):
27 | self.mean = mean
28 | self.std = std
29 |
30 | def __call__(self, images, intrinsics):
31 | for tensor in images:
32 | for t, m, s in zip(tensor, self.mean, self.std):
33 | t.sub_(m).div_(s)
34 | return images, intrinsics
35 |
36 |
37 | class ArrayToTensor(object):
38 |     """Converts a list of numpy.ndarray (H x W x C) along with an intrinsics matrix to a list of torch.FloatTensor of shape (C x H x W) with an intrinsics tensor."""
39 |
40 | def __call__(self, images, intrinsics):
41 | tensors = []
42 | # pdb.set_trace()
43 | for im in images:
44 | # put it from HWC to CHW format
45 | im = np.transpose(im, (2, 0, 1))
46 | # handle numpy array
47 | tensors.append(torch.from_numpy(im).float()/255)
48 | return tensors, intrinsics
49 |
50 |
51 | class RandomHorizontalFlip(object):
52 | """Randomly horizontally flips the given numpy array with a probability of 0.5"""
53 |
54 | def __call__(self, images, intrinsics):
55 | assert intrinsics is not None
56 | if random.random() < 0.5:
57 | output_intrinsics = np.copy(intrinsics)
58 | output_images = [np.copy(np.fliplr(im)) for im in images]
59 | w = output_images[0].shape[1]
60 | output_intrinsics[0,2] = w - output_intrinsics[0,2]
61 | else:
62 | output_images = images
63 | output_intrinsics = intrinsics
64 | return output_images, output_intrinsics
65 |
66 |
67 | class RandomScaleCrop(object):
68 |     """Randomly zooms images up to 15% and crops them to keep the same size as before."""
69 |
70 | def __call__(self, images, intrinsics):
71 | assert intrinsics is not None
72 | output_intrinsics = np.copy(intrinsics)
73 |
74 | in_h, in_w, _ = images[0].shape
75 | x_scaling, y_scaling = np.random.uniform(1,1.15,2)
76 | scaled_h, scaled_w = int(in_h * y_scaling), int(in_w * x_scaling)
77 |
78 | output_intrinsics[0] *= x_scaling
79 | output_intrinsics[1] *= y_scaling
80 | scaled_images = [imresize(im, (scaled_h, scaled_w)) for im in images]
81 |
82 | offset_y = np.random.randint(scaled_h - in_h + 1)
83 | offset_x = np.random.randint(scaled_w - in_w + 1)
84 | cropped_images = [im[offset_y:offset_y + in_h, offset_x:offset_x + in_w] for im in scaled_images]
85 |
86 | output_intrinsics[0,2] -= offset_x
87 | output_intrinsics[1,2] -= offset_y
88 |
89 | return cropped_images, output_intrinsics
90 |
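91 | # Minimal usage sketch (illustration only): unlike custom_transforms.py, these
92 | # validation transforms take (images, intrinsics) pairs, and intrinsics may be
93 | # None when no transform that needs it is applied.
94 | if __name__ == '__main__':
95 |     transform = Compose([ArrayToTensor(), Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])])
96 |     imgs = [255 * np.random.rand(256, 832, 3).astype(np.float32)]
97 |     out_imgs, _ = transform(imgs, None)
98 |     print(out_imgs[0].shape)  # torch.Size([3, 256, 832])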
--------------------------------------------------------------------------------
/kitti_eval/eval_depth.py:
--------------------------------------------------------------------------------
1 | '''
2 | Reference: https://github.com/SeokjuLee/Insta-DM/blob/master/kitti_eval/eval_depth.py
3 | '''
4 |
5 | from __future__ import division
6 | import sys
7 | import cv2
8 | import os
9 | import numpy as np
10 | import argparse
11 | from depth_evaluation_utils import *
12 | from tqdm import tqdm
13 |
14 | from matplotlib import pyplot as plt
15 | import pdb
16 |
17 | parser = argparse.ArgumentParser()
18 | parser.add_argument("--kitti_dir", type=str, help='Path to the KITTI dataset directory')
19 | parser.add_argument("--pred_file", type=str, help="Path to the prediction file")
20 | parser.add_argument("--test_file_list", type=str, default='./data/kitti/test_files_eigen.txt', help="Path to the list of test files")
21 | parser.add_argument('--min_depth', type=float, default=1e-3, help="Threshold for minimum depth")
22 | parser.add_argument('--max_depth', type=float, default=80, help="Threshold for maximum depth")
23 | args = parser.parse_args()
24 |
25 | def main():
26 | pred_depths = np.load(args.pred_file)
27 | test_files = read_text_lines(args.test_file_list)
28 | gt_files, gt_calib, im_sizes, im_files, cams = read_file_data(test_files, args.kitti_dir)
29 | num_test = len(im_files)
30 | gt_depths = []
31 | pred_depths_resized = []
32 |
33 | print('=> Prepare predicted depth and GT')
34 | for t_id in tqdm(range(num_test)):
35 | camera_id = cams[t_id] # 2 is left, 3 is right
36 | pred_depths_resized.append( cv2.resize(pred_depths[t_id], (im_sizes[t_id][1], im_sizes[t_id][0]), interpolation=cv2.INTER_LINEAR) )
37 | depth = generate_depth_map(gt_calib[t_id], gt_files[t_id], im_sizes[t_id], camera_id, False, True)
38 | gt_depths.append( depth.astype(np.float32) )
39 |
40 | pred_depths = pred_depths_resized
41 |
42 | rms = np.zeros(num_test, np.float32)
43 | log_rms = np.zeros(num_test, np.float32)
44 | abs_rel = np.zeros(num_test, np.float32)
45 | sq_rel = np.zeros(num_test, np.float32)
46 | a1 = np.zeros(num_test, np.float32)
47 | a2 = np.zeros(num_test, np.float32)
48 | a3 = np.zeros(num_test, np.float32)
49 |
50 | print('=> Compute results')
51 | for i in tqdm(range(num_test)):
52 | gt_depth = gt_depths[i]
53 | pred_depth = np.copy(pred_depths[i])
54 |
55 | mask = np.logical_and(gt_depth > args.min_depth,
56 | gt_depth < args.max_depth)
57 |         # crop used by Garg ECCV16 to reproduce Eigen NIPS14 results
58 | # if used on gt_size 370x1224 produces a crop of [-218, -3, 44, 1180]
59 | gt_height, gt_width = gt_depth.shape
60 | crop = np.array([0.40810811 * gt_height, 0.99189189 * gt_height,
61 | 0.03594771 * gt_width, 0.96405229 * gt_width]).astype(np.int32)
62 |
63 | crop_mask = np.zeros(mask.shape)
64 | crop_mask[crop[0]:crop[1],crop[2]:crop[3]] = 1
65 | mask = np.logical_and(mask, crop_mask)
66 |
67 | # Scale matching
68 | scalor = np.median(gt_depth[mask]) / np.median(pred_depth[mask])
69 | pred_depth[mask] *= scalor
70 |
71 | pred_depth[pred_depth < args.min_depth] = args.min_depth
72 | pred_depth[pred_depth > args.max_depth] = args.max_depth
73 | abs_rel[i], sq_rel[i], rms[i], log_rms[i], a1[i], a2[i], a3[i] = compute_errors(gt_depth[mask], pred_depth[mask])
74 |
75 | print("{:>10}, {:>10}, {:>10}, {:>10}, {:>10}, {:>10}, {:>10}".format('abs_rel', 'sq_rel', 'rms', 'log_rms', 'a1', 'a2', 'a3'))
76 | print("{:10.4f}, {:10.4f}, {:10.4f}, {:10.4f}, {:10.4f}, {:10.4f}, {:10.4f}".format(abs_rel.mean(), sq_rel.mean(), rms.mean(), log_rms.mean(), a1.mean(), a2.mean(), a3.mean()))
77 |
78 | if __name__ == '__main__':
79 |     main()
--------------------------------------------------------------------------------
/kitti_eval/save_depth.py:
--------------------------------------------------------------------------------
1 | '''
2 | Reference: https://github.com/SeokjuLee/Insta-DM/blob/master/kitti_eval/save_depth.py
3 |
4 | '''
5 |
6 | import torch
7 | from skimage.transform import resize as imresize
8 | from imageio import imread
9 | import numpy as np
10 | from path import Path
11 | import argparse
12 | import datetime
13 | from tqdm import tqdm
14 | import os,sys
15 |
16 | from matplotlib import pyplot as plt
17 | import pdb
18 |
19 | parser = argparse.ArgumentParser(description='Script for DispNet testing with corresponding ground truth', formatter_class=argparse.ArgumentDefaultsHelpFormatter)
20 | parser.add_argument("--pretrained-dispnet", required=True, type=str, help="pretrained DispNet path")
21 | parser.add_argument("--img-height", default=256, type=int, help="Image height")
22 | parser.add_argument("--img-width", default=832, type=int, help="Image width")
23 | parser.add_argument("--no-resize", action='store_true', help="no resizing is done")
24 | parser.add_argument("--min-depth", default=1e-3, type=float)
25 | parser.add_argument("--max-depth", default=80, type=float)
26 | parser.add_argument("--dataset-dir", default='.', type=str, help="Dataset directory")
27 | parser.add_argument("--dataset-list", default=None, type=str, help="Dataset list file")
28 | parser.add_argument("--output-dir", default=None, required=True, type=str, help="Output directory for saving predictions in a big 3D numpy file")
29 |
30 |
31 | device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
32 |
33 | def load_tensor_image(filename, args):
34 | img = imread(filename).astype(np.float32)
35 | h,w,_ = img.shape
36 | if (not args.no_resize) and (h != args.img_height or w != args.img_width):
37 | img = imresize(img, (args.img_height, args.img_width)).astype(np.float32)
38 | img = np.transpose(img, (2, 0, 1))
39 | tensor_img = ((torch.from_numpy(img).unsqueeze(0)/255 - 0.5)/0.5).to(device)
40 | return tensor_img, img
41 |
42 | @torch.no_grad()
43 | def main():
44 | args = parser.parse_args()
45 |
46 | print("=> Tested at {}".format(datetime.datetime.now().strftime("%m-%d-%H:%M")))
47 |
48 | print('=> Load dispnet model from {}'.format(args.pretrained_dispnet))
49 |
50 | sys.path.insert(1, os.path.join(sys.path[0], '..'))
51 | import models
52 |
53 | disp_net = models.DispResNet().to(device)
54 |     weights = torch.load(args.pretrained_dispnet, map_location=device)  # reuse the CPU/GPU fallback selected above
55 | disp_net.load_state_dict(weights['state_dict'])
56 | disp_net.eval()
57 |
58 | dataset_dir = Path(args.dataset_dir)
59 | with open(args.dataset_list, 'r') as f:
60 | test_files = list(f.read().splitlines())
61 | print('=> {} files to test'.format(len(test_files)))
62 |
63 | output_dir = Path(args.output_dir)
64 | output_dir.makedirs_p()
65 |
66 | for j in tqdm(range(len(test_files))):
67 | tgt_img, ori_img = load_tensor_image(dataset_dir + test_files[j], args)
68 | pred_disp = disp_net(tgt_img).cpu().numpy()[0,0]
69 | # pdb.set_trace()
70 | '''
71 | fig = plt.figure(9, figsize=(8, 10))
72 | fig.add_subplot(2,1,1)
73 | plt.imshow(ori_img.transpose(1,2,0)/255, vmin=0, vmax=1), plt.grid(linestyle=':', linewidth=0.4), plt.colorbar()
74 | fig.add_subplot(2,1,2)
75 | plt.imshow(pred_disp), plt.grid(linestyle=':', linewidth=0.4), plt.colorbar()
76 | fig.tight_layout(), plt.ion(), plt.show()
77 | '''
78 |
79 | if j == 0:
80 | predictions = np.zeros((len(test_files), *pred_disp.shape))
81 | predictions[j] = 1/pred_disp
82 |
83 | np.save(output_dir/'predictions.npy', predictions)
84 |
85 |
86 | if __name__ == '__main__':
87 | main()
--------------------------------------------------------------------------------
/custom_transforms.py:
--------------------------------------------------------------------------------
1 | '''
2 | Reference: https://github.com/SeokjuLee/Insta-DM/blob/master/custom_transforms.py
3 | '''
4 |
5 | from __future__ import division
6 | import torch
7 | import random
8 | import numpy as np
9 | from scipy.misc import imresize
10 | import cv2
11 | from matplotlib import pyplot as plt
12 | import pdb
13 |
14 | '''Set of random transform routines that take lists of inputs as arguments,
15 | in order to apply random but coherent transformations.'''
16 |
17 |
18 | class Compose(object):
19 | def __init__(self, transforms):
20 | self.transforms = transforms
21 |
22 | def __call__(self, images, segms, intrinsics):
23 | for t in self.transforms:
24 | images, segms, intrinsics = t(images, segms, intrinsics)
25 | return images, segms, intrinsics
26 |
27 |
28 | class Normalize(object):
29 | def __init__(self, mean, std):
30 | self.mean = mean
31 | self.std = std
32 |
33 | def __call__(self, images, segms, intrinsics):
34 | for tensor in images:
35 | for t, m, s in zip(tensor, self.mean, self.std):
36 | t.sub_(m).div_(s)
37 | return images, segms, intrinsics
38 |
39 |
40 | class ArrayToTensor(object):
41 |     """Converts a list of numpy.ndarray (H x W x C) along with an intrinsics matrix to a list of torch.FloatTensor of shape (C x H x W) with an intrinsics tensor."""
42 |
43 | def __call__(self, images, segms, intrinsics):
44 | img_tensors = []
45 | seg_tensors = []
46 | for im in images:
47 | im = np.transpose(im, (2, 0, 1)) # put it from HWC to CHW format
48 | img_tensors.append(torch.from_numpy(im).float()/255) # handle numpy array
49 | for im in segms:
50 | im = np.transpose(im, (2, 0, 1))
51 | seg_tensors.append(torch.from_numpy(im).float())
52 | return img_tensors, seg_tensors, intrinsics
53 |
54 |
55 | class RandomHorizontalFlip(object):
56 | """Randomly horizontally flips the given numpy array with a probability of 0.5"""
57 |
58 | def __call__(self, images, segms, intrinsics):
59 | assert intrinsics is not None
60 | if random.random() < 0.5:
61 | output_intrinsics = np.copy(intrinsics)
62 | output_images = [np.copy(np.fliplr(im)) for im in images]
63 | output_segms = [np.copy(np.fliplr(im)) for im in segms]
64 |
65 | w = output_images[0].shape[1]
66 | output_intrinsics[0,2] = w - output_intrinsics[0,2]
67 | else:
68 | output_images = images
69 | output_segms = segms
70 | output_intrinsics = intrinsics
71 | return output_images, output_segms, output_intrinsics
72 |
73 |
74 | class RandomScaleCrop(object):
75 |     """Randomly zooms images up to 15% and crops them to keep the same size as before."""
76 |
77 | def __call__(self, images, segms, intrinsics):
78 | assert intrinsics is not None
79 | output_intrinsics = np.copy(intrinsics)
80 |
81 | in_h, in_w, _ = images[0].shape
82 | x_scaling, y_scaling = np.random.uniform(1, 1.15, 2)
83 | scaled_h, scaled_w = int(in_h * y_scaling), int(in_w * x_scaling)
84 |
85 | output_intrinsics[0] *= x_scaling
86 | output_intrinsics[1] *= y_scaling
87 |
88 |         scaled_images = [imresize(im, (scaled_h, scaled_w)) for im in images]  # scipy.misc.imresize rescales its output to the 0-255 range!
89 |         scaled_segms = [cv2.resize(im, (scaled_w, scaled_h), interpolation=cv2.INTER_NEAREST) for im in segms]  # cv2.resize collapses the one-channel segments from [256 x 832 x 1] to [256 x 832] here!
90 |
91 | offset_y = np.random.randint(scaled_h - in_h + 1)
92 | offset_x = np.random.randint(scaled_w - in_w + 1)
93 | cropped_images = [im[offset_y:offset_y + in_h, offset_x:offset_x + in_w] for im in scaled_images]
94 | cropped_segms = [im[offset_y:offset_y + in_h, offset_x:offset_x + in_w] for im in scaled_segms]
95 |
96 | output_intrinsics[0,2] -= offset_x
97 | output_intrinsics[1,2] -= offset_y
98 |
99 | return cropped_images, cropped_segms, output_intrinsics
100 |
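101 | # Minimal usage sketch (illustration only; the shapes and intrinsics below are
102 | # dummy values): compose the flip/tensor/normalize transforms and run them on
103 | # random image, segmentation, and intrinsics inputs.
104 | if __name__ == '__main__':
105 |     transform = Compose([RandomHorizontalFlip(),
106 |                          ArrayToTensor(),
107 |                          Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])])
108 |     imgs = [255 * np.random.rand(256, 832, 3).astype(np.float32) for _ in range(3)]
109 |     segs = [np.random.rand(256, 832, 1).astype(np.float32) for _ in range(3)]
110 |     intrinsics = np.array([[240., 0., 416.], [0., 240., 128.], [0., 0., 1.]], dtype=np.float32)
111 |     out_imgs, out_segs, out_K = transform(imgs, segs, intrinsics)
112 |     print(out_imgs[0].shape, out_segs[0].shape)  # (3, 256, 832) and (1, 256, 832)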
--------------------------------------------------------------------------------
/flow_reversal.py:
--------------------------------------------------------------------------------
1 | ### Reference: Quadratic Video Interpolation (NeurIPS'19)
2 | ### class WarpLayer warps image x based on optical flow flo.
3 |
4 | import numpy
5 | import torch
6 | import torch.nn as nn
7 | import torch.nn.functional as F
8 | import pdb
9 |
10 | class FlowReversal(nn.Module):
11 | """docstring for WarpLayer"""
12 | def __init__(self,):
13 | super(FlowReversal, self).__init__()
14 |
15 |
16 | def forward(self, img, flo):
17 | """
18 | -img: image (N, C, H, W)
19 | -flo: optical flow (N, 2, H, W)
20 | elements of flo is in [0, H] and [0, W] for dx, dy
21 |
22 | """
23 |
24 |
25 | # (x1, y1) (x1, y2)
26 | # +---------------+
27 | # | |
28 | # | o(x, y) |
29 | # | |
30 | # | |
31 | # | |
32 | # | |
33 | # +---------------+
34 | # (x2, y1) (x2, y2)
35 |
36 |
37 | N, C, _, _ = img.size()
38 |
39 | # translate start-point optical flow to end-point optical flow
40 |         y = flo[:, 0:1, :, :]
41 | x = flo[:, 1:2, :, :]
42 |
43 | x = x.repeat(1, C, 1, 1)
44 | y = y.repeat(1, C, 1, 1)
45 |
46 |         # Four points of the square: (x1, y1), (x1, y2), (x2, y1), (x2, y2)
47 | x1 = torch.floor(x)
48 | x2 = x1 + 1
49 | y1 = torch.floor(y)
50 | y2 = y1 + 1
51 |
52 | # firstly, get gaussian weights
53 | w11, w12, w21, w22 = self.get_gaussian_weights(x, y, x1, x2, y1, y2)
54 | # pdb.set_trace()
55 |
56 | # secondly, sample each weighted corner
57 | img11, o11 = self.sample_one(img, x1, y1, w11)
58 | img12, o12 = self.sample_one(img, x1, y2, w12)
59 | img21, o21 = self.sample_one(img, x2, y1, w21)
60 | img22, o22 = self.sample_one(img, x2, y2, w22)
61 |
62 |
63 | imgw = img11 + img12 + img21 + img22
64 | o = o11 + o12 + o21 + o22
65 |
66 | return imgw, o
67 |
68 |
69 | def get_gaussian_weights(self, x, y, x1, x2, y1, y2):
70 | w11 = torch.exp(-((x - x1)**2 + (y - y1)**2))
71 | w12 = torch.exp(-((x - x1)**2 + (y - y2)**2))
72 | w21 = torch.exp(-((x - x2)**2 + (y - y1)**2))
73 | w22 = torch.exp(-((x - x2)**2 + (y - y2)**2))
74 |
75 | return w11, w12, w21, w22
76 |
77 |
78 | def sample_one(self, img, shiftx, shifty, weight):
79 | """
80 | Input:
81 | -img (N, C, H, W)
82 | -shiftx, shifty (N, c, H, W)
83 | """
84 |
85 | N, C, H, W = img.size()
86 | # pdb.set_trace()
87 |
88 | # flatten all (all restored as Tensors)
89 | flat_shiftx = shiftx.view(-1)
90 | flat_shifty = shifty.view(-1)
91 | flat_basex = torch.arange(0, H, requires_grad=False).view(-1, 1)[None, None].cuda().long().repeat(N, C, 1, W).view(-1)
92 | flat_basey = torch.arange(0, W, requires_grad=False).view(1, -1)[None, None].cuda().long().repeat(N, C, H, 1).view(-1)
93 | flat_weight = weight.view(-1)
94 | flat_img = img.reshape(-1)
95 |
96 |
97 | # The corresponding positions in I1
98 | idxn = torch.arange(0, N, requires_grad=False).view(N, 1, 1, 1).long().cuda().repeat(1, C, H, W).view(-1)
99 | idxc = torch.arange(0, C, requires_grad=False).view(1, C, 1, 1).long().cuda().repeat(N, 1, H, W).view(-1)
100 | idxx = flat_shiftx.long() + flat_basex
101 | idxy = flat_shifty.long() + flat_basey
102 |
103 |
104 |         # record which shifted indices fall inside the image bounds
105 | mask = idxx.ge(0) & idxx.lt(H) & idxy.ge(0) & idxy.lt(W)
106 |
107 | # Mask off points out of boundaries
108 | ids = (idxn*C*H*W + idxc*H*W + idxx*W + idxy)
109 | ids_mask = torch.masked_select(ids, mask).clone().cuda()
110 |
111 |         # Note: the accumulate flag must be True for proper backpropagation
112 | img_warp = torch.zeros([N*C*H*W, ]).cuda()
113 | img_warp.put_(ids_mask, torch.masked_select(flat_img*flat_weight, mask), accumulate=True)
114 |
115 | one_warp = torch.zeros([N*C*H*W, ]).cuda()
116 | one_warp.put_(ids_mask, torch.masked_select(flat_weight, mask), accumulate=True)
117 |
118 |
119 |
120 | return img_warp.view(N, C, H, W), one_warp.view(N, C, H, W)
121 |
122 |
123 |
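124 | # Minimal sketch (illustration only): forward-warp a random image with a
125 | # zero flow field; each pixel is splatted onto its four integer neighbours
126 | # with Gaussian weights. The module hard-codes .cuda(), so a GPU is needed.
127 | if __name__ == '__main__':
128 |     if torch.cuda.is_available():
129 |         warper = FlowReversal()
130 |         img = torch.rand(1, 3, 8, 8).cuda()
131 |         flo = torch.zeros(1, 2, 8, 8).cuda()
132 |         warped, weight = warper(img, flo)
133 |         print(warped.size(), weight.size())  # both (1, 3, 8, 8)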
--------------------------------------------------------------------------------
/models/resnet_encoder.py:
--------------------------------------------------------------------------------
1 | # Copyright Niantic 2019. Patent Pending. All rights reserved.
2 | #
3 | # This software is licensed under the terms of the Monodepth2 licence
4 | # which allows for non-commercial use only, the full terms of which are made
5 | # available in the LICENSE file.
6 |
7 | from __future__ import absolute_import, division, print_function
8 |
9 | import numpy as np
10 |
11 | import torch
12 | import torch.nn as nn
13 | import torchvision.models as models
14 | import torch.utils.model_zoo as model_zoo
15 |
16 |
17 | class ResNetMultiImageInput(models.ResNet):
18 | """Constructs a resnet model with varying number of input images.
19 | Adapted from https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py
20 | """
21 | def __init__(self, block, layers, num_classes=1000, num_input_images=1):
22 | super(ResNetMultiImageInput, self).__init__(block, layers)
23 | self.inplanes = 64
24 | self.conv1 = nn.Conv2d(
25 | num_input_images * 3, 64, kernel_size=7, stride=2, padding=3, bias=False)
26 | self.bn1 = nn.BatchNorm2d(64)
27 | self.relu = nn.ReLU(inplace=True)
28 | self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
29 | self.layer1 = self._make_layer(block, 64, layers[0])
30 | self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
31 | self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
32 | self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
33 |
34 | for m in self.modules():
35 | if isinstance(m, nn.Conv2d):
36 | nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
37 | elif isinstance(m, nn.BatchNorm2d):
38 | nn.init.constant_(m.weight, 1)
39 | nn.init.constant_(m.bias, 0)
40 |
41 |
42 | def resnet_multiimage_input(num_layers, pretrained=False, num_input_images=1):
43 | """Constructs a ResNet model.
44 | Args:
45 | num_layers (int): Number of resnet layers. Must be 18 or 50
46 | pretrained (bool): If True, returns a model pre-trained on ImageNet
47 | num_input_images (int): Number of frames stacked as input
48 | """
49 | assert num_layers in [18, 50], "Can only run with 18 or 50 layer resnet"
50 | blocks = {18: [2, 2, 2, 2], 50: [3, 4, 6, 3]}[num_layers]
51 | block_type = {18: models.resnet.BasicBlock, 50: models.resnet.Bottleneck}[num_layers]
52 | model = ResNetMultiImageInput(block_type, blocks, num_input_images=num_input_images)
53 |
54 | if pretrained:
55 | loaded = model_zoo.load_url(models.resnet.model_urls['resnet{}'.format(num_layers)])
56 | loaded['conv1.weight'] = torch.cat(
57 | [loaded['conv1.weight']] * num_input_images, 1) / num_input_images
58 | model.load_state_dict(loaded)
59 | return model
60 |
61 |
62 | class ResnetEncoder(nn.Module):
63 | """Pytorch module for a resnet encoder
64 | """
65 | def __init__(self, num_layers, pretrained, num_input_images=1):
66 | super(ResnetEncoder, self).__init__()
67 |
68 | self.num_ch_enc = np.array([64, 64, 128, 256, 512])
69 |
70 | resnets = {18: models.resnet18,
71 | 34: models.resnet34,
72 | 50: models.resnet50,
73 | 101: models.resnet101,
74 | 152: models.resnet152}
75 |
76 | if num_layers not in resnets:
77 | raise ValueError("{} is not a valid number of resnet layers".format(num_layers))
78 |
79 | if num_input_images > 1:
80 | self.encoder = resnet_multiimage_input(num_layers, pretrained, num_input_images)
81 | else:
82 | self.encoder = resnets[num_layers](pretrained)
83 |
84 | if num_layers > 34:
85 | self.num_ch_enc[1:] *= 4
86 |
87 | def forward(self, input_image):
88 | self.features = []
89 | x = input_image
90 | x = self.encoder.conv1(x)
91 | x = self.encoder.bn1(x)
92 | self.features.append(self.encoder.relu(x))
93 | self.features.append(self.encoder.layer1(self.encoder.maxpool(self.features[-1])))
94 | self.features.append(self.encoder.layer2(self.features[-1]))
95 | self.features.append(self.encoder.layer3(self.features[-1]))
96 | self.features.append(self.encoder.layer4(self.features[-1]))
97 |
98 | return self.features
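99 |
100 |
101 | if __name__ == '__main__':
102 |     # Minimal sketch (illustration only): run the encoder on a random image
103 |     # and print the five feature-map shapes (channel counts follow num_ch_enc).
104 |     encoder = ResnetEncoder(num_layers=18, pretrained=False)
105 |     features = encoder(torch.randn(1, 3, 256, 832))
106 |     print([tuple(f.shape) for f in features])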
--------------------------------------------------------------------------------
/drawRobotics.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 | import numpy as np
3 | from matplotlib.patches import FancyArrowPatch
4 | from mpl_toolkits.mplot3d import proj3d
5 |
6 |
7 | class Arrow3D(FancyArrowPatch):
8 | def __init__(self, xs, ys, zs, *args, **kwargs):
9 | FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)
10 | self._verts3d = xs, ys, zs
11 |
12 | def draw(self, renderer):
13 | xs3d, ys3d, zs3d = self._verts3d
14 | xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
15 | self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
16 | FancyArrowPatch.draw(self, renderer)
17 |
18 |
19 | def drawVector(fig, pointA, pointB, **kwargs):
20 | ms = kwargs.get('mutation_scale', 20)
21 | alpha = kwargs.get('alpha', 1)
22 | ars = kwargs.get('arrowstyle', '-|>')
23 | lc = kwargs.get('lineColor', None)
24 | fc = kwargs.get('faceColor', 'y')
25 | ec = kwargs.get('edgeColor', 'k')
26 | pc = kwargs.get('projColor', 'k')
27 | pointEnable = kwargs.get('pointEnable', False)
28 | projOn = kwargs.get('proj', False)
29 | lineStyle = kwargs.get('lineStyle', '-')
30 | annotationString = kwargs.get('annotationString', '')
31 | lineWidth = kwargs.get('lineWidth', 1)
32 | zorder = kwargs.get('zorder', 1)
33 |
34 |
35 | if (3 <= pointA.size <= 4):
36 | xs = [pointA[0], pointB[0]]
37 | ys = [pointA[1], pointB[1]]
38 | zs = [pointA[2], pointB[2]]
39 | else:
40 | xs = [pointA[0,3], pointB[0,3]]
41 | ys = [pointA[1,3], pointB[1,3]]
42 | zs = [pointA[2,3], pointB[2,3]]
43 |
44 | if lc:
45 | out = Arrow3D(xs, ys, zs, mutation_scale=ms, alpha=alpha, arrowstyle=ars, color=lc, linestyle=lineStyle, linewidth=lineWidth, zorder=zorder)
46 | else:
47 | out = Arrow3D(xs, ys, zs, mutation_scale=ms, alpha=alpha, arrowstyle=ars, facecolor=fc, edgecolor=ec, linestyle=lineStyle, linewidth=lineWidth, zorder=zorder)
48 | fig.add_artist(out)
49 |
50 | # if pointEnable: fig.scatter(xs[1], ys[1], zs[1], color='k', s=50)
51 | if pointEnable: fig.scatter(xs[0], ys[0], zs[0], color=lc, s=20, zorder=zorder)
52 |
53 | if projOn:
54 | fig.plot(xs, ys, [0, 0], color=pc, linestyle='--', zorder=zorder)
55 | fig.plot([xs[0], xs[0]], [ys[0], ys[0]], [0, zs[0]], color=pc, linestyle='--', zorder=zorder)
56 | fig.plot([xs[1], xs[1]], [ys[1], ys[1]], [0, zs[1]], color=pc, linestyle='--', zorder=zorder)
57 |
58 | if annotationString != '':
59 | fig.text(xs[1], ys[1], zs[1], annotationString, size=15, color='k', zorder=zorder)
60 |
61 |
62 | def drawPointWithAxis(fig, *args, **kwargs):
63 | ms = kwargs.get('mutation_scale', 20)
64 | alpha = kwargs.get('alpha', 1)
65 | ars = kwargs.get('arrowstyle', '->')
66 | pointEnable = kwargs.get('pointEnable', True)
67 | axisEnable = kwargs.get('axisEnable', True)
68 | lineStyle = kwargs.get('lineStyle', '-')
69 | lineWidth = kwargs.get('lineWidth', 1)
70 | vectorLength = kwargs.get('vectorLength', 1)
71 | zorder = kwargs.get('zorder', 1)
72 |
73 | if len(args) == 4:
74 | ORG = args[0]
75 | hat_X = args[1]
76 | hat_Y = args[2]
77 | hat_Z = args[3]
78 | xs_n = [ORG[0], ORG[0] + hat_X[0]*vectorLength]
79 | ys_n = [ORG[1], ORG[1] + hat_X[1]*vectorLength]
80 | zs_n = [ORG[2], ORG[2] + hat_X[2]*vectorLength]
81 | xs_o = [ORG[0], ORG[0] + hat_Y[0]*vectorLength]
82 | ys_o = [ORG[1], ORG[1] + hat_Y[1]*vectorLength]
83 | zs_o = [ORG[2], ORG[2] + hat_Y[2]*vectorLength]
84 | xs_a = [ORG[0], ORG[0] + hat_Z[0]*vectorLength]
85 | ys_a = [ORG[1], ORG[1] + hat_Z[1]*vectorLength]
86 | zs_a = [ORG[2], ORG[2] + hat_Z[2]*vectorLength]
87 | else:
88 | tmp = args[0]
89 | ORG = tmp[:3,3:]
90 | hat_X = tmp[:3,0:1]
91 | hat_Y = tmp[:3,1:2]
92 | hat_Z = tmp[:3,2:3]
93 | xs_n = [ORG[0, 0], ORG[0, 0] + hat_X[0, 0]*vectorLength]
94 | ys_n = [ORG[1, 0], ORG[1, 0] + hat_X[1, 0]*vectorLength]
95 | zs_n = [ORG[2, 0], ORG[2, 0] + hat_X[2, 0]*vectorLength]
96 | xs_o = [ORG[0, 0], ORG[0, 0] + hat_Y[0, 0]*vectorLength]
97 | ys_o = [ORG[1, 0], ORG[1, 0] + hat_Y[1, 0]*vectorLength]
98 | zs_o = [ORG[2, 0], ORG[2, 0] + hat_Y[2, 0]*vectorLength]
99 | xs_a = [ORG[0, 0], ORG[0, 0] + hat_Z[0, 0]*vectorLength]
100 | ys_a = [ORG[1, 0], ORG[1, 0] + hat_Z[1, 0]*vectorLength]
101 | zs_a = [ORG[2, 0], ORG[2, 0] + hat_Z[2, 0]*vectorLength]
102 |
103 | if pointEnable: fig.scatter(xs_n[0], ys_n[0], zs_n[0], alpha=alpha, color='k', s=5, zorder=zorder)
104 |
105 | if axisEnable:
106 | n = Arrow3D(xs_n, ys_n, zs_n, mutation_scale=ms, alpha=alpha, arrowstyle=ars, color='r', linestyle=lineStyle, linewidth=lineWidth, zorder=zorder)
107 | o = Arrow3D(xs_o, ys_o, zs_o, mutation_scale=ms, alpha=alpha, arrowstyle=ars, color='g', linestyle=lineStyle, linewidth=lineWidth, zorder=zorder)
108 | a = Arrow3D(xs_a, ys_a, zs_a, mutation_scale=ms, alpha=alpha, arrowstyle=ars, color='b', linestyle=lineStyle, linewidth=lineWidth, zorder=zorder)
109 | fig.add_artist(n)
110 | fig.add_artist(o)
111 | fig.add_artist(a)
112 |
113 |
114 | def RotX(phi):
115 | return np.array([[1, 0, 0],
116 | [0, np.cos(phi), -np.sin(phi)],
117 | [0, np.sin(phi), np.cos(phi)]])
118 |
119 |
120 | def RotY(theta):
121 | return np.array([[np.cos(theta), 0, np.sin(theta)],
122 | [0, 1, 0],
123 | [-np.sin(theta), 0, np.cos(theta)]])
124 |
125 |
126 | def RotZ(psi):
127 | return np.array([[np.cos(psi), -np.sin(psi), 0],
128 | [np.sin(psi), np.cos(psi), 0],
129 | [0, 0, 1]])
130 |
131 |
132 | def D_q(dx, dy, dz):
133 | return np.array([[1, 0, 0, dx],
134 | [0, 1, 0, dy],
135 | [0, 0, 1, dz],
136 | [0, 0, 0, 1]])
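137 |
138 |
139 | if __name__ == '__main__':
140 |     # Minimal sketch (illustration only): compose a homogeneous transform from
141 |     # a 45-degree rotation about Z and a translation, then draw its frame.
142 |     T = D_q(1.0, 2.0, 0.0)
143 |     T[:3, :3] = RotZ(np.pi / 4)
144 |     ax = plt.figure().add_subplot(111, projection='3d')
145 |     drawPointWithAxis(ax, T, vectorLength=0.5)
146 |     plt.show()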
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
1 | '''
2 | Reference: https://github.com/SeokjuLee/Insta-DM/blob/master/utils.py
3 | '''
4 |
5 | from __future__ import division
6 | import shutil
7 | import numpy as np
8 | import torch
9 | from path import Path
10 | import datetime
11 | from collections import OrderedDict
12 | from matplotlib import cm
13 | from matplotlib.colors import ListedColormap, LinearSegmentedColormap
14 |
15 |
16 | def viz_flow(u, v, logscale=True, scaledown=6, output=False):
17 | """
18 | topleft is zero, u is horiz, v is vertical
19 | red is 3 o'clock, yellow is 6, light blue is 9, blue/purple is 12
20 | """
21 | colorwheel = makecolorwheel()
22 | ncols = colorwheel.shape[0]
23 |
24 | radius = np.sqrt(u**2 + v**2)
25 | if output:
26 | print("Maximum flow magnitude: %04f" % np.max(radius))
27 | if logscale:
28 | radius = np.log(radius + 1)
29 | if output:
30 | print("Maximum flow magnitude (after log): %0.4f" % np.max(radius))
31 | radius = radius / scaledown
32 | if output:
33 | print("Maximum flow magnitude (after scaledown): %0.4f" % np.max(radius))
34 | rot = np.arctan2(-v, -u) / np.pi
35 |
36 |     fk = (rot+1)/2 * (ncols-1) # -1~1 mapped to 0~ncols-1
37 |     k0 = fk.astype(np.uint8)   # 0, 1, 2, ..., ncols-1
38 |
39 | k1 = k0+1
40 | k1[k1 == ncols] = 0
41 |
42 | f = fk - k0
43 |
44 | ncolors = colorwheel.shape[1]
45 | img = np.zeros(u.shape+(ncolors,))
46 | for i in range(ncolors):
47 | tmp = colorwheel[:,i]
48 | col0 = tmp[k0]
49 | col1 = tmp[k1]
50 | col = (1-f)*col0 + f*col1
51 |
52 | idx = radius <= 1
53 | # increase saturation with radius
54 | col[idx] = 1 - radius[idx]*(1-col[idx])
55 | # out of range
56 | col[~idx] *= 0.75
57 | img[:,:,i] = np.floor(255*col).astype(np.uint8)
58 |
59 | return img.astype(np.uint8)
60 |
61 |
62 | def makecolorwheel():
63 | # Create a colorwheel for visualization
64 | RY = 15
65 | YG = 6
66 | GC = 4
67 | CB = 11
68 | BM = 13
69 | MR = 6
70 |
71 | ncols = RY + YG + GC + CB + BM + MR
72 |
73 | colorwheel = np.zeros((ncols,3))
74 |
75 | col = 0
76 | # RY
77 | colorwheel[0:RY,0] = 1
78 | colorwheel[0:RY,1] = np.arange(0,1,1./RY)
79 | col += RY
80 |
81 | # YG
82 | colorwheel[col:col+YG,0] = np.arange(1,0,-1./YG)
83 | colorwheel[col:col+YG,1] = 1
84 | col += YG
85 |
86 | # GC
87 | colorwheel[col:col+GC,1] = 1
88 | colorwheel[col:col+GC,2] = np.arange(0,1,1./GC)
89 | col += GC
90 |
91 | # CB
92 | colorwheel[col:col+CB,1] = np.arange(1,0,-1./CB)
93 | colorwheel[col:col+CB,2] = 1
94 | col += CB
95 |
96 | # BM
97 | colorwheel[col:col+BM,2] = 1
98 | colorwheel[col:col+BM,0] = np.arange(0,1,1./BM)
99 | col += BM
100 |
101 | # MR
102 | colorwheel[col:col+MR,2] = np.arange(1,0,-1./MR)
103 | colorwheel[col:col+MR,0] = 1
104 |
105 | return colorwheel
106 |
107 |
108 | def high_res_colormap(low_res_cmap, resolution=1000, max_value=1):
109 |     # Construct the list colormap, with interpolated values for higher resolution.
110 |     # For a linear segmented colormap, you can just specify the number of points in
111 |     # cm.get_cmap(name, lutsize) with the parameter lutsize.
112 | x = np.linspace(0,1,low_res_cmap.N)
113 | low_res = low_res_cmap(x)
114 | new_x = np.linspace(0,max_value,resolution)
115 | high_res = np.stack([np.interp(new_x, x, low_res[:,i]) for i in range(low_res.shape[1])], axis=1)
116 | return ListedColormap(high_res)
117 |
118 |
119 | def opencv_rainbow(resolution=1000):
120 | # Construct the opencv equivalent of Rainbow
121 | opencv_rainbow_data = (
122 | (0.000, (1.00, 0.00, 0.00)),
123 | (0.400, (1.00, 1.00, 0.00)),
124 | (0.600, (0.00, 1.00, 0.00)),
125 | (0.800, (0.00, 0.00, 1.00)),
126 | (1.000, (0.60, 0.00, 1.00))
127 | )
128 |
129 | return LinearSegmentedColormap.from_list('opencv_rainbow', opencv_rainbow_data, resolution)
130 |
131 |
132 | COLORMAPS = {'rainbow': opencv_rainbow(),
133 | 'magma': high_res_colormap(cm.get_cmap('magma')),
134 | 'bone': cm.get_cmap('bone', 10000)}
135 |
136 |
137 | def tensor2array(tensor, max_value=None, colormap='rainbow'):
138 | tensor = tensor.detach().cpu()
139 | if max_value is None:
140 | max_value = tensor.max().item()
141 | if tensor.ndimension() == 2 or tensor.size(0) == 1:
142 | norm_array = tensor.squeeze().numpy()/max_value
143 | array = COLORMAPS[colormap](norm_array).astype(np.float32)
144 | array = array.transpose(2, 0, 1)
145 |
146 | elif tensor.ndimension() == 3:
147 | assert(tensor.size(0) == 3)
148 | array = 0.5 + tensor.numpy()*0.5
149 | return array
150 |
151 |
152 | def save_checkpoint(epoch, save_freq, save_path, dispnet_state, ego_pose_state, obj_pose_state, initial_ego_pose_state, is_best, filename='checkpoint.pth.tar'):
153 | file_prefixes = ['dispnet', 'ego_pose', 'obj_pose', 'initial_ego_pose']
154 | states = [dispnet_state, ego_pose_state, obj_pose_state, initial_ego_pose_state]
155 | for (prefix, state) in zip(file_prefixes, states):
156 | torch.save(state, save_path/'{}_{}'.format(prefix,filename))
157 |
158 | if epoch % save_freq == 0:
159 | for (prefix, state) in zip(file_prefixes, states):
160 | torch.save(state, save_path/'{}_{}_{}'.format(prefix, epoch, filename))
161 |
162 | if is_best:
163 | for prefix in file_prefixes:
164 | shutil.copyfile(save_path/'{}_{}'.format(prefix,filename), save_path/'{}_model_best.pth.tar'.format(prefix))
165 |
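166 | if __name__ == '__main__':
167 |     # Minimal sketch (illustration only): color-code a synthetic flow field
168 |     # with viz_flow; hue encodes flow direction, saturation encodes magnitude.
169 |     yy, xx = np.mgrid[-1:1:256j, -1:1:256j]
170 |     flow_img = viz_flow(xx, yy, logscale=True, output=True)
171 |     print(flow_img.shape, flow_img.dtype)  # (256, 256, 3) uint8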
--------------------------------------------------------------------------------
/models/DispResNet.py:
--------------------------------------------------------------------------------
1 | '''
2 | Reference: https://github.com/SeokjuLee/Insta-DM/blob/master/models/DispResNet.py
3 |
4 | '''
5 |
6 | from __future__ import absolute_import, division, print_function
7 | import torch
8 | import torch.nn as nn
9 | import torch.nn.functional as F
10 | from .resnet_encoder import *
11 | import numpy as np
12 | import pdb
13 |
14 |
15 |
16 | class ConvBlock(nn.Module):
17 | """Layer to perform a convolution followed by ELU
18 | """
19 | def __init__(self, in_channels, out_channels):
20 | super(ConvBlock, self).__init__()
21 |
22 | self.conv = Conv3x3(in_channels, out_channels)
23 | self.nonlin = nn.ELU(inplace=True)
24 |
25 | def forward(self, x):
26 | out = self.conv(x)
27 | out = self.nonlin(out)
28 | return out
29 |
30 | class Conv3x3(nn.Module):
31 | """Layer to pad and convolve input
32 | """
33 | def __init__(self, in_channels, out_channels, use_refl=True):
34 | super(Conv3x3, self).__init__()
35 |
36 | if use_refl:
37 | self.pad = nn.ReflectionPad2d(1)
38 | else:
39 | self.pad = nn.ZeroPad2d(1)
40 | self.conv = nn.Conv2d(int(in_channels), int(out_channels), 3)
41 |
42 | def forward(self, x):
43 | out = self.pad(x)
44 | out = self.conv(out)
45 | return out
46 |
47 | def upsample(x):
48 | """Upsample input tensor by a factor of 2
49 | """
50 | return F.interpolate(x, scale_factor=2, mode="nearest")
51 |
52 |
53 |
54 | class DispDecoder(nn.Module):
55 | def __init__(self, num_ch_enc, scales=range(4), num_output_channels=1, use_skips=True):
56 | super(DispDecoder, self).__init__()
57 |
58 | self.num_output_channels = num_output_channels
59 | self.use_skips = use_skips
60 | self.upsample_mode = 'nearest'
61 | self.scales = scales
62 |
63 | self.num_ch_enc = num_ch_enc
64 | self.num_ch_dec = np.array([16, 32, 64, 128, 256])
65 |
66 | # decoder
67 | self.upconvs0 = []
68 | self.upconvs1 = []
69 | self.dispconvs = []
70 | self.i_to_scaleIdx_conversion = {}
71 |
72 | for i in range(4, -1, -1):
73 | # upconv_0
74 | num_ch_in = self.num_ch_enc[-1] if i == 4 else self.num_ch_dec[i + 1]
75 | num_ch_out = self.num_ch_dec[i]
76 | self.upconvs0.append(ConvBlock(num_ch_in, num_ch_out))
77 |
78 | # upconv_1
79 | num_ch_in = self.num_ch_dec[i]
80 | if self.use_skips and i > 0:
81 | num_ch_in += self.num_ch_enc[i - 1]
82 | num_ch_out = self.num_ch_dec[i]
83 | self.upconvs1.append(ConvBlock(num_ch_in, num_ch_out))
84 |
85 | for cnt, s in enumerate(self.scales):
86 | self.dispconvs.append(Conv3x3(self.num_ch_dec[s], self.num_output_channels))
87 |
88 |             # every s in self.scales lies in range(5), so record the mapping unconditionally
89 |             self.i_to_scaleIdx_conversion[s] = cnt
90 |
91 | self.upconvs0 = nn.ModuleList(self.upconvs0)
92 | self.upconvs1 = nn.ModuleList(self.upconvs1)
93 | self.dispconvs = nn.ModuleList(self.dispconvs)
94 | self.sigmoid = nn.Sigmoid()
95 | self.softplus = nn.Softplus()
96 |
97 | def init_weights(self):
98 | return
99 |
100 | def forward(self, input_features):
101 |
102 | self.outputs = []
103 |
104 | # decoder
105 | x = input_features[-1]
106 |
107 | for cnt, i in enumerate(range(4, -1, -1)):
108 | x = self.upconvs0[cnt](x)
109 | x = [upsample(x)]
110 | if self.use_skips and i > 0:
111 | x += [input_features[i - 1]]
112 | x = torch.cat(x, 1)
113 | x = self.upconvs1[cnt](x)
114 | if i in self.scales:
115 | idx = self.i_to_scaleIdx_conversion[i]
116 | self.outputs.append(self.softplus(self.dispconvs[idx](x)))
117 |
118 | self.outputs = self.outputs[::-1]
119 | return self.outputs
120 |
121 |
122 | class DispResNet(nn.Module):
123 |
124 | def __init__(self, num_layers=18, pretrained=True):
125 | super(DispResNet, self).__init__()
126 | self.obj_height_prior = nn.Parameter(torch.tensor(0.02))
127 |
128 | self.encoder = ResnetEncoder(num_layers=num_layers, pretrained=pretrained, num_input_images=1)
129 | self.decoder = DispDecoder(self.encoder.num_ch_enc)
130 |
131 | def init_weights(self):
132 | pass
133 |
134 | def forward(self, x):
135 | features = self.encoder(x)
136 | outputs = self.decoder(features)
137 |
138 |         return outputs[0]  # return only the highest-resolution disparity map
139 |
159 |
160 |
161 | if __name__ == "__main__":
162 |
163 | torch.backends.cudnn.benchmark = True
164 |
165 | model = DispResNet().cuda()
166 | model.train()
167 |
168 | B = 12
169 |
170 | tgt_img = torch.randn(B, 3, 256, 832).cuda()
171 | ref_imgs = [torch.randn(B, 3, 256, 832).cuda() for i in range(2)]
172 |
173 | tgt_disp = model(tgt_img)
174 |
175 | print(tgt_disp.size())
176 |
177 |
178 |
--------------------------------------------------------------------------------
/kitti_eval/depth_evaluation_utils.py:
--------------------------------------------------------------------------------
1 | # Mostly based on the code written by Clement Godard:
2 | # https://github.com/mrharicot/monodepth/blob/master/utils/evaluation_utils.py
3 | # Reference: https://github.com/SeokjuLee/Insta-DM/blob/master/kitti_eval/depth_evaluation_utils.py
4 | import numpy as np
5 | # import pandas as pd
6 | import os
7 | import cv2
8 | from collections import Counter
9 | import pickle
10 | from scipy.interpolate import LinearNDInterpolator  # used by lin_interp() below
11 | from matplotlib import pyplot as plt
12 | import pdb
13 |
14 | def compute_errors(gt, pred):
15 | thresh = np.maximum((gt / pred), (pred / gt))
16 | a1 = (thresh < 1.25 ).mean()
17 | a2 = (thresh < 1.25 ** 2).mean()
18 | a3 = (thresh < 1.25 ** 3).mean()
19 |
20 | rmse = (gt - pred) ** 2
21 | rmse = np.sqrt(rmse.mean())
22 |
23 | rmse_log = (np.log(gt) - np.log(pred)) ** 2
24 | rmse_log = np.sqrt(rmse_log.mean())
25 |
26 | abs_rel = np.mean(np.abs(gt - pred) / gt)
27 |
28 | sq_rel = np.mean(((gt - pred)**2) / gt)
29 |
30 | return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3
31 |
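# A quick numeric sketch of compute_errors() on synthetic depths (illustrative
# values only): every ratio max(gt/pred, pred/gt) here is within 1.25, so a1 == 1.0.
'''
gt = np.array([10.0, 20.0, 30.0])
pred = np.array([11.0, 18.0, 33.0])
abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3 = compute_errors(gt, pred)
# abs_rel == 0.1 (all predictions are off by 10%), a1 == a2 == a3 == 1.0
'''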
32 | ###############################################################################
33 | ####################### KITTI
34 |
35 | width_to_focal = dict()
36 | width_to_focal[1242] = 721.5377
37 | width_to_focal[1241] = 718.856
38 | width_to_focal[1224] = 707.0493
39 | width_to_focal[1238] = 718.3351
40 |
41 | def load_gt_disp_kitti(path):
42 | gt_disparities = []
43 | for i in range(200):
44 | disp = cv2.imread(path + "/training/disp_noc_0/" + str(i).zfill(6) + "_10.png", -1)
45 | disp = disp.astype(np.float32) / 256
46 | gt_disparities.append(disp)
47 | return gt_disparities
48 |
49 | def convert_disps_to_depths_kitti(gt_disparities, pred_disparities):
50 | gt_depths = []
51 | pred_depths = []
52 | pred_disparities_resized = []
53 |
54 | for i in range(len(gt_disparities)):
55 | gt_disp = gt_disparities[i]
56 | height, width = gt_disp.shape
57 |
58 | pred_disp = pred_disparities[i]
59 | pred_disp = width * cv2.resize(pred_disp, (width, height), interpolation=cv2.INTER_LINEAR)
60 |
61 | pred_disparities_resized.append(pred_disp)
62 |
63 | mask = gt_disp > 0
64 |
65 | gt_depth = width_to_focal[width] * 0.54 / (gt_disp + (1.0 - mask))
66 | pred_depth = width_to_focal[width] * 0.54 / pred_disp
67 |
68 | gt_depths.append(gt_depth)
69 | pred_depths.append(pred_depth)
70 | return gt_depths, pred_depths, pred_disparities_resized
71 |
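# The conversion above uses the stereo relation depth = focal * baseline / disparity
# with KITTI's ~0.54 m baseline; a tiny sketch with an assumed disparity value:
'''
disp = 50.0                                    # disparity in pixels
depth = width_to_focal[1242] * 0.54 / disp     # ~7.8 m for the 1242-px-wide images
'''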
72 |
73 | ###############################################################################
74 | ####################### EIGEN
75 |
76 | def read_text_lines(file_path):
77 | f = open(file_path, 'r')
78 | lines = f.readlines()
79 | f.close()
80 | lines = [l.rstrip() for l in lines]
81 | return lines
82 |
83 | def read_file_data(files, data_root):
84 | gt_files = []
85 | gt_calib = []
86 | im_sizes = []
87 | im_files = []
88 | cams = []
89 | num_probs = 0
90 | for filename in files:
91 | filename = filename.split()[0]
92 | splits = filename.split('/')
93 | # camera_id = filename[-1] # 2 is left, 3 is right
94 | date = splits[0]
95 | im_id = splits[4][:10]
96 | file_root = '{}/{}'
97 |
98 | im = filename
99 | # pdb.set_trace()
100 | vel = '{}/{}/velodyne_points/data/{}.bin'.format(splits[0], splits[1], im_id)
101 |
102 | if os.path.isfile(data_root + im):
103 | gt_files.append(data_root + vel)
104 | gt_calib.append(data_root + date + '/')
105 | im_sizes.append(cv2.imread(data_root + im).shape[:2])
106 | im_files.append(data_root + im)
107 | cams.append(2)
108 | else:
109 | num_probs += 1
110 | print('{} missing'.format(data_root + im))
111 | # print(num_probs, 'files missing')
112 |
113 | return gt_files, gt_calib, im_sizes, im_files, cams
114 |
115 | def load_velodyne_points(file_name):
116 | # adapted from https://github.com/hunse/kitti
117 | points = np.fromfile(file_name, dtype=np.float32).reshape(-1, 4)
118 | points[:, 3] = 1.0 # homogeneous
119 | return points
120 |
121 |
122 | def lin_interp(shape, xyd):
123 | # taken from https://github.com/hunse/kitti
124 | m, n = shape
125 | ij, d = xyd[:, 1::-1], xyd[:, 2]
126 | f = LinearNDInterpolator(ij, d, fill_value=0)
127 | J, I = np.meshgrid(np.arange(n), np.arange(m))
128 | IJ = np.vstack([I.flatten(), J.flatten()]).T
129 | disparity = f(IJ).reshape(shape)
130 | return disparity
131 |
132 |
133 | def read_calib_file(path):
134 | # taken from https://github.com/hunse/kitti
135 | float_chars = set("0123456789.e+- ")
136 | data = {}
137 | with open(path, 'r') as f:
138 | for line in f.readlines():
139 | key, value = line.split(':', 1)
140 | value = value.strip()
141 | data[key] = value
142 | # pdb.set_trace()
143 | if float_chars.issuperset(value):
144 | # try to cast to float array
145 | try:
146 | # data[key] = np.array(map(float, value.split(' ')))
147 | data[key] = np.array(value.split(' ')).astype(float)
148 | except ValueError:
149 | # casting error: data[key] already eq. value, so pass
150 | pass
151 |
152 | return data
153 |
154 |
155 | def get_focal_length_baseline(calib_dir, cam=2):
156 | cam2cam = read_calib_file(calib_dir + 'calib_cam_to_cam.txt')
157 | P2_rect = cam2cam['P_rect_02'].reshape(3,4)
158 | P3_rect = cam2cam['P_rect_03'].reshape(3,4)
159 |
160 |     # cam 2 sits ~6 cm to the left of camera 0 (b2 ~ -0.06 m)
161 |     # cam 3 sits ~47 cm to its right (b3 ~ +0.47 m), giving a ~0.54 m cam2-cam3 baseline
162 | b2 = P2_rect[0,3] / -P2_rect[0,0]
163 | b3 = P3_rect[0,3] / -P3_rect[0,0]
164 | baseline = b3-b2
165 |
166 | if cam==2:
167 | focal_length = P2_rect[0,0]
168 | elif cam==3:
169 | focal_length = P3_rect[0,0]
170 |
171 | return focal_length, baseline
172 |
173 |
174 | def sub2ind(matrixSize, rowSub, colSub):
175 | m, n = matrixSize
176 | return rowSub * (n-1) + colSub - 1
177 |
178 | def generate_depth_map(calib_dir, velo_file_name, im_shape, cam=2, interp=False, vel_depth=False):
179 | # load calibration files
180 | cam2cam = read_calib_file(calib_dir + 'calib_cam_to_cam.txt')
181 | velo2cam = read_calib_file(calib_dir + 'calib_velo_to_cam.txt')
182 | # pdb.set_trace()
183 | velo2cam = np.hstack((velo2cam['R'].reshape(3,3), velo2cam['T'][..., np.newaxis]))
184 | velo2cam = np.vstack((velo2cam, np.array([0, 0, 0, 1.0])))
185 |
186 | # compute projection matrix velodyne->image plane
187 | R_cam2rect = np.eye(4)
188 | R_cam2rect[:3,:3] = cam2cam['R_rect_00'].reshape(3,3)
189 | P_rect = cam2cam['P_rect_0'+str(cam)].reshape(3,4)
190 | P_velo2im = np.dot(np.dot(P_rect, R_cam2rect), velo2cam)
191 |
192 | # load velodyne points and remove all behind image plane (approximation)
193 | # each row of the velodyne data is forward, left, up, reflectance
194 | velo = load_velodyne_points(velo_file_name)
195 | velo = velo[velo[:, 0] >= 0, :]
196 |
197 | # project the points to the camera
198 | velo_pts_im = np.dot(P_velo2im, velo.T).T
199 | velo_pts_im[:, :2] = velo_pts_im[:,:2] / velo_pts_im[:,2][..., np.newaxis]
200 |
201 | if vel_depth:
202 | velo_pts_im[:, 2] = velo[:, 0]
203 |
204 | # check if in bounds
205 | # use minus 1 to get the exact same value as KITTI matlab code
206 | velo_pts_im[:, 0] = np.round(velo_pts_im[:,0]) - 1
207 | velo_pts_im[:, 1] = np.round(velo_pts_im[:,1]) - 1
208 | val_inds = (velo_pts_im[:, 0] >= 0) & (velo_pts_im[:, 1] >= 0)
209 | val_inds = val_inds & (velo_pts_im[:,0] < im_shape[1]) & (velo_pts_im[:,1] < im_shape[0])
210 | velo_pts_im = velo_pts_im[val_inds, :]
211 |
212 | # project to image
213 | depth = np.zeros((im_shape))
214 |     depth[velo_pts_im[:, 1].astype(int), velo_pts_im[:, 0].astype(int)] = velo_pts_im[:, 2]  # np.int was removed in NumPy >= 1.24
215 |
216 | # find the duplicate points and choose the closest depth
217 | inds = sub2ind(depth.shape, velo_pts_im[:, 1], velo_pts_im[:, 0])
218 | # pdb.set_trace()
219 |
220 | # dupe_inds = [item for item, count in Counter(inds).iteritems() if count > 1]
221 | dupe_inds = [item for item, count in Counter(inds).items() if count > 1]
222 | for dd in dupe_inds:
223 | pts = np.where(inds==dd)[0]
224 | x_loc = int(velo_pts_im[pts[0], 0])
225 | y_loc = int(velo_pts_im[pts[0], 1])
226 | depth[y_loc, x_loc] = velo_pts_im[pts, 2].min()
227 | depth[depth<0] = 0
228 |
229 | if interp:
230 | # interpolate the depth map to fill in holes
231 | depth_interp = lin_interp(im_shape, velo_pts_im)
232 | return depth, depth_interp
233 | else:
234 | return depth
235 |
236 |
237 |
238 |
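# Usage sketch with hypothetical paths (requires the raw KITTI recordings on disk):
'''
calib_dir = 'kitti_raw/2011_09_26/'
velo_file = 'kitti_raw/2011_09_26/2011_09_26_drive_0001_sync/velodyne_points/data/0000000000.bin'
depth = generate_depth_map(calib_dir, velo_file, im_shape=(375, 1242), cam=2)
# depth: (375, 1242) array, zero wherever no LiDAR return projects into the image
'''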
--------------------------------------------------------------------------------
/datasets/sequence_folders.py:
--------------------------------------------------------------------------------
1 | '''
2 | Reference: https://github.com/SeokjuLee/Insta-DM/blob/master/datasets/sequence_folders.py
3 | '''
4 |
5 | import torch
6 | import torch.nn as nn
7 | import torch.nn.functional as F
8 | import torch.utils.data as data
9 | import numpy as np
10 | from imageio import imread
11 | from path import Path
12 | import random
13 | import math
14 | from matplotlib import pyplot as plt
15 | from flow_io import flow_read
16 | from rigid_warp import flow_warp
17 | import pdb
18 |
19 |
20 | def load_as_float(path):
21 | return imread(path).astype(np.float32)
22 |
23 |
24 | def load_flo_as_float(path):
25 | out = np.array(flow_read(path)).astype(np.float32)
26 | return out
27 |
28 |
29 | def load_seg_as_float(path):
30 | out = np.load(path).astype(np.float32)
31 | return out
32 |
33 |
34 | def L2_norm(x, dim=1, keepdim=True):
35 | curr_offset = 1e-10
36 | l2_norm = torch.norm(torch.abs(x) + curr_offset, dim=dim, keepdim=True)
37 | return l2_norm
38 |
39 |
40 | def find_noc_masks(fwd_flow, bwd_flow):
41 | '''
42 | fwd_flow: torch.size([1, 2, 256, 832])
43 | bwd_flow: torch.size([1, 2, 256, 832])
44 | output: torch.size([1, 1, 256, 832]), torch.size([1, 1, 256, 832])
45 |
46 | input shape of flow_warp(): torch.size([bs, 2, 256, 832])
47 | '''
48 | bwd2fwd_flow, _ = flow_warp(bwd_flow, fwd_flow)
49 | fwd2bwd_flow, _ = flow_warp(fwd_flow, bwd_flow)
50 |
51 | fwd_flow_diff = torch.abs(bwd2fwd_flow + fwd_flow)
52 | bwd_flow_diff = torch.abs(fwd2bwd_flow + bwd_flow)
53 |
54 | fwd_consist_bound = torch.max(0.05 * L2_norm(fwd_flow), torch.Tensor([3.0]))
55 | bwd_consist_bound = torch.max(0.05 * L2_norm(bwd_flow), torch.Tensor([3.0]))
56 |
57 | noc_mask_0 = (L2_norm(fwd_flow_diff) < fwd_consist_bound).type(torch.FloatTensor) # noc_mask_tgt, torch.Size([1, 1, 256, 832]), torch.float32
58 | noc_mask_1 = (L2_norm(bwd_flow_diff) < bwd_consist_bound).type(torch.FloatTensor) # noc_mask_src, torch.Size([1, 1, 256, 832]), torch.float32
59 | # pdb.set_trace()
60 |
61 | return noc_mask_0, noc_mask_1
62 |
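# Sanity-check sketch: with zero forward and backward flow the warped flows cancel
# exactly, so both non-occlusion masks come back all ones (shapes as in the docstring):
'''
fwd = torch.zeros(1, 2, 256, 832)
bwd = torch.zeros(1, 2, 256, 832)
noc_tgt, noc_src = find_noc_masks(fwd, bwd)
assert noc_tgt.min() == 1 and noc_src.min() == 1
'''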
63 |
64 | def inst_iou(seg_src, seg_tgt, valid_mask):
65 | '''
66 |     => For each instance (channel) of seg_src, find which channel of seg_tgt it matches.
67 |
68 |
69 | seg_src: torch.Size([1, n_inst, 256, 832])
70 | seg_tgt: torch.Size([1, n_inst, 256, 832])
71 | valid_mask: torch.Size([1, 1, 256, 832])
72 | '''
73 | n_inst_src = seg_src.shape[1]
74 | n_inst_tgt = seg_tgt.shape[1]
75 |
76 | seg_src_m = seg_src * valid_mask.repeat(1,n_inst_src,1,1)
77 | seg_tgt_m = seg_tgt * valid_mask.repeat(1,n_inst_tgt,1,1)
78 | # pdb.set_trace()
79 | '''
80 | plt.figure(1), plt.imshow(seg_src.sum(dim=0).sum(dim=0)), plt.colorbar(), plt.ion(), plt.show()
81 | plt.figure(2), plt.imshow(seg_tgt.sum(dim=0).sum(dim=0)), plt.colorbar(), plt.ion(), plt.show()
82 | plt.figure(3), plt.imshow(valid_mask[0,0]), plt.colorbar(), plt.ion(), plt.show()
83 | plt.figure(4), plt.imshow(seg_src_m.sum(dim=0).sum(dim=0)), plt.colorbar(), plt.ion(), plt.show()
84 | '''
85 | for i in range(n_inst_src):
86 | if i == 0:
87 | match_table = torch.from_numpy(np.zeros([1,n_inst_tgt]).astype(np.float32))
88 |             continue
89 |
90 | overl = (seg_src_m[:,i].unsqueeze(1).repeat(1,n_inst_tgt,1,1) * seg_tgt_m).clamp(min=0,max=1).squeeze(0).sum(1).sum(1)
91 | union = (seg_src_m[:,i].unsqueeze(1).repeat(1,n_inst_tgt,1,1) + seg_tgt_m).clamp(min=0,max=1).squeeze(0).sum(1).sum(1)
92 |
93 | iou_inst = overl / union
94 | match_table = torch.cat((match_table, iou_inst.unsqueeze(0)), dim=0)
95 |
96 | iou, inst_idx = torch.max(match_table,dim=1)
97 | # pdb.set_trace()
98 |
99 | return iou, inst_idx
100 |
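# A minimal sketch: a single 4x4 instance matched against itself gives IoU 1.0
# for channel 1 (channel 0 is skipped as the count/background channel):
'''
seg = torch.zeros(1, 2, 8, 8)
seg[0, 1, 2:6, 2:6] = 1
iou, idx = inst_iou(seg, seg, valid_mask=torch.ones(1, 1, 8, 8))
# iou[1] == 1.0 and idx[1] == 1
'''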
101 |
102 | def recursive_check_nonzero_inst(tgt_inst, ref_inst):
103 | assert( tgt_inst[0].mean() == ref_inst[0].mean() )
104 | n_inst = int(tgt_inst[0].mean())
105 | for nn in range(n_inst):
106 | if tgt_inst[nn+1].mean() == 0:
107 | tgt_inst[0] -= 1
108 | ref_inst[0] -= 1
109 | if nn+1 == n_inst:
110 | tgt_inst[nn+1:] = 0
111 | ref_inst[nn+1:] = 0
112 | else:
113 | tgt_inst[nn+1:] = torch.cat([tgt_inst[nn+2:], torch.zeros(1, tgt_inst.size(1), tgt_inst.size(2))], dim=0) # re-ordering
114 | ref_inst[nn+1:] = torch.cat([ref_inst[nn+2:], torch.zeros(1, ref_inst.size(1), ref_inst.size(2))], dim=0) # re-ordering
115 | return recursive_check_nonzero_inst(tgt_inst, ref_inst)
116 | if ref_inst[nn+1].mean() == 0:
117 | tgt_inst[0] -= 1
118 | ref_inst[0] -= 1
119 | if nn+1 == n_inst:
120 | tgt_inst[nn+1:] = 0
121 | ref_inst[nn+1:] = 0
122 | else:
123 | tgt_inst[nn+1:] = torch.cat([tgt_inst[nn+2:], torch.zeros(1, tgt_inst.size(1), tgt_inst.size(2))], dim=0) # re-ordering
124 | ref_inst[nn+1:] = torch.cat([ref_inst[nn+2:], torch.zeros(1, ref_inst.size(1), ref_inst.size(2))], dim=0) # re-ordering
125 | return recursive_check_nonzero_inst(tgt_inst, ref_inst)
126 | return tgt_inst, ref_inst
127 |
128 |
129 | class SequenceFolder(data.Dataset):
130 | """
131 | A sequence data loader where the files are arranged in this way:
132 | root/scene_1/0000000.jpg
133 | root/scene_1/0000001.jpg
134 | ..
135 | root/scene_1/cam.txt
136 | root/scene_2/0000000.jpg
137 | .
138 |
139 |     transform functions must take in a list of images and a numpy array (usually the intrinsics matrix)
140 |
141 | """
142 |
143 | def __init__(self, root, train, seed=None, shuffle=True, max_num_instances=20, sequence_length=3, transform=None, proportion=1, begin_idx=None):
144 | np.random.seed(seed)
145 | random.seed(seed)
146 | self.root = Path(root)
147 | scene_list_path = self.root/'train.txt' if train else self.root/'val.txt'
148 | self.scenes = [self.root/'image'/folder[:-1] for folder in open(scene_list_path)]
149 | self.is_shuffle = shuffle
150 | self.crawl_folders(sequence_length)
151 | self.mni = max_num_instances
152 | self.transform = transform
153 | split_index = int(math.floor(len(self.samples)*proportion))
154 | self.samples = self.samples[:split_index]
155 | if begin_idx:
156 | self.samples = self.samples[begin_idx:]
157 | # pdb.set_trace()
158 |
159 |
160 | def crawl_folders(self, sequence_length):
161 | sequence_set = []
162 | demi_length = (sequence_length-1)//2
163 | shifts = list(range(-demi_length, demi_length + 1))
164 | shifts.pop(demi_length)
165 | for scene in self.scenes:
166 | sceneff = Path(Path.dirname(scene).parent+'/flow_f/'+scene.split('/')[-1])
167 | scenefb = Path(Path.dirname(scene).parent+'/flow_b/'+scene.split('/')[-1])
168 | scenei = Path(Path.dirname(scene).parent+'/segmentation/'+scene.split('/')[-1])
169 | intrinsics = np.genfromtxt(scene/'cam.txt').astype(np.float32).reshape((3, 3))
170 |
171 | imgs = sorted(scene.files('*.jpg'))
172 | flof = sorted(sceneff.files('*.flo')) # 00: src, 01: tgt
173 | flob = sorted(scenefb.files('*.flo')) # 00: tgt, 01: src
174 | segm = sorted(scenei.files('*.npy'))
175 |
176 | if len(imgs) < sequence_length:
177 | continue
178 | for i in range(demi_length, len(imgs)-demi_length):
179 | sample = {'intrinsics': intrinsics, 'tgt': imgs[i], 'ref_imgs': [],
180 | 'flow_fs':[], 'flow_bs':[], 'tgt_seg':segm[i], 'ref_segs':[]} # ('tgt_insts':[], 'ref_insts':[]) will be processed when getitem() is called
181 | for j in shifts:
182 | sample['ref_imgs'].append(imgs[i+j])
183 | sample['ref_segs'].append(segm[i+j])
184 | for j in range(-demi_length, 1):
185 | sample['flow_fs'].append(flof[i+j])
186 | sample['flow_bs'].append(flob[i+j])
187 | sequence_set.append(sample)
188 | # pdb.set_trace()
189 | if self.is_shuffle:
190 | random.shuffle(sequence_set)
191 | self.samples = sequence_set
192 |
193 |
194 | def __getitem__(self, index):
195 | sample = self.samples[index]
196 | tgt_img = load_as_float(sample['tgt'])
197 | ref_imgs = [load_as_float(ref_img) for ref_img in sample['ref_imgs']]
198 |
199 | flow_fs = [torch.from_numpy(load_flo_as_float(flow_f)) for flow_f in sample['flow_fs']]
200 | flow_bs = [torch.from_numpy(load_flo_as_float(flow_b)) for flow_b in sample['flow_bs']]
201 |
202 | tgt_seg = torch.from_numpy(load_seg_as_float(sample['tgt_seg']))
203 | ref_segs = [torch.from_numpy(load_seg_as_float(ref_seg)) for ref_seg in sample['ref_segs']]
204 |
205 | tgt_sort = torch.cat([torch.zeros(1).long(), tgt_seg.sum(dim=(1,2)).argsort(descending=True)[:-1]], dim=0)
206 | ref_sorts = [ torch.cat([torch.zeros(1).long(), ref_seg.sum(dim=(1,2)).argsort(descending=True)[:-1]], dim=0) for ref_seg in ref_segs ]
207 | tgt_seg = tgt_seg[tgt_sort]
208 | ref_segs = [ref_seg[ref_sort] for ref_seg, ref_sort in zip(ref_segs, ref_sorts)]
209 |
210 | tgt_insts = []
211 | ref_insts = []
212 |
213 | noc = []
214 |
215 |
216 |
217 | for i in range( len(ref_imgs) ):
218 |
219 | noc_f, noc_b = find_noc_masks(flow_fs[i].unsqueeze(0), flow_bs[i].unsqueeze(0))
220 |
221 | noc.append([noc_b, noc_f])
222 |
223 |
224 | if i < len(ref_imgs)/2: # first half
225 | seg0 = ref_segs[i].unsqueeze(0)
226 | seg1 = tgt_seg.unsqueeze(0)
227 | else: # second half
228 | seg0 = tgt_seg.unsqueeze(0)
229 | seg1 = ref_segs[i].unsqueeze(0)
230 |
231 | seg0w, _ = flow_warp(seg1, flow_fs[i].unsqueeze(0))
232 | seg1w, _ = flow_warp(seg0, flow_bs[i].unsqueeze(0))
233 |
234 | n_inst0 = seg0.shape[1]
235 | n_inst1 = seg1.shape[1]
236 |
237 |
238 | ### Warp seg0 to seg1. Find IoU between seg1w and seg1. Find the maximum corresponded instance in seg1.
239 | iou_01, ch_01 = inst_iou(seg1w, seg1, valid_mask=noc_b)
240 | iou_10, ch_10 = inst_iou(seg0w, seg0, valid_mask=noc_f)
241 |
242 | seg0_re = torch.zeros(self.mni+1, seg0.shape[2], seg0.shape[3])
243 | seg1_re = torch.zeros(self.mni+1, seg1.shape[2], seg1.shape[3])
244 | non_overlap_0 = torch.ones([seg0.shape[2], seg0.shape[3]])
245 | non_overlap_1 = torch.ones([seg0.shape[2], seg0.shape[3]])
246 |
247 | num_match = 0
248 | for ch in range(n_inst0):
249 | condition1 = (ch == ch_10[ch_01[ch]]) and (iou_01[ch] > 0.5) and (iou_10[ch_01[ch]] > 0.5)
250 | condition2 = ((seg0[0,ch] * non_overlap_0).max() > 0) and ((seg1[0,ch_01[ch]] * non_overlap_1).max() > 0)
251 | if condition1 and condition2 and (num_match < self.mni): # matching success!
252 | num_match += 1
253 | seg0_re[num_match] = seg0[0,ch] * non_overlap_0
254 | seg1_re[num_match] = seg1[0,ch_01[ch]] * non_overlap_1
255 | non_overlap_0 = non_overlap_0 * (1 - seg0_re[num_match])
256 | non_overlap_1 = non_overlap_1 * (1 - seg1_re[num_match])
257 | seg0_re[0] = num_match
258 | seg1_re[0] = num_match
259 | # pdb.set_trace()
260 |
261 | if seg0_re[0].mean() != 0 and seg0_re[int(seg0_re[0].mean())].mean() == 0: pdb.set_trace()
262 | if seg1_re[0].mean() != 0 and seg1_re[int(seg1_re[0].mean())].mean() == 0: pdb.set_trace()
263 |
264 | if i < len(ref_imgs)/2: # first half
265 | tgt_insts.append(seg1_re.detach().cpu().numpy().transpose(1,2,0))
266 | ref_insts.append(seg0_re.detach().cpu().numpy().transpose(1,2,0))
267 | else: # second half
268 | tgt_insts.append(seg0_re.detach().cpu().numpy().transpose(1,2,0))
269 | ref_insts.append(seg1_re.detach().cpu().numpy().transpose(1,2,0))
270 |
271 |
272 |
273 | # pdb.set_trace()
274 | '''
275 | plt.close('all')
276 | plt.figure(1), plt.imshow(tgt_insts[0].sum(dim=0)), plt.grid(linestyle=':', linewidth=0.4), plt.colorbar(), plt.ion(), plt.show()
277 | plt.figure(2), plt.imshow(tgt_insts[1].sum(dim=0)), plt.grid(linestyle=':', linewidth=0.4), plt.colorbar(), plt.ion(), plt.show()
278 | plt.figure(3), plt.imshow(ref_insts[0].sum(dim=0)), plt.grid(linestyle=':', linewidth=0.4), plt.colorbar(), plt.ion(), plt.show()
279 | plt.figure(4), plt.imshow(ref_insts[1].sum(dim=0)), plt.grid(linestyle=':', linewidth=0.4), plt.colorbar(), plt.ion(), plt.show()
280 |
281 | '''
282 |
283 | if self.transform is not None:
284 | imgs, segms, intrinsics = self.transform([tgt_img] + ref_imgs, tgt_insts + ref_insts, np.copy(sample['intrinsics']))
285 | tgt_img = imgs[0]
286 | ref_imgs = imgs[1:]
287 | tgt_insts = segms[:int(len(ref_imgs)/2+1)]
288 | ref_insts = segms[int(len(ref_imgs)/2+1):]
289 |
290 | else:
291 | intrinsics = np.copy(sample['intrinsics'])
292 |
293 |     ### RandomScaleCrop() can crop an instance fully out of frame, leaving a zero mask -> those must be filtered out.
294 | for sq in range( len(ref_imgs) ):
295 | tgt_insts[sq], ref_insts[sq] = recursive_check_nonzero_inst(tgt_insts[sq], ref_insts[sq])
296 |
297 |
298 | if tgt_insts[0][0].mean() != 0 and tgt_insts[0][int(tgt_insts[0][0].mean())].mean() == 0: pdb.set_trace()
299 | if tgt_insts[1][0].mean() != 0 and tgt_insts[1][int(tgt_insts[1][0].mean())].mean() == 0: pdb.set_trace()
300 | if ref_insts[0][0].mean() != 0 and ref_insts[0][int(ref_insts[0][0].mean())].mean() == 0: pdb.set_trace()
301 | if ref_insts[1][0].mean() != 0 and ref_insts[1][int(ref_insts[1][0].mean())].mean() == 0: pdb.set_trace()
302 |
303 | if tgt_insts[0][0].mean() != tgt_insts[0][1:].mean(-1).mean(-1).nonzero().size(0): pdb.set_trace()
304 | if tgt_insts[1][0].mean() != tgt_insts[1][1:].mean(-1).mean(-1).nonzero().size(0): pdb.set_trace()
305 | if ref_insts[0][0].mean() != ref_insts[0][1:].mean(-1).mean(-1).nonzero().size(0): pdb.set_trace()
306 | if ref_insts[1][0].mean() != ref_insts[1][1:].mean(-1).mean(-1).nonzero().size(0): pdb.set_trace()
307 |
308 | # pdb.set_trace()
309 | return tgt_img, ref_imgs, intrinsics, np.linalg.inv(intrinsics), tgt_insts, ref_insts, noc
310 |
311 | def __len__(self):
312 | return len(self.samples)
313 |
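# A minimal loading sketch (hypothetical dataset root; assumes the layout in the
# class docstring plus sibling flow_f/, flow_b/ and segmentation/ directories):
'''
train_set = SequenceFolder('datasets/KITTI/kitti_256', train=True, seed=42,
                           max_num_instances=20, sequence_length=3)
loader = torch.utils.data.DataLoader(train_set, batch_size=1, shuffle=False)
tgt_img, ref_imgs, K, K_inv, tgt_insts, ref_insts, noc = next(iter(loader))
'''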
--------------------------------------------------------------------------------
/loss_functions.py:
--------------------------------------------------------------------------------
1 | '''
2 | Reference: https://github.com/SeokjuLee/Insta-DM/blob/master/loss_functions.py
3 | '''
4 |
5 | from __future__ import division
6 | import torch
7 | from torch import nn
8 | import torch.nn.functional as F
9 | from rigid_warp import inverse_warp_mof, pose_mof2mat, flow_warp
10 | import math
11 | import random
12 | import numpy as np
13 | from matplotlib import pyplot as plt
14 | import pdb
15 |
16 | device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
17 |
18 |
19 |
20 | class SSIM(nn.Module):
21 | '''
22 | Layer to compute the SSIM loss between a pair of images
23 | '''
24 |
25 | def __init__(self):
26 | super(SSIM, self).__init__()
27 | self.mu_x_pool = nn.AvgPool2d(3, 1)
28 | self.mu_y_pool = nn.AvgPool2d(3, 1)
29 | self.sig_x_pool = nn.AvgPool2d(3, 1)
30 | self.sig_y_pool = nn.AvgPool2d(3, 1)
31 | self.sig_xy_pool = nn.AvgPool2d(3, 1)
32 |
33 | self.refl = nn.ReflectionPad2d(1)
34 |
35 | self.C1 = 0.01 ** 2
36 | self.C2 = 0.03 ** 2
37 |
38 | def forward(self, x, y):
39 | x = self.refl(x)
40 | y = self.refl(y)
41 |
42 | mu_x = self.mu_x_pool(x)
43 | mu_y = self.mu_y_pool(y)
44 |
45 | sigma_x = self.sig_x_pool(x ** 2) - mu_x ** 2
46 | sigma_y = self.sig_y_pool(y ** 2) - mu_y ** 2
47 | sigma_xy = self.sig_xy_pool(x * y) - mu_x * mu_y
48 |
49 | SSIM_n = (2 * mu_x * mu_y + self.C1) * (2 * sigma_xy + self.C2)
50 | SSIM_d = (mu_x ** 2 + mu_y ** 2 + self.C1) * (sigma_x + sigma_y + self.C2)
51 |
52 | return torch.clamp((1 - SSIM_n / SSIM_d) / 2, 0, 1)
53 |
54 |
55 |
56 | compute_ssim_loss = SSIM().to(device)
57 |
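# A quick sketch: the SSIM loss of an image with itself is exactly zero, and the
# per-pixel values stay in [0, 1] thanks to the clamp above:
'''
x = torch.rand(1, 3, 64, 64).to(device)
assert compute_ssim_loss(x, x).abs().max() == 0
y = torch.rand(1, 3, 64, 64).to(device)
print(compute_ssim_loss(x, y).mean())  # strictly positive for a decorrelated pair
'''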
58 |
59 |
60 | def compute_photo_and_geometry_loss(tgt_img, ref_imgs, intrinsics, tgt_depth, ref_depths, motion_fields_fwd, motion_fields_bwd, with_ssim, with_mask, with_auto_mask, padding_mode, with_only_obj, tgt_obj_masks, ref_obj_masks, vmasks_fwd, vmasks_bwd):
61 |
62 | photo_loss = 0
63 | geometry_loss = 0
64 |
65 | r2t_imgs, t2r_imgs = [], []
66 | r2t_flows, t2r_flows = [], []
67 | r2t_diffs, t2r_diffs = [], []
68 | r2t_vals, t2r_vals = [], []
69 |
70 | for ref_img, ref_depth, mf_fwd, mf_bwd, tgt_obj_mask, ref_obj_mask, vmask_fwd, vmask_bwd in zip(ref_imgs, ref_depths, motion_fields_fwd, motion_fields_bwd, tgt_obj_masks, ref_obj_masks, vmasks_fwd, vmasks_bwd):
71 | photo_loss1, geometry_loss1, r2t_img, tgt_comp_depth, r2t_flow, r2t_diff, r2t_val = compute_pairwise_loss(tgt_img, ref_img, tgt_depth, ref_depth, mf_fwd, \
72 | intrinsics, with_ssim, with_mask, with_auto_mask, padding_mode, \
73 | with_only_obj, tgt_obj_mask, vmask_fwd.detach())
74 | photo_loss2, geometry_loss2, t2r_img, ref_comp_depth, t2r_flow, t2r_diff, t2r_val = compute_pairwise_loss(ref_img, tgt_img, ref_depth, tgt_depth, mf_bwd, \
75 | intrinsics, with_ssim, with_mask, with_auto_mask, padding_mode, \
76 | with_only_obj, ref_obj_mask, vmask_bwd.detach())
77 | r2t_imgs.append(r2t_img)
78 | t2r_imgs.append(t2r_img)
79 | r2t_flows.append(r2t_flow)
80 | t2r_flows.append(t2r_flow)
81 | r2t_diffs.append(r2t_diff)
82 | t2r_diffs.append(t2r_diff)
83 | r2t_vals.append(r2t_val)
84 | t2r_vals.append(t2r_val)
85 |
86 | photo_loss += (photo_loss1 + photo_loss2)
87 | geometry_loss += (geometry_loss1 + geometry_loss2)
88 |
89 | return photo_loss, geometry_loss, r2t_imgs, t2r_imgs, r2t_flows, t2r_flows, r2t_diffs, t2r_diffs, r2t_vals, t2r_vals
90 |
91 |
92 |
93 | def compute_pairwise_loss(tgt_img, ref_img, tgt_depth, ref_depth, motion_field, intrinsic, with_ssim, with_mask, with_auto_mask, padding_mode, with_only_obj, obj_mask, vmask):
94 |
95 | ref_img_warped, valid_mask, projected_depth, computed_depth, r2t_flow = inverse_warp_mof(ref_img, tgt_depth, ref_depth, motion_field, intrinsic, padding_mode)
96 |
97 | diff_img = (tgt_img - ref_img_warped).abs().clamp(0, 1)
98 | diff_depth = ((computed_depth - projected_depth).abs() / (computed_depth + projected_depth)).clamp(0, 1)
99 |
100 |     if with_auto_mask:
101 | auto_mask = (diff_img.mean(dim=1, keepdim=True) < (tgt_img - ref_img).abs().mean(dim=1, keepdim=True)).float() * valid_mask
102 | valid_mask = auto_mask
103 |
104 |     if with_ssim:
105 | ssim_map = compute_ssim_loss(tgt_img, ref_img_warped)
106 | diff_img = (0.15 * diff_img + 0.85 * ssim_map) # hyper-parameter
107 |
108 |     if with_mask:
109 | weight_mask = (1 - diff_depth)
110 | diff_img = diff_img * weight_mask
111 |
112 |     if with_only_obj:
113 | valid_mask = valid_mask * obj_mask
114 |
115 | out_val = valid_mask * vmask
116 |
117 | # compute all loss
118 | reconstruction_loss = mean_on_mask(diff_img, out_val)
119 | geometry_consistency_loss = mean_on_mask(diff_depth, out_val)
120 |
121 | return reconstruction_loss, geometry_consistency_loss, ref_img_warped, computed_depth, r2t_flow, diff_depth, out_val
122 |
123 |
124 |
125 | def compute_smooth_loss(tgt_depth, tgt_img, ref_depths, ref_imgs):
126 | def get_smooth_loss(disp, img):
127 | """
128 | Computes the smoothness loss for a disparity image
129 | The color image is used for edge-aware smoothness
130 | """
131 | # normalize
132 | mean_disp = disp.mean(2, True).mean(3, True)
133 | norm_disp = disp / (mean_disp + 1e-7)
134 | disp = norm_disp
135 |
136 | grad_disp_x = torch.abs(disp - torch.roll(disp, 1, dims=3))
137 | grad_disp_y = torch.abs(disp - torch.roll(disp, 1, dims=2))
138 | grad_disp_x[:,:,:,0] = 0
139 | grad_disp_y[:,:,0,:] = 0
140 |
141 | grad_img_x = torch.mean(torch.abs(img - torch.roll(img, 1, dims=3)), 1, keepdim=True)
142 | grad_img_y = torch.mean(torch.abs(img - torch.roll(img, 1, dims=2)), 1, keepdim=True)
143 | grad_img_x[:,:,:,0] = 0
144 | grad_img_y[:,:,0,:] = 0
145 |
146 | grad_disp_x *= torch.exp(-grad_img_x)
147 | grad_disp_y *= torch.exp(-grad_img_y)
148 |
149 | return grad_disp_x.mean() + grad_disp_y.mean()
150 |
151 | loss = get_smooth_loss(tgt_depth, tgt_img)
152 |
153 | for ref_depth, ref_img in zip(ref_depths, ref_imgs):
154 | loss += get_smooth_loss(ref_depth, ref_img)
155 |
156 | return loss
157 |
158 |
159 |
160 | def compute_obj_size_constraint_loss(height_prior, tgtD, tgtMs, refDs, refMs, intrinsics, mni, num_insts):
161 | '''
162 | Reference: Struct2Depth (AAAI'19), https://github.com/tensorflow/models/blob/archive/research/struct2depth/model.py
163 | args:
164 | D_avg, D_obj, H_obj, D_app: tensor([d1, d2, d3, ... dn], device='cuda:0')
165 | num_inst: [n1, n2, ...]
166 | intrinsics.shape: torch.Size([B, 3, 3])
167 | '''
168 | bs, _, hh, ww = tgtD.size()
169 |
170 | loss = torch.tensor(.0).cuda()
171 |
172 | for tgtM, refD, refM, num_inst in zip(tgtMs, refDs, refMs, num_insts):
173 | if sum(num_inst) != 0:
174 | fy_rep = intrinsics[:,1,1].repeat_interleave(mni, dim=0)
175 |
176 | ### tgt-frame ###
177 | tgtD_rep = tgtD.repeat_interleave(mni, dim=0)
178 | tgtD_avg = tgtD_rep.mean(dim=[1,2,3])
179 | tgtM_rep = tgtM[:,1:].reshape(-1,1,hh,ww)
180 | tgtD_obj = (tgtD_rep * tgtM_rep).sum(dim=[1,2,3]) / tgtM_rep.sum(dim=[1,2,3]).clamp(min=1e-9)
181 | tgtM_idx = np.where(tgtM_rep.detach().cpu().numpy()==1)
182 | tgtH_obj = torch.tensor([ tgtM_idx[2][tgtM_idx[0]==obj].max() - tgtM_idx[2][tgtM_idx[0]==obj].min() if (tgtM_idx[0]==obj).sum()!=0 else 0 for obj in range(tgtM_rep.size(0)) ]).type_as(tgtD)
183 |
184 | tgt_val = (tgtD_obj > 0) * (tgtH_obj > 0)
185 |
186 | tgt_fy = fy_rep[tgt_val]
187 | tgtD_avg = tgtD_avg[tgt_val].detach() # d_avg.detach() to prevent increasing depth in the sky.
188 | # tgtD_avg = tgtD_avg[tgt_val]
189 | tgtD_obj = tgtD_obj[tgt_val]
190 | tgtH_obj = tgtH_obj[tgt_val]
191 | tgtD_app = (tgt_fy * height_prior) / tgtH_obj
192 |
193 | loss_tgt = torch.abs( (tgtD_obj-tgtD_app)/tgtD_avg ).sum() / torch.abs( (tgtD_obj-tgtD_app)/tgtD_avg ).size(0)
194 |
195 |
196 | ### ref-frame ###
197 | refD_rep = refD.repeat_interleave(mni, dim=0)
198 | refD_avg = refD_rep.mean(dim=[1,2,3])
199 | refM_rep = refM[:,1:].reshape(-1,1,hh,ww)
200 | refD_obj = (refD_rep * refM_rep).sum(dim=[1,2,3]) / refM_rep.sum(dim=[1,2,3]).clamp(min=1e-9)
201 | refM_idx = np.where(refM_rep.detach().cpu().numpy()==1)
202 | refH_obj = torch.tensor([ refM_idx[2][refM_idx[0]==obj].max() - refM_idx[2][refM_idx[0]==obj].min() if (refM_idx[0]==obj).sum()!=0 else 0 for obj in range(refM_rep.size(0)) ]).type_as(refD)
203 |
204 | ref_val = (refD_obj > 0) * (refH_obj > 0)
205 |
206 | ref_fy = fy_rep[ref_val]
207 | refD_avg = refD_avg[ref_val].detach() # d_avg.detach() to prevent increasing depth in the sky.
208 | # refD_avg = refD_avg[ref_val]
209 | refD_obj = refD_obj[ref_val]
210 | refH_obj = refH_obj[ref_val]
211 | refD_app = (ref_fy * height_prior) / refH_obj
212 |
213 | loss_ref = torch.abs( (refD_obj-refD_app)/refD_avg ).sum() / torch.abs( (refD_obj-refD_app)/refD_avg ).size(0)
214 |
215 | loss += 1/2 * (loss_tgt + loss_ref)
216 |
217 | return loss
218 |
219 |
220 |
221 | def compute_mof_consistency_loss(tgt_mofs, ref_mofs, r2t_flows, t2r_flows, r2t_diffs, t2r_diffs, r2t_vals, t2r_vals, alpha=10, thresh=0.1):
222 | '''
223 | Reference: Depth from Videos in the Wild (ICCV'19)
224 | Args:
225 | [DIRECTION]
226 | tgt_mofs_dir[0]: ref[0] >> tgt
227 | tgt_mofs_dir[1]: tgt << ref[1]
228 | [MAGNITUDE]
229 | tgt_mofs_mag[0]: ref[0] >> tgt
230 | tgt_mofs_mag[1]: tgt << ref[1]
231 | '''
232 | bs, _, hh, ww = tgt_mofs[0].size()
233 | eye = torch.eye(3).reshape(1,1,3,3).repeat(bs,hh*ww,1,1).type_as(tgt_mofs[0])
234 |
235 | loss = torch.tensor(.0).cuda()
236 |
237 | for enum, (tgt_mof, ref_mof, r2t_flow, t2r_flow, r2t_diff, t2r_diff, r2t_val, t2r_val) in \
238 | enumerate(zip(tgt_mofs, ref_mofs, r2t_flows, t2r_flows, r2t_diffs, t2r_diffs, r2t_vals, t2r_vals)):
239 |
240 | tgt_mat = pose_mof2mat(tgt_mof)
241 | ref_mat = pose_mof2mat(ref_mof)
242 |
243 | ### rotation error ###
244 | tgt_rot = tgt_mat[:,:,:3].reshape(bs,3,3,-1).permute(0,3,1,2)
245 | ref_rot = ref_mat[:,:,:3].reshape(bs,3,3,-1).permute(0,3,1,2)
246 | rot_unit = torch.matmul(tgt_rot, ref_rot)
247 |
248 | rot_err = torch.mean(torch.pow(rot_unit - eye, 2), dim=[2,3]).reshape(bs, 1, hh, ww)
249 | rot1_scale = torch.mean(torch.pow(tgt_rot - eye, 2), dim=[2,3]).reshape(bs, 1, hh, ww)
250 | rot2_scale = torch.mean(torch.pow(ref_rot - eye, 2), dim=[2,3]).reshape(bs, 1, hh, ww)
251 | rot_err /= (1e-24 + rot1_scale + rot2_scale)
252 | cost_r = rot_err.mean()
253 | # pdb.set_trace()
254 |
255 | ### translation error ###
256 | r2t_mof, _ = flow_warp(ref_mof, r2t_flow.detach()) # to be compared with "tgt_mof"
257 | r2t_mask = ( (1- (r2t_diff>thresh).float()) * r2t_val ).detach()
258 | r2t_mat = pose_mof2mat(r2t_mof)
259 |
260 | r2t_trans = r2t_mat[:,:,-1].reshape(bs,3,-1).permute(0,2,1).unsqueeze(-1)
261 | tgt_trans = tgt_mat[:,:,-1].reshape(bs,3,-1).permute(0,2,1).unsqueeze(-1)
262 | trans_zero = torch.matmul(tgt_rot, r2t_trans) + tgt_trans
263 | trans_zero_norm = torch.pow(trans_zero, 2).sum(dim=2).reshape(bs,1,hh,ww)
264 | r2t_trans_norm = torch.pow(r2t_trans, 2).sum(dim=2).reshape(bs,1,hh,ww)
265 | tgt_trans_norm = torch.pow(tgt_trans, 2).sum(dim=2).reshape(bs,1,hh,ww)
266 |
267 | trans_err = trans_zero_norm / (1e-24 + r2t_trans_norm + tgt_trans_norm)
268 | cost_t = mean_on_mask( trans_err, r2t_mask )
269 |
270 | loss += cost_r + alpha*cost_t
271 | # pdb.set_trace()
272 |
273 | # pdb.set_trace()
274 | '''
275 | r2t_mof, r2t_val0 = flow_warp(ref_mof, r2t_flow)
276 | t2r_mof, t2r_val0 = flow_warp(tgt_mof, t2r_flow)
277 | tgt_err = (tgt_mof + r2t_mof).abs()
278 | ref_err = (ref_mof + t2r_mof).abs()
279 | bb = 0
280 | vm = 0.02
281 | plt.close('all'); ea1 = 7; ea2 = 4; ii = 1;
282 | fig = plt.figure(99, figsize=(21, 12)) # figsize=(22, 13)
283 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(tgt_mof[bb,2].detach().cpu(), vmax=+vm, vmin=-vm); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "tgt_mof", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
284 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(ref_mof[bb,2].detach().cpu(), vmax=+vm, vmin=-vm); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "ref_mof", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
285 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(r2t_mof[bb,2].detach().cpu(), vmax=+vm, vmin=-vm); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "r2t_mof", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
286 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(t2r_mof[bb,2].detach().cpu(), vmax=+vm, vmin=-vm); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "t2r_mof", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
287 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(tgt_err[bb,2].detach().cpu(), vmax=+vm, vmin=-vm); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "tgt_err", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
288 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(ref_err[bb,2].detach().cpu(), vmax=+vm, vmin=-vm); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "ref_err", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
289 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(r2t_diff[bb,0].detach().cpu(), vmax=1, vmin=0); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "r2t_diff", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
290 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(t2r_diff[bb,0].detach().cpu(), vmax=1, vmin=0); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "t2r_diff", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
291 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(r2t_val[bb,0].detach().cpu(), vmax=1, vmin=0); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "r2t_val", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
292 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(r2t_val0[bb,0].detach().cpu(), vmax=1, vmin=0); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "r2t_val0", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
293 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(t2r_val[bb,0].detach().cpu(), vmax=1, vmin=0); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "t2r_val", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
294 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(t2r_val0[bb,0].detach().cpu(), vmax=1, vmin=0); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "t2r_val0", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
295 | # fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(fwd_mask[bb,0].detach().cpu(), vmax=1, vmin=0); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "fwd_mask", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
296 | # fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(bwd_mask[bb,0].detach().cpu(), vmax=1, vmin=0); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "bwd_mask", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
297 | # fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(fwd_val[bb,0].detach().cpu(), vmax=1, vmin=0); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "fwd_val", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
298 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(r2t_mask[bb,0].detach().cpu(), vmax=1, vmin=0); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "r2t_mask", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
299 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(rot_err[bb,0].detach().cpu() ); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "rot_err", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
300 | fig.add_subplot(ea1,ea2,ii); ii += 1; plt.imshow(trans_err[bb,0].detach().cpu(), vmax=0.1, vmin=0 ); plt.colorbar(); plt.grid(linestyle=':', linewidth=0.4); plt.text(10, -14, "trans_err", fontsize=7, bbox=dict(facecolor='None', edgecolor='None'));
301 | plt.tight_layout(); fig.savefig(f'images/NEW/inverse_warp_mof/{enum}.png', dpi=fig.dpi);
302 | '''
303 | return loss / (enum+1)
304 |
305 |
306 |
307 | ################################################################################################################################################################################
308 |
309 |
310 | def mean_on_mask(diff, valid_mask):
311 | '''
312 | compute mean value given a binary mask
313 | '''
314 | mask = valid_mask.expand_as(diff)
315 | if mask.sum() == 0:
316 | return torch.tensor(.0).cuda()
317 | else:
318 | return (diff * mask).sum() / mask.sum()
319 |
320 |
321 |
322 | @torch.no_grad()
323 | def compute_errors(gt, pred, med_scale=None):
324 | abs_diff, abs_rel, sq_rel, a1, a2, a3 = 0,0,0,0,0,0
325 | batch_size = gt.size(0)
326 |
327 | '''
328 |     crop used by Garg ECCV16 to reproduce Eigen NIPS14 results
329 | construct a mask of False values, with the same size as target
330 | and then set to True values inside the crop
331 | '''
332 | crop_mask = gt[0] != gt[0]
333 | y1,y2 = int(0.40810811 * gt.size(1)), int(0.99189189 * gt.size(1))
334 | x1,x2 = int(0.03594771 * gt.size(2)), int(0.96405229 * gt.size(2))
335 | crop_mask[y1:y2,x1:x2] = 1
336 | max_depth = 80
337 |
338 | for current_gt, current_pred in zip(gt, pred):
339 | valid = (current_gt > 0) & (current_gt < max_depth)
340 | valid = valid & crop_mask
341 |
342 | valid_gt = current_gt[valid]
343 | valid_pred = current_pred[valid].clamp(1e-3, max_depth)
344 |
345 | if med_scale is None:
346 | med_scale = torch.median(valid_gt) / torch.median(valid_pred)
347 |
348 | valid_pred = valid_pred * med_scale
349 |
350 | thresh = torch.max((valid_gt / valid_pred), (valid_pred / valid_gt))
351 | a1 += (thresh < 1.25).float().mean()
352 | a2 += (thresh < 1.25 ** 2).float().mean()
353 | a3 += (thresh < 1.25 ** 3).float().mean()
354 |
355 | abs_diff += torch.mean(torch.abs(valid_gt - valid_pred))
356 | abs_rel += torch.mean(torch.abs(valid_gt - valid_pred) / valid_gt)
357 |
358 | sq_rel += torch.mean(((valid_gt - valid_pred)**2) / valid_gt)
359 |
360 | return [metric.item() / batch_size for metric in [abs_diff, abs_rel, sq_rel, a1, a2, a3]], med_scale
--------------------------------------------------------------------------------
/rigid_warp.py:
--------------------------------------------------------------------------------
1 | '''
2 | Reference: https://github.com/SeokjuLee/Insta-DM/blob/master/rigid_warp.py
3 | '''
4 |
5 | from __future__ import division
6 | import torch
7 | import torch.nn.functional as F
8 | import numpy as np
9 | import time
10 | import random
11 | from torch_sparse import coalesce
12 | from matplotlib import pyplot as plt
13 | import pdb
14 |
15 | pixel_coords = None
16 |
17 |
18 | def set_id_grid(depth):
19 | global pixel_coords
20 | b, h, w = depth.size()
21 | i_range = torch.arange(0, h).view(1, h, 1).expand(1,h,w).type_as(depth) # [1, H, W]
22 | j_range = torch.arange(0, w).view(1, 1, w).expand(1,h,w).type_as(depth) # [1, H, W]
23 | ones = torch.ones(1,h,w).type_as(depth)
24 |
25 | pixel_coords = torch.stack((j_range, i_range, ones), dim=1) # [1, 3, H, W]
26 |
27 |
28 |
29 | def check_sizes(input, input_name, expected):
30 | condition = [input.ndimension() == len(expected)]
31 | for i,size in enumerate(expected):
32 | if size.isdigit():
33 | condition.append(input.size(i) == int(size))
34 | assert(all(condition)), "wrong size for {}, expected {}, got {}".format(input_name, 'x'.join(expected), list(input.size()))
35 |
36 |
37 |
38 | def pixel2cam(depth, intrinsics_inv):
39 |     """
40 |     Transform coordinates in the pixel frame to the camera frame.
41 |     Args:
42 |         depth: depth maps -- [B, H, W]
43 |         intrinsics_inv: intrinsics_inv matrix for each element of batch -- [B, 3, 3]
44 |     Returns:
45 |         array of (u,v,1) cam coordinates -- [B, 3, H, W]
46 |     """
47 |     global pixel_coords
48 | b, h, w = depth.size()
49 | if (pixel_coords is None) or pixel_coords.size(2) < h:
50 | set_id_grid(depth)
51 | current_pixel_coords = pixel_coords[:,:,:h,:w].expand(b,3,h,w).reshape(b, 3, -1) # [B, 3, H*W]
52 | cam_coords = (intrinsics_inv @ current_pixel_coords).reshape(b, 3, h, w) # [B, 3, H, W]
53 |
54 | return cam_coords * depth.unsqueeze(1)
55 |
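# Geometry sketch: with identity intrinsics and unit depth, pixel2cam() returns
# the homogeneous pixel grid (u, v, 1) itself at every location:
'''
cam = pixel2cam(torch.ones(1, 4, 5), torch.eye(3).unsqueeze(0))  # [1, 3, 4, 5]
assert cam[0, 0, 0, 3] == 3 and cam[0, 1, 2, 0] == 2 and cam[0, 2].min() == 1
'''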
56 |
57 |
58 | def cam2pixel(cam_coords, proj_c2p_rot, proj_c2p_tr, padding_mode):
59 | """
60 | Transform coordinates in the camera frame to the pixel frame.
61 | Args:
62 | cam_coords: pixel coordinates defined in the first camera coordinates system -- [B, 4, H, W] // tgt_depth * K_inv
63 | proj_c2p_rot: rotation matrix of cameras -- [B, 3, 4]
64 | proj_c2p_tr: translation vectors of cameras -- [B, 3, 1]
65 | Returns:
66 | array of [-1,1] coordinates -- [B, 2, H, W]
67 | """
68 | b, _, h, w = cam_coords.size()
69 | cam_coords_flat = cam_coords.reshape(b, 3, -1) # [B, 3, H*W]
70 |
71 | if proj_c2p_rot is not None:
72 | pcoords = proj_c2p_rot @ cam_coords_flat # (K * P) * (D_tgt * K_inv)
73 | else:
74 | pcoords = cam_coords_flat
75 |
76 | if proj_c2p_tr is not None:
77 | pcoords = pcoords + proj_c2p_tr # [B, 3, H*W]
78 |
79 | X = pcoords[:, 0]
80 | Y = pcoords[:, 1]
81 | Z = pcoords[:, 2].clamp(min=1e-3)
82 |
83 | X_norm = 2*(X / Z)/(w-1) - 1 # Normalized, -1 if on extreme left, 1 if on extreme right (x = w-1) [B, H*W]
84 | Y_norm = 2*(Y / Z)/(h-1) - 1 # Idem [B, H*W]
85 |
86 | pixel_coords = torch.stack([X_norm, Y_norm], dim=2) # [B, H*W, 2]
87 | return pixel_coords.reshape(b,h,w,2)
88 |
89 |
90 |
91 | def cam2pixel2(cam_coords, proj_c2p_rot, proj_c2p_tr, padding_mode):
92 | """
93 | Transform coordinates in the camera frame to the pixel frame.
94 | Reference: https://github.com/JiawangBian/SC-SfMLearner-Release/blob/master/inverse_warp.py
95 | Args:
96 | cam_coords: pixel coordinates defined in the first camera coordinates system -- [B, 4, H, W] // tgt_depth * K_inv
97 | proj_c2p_rot: rotation matrix of cameras -- [B, 3, 4]
98 | proj_c2p_tr: translation vectors of cameras -- [B, 3, 1]
99 | Returns:
100 | array of [-1,1] coordinates -- [B, 2, H, W]
101 | """
102 | b, _, h, w = cam_coords.size()
103 | cam_coords_flat = cam_coords.reshape(b, 3, -1) # [B, 3, H*W]
104 |
105 | if proj_c2p_rot is not None:
106 | pcoords = proj_c2p_rot @ cam_coords_flat # (K * P) * (D_tgt * K_inv)
107 | else:
108 | pcoords = cam_coords_flat
109 |
110 | if proj_c2p_tr is not None:
111 | pcoords = pcoords + proj_c2p_tr # [B, 3, H*W]
112 |
113 | X = pcoords[:, 0]
114 | Y = pcoords[:, 1]
115 | Z = pcoords[:, 2].clamp(min=1e-3)
116 |
117 | X_norm = 2*(X / Z)/(w-1) - 1 # Normalized, -1 if on extreme left, 1 if on extreme right (x = w-1) [B, H*W]
118 | Y_norm = 2*(Y / Z)/(h-1) - 1 # Idem [B, H*W]
119 |
120 | if padding_mode == 'zeros':
121 | X_mask = ((X_norm > 1)+(X_norm < -1)).detach()
122 | X_norm[X_mask] = 2 # make sure that no point in warped image is a combination of im and gray
123 | Y_mask = ((Y_norm > 1)+(Y_norm < -1)).detach()
124 | Y_norm[Y_mask] = 2
125 |
126 | pixel_coords = torch.stack([X_norm, Y_norm], dim=2) # [B, H*W, 2]
127 |
128 | X_z = X / Z
129 | Y_z = Y / Z
130 | pixel_coords2 = torch.stack([X_z, Y_z], dim=2) # [B, H*W, 2]
131 |
132 | return pixel_coords.reshape(b, h, w, 2), Z.reshape(b, 1, h, w), pixel_coords2.reshape(b, h, w, 2)
133 |
134 |
135 |
136 | def cam2homo(cam_coords, proj_c2p_rot, proj_c2p_tr, padding_mode='zeros'):
137 | """
138 | Transform coordinates in the camera frame to the pixel frame.
139 | Args:
140 | cam_coords: pixel coordinates defined in the first camera coordinates system -- [B, 4, H, W]
141 | proj_c2p_rot: rotation matrix of cameras -- [B, 3, 4]
142 | proj_c2p_tr: translation vectors of cameras -- [B, 3, 1]
143 | Returns:
144 | array of [-1,1] coordinates -- [B, 2, H, W]
145 | """
146 | b, _, h, w = cam_coords.size()
147 | cam_coords_flat = cam_coords.view(b, 3, -1) # [B, 3, H*W]
148 | if proj_c2p_rot is not None:
149 | pcoords = proj_c2p_rot.bmm(cam_coords_flat)
150 | else:
151 | pcoords = cam_coords_flat
152 |
153 | if proj_c2p_tr is not None:
154 | pcoords = pcoords + proj_c2p_tr # [B, 3, H*W]
155 | X = pcoords[:, 0]
156 | Y = pcoords[:, 1]
157 | Z = pcoords[:, 2].clamp(min=1e-3)
158 |
159 | X_homo = X / Z # Homogeneous coords X
160 | Y_homo = Y / Z # Homogeneous coords Y
161 | pixel_coords_homo = torch.stack([X_homo, Y_homo], dim=2) # [B, H*W, 2]
162 |
163 | X_norm = 2*(X / Z)/(w-1) - 1 # Normalized, -1 if on extreme left, 1 if on extreme right (x = w-1) [B, H*W]
164 | Y_norm = 2*(Y / Z)/(h-1) - 1 # Idem [B, H*W]
165 | if padding_mode == 'zeros':
166 | X_mask = ((X_norm > 1)+(X_norm < -1)).detach()
167 | X_norm[X_mask] = 2 # make sure that no point in warped image is a combination of im and gray
168 | Y_mask = ((Y_norm > 1)+(Y_norm < -1)).detach()
169 | Y_norm[Y_mask] = 2
170 |
171 | pixel_coords_norm = torch.stack([X_norm, Y_norm], dim=2) # [B, H*W, 2]
172 |
173 | valid_points = pixel_coords_norm.view(b,h,w,2).abs().max(dim=-1)[0] <= 1
174 | valid_mask = valid_points.unsqueeze(1).float()
175 |
176 | return pixel_coords_homo.view(b,h,w,2), valid_mask
177 |
178 |
179 |
180 | def mat2euler(R):
181 | """
182 | Convert rotation matrix to euler angles.
183 | Reference: https://github.com/pulkitag/pycaffe-utils/blob/master/rot_utils.py#L174
184 | Args:
185 | Rotation matrix corresponding to the euler angles -- size = [B, 3, 3]
186 | Returns:
187 | angle: rotation angle along 3 axis (in radians) -- size = [B, 3]
188 | """
189 | bs = R.size(0)
190 |
191 | sy = torch.sqrt(R[:,0,0]*R[:,0,0]+R[:,1,0]*R[:,1,0])
192 | singular = (sy<1e-6).float()
193 |
194 | x = torch.atan2(R[:,2,1], R[:,2,2])
195 | y = torch.atan2(-R[:,2,0], sy)
196 | z = torch.atan2(R[:,1,0], R[:,0,0])
197 |
198 | xs = torch.atan2(-R[:,1,2], R[:,1,1])
199 | ys = torch.atan2(-R[:,2,0], sy)
200 | zs = R[:,1,0]*0
201 |
202 | out_euler_x = x*(1-singular)+xs*singular
203 | out_euler_y = y*(1-singular)+ys*singular
204 | out_euler_z = z*(1-singular)+zs*singular
205 |
206 | return torch.stack([out_euler_x, out_euler_y, out_euler_z], dim=-1)
207 |
208 |
209 |
210 | def euler2mat(angle):
211 | """
212 | Convert euler angles to rotation matrix.
213 | Reference: https://github.com/pulkitag/pycaffe-utils/blob/master/rot_utils.py#L174
214 | Args:
215 | angle: rotation angle along 3 axis (in radians) -- size = [B, 3]
216 | Returns:
217 | Rotation matrix corresponding to the euler angles -- size = [B, 3, 3]
218 | """
219 | B = angle.size(0)
220 | x, y, z = angle[:,0], angle[:,1], angle[:,2]
221 |
222 | cosz = torch.cos(z)
223 | sinz = torch.sin(z)
224 |
225 | zeros = z.detach()*0
226 | ones = zeros.detach()+1
227 | zmat = torch.stack([cosz, -sinz, zeros,
228 | sinz, cosz, zeros,
229 | zeros, zeros, ones], dim=1).reshape(B, 3, 3)
230 |
231 | cosy = torch.cos(y)
232 | siny = torch.sin(y)
233 |
234 | ymat = torch.stack([cosy, zeros, siny,
235 | zeros, ones, zeros,
236 | -siny, zeros, cosy], dim=1).reshape(B, 3, 3)
237 |
238 | cosx = torch.cos(x)
239 | sinx = torch.sin(x)
240 |
241 | xmat = torch.stack([ones, zeros, zeros,
242 | zeros, cosx, -sinx,
243 | zeros, sinx, cosx], dim=1).reshape(B, 3, 3)
244 |
245 | rotMat = xmat @ ymat @ zmat
246 | return rotMat
247 |
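# Sanity sketch: zero angles map to the identity, and every output of euler2mat()
# is a proper rotation (R @ R^T = I up to float error):
'''
assert torch.allclose(euler2mat(torch.zeros(1, 3)), torch.eye(3).unsqueeze(0))
R = euler2mat(torch.tensor([[0.1, -0.2, 0.3]]))
assert torch.allclose(R @ R.transpose(1, 2), torch.eye(3).unsqueeze(0), atol=1e-6)
'''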
248 |
249 |
250 | def quat2mat(quat):
251 | """
252 | Convert quaternion coefficients to rotation matrix.
253 | Args:
254 |         quat: first three coefficients of the rotation quaternion; the fourth is computed so the quaternion has norm 1 -- size = [B, 3]
255 | Returns:
256 | Rotation matrix corresponding to the quaternion -- size = [B, 3, 3]
257 | """
258 | norm_quat = torch.cat([quat[:,:1].detach()*0 + 1, quat], dim=1)
259 | norm_quat = norm_quat/norm_quat.norm(p=2, dim=1, keepdim=True)
260 | w, x, y, z = norm_quat[:,0], norm_quat[:,1], norm_quat[:,2], norm_quat[:,3]
261 |
262 | B = quat.size(0)
263 |
264 | w2, x2, y2, z2 = w.pow(2), x.pow(2), y.pow(2), z.pow(2)
265 | wx, wy, wz = w*x, w*y, w*z
266 | xy, xz, yz = x*y, x*z, y*z
267 |
268 | rotMat = torch.stack([w2 + x2 - y2 - z2, 2*xy - 2*wz, 2*wy + 2*xz,
269 | 2*wz + 2*xy, w2 - x2 + y2 - z2, 2*yz - 2*wx,
270 | 2*xz - 2*wy, 2*wx + 2*yz, w2 - x2 - y2 + z2], dim=1).reshape(B, 3, 3)
271 | return rotMat
272 |
273 |
274 |
275 | def pose_vec2mat(vec, rotation_mode='euler'):
276 | """
277 | Convert 6DoF parameters to transformation matrix.
278 | Args:
279 | vec: 6DoF parameters in the order of tx, ty, tz, rx, ry, rz -- [B, 6]
280 | Returns:
281 | A transformation matrix -- [B, 3, 4]
282 | """
283 | translation = vec[:, :3].unsqueeze(-1) # [B, 3, 1]
284 | # print(translation)
285 | rot = vec[:,3:]
286 | if rotation_mode == 'euler':
287 | rot_mat = euler2mat(rot) # [B, 3, 3]
288 | # print(rot_mat)
289 | elif rotation_mode == 'quat':
290 | rot_mat = quat2mat(rot) # [B, 3, 3]
291 | transform_mat = torch.cat([rot_mat, translation], dim=2) # [B, 3, 4]
292 | return transform_mat
293 |
294 |
295 |
296 | def pose_mof2mat_v1(mof, rotation_mode='euler'):
297 | """
298 | ### Out-of-Memory Issue ###
299 | Convert 6DoF parameters to transformation matrix.
300 | Args:
301 | mof: 6DoF parameters in the order of tx, ty, tz, rx, ry, rz -- [B, 6, H, W]
302 | Returns:
303 | A transformation matrix -- [B, 3, 4, H, W]
304 | """
305 | bs, _, hh, ww = mof.size()
306 | mof = mof.permute(0,2,3,1).reshape(-1,6) # [B*N, 6]
307 | translation = mof[:,:3].unsqueeze(-1) # [B*N, 3, 1]
308 | rot = mof[:,3:] # [B*N, 3]
309 |
310 | if rotation_mode == 'euler':
311 | rot_mat = euler2mat(rot) # [B*N, 3, 3]
312 | elif rotation_mode == 'quat':
313 | rot_mat = quat2mat(rot) # [B*N, 3, 3]
314 |
315 | transform_mat = torch.cat([rot_mat, translation], dim=2) # [B*N, 3, 4]
316 | transform_mat = transform_mat.reshape(bs, hh, ww, 3, 4).permute(0,3,4,1,2) # [B, 3, 4, H, W]
317 | # pdb.set_trace()
318 | return transform_mat
319 |
320 |
321 |
322 | def pose_mof2mat(mof, rotation_mode='euler'):
323 | """
324 | Convert 6DoF parameters to transformation matrix.
325 | Args:
326 | mof: 6DoF parameters in the order of tx, ty, tz, rx, ry, rz -- [B, 6, H, W]
327 | Returns:
328 | A transformation matrix -- [B, 3, 4, H, W]
329 | """
330 | bs, _, hh, ww = mof.size()
331 | translation = mof[:,:3].reshape(bs,3,1,hh,ww) # [B, 3, 1, H, W]
332 | rot = mof[:,3:].mean(dim=[2,3]) # [B*1, 3]
333 |
334 | if rotation_mode == 'euler':
335 | rot_mat = euler2mat(rot) # [B*1, 3, 3]
336 | elif rotation_mode == 'quat':
337 | rot_mat = quat2mat(rot) # [B*1, 3, 3]
338 |
339 | rot_mat = rot_mat.reshape(bs,3,3,1,1).repeat(1,1,1,hh,ww) # [B, 3, 3, H, W]
340 | transform_mat = torch.cat([rot_mat, translation], dim=2) # [B*N, 3, 4]
341 | # pdb.set_trace()
342 | return transform_mat
343 |
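# Shape sketch: a zero motion field yields the identity rotation and zero
# translation at every pixel:
'''
T = pose_mof2mat(torch.zeros(2, 6, 4, 5))      # [B, 3, 4, H, W] = [2, 3, 4, 4, 5]
assert T.shape == (2, 3, 4, 4, 5)
assert torch.allclose(T[:, :, :3, 0, 0], torch.eye(3).expand(2, 3, 3))
'''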
344 |
345 |
346 | def inverse_warp(img, depth, pose, intrinsics, rotation_mode='euler', padding_mode='zeros'):
347 | """
348 | Inverse warp a source image to the target image plane.
349 | Args:
350 | img: the source image (where to sample pixels) -- [B, 3, H, W]
351 | depth: depth map of the target image -- [B, H, W]
352 | pose: 6DoF pose parameters from target to source -- [B, 6]
353 | intrinsics: camera intrinsic matrix -- [B, 3, 3]
354 | Returns:
355 | projected_img: Source image warped to the target image plane
356 | valid_points: Boolean array indicating point validity
357 | """
358 | # check_sizes(img, 'img', 'B3HW')
359 | check_sizes(depth, 'depth', 'BHW')
360 | check_sizes(pose, 'pose', 'B6')
361 | check_sizes(intrinsics, 'intrinsics', 'B33')
362 |
363 | batch_size, _, img_height, img_width = img.size()
364 |
365 | cam_coords = pixel2cam(depth, intrinsics.inverse()) # [B,3,H,W]
366 |
367 | pose_mat = pose_vec2mat(pose, rotation_mode) # [B,3,4]
368 |
369 | # Get projection matrix for tgt camera frame to source pixel frame
370 | proj_cam_to_src_pixel = intrinsics @ pose_mat # [B, 3, 4]
371 |
372 | rot, tr = proj_cam_to_src_pixel[:,:,:3], proj_cam_to_src_pixel[:,:,-1:]
373 | src_pixel_coords = cam2pixel(cam_coords, rot, tr, padding_mode) # [B,H,W,2]
374 |
375 |     if tuple(int(v) for v in torch.__version__.split('.')[:2]) >= (1, 3):  # torch.__version__[:3] would mis-parse e.g. '1.10'
376 | projected_img = F.grid_sample(img, src_pixel_coords, padding_mode=padding_mode, align_corners=True)
377 | else:
378 | projected_img = F.grid_sample(img, src_pixel_coords, padding_mode=padding_mode)
379 |
380 | valid_points = src_pixel_coords.abs().max(dim=-1)[0] <= 1
381 |
382 | return projected_img, valid_points
383 |
384 |
385 |
386 | def inverse_warp2(img, depth, pose, intrinsics, ref_depth, rotation_mode='euler', padding_mode='zeros'):
387 | """
388 | Inverse warp a source image to the target image plane.
389 | Reference: https://github.com/JiawangBian/SC-SfMLearner-Release/blob/master/inverse_warp.py
390 | Args:
391 | img: the source image (where to sample pixels) -- [B, 3, H, W] // ref_img
392 | depth: depth map of the target image -- [B, 1, H, W] // tgt_depth
393 | ref_depth: the source depth map (where to sample depth) -- [B, 1, H, W] // ref_depth
394 | pose: 6DoF pose parameters from target to source -- [B, 6]
395 | intrinsics: camera intrinsic matrix -- [B, 3, 3]
396 | Returns:
397 | projected_img: Source image warped to the target image plane
398 | valid_mask: Float array indicating point validity
399 | """
400 | check_sizes(img, 'img', 'BCHW')
401 | check_sizes(depth, 'depth', 'B1HW')
402 | check_sizes(intrinsics, 'intrinsics', 'B33')
403 | check_sizes(ref_depth, 'ref_depth', 'B1HW')
404 | if isinstance(pose, list):
405 | for p_vec in pose:
406 | check_sizes(p_vec, 'pose', 'B6')
407 | else:
408 | check_sizes(pose, 'pose', 'B6')
409 |
410 | batch_size, _, img_height, img_width = img.size()
411 |
412 | cam_coords = pixel2cam(depth.squeeze(1), intrinsics.inverse()) # D * K_inv * X, [B,3,H,W]
413 |
414 | if isinstance(pose, list):
415 | for pp, p_vec in enumerate(pose):
416 | if pp == 0:
417 | pose_mat = pose_vec2mat(p_vec, rotation_mode) # RT, [B,3,4]
418 | aux_mat = torch.tensor([0,0,0,1]).type_as(pose_mat).unsqueeze(0).unsqueeze(0).repeat(batch_size,1,1) # [B,1,4]
419 | pose_mat = torch.cat([pose_mat, aux_mat], dim=1) # [B,4,4]
420 | continue
421 | next_mat = pose_vec2mat(p_vec, rotation_mode) # RT, [B,3,4]
422 | aux_mat = torch.tensor([0,0,0,1]).type_as(next_mat).unsqueeze(0).unsqueeze(0).repeat(batch_size,1,1) # [B,1,4]
423 | next_mat = torch.cat([next_mat, aux_mat], dim=1)
424 | pose_mat = pose_mat @ next_mat # [B,4,4]
425 | # pose_mat = next_mat @ pose_mat # [B,4,4]
426 | pose_mat = pose_mat[:,:3,:] # RT, [B,3,4]
427 | else:
428 | pose_mat = pose_vec2mat(pose, rotation_mode) # RT, [B,3,4]
429 |
430 | # Get projection matrix for tgt camera frame to source pixel frame
431 | proj_cam_to_src_pixel = intrinsics @ pose_mat # [B, 3, 4]
432 |
433 | rot, tr = proj_cam_to_src_pixel[:,:,:3], proj_cam_to_src_pixel[:,:,-1:]
434 | src_pixel_coords, computed_depth, _ = cam2pixel2(cam_coords, rot, tr, padding_mode) # [B,H,W,2]
435 |
436 | if tuple(int(v) for v in torch.__version__.split('.')[:2]) >= (1, 3):
437 | projected_img = F.grid_sample(img, src_pixel_coords, padding_mode=padding_mode, align_corners=True)
438 | else:
439 | projected_img = F.grid_sample(img, src_pixel_coords, padding_mode=padding_mode)
440 |
441 | valid_points = src_pixel_coords.abs().max(dim=-1)[0] <= 1
442 | valid_mask = valid_points.unsqueeze(1).float()
443 |
444 | if tuple(int(v) for v in torch.__version__.split('.')[:2]) >= (1, 3):
445 | projected_depth = F.grid_sample(ref_depth, src_pixel_coords, padding_mode=padding_mode, align_corners=True).clamp(min=1e-3)
446 | else:
447 | projected_depth = F.grid_sample(ref_depth, src_pixel_coords, padding_mode=padding_mode).clamp(min=1e-3)
448 |
449 | return projected_img, valid_mask, projected_depth, computed_depth
450 |
451 |
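# Illustrative sketch (hypothetical inputs): inverse_warp2 also accepts a list
# of pose vectors, which it chains into a single [B,3,4] transform; the two
# returned depths feed an SC-SfMLearner-style geometry-consistency residual.
def _demo_inverse_warp2():
    import torch
    if not torch.cuda.is_available():
        return
    img = torch.rand(1, 3, 32, 64).cuda()
    tgt_depth = torch.rand(1, 1, 32, 64).cuda() + 1.0
    ref_depth = torch.rand(1, 1, 32, 64).cuda() + 1.0
    K = torch.tensor([[[50., 0., 32.], [0., 50., 16.], [0., 0., 1.]]]).cuda()
    poses = [torch.zeros(1, 6).cuda(), torch.zeros(1, 6).cuda()]  # chained motions
    img_w, valid, proj_d, comp_d = inverse_warp2(img, tgt_depth, poses, K, ref_depth)
    diff = ((comp_d - proj_d).abs() / (comp_d + proj_d)) * valid  # depth inconsistency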
452 |
453 | def transform_scale_consistent_depth(depth, pose, intrinsics, rotation_mode='euler', padding_mode='zeros'):
454 | """
455 | Transform scale of depth with given pose change.
456 | Args:
457 | depth: depth map of the target image -- [B, 1, H, W] // tgt_depth
458 | pose: 6DoF pose parameters from target to source -- [B, 6] //
459 | intrinsics: camera intrinsic matrix -- [B, 3, 3] //
460 | Returns:
461 | scale_transformed_depth: Source depth scaled to the target depth
462 | """
463 | check_sizes(depth, 'depth', 'B1HW')
464 | check_sizes(pose, 'pose', 'B6')
465 | check_sizes(intrinsics, 'intrinsics', 'B33')
466 |
467 | cam_coords = pixel2cam(depth.squeeze(1), intrinsics.inverse()) # D * K_inv * X, [B,3,H,W]
468 | pose_mat = pose_vec2mat(pose, rotation_mode) # RT, [B,3,4]
469 |
470 | # Get projection matrix for tgt camera frame to source pixel frame
471 | proj_cam_to_src_pixel = intrinsics @ pose_mat # [B, 3, 4]
472 |
473 | rot, tr = proj_cam_to_src_pixel[:,:,:3], proj_cam_to_src_pixel[:,:,-1:]
474 | _, computed_depth, _ = cam2pixel2(cam_coords, rot, tr, padding_mode) # [B,H,W,2]
475 | # pdb.set_trace()
476 |
477 | return computed_depth
478 |
479 |
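# Illustrative sketch (hypothetical inputs): with an identity pose the
# scale-transformed depth reduces to the input depth (up to the z-clamp).
def _demo_transform_scale_consistent_depth():
    import torch
    if not torch.cuda.is_available():
        return
    depth = torch.rand(1, 1, 32, 64).cuda() + 1.0
    pose = torch.zeros(1, 6).cuda()                    # identity motion
    K = torch.tensor([[[50., 0., 32.], [0., 50., 16.], [0., 0., 1.]]]).cuda()
    d = transform_scale_consistent_depth(depth, pose, K)  # -> [1, 1, 32, 64]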
480 |
481 | def depth2flow(depth, pose, intrinsics, reverse_pose=False, rotation_mode='euler', padding_mode='zeros'):
482 | """
483 | Depth + Pose => Flow
484 |
485 | Args:
486 | img: the source image (where to sample pixels) -- [B, 3, H, W]
487 | depth: depth map of the target image -- [B, 1, H, W]
488 | pose: 6DoF pose parameters from target to source -- [B, 6]
489 | intrinsics: camera intrinsic matrix -- [B, 3, 3]
490 | Returns:
491 | Source image warped to the target image plane
492 | """
493 | check_sizes(depth, 'depth', 'B1HW')
494 | check_sizes(pose, 'pose', 'B6')
495 | check_sizes(intrinsics, 'intrinsics', 'B33')
496 |
497 | batch_size, _, hh, ww = depth.size()
498 | cam_coords = pixel2cam(depth.squeeze(1), intrinsics.inverse()) # D * K_inv * X, [B,3,H,W]
499 | pose_mat = pose_vec2mat(pose, rotation_mode) # RT, [B,3,4]
500 |
501 | if reverse_pose:
502 | aux_mat = torch.zeros([batch_size,4]).cuda().unsqueeze(1)
503 | aux_mat[:,:,3] = 1
504 | pose_mat = torch.cat([pose_mat, aux_mat], dim=1) # [B, 4, 4]
505 | pose_mat = torch.inverse(pose_mat) # batched inverse, [B, 4, 4]
506 | 
507 | pose_mat = pose_mat[:,:3,:]
508 |
509 | # Get projection matrix for tgt camera frame to source pixel frame
510 | proj_cam_to_src_pixel = intrinsics @ pose_mat # [B,3,4]
511 | rot, tr = proj_cam_to_src_pixel[:,:,:3], proj_cam_to_src_pixel[:,:,-1:]
512 | flow_grid, valid_mask = cam2homo(cam_coords, rot, tr, padding_mode) # [B,H,W,2], [B,1,H,W]
513 | mgrid_np = np.expand_dims(np.mgrid[0:ww,0:hh].transpose(2,1,0).astype(np.float32),0).repeat(batch_size, axis=0)
514 | mgrid = torch.from_numpy(mgrid_np).cuda() # [B,H,W,2]
515 |
516 | flow_rigid = flow_grid - mgrid
517 | flow_rigid = flow_rigid.permute(0,3,1,2)
518 |
519 | return flow_rigid, valid_mask
520 |
521 |
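# Illustrative sketch (hypothetical inputs): a small lateral translation tx
# yields a near-uniform horizontal rigid flow of roughly fx * tx / depth
# pixels. CUDA is required, since mgrid above is allocated with .cuda().
def _demo_depth2flow():
    import torch
    if not torch.cuda.is_available():
        return
    depth = torch.full((1, 1, 32, 64), 5.0).cuda()     # constant 5m depth
    pose = torch.zeros(1, 6).cuda()
    pose[:, 0] = 0.1                                   # lateral translation tx
    K = torch.tensor([[[50., 0., 32.], [0., 50., 16.], [0., 0., 1.]]]).cuda()
    flow, valid = depth2flow(depth, pose, K)           # [1,2,32,64], [1,1,32,64]
    # expected horizontal flow ~ fx * tx / z = 50 * 0.1 / 5 = 1 pixel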
522 |
523 | def cam2pix_trans(cam_coords, pose_mat, intrinsics):
524 | b, _, h, w = cam_coords.size()
525 | rot, tr = pose_mat[:,:,:3], pose_mat[:,:,-1:] # [B, 3, 3], [B, 3, 1]
526 | cam_coords_flat = cam_coords.reshape(b, 3, -1) # [B, 3, H*W]
527 | cam_coords_trans = rot @ cam_coords_flat # [B, 3, H*W]
528 | cam_coords_trans = cam_coords_trans + tr # [B, 3, H*W]
529 |
530 | X = cam_coords_trans[:, 0] # [B, H*W]
531 | Y = cam_coords_trans[:, 1] # [B, H*W]
532 | Z = cam_coords_trans[:, 2].clamp(min=1e-3) # [B, H*W]
533 |
534 | X_norm = (X / Z) # [B, H*W]
535 | Y_norm = (Y / Z) # [B, H*W]
536 | Z_norm = (Z / Z) # [B, H*W]
537 | P_norm = torch.stack([X_norm, Y_norm, Z_norm], dim=1) # [B, 3, H*W]
538 | pix_coords = (intrinsics @ P_norm).permute(0,2,1)[:,:,:2] # [B, H*W, 2]
539 |
540 | return pix_coords.reshape(b, h, w, 2), Z.reshape(b, 1, h, w)
541 |
542 |
543 |
544 | def forward_warp(img, depth, pose, intrinsics, upscale=None, rotation_mode='euler', padding_mode='zeros'):
545 | """
546 | Inverse warp a source image to the target image plane.
547 | Args:
548 | img: the source image (where to sample pixels) -- [B, C, H, W]
549 | depth: depth map of the source image -- [B, 1, H, W]
550 | pose: 6DoF pose parameters from target to source -- [B, 6]
551 | intrinsics: camera intrinsic matrix -- [B, 3, 3]
552 | Returns:
553 | projected_img: Source image warped to the target image plane
554 | valid_points: Boolean array indicating point validity
555 |
556 | plt.close('all')
557 | plt.figure(1); plt.imshow(img[0,0].detach().cpu()); plt.colorbar(); plt.ion(); plt.show()
558 | plt.figure(2); plt.imshow(img_w[0,0].detach().cpu()); plt.colorbar(); plt.ion(); plt.show()
559 | plt.figure(3); plt.imshow(depth[0,0].detach().cpu()); plt.colorbar(); plt.ion(); plt.show()
560 | plt.figure(4); plt.imshow(depth_w[0,0].detach().cpu()); plt.colorbar(); plt.ion(); plt.show()
561 | plt.figure(5); plt.imshow(fw_val[0,0].detach().cpu()); plt.colorbar(); plt.ion(); plt.show()
562 | plt.figure(6); plt.imshow(iw_val[0,0].detach().cpu()); plt.colorbar(); plt.ion(); plt.show()
563 | plt.figure(7); plt.imshow(valid[0,0].detach().cpu()); plt.colorbar(); plt.ion(); plt.show()
564 |
565 | """
566 | check_sizes(depth, 'depth', 'B1HW')
567 | check_sizes(pose, 'pose', 'B6')
568 | check_sizes(intrinsics, 'intrinsics', 'B33')
569 |
570 | bs, _, hh, ww = depth.size()
571 | depth_u = F.interpolate(depth, scale_factor=upscale).squeeze(1)
572 | intrinsic_u = torch.cat((intrinsics[:, 0:2]*upscale, intrinsics[:, 2:]), dim=1)
573 |
574 | cam_coords = pixel2cam(depth_u, intrinsic_u.inverse()) # [B,3,uH,uW]
575 | pose_mat = pose_vec2mat(pose, rotation_mode) # [B,3,4]
576 | pcoords, Z = cam2pix_trans(cam_coords, pose_mat, intrinsics) # [B,uH,uW,2], [B,1,uH,uW]
577 |
578 | depth_w, fw_val = [], []
579 | for coo, z in zip(pcoords, Z):
580 | idx = coo.reshape(-1,2).permute(1,0).long()[[1,0]]
581 | val = z.reshape(-1)
582 | idx[0][idx[0]<0] = hh
583 | idx[0][idx[0]>hh-1] = hh
584 | idx[1][idx[1]<0] = ww
585 | idx[1][idx[1]>ww-1] = ww
586 | _idx, _val = coalesce(idx, 1/val, m=hh+1, n=ww+1, op='max') # Cast an index with maximum-inverse-depth: we do NOT interpolate points! >> errors near boundary
587 | depth_w.append( 1/torch.sparse.FloatTensor(_idx, _val, torch.Size([hh+1,ww+1])).to_dense()[:-1,:-1] )
588 | fw_val.append( 1- (torch.sparse.FloatTensor(_idx, _val, torch.Size([hh+1,ww+1])).to_dense()[:-1,:-1]==0).float() )
589 | # pdb.set_trace()
590 | depth_w = torch.stack(depth_w, dim=0)
591 | fw_val = torch.stack(fw_val, dim=0)
592 | depth_w[fw_val==0] = 0
593 |
594 | aux_mat = torch.tensor([0,0,0,1]).type_as(pose_mat).unsqueeze(0).unsqueeze(0).repeat(bs,1,1) # [B,1,4]
595 | pose_mat_inv = torch.inverse(torch.cat([pose_mat, aux_mat], dim=1)) # [B,4,4]
596 | trans_vec = pose_mat_inv[:,:3,3]
597 | euler_vec = mat2euler( pose_mat_inv[:,:3,:3] )
598 | pose_inv = torch.cat([trans_vec, euler_vec], dim=1)
599 |
600 | img_w, iw_val = inverse_warp(img, depth_w, pose_inv, intrinsics)
601 | iw_val = iw_val.float().unsqueeze(1)
602 | depth_w = depth_w.unsqueeze(1)
603 | valid = fw_val.unsqueeze(1) * iw_val
604 | # pdb.set_trace()
605 |
606 | return img_w*valid, depth_w*valid, valid
607 |
608 |
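# Illustrative sketch (hypothetical inputs): forward_warp splats source depth
# into the target view with a max-inverse-depth z-buffer (the coalesce call
# above), then inverse-warps the image with the inverted pose. `upscale` must
# be supplied; the training code always passes upscale=3.
def _demo_forward_warp():
    import torch
    if not torch.cuda.is_available():
        return
    img = torch.rand(1, 3, 32, 64).cuda()
    depth = torch.rand(1, 1, 32, 64).cuda() + 1.0
    pose = torch.zeros(1, 6).cuda()
    K = torch.tensor([[[50., 0., 32.], [0., 50., 16.], [0., 0., 1.]]]).cuda()
    img_w, depth_w, valid = forward_warp(img, depth, pose, K, upscale=3)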
609 |
610 | def cam2pixel_mof(cam_coords, proj_c2p_rot, proj_c2p_tr, padding_mode):
611 | """
612 | Transform coordinates in the camera frame to the pixel frame.
613 | Args:
614 | cam_coords: pixel coordinates defined in the first camera coordinates system -- [B, 3, H, W] // tgt_depth * K_inv
615 | proj_c2p_rot: rotation matrix of cameras -- [B, 3, 3, H, W]
616 | proj_c2p_tr: translation vectors of cameras -- [B, 3, 1, H, W]
617 | Returns:
618 | normalized [-1,1] pixel coordinates -- [B, H, W, 2], projected depth -- [B, 1, H, W], unnormalized pixel coordinates -- [B, H, W, 2]
619 | """
620 | bs, _, hh, ww = cam_coords.size()
621 | cam_coords_flat = cam_coords.reshape(bs, 3, 1, -1) # [B, 3, 1, H*W]
622 | c2p_rot_flat = proj_c2p_rot.reshape(bs, 3, 3, -1) # [B, 3, 3, H*W]
623 | c2p_tr_flat = proj_c2p_tr.reshape(bs, 3, 1, -1) # [B, 3, 1, H*W]
624 |
625 | ### Rotation ###
626 | if proj_c2p_rot is not None:
627 | pcoords = c2p_rot_flat.permute(0,2,1,3) * cam_coords_flat # [B, 3, 3, H*W] // (K * P) * (D_tgt * K_inv)
628 | pcoords = pcoords.sum(dim=1, keepdim=True).permute(0,2,1,3) # [B, 3, 1, H*W]
629 | else:
630 | pcoords = cam_coords_flat
631 | # pdb.set_trace()
638 |
639 | ### Translation ###
640 | if proj_c2p_tr is not None:
641 | pcoords = pcoords + c2p_tr_flat # [B, 3, 1, H*W]
642 |
643 | pcoords = pcoords.reshape(bs, 3, -1)
644 |
645 | X = pcoords[:, 0]
646 | Y = pcoords[:, 1]
647 | Z = pcoords[:, 2].clamp(min=1e-3)
648 |
649 | X_norm = 2*(X/Z) / (ww-1) - 1 # Normalized, -1 if on extreme left, 1 if on extreme right (x = w-1) [B, H*W]
650 | Y_norm = 2*(Y/Z) / (hh-1) - 1 # Idem [B, H*W]
651 | if padding_mode == 'zeros':
652 | X_mask = ((X_norm > 1)+(X_norm < -1)).detach()
653 | X_norm[X_mask] = 2 # make sure that no point in warped image is a combination of im and gray
654 | Y_mask = ((Y_norm > 1)+(Y_norm < -1)).detach()
655 | Y_norm[Y_mask] = 2
656 |
657 | pixel_coords = torch.stack([X_norm, Y_norm], dim=2) # [B, H*W, 2]
658 | # pdb.set_trace()
672 |
673 | X_z = X / Z
674 | Y_z = Y / Z
675 | pixel_coords2 = torch.stack([X_z, Y_z], dim=2) # [B, H*W, 2]
676 |
677 | return pixel_coords.reshape(bs, hh, ww, 2), Z.reshape(bs, 1, hh, ww), pixel_coords2.reshape(bs, hh, ww, 2)
678 |
679 |
680 |
681 | def inverse_warp_mof(img, depth, ref_depth, motion_field, intrinsics, padding_mode='zeros'):
682 | """
683 | Inverse warp a source image to the target image plane.
684 | Args:
685 | img: the source image (where to sample pixels) -- [B, 3, H, W]
686 | depth: depth map of the target image -- [B, 1, H, W]
687 | ref_depth: the source depth map (where to sample depth) -- [B, 1, H, W]
688 | motion_field: 6DoF pose parameters from target to source -- [B, 6, H, W]
689 | intrinsics: camera intrinsic matrix -- [B, 3, 3]
690 | Returns:
691 | projected_img: Source image warped to the target image plane
692 | valid_mask: Float array indicating point validity
693 | projected_depth: sampled depth from source image
694 | computed_depth: computed depth of source image using the target depth
695 | """
696 | check_sizes(img, 'img', 'B3HW')
697 | check_sizes(depth, 'depth', 'B1HW')
698 | check_sizes(ref_depth, 'ref_depth', 'B1HW')
699 | check_sizes(motion_field, 'motion_field', 'B6HW')
700 | check_sizes(intrinsics, 'intrinsics', 'B33')
701 |
702 | bs, _, hh, ww = img.size()
703 |
704 | cam_coords = pixel2cam(depth.squeeze(1), intrinsics.inverse()) # [B,3,H,W]
705 |
706 | transform_field = pose_mof2mat(motion_field) # [B, 3, 4, H, W]
707 | transform_field = transform_field.permute(0,3,4,1,2).reshape(bs,-1,3,4) # [B, N, 3, 4]
708 |
709 | # Get projection matrix for tgt camera frame to source pixel frame
710 | proj_cam_to_src_pixel = intrinsics.reshape(bs,1,3,3) @ transform_field # [B, N, 3, 4]
711 |
712 | rot, tr = proj_cam_to_src_pixel[:, :, :, :3], proj_cam_to_src_pixel[:, :, :, -1:]
713 | rot = rot.reshape(bs,hh,ww,3,3).permute(0,3,4,1,2) # [B, 3, 3, H, W]
714 | tr = tr.reshape(bs,hh,ww,3,1).permute(0,3,4,1,2) # [B, 3, 1, H, W]
715 |
716 | # pdb.set_trace()
717 | '''
718 | plt.close('all')
719 | plt.figure(1); plt.imshow(rot[0,0,0].detach().cpu()); plt.colorbar(); plt.ion(); plt.show();
720 | plt.figure(2); plt.imshow(tr[0,0,0].detach().cpu()); plt.colorbar(); plt.ion(); plt.show();
721 |
722 | '''
723 | src_pixel_coords, computed_depth, flow_grid = cam2pixel_mof(cam_coords, rot, tr, padding_mode) # [B,H,W,2]
724 | if tuple(int(v) for v in torch.__version__.split('.')[:2]) >= (1, 3):
725 | projected_img = F.grid_sample(img, src_pixel_coords, padding_mode=padding_mode, align_corners=False)
726 | else:
727 | projected_img = F.grid_sample(img, src_pixel_coords, padding_mode=padding_mode)
728 |
729 | valid_points = src_pixel_coords.abs().max(dim=-1)[0] <= 1
730 | valid_mask = valid_points.unsqueeze(1).float()
731 |
732 | if tuple(int(v) for v in torch.__version__.split('.')[:2]) >= (1, 3):
733 | projected_depth = F.grid_sample(ref_depth, src_pixel_coords, padding_mode=padding_mode, align_corners=False)
734 | else:
735 | projected_depth = F.grid_sample(ref_depth, src_pixel_coords, padding_mode=padding_mode)
736 |
737 | mgrid_np = np.expand_dims(np.mgrid[0:ww,0:hh].transpose(2,1,0).astype(np.float32),0).repeat(bs, axis=0)
738 | mgrid = torch.from_numpy(mgrid_np).cuda() # [B,H,W,2]
739 | flow = (flow_grid - mgrid).permute(0,3,1,2)
740 |
741 | return projected_img, valid_mask, projected_depth, computed_depth, flow
742 |
743 |
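# Illustrative sketch (hypothetical inputs): the dense motion field generalizes
# inverse_warp2 to a per-pixel 6DoF transform; a zero field degenerates to an
# identity warp with (near-)zero rigid flow.
def _demo_inverse_warp_mof():
    import torch
    if not torch.cuda.is_available():
        return
    img = torch.rand(1, 3, 32, 64).cuda()
    tgt_depth = torch.rand(1, 1, 32, 64).cuda() + 1.0
    ref_depth = torch.rand(1, 1, 32, 64).cuda() + 1.0
    mof = torch.zeros(1, 6, 32, 64).cuda()             # per-pixel 6DoF field
    K = torch.tensor([[[50., 0., 32.], [0., 50., 16.], [0., 0., 1.]]]).cuda()
    img_w, valid, proj_d, comp_d, flow = inverse_warp_mof(img, tgt_depth, ref_depth, mof, K)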
744 |
745 | def flow_warp(img, flo):
746 | '''
747 | Simple flow-guided warping operation with grid sampling interpolation.
748 | Args:
749 | img: b x c x h x w
750 | flo: b x 2 x h x w
751 | Returns:
752 | img_w: b x c x h x w
753 | valid: b x 1 x h x w
754 | '''
755 | bs, ch, gh, gw = img.size()
756 | mgrid_np = np.expand_dims(np.mgrid[0:gw,0:gh].transpose(0,2,1).astype(np.float32),0).repeat(bs, axis=0)
757 | mgrid = torch.from_numpy(mgrid_np).type_as(flo)
758 | grid = mgrid.add(flo).permute(0,2,3,1) # b x gh x gw x 2
759 |
760 | grid[:,:,:,0] = grid[:,:,:,0].sub(gw/2).div(gw/2)
761 | grid[:,:,:,1] = grid[:,:,:,1].sub(gh/2).div(gh/2)
762 |
763 | if tuple(int(v) for v in torch.__version__.split('.')[:2]) >= (1, 3):
764 | img_w = F.grid_sample(img, grid, align_corners=True)
765 | else:
766 | img_w = F.grid_sample(img, grid)
767 |
768 | valid = (grid.abs().max(dim=-1)[0] <= 1).unsqueeze(1).float() # b x 1 x h x w
769 | img_w[(valid==0).repeat(1,ch,1,1)] = 0 # b x c x h x w
770 |
771 | return img_w, valid
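
# Illustrative sketch (hypothetical inputs, CPU-safe): zero flow resamples the
# image onto itself, up to the grid normalization used above.
def _demo_flow_warp():
    import torch
    img = torch.rand(1, 3, 32, 64)
    flo = torch.zeros(1, 2, 32, 64)        # zero flow field
    img_w, valid = flow_warp(img, flo)     # [1,3,32,64], [1,1,32,64]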
--------------------------------------------------------------------------------
/train.py:
--------------------------------------------------------------------------------
1 |
2 | # Strongly based on the code from https://github.com/SeokjuLee/Insta-DM/blob/master/train.py
3 | # (+) Dynamic masking
4 | # (+) Removing small objects
5 | # (+) Avoiding pose estimations in both forward and backward directions
6 |
7 | import warnings
8 | warnings.simplefilter("ignore", UserWarning)
9 |
10 | import argparse
11 | import time
12 | import csv
13 | from path import Path
14 | import datetime
15 | import os
16 | import numpy as np
17 |
18 | import torch
19 | import torch.backends.cudnn as cudnn
20 | import torch.optim
21 | import torch.utils.data
22 | from tensorboardX import SummaryWriter
23 |
24 | import models
25 | from datasets.sequence_folders import SequenceFolder
26 | import custom_transforms
27 | import custom_transforms_val
28 | from loss_functions import compute_photo_and_geometry_loss, compute_smooth_loss, compute_obj_size_constraint_loss, compute_mof_consistency_loss, compute_errors
29 | from rigid_warp import forward_warp
30 | from logger import TermLogger, AverageMeter
31 | from utils import save_checkpoint, viz_flow
32 | from collections import OrderedDict
33 |
34 | from matplotlib import pyplot as plt
35 | from matplotlib.gridspec import GridSpec
36 | import pdb
37 |
38 | parser = argparse.ArgumentParser(description='Dyna-DM: Dynamic Object-aware Self-supervised Monocular Depth Maps',
39 | formatter_class=argparse.ArgumentDefaultsHelpFormatter)
40 | parser.add_argument('data', metavar='DIR', help='path to dataset')
41 | parser.add_argument('-wg', '--with-gt', action='store_true', help='use ground truth for validation.')
42 | parser.add_argument('--sequence-length', type=int, metavar='N', help='sequence length for training', default=3)
43 | parser.add_argument('-mni', type=int, help='maximum number of instances', default=20)
44 | parser.add_argument('-dmni', type=int, help='maximum number of dynamic instances', default=3)
45 | parser.add_argument('-maxtheta', type=float, help='maximum instance overlap', default=1)
46 | parser.add_argument('-objsmall', type=float, help='remove small objects', default=0)
47 | parser.add_argument('--rotation-mode', type=str, choices=['euler', 'quat'], default='euler',
48 | help='rotation mode for PoseExpnet : euler (yaw, pitch, roll) or quaternion (last 3 coefficients)')
49 | parser.add_argument('--padding-mode', type=str, choices=['zeros', 'border'], default='zeros',
50 | help='padding mode for image warping : this is important for photometric differentiation when going outside target image.'
51 | ' zeros will null gradients outside target image.'
52 | ' border will only null gradients of the coordinate outside (x or y)')
53 |
54 | parser.add_argument('-j', '--workers', default=12, type=int, metavar='N', help='number of data loading workers')
55 | parser.add_argument('-b', '--batch-size', default=1, type=int, metavar='N', help='mini-batch size')
56 | parser.add_argument('--epochs', default=200, type=int, metavar='N', help='number of total epochs to run')
57 | parser.add_argument('--epoch-size', default=250, type=int, metavar='N', help='manual epoch size (set to 0 to match dataset size)')
58 | parser.add_argument('--disp-lr', '--disp-learning-rate', default=1e-4, type=float, metavar='LR', help='initial learning rate for DispResNet')
59 | parser.add_argument('--ego-lr', '--ego-learning-rate', default=1e-4, type=float, metavar='LR', help='initial learning rate for EgoPoseNet')
60 | parser.add_argument('--obj-lr', '--obj-learning-rate', default=1e-4, type=float, metavar='LR', help='initial learning rate for ObjPoseNet')
61 | parser.add_argument('--momentum', default=0.9, type=float, metavar='M', help='momentum for sgd, alpha parameter for adam')
62 | parser.add_argument('--beta', default=0.999, type=float, metavar='M', help='beta parameters for adam')
63 | parser.add_argument('--weight-decay', '--wd', default=0, type=float, metavar='W', help='weight decay')
64 | parser.add_argument('--print-freq', default=10, type=int, metavar='N', help='print frequency')
65 | parser.add_argument('--save-freq', default=3, type=int, metavar='N', help='save frequency')
66 | parser.add_argument('--resnet-layers', type=int, default=18, choices=[18, 50], help='number of ResNet layers for depth estimation.')
67 | parser.add_argument('--with-pretrain', type=int, default=1, help='with or without imagenet pretrain for resnet')
68 | parser.add_argument('--resnet-pretrained', action='store_true', help='pretrained from resnet model or not')
69 | parser.add_argument('--pretrained-disp', dest='pretrained_disp', default=None, metavar='PATH', help='path to pre-trained DispResNet')
70 | parser.add_argument('--pretrained-ego-pose', dest='pretrained_ego_pose', default=None, metavar='PATH', help='path to pre-trained EgoPoseNet')
71 | parser.add_argument('--pretrained-obj-pose', dest='pretrained_obj_pose', default=None, metavar='PATH', help='path to pre-trained ObjPoseNet')
72 | parser.add_argument('--seed', default=42, type=int, help='seed for random functions, and network initialization')
73 | parser.add_argument('--log-summary', default='progress_log_summary.csv', metavar='PATH', help='csv where to save per-epoch train and valid stats')
74 | parser.add_argument('--log-full', default='progress_log_full.csv', metavar='PATH', help='csv where to save per-gradient descent train stats')
75 |
76 | parser.add_argument('-p', '--photo-loss-weight', type=float, help='weight for photometric loss', metavar='W', default=2.0)
77 | parser.add_argument('-c', '--geometry-consistency-weight', type=float, help='weight for depth consistency loss', metavar='W', default=1.0)
78 | parser.add_argument('-s', '--smooth-loss-weight', type=float, help='weight for disparity smoothness loss', metavar='W', default=0.1)
79 | parser.add_argument('-o', '--scale-loss-weight', type=float, help='weight for object scale loss', metavar='W', default=0.02)
80 | parser.add_argument('-mc', '--mof-consistency-loss-weight', type=float, help='weight for mof consistency loss', metavar='W', default=0.1)
81 | parser.add_argument('-hp', '--height-loss-weight', type=float, help='weight for height prior loss', metavar='W', default=0.0)
82 | parser.add_argument('-dm', '--depth-loss-weight', type=float, help='weight for depth mean loss', metavar='W', default=0.0)
83 |
84 | parser.add_argument('--with-auto-mask', action='store_true', help='with the mask for stationary points')
85 | parser.add_argument('--with-ssim', action='store_true', help='with ssim or not')
86 | parser.add_argument('--with-mask', action='store_true', help='with the mask for moving objects and occlusions or not')
87 | parser.add_argument('--with-only-obj', action='store_true', help='with only obj mask')
88 |
89 | parser.add_argument('-nm', '--name', dest='name', type=str, help='name of the experiment')
90 | parser.add_argument('--debug-mode', action='store_true', help='run code in debugging mode or not')
91 | parser.add_argument('--no-shuffle', action='store_true', help='feed data without shuffling')
92 | parser.add_argument('--no-input-aug', action='store_true', help='feed data without augmentation')
93 | parser.add_argument('--begin-idx', type=int, default=None, help='beginning index for pre-processed data')
94 |
95 |
96 |
97 | best_error = -1
98 | n_iter = 0
99 | device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
100 | device_val = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
101 | print(torch.cuda.is_available())
102 |
103 | def main():
104 | print('=> PyTorch version: ' + torch.__version__ + ' || CUDA_VISIBLE_DEVICES: ' + os.environ["CUDA_VISIBLE_DEVICES"])
105 | print(torch.cuda.is_available())
106 |
107 |
108 | global best_error, n_iter, device
109 | args = parser.parse_args()
110 |
111 | timestamp = datetime.datetime.now().strftime("%m_%d_%H_%M")
112 | if args.debug_mode:
113 | args.save_path = 'checkpoints'/Path('debug')/timestamp
114 | else:
115 | args.save_path = 'checkpoints'/Path(args.name)/timestamp
116 | print('=> will save everything to {}'.format(args.save_path))
117 | args.save_path.makedirs_p()
118 | torch.manual_seed(args.seed)
119 | np.random.seed(args.seed)
120 |
121 |
122 | tf_writer = SummaryWriter(args.save_path)
123 |
124 | # Data loading
125 | normalize = custom_transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
126 | normalize_val = custom_transforms_val.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
127 |
128 | train_transform = custom_transforms.Compose([
129 | custom_transforms.RandomHorizontalFlip(),
130 | custom_transforms.RandomScaleCrop(),
131 | custom_transforms.ArrayToTensor(),
132 | normalize
133 | ])
134 | if args.with_gt:
135 | valid_transform = custom_transforms_val.Compose([
136 | custom_transforms_val.ArrayToTensor(),
137 | normalize_val
138 | ])
139 | else:
140 | valid_transform = custom_transforms.Compose([
141 | custom_transforms.ArrayToTensor(),
142 | normalize
143 | ])
144 |
145 | print("=> fetching scenes from '{}'".format(args.data))
146 | train_set = SequenceFolder(
147 | root=args.data,
148 | train=True,
149 | seed=args.seed,
150 | shuffle=not(args.no_shuffle),
151 | max_num_instances=args.mni,
152 | sequence_length=args.sequence_length,
153 | transform=train_transform
154 | )
155 |
156 | if args.with_gt:
157 | from datasets.validation_folders import ValidationSet
158 | val_set = ValidationSet(
159 | root=args.data,
160 | transform=valid_transform
161 | )
162 | else:
163 | val_set = SequenceFolder(
164 | root=args.data,
165 | train=False,
166 | seed=args.seed,
167 | shuffle=not(args.no_shuffle),
168 | max_num_instances=args.mni,
169 | sequence_length=args.sequence_length,
170 | transform=valid_transform,
171 | proportion=0.1
172 | )
173 | print('=> {} samples found in training set || {} samples found in validation set'.format(len(train_set), len(val_set)))
174 |
175 | train_loader = torch.utils.data.DataLoader(
176 | train_set, batch_size=args.batch_size, shuffle=not(args.debug_mode),
177 | num_workers=args.workers, pin_memory=True)
178 | val_loader = torch.utils.data.DataLoader(
179 | val_set, batch_size=args.batch_size, shuffle=False,
180 | num_workers=args.workers, pin_memory=True)
181 |
182 | if args.epoch_size == 0:
183 | args.epoch_size = len(train_loader)
184 |
185 | # create model
186 | print("=> creating model")
187 |
188 | disp_net = models.DispResNet(args.resnet_layers, args.with_pretrain).to(device)
189 | ego_pose_net = models.EgoPoseNet(18, args.with_pretrain).to(device)
190 | ego_pose_net_initial = models.EgoPoseNet(18, args.with_pretrain).to(device)
191 | obj_pose_net = models.ObjPoseNet(18, args.with_pretrain).to(device)
192 |
193 | if args.pretrained_ego_pose:
194 | print("=> using pre-trained weights for EgoPoseNet")
195 | weights = torch.load(args.pretrained_ego_pose, map_location='cuda:0')
196 | ego_pose_net.load_state_dict(weights['state_dict'], strict=False)
197 | else:
198 | ego_pose_net.init_weights()
199 |
200 | # creating initial ego pose network
201 | if args.pretrained_ego_pose:
202 | print("=> using pre-trained weights for Initial EgoPoseNet")
203 | weights = torch.load(args.pretrained_ego_pose, map_location='cuda:0')
204 | ego_pose_net_initial.load_state_dict(weights['state_dict'], strict=False)
205 | else:
206 | ego_pose_net_initial.init_weights()
207 |
208 | if args.pretrained_obj_pose:
209 | print("=> using pre-trained weights for ObjPoseNet")
210 | weights = torch.load(args.pretrained_obj_pose, map_location='cuda:0')
211 | obj_pose_net.load_state_dict(weights['state_dict'], strict=False)
212 | else:
213 | obj_pose_net.init_weights()
214 |
215 | if args.pretrained_disp:
216 | print("=> using pre-trained weights for DispNet")
217 | weights = torch.load(args.pretrained_disp, map_location='cuda:0')
218 | if args.resnet_pretrained:
219 | disp_net.load_state_dict(weights, strict=False)
220 | else:
221 | disp_net.load_state_dict(weights['state_dict'], strict=False)
222 | else:
223 | disp_net.init_weights()
224 |
225 | cudnn.benchmark = True
226 | disp_net = torch.nn.DataParallel(disp_net)
227 | ego_pose_net_initial = torch.nn.DataParallel(ego_pose_net_initial)
228 | ego_pose_net = torch.nn.DataParallel(ego_pose_net)
229 | obj_pose_net = torch.nn.DataParallel(obj_pose_net)
230 |
231 | print('=> setting adam solver')
232 |
233 | optim_params = []
234 | if args.disp_lr != 0:
235 | optim_params.append({'params': disp_net.module.encoder.parameters(), 'lr': args.disp_lr})
236 | optim_params.append({'params': disp_net.module.decoder.parameters(), 'lr': args.disp_lr})
237 | optim_params.append({'params': disp_net.module.obj_height_prior, 'lr': args.disp_lr * 0.1})
238 | if args.ego_lr != 0:
239 | optim_params.append({'params': ego_pose_net.parameters(), 'lr': args.ego_lr})
240 | if args.obj_lr != 0:
241 | optim_params.append({'params': obj_pose_net.parameters(), 'lr': args.obj_lr})
242 |
243 | optimizer = torch.optim.Adam(optim_params, betas=(args.momentum, args.beta), weight_decay=args.weight_decay)
244 | scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.975)
245 |
246 | with open(args.save_path/args.log_summary, 'w') as csvfile:
247 | csv_summary = csv.writer(csvfile, delimiter='\t')
248 | csv_summary.writerow(['train_loss', 'validation_loss'])
249 |
250 | with open(args.save_path/args.log_full, 'w') as csvfile:
251 | csv_full = csv.writer(csvfile, delimiter='\t')
252 | csv_full.writerow(['photo_loss', 'geometry_loss', 'smooth_loss', 'scale_loss', 'mof_consistency_loss', 'height_loss', 'depth_loss', 'train_loss'])
253 |
254 | logger = TermLogger(n_epochs=args.epochs, train_size=min(len(train_loader), args.epoch_size), valid_size=len(val_loader))
255 | logger.epoch_bar.start()
256 |
257 |
258 | ### validation at start ###
259 | if not args.debug_mode:
260 | if args.pretrained_disp:
261 | logger.reset_valid_bar()
262 | if args.with_gt:
263 | print("=> With GT")
264 | errors, error_names = validate_with_gt(args, val_loader, disp_net, 0, logger)
265 | else:
266 | print("=> Without GT")
267 | errors, error_names = validate_without_gt(args, val_loader, disp_net, ego_pose_net, ego_pose_net_initial, obj_pose_net, 0, logger)
268 | for error, name in zip(errors, error_names):
269 | tf_writer.add_scalar(name, error, 0)
270 | error_string = ', '.join('{} : {:.3f}'.format(name, error) for name, error in zip(error_names, errors))
271 | logger.valid_writer.write(' * Avg {}'.format(error_string))
272 |
273 |
274 | num_objects_init = 0
275 | num_objects_new = 0
276 | for epoch in range(args.epochs):
277 | logger.epoch_bar.update(epoch)
278 |
279 | ### train for one epoch ###
280 | logger.reset_train_bar()
281 | train_loss, initial_inter, new_iter = train(args, train_loader, disp_net, ego_pose_net, ego_pose_net_initial, obj_pose_net, optimizer, args.epoch_size, logger, tf_writer) #scheduler
282 | num_objects_init += initial_inter
283 | num_objects_new += new_iter
284 | scheduler.step()
285 | logger.train_writer.write(' * Avg Loss : {:.3f}'.format(train_loss))
286 |
287 | ### evaluate on validation set ###
288 | logger.reset_valid_bar()
289 | if args.with_gt:
290 | errors, error_names = validate_with_gt(args, val_loader, disp_net, epoch, logger)
291 | else:
292 | errors, error_names = validate_without_gt(args, val_loader, disp_net, ego_pose_net, ego_pose_net_initial, obj_pose_net, epoch, logger)
293 | error_string = ', '.join('{} : {:.3f}'.format(name, error) for name, error in zip(error_names, errors))
294 | logger.valid_writer.write(' * Avg {}'.format(error_string))
295 |
296 |
297 | for error, name in zip(errors, error_names):
298 | tf_writer.add_scalar(name, error, epoch)
299 |
300 |
301 | tf_writer.add_scalar('training loss', train_loss, epoch)
302 |
303 | decisive_error = errors[1] # "errors[1]" or "train_loss"
304 | if best_error < 0:
305 | best_error = decisive_error
306 |
307 | # remember lowest error and save checkpoint
308 | is_best = decisive_error < best_error
309 | best_error = min(best_error, decisive_error)
310 | save_checkpoint(
311 | epoch,
312 | args.save_freq,
313 | args.save_path, {
314 | 'epoch': epoch + 1,
315 | 'state_dict': disp_net.module.state_dict()
316 | }, {
317 | 'epoch': epoch + 1,
318 | 'state_dict': ego_pose_net.module.state_dict()
319 | }, {
320 | 'epoch': epoch + 1,
321 | 'state_dict': obj_pose_net.module.state_dict()
322 | }, {
323 | 'epoch': epoch + 1,
324 | 'state_dict': ego_pose_net_initial.module.state_dict()
325 | },
326 | is_best)
327 |
328 | with open(args.save_path/args.log_summary, 'a') as csvfile:
329 | csv_summary = csv.writer(csvfile, delimiter='\t')
330 | csv_summary.writerow([train_loss, decisive_error])
331 | print(num_objects_init, num_objects_new)
332 |
333 |
334 |
335 | def train(args, train_loader, disp_net, ego_pose_net, ego_pose_net_initial, obj_pose_net, optimizer, epoch_size, logger, tf_writer): #schedule
336 | global n_iter, device
337 | batch_time = AverageMeter()
338 | data_time = AverageMeter()
339 | losses = AverageMeter(precision=4)
340 | torch.set_printoptions(sci_mode=False)
341 | np.set_printoptions(suppress=True)
342 | Initial_number_of_objects = 0
343 | New_number_of_objects = 0
344 |
345 | w1, w2, w3 = args.photo_loss_weight, args.geometry_consistency_weight, args.smooth_loss_weight
346 | w4, w5, w6 = args.scale_loss_weight, args.mof_consistency_loss_weight, args.height_loss_weight
347 | w7 = args.depth_loss_weight
348 |
349 | # switch to train mode
350 | disp_net.train().to(device)
351 | ego_pose_net.train().to(device)
352 | obj_pose_net.train().to(device)
353 | ego_pose_net_initial.train().to(device)
354 |
355 | end = time.time()
356 | logger.train_bar.update(0)
357 |
358 | for i, (tgt_img, ref_imgs, intrinsics, intrinsics_inv, tgt_insts, ref_insts, noc) in enumerate(train_loader):
359 | if args.debug_mode and i > 5: break
360 | 
361 |
362 | log_losses = i > 0 and n_iter % args.print_freq == 0
363 |
364 | ### inputs to GPU ###
365 | data_time.update(time.time() - end)
366 | tgt_img = tgt_img.to(device)
367 | ref_imgs = [img.to(device) for img in ref_imgs]
368 | intrinsics = intrinsics.to(device)
369 | intrinsics_inv = intrinsics_inv.to(device)
370 | tgt_insts = [img.to(device) for img in tgt_insts]
371 | ref_insts = [img.to(device) for img in ref_insts]
372 |
373 | ### input instance masking ###
374 | tgt_bg_masks = [1 - (img[:,1:].sum(dim=1, keepdim=True)>0).float() for img in tgt_insts]
375 | ref_bg_masks = [1 - (img[:,1:].sum(dim=1, keepdim=True)>0).float() for img in ref_insts]
376 | tgt_bg_imgs = [tgt_img * tgt_mask * ref_mask for tgt_mask, ref_mask in zip(tgt_bg_masks, ref_bg_masks)]
377 | ref_bg_imgs = [ref_img * tgt_mask * ref_mask for ref_img, tgt_mask, ref_mask in zip(ref_imgs, tgt_bg_masks, ref_bg_masks)]
378 | num_insts = [tgt_inst[:,0,0,0].int().detach().cpu().numpy().tolist() for tgt_inst in tgt_insts] # Number of instances for each sequence
379 |
380 |
381 | Initial_number_of_objects += num_insts[0][0]
382 | ### object height prior ###
383 | height_prior = disp_net.module.obj_height_prior
384 |
385 | tgt_depth, ref_depths = compute_depth(disp_net, tgt_img, ref_imgs)
386 |
387 |
388 | ######################################################################################################################################################################################################
389 | #Dynamic masking
390 |
391 |
392 | _, ego_poses_bwd = compute_ego_pose_with_inv(ego_pose_net_initial, tgt_bg_imgs, ref_bg_imgs) # compute initial ego motion forwards using background masks
393 |
394 |
395 | img_cat = torch.cat([ref_imgs[0], ref_insts[0][:,1:]], dim=1)
396 | w_img_cat, _, valid = forward_warp(img_cat, ref_depths[0].detach(), 2*ego_poses_bwd[0].detach(), intrinsics, upscale=3) # extra displacement is induced by warping with 2x the ego-motion
397 | r2t_inst_ego_bwd = torch.cat([ref_insts[0][:,:1], w_img_cat[:,3:].round()], dim=1)
398 |
399 | tgt_insts, ref_insts = dynamic_objects(r2t_inst_ego_bwd, ref_insts, valid, ref_imgs, tgt_insts, num_insts, dmni=args.dmni, theta=args.maxtheta, obj_size=args.objsmall, bs=args.batch_size)
400 |
401 | tgt_bg_masks = [1 - (img[:,1:].sum(dim=1, keepdim=True)>0).float() for img in tgt_insts]
402 | ref_bg_masks = [1 - (img[:,1:].sum(dim=1, keepdim=True)>0).float() for img in ref_insts]
403 | tgt_bg_imgs = [tgt_img * tgt_mask * ref_mask for tgt_mask, ref_mask in zip(tgt_bg_masks, ref_bg_masks)]
404 | ref_bg_imgs = [ref_img * tgt_mask * ref_mask for ref_img, tgt_mask, ref_mask in zip(ref_imgs, tgt_bg_masks, ref_bg_masks)]
405 | num_insts = [tgt_inst[:,0,0,0].int().detach().cpu().numpy().tolist() for tgt_inst in tgt_insts]
406 |
407 | New_number_of_objects += num_insts[0][0]
408 | ######################################################################################################################################################################################################
409 |
410 |
411 | tgt_obj_masks = [1 - mask for mask in tgt_bg_masks]
412 | ref_obj_masks = [1 - mask for mask in ref_bg_masks]
413 |
414 |
415 | ### compute depth & ego-motion ###
416 | ego_poses_fwd, ego_poses_bwd = compute_ego_pose_with_inv(ego_pose_net, tgt_bg_imgs, ref_bg_imgs) # [ 2 x ([B, 6]) ]
417 |
418 | ### Remove ego-motion effect: transformation with ego-motion ###
419 | r2t_imgs_ego, r2t_insts_ego, r2t_depths_ego, r2t_vals_ego = compute_ego_warp(ref_imgs, ref_insts, ref_depths, ego_poses_bwd, intrinsics)
420 | t2r_imgs_ego, t2r_insts_ego, t2r_depths_ego, t2r_vals_ego = compute_ego_warp([tgt_img, tgt_img], tgt_insts, [tgt_depth, tgt_depth], ego_poses_fwd, intrinsics)
421 |
422 |
423 | ### Compute object motion ###
424 | obj_poses_fwd, obj_poses_bwd = compute_obj_pose_with_inv(obj_pose_net, tgt_img, tgt_insts, r2t_imgs_ego, r2t_insts_ego, ref_imgs, ref_insts, t2r_imgs_ego, t2r_insts_ego, intrinsics, args.dmni, num_insts)
425 |
426 | ### Compute composite motion field ###
427 | tot_mofs_fwd, tot_mofs_bwd = compute_motion_field(tgt_img, ego_poses_fwd, ego_poses_bwd, obj_poses_fwd, obj_poses_bwd, tgt_insts, ref_insts)
428 |
429 | ### Compute unified projection loss ###
430 | loss_1, loss_2, r2t_imgs, t2r_imgs, r2t_flows, t2r_flows, r2t_diffs, t2r_diffs, r2t_vals, t2r_vals = compute_photo_and_geometry_loss(tgt_img, ref_imgs, intrinsics, tgt_depth, ref_depths, tot_mofs_fwd, tot_mofs_bwd, \
431 | args.with_ssim, args.with_mask, args.with_auto_mask, args.padding_mode, args.with_only_obj, \
432 | tgt_obj_masks, ref_obj_masks, r2t_vals_ego, t2r_vals_ego)
433 |
434 | ### Compute depth smoothness loss ###
435 | if w3 == 0:
436 | loss_3 = torch.tensor(.0).cuda()
437 | else:
438 | loss_3 = compute_smooth_loss(tgt_depth, tgt_img, ref_depths, ref_imgs)
439 |
440 |
441 | ### Compute object size constraint loss ###
442 | if w4 == 0:
443 | loss_4 = torch.tensor(.0).cuda()
444 | else:
445 | loss_4 = compute_obj_size_constraint_loss(height_prior, tgt_depth, tgt_insts, ref_depths, ref_insts, intrinsics, args.dmni, num_insts)
446 |
447 |
448 | ### Compute unified motion consistency loss ###
449 | if w5 == 0:
450 | loss_5 = torch.tensor(.0).cuda()
451 | else:
452 | loss_5 = compute_mof_consistency_loss(tot_mofs_fwd, tot_mofs_bwd, r2t_flows, t2r_flows, r2t_diffs, t2r_diffs, r2t_vals, t2r_vals, alpha=5, thresh=0.1)
453 |
454 |
455 | ### Compute height prior constraint loss ###
456 | loss_6 = height_prior
457 |
458 |
459 | ### Compute depth mean constraint loss ###
460 | loss_7 = ((1/tgt_depth).mean() + sum([(1/depth).mean() for depth in ref_depths])) / (1 + len(ref_depths))
461 |
462 |
463 | loss = w1*loss_1 + w2*loss_2 + w3*loss_3 + w4*loss_4 + w5*loss_5 + w6*loss_6 + w7*loss_7
464 | # pdb.set_trace()
465 |
466 | if log_losses:
467 | tf_writer.add_scalar('photo_loss', loss_1.item(), n_iter)
468 | tf_writer.add_scalar('geometry_loss', loss_2.item(), n_iter)
469 | tf_writer.add_scalar('smooth_loss', loss_3.item(), n_iter)
470 | tf_writer.add_scalar('scale_loss', loss_4.item(), n_iter)
471 | tf_writer.add_scalar('mof_consistency_loss', loss_5.item(), n_iter)
472 | tf_writer.add_scalar('height_loss', loss_6.item(), n_iter)
473 | tf_writer.add_scalar('depth_loss', loss_7.item(), n_iter)
474 | tf_writer.add_scalar('total_loss', loss.item(), n_iter)
475 | ### record loss ###
476 | losses.update(loss.item(), args.batch_size)
477 |
478 | ### compute gradient and do Adam step ###
479 | if loss > 0:
480 | optimizer.zero_grad()
481 | loss.backward()
482 | optimizer.step()
483 |
484 | ### measure elapsed time ###
485 | batch_time.update(time.time() - end)
486 | end = time.time()
487 |
488 | with open(args.save_path/args.log_full, 'a') as csvfile:
489 | csv_full = csv.writer(csvfile, delimiter='\t')
490 | csv_full.writerow([loss_1.item(), loss_2.item(), loss_3.item(), loss_4.item(), loss_5.item(), loss_6.item(), loss_7.item(), loss.item()])
491 | logger.train_bar.update(i+1)
492 | if i % args.print_freq == 0:
493 | logger.train_writer.write('Train: Time {} Data {} Loss {}'.format(batch_time, data_time, losses))
494 | if i >= epoch_size - 1:
495 | break
496 |
497 | n_iter += 1
498 |
499 | return losses.avg[0], Initial_number_of_objects, New_number_of_objects
500 |
501 |
502 |
503 | @torch.no_grad()
504 | def validate_without_gt(args, val_loader, disp_net, ego_pose_net, ego_pose_net_initial, obj_pose_net, epoch, logger):
505 | global device
506 | batch_time = AverageMeter()
507 | losses = AverageMeter(i=4, precision=4)
508 |
509 | w1, w2, w3 = args.photo_loss_weight, args.geometry_consistency_weight, args.smooth_loss_weight
510 |
511 | # switch to evaluation mode
512 | disp_net.eval()
513 | ego_pose_net_initial.eval()
514 | ego_pose_net.eval()
515 | obj_pose_net.eval()
516 |
517 | end = time.time()
518 | logger.valid_bar.update(0)
519 |
520 | for i, (tgt_img, ref_imgs, intrinsics, intrinsics_inv, tgt_insts, ref_insts, k) in enumerate(val_loader): # k is an extra dataset field, unused here
521 | if args.debug_mode and i > 5: break
522 |
523 | ### inputs to GPU ###
524 | tgt_img = tgt_img.to(device)
525 | ref_imgs = [img.to(device) for img in ref_imgs]
526 | intrinsics = intrinsics.to(device)
527 | intrinsics_inv = intrinsics_inv.to(device)
528 | tgt_insts = [img.to(device) for img in tgt_insts]
529 | ref_insts = [img.to(device) for img in ref_insts]
530 |
531 | ### input instance masking ###
532 | tgt_bg_masks = [1 - (img[:,1:].sum(dim=1, keepdim=True)>0).float() for img in tgt_insts]
533 | ref_bg_masks = [1 - (img[:,1:].sum(dim=1, keepdim=True)>0).float() for img in ref_insts]
534 | tgt_bg_imgs = [tgt_img * tgt_mask * ref_mask for tgt_mask, ref_mask in zip(tgt_bg_masks, ref_bg_masks)]
535 | ref_bg_imgs = [ref_img * tgt_mask * ref_mask for ref_img, tgt_mask, ref_mask in zip(ref_imgs, tgt_bg_masks, ref_bg_masks)]
536 | num_insts = [tgt_inst[:,0,0,0].int().detach().cpu().numpy().tolist() for tgt_inst in tgt_insts] # Number of instances for each sequence
537 |
538 |
539 | tgt_depth, ref_depths = compute_depth(disp_net, tgt_img, ref_imgs)
540 |
541 |
542 | ######################################################################################################################################################################################################
543 | # Dynamic masking
544 |
545 |
546 | _, ego_poses_bwd = compute_ego_pose_with_inv(ego_pose_net_initial, tgt_bg_imgs, ref_bg_imgs) # compute ego motion forwards using background masks
547 |
548 |
549 | img_cat = torch.cat([ref_imgs[0], ref_insts[0][:,1:]], dim=1)
550 | w_img_cat, _, valid = forward_warp(img_cat, ref_depths[0].detach(), 2*ego_poses_bwd[0].detach(), intrinsics, upscale=3)
551 | r2t_inst_ego_bwd = torch.cat([ref_insts[0][:,:1], w_img_cat[:,3:].round()], dim=1)
552 |
553 | tgt_insts, ref_insts = dynamic_objects(r2t_inst_ego_bwd, ref_insts, valid, ref_imgs, tgt_insts, num_insts, dmni=args.dmni, theta=args.maxtheta, obj_size=args.objsmall, bs=args.batch_size)
554 |
555 | tgt_bg_masks = [1 - (img[:,1:].sum(dim=1, keepdim=True)>0).float() for img in tgt_insts]
556 | ref_bg_masks = [1 - (img[:,1:].sum(dim=1, keepdim=True)>0).float() for img in ref_insts]
557 | tgt_bg_imgs = [tgt_img * tgt_mask * ref_mask for tgt_mask, ref_mask in zip(tgt_bg_masks, ref_bg_masks)]
558 | ref_bg_imgs = [ref_img * tgt_mask * ref_mask for ref_img, tgt_mask, ref_mask in zip(ref_imgs, tgt_bg_masks, ref_bg_masks)]
559 | num_insts = [tgt_inst[:,0,0,0].int().detach().cpu().numpy().tolist() for tgt_inst in tgt_insts] # Number of instances for each sequence
560 |
561 | ######################################################################################################################################################################################################
562 |
563 | tgt_obj_masks = [1 - mask for mask in tgt_bg_masks]
564 | ref_obj_masks = [1 - mask for mask in ref_bg_masks]
565 |
566 | ego_poses_fwd, ego_poses_bwd = compute_ego_pose_with_inv(ego_pose_net, tgt_bg_imgs, ref_bg_imgs) # [ 2 x ([B, 6]) ]
567 |
568 | ### Remove ego-motion effect: transformation with ego-motion ###
569 | r2t_imgs_ego, r2t_insts_ego, r2t_depths_ego, r2t_vals_ego = compute_ego_warp(ref_imgs, ref_insts, ref_depths, ego_poses_bwd, intrinsics)
570 | t2r_imgs_ego, t2r_insts_ego, t2r_depths_ego, t2r_vals_ego = compute_ego_warp([tgt_img, tgt_img], tgt_insts, [tgt_depth, tgt_depth], ego_poses_fwd, intrinsics)
571 |
572 |
573 | ### Compute object motion ###
574 | obj_poses_fwd, obj_poses_bwd = compute_obj_pose_with_inv(obj_pose_net, tgt_img, tgt_insts, r2t_imgs_ego, r2t_insts_ego, ref_imgs, ref_insts, t2r_imgs_ego, t2r_insts_ego, intrinsics, args.dmni, num_insts)
575 |
576 | ### Compute composite motion field ###
577 | tot_mofs_fwd, tot_mofs_bwd = compute_motion_field(tgt_img, ego_poses_fwd, ego_poses_bwd, obj_poses_fwd, obj_poses_bwd, tgt_insts, ref_insts)
578 |
579 | ### Compute unified projection loss ###
580 | loss_1, loss_2, _, _, _, _, _, _, _, _ = compute_photo_and_geometry_loss(tgt_img, ref_imgs, intrinsics, tgt_depth, ref_depths, tot_mofs_fwd, tot_mofs_bwd, \
581 | args.with_ssim, args.with_mask, args.with_auto_mask, args.padding_mode, args.with_only_obj, \
582 | tgt_obj_masks, ref_obj_masks, r2t_vals_ego, t2r_vals_ego)
583 | # pdb.set_trace()
584 |
585 | ### Compute depth smoothness loss ###
586 | loss_3 = compute_smooth_loss(tgt_depth, tgt_img, ref_depths, ref_imgs)
587 |
588 | loss_1 = loss_1.item()
589 | loss_2 = loss_2.item()
590 | loss_3 = loss_3.item()
591 |
592 | loss = w1*loss_1 + w2*loss_2 + w3*loss_3
593 |
594 | losses.update([loss, loss_1, loss_2, loss_3])
595 |
596 | # measure elapsed time
597 | batch_time.update(time.time() - end)
598 | end = time.time()
599 | logger.valid_bar.update(i+1)
600 | if i % args.print_freq == 0:
601 | logger.valid_writer.write('valid: Time {} Loss {}'.format(batch_time, losses))
602 |
603 | logger.valid_bar.update(len(val_loader))
604 | return losses.avg, ['Total loss', 'Photo loss', 'Geometry loss', 'Smooth loss']
605 |
606 |
607 |
608 | @torch.no_grad()
609 | def validate_with_gt(args, val_loader, disp_net, epoch, logger):
610 | global device_val
611 | batch_time = AverageMeter()
612 | error_names = ['abs_diff', 'abs_rel', 'sq_rel', 'a1', 'a2', 'a3']
613 | errors = AverageMeter(i=len(error_names))
614 | errors_fg = AverageMeter(i=len(error_names))
615 | errors_bg = AverageMeter(i=len(error_names))
616 |
617 | # switch to evaluate mode
618 | disp_net = disp_net.module.to(device_val)
619 | disp_net.eval()
620 |
621 | end = time.time()
622 | logger.valid_bar.update(0)
623 |
624 | for i, (tgt_img, depth, tgt_inst_sum) in enumerate(val_loader):
625 | if args.debug_mode and i > 5: break
626 | 
627 |
628 | tgt_img = tgt_img.to(device_val) # B, 3, 256, 832
629 | depth = depth.to(device_val)
630 | tgt_inst_sum = tgt_inst_sum.to(device_val)
631 |
632 | vmask = (depth > 0).float()
633 | fg_pixs = vmask * tgt_inst_sum
634 | bg_pixs = vmask * (1 - tgt_inst_sum)
635 | fg_ratio = (fg_pixs.sum(dim=1).sum(dim=1) / vmask.sum(dim=1).sum(dim=1)).mean()
636 | depth_fg = depth * tgt_inst_sum
637 | depth_bg = depth * (1 - tgt_inst_sum)
638 |
639 | # compute output
640 | output_disp = disp_net(tgt_img)
641 | output_depth = 1/output_disp[:,0]
642 |
643 | error_all, med_scale = compute_errors(depth, output_depth)
644 | errors.update(error_all)
645 |
646 | errors_bg.update(compute_errors(depth_bg, output_depth, med_scale)[0])
647 | if fg_ratio:
648 | errors_fg.update(compute_errors(depth_fg, output_depth, med_scale)[0])
649 | # pdb.set_trace()
650 |
651 | # measure elapsed time
652 | batch_time.update(time.time() - end)
653 | end = time.time()
654 | logger.valid_bar.update(i+1)
655 | if i % args.print_freq == 0:
656 | logger.valid_writer.write('valid: Time {} Abs_Rel Error {:.4f} ({:.4f})'.format(batch_time, errors.val[1], errors.avg[1]))
657 | logger.valid_bar.update(len(val_loader))
658 |
659 | return errors.avg, error_names
660 |
661 | ################################################################################################################################################################################
662 |
663 | def inst_iou(seg_src, seg_tgt, valid_mask):
664 | '''
665 | Per-instance IoU matching; channel 0 holds the instance count, so matching starts from channel 1.
666 | seg_src: torch.Size([1, n_inst, 256, 832])
667 | seg_tgt: torch.Size([1, n_inst, 256, 832])
668 | valid_mask: torch.Size([1, 1, 256, 832]) -- currently unused
669 | '''
670 | n_inst_src = seg_src.shape[1]
671 | n_inst_tgt = seg_tgt.shape[1]
672 |
673 | seg_src_m = seg_src.to(device)
674 | seg_tgt_m = seg_tgt.to(device)
675 |
676 | for i in range(n_inst_src):
677 | if i == 0:
678 | match_table = torch.from_numpy(np.zeros([1,n_inst_tgt]).astype(np.float32))
679 | continue
680 |
681 | overl = (seg_src_m[:,i].unsqueeze(1).repeat(1,n_inst_tgt,1,1) * seg_tgt_m).clamp(min=0,max=1).squeeze(0).sum(1).sum(1)
682 | union = (seg_src_m[:,i].unsqueeze(1).repeat(1,n_inst_tgt,1,1) + seg_tgt_m).clamp(min=0,max=1).squeeze(0).sum(1).sum(1)
683 |
684 | iou_inst = overl / union
685 | match_table = torch.cat((match_table.to(device) , iou_inst.unsqueeze(0).to(device) ), dim=0)
686 |
687 | iou, inst_idx = torch.max(match_table,dim=1)
688 | # pdb.set_trace()
689 |
690 | return iou, inst_idx
691 |
692 | def dice(seg_src, seg_tgt, valid_mask):
693 | n_inst_src = seg_src.shape[1]
694 | n_inst_tgt = seg_tgt.shape[1]
695 |
696 | seg_src_m = seg_src.to(device)
697 | seg_tgt_m = seg_tgt.to(device)
698 |
699 | for i in range(n_inst_src):
700 | if i == 0:
701 | match_table = torch.from_numpy(np.zeros([1,n_inst_tgt]).astype(np.float32))
702 | continue
703 |
704 | overl = (seg_src_m[:,i].unsqueeze(1).repeat(1,n_inst_tgt,1,1) * seg_tgt_m).clamp(min=0,max=1).squeeze(0).sum(1).sum(1)
705 | union = (seg_src_m[:,i].unsqueeze(1).repeat(1,n_inst_tgt,1,1) + seg_tgt_m).clamp(min=0,max=1).squeeze(0).sum(1).sum(1)
706 |
707 | dice_inst = 2*overl / (union + overl)
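# union above is |A∪B| = |A| + |B| - |A∩B|, so this equals the standard
# Dice score 2|A∩B| / (|A| + |B|).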
708 | match_table = torch.cat((match_table.to(device) , dice_inst.unsqueeze(0).to(device) ), dim=0)
709 |
710 | dice, dice_idx = torch.max(match_table,dim=1)
711 | # pdb.set_trace()
712 |
713 | return dice, dice_idx
714 |
715 |
716 | def compute_depth(disp_net, tgt_img, ref_imgs):
717 | tgt_depth = 1/disp_net(tgt_img)
718 | ref_depths = []
719 | for ref_img in ref_imgs:
720 | ref_depth = 1/disp_net(ref_img)
721 | ref_depths.append(ref_depth)
722 |
723 | return tgt_depth, ref_depths
724 |
725 |
726 | def compute_ego_pose_with_inv(pose_net, tgt_imgs, ref_imgs):
727 | poses_fwd = []
728 | poses_bwd = []
729 | for tgt_img, ref_img in zip(tgt_imgs, ref_imgs):
730 | fwd = pose_net(tgt_img, ref_img)
731 | bwd = -fwd
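# Negating the 6DoF vector approximates the inverse motion to first order
# (exact for pure translation); this is what lets the backward pose
# inference be skipped, per the "avoiding pose estimations in both
# forward and backward directions" note in the file header.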
732 | poses_fwd.append(fwd)
733 | poses_bwd.append(bwd)
734 | return poses_fwd, poses_bwd
735 |
736 |
737 | def compute_ego_warp(imgs, insts, depths, poses, intrinsics, is_detach=True):
738 | w_imgs, w_insts, w_depths, w_vals = [], [], [], []
739 | for img, inst, depth, pose in zip(imgs, insts, depths, poses):
740 | img_cat = torch.cat([img, inst[:,1:]], dim=1)
741 | if is_detach:
742 | w_img_cat, w_depth, w_val = forward_warp(img_cat, depth.detach(), pose.detach(), intrinsics, upscale=3)
743 | else:
744 | w_img_cat, w_depth, w_val = forward_warp(img_cat, depth, pose, intrinsics, upscale=3)
745 | w_imgs.append( w_img_cat[:,:3] )
746 | w_insts.append( torch.cat([inst[:,:1], w_img_cat[:,3:].round()], dim=1) )
747 | w_depths.append( w_depth )
748 | w_vals.append( w_val )
749 |
750 | return w_imgs, w_insts, w_depths, w_vals
751 |
752 |
753 | def compute_obj_pose_with_inv(pose_net, tgtI, tgtMs, r2tIs, r2tMs, refIs, refMs, t2rIs, t2rMs, intrinsics, dmni, num_insts):
754 | bs, _, hh, ww = tgtI.size()
755 |
756 | obj_poses_fwd, obj_poses_bwd = [], []
757 |
758 | for tgtM, r2tI, r2tM, refI, refM, t2rI, t2rM, num_inst in zip(tgtMs, r2tIs, r2tMs, refIs, refMs, t2rIs, t2rMs, num_insts):
759 | obj_pose_fwd = torch.zeros([bs*dmni, 3]).type_as(tgtI)
760 | obj_pose_bwd = torch.zeros([bs*dmni, 3]).type_as(tgtI)
761 |
762 | if sum(num_inst) != 0:
763 | tgtI_rep = tgtI.repeat_interleave(dmni, dim=0)
764 | tgtM_rep = tgtM[:,1:].reshape(-1,1,hh,ww)
765 | fwdIdx = (tgtM_rep.mean(dim=[1,2,3])!=0).detach() # tgt, judge each channel whether instance exists
766 | tgtO = (tgtI_rep * tgtM_rep)[fwdIdx]
767 |
768 | r2tI_rep = r2tI.repeat_interleave(dmni, dim=0)
769 | r2tM_rep = r2tM[:,1:].reshape(-1,1,hh,ww)
770 | r2tO = (r2tI_rep * r2tM_rep)[fwdIdx]
771 |
772 | refI_rep = refI.repeat_interleave(dmni, dim=0)
773 | refM_rep = refM[:,1:].reshape(-1,1,hh,ww)
774 | bwdIdx = (refM_rep.mean(dim=[1,2,3])!=0).detach() # ref, judge each channel whether instance exists
775 | refO = (refI_rep * refM_rep)[bwdIdx]
776 |
777 | t2rI_rep = t2rI.repeat_interleave(dmni, dim=0)
778 | t2rM_rep = t2rM[:,1:].reshape(-1,1,hh,ww)
779 | t2rO = (t2rI_rep * t2rM_rep)[bwdIdx]
780 |
781 | pose_fwd = pose_net(tgtO, r2tO)
782 | pose_bwd = -pose_fwd
783 | obj_pose_fwd[fwdIdx] = pose_fwd
784 | obj_pose_bwd[bwdIdx] = pose_bwd
785 | # pdb.set_trace()
786 |
787 | obj_poses_fwd.append( obj_pose_fwd.reshape(bs, dmni, 3) )
788 | obj_poses_bwd.append( obj_pose_bwd.reshape(bs, dmni, 3) )
789 |
790 | return obj_poses_fwd, obj_poses_bwd
791 |
792 |
793 | def compute_motion_field(tgt_img, ego_poses_fwd, ego_poses_bwd, obj_poses_fwd, obj_poses_bwd, tgt_insts, ref_insts):
794 | bs, _, hh, ww = tgt_img.size()
795 | MFs_fwd, MFs_bwd = [], [] # [ ([B, 6, 256, 832]), ([B, 6, 256, 832]) ]
796 |
797 | for EP_fwd, EP_bwd, OP_fwd, OP_bwd, tgt_inst, ref_inst in zip(ego_poses_fwd, ego_poses_bwd, obj_poses_fwd, obj_poses_bwd, tgt_insts, ref_insts):
798 | if (tgt_inst[:,1:].sum(dim=1)>1).sum() + (ref_inst[:,1:].sum(dim=1)>1).sum():
799 | print("WARNING: overlapped instance region at {}".format(datetime.datetime.now().strftime("%m-%d-%H:%M")))
800 |
801 | MF_fwd = EP_fwd.reshape(bs, 6, 1, 1).repeat(1,1,hh,ww)
802 | MF_bwd = EP_bwd.reshape(bs, 6, 1, 1).repeat(1,1,hh,ww)
803 |
804 | obj_MF_fwd = tgt_inst[:,1:].unsqueeze(2) * OP_fwd.unsqueeze(-1).unsqueeze(-1) # [bs, dmni, 3, hh, ww]
805 | obj_MF_bwd = ref_inst[:,1:].unsqueeze(2) * OP_bwd.unsqueeze(-1).unsqueeze(-1) # [bs, dmni, 3, hh, ww]
806 |
807 | MF_fwd[:,:3] += obj_MF_fwd.sum(dim=1, keepdim=False)
808 | MF_bwd[:,:3] += obj_MF_bwd.sum(dim=1, keepdim=False)
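# Only the translation channels 0..2 receive per-object motion: rotation
# stays global from the ego pose, while each masked instance contributes
# its own 3DoF translation.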
809 |
810 | MFs_fwd.append(MF_fwd)
811 | MFs_bwd.append(MF_bwd)
812 |
813 | return MFs_fwd, MFs_bwd
814 |
815 |
816 | def save_image(data, cm, fn, vmin=None, vmax=None):
817 | sizes = np.shape(data)
818 | height = float(sizes[0])
819 | width = float(sizes[1])
820 |
821 | fig = plt.figure()
822 | fig.set_size_inches(width/height, 1, forward=False)
823 | ax = plt.Axes(fig, [0., 0., 1., 1.])
824 | ax.set_axis_off()
825 | fig.add_axes(ax)
826 |
827 | ax.imshow(data, cmap=cm, vmin=vmin, vmax=vmax)
828 | plt.savefig(fn, dpi = height)
829 | plt.close()
830 |
831 | def dynamic_objects(r2t_inst_ego_bwd, ref_insts, valid, ref_imgs, tgt_insts, num_insts, dmni, theta, obj_size, bs):
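# Dynamic-instance selection: an instance counts as dynamic when its dice
# overlap between the ego-warped masks and the reference masks falls below
# theta (but stays above 0) and it covers more than obj_size percent of the
# image; at most dmni instances are kept per sample, with channel 0 of each
# returned mask storing the match count. The bs > 1 branch repeats the same
# selection independently for every batch element.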
832 | r2t_inst_bwd = r2t_inst_ego_bwd.to(device)
833 | batch_dyn_tgt = []
834 | batch_dyn_ref = []
835 | if bs > 1:
836 | for j in range(bs):
837 | dyn_tgt_insts = torch.tensor([]).to(device)
838 | dyn_ref_insts = torch.tensor([]).to(device)
839 | r2t_inst_bwd_ = r2t_inst_bwd[j,:].unsqueeze(0)
840 | dice_01, _ = dice(r2t_inst_bwd_, ref_insts[1][j,:].unsqueeze(0), valid_mask=valid[j,:].unsqueeze(0))
841 | dice_01 = torch.nan_to_num(dice_01)
842 | max_dice, index = torch.sort(dice_01, descending=True)
843 |
844 | for n in range(len(ref_imgs)):
845 | seg0 = ref_insts[n][j,:].unsqueeze(0).to(device)
846 | seg1 = tgt_insts[n][j,:].unsqueeze(0).to(device)
847 | n_inst = num_insts[n][j]
848 |
849 | seg0_re = torch.zeros(dmni+1, seg0.shape[2], seg0.shape[3]).to(device)
850 | seg1_re = torch.zeros(dmni+1, seg1.shape[2], seg1.shape[3]).to(device)
851 | non_overlap_0 = torch.ones([seg0.shape[2], seg0.shape[3]]).to(device)
852 | non_overlap_1 = torch.ones([seg0.shape[2], seg0.shape[3]]).to(device)
853 |
854 | num_match = 0
855 | for ch in range(n_inst+1):
856 |                     condition2 = ((((seg0[0,index[ch]]).sum())/(seg0.shape[2] * seg0.shape[3])) *100) > obj_size # remove objects smaller than obj_size percent of the image
857 | condition1 = (max_dice[ch] < theta and max_dice[ch] > 0)
858 | if condition1 and condition2 and (num_match < dmni): # dynamic!
859 | num_match += 1
860 | seg0_re[num_match] = seg0[0,index[ch]] * non_overlap_0
861 | seg1_re[num_match] = seg1[0,index[ch]] * non_overlap_1
862 | non_overlap_0 = non_overlap_0 * (1 - seg0_re[num_match])
863 | non_overlap_1 = non_overlap_1 * (1 - seg1_re[num_match])
864 | seg0_re[0] = num_match
865 | seg1_re[0] = num_match
866 |
867 |                 if seg0_re[0].mean() != 0 and seg0_re[int(seg0_re[0].mean())].mean() == 0: pdb.set_trace() # sanity check: the stored match count must point at a populated mask channel
868 |                 if seg1_re[0].mean() != 0 and seg1_re[int(seg1_re[0].mean())].mean() == 0: pdb.set_trace()
869 |
870 | dyn_tgt_insts = torch.cat((dyn_tgt_insts, seg1_re.unsqueeze(0)), 0)
871 | dyn_ref_insts = torch.cat((dyn_ref_insts, seg0_re.unsqueeze(0)), 0)
872 | batch_dyn_tgt.append(dyn_tgt_insts)
873 | batch_dyn_ref.append(dyn_ref_insts)
874 | else:
875 | dice_01, _ = dice(r2t_inst_bwd, ref_insts[1], valid_mask=valid)
876 | dice_01 = torch.nan_to_num(dice_01)
877 | max_dice, index = torch.sort(dice_01, descending=True)
878 |
879 | for n in range(len(ref_imgs)): # length is 2
880 |             seg0 = ref_insts[n].to(device) # it is originally [img1, img2]
881 | seg1 = tgt_insts[n].to(device)
882 |
883 | n_inst = num_insts[n][0]
884 |
885 | seg0_re = torch.zeros(dmni+1, seg0.shape[2], seg0.shape[3]).to(device)
886 | seg1_re = torch.zeros(dmni+1, seg1.shape[2], seg1.shape[3]).to(device)
887 | non_overlap_0 = torch.ones([seg0.shape[2], seg0.shape[3]]).to(device)
888 | non_overlap_1 = torch.ones([seg0.shape[2], seg0.shape[3]]).to(device)
889 |
890 | num_match = 0
891 | for ch in range(n_inst+1):
892 | # distance = (ref_depths[0] * seg0[0,index[ch]]).mean()
893 |                 condition2 = ((((seg0[0,index[ch]]).sum())/(seg0.shape[2] * seg0.shape[3])) *100) > obj_size # keep only objects covering more than obj_size percent of the image
894 | condition1 = (max_dice[ch] < theta and max_dice[ch] > 0)
895 | if condition1 and condition2 and (num_match < dmni): # dynamic!
896 | num_match += 1
897 | seg0_re[num_match] = seg0[0,index[ch]] * non_overlap_0
898 | seg1_re[num_match] = seg1[0,index[ch]] * non_overlap_1
899 | non_overlap_0 = non_overlap_0 * (1 - seg0_re[num_match])
900 | non_overlap_1 = non_overlap_1 * (1 - seg1_re[num_match])
901 | seg0_re[0] = num_match
902 | seg1_re[0] = num_match
903 |
904 | if seg0_re[0].mean() != 0 and seg0_re[int(seg0_re[0].mean())].mean() == 0: pdb.set_trace()
905 | if seg1_re[0].mean() != 0 and seg1_re[int(seg1_re[0].mean())].mean() == 0: pdb.set_trace()
906 |
907 | batch_dyn_tgt.append(seg1_re.unsqueeze(0))
908 | batch_dyn_ref.append(seg0_re.unsqueeze(0))
909 | tgt_insts = [img.to(device) for img in batch_dyn_tgt]
910 | ref_insts = [img.to(device) for img in batch_dyn_ref]
911 | return tgt_insts, ref_insts
912 |
913 |
914 | if __name__ == '__main__':
915 | main()
916 |
--------------------------------------------------------------------------------
/kitti_eval/test_files_eigen.txt:
--------------------------------------------------------------------------------
1 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000069.png
2 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000054.png
3 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000042.png
4 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000057.png
5 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000030.png
6 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000027.png
7 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000012.png
8 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000075.png
9 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000036.png
10 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000033.png
11 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000015.png
12 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000072.png
13 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000003.png
14 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000039.png
15 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000009.png
16 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000051.png
17 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000060.png
18 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000021.png
19 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000000.png
20 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000024.png
21 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000045.png
22 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000018.png
23 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000048.png
24 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000006.png
25 | 2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000063.png
26 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000000.png
27 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000016.png
28 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000032.png
29 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000048.png
30 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000064.png
31 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000080.png
32 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000096.png
33 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000112.png
34 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000128.png
35 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000144.png
36 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000160.png
37 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000176.png
38 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000196.png
39 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000212.png
40 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000228.png
41 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000244.png
42 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000260.png
43 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000276.png
44 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000292.png
45 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000308.png
46 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000324.png
47 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000340.png
48 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000356.png
49 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000372.png
50 | 2011_09_26/2011_09_26_drive_0009_sync/image_02/data/0000000388.png
51 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000090.png
52 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000050.png
53 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000110.png
54 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000115.png
55 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000060.png
56 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000105.png
57 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000125.png
58 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000020.png
59 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000140.png
60 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000085.png
61 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000070.png
62 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000080.png
63 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000065.png
64 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000095.png
65 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000130.png
66 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000100.png
67 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000010.png
68 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000030.png
69 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000000.png
70 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000135.png
71 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000040.png
72 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000005.png
73 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000120.png
74 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000045.png
75 | 2011_09_26/2011_09_26_drive_0013_sync/image_02/data/0000000035.png
76 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000003.png
77 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000069.png
78 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000057.png
79 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000012.png
80 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000072.png
81 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000018.png
82 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000063.png
83 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000000.png
84 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000084.png
85 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000015.png
86 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000066.png
87 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000006.png
88 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000048.png
89 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000060.png
90 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000009.png
91 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000033.png
92 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000021.png
93 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000075.png
94 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000027.png
95 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000045.png
96 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000078.png
97 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000036.png
98 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000051.png
99 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000054.png
100 | 2011_09_26/2011_09_26_drive_0020_sync/image_02/data/0000000042.png
101 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000018.png
102 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000090.png
103 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000126.png
104 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000378.png
105 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000036.png
106 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000288.png
107 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000198.png
108 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000450.png
109 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000144.png
110 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000072.png
111 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000252.png
112 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000180.png
113 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000432.png
114 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000396.png
115 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000054.png
116 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000468.png
117 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000306.png
118 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000108.png
119 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000162.png
120 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000342.png
121 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000270.png
122 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000414.png
123 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000216.png
124 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000360.png
125 | 2011_09_26/2011_09_26_drive_0023_sync/image_02/data/0000000324.png
126 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000077.png
127 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000035.png
128 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000091.png
129 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000112.png
130 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000007.png
131 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000175.png
132 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000042.png
133 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000098.png
134 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000133.png
135 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000161.png
136 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000014.png
137 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000126.png
138 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000168.png
139 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000070.png
140 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000084.png
141 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000140.png
142 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000049.png
143 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000000.png
144 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000182.png
145 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000147.png
146 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000056.png
147 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000063.png
148 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000021.png
149 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000119.png
150 | 2011_09_26/2011_09_26_drive_0027_sync/image_02/data/0000000028.png
151 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000380.png
152 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000394.png
153 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000324.png
154 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000000.png
155 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000268.png
156 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000366.png
157 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000296.png
158 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000014.png
159 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000028.png
160 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000182.png
161 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000168.png
162 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000196.png
163 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000140.png
164 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000084.png
165 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000056.png
166 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000112.png
167 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000352.png
168 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000126.png
169 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000070.png
170 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000310.png
171 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000154.png
172 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000098.png
173 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000408.png
174 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000042.png
175 | 2011_09_26/2011_09_26_drive_0029_sync/image_02/data/0000000338.png
176 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000000.png
177 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000128.png
178 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000192.png
179 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000032.png
180 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000352.png
181 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000608.png
182 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000224.png
183 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000576.png
184 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000672.png
185 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000064.png
186 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000448.png
187 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000704.png
188 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000640.png
189 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000512.png
190 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000768.png
191 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000160.png
192 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000416.png
193 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000480.png
194 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000800.png
195 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000288.png
196 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000544.png
197 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000096.png
198 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000384.png
199 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000256.png
200 | 2011_09_26/2011_09_26_drive_0036_sync/image_02/data/0000000320.png
201 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000000.png
202 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000005.png
203 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000010.png
204 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000015.png
205 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000020.png
206 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000025.png
207 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000030.png
208 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000035.png
209 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000040.png
210 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000045.png
211 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000050.png
212 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000055.png
213 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000060.png
214 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000065.png
215 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000070.png
216 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000075.png
217 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000080.png
218 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000085.png
219 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000090.png
220 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000095.png
221 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000100.png
222 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000105.png
223 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000110.png
224 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000115.png
225 | 2011_09_26/2011_09_26_drive_0046_sync/image_02/data/0000000120.png
226 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000000.png
227 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000001.png
228 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000002.png
229 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000003.png
230 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000004.png
231 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000005.png
232 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000006.png
233 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000007.png
234 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000008.png
235 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000009.png
236 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000010.png
237 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000011.png
238 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000012.png
239 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000013.png
240 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000014.png
241 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000015.png
242 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000016.png
243 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000017.png
244 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000018.png
245 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000019.png
246 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000020.png
247 | 2011_09_26/2011_09_26_drive_0048_sync/image_02/data/0000000021.png
248 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000046.png
249 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000014.png
250 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000036.png
251 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000028.png
252 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000026.png
253 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000050.png
254 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000040.png
255 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000008.png
256 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000016.png
257 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000044.png
258 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000018.png
259 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000032.png
260 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000042.png
261 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000010.png
262 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000020.png
263 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000048.png
264 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000052.png
265 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000006.png
266 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000030.png
267 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000012.png
268 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000038.png
269 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000000.png
270 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000002.png
271 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000004.png
272 | 2011_09_26/2011_09_26_drive_0052_sync/image_02/data/0000000022.png
273 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000011.png
274 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000033.png
275 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000242.png
276 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000253.png
277 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000286.png
278 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000154.png
279 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000099.png
280 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000220.png
281 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000022.png
282 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000077.png
283 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000187.png
284 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000143.png
285 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000066.png
286 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000176.png
287 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000110.png
288 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000275.png
289 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000264.png
290 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000198.png
291 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000055.png
292 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000088.png
293 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000121.png
294 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000209.png
295 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000165.png
296 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000231.png
297 | 2011_09_26/2011_09_26_drive_0056_sync/image_02/data/0000000044.png
298 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000056.png
299 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000000.png
300 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000344.png
301 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000358.png
302 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000316.png
303 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000238.png
304 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000098.png
305 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000112.png
306 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000028.png
307 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000014.png
308 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000330.png
309 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000154.png
310 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000042.png
311 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000302.png
312 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000182.png
313 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000288.png
314 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000140.png
315 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000274.png
316 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000224.png
317 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000372.png
318 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000196.png
319 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000126.png
320 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000084.png
321 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000210.png
322 | 2011_09_26/2011_09_26_drive_0059_sync/image_02/data/0000000070.png
323 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000528.png
324 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000308.png
325 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000044.png
326 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000352.png
327 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000066.png
328 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000000.png
329 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000506.png
330 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000176.png
331 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000022.png
332 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000242.png
333 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000462.png
334 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000418.png
335 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000110.png
336 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000440.png
337 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000396.png
338 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000154.png
339 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000374.png
340 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000088.png
341 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000286.png
342 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000550.png
343 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000264.png
344 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000220.png
345 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000330.png
346 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000484.png
347 | 2011_09_26/2011_09_26_drive_0064_sync/image_02/data/0000000198.png
348 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000283.png
349 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000361.png
350 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000270.png
351 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000127.png
352 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000205.png
353 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000218.png
354 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000153.png
355 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000335.png
356 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000192.png
357 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000348.png
358 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000101.png
359 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000049.png
360 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000179.png
361 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000140.png
362 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000374.png
363 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000322.png
364 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000309.png
365 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000244.png
366 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000062.png
367 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000257.png
368 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000088.png
369 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000114.png
370 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000075.png
371 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000296.png
372 | 2011_09_26/2011_09_26_drive_0084_sync/image_02/data/0000000231.png
373 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000007.png
374 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000196.png
375 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000439.png
376 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000169.png
377 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000115.png
378 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000034.png
379 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000304.png
380 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000331.png
381 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000277.png
382 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000520.png
383 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000682.png
384 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000628.png
385 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000088.png
386 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000601.png
387 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000574.png
388 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000223.png
389 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000655.png
390 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000358.png
391 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000412.png
392 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000142.png
393 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000385.png
394 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000061.png
395 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000493.png
396 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000466.png
397 | 2011_09_26/2011_09_26_drive_0086_sync/image_02/data/0000000250.png
398 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000000.png
399 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000016.png
400 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000032.png
401 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000048.png
402 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000064.png
403 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000080.png
404 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000096.png
405 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000112.png
406 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000128.png
407 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000144.png
408 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000160.png
409 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000176.png
410 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000192.png
411 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000208.png
412 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000224.png
413 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000240.png
414 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000256.png
415 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000305.png
416 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000321.png
417 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000337.png
418 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000353.png
419 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000369.png
420 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000385.png
421 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000401.png
422 | 2011_09_26/2011_09_26_drive_0093_sync/image_02/data/0000000417.png
423 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000000.png
424 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000019.png
425 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000038.png
426 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000057.png
427 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000076.png
428 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000095.png
429 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000114.png
430 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000133.png
431 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000152.png
432 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000171.png
433 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000190.png
434 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000209.png
435 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000228.png
436 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000247.png
437 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000266.png
438 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000285.png
439 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000304.png
440 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000323.png
441 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000342.png
442 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000361.png
443 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000380.png
444 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000399.png
445 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000418.png
446 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000437.png
447 | 2011_09_26/2011_09_26_drive_0096_sync/image_02/data/0000000456.png
448 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000692.png
449 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000930.png
450 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000760.png
451 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000896.png
452 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000284.png
453 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000148.png
454 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000522.png
455 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000794.png
456 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000624.png
457 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000726.png
458 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000216.png
459 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000318.png
460 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000488.png
461 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000590.png
462 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000454.png
463 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000862.png
464 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000386.png
465 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000352.png
466 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000420.png
467 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000658.png
468 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000828.png
469 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000556.png
470 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000114.png
471 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000182.png
472 | 2011_09_26/2011_09_26_drive_0101_sync/image_02/data/0000000080.png
473 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000015.png
474 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000035.png
475 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000043.png
476 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000051.png
477 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000059.png
478 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000067.png
479 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000075.png
480 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000083.png
481 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000091.png
482 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000099.png
483 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000107.png
484 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000115.png
485 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000123.png
486 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000131.png
487 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000139.png
488 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000147.png
489 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000155.png
490 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000163.png
491 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000171.png
492 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000179.png
493 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000187.png
494 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000195.png
495 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000203.png
496 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000211.png
497 | 2011_09_26/2011_09_26_drive_0106_sync/image_02/data/0000000219.png
498 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000312.png
499 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000494.png
500 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000104.png
501 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000130.png
502 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000156.png
503 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000182.png
504 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000598.png
505 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000416.png
506 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000364.png
507 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000026.png
508 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000078.png
509 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000572.png
510 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000468.png
511 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000260.png
512 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000624.png
513 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000234.png
514 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000442.png
515 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000390.png
516 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000546.png
517 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000286.png
518 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000000.png
519 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000338.png
520 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000208.png
521 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000650.png
522 | 2011_09_26/2011_09_26_drive_0117_sync/image_02/data/0000000052.png
523 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000024.png
524 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000021.png
525 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000036.png
526 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000000.png
527 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000051.png
528 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000018.png
529 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000033.png
530 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000090.png
531 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000045.png
532 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000054.png
533 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000012.png
534 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000039.png
535 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000009.png
536 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000003.png
537 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000030.png
538 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000078.png
539 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000060.png
540 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000048.png
541 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000084.png
542 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000081.png
543 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000006.png
544 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000057.png
545 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000072.png
546 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000087.png
547 | 2011_09_28/2011_09_28_drive_0002_sync/image_02/data/0000000063.png
548 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000252.png
549 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000540.png
550 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000001054.png
551 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000036.png
552 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000360.png
553 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000807.png
554 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000879.png
555 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000288.png
556 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000771.png
557 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000000.png
558 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000216.png
559 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000951.png
560 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000324.png
561 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000432.png
562 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000504.png
563 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000576.png
564 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000108.png
565 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000180.png
566 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000072.png
567 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000612.png
568 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000915.png
569 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000735.png
570 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000144.png
571 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000396.png
572 | 2011_09_29/2011_09_29_drive_0071_sync/image_02/data/0000000468.png
573 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000132.png
574 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000011.png
575 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000154.png
576 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000022.png
577 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000242.png
578 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000198.png
579 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000176.png
580 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000231.png
581 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000275.png
582 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000220.png
583 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000088.png
584 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000143.png
585 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000055.png
586 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000033.png
587 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000187.png
588 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000110.png
589 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000044.png
590 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000077.png
591 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000066.png
592 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000000.png
593 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000165.png
594 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000264.png
595 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000253.png
596 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000209.png
597 | 2011_09_30/2011_09_30_drive_0016_sync/image_02/data/0000000121.png
598 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000000107.png
599 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000002247.png
600 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000001391.png
601 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000000535.png
602 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000001819.png
603 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000001177.png
604 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000000428.png
605 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000001926.png
606 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000000749.png
607 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000001284.png
608 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000002140.png
609 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000001605.png
610 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000001498.png
611 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000000642.png
612 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000002740.png
613 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000002419.png
614 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000000856.png
615 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000002526.png
616 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000001712.png
617 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000001070.png
618 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000000000.png
619 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000002033.png
620 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000000214.png
621 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000000963.png
622 | 2011_09_30/2011_09_30_drive_0018_sync/image_02/data/0000002633.png
623 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000533.png
624 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000001040.png
625 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000082.png
626 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000205.png
627 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000835.png
628 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000451.png
629 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000164.png
630 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000794.png
631 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000328.png
632 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000615.png
633 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000917.png
634 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000369.png
635 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000287.png
636 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000123.png
637 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000876.png
638 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000410.png
639 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000492.png
640 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000958.png
641 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000656.png
642 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000000.png
643 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000753.png
644 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000574.png
645 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000001081.png
646 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000041.png
647 | 2011_09_30/2011_09_30_drive_0027_sync/image_02/data/0000000246.png
648 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000002906.png
649 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000002544.png
650 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000000362.png
651 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000004535.png
652 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000000734.png
653 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000001096.png
654 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000004173.png
655 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000000543.png
656 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000001277.png
657 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000004354.png
658 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000001458.png
659 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000001820.png
660 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000003449.png
661 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000003268.png
662 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000000915.png
663 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000002363.png
664 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000002725.png
665 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000000181.png
666 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000001639.png
667 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000003992.png
668 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000003087.png
669 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000002001.png
670 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000003811.png
671 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000003630.png
672 | 2011_10_03/2011_10_03_drive_0027_sync/image_02/data/0000000000.png
673 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000096.png
674 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000800.png
675 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000320.png
676 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000576.png
677 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000000.png
678 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000480.png
679 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000640.png
680 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000032.png
681 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000384.png
682 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000160.png
683 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000704.png
684 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000736.png
685 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000672.png
686 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000064.png
687 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000288.png
688 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000352.png
689 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000512.png
690 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000544.png
691 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000608.png
692 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000128.png
693 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000224.png
694 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000416.png
695 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000192.png
696 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000448.png
697 | 2011_10_03/2011_10_03_drive_0047_sync/image_02/data/0000000768.png
698 |
--------------------------------------------------------------------------------