├── .gitignore
├── LICENSE
├── README.md
├── data
│   ├── 2D
│   │   ├── all_poses.tar
│   │   └── v1_pose0.tar
│   └── ActiveVisionDataset
│       └── Home_011_1
│           └── local_point_cloud
│               ├── traj1.txt
│               └── traj2.txt
├── dataset_loader
│   ├── AVD.py
│   ├── __init__.py
│   └── simulated_data.py
├── docs
│   ├── _config.yml
│   ├── index.md
│   └── resources
│       ├── ate_vs_epochs.jpg
│       ├── deepmapping-2Dmapping.jpg
│       ├── deepmapping-2Dmapping2.jpg
│       ├── deepmapping-3Dmapping.jpg
│       ├── deepmapping-3Dmapping2.jpg
│       ├── deepmapping-overview.jpg
│       ├── vis_2D_sample1.gif
│       ├── vis_2D_sample2.gif
│       ├── vis_2D_sample3.gif
│       ├── vis_AVD_sample1.gif
│       ├── vis_AVD_sample2.gif
│       └── vis_AVD_sample3.gif
├── loss
│   ├── __init__.py
│   ├── bce_loss.py
│   └── chamfer_dist.py
├── models
│   ├── __init__.py
│   ├── deepmapping.py
│   └── networks.py
├── requirements.txt
├── script
│   ├── __init__.py
│   ├── download_AVD.sh
│   ├── eval_2D.py
│   ├── eval_vis_AVD.py
│   ├── incremental_icp.py
│   ├── o3d_icp.py
│   ├── run_eval_2D.sh
│   ├── run_eval_vis_AVD.sh
│   ├── run_icp.sh
│   ├── run_train_2D.sh
│   ├── run_train_AVD.sh
│   ├── set_path.py
│   ├── train_2D.py
│   └── train_AVD.py
└── utils
    ├── __init__.py
    ├── geometry_utils.py
    ├── open3d_utils.py
    ├── utils.py
    └── vis_utils.py
/.gitignore:
--------------------------------------------------------------------------------
1 | .vscode/
2 | .ipynb_checkpoints/
3 | __pycache__/
4 |
5 | data/
6 | results/
7 |
8 | *.pyc
9 | *tar.gz
10 | *tar
11 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | The code is copyrighted by the authors. Permission to copy and use
2 | this software for noncommercial use is hereby granted provided: (1)
3 | this notice is retained in all copies, (2) the publication describing
4 | the method (indicated below) is clearly cited, and (3) the
5 | distribution from which the code was obtained is clearly cited. For
6 | all other uses, please contact the authors.
7 |
8 | The software code is provided "as is" with ABSOLUTELY NO WARRANTY
9 | expressed or implied. Use at your own risk.
10 |
11 | This code provides an implementation of the method described in the
12 | following publication:
13 |
14 | Li Ding and Chen Feng, "DeepMapping: Unsupervised Map Estimation From
15 | Multiple Point Clouds," The IEEE Conference on Computer Vision and
16 | Pattern Recognition (CVPR), June, 2019.
17 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds
2 | This repository contains the PyTorch implementation associated with the paper:
3 |
4 | "[DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds](https://arxiv.org/abs/1811.11397)",
5 | [Li Ding](https://www.hajim.rochester.edu/ece/lding6/) and [Chen Feng](https://ai4ce.github.io),
6 | CVPR 2019 (Oral).
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 | # Citation
15 | If you find DeepMapping useful in your research, please cite:
16 | ```BibTex
17 | @InProceedings{Ding_2019_CVPR,
18 | author = {Ding, Li and Feng, Chen},
19 | title = {DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds},
20 | booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
21 | month = {June},
22 | year = {2019}
23 | }
24 | ```
25 |
26 | # Dependencies
27 | Requires Python 3.x, [PyTorch](https://pytorch.org/), [Open3D](http://www.open3d.org/docs/introduction.html), and other common packages listed in ```requirements.txt```
28 | ```bash
29 | pip3 install -r requirements.txt
30 | ```
31 | Running on a GPU is highly recommended. The code has been tested with Python 3.6.5, PyTorch 0.4.0, and Open3D 0.4.0.
32 |
33 | # Getting Started
34 | ## Dataset
35 | Simulated 2D point clouds are provided as ```./data/2D/all_poses.tar```. Extract the tar file:
36 | ```bash
37 | tar -xvf ./data/2D/all_poses.tar -C ./data/2D/
38 | ```
39 | A set of sub-directories will be created. For example, ```./data/2D/v1_pose0``` corresponds to trajectory 0 sampled from environment v1. This folder contains 256 local point clouds saved in PCD format. The corresponding ground truth sensor poses are saved in ```gt_pose.mat```, a 256-by-3 matrix whose i-th row is the sensor pose \[x,y,theta\] of the i-th point cloud.
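For example, a minimal sketch for inspecting one trajectory (assumes the tar file has been extracted and uses the Open3D 0.4-era `read_point_cloud` API used elsewhere in this repo; the specific PCD file name below is illustrative):

```python
import scipy.io as sio
from open3d import read_point_cloud

gt_pose = sio.loadmat('./data/2D/v1_pose0/gt_pose.mat')['pose']  # (256, 3), rows are [x, y, theta]
pcd = read_point_cloud('./data/2D/v1_pose0/000000.pcd')          # illustrative file name
print(gt_pose.shape, len(pcd.points))
```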
40 |
41 | ## Solving Registration As Unsupervised Training
42 | To run DeepMapping, execute the script
43 | ```bash
44 | ./script/run_train_2D.sh
45 | ```
46 | By default, the results will be saved to ```./results/2D/```.
47 |
48 | ### Warm Start
49 | DeepMapping allows for seamless integration of a “warm start” that reduces convergence time and improves performance. Instead of starting from scratch, you can first perform a coarse registration of all point clouds using incremental ICP
50 | ```bash
51 | ./script/run_icp.sh
52 | ```
53 | The coarse registration can be further improved by DeepMapping. To do so, simply set ```INIT_POSE=/PATH/TO/ICP/RESULTS/pose_est.npy``` in ```./script/run_train_2D.sh```. Please see the comments in the script for detailed instructions.
54 |
55 | ## Evaluation
56 | The estimated sensor poses are saved as a numpy array ```pose_est.npy```. To evaluate the registration, execute the script
57 | ```bash
58 | ./script/run_eval_2D.sh
59 | ```
60 | The absolute trajectory error (ATE) is computed as the error metric.
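Under the hood the evaluation mirrors ```script/eval_2D.py```; a rough sketch (run from inside ```./script``` so the repo's ```utils``` package is importable; paths and the ```* 512``` denormalization factor are taken from that script):

```python
import numpy as np
import scipy.io as sio
import set_path  # puts the repo root on sys.path
import utils

gt_location = sio.loadmat('../data/2D/v1_pose0/gt_pose.mat')['pose'][:, :2]
pred_location = np.load('../results/2D/v1_pose0/pose_est.npy')[:, :2] * 512  # denormalization
ate, aligned_location = utils.compute_ate(pred_location, gt_location)
print('ATE:', ate)
```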
61 |
62 | ## Related Project
63 | [DeepMapping2 (CVPR'2023) for large-scale LiDAR mapping](https://github.com/ai4ce/DeepMapping2)
64 |
--------------------------------------------------------------------------------
/data/2D/all_poses.tar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/data/2D/all_poses.tar
--------------------------------------------------------------------------------
/data/2D/v1_pose0.tar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/data/2D/v1_pose0.tar
--------------------------------------------------------------------------------
/data/ActiveVisionDataset/Home_011_1/local_point_cloud/traj1.txt:
--------------------------------------------------------------------------------
1 | 001110011360103.png
2 | 001110011350103.png
3 | 001110011230103.png
4 | 001110011110103.png
5 | 001110011230103.png
6 | 001110011220103.png
7 | 001110011100103.png
8 | 001110010980103.png
9 | 001110011640103.png
10 | 001110011530103.png
11 | 001110011650103.png
12 | 001110011760103.png
13 | 001110011750103.png
14 | 001110013670103.png
15 | 001110013680103.png
16 | 001110013800103.png
17 |
--------------------------------------------------------------------------------
/data/ActiveVisionDataset/Home_011_1/local_point_cloud/traj2.txt:
--------------------------------------------------------------------------------
1 | 001110010070103.png
2 | 001110010270103.png
3 | 001110011110103.png
4 | 001110010990103.png
5 | 001110011230103.png
6 | 001110011350103.png
7 | 001110011230103.png
8 | 001110011240103.png
9 | 001110010400103.png
10 | 001110010410103.png
11 | 001110011130103.png
12 | 001110011010103.png
13 | 001110013590103.png
14 | 001110013580103.png
15 | 001110013570103.png
16 | 001110011650103.png
17 |
--------------------------------------------------------------------------------
/dataset_loader/AVD.py:
--------------------------------------------------------------------------------
1 | import os
2 | import glob
3 | import numpy as np
4 | import scipy.io as sio
5 | import torch
6 | from torch.utils.data import Dataset
7 | import open3d as o3d
8 |
9 | import utils
10 |
11 | def find_valid_points(local_point_cloud):
12 | """
13 | find valid points in local point cloud
14 | the z-values for invalid points are zeros
15 | local_point_cloud: <NxHxWx3> point clouds in the local frame
16 | valid_points: <NxHxW> binary mask of valid points (0/1)
17 | """
18 | eps = 1e-6
19 | valid_points = local_point_cloud[:,:,:,-1] > eps
20 | return valid_points
21 |
22 | class AVD(Dataset):
23 | width,height = 1080, 1920
24 | focal_len=(1070,1069.126)
25 | principal_pt = (927.268, 545.76)
26 |
27 | def __init__(self,root,traj,subsample_rate=20,depth_scale=2000,trans_by_pose=None):
28 | self.root = root
29 | self.traj = traj
30 | self._trans_by_pose = trans_by_pose
31 | self.depth_scale = depth_scale
32 | traj_file = os.path.join(root,'local_point_cloud',traj)
33 | depth_files = [line.rstrip('\n') for line in open(traj_file)]
34 | self.n_pc = len(depth_files)
35 |
36 | # load point cloud and gt
37 | image_structs = sio.loadmat(os.path.join(root,'image_structs.mat'))
38 | image_structs = image_structs['image_structs'][0]
39 | image_files = [i[0][0] for i in image_structs]
40 |
41 | point_clouds = []
42 | self.gt = np.zeros((self.n_pc,6))
43 | for index,depth_file in enumerate(depth_files):
44 | depth_file_full = os.path.join(root,'high_res_depth',depth_file)
45 | depth_map = np.asarray(o3d.read_image(depth_file_full))
46 | current_point_cloud = utils.convert_depth_map_to_pc(depth_map,self.focal_len,self.principal_pt,depth_scale=self.depth_scale)
47 | current_point_cloud = current_point_cloud[::subsample_rate,::subsample_rate,:]
48 | point_clouds.append(current_point_cloud)
49 |
50 | image_file = depth_file[:14] + '1.jpg'
51 | idx = image_files.index(image_file)
52 | current_image_struct = image_structs[idx]
53 | current_pos = current_image_struct[6]
54 | current_direction = current_image_struct[4]
55 | current_gt = np.concatenate((current_pos,current_direction)).T
56 |
57 | self.gt[index,:] = current_gt
58 |
59 | point_clouds = np.asarray(point_clouds)
60 | self.point_clouds = torch.from_numpy(point_clouds) #
61 | self.valid_points = find_valid_points(self.point_clouds)
62 |
63 |
64 | def __getitem__(self,index):
65 | pcd = self.point_clouds[index,:,:,:] #
66 | valid_points = self.valid_points[index,:] #
67 | if self._trans_by_pose is not None:
68 | pcd = pcd.unsqueeze(0) # <1XHxWx3>
69 | pose = self._trans_by_pose[index, :].unsqueeze(0) # <1x3>
70 | pcd = utils.transform_to_global_AVD(pose, pcd).squeeze(0)
71 | else:
72 | pose = torch.zeros(1,3,dtype=torch.float32)
73 | return pcd,valid_points,pose
74 |
75 | def __len__(self):
76 | return self.n_pc
77 |
--------------------------------------------------------------------------------
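A minimal usage sketch of this loader (mirroring how `script/train_AVD.py` constructs it; assumes the AVD data was fetched with `script/download_AVD.sh` and that the repo root is on `sys.path`):

```python
from torch.utils.data import DataLoader
from dataset_loader import AVD

dataset = AVD('../data/ActiveVisionDataset/Home_011_1/', 'traj1.txt', subsample_rate=40)
loader = DataLoader(dataset, batch_size=16, shuffle=False)
obs, valid, init_pose = next(iter(loader))
# obs: <BxHxWx3> local point clouds, valid: <BxHxW> mask, init_pose: <Bx1x3> (zeros when no warm start)
print(obs.shape, valid.shape, dataset.gt.shape)
```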
/dataset_loader/__init__.py:
--------------------------------------------------------------------------------
1 | from .simulated_data import SimulatedPointCloud
2 | from .AVD import AVD
3 |
--------------------------------------------------------------------------------
/dataset_loader/simulated_data.py:
--------------------------------------------------------------------------------
1 | import os
2 | import glob
3 | import numpy as np
4 | import torch
5 | from torch.utils.data import Dataset
6 | from open3d import read_point_cloud
7 |
8 | import utils
9 |
10 |
11 | def find_valid_points(local_point_cloud):
12 | """
13 | find valid points in local point cloud
14 | invalid points have all zeros local coordinates
15 | local_point_cloud: <NxLx2> point clouds in the local frame
16 | valid_points: <NxL> binary mask of valid points (0/1)
17 | """
18 | eps = 1e-6
19 | non_zero_coord = torch.abs(local_point_cloud) > eps
20 | valid_points = torch.sum(non_zero_coord, dim=-1)
21 | valid_points = valid_points > 0
22 | return valid_points
23 |
24 |
25 | class SimulatedPointCloud(Dataset):
26 | def __init__(self, root, trans_by_pose=None):
27 | # trans_by_pose: pose
28 | self.root = os.path.expanduser(root)
29 | self._trans_by_pose = trans_by_pose
30 | file_list = glob.glob(os.path.join(self.root, '*pcd'))
31 | self.file_list = sorted(file_list)
32 |
33 | self.pcds = [] # a list of open3d pcd objects
34 | point_clouds = [] #a list of tensor
35 | for file in self.file_list:
36 | pcd = read_point_cloud(file)
37 | self.pcds.append(pcd)
38 |
39 | current_point_cloud = np.asarray(pcd.points, dtype=np.float32)[:, 0:2]
40 | point_clouds.append(current_point_cloud)
41 |
42 | point_clouds = np.asarray(point_clouds)
43 | self.point_clouds = torch.from_numpy(point_clouds) #
44 |
45 | self.valid_points = find_valid_points(self.point_clouds) #
46 |
47 | # number of points in each point cloud
48 | self.n_obs = self.point_clouds.shape[1]
49 |
50 | def __getitem__(self, index):
51 | pcd = self.point_clouds[index,:,:] #
52 | valid_points = self.valid_points[index,:]
53 | if self._trans_by_pose is not None:
54 | pcd = pcd.unsqueeze(0) # <1XLx2>
55 | pose = self._trans_by_pose[index, :].unsqueeze(0) # <1x3>
56 | pcd = utils.transform_to_global_2D(pose, pcd).squeeze(0)
57 | else:
58 | pose = torch.zeros(1,3,dtype=torch.float32)
59 | return pcd,valid_points,pose
60 |
61 | def __len__(self):
62 | return len(self.point_clouds)
63 |
--------------------------------------------------------------------------------
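A minimal usage sketch (as in `script/train_2D.py`; assumes `./data/2D/v1_pose0` has been extracted and the repo root is on `sys.path`):

```python
from torch.utils.data import DataLoader
from dataset_loader import SimulatedPointCloud

dataset = SimulatedPointCloud('../data/2D/v1_pose0')
loader = DataLoader(dataset, batch_size=32, shuffle=False)
obs, valid, init_pose = next(iter(loader))
# obs: <BxLx2> local point clouds, valid: <BxL> mask, init_pose: <Bx1x3> (zeros when no warm start)
print(len(dataset), dataset.n_obs, obs.shape)
```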
/docs/_config.yml:
--------------------------------------------------------------------------------
1 | theme: jekyll-theme-modernist
--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------
1 | # DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds
2 |
3 | [**Li Ding** (University of Rochester)](https://www.hajim.rochester.edu/ece/lding6/), [**Chen Feng** (NYU Tandon School of Engineering)](https://ai4ce.github.io)
4 |
5 | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019, **Oral Presentation**
6 |
7 | |[Abstract](#abstract)|[Paper](#paper-arxiv)|[Code](#code-github)|[Results](#results)|[Acknowledgment](#acknowledgment)|
8 |
9 | 
10 |
11 | 
12 | 
13 | 
14 |
15 | 
16 | 
17 | 
18 |
19 |
20 | ### Abstract
21 | We propose DeepMapping, a novel registration framework using deep neural networks (DNNs) as auxiliary functions to align multiple point clouds from scratch to a globally consistent frame. We use DNNs to model the highly non-convex mapping process that traditionally involves hand-crafted data association, sensor pose initialization, and global refinement. Our key novelty is that properly defining unsupervised losses to "train" these DNNs through back-propagation is equivalent to solving the underlying registration problem, yet enables fewer dependencies on good initialization as required by ICP. Our framework contains two DNNs: a localization network that estimates the poses for input point clouds, and a map network that models the scene structure by estimating the occupancy status of global coordinates. This allows us to convert the registration problem to a binary occupancy classification, which can be solved efficiently using gradient-based optimization. We further show that DeepMapping can be readily extended to address the problem of Lidar SLAM by imposing geometric constraints between consecutive point clouds. Experiments are conducted on both simulated and real datasets. Qualitative and quantitative comparisons demonstrate that DeepMapping often enables more robust and accurate global registration of multiple point clouds than existing techniques.
22 |
23 | ### [Paper (arXiv)](https://arxiv.org/abs/1811.11397)
24 | To cite our paper:
25 | ```BibTex
26 | @article{ding2018deepmapping,
27 | title={DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds},
28 | author={Ding, Li and Feng, Chen},
29 | journal={arXiv preprint arXiv:1811.11397},
30 | year={2018}
31 | }
32 | ```
33 |
34 | ### [Code (GitHub)](https://github.com/ai4ce/DeepMapping)
35 |
36 | ```
37 | The code is copyrighted by the authors. Permission to copy and use
38 | this software for noncommercial use is hereby granted provided: (1)
39 | this notice is retained in all copies, (2) the publication describing
40 | the method (indicated below) is clearly cited, and (3) the
41 | distribution from which the code was obtained is clearly cited. For
42 | all other uses, please contact the authors.
43 |
44 | The software code is provided "as is" with ABSOLUTELY NO WARRANTY
45 | expressed or implied. Use at your own risk.
46 |
47 | This code provides an implementation of the method described in the
48 | following publication:
49 |
50 | Li Ding and Chen Feng, "DeepMapping: Unsupervised Map Estimation From
51 | Multiple Point Clouds," The IEEE Conference on Computer Vision and
52 | Pattern Recognition (CVPR), June, 2019.
53 | ```
54 |
55 | 
56 |
57 | ### Results
58 | #### 2D Mapping (Simulated Data)
59 | 
60 | 
61 | #### 3D Mapping (Real Data)
62 | 
63 | 
64 |
65 | ### Acknowledgment
66 | This work was partially done while the authors were with MERL, and was supported in part by NYU Tandon School of Engineering and MERL. [Chen Feng](https://ai4ce.github.io) is the corresponding author. We gratefully acknowledge the helpful comments and suggestions from Yuichi Taguchi, Dong Tian, Weiyang Liu, and Alan Sullivan.
67 |
68 |
69 |
70 |
71 |
72 |
--------------------------------------------------------------------------------
/docs/resources/ate_vs_epochs.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/docs/resources/ate_vs_epochs.jpg
--------------------------------------------------------------------------------
/docs/resources/deepmapping-2Dmapping.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/docs/resources/deepmapping-2Dmapping.jpg
--------------------------------------------------------------------------------
/docs/resources/deepmapping-2Dmapping2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/docs/resources/deepmapping-2Dmapping2.jpg
--------------------------------------------------------------------------------
/docs/resources/deepmapping-3Dmapping.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/docs/resources/deepmapping-3Dmapping.jpg
--------------------------------------------------------------------------------
/docs/resources/deepmapping-3Dmapping2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/docs/resources/deepmapping-3Dmapping2.jpg
--------------------------------------------------------------------------------
/docs/resources/deepmapping-overview.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/docs/resources/deepmapping-overview.jpg
--------------------------------------------------------------------------------
/docs/resources/vis_2D_sample1.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/docs/resources/vis_2D_sample1.gif
--------------------------------------------------------------------------------
/docs/resources/vis_2D_sample2.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/docs/resources/vis_2D_sample2.gif
--------------------------------------------------------------------------------
/docs/resources/vis_2D_sample3.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/docs/resources/vis_2D_sample3.gif
--------------------------------------------------------------------------------
/docs/resources/vis_AVD_sample1.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/docs/resources/vis_AVD_sample1.gif
--------------------------------------------------------------------------------
/docs/resources/vis_AVD_sample2.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/docs/resources/vis_AVD_sample2.gif
--------------------------------------------------------------------------------
/docs/resources/vis_AVD_sample3.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/docs/resources/vis_AVD_sample3.gif
--------------------------------------------------------------------------------
/loss/__init__.py:
--------------------------------------------------------------------------------
1 | from .chamfer_dist import chamfer_loss
2 | from .bce_loss import bce
3 |
4 |
5 | def bce_ch(pred, targets, obs_global, valid_obs=None, bce_weight=None, seq=2, gamma=0.1):
6 | """
7 | pred: <BxNx1>, occupancy probability from M-Net
8 | targets: <BxNx1>, occupancy label
9 | obs_global: <BxLxk>, k = 2,3, global point cloud
10 | valid_obs: <BxL>, indices of valid point (0/1),
11 | invalid points are not used for computing chamfer loss
12 | bce_weight: <BxNx1>, weight for each point in computing bce loss
13 | """
14 | bce_loss = bce(pred, targets, bce_weight)
15 | ch_loss = chamfer_loss(obs_global, valid_obs, seq)
16 | return gamma * bce_loss + (1 - gamma) * ch_loss
17 |
--------------------------------------------------------------------------------
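A self-contained sketch of calling `bce_ch` on dummy tensors (shapes follow how `models/deepmapping.py` assembles its inputs; all sizes here are arbitrary, and the repo root is assumed to be on `sys.path`):

```python
import torch
from loss import bce_ch

B, L, n_samples = 4, 16, 2
N = L * (1 + n_samples)                       # occupied + sampled unoccupied points
pred = torch.randn(B, N, 1)                   # M-Net logits
targets = torch.cat((torch.ones(B, L, 1), torch.zeros(B, L * n_samples, 1)), dim=1)
obs_global = torch.rand(B, L, 2)              # point clouds in the estimated global frame
valid_obs = torch.ones(B, L)                  # all points valid
bce_weight = torch.ones(B, N, 1)

print(bce_ch(pred, targets, obs_global, valid_obs, bce_weight, seq=2, gamma=0.1))
```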
/loss/bce_loss.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 |
4 |
5 | class BCEWithLogitsLoss2(nn.Module):
6 | def __init__(self, weight=None, reduction='elementwise_mean'):
7 | super(BCEWithLogitsLoss2, self).__init__()
8 | self.reduction = reduction
9 | self.register_buffer('weight', weight)
10 |
11 | def forward(self, input, target):
12 | return bce_with_logits(input, target, weight=self.weight, reduction=self.reduction)
13 |
14 |
15 | def bce_with_logits(input, target, weight=None, reduction='elementwise_mean'):
16 | """
17 | This function differs from F.binary_cross_entropy_with_logits in the way
18 | that if weight is not None, the loss is normalized by weight
19 | """
20 | if not (target.size() == input.size()):
21 | raise ValueError("Target size ({}) must be the same as input size ({})".format(
22 | target.size(), input.size()))
23 | if weight is not None:
24 | if not (weight.size() == input.size()):
25 | raise ValueError("Weight size ({}) must be the same as input size ({})".format(
26 | weight.size(), input.size()))
27 |
28 | max_val = (-input).clamp(min=0)
29 | loss = input - input * target + max_val + \
30 | ((-max_val).exp() + (-input - max_val).exp()).log()
31 |
32 | if weight is not None:
33 | loss = loss * weight
34 |
35 | if reduction == 'none':
36 | return loss
37 | elif reduction == 'elementwise_mean':
38 | if weight is not None:
39 | # different from F.binary_cross_entropy_with_logits
40 | return loss.sum() / weight.sum()
41 | else:
42 | return loss.mean()
43 | else:
44 | return loss.sum()
45 |
46 |
47 | def bce(pred, targets, weight=None):
48 | criterion = BCEWithLogitsLoss2(weight=weight)
49 | loss = criterion(pred, targets)
50 | return loss
51 |
--------------------------------------------------------------------------------
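A small sketch of the weighted behaviour (the mask below is illustrative; because the weighted loss is divided by `weight.sum()` rather than the element count, masked-out entries do not dilute the mean):

```python
import torch
from loss.bce_loss import bce

pred = torch.randn(2, 8, 1)                      # logits
target = torch.randint(0, 2, (2, 8, 1)).float()  # 0/1 occupancy labels
weight = torch.ones(2, 8, 1)
weight[:, 4:, :] = 0                             # treat the last 4 points of each sample as invalid

print(bce(pred, target, weight))                 # averaged over the 8 weighted entries only
print(bce(pred, target))                         # unweighted: plain mean over all 16 entries
```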
/loss/chamfer_dist.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 |
4 | INF = 1000000
5 |
6 |
7 | class ChamfersDistance(nn.Module):
8 | '''
9 | Exhaustive search to compute the Chamfer distance.
10 | '''
11 |
12 | def forward(self, input1, input2, valid1=None, valid2=None):
13 |
14 | # input1, input2: BxNxK, BxMxK, K = 3
15 | B, N, K = input1.shape
16 | _, M, _ = input2.shape
17 | if valid1 is not None:
18 | # ignore invalid points
19 | valid1 = valid1.type(torch.float32)
20 | valid2 = valid2.type(torch.float32)
21 |
22 | invalid1 = 1 - valid1.unsqueeze(-1).expand(-1, -1, K)
23 | invalid2 = 1 - valid2.unsqueeze(-1).expand(-1, -1, K)
24 |
25 | input1 = input1 + invalid1 * INF * torch.ones_like(input1)
26 | input2 = input2 + invalid2 * INF * torch.ones_like(input2)
27 |
28 | # Repeat (x,y,z) M times in a row
29 | input11 = input1.unsqueeze(2) # BxNx1xK
30 | input11 = input11.expand(B, N, M, K) # BxNxMxK
31 | # Repeat (x,y,z) N times in a column
32 | input22 = input2.unsqueeze(1) # Bx1xMxK
33 | input22 = input22.expand(B, N, M, K) # BxNxMxK
34 | # compute the distance matrix
35 | D = input11 - input22 # BxNxMxK
36 | D = torch.norm(D, p=2, dim=3) # BxNxM
37 |
38 | dist0, _ = torch.min(D, dim=1) # BxM
39 | dist1, _ = torch.min(D, dim=2) # BxN
40 |
41 | if valid1 is not None:
42 | dist0 = torch.sum(dist0 * valid2, 1) / torch.sum(valid2, 1)
43 | dist1 = torch.sum(dist1 * valid1, 1) / torch.sum(valid1, 1)
44 | else:
45 | dist0 = torch.mean(dist0, 1)
46 | dist1 = torch.mean(dist1, 1)
47 |
48 | loss = dist0 + dist1 # B
49 | loss = torch.mean(loss) # 1
50 | return loss
51 |
52 |
53 | def registration_loss(obs, valid_obs=None):
54 | """
55 | Registration consistency
56 | obs: a set of observation frames in the same coordinate system
57 | select one frame as the reference (ref_id) and the rest as targets
58 | compute the chamfer distance between each target frame and the reference
59 |
60 | valid_obs: indices of valid points in obs (0/1)
61 | """
62 | criterion = ChamfersDistance()
63 | bs = obs.shape[0]
64 | ref_id = 0
65 | ref_map = obs[ref_id, :, :].unsqueeze(0).expand(bs - 1, -1, -1)
66 | valid_ref = valid_obs[ref_id, :].unsqueeze(0).expand(bs - 1, -1)
67 |
68 | tgt_list = list(range(bs))
69 | tgt_list.pop(ref_id)
70 | tgt_map = obs[tgt_list, :, :]
71 | valid_tgt = valid_obs[tgt_list, :]
72 |
73 | loss = criterion(ref_map, tgt_map, valid_ref, valid_tgt)
74 | return loss
75 |
76 |
77 | def chamfer_loss(obs, valid_obs=None, seq=2):
78 | bs = obs.shape[0]
79 | total_step = bs - seq + 1
80 | loss = 0.
81 | for step in range(total_step):
82 | current_obs = obs[step:step + seq]
83 | current_valid_obs = valid_obs[step:step + seq]
84 |
85 | current_loss = registration_loss(current_obs, current_valid_obs)
86 | loss = loss + current_loss
87 |
88 | loss = loss / total_step
89 | return loss
90 |
--------------------------------------------------------------------------------
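A quick sketch of the sliding-window Chamfer consistency on random data (note that `registration_loss` always indexes `valid_obs`, so a validity mask must be passed; the repo root is assumed to be on `sys.path`):

```python
import torch
from loss.chamfer_dist import chamfer_loss

obs = torch.rand(5, 100, 2)             # 5 point clouds already in a shared frame, 100 points each
valid = torch.ones(5, 100)              # all points valid
print(chamfer_loss(obs, valid, seq=2))  # mean Chamfer distance over consecutive pairs
```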
/models/__init__.py:
--------------------------------------------------------------------------------
1 | from .deepmapping import DeepMapping2D, DeepMapping_AVD
2 |
--------------------------------------------------------------------------------
/models/deepmapping.py:
--------------------------------------------------------------------------------
1 | from copy import deepcopy
2 | import numpy as np
3 | import torch
4 | import torch.nn as nn
5 | from .networks import LocNetReg2D, LocNetRegAVD, MLP
6 | from utils import transform_to_global_2D, transform_to_global_AVD
7 |
8 | def get_M_net_inputs_labels(occupied_points, unoccupied_points):
9 | """
10 | get global coord (occupied and unoccupied) and corresponding labels
11 | """
12 | n_pos = occupied_points.shape[1]
13 | inputs = torch.cat((occupied_points, unoccupied_points), 1)
14 | bs, N, _ = inputs.shape
15 |
16 | gt = torch.zeros([bs, N, 1], device=occupied_points.device)
17 | gt.requires_grad_(False)
18 | gt[:, :n_pos, :] = 1
19 | return inputs, gt
20 |
21 |
22 | def sample_unoccupied_point(local_point_cloud, n_samples, center):
23 | """
24 | sample unoccupied points along rays in local point cloud
25 | local_point_cloud: <BxLxk> point cloud in the local frame
26 | n_samples: number of samples on each ray
27 | center: <Bx1xk> location of the sensor in the local frame
28 | """
29 | bs, L, k = local_point_cloud.shape
30 | center = center.expand(-1,L,-1) #
31 | unoccupied = torch.zeros(bs, L * n_samples, k,
32 | device=local_point_cloud.device)
33 | for idx in range(1, n_samples + 1):
34 | fac = torch.rand(1).item()
35 | unoccupied[:, (idx - 1) * L:idx * L, :] = center + (local_point_cloud-center) * fac
36 | return unoccupied
37 |
38 | class DeepMapping2D(nn.Module):
39 | def __init__(self, loss_fn, n_obs=256, n_samples=19, dim=[2, 64, 512, 512, 256, 128, 1]):
40 | super(DeepMapping2D, self).__init__()
41 | self.n_obs = n_obs
42 | self.n_samples = n_samples
43 | self.loss_fn = loss_fn
44 | self.loc_net = LocNetReg2D(n_points=n_obs, out_dims=3)
45 | self.occup_net = MLP(dim)
46 |
47 | def forward(self, obs_local,valid_points,sensor_pose):
48 | # obs_local:
49 | # sensor_pose: init pose
50 | self.obs_local = deepcopy(obs_local)
51 | self.valid_points = valid_points
52 |
53 | self.pose_est = self.loc_net(self.obs_local)
54 |
55 | self.obs_global_est = transform_to_global_2D(
56 | self.pose_est, self.obs_local)
57 |
58 | if self.training:
59 | sensor_center = sensor_pose[:,:,:2]
60 | self.unoccupied_local = sample_unoccupied_point(
61 | self.obs_local, self.n_samples,sensor_center)
62 | self.unoccupied_global = transform_to_global_2D(
63 | self.pose_est, self.unoccupied_local)
64 |
65 | inputs, self.gt = get_M_net_inputs_labels(
66 | self.obs_global_est, self.unoccupied_global)
67 | self.occp_prob = self.occup_net(inputs)
68 | loss = self.compute_loss()
69 | return loss
70 |
71 | def compute_loss(self):
72 | valid_unoccupied_points = self.valid_points.repeat(1, self.n_samples)
73 | bce_weight = torch.cat(
74 | (self.valid_points, valid_unoccupied_points), 1).float()
75 | # same as occp_prob and gt
76 | bce_weight = bce_weight.unsqueeze(-1)
77 |
78 | if self.loss_fn.__name__ == 'bce_ch':
79 | loss = self.loss_fn(self.occp_prob, self.gt, self.obs_global_est,
80 | self.valid_points, bce_weight, seq=4, gamma=0.1) # BCE_CH
81 | elif self.loss_fn.__name__ == 'bce':
82 | loss = self.loss_fn(self.occp_prob, self.gt, bce_weight) # BCE
83 | return loss
84 |
85 |
86 |
87 | class DeepMapping_AVD(nn.Module):
88 | #def __init__(self, loss_fn, n_samples=35, dim=[3, 256, 256, 256, 256, 256, 256, 1]):
89 | def __init__(self, loss_fn, n_samples=35, dim=[3, 64, 512, 512, 256, 128, 1]):
90 | super(DeepMapping_AVD, self).__init__()
91 | self.n_samples = n_samples
92 | self.loss_fn = loss_fn
93 | self.loc_net = LocNetRegAVD(out_dims=3) # y=0
94 | self.occup_net = MLP(dim)
95 |
96 | def forward(self, obs_local,valid_points,sensor_pose):
97 | # obs_local:
98 | # valid_points:
99 |
100 | self.obs_local = deepcopy(obs_local)
101 | self.valid_points = valid_points
102 | self.pose_est = self.loc_net(self.obs_local)
103 |
104 | bs = obs_local.shape[0]
105 | self.obs_local = self.obs_local.view(bs,-1,3)
106 | self.valid_points = self.valid_points.view(bs,-1)
107 |
108 | self.obs_global_est = transform_to_global_AVD(
109 | self.pose_est, self.obs_local)
110 |
111 | if self.training:
112 | sensor_center = sensor_pose[:,:,:2]
113 | self.unoccupied_local = sample_unoccupied_point(
114 | self.obs_local, self.n_samples,sensor_center)
115 | self.unoccupied_global = transform_to_global_AVD(
116 | self.pose_est, self.unoccupied_local)
117 |
118 | inputs, self.gt = get_M_net_inputs_labels(
119 | self.obs_global_est, self.unoccupied_global)
120 | self.occp_prob = self.occup_net(inputs)
121 | loss = self.compute_loss()
122 | return loss
123 |
124 | def compute_loss(self):
125 | valid_unoccupied_points = self.valid_points.repeat(1, self.n_samples)
126 | bce_weight = torch.cat(
127 | (self.valid_points, valid_unoccupied_points), 1).float()
128 | # same as occp_prob and gt
129 | bce_weight = bce_weight.unsqueeze(-1)
130 |
131 | if self.loss_fn.__name__ == 'bce_ch':
132 | loss = self.loss_fn(self.occp_prob, self.gt, self.obs_global_est,
133 | self.valid_points, bce_weight, seq=2, gamma=0.9) # BCE_CH
134 | elif self.loss_fn.__name__ == 'bce':
135 | loss = self.loss_fn(self.occp_prob, self.gt, bce_weight) # BCE
136 | return loss
137 |
138 |
139 |
--------------------------------------------------------------------------------
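A minimal forward/backward sketch on random data (mirrors how `script/train_2D.py` drives the model; sizes are arbitrary, the all-zero `sensor_pose` matches what `SimulatedPointCloud` returns when no initial pose is supplied, and the repo root is assumed to be on `sys.path`):

```python
import torch
import loss
from models import DeepMapping2D

B, L = 8, 256                                   # batch size, points per local point cloud
obs_local = torch.rand(B, L, 2)                 # local 2D point clouds
valid_points = torch.ones(B, L)                 # validity mask
sensor_pose = torch.zeros(B, 1, 3)              # sensor at the local origin

model = DeepMapping2D(loss_fn=loss.bce_ch, n_obs=L, n_samples=19)
model.train()
training_loss = model(obs_local, valid_points, sensor_pose)
training_loss.backward()
print(training_loss.item(), model.pose_est.shape)  # pose_est: <Bx3> estimated [x, y, theta]
```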
/models/networks.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 |
5 | def get_and_init_FC_layer(din, dout):
6 | li = nn.Linear(din, dout)
7 | nn.init.xavier_uniform_(
8 | li.weight.data, gain=nn.init.calculate_gain('relu'))
9 | li.bias.data.fill_(0.)
10 | return li
11 |
12 |
13 | def get_MLP_layers(dims, doLastRelu):
14 | layers = []
15 | for i in range(1, len(dims)):
16 | layers.append(get_and_init_FC_layer(dims[i - 1], dims[i]))
17 | if i == len(dims) - 1 and not doLastRelu:
18 | continue
19 | layers.append(nn.ReLU())
20 | return layers
21 |
22 |
23 | class PointwiseMLP(nn.Sequential):
24 | def __init__(self, dims, doLastRelu=False):
25 | layers = get_MLP_layers(dims, doLastRelu)
26 | super(PointwiseMLP, self).__init__(*layers)
27 |
28 |
29 | class MLP(nn.Module):
30 | def __init__(self, dims):
31 | super(MLP, self).__init__()
32 | self.mlp = PointwiseMLP(dims, doLastRelu=False)
33 |
34 | def forward(self, x):
35 | return self.mlp.forward(x)
36 |
37 |
38 | class ObsFeat2D(nn.Module):
39 | """Feature extractor for 1D organized point clouds"""
40 |
41 | def __init__(self, n_points, n_out=1024):
42 | super(ObsFeat2D, self).__init__()
43 | self.n_out = n_out
44 | k = 3
45 | p = int(np.floor(k / 2)) + 2
46 | self.conv1 = nn.Conv1d(2, 64, kernel_size=k, padding=p, dilation=3)
47 | self.conv2 = nn.Conv1d(64, 128, kernel_size=k, padding=p, dilation=3)
48 | self.conv3 = nn.Conv1d(
49 | 128, self.n_out, kernel_size=k, padding=p, dilation=3)
50 | self.mp = nn.MaxPool1d(n_points)
51 |
52 | def forward(self, x):
53 | assert(x.shape[1] == 2), "the input size must be <Bx2xL>"
54 |
55 | x = F.relu(self.conv1(x))
56 | x = F.relu(self.conv2(x))
57 | x = self.conv3(x)
58 | x = self.mp(x)
59 | x = x.view(-1, self.n_out) #
60 | return x
61 |
62 |
63 | class ObsFeatAVD(nn.Module):
64 | """Feature extractor for 2D organized point clouds"""
65 | def __init__(self, n_out=1024):
66 | super(ObsFeatAVD, self).__init__()
67 | self.n_out = n_out
68 | k = 3
69 | p = int(np.floor(k / 2)) + 2
70 | self.conv1 = nn.Conv2d(3,64,kernel_size=k,padding=p,dilation=3)
71 | self.conv2 = nn.Conv2d(64,128,kernel_size=k,padding=p,dilation=3)
72 | self.conv3 = nn.Conv2d(128,256,kernel_size=k,padding=p,dilation=3)
73 | self.conv4 = nn.Conv2d(256,self.n_out,kernel_size=k,padding=p,dilation=3)
74 | self.amp = nn.AdaptiveMaxPool2d(1)
75 |
76 | def forward(self, x):
77 | assert(x.shape[1]==3),"the input size must be <Bx3xHxW>"
78 | x = F.relu(self.conv1(x))
79 | x = F.relu(self.conv2(x))
80 | x = F.relu(self.conv3(x))
81 | x = self.conv4(x)
82 | x = self.amp(x)
83 | x = x.view(-1,self.n_out) #
84 | return x
85 |
86 |
87 | class LocNetReg2D(nn.Module):
88 | def __init__(self, n_points, out_dims):
89 | super(LocNetReg2D, self).__init__()
90 | self.obs_feat_extractor = ObsFeat2D(n_points)
91 | n_in = self.obs_feat_extractor.n_out
92 | self.fc = MLP([n_in, 512, 256, out_dims])
93 |
94 | def forward(self, obs):
95 | obs = obs.transpose(1, 2)
96 | obs_feat = self.obs_feat_extractor(obs)
97 | obs = obs.transpose(1, 2)
98 |
99 | x = self.fc(obs_feat)
100 | return x
101 |
102 |
103 | class LocNetRegAVD(nn.Module):
104 | def __init__(self, out_dims):
105 | super(LocNetRegAVD, self).__init__()
106 | self.obs_feat_extractor = ObsFeatAVD()
107 | n_in = self.obs_feat_extractor.n_out
108 | self.fc = MLP([n_in, 512, 256, out_dims])
109 |
110 | def forward(self, obs):
111 | # obs:
112 | bs = obs.shape[0]
113 | obs = obs.permute(0,3,1,2) #
114 | obs_feat = self.obs_feat_extractor(obs)
115 | obs = obs.permute(0,2,3,1)
116 |
117 | x = self.fc(obs_feat)
118 | return x
119 |
--------------------------------------------------------------------------------
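A shape sanity-check sketch for the two localization networks (batch and spatial sizes below are arbitrary; the repo root is assumed to be on `sys.path`):

```python
import torch
from models.networks import LocNetReg2D, LocNetRegAVD

obs_2d = torch.rand(4, 256, 2)                               # <BxLx2> simulated 2D scans
print(LocNetReg2D(n_points=256, out_dims=3)(obs_2d).shape)   # torch.Size([4, 3])

obs_avd = torch.rand(2, 54, 96, 3)                           # <BxHxWx3> organized depth point clouds
print(LocNetRegAVD(out_dims=3)(obs_avd).shape)               # torch.Size([2, 3])
```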
/requirements.txt:
--------------------------------------------------------------------------------
1 | torch==0.4.1
2 | open3d-python==0.4
3 | scipy
4 | matplotlib
5 | numpy
6 |
--------------------------------------------------------------------------------
/script/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ai4ce/DeepMapping/ffe0c5d3bf725bd9918cb1f936dfe65c50082e55/script/__init__.py
--------------------------------------------------------------------------------
/script/download_AVD.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # This is the script to download the AVD dataset
4 | # Alternatively, you can obtain the dataset from the original sources, whom we thank for providing the data
5 | # The relevant paper is:
6 | # Ammirato, Phil, et al. "A dataset for developing and benchmarking active vision." ICRA, 2017.
7 |
8 |
9 | # download file from google drive where the original AVD is stored
10 | function gdrive_download () {
11 | CONFIRM=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate "https://docs.google.com/uc?export=download&id=$1" -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')
12 | wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$CONFIRM&id=$1" -O $2
13 | rm -rf /tmp/cookies.txt
14 | }
15 |
16 | DATADIR=../data
17 | FILEID=0B8zhxvoYK3WTLXlvYlhrc2tDVW8
18 | OUTPUT=${DATADIR}/AVD_Part3.tar
19 | mkdir -p ${DATADIR}
20 | gdrive_download ${FILEID} ${OUTPUT}
21 | tar -C ${DATADIR} -xvf $OUTPUT ActiveVisionDataset/Home_011_1/
22 | rm $OUTPUT
23 |
--------------------------------------------------------------------------------
/script/eval_2D.py:
--------------------------------------------------------------------------------
1 | import set_path
2 | import os
3 | import argparse
4 | import functools
5 | print = functools.partial(print,flush=True)
6 |
7 | import numpy as np
8 | import scipy.io as sio
9 |
10 | import utils
11 |
12 | parser = argparse.ArgumentParser()
13 | parser.add_argument('-c','--checkpoint_dir',type=str,required=True,help='path to results folder')
14 | opt = parser.parse_args()
15 | saved_json_file = os.path.join(opt.checkpoint_dir,'opt.json')
16 | train_opt = utils.load_opt_from_json(saved_json_file)
17 | name = train_opt['name']
18 | data_dir = train_opt['data_dir']
19 |
20 | # load ground truth poses
21 | gt_file = os.path.join(data_dir,'gt_pose.mat')
22 | gt_pose = sio.loadmat(gt_file)
23 | gt_pose = gt_pose['pose']
24 | gt_location = gt_pose[:,:2]
25 |
26 | # load predicted poses
27 | pred_file = os.path.join(opt.checkpoint_dir,'pose_est.npy')
28 | pred_pose = np.load(pred_file)
29 | pred_location = pred_pose[:,:2] * 512 # denormalization, tbd
30 |
31 | # compute absolute trajectory error (ATE)
32 | ate,aligned_location = utils.compute_ate(pred_location,gt_location)
33 | print('{}, ate: {}'.format(name,ate))
34 |
--------------------------------------------------------------------------------
/script/eval_vis_AVD.py:
--------------------------------------------------------------------------------
1 | import set_path
2 | import os
3 | import argparse
4 | import functools
5 | print = functools.partial(print,flush=True)
6 |
7 | import numpy as np
8 | import open3d as o3d
9 |
10 | from dataset_loader import AVD
11 | import utils
12 |
13 | def add_y_coord_for_evaluation(pred_pos_DM):
14 | """
15 | pred_pos_DM (predicted position) estimated by DeepMapping only has x and z coordinates
16 | convert this to <Nx3> (x, y=0, z) for evaluation
17 | """
18 | n = pred_pos_DM.shape[0]
19 | x = pred_pos_DM[:,0]
20 | y = np.zeros_like(x)
21 | z = pred_pos_DM[:,1]
22 | return np.stack((x,y,z),axis=-1)
23 |
24 | parser = argparse.ArgumentParser()
25 | parser.add_argument('-c','--checkpoint_dir',type=str,required=True,help='path to results folder')
26 | opt = parser.parse_args()
27 | saved_json_file = os.path.join(opt.checkpoint_dir,'opt.json')
28 | train_opt = utils.load_opt_from_json(saved_json_file)
29 | name = train_opt['name']
30 | data_dir = train_opt['data_dir']
31 | subsample_rate = train_opt['subsample_rate']
32 | traj = train_opt['traj']
33 |
34 | # load ground truth poses
35 | dataset = AVD(data_dir,traj,subsample_rate)
36 | gt_pose = dataset.gt
37 | gt_location = gt_pose[:,:3]
38 |
39 | # load predicted poses
40 | pred_file = os.path.join(opt.checkpoint_dir,'pose_est.npy')
41 | pred_pose = np.load(pred_file)
42 | pred_location = pred_pose[:,:2] * dataset.depth_scale # denormalization
43 | pred_location = add_y_coord_for_evaluation(pred_location)
44 |
45 | # compute absolute trajectory error (ATE)
46 | ate,aligned_location = utils.compute_ate(pred_location,gt_location)
47 | print('{}, ate: {}'.format(name,ate))
48 |
49 | # vis results
50 | global_point_cloud_file = os.path.join(opt.checkpoint_dir,'obs_global_est.npy')
51 | pcds = utils.load_obs_global_est(global_point_cloud_file)
52 | o3d.draw_geometries([pcds])
53 |
--------------------------------------------------------------------------------
/script/incremental_icp.py:
--------------------------------------------------------------------------------
1 | import set_path
2 | import os
3 | import argparse
4 | import functools
5 | print = functools.partial(print,flush=True)
6 | import torch
7 | import numpy as np
8 |
9 | import utils
10 | from dataset_loader import SimulatedPointCloud
11 |
12 | parser = argparse.ArgumentParser()
13 | parser.add_argument('--name',type=str,default='test',help='experiment name')
14 | parser.add_argument('-m','--metric',type=str,default='point',choices=['point','plane'] ,help='minimization metric')
15 | parser.add_argument('-d','--data_dir',type=str,default='../data/2D/',help='dataset path')
16 | opt = parser.parse_args()
17 |
18 | checkpoint_dir = os.path.join('../results/2D',opt.name)
19 | if not os.path.exists(checkpoint_dir):
20 | os.makedirs(checkpoint_dir)
21 | utils.save_opt(checkpoint_dir,opt)
22 |
23 | dataset = SimulatedPointCloud(opt.data_dir)
24 | n_pc = len(dataset)
25 |
26 | pose_est = np.zeros((n_pc,3),dtype=np.float32)
27 | print('running icp')
28 | for idx in range(n_pc-1):
29 | dst,valid_dst,_ = dataset[idx]
30 | src,valid_src,_ = dataset[idx+1]
31 |
32 | dst = dst[valid_dst,:].numpy()
33 | src = src[valid_src,:].numpy()
34 |
35 | _,R0,t0 = utils.icp(src,dst,metrics=opt.metric)
36 | if idx == 0:
37 | R_cum = R0
38 | t_cum = t0
39 | else:
40 | R_cum = np.matmul(R_cum , R0)
41 | t_cum = np.matmul(R_cum,t0) + t_cum
42 |
43 | pose_est[idx+1,:2] = t_cum.T
44 | pose_est[idx+1,2] = np.arctan2(R_cum[1,0],R_cum[0,0])
45 |
46 | save_name = os.path.join(checkpoint_dir,'pose_est.npy')
47 | np.save(save_name,pose_est)
48 |
49 | print('saving results')
50 | pose_est = torch.from_numpy(pose_est)
51 | local_pc,valid_id,_ = dataset[:]
52 | global_pc = utils.transform_to_global_2D(pose_est,local_pc)
53 | utils.plot_global_point_cloud(global_pc,pose_est,valid_id,checkpoint_dir)
54 |
--------------------------------------------------------------------------------
/script/o3d_icp.py:
--------------------------------------------------------------------------------
1 | import set_path
2 | import os
3 | import argparse
4 | import functools
5 | print = functools.partial(print,flush=True)
6 | import torch
7 | import numpy as np
8 | import open3d
9 |
10 | import utils
11 | from dataset_loader import SimulatedPointCloud
12 |
13 | parser = argparse.ArgumentParser()
14 | parser.add_argument('--name',type=str,default='test',help='experiment name')
15 | parser.add_argument('-m','--metric',type=str,default='point',choices=['point','plane'] ,help='minimization metric')
16 | parser.add_argument('-d','--data_dir',type=str,default='../data/2D/',help='dataset path')
17 | parser.add_argument('-r','--radius',type=float,default=0.02)
18 | opt = parser.parse_args()
19 | print(opt.radius)
20 |
21 | checkpoint_dir = os.path.join('../results/2D',opt.name)
22 | if not os.path.exists(checkpoint_dir):
23 | os.makedirs(checkpoint_dir)
24 | utils.save_opt(checkpoint_dir,opt)
25 |
26 | dataset = SimulatedPointCloud(opt.data_dir)
27 | local_pcds = dataset.pcds[:]
28 | n_pc = len(local_pcds)
29 |
30 | #"""
31 | # remove invalid points
32 | local_pcds = [utils.remove_invalid_pcd(x) for x in local_pcds]
33 |
34 | if opt.metric == 'point':
35 | metric = open3d.TransformationEstimationPointToPoint()
36 | else:
37 | metric = open3d.TransformationEstimationPointToPlane()
38 | for idx in range(n_pc):
39 | open3d.estimate_normals(local_pcds[idx],search_param = open3d.KDTreeSearchParamHybrid(radius=opt.radius,max_nn=10))
40 |
41 |
42 | pose_est = np.zeros((n_pc,3),dtype=np.float32)
43 | print('running icp')
44 | for idx in range(n_pc-1):
45 | dst = local_pcds[idx]
46 | src = local_pcds[idx+1]
47 | result_icp = open3d.registration_icp(src,dst,opt.radius,estimation_method=metric)
48 |
49 | R0 = result_icp.transformation[:2,:2]
50 | t0 = result_icp.transformation[:2,3:]
51 | if idx == 0:
52 | R_cum = R0
53 | t_cum = t0
54 | else:
55 | R_cum = np.matmul(R_cum , R0)
56 | t_cum = np.matmul(R_cum,t0) + t_cum
57 |
58 | pose_est[idx+1,:2] = t_cum.T
59 | pose_est[idx+1,2] = np.arctan2(R_cum[1,0],R_cum[0,0])
60 |
61 | save_name = os.path.join(checkpoint_dir,'pose_est.npy')
62 | np.save(save_name,pose_est)
63 |
64 | # plot point cloud in global frame
65 |
66 | print('saving results')
67 | global_pcds = utils.transform_to_global_open3d(pose_est,local_pcds)
68 | utils.save_global_point_cloud_open3d(global_pcds,pose_est,checkpoint_dir)
69 |
--------------------------------------------------------------------------------
/script/run_eval_2D.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | CHECKPOINT_DIR='../results/2D/v1_pose0/'
4 | python eval_2D.py -c $CHECKPOINT_DIR
5 |
--------------------------------------------------------------------------------
/script/run_eval_vis_AVD.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | CHECKPOINT_DIR='../results/AVD/AVD_Home_011_1_traj1/'
4 | python eval_vis_AVD.py -c $CHECKPOINT_DIR
5 | #vglrun-wrapper python eval_vis_AVD.py -c $CHECKPOINT_DIR
6 |
--------------------------------------------------------------------------------
/script/run_icp.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # path to dataset
4 | DATA_DIR=../data/2D/v1_pose0
5 | # experiment name, the results will be saved to ../results/2D/${NAME}
6 | NAME=icp_v1_pose0
7 | # Error metrics for ICP
8 | # point: "point2point"
9 | # plane: "point2plane"
10 | METRIC=plane
11 |
12 | python incremental_icp.py --name $NAME -d $DATA_DIR -m $METRIC
13 |
--------------------------------------------------------------------------------
/script/run_train_2D.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # experiment name, the results will be saved to ../results/2D/${NAME}
4 | NAME=v1_pose0
5 | # path to dataset
6 | DATA_DIR=../data/2D/v1_pose0
7 | # training epochs
8 | EPOCH=3000
9 | # batch size
10 | BS=128
11 | # loss function
12 | LOSS=bce_ch
13 | # number of points sampled from line-of-sight
14 | N=19
15 | # logging interval
16 | LOG=20
17 |
18 | ### training from scratch
19 | #python train_2D.py --name $NAME -d $DATA_DIR -e $EPOCH -b $BS -l $LOSS -n $N --log_interval $LOG
20 |
21 | #### warm start
22 | #### uncomment the following commands to run DeepMapping with a warm start. This requires an initial sensor pose that can be computed using ./script/run_icp.sh
23 | INIT_POSE=../results/2D/icp_v1_pose0/pose_est.npy
24 | python train_2D.py --name $NAME -d $DATA_DIR -i $INIT_POSE -e $EPOCH -b $BS -l $LOSS -n $N --log_interval $LOG
25 |
--------------------------------------------------------------------------------
/script/run_train_AVD.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # path to dataset
4 | DATA_DIR=../data/ActiveVisionDataset/Home_011_1/
5 | # trajectory file name
6 | TRAJ=traj1
7 | # experiment name, the results will be saved to ../results/AVD/${NAME}
8 | NAME=AVD_Home_011_1_${TRAJ}
9 | # training epochs
10 | EPOCH=3000
11 | # batch size
12 | BS=16
13 | # loss function
14 | LOSS=bce_ch
15 | # number of points sampled from line-of-sight
16 | N=35
17 | # logging interval
18 | LOG=2
19 |
20 | ### training from scratch
21 | python train_AVD.py --name $NAME -d $DATA_DIR -t ${TRAJ}.txt -e $EPOCH -b $BS -l $LOSS -n $N --log_interval $LOG
22 |
--------------------------------------------------------------------------------
/script/set_path.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import os
3 | sys.path.insert(0, '../')
4 |
--------------------------------------------------------------------------------
/script/train_2D.py:
--------------------------------------------------------------------------------
1 | import set_path
2 | import os
3 | import argparse
4 | import functools
5 | print = functools.partial(print,flush=True)
6 |
7 | import numpy as np
8 | import torch
9 | import torch.optim as optim
10 | from torch.utils.data import DataLoader
11 |
12 | import utils
13 | import loss
14 | from models import DeepMapping2D
15 | from dataset_loader import SimulatedPointCloud
16 |
17 | torch.backends.cudnn.deterministic = True
18 | torch.manual_seed(999)
19 |
20 | parser = argparse.ArgumentParser()
21 | parser.add_argument('--name',type=str,default='test',help='experiment name')
22 | parser.add_argument('-e','--n_epochs',type=int,default=1000,help='number of epochs')
23 | parser.add_argument('-b','--batch_size',type=int,default=32,help='batch_size')
24 | parser.add_argument('-l','--loss',type=str,default='bce_ch',help='loss function')
25 | parser.add_argument('-n','--n_samples',type=int,default=19,help='number of sampled unoccupied points along rays')
26 | parser.add_argument('--lr',type=float,default=0.001,help='learning rate')
27 | parser.add_argument('-d','--data_dir',type=str,default='../data/2D/',help='dataset path')
28 | parser.add_argument('-m','--model', type=str, default=None,help='pretrained model name')
29 | parser.add_argument('-i','--init', type=str, default=None,help='init pose')
30 | parser.add_argument('--log_interval',type=int,default=10,help='logging interval of saving results')
31 |
32 | opt = parser.parse_args()
33 |
34 | checkpoint_dir = os.path.join('../results/2D',opt.name)
35 | if not os.path.exists(checkpoint_dir):
36 | os.makedirs(checkpoint_dir)
37 | utils.save_opt(checkpoint_dir,opt)
38 |
39 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
40 |
41 | print('loading dataset')
42 | if opt.init is not None:
43 | init_pose_np = np.load(opt.init)
44 | init_pose = torch.from_numpy(init_pose_np)
45 | else:
46 | init_pose = None
47 | dataset = SimulatedPointCloud(opt.data_dir,init_pose)
48 | loader = DataLoader(dataset,batch_size=opt.batch_size,shuffle=False)
49 |
50 | loss_fn = eval('loss.'+opt.loss)
51 |
52 | print('creating model')
53 | model = DeepMapping2D(loss_fn=loss_fn,n_obs=dataset.n_obs, n_samples=opt.n_samples).to(device)
54 | optimizer = optim.Adam(model.parameters(),lr=opt.lr)
55 |
56 | if opt.model is not None:
57 | utils.load_checkpoint(opt.model,model,optimizer)
58 |
59 | print('start training')
60 | for epoch in range(opt.n_epochs):
61 |
62 | training_loss= 0.0
63 | model.train()
64 |
65 | for index,(obs_batch,valid_pt,pose_batch) in enumerate(loader):
66 | obs_batch = obs_batch.to(device)
67 | valid_pt = valid_pt.to(device)
68 | pose_batch = pose_batch.to(device)
69 | loss = model(obs_batch,valid_pt,pose_batch)
70 |
71 | optimizer.zero_grad()
72 | loss.backward()
73 | optimizer.step()
74 |
75 | training_loss += loss.item()
76 |
77 | training_loss_epoch = training_loss/len(loader)
78 |
79 | if (epoch+1) % opt.log_interval == 0:
80 | print('[{}/{}], training loss: {:.4f}'.format(
81 | epoch+1,opt.n_epochs,training_loss_epoch))
82 |
83 | obs_global_est_np = []
84 | pose_est_np = []
85 | with torch.no_grad():
86 | model.eval()
87 | for index,(obs_batch,valid_pt,pose_batch) in enumerate(loader):
88 | obs_batch = obs_batch.to(device)
89 | valid_pt = valid_pt.to(device)
90 | pose_batch = pose_batch.to(device)
91 | model(obs_batch,valid_pt,pose_batch)
92 |
93 | obs_global_est_np.append(model.obs_global_est.cpu().detach().numpy())
94 | pose_est_np.append(model.pose_est.cpu().detach().numpy())
95 |
96 | pose_est_np = np.concatenate(pose_est_np)
97 | if init_pose is not None:
98 | pose_est_np = utils.cat_pose_2D(init_pose_np,pose_est_np)
99 |
100 | save_name = os.path.join(checkpoint_dir,'model_best.pth')
101 | utils.save_checkpoint(save_name,model,optimizer)
102 |
103 | obs_global_est_np = np.concatenate(obs_global_est_np)
104 | kwargs = {'e':epoch+1}
105 | valid_pt_np = dataset.valid_points.cpu().detach().numpy()
106 | utils.plot_global_point_cloud(obs_global_est_np,pose_est_np,valid_pt_np,checkpoint_dir,**kwargs)
107 |
108 | #save_name = os.path.join(checkpoint_dir,'obs_global_est.npy')
109 | #np.save(save_name,obs_global_est_np)
110 |
111 | save_name = os.path.join(checkpoint_dir,'pose_est.npy')
112 | np.save(save_name,pose_est_np)
113 |
114 |
--------------------------------------------------------------------------------
/script/train_AVD.py:
--------------------------------------------------------------------------------
1 | import set_path
2 | import os
3 | import argparse
4 | import functools
5 | print = functools.partial(print,flush=True)
6 |
7 | import numpy as np
8 | import torch
9 | import torch.optim as optim
10 | from torch.utils.data import DataLoader
11 |
12 | import utils
13 | import loss
14 | from models import DeepMapping_AVD
15 | from dataset_loader import AVD
16 |
17 | torch.backends.cudnn.deterministic = True
18 | torch.manual_seed(999)
19 |
20 |
21 |
22 | parser = argparse.ArgumentParser()
23 | parser.add_argument('--name',type=str,default='test',help='experiment name')
24 | parser.add_argument('-e','--n_epochs',type=int,default=1000,help='number of epochs')
25 | parser.add_argument('-b','--batch_size',type=int,default=16,help='batch_size')
26 | parser.add_argument('-l','--loss',type=str,default='bce_ch',help='loss function')
27 | parser.add_argument('-n','--n_samples',type=int,default=35,help='number of sampled unoccupied points along rays')
28 | parser.add_argument('-s','--subsample_rate',type=int,default=40,help='subsample rate')
29 | parser.add_argument('--lr',type=float,default=0.001,help='learning rate')
30 | parser.add_argument('-d','--data_dir',type=str,default='../data/ActiveVisionDataset/',help='dataset path')
31 | parser.add_argument('-t','--traj',type=str,default='traj1.txt',help='trajectory file name')
32 | parser.add_argument('-m','--model', type=str, default=None,help='pretrained model name')
33 | #parser.add_argument('-i','--init', type=str, default=None,help='init pose')
34 | parser.add_argument('--log_interval',type=int,default=10,help='logging interval of saving results')
35 |
36 | opt = parser.parse_args()
37 |
38 | checkpoint_dir = os.path.join('../results/AVD',opt.name)
39 | if not os.path.exists(checkpoint_dir):
40 | os.makedirs(checkpoint_dir)
41 | utils.save_opt(checkpoint_dir,opt)
42 |
43 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
44 |
45 | # TODO warm start
46 | #if opt.init is not None:
47 | # init_pose_np = np.load(opt.init)
48 | # init_pose = torch.from_numpy(init_pose_np)
49 | #else:
50 | # init_pose = None
51 |
52 |
53 | print('loading dataset')
54 | dataset = AVD(opt.data_dir,opt.traj,opt.subsample_rate)
55 | loader = DataLoader(dataset,batch_size=opt.batch_size,shuffle=False)
56 |
57 | loss_fn = eval('loss.'+opt.loss)
58 |
59 | print('creating model')
60 |
61 | model = DeepMapping_AVD(loss_fn=loss_fn,n_samples=opt.n_samples).to(device)
62 | optimizer = optim.Adam(model.parameters(),lr=opt.lr)
63 |
64 | if opt.model is not None:
65 | utils.load_checkpoint(opt.model,model,optimizer)
66 |
67 |
68 |
69 | print('start training')
70 | best_loss = 2.0
71 | for epoch in range(opt.n_epochs):
72 |
73 | training_loss= 0.0
74 | model.train()
75 |
76 | for index,(obs_batch,valid_pt, pose_batch) in enumerate(loader):
77 | obs_batch = obs_batch.to(device)
78 | valid_pt = valid_pt.to(device)
79 | pose_batch = pose_batch.to(device)
80 | loss = model(obs_batch,valid_pt,pose_batch)
81 |
82 | optimizer.zero_grad()
83 | loss.backward()
84 | optimizer.step()
85 |
86 | training_loss += loss.item()
87 |
88 | training_loss_epoch = training_loss/len(loader)
89 |
90 | if (epoch+1) % opt.log_interval == 0:
91 | print('[{}/{}], training loss: {:.4f}'.format(
92 | epoch+1,opt.n_epochs,training_loss_epoch))
93 |
94 | if training_loss_epoch < best_loss:
95 | best_loss = training_loss_epoch
96 | obs_global_est_np = []
97 | pose_est_np = []
98 | with torch.no_grad():
99 | model.eval()
100 | for index,(obs_batch,valid_pt,pose_batch) in enumerate(loader):
101 | obs_batch = obs_batch.to(device)
102 | valid_pt = valid_pt.to(device)
103 | pose_batch = pose_batch.to(device)
104 | model(obs_batch,valid_pt,pose_batch)
105 |
106 | obs_global_est_np.append(model.obs_global_est.cpu().detach().numpy())
107 | pose_est_np.append(model.pose_est.cpu().detach().numpy())
108 |
109 | pose_est_np = np.concatenate(pose_est_np)
110 | #if init_pose is not None:
111 | # pose_est_np = utils.cat_pose_AVD(init_pose_np,pose_est_np)
112 |
113 | save_name = os.path.join(checkpoint_dir,'model_best.pth')
114 | utils.save_checkpoint(save_name,model,optimizer)
115 |
116 | obs_global_est_np = np.concatenate(obs_global_est_np)
117 | #kwargs = {'e':epoch+1}
118 | #valid_pt_np = dataset.valid_points.cpu().detach().numpy()
119 |
120 | save_name = os.path.join(checkpoint_dir,'obs_global_est.npy')
121 | np.save(save_name,obs_global_est_np)
122 |
123 | save_name = os.path.join(checkpoint_dir,'pose_est.npy')
124 | np.save(save_name,pose_est_np)
125 |
--------------------------------------------------------------------------------
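
The training loop above writes its best-so-far results to `../results/AVD/<name>`: the model weights (`model_best.pth`), the globally registered point clouds (`obs_global_est.npy`), and the estimated sensor poses (`pose_est.npy`). Below is a minimal sketch for loading and inspecting those arrays after a run; the experiment name `test` (the argparse default) and the shapes noted in the comments are assumptions, not guarantees.

import os
import numpy as np

checkpoint_dir = '../results/AVD/test'  # default --name used by train_AVD.py
pose_est = np.load(os.path.join(checkpoint_dir, 'pose_est.npy'))              # estimated poses, typically (N, 3)
obs_global_est = np.load(os.path.join(checkpoint_dir, 'obs_global_est.npy'))  # registered observations per scan

print('poses:', pose_est.shape)
print('registered observations:', obs_global_est.shape)
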
/utils/__init__.py:
--------------------------------------------------------------------------------
1 | from .utils import *
2 | from .geometry_utils import *
3 | from .vis_utils import *
4 | from .open3d_utils import *
5 | #from .icp import *
6 |
--------------------------------------------------------------------------------
/utils/geometry_utils.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import numpy as np
3 | import open3d
4 | from sklearn.neighbors import NearestNeighbors
5 | import sys
6 |
7 | def transform_to_global_2D(pose, obs_local):
8 | """
9 | transform local point cloud to global frame
10 | row-based matrix product
11 | pose: (B, 3) tensor; each row is [tx, ty, theta]
12 | obs_local: (B, L, 2) tensor of local 2D point coordinates
13 | """
14 | L = obs_local.shape[1]
15 | # c0: (B, 2) location of the sensor in the global coordinate frame
16 | c0, theta0 = pose[:, 0:2], pose[:, 2]
17 | c0 = c0.unsqueeze(1).expand(-1, L, -1) # (B, L, 2)
18 |
19 | cos = torch.cos(theta0).unsqueeze(-1).unsqueeze(-1)
20 | sin = torch.sin(theta0).unsqueeze(-1).unsqueeze(-1)
21 | R_transpose = torch.cat((cos, sin, -sin, cos), dim=1).reshape(-1, 2, 2)
22 |
23 | obs_global = torch.bmm(obs_local, R_transpose) + c0
24 | return obs_global
25 |
26 | def transform_to_global_AVD(pose, obs_local):
27 | """
28 | transform local observations to the global coordinate frame
29 | :param pose: (B, 3) tensor [x, z, theta]; the sensor moves in the x-z plane (y = 0)
30 | :param obs_local: (B, L, 3) tensor (unorganized) or (B, H, W, 3) tensor (organized)
31 | :return obs_global: (B, L, 3) (unorganized) or (B, H, W, 3) (organized)
32 | """
33 | is_organized = 1 if len(obs_local.shape) == 4 else 0
34 | b = obs_local.shape[0]
35 | if is_organized:
36 | H,W = obs_local.shape[1:3]
37 | obs_local = obs_local.view(b,-1,3) # (B, H*W, 3)
38 |
39 | L = obs_local.shape[1]
40 |
41 | c0, theta0 = pose[:,0:2],pose[:,2] # c0: (B, 2) sensor location [x, z] in the global coordinate frame
42 |
43 | zero = torch.zeros_like(c0[:,:1])
44 | c0 = torch.cat((c0,zero),-1) # (B, 3): [x, z, 0]
45 | c0 = c0[:,[0,2,1]] # reorder to [x, 0, z]
46 | c0 = c0.unsqueeze(1).expand(-1,L,-1) # (B, L, 3)
47 |
48 | cos = torch.cos(theta0).unsqueeze(-1).unsqueeze(-1)
49 | sin = torch.sin(theta0).unsqueeze(-1).unsqueeze(-1)
50 | zero = torch.zeros_like(sin)
51 | one = torch.ones_like(sin)
52 |
53 | R_y_transpose = torch.cat((cos,zero,-sin,zero,one,zero,sin,zero,cos),dim=1).reshape(-1,3,3)
54 | obs_global = torch.bmm(obs_local,R_y_transpose) + c0
55 | if is_organized:
56 | obs_global = obs_global.view(b,H,W,3)
57 | return obs_global
58 |
59 |
60 | def rigid_transform_kD(A, B):
61 | """
62 | Find optimal transformation between two sets of corresponding points
63 | Adapted from: http://nghiaho.com/uploads/code/rigid_transform_3D.py_
64 | Args:
65 | A, B: Nxk matrices; each row represents a k-D point
66 | Returns:
67 | R: kxk
68 | t: kx1
69 | B = R*A+t
70 | """
71 | assert len(A) == len(B)
72 | N,k = A.shape
73 |
74 | centroid_A = np.mean(A, axis=0)
75 | centroid_B = np.mean(B, axis=0)
76 |
77 | # centre the points
78 | AA = A - np.tile(centroid_A, (N, 1))
79 | BB = B - np.tile(centroid_B, (N, 1))
80 |
81 | H = np.matmul(np.transpose(AA) , BB)
82 | U, S, Vt = np.linalg.svd(H)
83 | R = np.matmul(Vt.T , U.T)
84 |
85 | # special reflection case
86 | if np.linalg.det(R) < 0:
87 | Vt[k-1,:] *= -1
88 | R = np.matmul(Vt.T , U.T)
89 |
90 | t = np.matmul(-R,centroid_A.T) + centroid_B.T
91 | t = np.expand_dims(t,-1)
92 | return R, t
93 |
94 | def estimate_normal_eig(data):
95 | """
96 | Computes the vector normal to the k-dimensional sample points
97 | """
98 | data -= np.mean(data,axis=0)
99 | data = data.T
100 | A = np.cov(data)
101 | w,v = np.linalg.eig(A)
102 | idx = np.argmin(w)
103 | v = v[:,idx]
104 | v /= np.linalg.norm(v,2)
105 | return v
106 |
107 | def surface_normal(pc,n_neighbors=6):
108 | """
109 | Estimate point cloud surface normal
110 | Args:
111 | pc: Nxk matrix representing k-dimensional point cloud
112 | """
113 |
114 | n_points,k = pc.shape
115 | v = np.zeros_like(pc)
116 |
117 | # nn search
118 | nbrs = NearestNeighbors(n_neighbors=n_neighbors, algorithm='auto').fit(pc)
119 | _, indices = nbrs.kneighbors(pc)
120 | neighbor_points = pc[indices]
121 | for i in range(n_points):
122 | # estimate surface normal
123 | v_tmp = estimate_normal_eig(neighbor_points[i,])
124 | v_tmp[abs(v_tmp)<1e-5] = 0
125 | if v_tmp[0] < 0:
126 | v_tmp *= -1
127 | v[i,:] = v_tmp
128 | return v
129 |
130 |
131 | def point2plane_metrics_2D(p,q,v):
132 | """
133 | Point-to-plane minimization
134 | Chen, Y. and G. Medioni. “Object Modelling by Registration of Multiple Range Images.”
135 | Image Vision Computing. Butterworth-Heinemann . Vol. 10, Issue 3, April 1992, pp. 145-155.
136 |
137 | Args:
138 | p: Nx2 matrix, moving point locations
139 | q: Nx2 matrix, fixed point locations
140 | v: Nx2 matrix, fixed point normals
141 | Returns:
142 | R: 2x2 matrix
143 | t: 2x1 matrix
144 | """
145 | assert q.shape[1] == p.shape[1] == v.shape[1] == 2, 'points must be 2D'
146 |
147 | p,q,v = np.array(p),np.array(q),np.array(v)
148 | c = np.expand_dims(np.cross(p,v),-1)
149 | cn = np.concatenate((c,v),axis=1) # [ci,nix,niy]
150 | C = np.matmul(cn.T,cn)
151 | if np.linalg.cond(C)>=1/sys.float_info.epsilon:
152 | # handle singular matrix
153 | raise ArithmeticError('Singular matrix')
154 |
155 | # print(C.shape)
156 | qp = q-p
157 | b = np.array([
158 | [(qp*cn[:,0:1]*v).sum()],
159 | [(qp*cn[:,1:2]*v).sum()],
160 | [(qp*cn[:,2:]*v).sum()],
161 | ])
162 |
163 | X = np.linalg.solve(C, b)
164 | cos_ = np.cos(X[0])[0]
165 | sin_ = np.sin(X[0])[0]
166 | R = np.array([
167 | [cos_,-sin_],
168 | [sin_,cos_]
169 | ])
170 | t = np.array(X[1:])
171 | return R,t
172 |
173 | def icp(src,dst,nv=None,n_iter=100,init_pose=[0,0,0],torlerance=1e-6,metrics='point',verbose=False):
174 | '''
175 | Currently only works for 2D case
176 | Args:
177 | src: 2-dim moving points
178 | dst: 2-dim fixed points
179 | n_iter: a positive integer specifying the maximum number of iterations
180 | init_pose: [tx,ty,theta] initial transformation
181 | torlerance: the tolerance of registration error
182 | metrics: 'point' or 'plane'
183 |
184 | Return:
185 | src: transformed src points
186 | R: rotation matrix
187 | t: translation vector
188 | R*src + t
189 | '''
190 | n_src = src.shape[0]
191 | if metrics == 'plane' and nv is None:
192 | nv = surface_normal(dst)
193 |
194 | #src = np.matrix(src)
195 | #dst = np.matrix(dst)
196 | #Initialise with the initial pose estimation
197 | R_init = np.array([[np.cos(init_pose[2]),-np.sin(init_pose[2])],
198 | [np.sin(init_pose[2]), np.cos(init_pose[2])]
199 | ])
200 | t_init = np.array([[init_pose[0]],
201 | [init_pose[1]]
202 | ])
203 |
204 | #src = R_init*src.T + t_init
205 | src = np.matmul(R_init,src.T) + t_init
206 | src = src.T
207 |
208 | R,t = R_init,t_init
209 |
210 | prev_err = np.inf
211 | nbrs = NearestNeighbors(n_neighbors=1, algorithm='auto').fit(dst)
212 | for i in range(n_iter):
213 | # Find the nearest neighbours
214 | _, indices = nbrs.kneighbors(src)
215 |
216 | # Compute the transformation
217 | if metrics == 'point':
218 | R0,t0 = rigid_transform_kD(src,dst[indices[:,0]])
219 | elif metrics=='plane':
220 | try:
221 | R0,t0 = point2plane_metrics_2D(src,dst[indices[:,0]], nv[indices[:,0]])
222 | except ArithmeticError:
223 | print('Singular matrix')
224 | return src,R,t
225 | else:
226 | raise ValueError('metrics: {} not recognized.'.format(metrics))
227 | # Update dst and compute error
228 | src = np.matmul(R0,src.T) + t0
229 | src = src.T
230 |
231 | R = np.matmul(R0,R)
232 | t = np.matmul(R0,t) + t0
233 | #R = R0*R
234 | #t = R0*t + t0
235 | current_err = np.sqrt((np.array(src-dst[indices[:,0]])**2).sum()/n_src)
236 |
237 | if verbose:
238 | print('iter: {}, error: {}'.format(i,current_err))
239 |
240 | if np.abs(current_err - prev_err) < torlerance:
241 | break
242 | else:
243 | prev_err = current_err
244 |
245 | return src,R,t
246 |
247 |
248 | def compute_ate(output,target):
249 | """
250 | compute absolute trajectory error (ATE) for the AVD dataset
251 | Args:
252 | output: Nxk matrix of predicted trajectory positions, where N is #scans
253 | target: Nxk matrix of ground truth trajectory positions
254 | Returns:
255 | trans_error: absolute trajectory error for each pose
256 | output_aligned: aligned position in ground truth coord
257 | """
258 | R,t = rigid_transform_kD(output,target)
259 | output_aligned = np.matmul(R , output.T) + t
260 | output_aligned = output_aligned.T
261 |
262 | align_error = np.array(output_aligned - target)
263 | trans_error = np.sqrt(np.sum(align_error**2,1))
264 |
265 | ate = np.sqrt(np.dot(trans_error,trans_error) / len(trans_error))
266 |
267 | return ate,output_aligned
268 |
269 | def remove_invalid_pcd(pcd):
270 | """
271 | remove invalid points, i.e. points whose coordinates are all zero
272 | pcd: open3d point cloud object
273 | """
274 | pcd_np = np.asarray(pcd.points) # (N, 3)
275 | non_zero_coord = np.abs(pcd_np) > 1e-6 # (N, 3) boolean mask of non-zero coordinates
276 | valid_ind = np.sum(non_zero_coord,axis=-1)>0 # (N,) True where at least one coordinate is non-zero
277 | valid_ind = list(np.nonzero(valid_ind)[0])
278 | valid_pcd = open3d.select_down_sample(pcd,valid_ind)
279 | return valid_pcd
280 |
281 | def ang2mat(theta):
282 | c = np.cos(theta)
283 | s = np.sin(theta)
284 | R = np.array([[c,-s],[s,c]])
285 | return R
286 |
287 | def cat_pose_2D(pose0,pose1):
288 | """
289 | pose0, pose1: (N, 3) numpy arrays
290 | each row: [tx, ty, theta]; returns the composition of pose1 applied after pose0
291 | """
292 | assert(pose0.shape==pose1.shape)
293 | n_pose = pose0.shape[0]
294 | pose_out = np.zeros_like(pose0)
295 | for i in range(n_pose):
296 | R0 = ang2mat(pose0[i,-1])
297 | R1 = ang2mat(pose1[i,-1])
298 | t0 = np.expand_dims(pose0[i,:2],-1)
299 | t1 = np.expand_dims(pose1[i,:2],-1)
300 |
301 | R = np.matmul(R1,R0)
302 | theta = np.arctan2(R[1,0],R[0,0])
303 | t = np.matmul(R1,t0) + t1
304 | pose_out[i,:2] = t.T
305 | pose_out[i,2] = theta
306 | return pose_out
307 |
308 | def convert_depth_map_to_pc(depth,fxy,cxy,max_depth=7000,depth_scale=2000):
309 | """
310 | create point cloud from depth map and camera intrinsics
311 | depth: (H, W) numpy array of depth values
312 | fxy: [fx,fy]
313 | cxy: [cx,cy]
314 | """
315 | fx,fy = fxy
316 | cx,cy = cxy
317 | h,w = depth.shape
318 |
319 | c,r = np.meshgrid(range(1,w+1), range(1,h+1))
320 | invalid = depth >= max_depth
321 | depth[invalid] = 0
322 |
323 | z = depth / float(depth_scale)
324 | x = z * (c-cx) / fx
325 | y = z * (r-cy) / fy
326 | xyz = np.dstack((x,y,z)).astype(np.float32)
327 | return xyz
328 |
329 |
--------------------------------------------------------------------------------
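
The ICP and ATE helpers above are self-contained and can be sanity-checked on synthetic 2-D data. Below is a minimal sketch, assuming the repository root is on `PYTHONPATH` so the `utils` package is importable, and using small offsets so the nearest-neighbour matching inside `icp` converges.

import numpy as np
from utils.geometry_utils import icp, compute_ate, ang2mat

rng = np.random.RandomState(0)
dst = rng.rand(500, 2)                                   # fixed 2-D point set

# point-to-point ICP: recover a small rigid offset of the same points
R_gt, t_gt = ang2mat(0.05), np.array([[0.02], [-0.01]])
src = (np.matmul(R_gt, dst.T) + t_gt).T                  # moving point set
src_aligned, R, t = icp(src, dst, n_iter=50, metrics='point')
print('mean residual:', np.mean(np.linalg.norm(src_aligned - dst, axis=1)))  # small for this mild offset

# compute_ate uses exact index-wise correspondences, so any offset magnitude works
ate, traj_aligned = compute_ate(src, dst)
print('ATE:', ate)  # close to zero, since src is an exact rigid transform of dst
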
/utils/open3d_utils.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import open3d as o3d
3 | import copy
4 |
5 | def transform_to_global_open3d(pose,local_pcd):
6 | pcd = copy.deepcopy(local_pcd)
7 | n_pcd = len(pcd)
8 | for i in range(n_pcd):
9 | tx,ty,theta = pose[i,:]
10 | cos,sin = np.cos(theta),np.sin(theta)
11 | trans = np.array([
12 | [cos,-sin,0,tx],
13 | [sin,cos,0,ty],
14 | [0,0,1,0],
15 | [0,0,0,1],
16 | ])
17 | pcd[i].transform(trans)
18 | return pcd
19 |
20 |
21 | def np_to_pcd(xyz):
22 | """
23 | convert numpy array to point cloud object in open3d
24 | """
25 | xyz = xyz.reshape(-1,3)
26 | pcd = o3d.PointCloud()
27 | pcd.points = o3d.Vector3dVector(xyz)
28 | pcd.paint_uniform_color(np.random.rand(3,))
29 | return pcd
30 |
31 |
32 | def load_obs_global_est(file_name):
33 | """
34 | load saved obs_global_est.npy file and convert to point cloud object
35 | """
36 | obs_global_est = np.load(file_name)
37 | n_pc = obs_global_est.shape[0]
38 | pcds = o3d.PointCloud()
39 |
40 | for i in range(n_pc):
41 | xyz = obs_global_est[i,:,:]
42 | current_pcd = np_to_pcd(xyz)
43 | pcds += current_pcd
44 | return pcds
45 |
46 |
47 |
--------------------------------------------------------------------------------
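
These helpers use the legacy flat Open3D namespace (`o3d.PointCloud`, `o3d.Vector3dVector`, and `open3d.select_down_sample` in geometry_utils.py), which matches the Open3D versions current when the code was released. On newer Open3D releases (roughly 0.10 onward) the same classes live under `o3d.geometry` and `o3d.utility`, and `select_down_sample` was replaced by the `select_by_index` method. A minimal sketch of `np_to_pcd` under that assumption:

import numpy as np
import open3d as o3d

def np_to_pcd_new(xyz):
    """np_to_pcd rewritten for newer Open3D releases (assumed >= 0.10)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz.reshape(-1, 3))
    pcd.paint_uniform_color(np.random.rand(3))  # one random color per cloud
    return pcd
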
/utils/utils.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | import torch
4 |
5 |
6 | def save_opt(working_dir, opt):
7 | """
8 | Save option as a json file
9 | """
10 | opt = vars(opt)
11 | save_name = os.path.join(working_dir, 'opt.json')
12 | with open(save_name, 'wt') as f:
13 | json.dump(opt, f, indent=4, sort_keys=True)
14 |
15 |
16 | def save_checkpoint(save_name, model, optimizer):
17 | state = {'state_dict': model.state_dict(),
18 | 'optimizer': optimizer.state_dict()}
19 | torch.save(state, save_name)
20 | print('model saved to {}'.format(save_name))
21 |
22 |
23 | def load_checkpoint(save_name, model, optimizer):
24 | state = torch.load(save_name)
25 | model.load_state_dict(state['state_dict'])
26 | if optimizer is not None:
27 | optimizer.load_state_dict(state['optimizer'])
28 | print('model loaded from {}'.format(save_name))
29 |
30 |
31 | def load_opt_from_json(file_name):
32 | if os.path.isfile(file_name):
33 | with open(file_name,'rb') as f:
34 | opt_dict = json.load(f)
35 | return opt_dict
36 | else:
37 | raise FileNotFoundError("Can't find file: {}. Run training script first".format(file_name))
38 |
--------------------------------------------------------------------------------
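
`load_checkpoint` accepts `None` for the optimizer, so a trained model can be restored for evaluation without rebuilding the training state. The sketch below mirrors the model construction in `train_AVD.py`; the checkpoint path and the `bce_ch` loss choice are assumptions about a finished default run.

import torch
import utils
import loss
from models import DeepMapping_AVD

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DeepMapping_AVD(loss_fn=loss.bce_ch, n_samples=35).to(device)
utils.load_checkpoint('../results/AVD/test/model_best.pth', model, None)
model.eval()
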
/utils/vis_utils.py:
--------------------------------------------------------------------------------
1 | import os
2 | from matplotlib import pyplot as plt
3 | import torch
4 | import numpy as np
5 |
6 |
7 | def plot_global_point_cloud(point_cloud, pose, valid_points, save_dir, **kwargs):
8 | if torch.is_tensor(point_cloud):
9 | point_cloud = point_cloud.cpu().detach().numpy()
10 | if torch.is_tensor(pose):
11 | pose = pose.cpu().detach().numpy()
12 | if torch.is_tensor(valid_points):
13 | valid_points = valid_points.cpu().detach().numpy()
14 |
15 | file_name = 'global_map_pose'
16 | if kwargs:
17 | for k, v in kwargs.items():
18 | file_name = file_name + '_' + str(k) + '_' + str(v)
19 | save_name = os.path.join(save_dir, file_name)
20 |
21 | bs = point_cloud.shape[0]
22 | for i in range(bs):
23 | current_pc = point_cloud[i, :, :]
24 | idx = valid_points[i, ] > 0
25 | current_pc = current_pc[idx]
26 |
27 | plt.plot(current_pc[:, 0], current_pc[:, 1], '.')
28 | ax = plt.gca()
29 | ax.set_ylim(ax.get_ylim()[::-1])
30 | plt.plot(pose[:, 0], pose[:, 1], color='black')
31 | plt.savefig(save_name)
32 | plt.close()
33 |
34 | def save_global_point_cloud_open3d(point_cloud,pose,save_dir):
35 | file_name = 'global_map_pose'
36 | save_name = os.path.join(save_dir, file_name)
37 |
38 | n_pcd = len(point_cloud)
39 | for i in range(n_pcd):
40 | current_pc = np.asarray(point_cloud[i].points)
41 | plt.plot(current_pc[:, 0], current_pc[:, 1], '.',markersize=1)
42 |
43 | ax = plt.gca()
44 | ax.set_ylim(ax.get_ylim()[::-1])
45 | plt.plot(pose[:, 0], pose[:, 1], color='black')
46 | plt.savefig(save_name)
47 | plt.close()
48 |
--------------------------------------------------------------------------------
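
Keyword arguments passed to `plot_global_point_cloud` are appended to the output file name, which is how per-epoch snapshots can be kept apart (the commented-out `kwargs = {'e':epoch+1}` in `train_AVD.py` hints at this usage). Below is a minimal sketch with synthetic tensors; the shapes are assumptions for illustration only.

import torch
import utils

point_cloud = torch.rand(4, 256, 2)    # 4 scans of 256 2-D points
valid_points = torch.ones(4, 256)      # every point marked valid
pose = torch.rand(4, 3)                # [tx, ty, theta] per scan

# saves './global_map_pose_e_10.png' (matplotlib adds the .png extension)
utils.plot_global_point_cloud(point_cloud, pose, valid_points, save_dir='.', e=10)
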