├── LICENSE
├── README.md
├── assets
│   └── overall.png
├── ckpt
│   └── pretrained
│       ├── kitti.pth
│       ├── kitti_feats.pth
│       ├── kitti_keypoints.pth
│       ├── nusc_feats.pth
│       ├── nusc_keypoints.pth
│       └── nuscenes.pth
├── data
│   ├── kitti_data.py
│   ├── kitti_list
│   │   ├── 00.txt
│   │   ├── 01.txt
│   │   ├── 02.txt
│   │   ├── 03.txt
│   │   ├── 04.txt
│   │   ├── 05.txt
│   │   ├── 06.txt
│   │   ├── 07.txt
│   │   ├── 08.txt
│   │   ├── 09.txt
│   │   └── 10.txt
│   ├── nuscenes_data.py
│   └── nuscenes_list
│       ├── test.txt
│       ├── train.txt
│       └── val.txt
├── models
│   ├── PointUtils
│   │   ├── points_utils.py
│   │   ├── setup.py
│   │   └── src
│   │       ├── cuda_utils.h
│   │       ├── furthest_point_sampling.cpp
│   │       ├── furthest_point_sampling_gpu.cu
│   │       ├── furthest_point_sampling_gpu.h
│   │       └── point_utils_api.cpp
│   ├── layers.py
│   ├── losses.py
│   ├── models.py
│   └── utils.py
├── requirements.txt
├── scripts
│   ├── test_kitti.sh
│   ├── test_nusc.sh
│   ├── train_kitti_desc.sh
│   ├── train_kitti_det.sh
│   ├── train_kitti_reg.sh
│   ├── train_nusc_desc.sh
│   ├── train_nusc_det.sh
│   └── train_nusc_reg.sh
├── test.py
├── train_feats.py
└── train_reg.py
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2021 Intelligent Sensing, Perception and Computing Group

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
## HRegNet: A Hierarchical Network for Large-scale Outdoor LiDAR Point Cloud Registration

### Introduction
The repository contains the source code and pre-trained models of our paper (published at ICCV 2021): `HRegNet: A Hierarchical Network for Large-scale Outdoor LiDAR Point Cloud Registration`.

The overall network architecture is shown below:
![Overall network architecture](assets/overall.png)

### News
We have fixed some bugs in the code and updated the pretrained weights for both datasets. The registration performance is now better than that reported in the paper. The current test results are listed here:

#### KITTI dataset
|RTE (m)|RRE (deg)|Success rate|
|:----:|:----:|:----:|
|0.0557±0.0746|0.1780±0.1959|99.77%|

#### NuScenes dataset
|RTE (m)|RRE (deg)|Success rate|
|:----:|:----:|:----:|
|0.1218±0.1122|0.2734±0.1970|100.0%|

### Environments
The code mainly requires the following libraries, and you can check `requirements.txt` for the full environment requirements.
- PyTorch 1.7.0/1.7.1
- CUDA 11.0/11.1
- [pytorch3d 0.3.0](https://github.com/facebookresearch/pytorch3d)
- [MinkowskiEngine 0.5](https://github.com/NVIDIA/MinkowskiEngine)

Please run the following commands to install `point_utils`:
```
cd models/PointUtils
python setup.py install
```

**Training device**: NVIDIA RTX 3090

### Datasets
The point cloud pair lists and the ground-truth relative transformations are stored in `data/kitti_list` and `data/nuscenes_list`.
The data of the two datasets should be organized as follows:
#### KITTI odometry dataset
```
DATA_ROOT
├── 00
│   ├── velodyne
│   ├── calib.txt
├── 01
├── ...
```
#### NuScenes dataset
```
DATA_ROOT
├── v1.0-trainval
│   ├── maps
│   ├── samples
│   │   ├── LIDAR_TOP
│   ├── sweeps
│   ├── v1.0-trainval
├── v1.0-test
│   ├── maps
│   ├── samples
│   │   ├── LIDAR_TOP
│   ├── sweeps
│   ├── v1.0-test
```
### Train
The training of the whole network is divided into two steps: we first train the feature extraction module and then train the whole network on top of the pretrained features.
#### Train feature extraction
- Train the keypoint detector by running `sh scripts/train_kitti_det.sh` or `sh scripts/train_nusc_det.sh`; remember to specify `GPU`, `DATA_ROOT`, `CKPT_DIR`, `RUNNAME` and `WANDB_DIR` in the scripts.
- Train the descriptor by running `sh scripts/train_kitti_desc.sh` or `sh scripts/train_nusc_desc.sh`; remember to specify `GPU`, `DATA_ROOT`, `CKPT_DIR`, `RUNNAME`, `WANDB_DIR` and `PRETRAIN_DETECTOR` in the scripts.

#### Train the whole network
Train the network by running `sh scripts/train_kitti_reg.sh` or `sh scripts/train_nusc_reg.sh`; remember to specify `GPU`, `DATA_ROOT`, `CKPT_DIR`, `RUNNAME`, `WANDB_DIR` and `PRETRAIN_FEATS` in the scripts.

**Update**: Pretrained weights for the detector and descriptor are provided in `ckpt/pretrained`. If you want to train the descriptor, you can set `PRETRAIN_DETECTOR` to `DATASET_keypoints.pth`. If you want to train the whole network, you can set `PRETRAIN_FEATS` to `DATASET_feats.pth`.

### Test
We provide pretrained models in `ckpt/pretrained`. Please run `sh scripts/test_kitti.sh` or `sh scripts/test_nusc.sh`, remembering to specify `GPU`, `DATA_ROOT` and `SAVE_DIR` in the scripts. The test results will be saved in `SAVE_DIR`.
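
For reference, loading one point cloud pair through the provided dataset class in `data/kitti_data.py` can be sketched as below. This is a minimal sketch only: the `DATA_ROOT` path and the `npoints`/`voxel_size` values are illustrative placeholders, not settings prescribed by the training/test scripts.
```
from torch.utils.data import DataLoader
from data.kitti_data import KittiDataset

# Illustrative values only: DATA_ROOT must follow the KITTI layout shown
# above; npoints/voxel_size are placeholders, see the scripts for real values.
dataset = KittiDataset(root='DATA_ROOT', seqs=['04'], npoints=16384,
                       voxel_size=0.3, data_list='data/kitti_list', augment=0.0)
loader = DataLoader(dataset, batch_size=1, shuffle=False)

# Each batch holds source/target point clouds of shape (B, npoints, 3) and the
# ground-truth relative rotation R (B, 3, 3) and translation t (B, 3).
src_points, dst_points, R, t = next(iter(loader))
```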

### Citation
If you find this project useful for your work, please consider citing:
```
@InProceedings{Lu_2021_HRegNet,
    author = {Lu, Fan and Chen, Guang and Liu, Yinlong and Zhang, Lijun and Qu, Sanqing and Liu, Shu and Gu, Rongqi},
    title = {HRegNet: A Hierarchical Network for Large-scale Outdoor LiDAR Point Cloud Registration},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
    year = {2021}
}
```

### Acknowledgments
We want to thank all the ICCV reviewers and the following open-source projects for their help with the implementation:

- [DGR](https://github.com/chrischoy/DeepGlobalRegistration) (point cloud preprocessing and evaluation)
- [PointNet++](https://github.com/sshaoshuai/Pointnet2.PyTorch) (unofficial implementation, for furthest point sampling)
--------------------------------------------------------------------------------
/assets/overall.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ispc-lab/HRegNet/e381cb87575380727c54e1f712821c4078cb0c63/assets/overall.png
--------------------------------------------------------------------------------
/ckpt/pretrained/kitti.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ispc-lab/HRegNet/e381cb87575380727c54e1f712821c4078cb0c63/ckpt/pretrained/kitti.pth
--------------------------------------------------------------------------------
/ckpt/pretrained/kitti_feats.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ispc-lab/HRegNet/e381cb87575380727c54e1f712821c4078cb0c63/ckpt/pretrained/kitti_feats.pth
--------------------------------------------------------------------------------
/ckpt/pretrained/kitti_keypoints.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ispc-lab/HRegNet/e381cb87575380727c54e1f712821c4078cb0c63/ckpt/pretrained/kitti_keypoints.pth
--------------------------------------------------------------------------------
/ckpt/pretrained/nusc_feats.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ispc-lab/HRegNet/e381cb87575380727c54e1f712821c4078cb0c63/ckpt/pretrained/nusc_feats.pth
--------------------------------------------------------------------------------
/ckpt/pretrained/nusc_keypoints.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ispc-lab/HRegNet/e381cb87575380727c54e1f712821c4078cb0c63/ckpt/pretrained/nusc_keypoints.pth
--------------------------------------------------------------------------------
/ckpt/pretrained/nuscenes.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ispc-lab/HRegNet/e381cb87575380727c54e1f712821c4078cb0c63/ckpt/pretrained/nuscenes.pth
--------------------------------------------------------------------------------
/data/kitti_data.py:
--------------------------------------------------------------------------------
import numpy as np
import torch
import torchvision
from torch.utils.data import Dataset

import os
import glob
import MinkowskiEngine as ME

from models.utils import generate_rand_rotm, generate_rand_trans, apply_transform
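
# A minimal usage sketch of the loader below, assuming a KITTI odometry layout
# under DATA_ROOT (the path and the npoints/voxel_size values here are
# illustrative placeholders, not values prescribed by this repo):
#   scan = read_kitti_bin_voxel('DATA_ROOT/00/velodyne/000000.bin',
#                               npoints=16384, voxel_size=0.3)
#   # scan -> (16384, 3) float32 array of xyz coordinates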
def read_kitti_bin_voxel(filename, npoints=None, voxel_size=None):
    '''
    Load a KITTI .bin scan, optionally voxel-downsample it and
    resample it to a fixed number of points.
    Input:
        filename
        npoints: int/None
        voxel_size: float/None
    '''
    # Each record is (x, y, z, reflectance); keep only the xyz coordinates
    scan = np.fromfile(filename, dtype=np.float32, count=-1).reshape([-1,4])
    scan = scan[:,:3]

    if voxel_size is not None:
        # Voxel-grid downsampling: keep one point per occupied voxel
        _, sel = ME.utils.sparse_quantize(scan / voxel_size, return_index=True)
        scan = scan[sel]
    if npoints is None:
        return scan.astype('float32')

    # Subsample to exactly npoints (or pad by sampling with replacement)
    N = scan.shape[0]
    if N >= npoints:
        sample_idx = np.random.choice(N, npoints, replace=False)
    else:
        sample_idx = np.concatenate((np.arange(N), np.random.choice(N, npoints-N, replace=True)), axis=-1)

    scan = scan[sample_idx, :].astype('float32')
    return scan

class KittiDataset(Dataset):
    '''
    Params:
        root
        seqs
        npoints
        voxel_size
        data_list
        augment
    '''
    def __init__(self, root, seqs, npoints, voxel_size, data_list, augment=0.0):
        super(KittiDataset, self).__init__()

        self.root = root
        self.seqs = seqs
        self.npoints = npoints
        self.voxel_size = voxel_size
        self.augment = augment
        self.data_list = data_list
        self.dataset = self.make_dataset()

    def make_dataset(self):
        # Row appended to each 3x4 pose to form a 4x4 homogeneous matrix
        last_row = np.zeros((1,4), dtype=np.float32)
        last_row[:,3] = 1.0
        dataset = []

        for seq in self.seqs:
            fn_pair_poses = os.path.join(self.data_list, seq + '.txt')

            with open(fn_pair_poses, 'r') as f:
                lines = f.readlines()
                for line in lines:
                    # Each line: <src_idx> <dst_idx> followed by the 12
                    # row-major entries of the 3x4 relative pose [R | t]
                    data_dict = {}
                    line = line.strip(' \n').split(' ')
                    src_fn = os.path.join(self.root, seq, 'velodyne', line[0] + '.bin')
                    dst_fn = os.path.join(self.root, seq, 'velodyne', line[1] + '.bin')
                    values = []
                    for i in range(2, len(line)):
                        values.append(float(line[i]))
                    values = np.array(values).astype(np.float32)
                    rela_pose = values.reshape(3,4)
                    rela_pose = np.concatenate([rela_pose, last_row], axis=0)
                    data_dict['points1'] = src_fn
                    data_dict['points2'] = dst_fn
                    data_dict['Tr'] = rela_pose
                    dataset.append(data_dict)

        return dataset

    def __getitem__(self, index):
        data_dict = self.dataset[index]
        src_points = read_kitti_bin_voxel(data_dict['points1'], self.npoints, self.voxel_size)
        dst_points = read_kitti_bin_voxel(data_dict['points2'], self.npoints, self.voxel_size)
        Tr = data_dict['Tr']

        # Random rotation augmentation (only for training feature extraction)
        if np.random.rand() < self.augment:
            aug_T = np.zeros((4,4), dtype=np.float32)
            aug_T[3,3] = 1.0
            rand_rotm = generate_rand_rotm(1.0, 1.0, 45.0)
            aug_T[:3,:3] = rand_rotm
            src_points = apply_transform(src_points, aug_T)
            Tr = Tr.dot(np.linalg.inv(aug_T))

        src_points = torch.from_numpy(src_points)
        dst_points = torch.from_numpy(dst_points)
        Tr = torch.from_numpy(Tr)
        R = Tr[:3,:3]
        t = Tr[:3,3]
        return src_points, dst_points, R, t

    def __len__(self):
        return len(self.dataset)
--------------------------------------------------------------------------------
/data/kitti_list/04.txt:
--------------------------------------------------------------------------------
1 | 000000 000010 0.9999998098016578 0.0005672680365811173 -0.00045258357980403985 -13.234989061558569 -0.0005685095649304117 0.999996014152324 -0.0027476204742096985 -0.05543050386567423 0.00045102160111207425 0.002747876277816457 0.9999961834188693
-0.12739370294882835 2 | 000001 000011 0.9999992160314742 0.0002822939994422932 0.0011891338235716295 -13.26388685949429 -0.0002756757421145959 0.999984544311341 -0.005562216246857572 -0.06291131044332729 -0.001190686879204257 0.005561885210287666 0.9999838523060627 -0.13072991113783888 3 | 000002 000012 0.9999997063274981 -0.00039086700758357665 0.0005803576643498224 -13.290341699103278 0.0003951855562628545 0.9999720396439271 -0.007459950498980011 -0.07122126592580656 -0.0005774255603236601 0.007460177418105273 0.9999719973745748 -0.14113610163642487 4 | 000003 000013 0.999996088508452 -0.0007811821520237197 -0.0026929139039292534 -13.324942010075905 0.0007639981455408126 0.999979364024965 -0.006376219574618416 -0.07405714859568478 0.0026978387907153427 0.006374140802897797 0.9999760240479106 -0.1806755225917377 5 | 000004 000014 0.9999886236636732 -0.0008906379965582689 -0.004721386344369506 -13.361040771530547 0.0008802167293181283 0.999997170626377 -0.002208859045538694 -0.07110489425074824 0.004723339149540058 0.0022046782316769093 0.9999863756274234 -0.20794652238623046 6 | 000005 000015 0.9999971293727715 -0.0003670387439908388 -0.0023727711692658844 -13.39586746995598 0.00036674653810810125 0.9999999004481358 -0.00012357028172352763 -0.06727491245123439 0.0023728147353144807 0.0001227019703440452 0.9999971782303051 -0.1841325778023622 7 | 000006 000016 0.9999983204237185 -0.000985458816746895 0.0015742485250603603 -13.403653021120174 0.000987456145046661 0.9999986840909791 -0.001268600128463128 -0.05316768606359781 -0.0015729978409562662 0.0012701510149512613 0.9999980602308475 -0.14808057099197414 8 | 000007 000017 0.9999961541535127 -0.00042924929731700173 0.0027377215238930832 -13.448769730567511 0.000439346868420493 0.9999931822174306 -0.0036886934800375317 -0.08026741449275997 -0.0027361209751297195 0.003689882702143444 0.999989474603425 -0.1381821160372107 9 | 000008 000018 0.9999976462967878 -0.0022137866910454823 0.00017807973751014908 -13.470591566341028 0.0022146342250152842 0.9999854356593173 -0.004912257897610628 -0.08278296242405644 -0.00016720371721034314 0.004912642986248596 0.9999879078608926 -0.1439826762168765 10 | 000009 000019 0.9999755779849616 -0.0031038175961329976 -0.006270355354912317 -13.51178992225655 0.003096833244792734 0.9999945442774202 -0.0011232848460508709 -0.08635933782400737 0.00627380915042552 0.0011038383896556868 0.999979821757983 -0.23102472395139634 11 | 000010 000020 0.9999631901954129 -0.0030567068127754712 -0.00800976108969028 -13.545476157992805 0.003087191252295457 0.9999880787939616 0.0037962633995939103 -0.0709648744798379 0.007998064680765596 -0.003820850990742584 0.9999606571297831 -0.22490646237189607 12 | 000011 000021 0.9999614226728389 -0.003573789653148087 -0.008026417510548979 -13.565391623295765 0.0036517221943739727 0.9999460541198191 0.009715960695472868 -0.04761380811701477 0.007991264587694216 -0.00974490149791771 0.9999206476785694 -0.20370636517485238 13 | 000012 000022 0.9999904039713963 -0.0021009143248102214 -0.0038532632640413014 -13.618618625637135 0.002142085986682441 0.9999404931234532 0.01071234881023597 -0.0644400391206767 0.003830526230642664 -0.010720500069013912 0.9999351693100705 -0.15726737864843757 14 | 000013 000023 0.9999929973695252 -0.0036859386895843702 0.00047711424264799875 -13.623384462146234 0.0036818021650494486 0.9999578341055869 0.008398149368141143 -0.04430007694629474 -0.0005080480997992727 -0.00839633994924307 0.9999646229410121 -0.13871112579073902 15 | 000014 000024 0.9999875604205583 
-0.0027255062647114966 0.004172711164386105 -13.676184753312342 0.002717965352773809 0.99999463952534 0.0018116590000312911 -0.08864509578994706 -0.004177626797112781 -0.0018002941417509083 0.9999896879765638 -0.12024645974016157 16 | 000015 000025 0.9999857224616201 -0.003952666243264262 0.0036078482734400326 -13.704251306885245 0.003963446223500321 0.9999877395052367 -0.002985956796299577 -0.08206321861899349 -0.0035960014054529665 0.0030002121612145334 0.9999890322763094 -0.1009014732994146 17 | 000016 000026 0.9999906392416413 -0.004115315380596478 0.0012974853672973292 -13.741253492992323 0.004120623136487844 0.9999829643177367 -0.004115420021473955 -0.08388275211896194 -0.001280527540526733 0.0041207309506922475 0.9999906056824079 -0.10355629197451212 18 | 000017 000027 0.9999938641481313 -0.0032237214959281377 -0.0014317336841623354 -13.77185122637308 0.003221086492400079 0.9999932141107658 -0.001838621584377009 -0.0768054511230331 0.0014376524497294863 0.0018339983126966887 0.9999974163054925 -0.14026087001234558 19 | 000018 000028 0.999993944344565 -0.0028145023539421494 -0.0020072015863848536 -13.794735038601573 0.0028140872043489686 0.9999960882751653 -0.00020992969586076073 -0.0603930657142234 0.0020077845862156393 0.00020428157477902844 0.9999978559523441 -0.17034451986032484 20 | 000019 000029 0.9999969666399308 -0.0010110602255497963 0.0021983069485564184 -13.82371229798305 0.0010164881984846366 0.9999962774772302 -0.0024694447794831775 -0.06978956430743664 -0.002195802935001552 0.0024716711537385583 0.9999945535789636 -0.14199599415836905 21 | 000020 000030 0.9999932947773871 -0.001283302628632575 0.003473790486169949 -13.853488804389983 0.0013082038978540808 0.9999734055918603 -0.007175447636509265 -0.0662375823103371 -0.003464492613710428 0.00717994455696986 0.9999682294629237 -0.16581203302518996 22 | 000021 000031 0.9999951443936583 -0.001738875043959322 0.0025235561742211173 -13.875456413942707 0.0017714651971525708 0.9999144390052584 -0.012970034303047901 -0.07431838444827049 -0.002500786979389954 0.012974450015604065 0.999912675788662 -0.18205961910905058 23 | 000022 000032 0.9999989049367011 -0.001351258338990622 -0.0006409064093785326 -13.918397771858936 0.0013415985679393126 0.9998888665567672 -0.0148419579356558 -0.07108827782168405 0.0006608901889170878 0.014841085982849497 0.9998896971959658 -0.19447361391405746 24 | 000023 000033 0.999996312437071 -0.0006333600678834892 -0.0026554112977000147 -13.937772326582184 0.0005952708328022357 0.9998974384588625 -0.014320618781397077 -0.06151261333104485 0.002664208418801619 0.014318992217687872 0.9998940029154241 -0.18878956490927598 25 | 000024 000034 0.9999950324608371 0.000987243134231233 -0.002996702292183767 -13.97779828102942 -0.0010159469586530218 0.9999534718166995 -0.009592475473147127 -0.04617352140563963 0.002987092430361671 0.009595472483514536 0.9999494672764915 -0.18015983333232655 26 | 000025 000035 0.9999836706074702 0.0016950073732992636 -0.005449464615430394 -14.00415872494871 -0.0017410472957536758 0.999962708134263 -0.008455045790878251 -0.028292530957358647 0.005434930901619515 0.008464400251735905 0.9999494279278903 -0.1884617706890963 27 | 000026 000036 0.9999750671557589 0.003835444216587123 -0.005918685449465409 -14.048126011701145 -0.0038782918539199437 0.999966363285011 -0.00724510688373109 -0.03384224634011744 0.005890698287832991 0.007267883894789415 0.9999562009221051 -0.20563719981750317 28 | 000027 000037 0.999980714786097 0.004265969624748205 -0.004507802626425339 -14.074196605036954 
-0.004304540019010932 0.9999537672305824 -0.008581694161844901 -0.01771203142757992 0.004470986433560346 0.008600933740316315 0.9999529559382975 -0.1928865721276689 29 | 000028 000038 0.9999836757644543 0.004826032422090594 -0.0030460697293601038 -14.09780027653615 -0.004850155094124666 0.9999564920294359 -0.007962118496419932 -0.009020922772037518 0.0030075130209473184 0.007976763397687723 0.9999637663118685 -0.1882276346805207 30 | 000029 000039 0.9999731788981968 0.006123600787428101 -0.004019006502376263 -14.12421584092758 -0.0061444680844750564 0.999967654769977 -0.005200631928587223 -0.005091372960181556 0.003987030311955083 0.005225189916627728 0.9999783013715116 -0.20636967829901298 31 | 000030 000040 0.9999669321448875 0.006266460765714355 -0.005178271038966593 -14.144146414822071 -0.0062719835884928596 0.9999797678007092 -0.001051068806817219 0.010779795162768085 0.005171580542738993 0.0010835099045664886 0.9999859971555792 -0.2143174960050421 32 | 000031 000041 0.9999553168941396 0.007982015904445593 -0.005057308514709358 -14.183817845466095 -0.007967043791502468 0.9999637453119264 0.002973916229465553 -0.002601903686843432 0.00508086436197336 -0.0029334941975968603 0.9999828142702163 -0.20977366802787592 33 | 000032 000042 0.9999604607718748 0.007728916091732793 -0.004380072614610323 -14.202702560366186 -0.007704012685939329 0.9999543857207183 0.005673950359490755 0.005378900261354111 0.004423729299597038 -0.005639984441875778 0.9999744390925234 -0.2047900363922185 34 | 000033 000043 0.9999657217794766 0.006932462051300681 -0.004525428070400365 -14.209031866500096 -0.006906981648172144 0.999960333264562 0.005622309249528353 0.00023718973742967164 0.004564225999256484 -0.005590861925483871 0.9999738896647637 -0.2144457166047599 35 | 000034 000044 0.9999638074393274 0.006139647478810535 -0.005886670133945713 -14.231259937554718 -0.006105631987839419 0.9999646833022868 0.005779023894411488 -0.011392224538691959 0.005921944117264279 -0.005742873810908385 0.9999659273435986 -0.2211067968725841 36 | 000035 000045 0.9999715781673684 0.0045743186807591435 -0.005989299766441387 -14.240619511052435 -0.004530553817222407 0.9999630712029438 0.007300319796067074 -0.02384579315740608 0.0060224728000667525 -0.0072729795609375834 0.9999554597891419 -0.22287617691692896 37 | 000036 000046 0.9999823042628314 0.0033788976462712146 -0.0049130391548609965 -14.244301241152295 -0.0033320079854228897 0.9999490815092139 0.009521179811693061 -0.03212451203959574 0.004944959752575729 -0.009504644326246764 0.9999426243082564 -0.21517488759380748 38 | 000037 000047 0.9999898567983467 0.0019546148624167087 -0.0040497909872395195 -14.240649961776962 -0.001911624285491472 0.9999421082547547 0.01059209190284639 -0.04153500573005506 0.0040702607959657204 -0.010584245587148997 0.999935695640101 -0.20189244089475472 39 | 000038 000048 0.9999932799890151 2.5901034708751876e-05 -0.0036936998022906913 -14.223482622407511 1.4850510146098449e-05 0.9999393140012043 0.01103224806910775 -0.04768245403117334 0.0036937607555438175 -0.011032228879003375 0.9999322844826949 -0.2024755250058256 40 | 000039 000049 0.9999948856663877 -0.0016167162286442388 -0.0027416820854040645 -14.216700907343473 0.0016417171430352497 0.9999568391882246 0.009141401065147924 -0.06140185980298994 0.002726786805660369 -0.009145861427315438 0.999954482951498 -0.19958353712504173 41 | 000040 000050 0.9999900158064715 -0.003225106469400944 -0.00307335105616654 -14.188691224374468 0.0032412119218022378 0.9999810238416529 0.0052499815494306355 
-0.06799828832103683 0.003056361527877479 -0.005259892617011013 0.9999815697871459 -0.21786082285259323 42 | 000041 000051 0.9999831836327346 -0.004156344773822262 -0.004054591957929652 -14.187555340723577 0.004158030572650573 0.999991464846154 0.00040690431262566454 -0.0875956187330929 0.004052865915505535 -0.00042375163682680043 0.9999916063147177 -0.23998749219719945 43 | 000042 000052 0.9999691600380172 -0.0058301061648554555 -0.005290024676347007 -14.148108903506698 0.005801304808690439 0.9999683317795434 -0.005443378436465413 -0.09568350979095339 0.005321592454706008 0.005412520484048699 0.9999711524118555 -0.24375796296738939 44 | 000043 000053 0.9999759118468231 -0.005507907496042492 -0.0042407462166517305 -14.1547522068031 0.005460195349258607 0.999922511350852 -0.01118135877768656 -0.1158745893272542 0.004302003351274435 0.011157935223458197 0.9999285283820838 -0.2391018241327079 45 | 000044 000054 0.9999757220792306 -0.006268532288838673 -0.0030635864040572326 -14.121574629005503 0.006219527213465262 0.9998566074172931 -0.01575188355705368 -0.12201440248976596 0.003161889103649859 0.01573245175385994 0.9998712502773055 -0.2335809201821583 46 | 000045 000055 0.9999764525815262 -0.006576042887944512 -0.0019727115270247943 -14.092794971676474 0.0065375140349804885 0.9997993201138743 -0.01893911581724369 -0.1267369155608351 0.0020968610942581284 0.01892577317510927 0.9998186878674269 -0.23138049200395183 47 | 000046 000056 0.9999774863110407 -0.006217961662694635 -0.0025366231146515647 -14.083655490198336 0.006164772902974963 0.9997715945549283 -0.020463155393802043 -0.1326916137589293 0.0026632845033901053 0.020447061240168863 0.9997873822875236 -0.23432590230987171 48 | 000047 000057 0.9999754827764139 -0.006254353301052983 -0.0031545477548566097 -14.03920233469413 0.006191169864530968 0.9997877698085212 -0.019656606263423162 -0.12086461727148945 0.003276817908264233 0.019636603920326803 0.9998018002690041 -0.2348319768639272 49 | 000048 000058 0.9999845771426469 -0.0048000980326052285 -0.0027577671047522524 -14.026442306673218 0.004745202021054045 0.9997969918977467 -0.019579537148370133 -0.12401872081911333 0.002851192180803566 0.01956614742637122 0.9998046391589615 -0.22127435039770232 50 | 000049 000059 0.9999876454128357 -0.00404455676971821 -0.0028710020858589438 -13.996026493135783 0.003994351301987504 0.9998426990183384 -0.017282738651009596 -0.1066117987791839 0.002940449250315352 0.01727106538738577 0.9998463530529555 -0.21528354303025538 51 | 000050 000060 0.9999930171241881 -0.003222599039019402 -0.0018858035076455418 -13.955484773831278 0.0031966450009026513 0.9999022247026946 -0.013608108896382165 -0.09728855883998982 0.0019294719814825117 0.01360199031389765 0.9999055759646829 -0.2069937980765303 52 | 000051 000061 0.9999977947366052 -0.0019730681324565246 -0.0006521778524886546 -13.937947542897511 0.0019656659072517747 0.9999357275449935 -0.011161694032737417 -0.08336493906190495 0.0006741571532011876 0.011160385814132728 0.9999375156676534 -0.20016861876477 53 | 000052 000062 0.999999842394087 -0.0004953624336649228 -8.206738178117738e-05 -13.900410259846502 0.0004947825114961344 0.9999759162355787 -0.006924078007821655 -0.06558900456713018 8.549430721705551e-05 0.006924049854644089 0.9999761031278295 -0.2037705461988558 54 | 000053 000063 0.9999980299914156 0.001212506302489121 -0.0014917839926085407 -13.880166075660208 -0.001214758944710153 0.9999981646920977 -0.0015103082099374085 -0.04568949946813017 0.0014899504529530922 0.0015121162740778995 
0.9999976696581707 -0.21112225338772142 55 | 000054 000064 0.9999911242090682 0.0030218429167550736 -0.002961570535269907 -13.853420558849196 -0.0030119900438610007 0.9999899879208531 0.0033258314770694955 -0.021888104035419916 0.0029715910073176478 -0.0033168984199430616 0.9999902467101485 -0.2272171135716708 56 | 000055 000065 0.999983336300867 0.00463118643634778 -0.0034441193570240733 -13.825687201606538 -0.004604599854232762 0.9999598381949267 0.00768771821253834 -0.0030772937500297276 0.003479585525552223 -0.007671731036971632 0.9999645622379986 -0.2395659075120909 57 | 000056 000066 0.9999791371567549 0.005778328050643496 -0.0028751022843161653 -13.807568369922532 -0.005750001259346181 0.999935799851953 0.009765194192309794 0.0026878888318164866 0.002931343173673449 -0.00974845949092765 0.9999481060403955 -0.24393321253208086 58 | 000057 000067 0.9999759255751502 0.006357383613104715 -0.002745952818156847 -13.756690907383966 -0.006331671125604395 0.9999368822298285 0.009273485841830102 0.000656897406232617 0.0028047332556631007 -0.009255883836970551 0.9999532200744021 -0.22991275408004525 59 | 000058 000068 0.9999731906080415 0.006592308362593268 -0.00318884048476743 -13.741527226315645 -0.0065605774472892436 0.9999297500364543 0.009860790823905636 -0.005229088866543337 0.003253621513237359 -0.009839603402043626 0.9999461634042728 -0.22193071872214376 60 | 000059 000069 0.9999782851892789 0.006088662054054937 -0.002521833975121003 -13.696419944522933 -0.006065578921937607 0.9999405846018209 0.009062046183880621 -0.008093143739438747 0.0025768602261127386 -0.00904655692790478 0.9999558294249477 -0.2133024813998764 61 | 000060 000070 0.9999800883046548 0.006010244644549008 -0.0018389056535167143 -13.69411180245965 -0.005994261389976415 0.9999451814411214 0.008577966444262838 -0.020284766620642016 0.0018903619734641068 -0.008566779413923127 0.9999615708184861 -0.20484665174812142 62 | 000061 000071 0.9999845638674243 0.0053220601038640875 -0.001621617241957745 -13.657176965190809 -0.005309745273027328 0.9999575983361404 0.007506621571352975 -0.03419742858613952 0.0016615000385004253 -0.007497900569987879 0.9999705787624495 -0.1969814237868131 63 | 000062 000072 0.9999903002140782 0.004091279930697367 -0.0015937392584720613 -13.61915325225445 -0.00408087311044434 0.9999706853438535 0.0064793868671885755 -0.044694264210095286 0.001620203216061915 -0.006472832843151508 0.9999777421432677 -0.19581066734297045 64 | 000063 000073 0.9999921910989625 0.0030700497130024588 -0.002535406231408568 -13.605833612359463 -0.0030580355165475692 0.9999841020794954 0.004729169071822053 -0.05543617228091289 0.0025498824368145666 -0.004721374192320513 0.9999854606500573 -0.21617960936100827 65 | 000064 000074 0.9999956104017578 0.0016900137649112298 -0.0024320034171308397 -13.548206613018884 -0.0016818767113139705 0.9999930224056072 0.003343880453724107 -0.06113360413137947 0.002437637885338886 -0.0033397648101372046 0.9999912896835301 -0.23016058828290645 66 | 000065 000075 0.9999993886836569 0.0006933408577721738 -0.000973632749097369 -13.547975486359507 -0.000692371136456708 0.9999993104913968 0.0009952993601482316 -0.0740976417254646 0.0009743211622742808 -0.0009946239127232232 0.9999990853337962 -0.21184842206959068 67 | 000066 000076 0.9999990132094693 -0.00040583811643529625 0.0013591245356144093 -13.495151772316444 0.0004086285967101941 0.9999979165523353 -0.0020530777143599284 -0.07745572939778164 -0.0013582892891705054 0.002053634346704216 0.9999969754812575 -0.174853285042396 68 | 000067 000077 
0.9999976437254711 -0.0010157788112127217 0.0019621351218047614 -13.463844671236862 0.0010217639728361486 0.9999948200601557 -0.00305156853079308 -0.07610452545891264 -0.001959026029942731 0.0030535699447280557 0.9999933739751157 -0.16620594354816734 69 | 000068 000078 0.9999992578940322 -0.0012208891074228512 0.00037487908359750663 -13.452227968734576 0.0012214717511366366 0.9999979875456094 -0.0015608819350601082 -0.07865292420006763 -0.0003729741094294489 0.001561338701207184 0.9999987145070887 -0.18597877118288786 70 | 000069 000079 0.9999965922786924 -0.001921954982324442 -0.0018729139215161845 -13.4119080072185 0.0019250106335778404 0.9999968168380831 0.0016315839143648805 -0.08074780351435927 0.0018697719327557536 -0.0016351828047270046 0.9999968418435515 -0.21070108859010422 71 | 000070 000080 0.9999913442273607 -0.0027660709743813536 -0.003137403756932231 -13.40418706237253 0.0027791668868828237 0.9999874761498109 0.004177326626016397 -0.08596957904381718 0.0031258078401819444 -0.004186008934537609 0.9999862112002627 -0.22364552911271487 72 | 000071 000081 0.9999909010978933 -0.0033681069263632813 -0.0025876606248651716 -13.370462359420014 0.003382191699509635 0.9999794698763561 0.005457642903253616 -0.08807697315540333 0.0025692256386685847 -0.005466345400948697 0.999981675298297 -0.20218111122609844 73 | 000072 000082 0.9999927462824707 -0.0038360954550237126 8.56642972401179e-05 -13.33175773361692 0.0038356313891609677 0.999980733657955 0.004875432538505272 -0.08977941170627207 -0.00010436735007277559 -0.004875069648921989 0.9999881285519295 -0.17239940513573324 74 | 000073 000083 0.9999840276667262 -0.004598716559299147 0.003277242989488092 -13.322525750103141 0.004592062640619543 0.9999873733085988 0.0020347893348729037 -0.10080101426637257 -0.0032865577190992834 -0.0020197110746018824 0.9999927223980164 -0.14403168826978405 75 | 000074 000084 0.9999710512043788 -0.006156116090769086 0.0044625214268848865 -13.298836345667334 0.006155980107655159 0.9999810635616112 4.423997392064298e-05 -0.11528998551800185 -0.0044627107009694664 -1.676611532620929e-05 0.9999900167734762 -0.13597573528629897 76 | 000075 000085 0.9999717889311291 -0.00730482733811284 0.001730931878862741 -13.29144183942619 0.0073067612781992355 0.9999726769676917 -0.0011135240540534593 -0.12938987042312447 -0.001722750126374838 0.0011261405707779237 0.9999978792414254 -0.17059777360054884 77 | 000076 000086 0.9999630815992984 -0.008139709015746108 -0.0027386813496709343 -13.258763998379743 0.008139098982289601 0.9999667659732165 -0.00023362385683592764 -0.1299216698174046 0.002740494524430254 0.0002113203471799893 0.9999964057955163 -0.20678064451548614 78 | 000077 000087 0.999950231310353 -0.008780237057074227 -0.00474971003269122 -13.312417765051423 0.008778867597357732 0.9999614332575298 -0.00030906942975032347 -0.1342206827610039 0.004752240876211001 0.00026735502416917945 0.999988666071322 -0.2189781277569685 79 | 000078 000088 0.9999477387439533 -0.00981821626356646 -0.0028370779759312936 -13.291345491332688 0.00980945139043176 0.9999470933999011 -0.003087186803355027 -0.13627520965526888 0.002867241177610122 0.0030591969583013325 0.999991255874901 -0.20180116283246627 80 | 000079 000089 0.999942121103727 -0.010737428753836145 -0.0005870148760522684 -13.287350935067103 0.010733193880281326 0.9999193926861297 -0.006796640339672427 -0.14103391167712725 0.0006599454663359722 0.0067899480333348475 0.9999767424655011 -0.19208934275341982 81 | 000080 000090 0.9999407143328942 -0.010868505578066493 
0.0006780841090283518 -13.298948015546918 0.010875359604385114 0.9998793434537829 -0.0110923758551535 -0.15108590257150514 -0.0005574427235209178 0.011099093282633546 0.9999383740339631 -0.17986033321366796 82 | 000081 000091 0.9999325534236873 -0.01160872081247275 0.000473707839978005 -13.275888226353269 0.011613882918605892 0.9998486360481094 -0.012948434463746838 -0.15052678792980442 -0.0003233222088729933 0.012953069340822068 0.9999159835197426 -0.171860726851264 83 | 000082 000092 0.9999318028304028 -0.011600479584105143 -0.0012190077194632288 -13.341439059570243 0.011585327275582995 0.9998635200240912 -0.01177719989357295 -0.1574252617686223 0.0013554610773477502 0.011762281385114924 0.999929745927889 -0.16723931860551372 84 | 000083 000093 0.9999296379761144 -0.011542517577413787 -0.0027343555370714623 -13.325244531045893 0.011518585853376636 0.9998966919029666 -0.008611092391117077 -0.1540440667245222 0.0028334669701380794 0.00857899014468182 0.9999592212694751 -0.16471824127375406 85 | 000084 000094 0.9999295501317649 -0.011061609350479616 -0.0043168956259670025 -13.33334266322455 0.011038076072344088 0.9999242888252781 -0.005438906758965806 -0.1430941526329055 0.0043767323895654705 0.005390874864303226 0.9999758907860049 -0.17573335240527344 86 | 000085 000095 0.9999421869318486 -0.010239251834578007 -0.0032857979780690724 -13.352181843910417 0.010233505783357438 0.9999462346899177 -0.001758423620955326 -0.13866729733746744 0.0033036228243369437 0.0017246956458877793 0.9999929240604745 -0.18712976487486854 87 | 000086 000096 0.999950901910648 -0.00982097926971745 -0.0013512002717401776 -13.351708381712028 0.009818829574535119 0.9999505331899148 -0.0015929343030169477 -0.1261388646003029 0.0013667773530986972 0.0015795934322689064 0.9999977686450368 -0.1868608781730769 88 | 000087 000097 0.9999589783023024 -0.009006404109016554 -0.000953430327652364 -13.379133018856965 0.009005588099915847 0.9999591667584667 -0.0008525735132783731 -0.1132155945200878 0.0009610684716178869 0.0008439532486404188 0.9999992292780118 -0.18908410601197648 89 | 000088 000098 0.9999681915590262 -0.0077079151555518605 -0.00203589558415799 -13.382715991246465 0.007710841937837698 0.99996923400885 0.0014333520430741965 -0.09421750601982608 0.0020247810464318625 -0.001449005921371042 0.9999968573693337 -0.19802562650989805 90 | 000089 000099 0.9999802888549341 -0.0057777943282899934 -0.002481965423781246 -13.365412631475156 0.005791073428446369 0.9999686300023688 0.005374463869686448 -0.07410054373013635 0.002450836197857341 -0.005388735261591302 0.9999824907313936 -0.195099781358422 91 | 000090 000100 0.9999898545358812 -0.003507728301314917 -0.0027860766732560127 -13.37547126740067 0.0035289949824208644 0.999964277778577 0.007665429007447958 -0.04816232941132356 0.002759089194469167 -0.007675187915033645 0.9999667585611952 -0.19967694996596752 92 | 000091 000101 0.9999888834347803 -0.0012080701331087188 -0.004553115213086892 -13.386288354690436 0.0012582477848001243 0.9999383561215192 0.011033152494256963 -0.023490812229122388 0.004539506865766838 -0.011038765797679116 0.9999287581814548 -0.2086337977608304 93 | 000092 000102 0.9999845668751214 0.000928853990367334 -0.005490641715062979 -13.40085983859157 -0.0008561102462872037 0.9999120486641936 0.013236113406052468 -0.0017981269194465022 0.00550245494215979 -0.013231208659651822 0.9998973186553629 -0.21038797931269362 94 | 000093 000103 0.9999809761771615 0.002930759963080775 -0.005417286865695294 -13.38279516328547 -0.0028608009771285007 
0.9999129691185245 0.0128763227256589 0.018497449455133638 0.005454555483459159 -0.012860582917627416 0.9999025098735485 -0.20769021035548413 95 | 000094 000104 0.9999768379100094 0.005388818414319969 -0.0041759353502896805 -13.429372024799278 -0.005337199110741543 0.9999103506791075 0.012276237480239255 0.023244128080601598 0.004241715694822818 -0.012253669355881196 0.9999158497961719 -0.20212581072508068 96 | 000095 000105 0.9999723399374796 0.007063188379891045 -0.0023498397484109108 -13.40396157590231 -0.00703367738583694 0.9998991897750482 0.01233612895781223 0.03195218870043694 0.0024367372755600375 -0.012319259424100525 0.9999212721407916 -0.1835408183529106 97 | 000096 000106 0.9999648395073409 0.00838197077146823 -0.0001026530330618827 -13.401424159940527 -0.008380176370973574 0.999894923673187 0.011820623393832011 0.03332680072147903 0.00020172124053416691 -0.01181935211665236 0.9999301024115576 -0.16202362193585726 98 | 000097 000107 0.999957372644348 0.009208835857702791 0.000556782213794972 -13.404381144782086 -0.009215840276852249 0.999855762184505 0.014262855237466628 0.04200419259157971 -0.00042535709693450823 -0.014267383244045147 0.9998981447384874 -0.14653154870259416 99 | 000098 000108 0.9999509401244053 0.009853574375882943 -0.000939199326618804 -13.403000375116848 -0.00983108603741974 0.9997200987557026 0.021520372510600634 0.058909289075023075 0.0011509888613154724 -0.021510085128119576 0.9997679313643386 -0.18734372970756624 100 | 000099 000109 0.9999346828426678 0.01117449519511712 -0.0023721724848478794 -13.433150546217195 -0.01110376609363268 0.9995466622947111 0.027987014680904063 0.07557343381781217 0.002683835997713471 -0.027958855539911745 0.9996054818894815 -0.19755966370259687 101 | 000100 000110 0.9999394030795588 0.011011270977531497 -4.542100599983365e-06 -13.421543047676996 -0.01100628667362878 0.9994990800312405 0.029678962364460888 0.08150356725672823 0.00033134381951718133 -0.02967712173548327 0.9995595755977588 -0.16715095376900474 102 | 000101 000111 0.9999474066821179 0.009458562710280881 0.003960702702632584 -13.419677448141025 -0.009556729225390203 0.9996278752224413 0.025545964685502708 0.06387059928149356 -0.003717602021763236 -0.025582482182114438 0.9996659417252626 -0.1217638484900222 103 | 000102 000112 0.9999480533727461 0.007528118908255133 0.006869858648184681 -13.42425486408813 -0.0076797097715229174 0.9997215240418519 0.02231296880880248 0.039530969720704925 -0.006699972368127075 -0.02236457581963836 0.9997274903468987 -0.09543811806771271 104 | 000103 000113 0.9999637470193202 0.005387762665871204 0.00660331560977273 -13.426899728853135 -0.005516761672580627 0.9997911788976104 0.0196754118711992 0.017234604069677584 -0.006495935403647976 -0.01971113051279005 0.9997845071987065 -0.1114765584516223 105 | 000104 000114 0.9999880546557731 0.0035912847037558927 0.0033098680323276715 -13.432951963349195 -0.0036700038175600507 0.9997030892858627 0.024092208601175627 0.006824620489154318 -0.0032223636690285446 -0.024104077925582404 0.9997042637069911 -0.15414798488543435 106 | 000105 000115 0.9999975125209042 0.002172275096030668 -0.00036758175918050635 -13.442601149758 -0.0021599953936108137 0.9995283687479446 0.03063328124639592 0.007999010994774958 0.00043395122659772183 -0.03063241539888225 0.9995306206140318 -0.2097888732297913 107 | 000106 000116 0.9999910229136176 0.0012988658446314314 -0.004022496891269375 -13.445021327252778 -0.0011694637211740917 0.9994870444408065 0.032006115212363906 0.009998312588873834 0.004062005165151337 
-0.03200113301005705 0.9994795650934342 -0.2113432994064932 108 | 000107 000117 0.9999876998690544 0.0003734867555997568 -0.004939811041585939 -13.456383777922817 -0.00023661007144324853 0.999616751789253 0.027680435660152092 0.007232896288779975 0.004948258108220448 -0.0276789347542917 0.9996045676914828 -0.22621493904813883 109 | 000108 000118 0.9999947478094559 -0.0017373622660381413 -0.0027393184366115795 -13.457831702272273 0.001787151147177648 0.9998313371874624 0.018279480860833218 -0.01458795511579913 0.002707102536001857 -0.018284291608127497 0.9998293998096234 -0.22418218896499262 110 | 000109 000119 0.9999910372725863 -0.004153603404222918 -0.0008356098440072596 -13.46705349538386 0.004160281751209694 0.9999580611565687 0.008155454281613003 -0.047689571060642096 0.0008017025023446617 -0.008158862591007272 0.9999662691642092 -0.21609324045737932 111 | 000110 000120 0.9999776908673648 -0.006512149252865403 -0.0015226620144208776 -13.465314478650582 0.006511805049976321 0.9999788076525395 -0.00023075971653431813 -0.06513075014387572 0.0015241312756751125 0.00022084088197482386 0.9999987511806434 -0.18475513272675248 112 | 000111 000121 0.9999718587834561 -0.006457648400317053 -0.0038262060428327907 -13.492436097699029 0.00644413926914458 0.999972996181877 -0.003532493816322609 -0.08244433944290641 0.0038489128780958624 0.0035077412012253927 0.9999863240943777 -0.18458624990040456 113 | 000112 000122 0.9999525520641 -0.00619815505112181 -0.007516787796119826 -13.505185524539833 0.006197715178711771 0.9999808130311635 -8.196849879325933e-05 -0.07304133298926403 0.007517152603006651 3.5380599317483265e-05 0.9999717393754306 -0.1851032878960186 114 | 000113 000123 0.9999609267553429 -0.005189703419272427 -0.007139667939882523 -13.51511279553097 0.005182889647156292 0.9999861190515936 -0.0009723959001419839 -0.06218809196476092 0.007144618348010303 0.0009353408978544257 0.9999740223522091 -0.17706458441472883 115 | 000114 000124 0.9999905289506771 -0.003955935381057736 -0.001858698208150962 -13.53244251056337 0.003942851187603715 0.9999676846740047 -0.006991719441071937 -0.06040203326172777 0.001886296473796912 0.006984330874184889 0.999973911551678 -0.1409483067516433 116 | 000115 000125 0.9999861975663831 -0.0030247015334038388 0.004289533936022596 -13.537522606070556 0.003095079873132977 0.9998590362999988 -0.016496572748111577 -0.06051391129439526 -0.00423903332908809 0.016509620385448318 0.9998546412592367 -0.10318251615744027 117 | 000116 000126 0.9999749813472995 -0.002388124123505726 0.006660789348675584 -13.560750999253287 0.0025675320895740673 0.9996304790274178 -0.027057930159843195 -0.07725739869482003 -0.006593711294542452 0.027074359267003564 0.9996117403762591 -0.10517690244568889 118 | 000117 000127 0.99997896547961 -0.0028220135734252994 0.005837473009685283 -13.582372195632141 0.0030064697518167585 0.9994885349286899 -0.03183521547336115 -0.08814070643571342 -0.005744650102448912 0.03185210784279753 0.999476134103262 -0.11166029968486112 119 | 000118 000128 0.9999867713705828 -0.002763499678746899 0.004357143381725729 -13.606513016428039 0.0029029574864924117 0.9994729987211249 -0.03233195244246232 -0.09807656926287281 -0.004265501922105195 0.032344179679517644 0.9994675483084217 -0.156383267539005 120 | 000119 000129 0.9999972729777615 -0.0015617738594687549 0.0017165393439067 -13.645144268721559 0.0016107698333856168 0.9995804775114513 -0.028921881590337248 -0.09482070168629794 -0.0016706505563380087 0.028924571378353605 0.9995801583599395 -0.18522000727522603 121 | 
000120 000130 0.9999983029134772 3.345605031844081e-05 -0.0018220911109851673 -13.671945121962652 -7.590843667627352e-05 0.9997283842095086 -0.02330389137683357 -0.07651436213704535 0.0018208172051194047 0.023303994918112254 0.9997267587378443 -0.2087710889058107 122 | 000121 000131 0.9999942087770307 0.0024619623411770925 -0.0023424691682158174 -13.692076866350924 -0.0024988012324368885 0.9998713011562191 -0.015855167757951028 -0.050997105873391346 0.0023031330754174103 0.015860929715060628 0.9998715324665711 -0.2017371958578955 123 | 000122 000132 0.9999857090610991 0.004756090967643803 0.002455356054547309 -13.71375064953415 -0.004718979147651006 0.9998776981428239 -0.014905955422025866 -0.026714143991121486 -0.0025259521234933203 0.01489416007927571 0.9998859174162935 -0.15115682404049063 124 | 000123 000133 0.999960921339849 0.006500517885200721 0.006016276651271512 -13.740277951554594 -0.006397629055347073 0.9998355706930228 -0.01696558636938768 -0.026537093269564077 -0.006125577223989322 0.016926441838144776 0.9998378880048885 -0.10018564203773772 125 | 000124 000134 0.9999642732371111 0.0064931724183990075 0.0054011675001253675 -13.772566359604417 -0.006395789406433022 0.9998201350989592 -0.017855967965994948 -0.030427337697323614 -0.005516136739919071 0.01782078915198337 0.9998259885983166 -0.09040104547276862 126 | 000125 000135 0.9999815421269646 0.006005730910753968 0.0009725306440075285 -13.79789750549506 -0.005985208620466988 0.9997843632316191 -0.0198834989472183 -0.040221654641901805 -0.0010917367883743778 0.01987731738527014 0.9998017970953958 -0.11592276068547197 127 | 000126 000136 0.9999812164002004 0.005488649297809799 -0.0027340217645936923 -13.834722857783927 -0.005526271111477154 0.9998874590445704 -0.01394885681036786 -0.04704907246501441 0.002657153535480937 0.013963708775316644 0.9998988812052123 -0.1617762500639903 128 | 000127 000137 0.999979130264294 0.005091180395217387 -0.003986621840604613 -13.850213431436849 -0.005132662004557769 0.9999320428257567 -0.01046540452182028 -0.05275906859493942 0.003933067147538143 0.010485651875828937 0.9999372027252766 -0.19268512483512448 129 | 000128 000138 0.9999770193346499 0.005819129979047819 -0.0034381188040442925 -13.878422545099669 -0.005844627499259528 0.9999551584031879 -0.007453072098170239 -0.05978968156515713 0.003394595979638779 0.007472999371778929 0.9999663677850968 -0.1900814629209265 130 | 000129 000139 0.9999812893000758 0.005900508493591718 -0.00161528007542607 -13.896880958474593 -0.005908188495341506 0.9999709897905096 -0.004791179947861865 -0.061348594988645255 0.0015869628824414046 0.004800634414680712 0.9999873982718223 -0.2056695455963799 131 | 000130 000140 0.9999865917415728 0.005014316589766805 -0.0012972226371280248 -13.930453278261362 -0.0050198279245599895 0.9999781829847835 -0.004279965511176071 -0.06636123896884721 0.001275733353762413 0.004286419040684248 0.9999900638186974 -0.20722861700755046 132 | 000131 000141 0.9999948316747953 0.0029779291985720297 -0.0012696292412323479 -13.961343590627393 -0.0029841637535030384 0.9999833253957681 -0.004937825276412046 -0.07663755137440886 0.0012549033743853705 0.004941591296073157 0.9999871110715284 -0.21493396155931935 133 | 000132 000142 0.9999806961898902 0.0005296319809035919 -0.006186446714293119 -13.989619424732256 -0.0005445509967456546 0.9999968593710308 -0.0024103184353271266 -0.0810489251222008 0.006185153221576836 0.0024136392063246873 0.9999779447988789 -0.22201550114865579 134 | 000133 000143 0.9999316153907434 -0.0013028265916009499 
-0.011623173187323126 -14.03328095947798 0.0013530038475531557 0.9999897938607283 0.004310216378000173 -0.07580185663177173 0.01161744088144987 -0.004325648425138549 0.9999231855514714 -0.24527385753839795 135 | 000134 000144 0.9999135501345771 -0.0022988827795069108 -0.012942326098706785 -14.042957910979325 0.002460493866353471 0.9999191333642735 0.012485008701879786 -0.057996427401784886 0.012912578742305954 -0.012515781212650305 0.9998382840329919 -0.2322335433194344 136 | 000135 000145 0.9999661445269451 -0.0024271887256559358 -0.00785486783318911 -14.110743711708619 0.002537677338350093 0.9998975514527725 0.01408702005652727 -0.05450163017474392 0.007819872300397537 -0.014106482052095795 0.9998699470011299 -0.16034575482795219 137 | 000136 000146 0.9999957666291122 -0.0029368150140414236 0.0002591356056166918 -14.14770470790602 0.0029348435975083994 0.9999690109718278 0.007306854667762263 -0.060124479676499566 -0.00028058909366015963 -0.007306061242614873 0.9999732653494464 -0.09046460589178308 138 | 000137 000147 0.9999891110778827 -0.00376895165948841 0.002773083543987602 -14.146398756090584 0.003773797022800252 0.9999915285388352 -0.0017441088528445136 -0.08097353199669202 -0.002766485749926737 0.0017545541095568182 0.999994659665785 -0.08071030880399001 139 | 000138 000148 0.9999889987562955 -0.004694343944294538 0.00020670506464616506 -14.182824952223054 0.004696015543772045 0.9999485936789935 -0.00899796103616736 -0.11143902455987185 -0.00016445512533797541 0.008998828828220767 0.9999595754661504 -0.13590671533792764 140 | 000139 000149 0.9999824503742806 -0.005747922453416245 -0.00139108859287918 -14.199711597289445 0.005731464894032772 0.9999167931922184 -0.011558142718027672 -0.11856580598350333 0.001457409236033479 0.011549972000260743 0.9999321680455249 -0.1754336373322567 141 | 000140 000150 0.9999875501825108 -0.004887439971510455 -0.0010059557904801189 -14.268580248041582 0.004873664623102213 0.9999000570085382 -0.013269132048959149 -0.1144867014119558 0.0010707064985658772 0.013264073807375298 0.9999113724530329 -0.1812840976346134 142 | 000141 000151 0.9999929872961958 -0.003530482795707388 -0.001186838522318038 -14.298240772380924 0.0035153854628447094 0.9999156371540676 -0.01249021175513887 -0.08839149316874026 0.0012308351152464068 0.012485952185010763 0.999921156658582 -0.18244865414144262 143 | 000142 000152 0.99999777598388 -0.002105077199119218 -2.919381015090892e-05 -14.325824746004054 0.002104501880582798 0.9999093614763145 -0.013298829413460587 -0.06947790723462309 5.7185038913415264e-05 0.013298742115545303 0.9999115461489828 -0.18337427842616177 144 | 000143 000153 0.9999966754543408 -0.0011281225845467626 0.0022958579712062596 -14.368279732918626 0.0011694958975905056 0.9998354206291293 -0.018099786823798224 -0.07039440962633244 -0.0022750631671486264 0.01810241839283868 0.9998335027418024 -0.17447585600034937 145 | 000144 000154 0.9999963128894325 -0.0007858808652143941 0.0025678280007393234 -14.393690044740387 0.0008413998885656664 0.9997642603359433 -0.02169164952527494 -0.07367067731597766 -0.0025501778676985092 0.02169374016035409 0.9997613723859227 -0.18337052758514272 146 | 000145 000155 0.9999962893788913 -0.000420916577336462 -0.002677195384265098 -14.429895243076384 0.0003657061940326819 0.9997877704675094 -0.02058999650504775 -0.07171925295149052 0.0026852940429741403 0.02058894643684043 0.9997843114963357 -0.19210774106412232 147 | 000146 000156 0.9999703269785107 0.0002816872106390105 -0.007697480013502557 -14.477858618060791 
-0.000382090730222764 0.9999147398930452 -0.013045219254359733 -0.060552955759104465 0.007693152778866326 0.013047771734682684 0.9998853245090427 -0.19444890655296787 148 | 000147 000157 0.9999721207178129 0.0017596161884586147 -0.007257101300926063 -14.494994612340637 -0.0017892906768641218 0.9999899463294619 -0.004084530208205239 -0.04251962838242074 0.007249843471869552 0.004097402286696626 0.999965313347633 -0.1771814864099224 149 | 000148 000158 0.9999893705499642 0.0033445238444270377 -0.0031802432479132664 -14.541783155176034 -0.003333997940878819 0.9999889917968399 0.0033101319409772558 -0.02309999794430526 0.0031912779574290785 -0.003299489899093178 0.9999893577283306 -0.16592998896670472 150 | 000149 000159 0.9999891836233683 0.004601837837966 -0.0006853634217861305 -14.55758272863975 -0.0045982654606573975 0.9999762779919417 0.005124518518388635 -0.01826931576301755 0.0007089280593539444 -0.005121313119768608 0.999986531205607 -0.16460268565713015 151 | 000150 000160 0.9999899850625177 0.0044988710953353765 -0.00010052044405609515 -14.591512534070608 -0.004498572506923015 0.9999861309913407 0.002794738161868402 -0.025001313944266374 0.00011309026213019475 -0.0027942624060134213 0.9999961144261036 -0.16911899077762063 152 | 000151 000161 0.9999943020719261 0.0033689176389350025 0.00031284472239012575 -14.599944142372935 -0.0033686541557289575 0.9999941022022764 -0.0008406985042334761 -0.03133218352746621 -0.00031567367927879953 0.0008396386311913291 0.9999997236118529 -0.16850588658460963 153 | 000152 000162 0.9999953043187145 0.0030852805833512676 0.00013794931020562822 -14.67424231303551 -0.0030848893138101558 0.9999914278390422 -0.0027489132955493076 -0.0455077265954021 -0.00014642898202419798 0.002748474370907576 0.9999962117731568 -0.1633644587548803 154 | 000153 000163 0.9999959426013698 0.002787321662637795 -0.0003933078715096105 -14.698339835429456 -0.002788209132222695 0.9999935643888136 -0.0022741073326183426 -0.04071288583105186 0.0003869667179917841 0.0022751976249401453 0.9999973101864823 -0.16749527219193341 155 | 000154 000164 0.9999956529580736 0.002982649202444947 -0.00021891929829615832 -14.725805105940637 -0.0029830638691140765 0.9999937638977828 -0.0019198997326184528 -0.042747392604004876 0.00021319016678085427 0.0019205393577572246 0.9999981634917401 -0.1711133695211454 156 | 000155 000165 0.9999959100221901 0.0028051924494616035 0.00043319648901290857 -14.761361696478398 -0.0028049176473365772 0.9999959178449365 -0.0006342555789718997 -0.040758200171066454 -0.0004349722930883235 0.0006330365468074418 0.999999834382931 -0.17793422388959138 157 | 000156 000166 0.9999970678645596 0.002353333414212428 0.000548815156317789 -14.776201605224491 -0.002352852619522426 0.9999969266641844 -0.0008725988136307636 -0.042938288412681706 -0.0005508676459864138 0.0008713083143007744 0.9999994165296239 -0.16831771103414653 158 | 000157 000167 0.9999972672550907 0.0021926901663813668 -0.0008538633346742476 -14.851774939627022 -0.0021924876614071304 0.9999975322074561 0.0002377681187307433 -0.054013872364643514 0.0008543804552773096 -0.00023589649342037913 0.9999994784871414 -0.18291805029784536 159 | 000158 000168 0.9999969314066622 0.0016614817335285628 -0.0018807365564704817 -14.857867831208905 -0.0016584148287054632 0.9999971615229005 0.0016307707204275062 -0.054770251211754314 0.001883438857301769 -0.001627647179841646 0.9999968234716978 -0.1917760572677386 160 | 000159 000169 0.9999966572318816 0.001105344079132129 -0.0023468147183647657 -14.878194900965271 
-0.0010990861672100368 0.9999958854495845 0.0026660288215969758 -0.05269201458508095 0.002349751985413342 -0.00266344146275442 0.999993774531896 -0.19348709513738838 161 | 000160 000170 0.9999971704093712 0.0009982809216604006 -0.0021657877436140113 -14.892927470407287 -0.0009937606671230643 0.9999971976938793 0.0020871948513450545 -0.0567266109153339 0.002167864766010435 -0.0020850397659887736 0.9999954599405659 -0.19643271888139935 162 | 000161 000171 0.9999984252353321 0.00041984669892040823 -0.0017442567059521817 -14.913976432557249 -0.00041962271507008217 0.9999998936664054 0.00012847227512778497 -0.05962391046847454 0.0017443058048781828 -0.00012773755730761816 0.9999983695782348 -0.18409782117107937 163 | 000162 000172 0.9999966977248648 -0.0003528092370948685 -0.0025865018938684165 -14.949117840030716 0.0003542348873442319 0.9999997753726838 0.0005509936474378237 -0.06947022281993816 0.00258630491173178 -0.0005519083005483262 0.99999646900184 -0.19361668722228978 164 | 000163 000173 0.9999879060783685 -0.0012874462696002577 -0.004758598192711745 -14.980042850672353 0.0012931910828961639 0.9999983645261462 0.001204500959172104 -0.07648937413816906 0.004757040757040596 -0.0012106427592529616 0.9999879940147959 -0.22618722993124435 165 | 000164 000174 0.9999828190682234 -0.00221697980046442 -0.005412351614499932 -14.998259611136591 0.002229497065402821 0.9999946387400503 0.0023078793854732008 -0.07908440188456947 0.005407208277369403 -0.0023199036266320417 0.9999826892650209 -0.22825293743757138 166 | 000165 000175 0.9999889638193626 -0.002790636432326839 -0.0037906320256189145 -15.030623572582467 0.002790679338029352 0.9999961082719876 6.1542366202615965e-06 -0.08431370057952106 0.0037905992135253996 -1.6733214460762436e-05 0.999992785778415 -0.20183833087132005 167 | 000166 000176 0.9999898643799 -0.0034705814147008066 -0.002877699100793028 -15.040956290877881 0.0034671421684239344 0.9999932880402888 -0.001199128103061841 -0.08776622638556049 0.002881841503200833 0.0011891384592633359 0.9999952750015852 -0.1867251776795218 168 | 000167 000177 0.9999899101793199 -0.003431869202094392 -0.002876675615464933 -15.071438609045025 0.0034243212282689278 0.9999907002855672 -0.0026245159756835027 -0.09159454701378214 0.0028856579835934443 0.0026146354740782534 0.999992585764153 -0.1858275414433564 169 | 000168 000178 0.9999894025422168 -0.003377655639396279 -0.003115084884038304 -15.09714560228142 0.003367479450916059 0.9999890168583658 -0.003266172459285216 -0.08748605074132366 0.0031260846987490208 0.0032556479843714927 0.9999899425887728 -0.19741900486311717 170 | 000169 000179 0.9999880823250509 -0.0032414600089924177 -0.0036574148133087193 -15.109266709784357 0.0032301012937246522 0.9999899710389778 -0.0031072071848303 -0.07479263625910995 0.0036674498609246915 0.0030953590748140887 0.9999885356886027 -0.20370783970227146 171 | 000170 000180 0.9999874407359199 -0.0030400246371333323 -0.0039563977502231834 -15.155994697608485 0.0030273061223157496 0.9999903370457497 -0.0032168460860163596 -0.06869510769446802 0.003966140273534133 0.003204834056765275 0.9999869243124898 -0.2042179234226389 172 | 000171 000181 0.9999852547512937 -0.0027017463464197775 -0.004681329657165284 -15.164764405743831 0.0026832123448743894 0.999988437410879 -0.003960713465177619 -0.0672610192750237 0.004691980288224252 0.003948093324871223 0.999981177483628 -0.2093415347517325 173 | 000172 000182 0.9999881989691831 -0.002495321808358256 -0.004144920675364575 -15.199043453604645 0.002479733656959945 
0.9999898304782764 -0.003761863650796615 -0.06513825388889397 0.004154267059767379 0.0037515351084362316 0.9999844573809328 -0.20905700336993457 174 | 000173 000183 0.9999955313397008 -0.002154720125492843 -0.002058087884198017 -15.202386069257386 0.0021455021358511893 0.9999877717268665 -0.004470617453675768 -0.05306214513019014 0.0020676957483856764 0.004466183105155822 0.999987869864668 -0.199453188739153 175 | 000174 000184 0.9999978173743863 -0.0012075305296208555 -0.0017488560094154233 -15.252525671326849 0.0012019047031951275 0.9999942583816323 -0.003214731736088608 -0.05791687043649038 0.0017527262860715705 0.0032126252842381844 0.9999933162256937 -0.21049681288690714 176 | 000175 000185 0.9999916119444154 -0.0006944215092190514 -0.004048718222247085 -15.257729055945157 0.0006896680367547688 0.9999991830387022 -0.0011752008874048661 -0.051129098732088776 0.004049531242859939 0.0011723996001503265 0.9999911735990881 -0.2189432412554933 177 | 000176 000186 0.999979143747587 -0.00010989043276354017 -0.006452554604619822 -15.286729258188766 0.000127026817630847 0.9999964498377585 0.002655359133407954 -0.04756162746253037 0.006452241026798866 -0.0026561235934166897 0.9999754788939189 -0.23357806259366962 178 | 000177 000187 0.9999867196104102 0.000525663758385455 -0.005129318475964376 -15.298900276693256 -0.0005022485321820873 0.9999894923694889 0.004565285104366703 -0.040119681547870725 0.005131665806516882 -0.004562647759916917 0.9999764123906005 -0.2173128206464041 179 | 000178 000188 0.9999946526620834 0.00042642504851611454 -0.003233926626852795 -15.295592442152182 -0.00041486173466238706 0.9999935806030296 0.0035753379475157066 -0.03372126858152018 0.003235430317356764 -0.0035739796980781033 0.9999882990285645 -0.20126848028029576 180 | 000179 000189 0.9999904350450105 8.529073720133469e-05 -0.0043520533697626925 -15.344650986800675 -5.778181611834221e-05 0.9999800295852074 0.0063207787110092116 -0.04882092736726924 0.0043525073173442475 -0.006320470624206729 0.9999705146254064 -0.2234542681223373 181 | 000180 000190 0.9999836626053575 -0.0005588821928512329 -0.005702637974822402 -15.339821449389127 0.000618914530780447 0.9999441624765109 0.010530858939691993 -0.04873552413670136 0.005696435381247766 -0.010534221779015826 0.9999283066536631 -0.24330696241859343 182 | 000181 000191 0.9999864223006206 -0.0006697413809483043 -0.005180185902363092 -15.361640586554103 0.000732461516361672 0.999926367694484 0.012115049812528856 -0.04763308675705513 0.005171690825973236 -0.01211868567518379 0.9999132675409149 -0.23462490972529262 183 | 000182 000192 0.9999918095215293 -0.0005040491545434404 -0.004001398784281191 -15.37422011162708 0.000557035783085285 0.9999121320288464 0.013252164177947497 -0.05171221223744102 0.003994367061844435 -0.013254278748181221 0.999904041324832 -0.20869188069279437 184 | 000183 000193 0.999991260265629 -0.0015515622311861638 -0.003877858680436474 -15.37942581336525 0.0016020001561439868 0.9999136803595675 0.013037256153188206 -0.05679661162460785 0.003857295918444345 -0.013043358602497153 0.9999074740242838 -0.2054460142259853 185 | 000184 000194 0.99999005310642 -0.002516140306939405 -0.003639302938439257 -15.39564910616638 0.0025538397719429315 0.9999427580536656 0.01039213366969058 -0.07561078190216991 0.003612949630602375 -0.010401330189851406 0.9999393791162128 -0.20385401451707824 186 | 000185 000195 0.9999895205040235 -0.0036245774168864758 -0.002753972246196428 -15.400327183802611 0.003644074894226955 0.9999680068945569 0.00710744242656155 
-0.08912810229915857 0.0027281233546985473 -0.007117404963907295 0.9999708454225155 -0.19764387412081208 187 | 000186 000196 0.999988682325157 -0.004494442443845715 -0.0015940865473122595 -15.394754931463309 0.004498963078605397 0.9999856625200655 0.002843005901663017 -0.10752007487347173 0.0015812860223986742 -0.0028501486262115273 0.9999947679024549 -0.19097188712641722 188 | 000187 000197 0.9999821415355132 -0.005629687576521621 -0.002042189253352969 -15.410203679186893 0.005633044793206391 0.9999828341745158 0.001641483214169906 -0.11438557030874219 0.002032911362203933 -0.001652958139126829 0.9999965567732069 -0.18762949515421895 189 | 000188 000198 0.9999744323461963 -0.0067471935539488385 -0.0023880555276276066 -15.396812702240942 0.006747444112816171 0.9999771617518108 9.709805185027688e-05 -0.12414416687147428 0.0023873459574984333 -0.00011320994770444969 0.9999972113612087 -0.18431856170099228 190 | 000189 000199 0.9999756339286187 -0.006986292590031423 -0.00027935043320233396 -15.414061595030706 0.006985880456203669 0.9999745295153424 -0.0014475770631355376 -0.1333388007300938 0.00028945481230154826 0.0014455906673074804 0.9999987938163757 -0.17579677601934038 191 | 000190 000200 0.9999717508625218 -0.00744892553679736 0.0007950244172731254 -15.415428672679813 0.007451547613041988 0.9999666576452395 -0.0033519561815726594 -0.14038612349344742 -0.0007700277357857507 0.0033577867279646046 0.9999942078499395 -0.17353428512449987 192 | 000191 000201 0.999965207265521 -0.008320313761689789 -0.0005010901973099383 -15.415736922862806 0.008318753305884396 0.9999608843418664 -0.003035614466364394 -0.13204457892760477 0.0005263270903059242 0.0030313420717138968 0.9999952019707553 -0.20254508166283103 193 | 000192 000202 0.9999653755617187 -0.008199811661295986 -0.0014265755166780092 -15.419734584796483 0.008195834353233455 0.9999624748426649 -0.002774372224092784 -0.12689216745452325 0.0014492728844856533 0.002762582522636165 0.9999952545779204 -0.22217231796858056 194 | 000193 000203 0.9999684409120375 -0.007938085307081679 0.0003968443510081466 -15.41907610997185 0.007938619357125712 0.999967594582626 -0.0013678639577455614 -0.10893266190202933 -0.000385973316480091 0.0013709723679149414 0.9999990715284075 -0.19032710012615062 195 | 000194 000204 0.9999773925967217 -0.006381830152082674 0.0021514675247126056 -15.425508186860236 0.0063831400256265805 0.9999794702538908 -0.0006015312796080825 -0.09211677878587662 -0.0021475886131928595 0.000615253782336526 0.9999975193085583 -0.15284602554650498 196 | 000195 000205 0.9999874696410694 -0.004550162140941722 0.0020918872481473496 -15.420612439986373 0.004545046906883452 0.9999866803956207 0.0024459057252291493 -0.07205526287798708 -0.0021029898965422003 -0.002436368531227676 0.9999948541605643 -0.1460902812650859 197 | 000196 000206 0.9999924985396951 -0.003258424816041082 0.002081628664249486 -15.435196099476524 0.003246308102641339 0.9999779947594231 0.005795223610586724 -0.04164715001761 -0.002100465878836844 -0.0057884224112084975 0.9999809564686438 -0.16159480528520584 198 | 000197 000207 0.9999963470961882 -0.0011978456200891939 0.002432346104452136 -15.459521716340666 0.0011854822550006017 0.9999864946647937 0.005077668807438669 -0.02828477798728947 -0.0024383963062296326 -0.005074768214253307 0.9999841870374797 -0.1619875397510988 199 | 000198 000208 0.9999979438683537 0.00010029540562836823 0.0020370824507316273 -15.468556119597503 -0.000110951500824019 0.9999864642492821 0.005230507668775272 -0.01922678164651937 
-0.002036530997456203 -0.005230723357419441 0.9999842195958379 -0.16223187455403876 200 | 000199 000209 0.9999973288432925 0.0018162066653123123 0.0014351109872071486 -15.490046680075809 -0.0018227733816922125 0.9999877505964875 0.004588192093774779 -0.012342712958620522 -0.0014267591527251195 -0.004590798436419253 0.9999885894436229 -0.16941213832456217 201 | 000200 000210 0.9999934626297784 0.003413784114112312 0.0013450535271864465 -15.510095132181315 -0.0034199554270055873 0.999983598759672 0.004615966960600364 -0.0036351281388398646 -0.0013292788203848093 -0.00462053507835626 0.9999881891957929 -0.18109216829598349 202 | 000201 000211 0.9999864938432458 0.005052713000918535 0.001247931561649926 -15.533863470855955 -0.0050565697776692045 0.9999823572834379 0.0031083571103073205 0.0013908558766899531 -0.0012322036212933337 -0.003114625446900889 0.9999944775769154 -0.19497055414443676 203 | 000202 000212 0.9999803475174694 0.006279633052122797 0.0001787562064195009 -15.579691119018605 -0.0062800297591361605 0.9999776618445732 0.002293471048367482 -0.00530010434368515 -0.00016435160465431425 -0.002294550540413356 0.9999972415646371 -0.21427044037899215 204 | 000203 000213 0.999983783041859 0.005254648852918163 -0.002209178042734222 -15.603070080074856 -0.005253826877162531 0.999986179875094 0.00037641664643943835 -0.018643370312135928 0.0022111241974681374 -0.0003648020734901567 0.999997413244866 -0.21825910228827478 205 | 000204 000214 0.9999823429663895 0.0042245618450766605 -0.004177383437340264 -15.658100068644185 -0.004230383024557001 0.9999901809512537 -0.0013852423597790252 -0.05008995288960487 0.0041714918958756205 0.0014028873585249573 0.9999903512117938 -0.22074072088730762 206 | 000205 000215 0.9999873370395939 0.0028536175215675393 -0.004146778735539358 -15.692138168097896 -0.0028553365877345372 0.9999958058752925 -0.0004071552631490256 -0.05809199379327601 0.00414560067423686 0.00041899164679989003 0.9999913333364845 -0.21346707700722203 207 | 000206 000216 0.9999942822375014 0.001695633453472995 -0.0029135776057130245 -15.72632253298448 -0.0016925864393762344 0.9999979506541277 0.001046034648080836 -0.05713795573489576 0.0029153450214436283 -0.001041095655635225 0.9999952506502143 -0.1816581744448647 208 | 000207 000217 0.9999982358936489 0.0004531173151016718 -0.0017431526490138173 -15.76743986823211 -0.0004500087421335362 0.9999982783109792 0.0017825500411658914 -0.06773932183652027 0.0017439587628391856 -0.0017817629029597646 0.9999968133515746 -0.15908331272429 209 | 000208 000218 0.9999997052300181 -0.0003565828076824513 -0.000743967339001467 -15.803998398957583 0.00035903585534049825 0.9999944479551983 0.003298952798558253 -0.06219827271833037 0.0007427863639093895 -0.0032992186794984896 0.9999942651056473 -0.13641951832396576 210 | 000209 000219 0.999998863578274 0.0014612523808064748 0.00014464887169668168 -15.86328933534892 -0.0014616310430366098 0.9999954422322906 0.0026525677802046225 -0.04454644629628121 -0.00014077328396156055 -0.0026527757424260725 0.9999964709862468 -0.1245084006715198 211 | 000210 000220 0.9999964246214582 0.0026547907439215084 -0.00023488558537636343 -15.905611425379705 -0.002654697926926053 0.999996306423548 0.0004005007816782805 -0.013059420570464414 0.00023595002294570308 -0.00039987776273676815 1.000000040411966 -0.13296429693877154 212 | 000211 000221 0.9999969002348944 0.0018761391276146538 -0.0016199331555058493 -15.946945267015384 -0.0018765572429537316 0.9999981667503964 -0.00025604988517549004 -0.0065789945352406065 
0.0016194485841326683 0.00025908873563409393 0.9999986431376768 -0.1888028058111126 213 | 000212 000222 0.9999941441811763 0.000911322823441898 -0.0032701948280582233 -15.99995767795905 -0.0009003089886404788 0.9999938493697764 0.003367549771891371 -0.009276339107864604 0.003273243737538883 -0.0033645853393381156 0.9999888285833618 -0.23621108479663816 214 | 000213 000223 0.9999898833879975 0.0013628538530743362 -0.004274623238565844 -16.051574378832687 -0.0013397624069094683 0.9999844423904262 0.005399971949005272 -0.017114371103675148 0.004281917755726979 -0.005394193570935828 0.9999762436204984 -0.2359768097691461 215 | 000214 000224 0.999990936320142 0.0023819505634263606 -0.0035235811270229803 -16.094891646956174 -0.002361713844745144 0.9999806685286604 0.005736215487302274 -0.023774541335775178 0.0035371778055864585 -0.005727841019002216 0.9999773575244152 -0.2282611240013416 216 | 000215 000225 0.9999927274102498 0.00150396856484313 -0.0034838795644134943 -16.119789369791533 -0.0014856315935394184 0.9999851146699057 0.005259691939609719 -0.018680054315404974 0.003491738284887449 -0.005254481125672745 0.9999800065364429 -0.22926244345070348 217 | 000216 000226 0.9999847016202058 0.0026337336070912983 -0.004881624110390915 -16.151945233838955 -0.002616429128371708 0.9999902813070469 0.003547968625839159 -0.05372217144689147 0.004890922655767138 -0.003535144469789985 0.99998178005088 -0.23438098829984247 218 | 000217 000227 0.9999747284425601 0.0021344350366344082 -0.006798891398983828 -16.179327288209908 -0.002121659568919875 0.999995866914748 0.0018857237512361858 -0.05603856688375522 0.006802888978694955 -0.001871250975308892 0.9999751500414309 -0.23378842442587336 219 | 000218 000228 0.9999752869814157 0.0013253602275340184 -0.006903720436032452 -16.20623107831513 -0.001322551252106753 0.9999990189158392 0.0004115144101998718 -0.06771083667725016 0.006904262308853802 -0.00040237283465186993 0.999976130089818 -0.21786968409475388 220 | 000219 000229 0.999974559400707 -0.001064813808745172 -0.007060873019000971 -16.218769596641998 0.001062947282987266 0.9999994887954531 -0.00026808221178412393 -0.08751502481393543 0.007061152656056463 0.0002605697807097677 0.9999748901138291 -0.20435978273076585 221 | 000220 000230 0.9999549309006909 -0.004717163615748061 -0.008224301324969464 -16.218648146012626 0.004732864830954792 0.9999869863504472 0.0018907343480378542 -0.08399839879449876 0.008215277435979036 -0.001929571540503462 0.9999644103304609 -0.23013964139647564 222 | 000221 000231 0.9999747602425549 -0.0038711881889413645 -0.005976009140892137 -16.24283165157188 0.003902762326205338 0.9999785223843584 0.005280972458157102 -0.1021991357942571 0.0059554380641598905 -0.005304161989196728 0.9999682016600971 -0.2469915872720133 223 | 000222 000232 0.9999913077814732 -0.00363197416222756 -0.002053318284031637 -16.22926977151703 0.0036378890807803525 0.9999892599509936 0.0028845293153196465 -0.085829515566261 0.0020428207145825085 -0.002891971194295722 0.9999938310510624 -0.22363623833841004 224 | 000223 000233 0.9999925837994001 -0.003664553666811223 -0.001186687786822116 -16.217578757293087 0.0036667813483737557 0.9999915195969244 0.0018802467814711394 -0.07571084938161884 0.0011797873066363154 -0.0018845833077091314 0.9999974965604332 -0.23449701276632742 225 | 000224 000234 0.9999957585212198 -0.0029582925979258694 -0.0001317722707131729 -16.209696569887374 0.002958293903651613 0.9999955997767932 3.5592734920953585e-06 -0.07597253767341863 0.00013175992534989066 -3.950724742923126e-06 
0.9999999156637407 -0.19740487264984508 226 | 000225 000235 0.9999950948624261 -0.0024261634243161077 0.0019607926111916145 -16.1927114970497 0.002436133111024102 0.9999839855840188 -0.0050979027682897405 -0.06731132894871197 -0.001948395097260688 0.005102656996081161 0.9999850560672797 -0.1499520016495624 227 | 000226 000236 0.9999980051539378 -0.0009768262509355376 0.0018213934614184832 -16.182878360240025 0.0009861084696184735 0.9999865191942514 -0.0051023887351967665 -0.059205448725001816 -0.0018163870359383639 0.005104174433055918 0.9999852943809602 -0.16785792542795258 228 | 000227 000237 0.9999994429960726 0.0003017691934695075 -0.0008927578266502727 -16.16426497837991 -0.00030690599786283723 0.999983506386618 -0.0057590806574534 -0.046607621468253965 0.0008910055808261536 0.005759352454023413 0.9999831067971705 -0.23376396056538784 229 | 000228 000238 0.9999889894676409 0.0016870152010928082 -0.004364848809338153 -16.14715685179847 -0.0017057954731721438 0.9999892372139148 -0.004302579903693713 -0.02807776554082711 0.004357543256575656 0.00430997790257298 0.9999812680043348 -0.2761464169021141 230 | 000229 000239 0.9999825006559431 0.0037929168345591027 -0.004541310787418405 -16.128520597501126 -0.0038155547254489597 0.999980230648934 -0.004986922934253737 -0.016497194929626632 0.0045223086281973735 0.005004163415082384 0.9999773680661973 -0.25190732019275475 231 | 000230 000240 0.999980765109746 0.006055608609328184 -0.0013117193019519004 -16.09859644468605 -0.006064060965891105 0.9999601982304744 -0.006538575391963073 0.003566804049815922 0.0012720718299055974 0.006546404966998219 0.9999777574242473 -0.22371753906068792 232 | 000231 000241 0.9999641394417503 0.008364630965993354 -0.0012694489941468268 -16.083581969615185 -0.008371179060674805 0.999951174990646 -0.005244122062826545 0.020228508321956135 0.0012255220340152211 0.005254561824883745 0.9999854523066447 -0.24008937262229083 233 | 000232 000242 0.9999538300861285 0.00911773738174384 -0.003063412093424051 -16.064369412148498 -0.009130003967392405 0.999950257247563 -0.004014381143845264 0.021122459377872993 0.0030266582636481903 0.004042162768383897 0.9999872859311127 -0.264266630393845 234 | 000233 000243 0.9999531033930393 0.009457257673836317 -0.0021000644038947055 -16.047764514439226 -0.00946487388928785 0.9999485906515277 -0.0036461979111945086 0.012642653858306083 0.002065473605409288 0.0036659036645904906 0.9999912261075447 -0.24332056788788906 235 | 000234 000244 0.9999485914487102 0.009493641405449059 -0.00355389840584082 -16.026390890083878 -0.009498098868797466 0.9999541527166745 -0.0012395421104338532 0.004871694742996632 0.003541968038352795 0.0012732352896104642 0.9999928954306564 -0.2265905093047267 236 | 000235 000245 0.9999431095323287 0.008866087620903747 -0.005939407002499432 -16.008587206562055 -0.008859951412185055 0.9999602076564531 0.0010584646062298318 -0.004913357341776614 0.005948557548774327 -0.0010057821393535402 0.9999819145828593 -0.2383191809174437 237 | 000236 000246 0.9999513176127448 0.008152882384729367 -0.005535731749400367 -15.987345628906933 -0.008137445959789642 0.999962951226939 0.002804993717328692 -0.013508607002605713 0.0055583975377073375 -0.0027598112692002984 0.9999808658590145 -0.2215104933819551 238 | 000237 000247 0.9999721078148615 0.0072634108803426405 -0.0017360057438178624 -15.968044584300621 -0.0072559307793379135 0.9999645118519584 0.004276946354836165 -0.026804226812673845 0.0017670084343262547 -0.004264231716743367 0.9999892162902595 -0.20511368783048772 239 | 000238 
000248 0.999980620409165 0.0060306524881196056 0.0015822376954148791 -15.941980001921733 -0.0060391420162600636 0.9999671081749022 0.005416934073010867 -0.02824839214286736 -0.0015495205203023626 -0.005426383845127871 0.9999840454627045 -0.193534402389117 240 | 000239 000249 0.9999880627732498 0.004825687135941092 0.000686383018408955 -15.924311559288794 -0.0048299921385288325 0.9999678602761555 0.00641508074121614 -0.03581771542217954 -0.0006554052758891285 -0.006418321348838294 0.9999792043827737 -0.2095848698220033 241 | 000240 000250 0.9999958241549336 0.0029144463849459967 -0.00011864924535960944 -15.923382454430314 -0.00291367186777941 0.9999774422331013 0.0060729992310983765 -0.042504831616750186 0.00013634660924321002 -0.006072628591422786 0.999981599014562 -0.19742107130723002 242 | 000241 000251 0.9999974349667544 0.0012289056960338041 0.001925990330981694 -15.916535134344342 -0.0012395168061335301 0.9999839605595835 0.005517836872201032 -0.054783051264209015 -0.0019191798602192398 -0.005520210179170454 0.9999829290199028 -0.18260047672421953 243 | 000242 000252 0.9999982966938166 0.00017387113389678301 0.0018562446677634578 -15.90537641205711 -0.00018435672298036264 0.9999840313623745 0.005650228905794067 -0.058424857959666614 -0.0018552355504423692 -0.005650559319859019 0.9999822467343771 -0.1980569191222807 244 | 000243 000253 0.9999997438360306 -0.0004457143971646642 -0.00042090508012914275 -15.902876632090168 0.0004489750071563911 0.9999695176266362 0.00778005748368153 -0.05479688295645783 0.00041742492209205815 -0.007780248486658249 0.9999696276637745 -0.2229070094831317 245 | 000244 000254 0.9999996667796507 -0.0005712587240185624 -0.0004784202463865986 -15.907001683774082 0.0005745955117423561 0.9999751196534065 0.007008873097283446 -0.05067335899920474 0.00047440477696812357 -0.007009156127831393 0.9999752697468388 -0.2140546528124832 246 | 000245 000255 0.9999995275976805 -0.0011020564247145813 -4.5587082285626e-05 -15.9131391044917 0.001102279188647513 0.9999858603934886 0.005229808462481141 -0.059311909130564916 3.982117681337862e-05 -0.005229854443950054 0.9999862451151669 -0.20201658030980413 247 | 000246 000256 0.9999989140453331 -0.0012233103072598732 -0.0008657718230814039 -15.91781699718999 0.0012261865178755903 0.9999938000589109 0.0033290414664141317 -0.07048322838378134 0.0008616925561294362 -0.003330102101844214 0.9999939765755941 -0.21716030574305836 248 | 000247 000257 0.9999946307441782 -0.0025572299394404038 -0.002013308470797302 -15.915251461891108 0.0025600643203574974 0.9999956817243456 0.0014069573803178349 -0.06454717783919081 0.0020097026020694437 -0.0014121035900521274 0.999997012569631 -0.22075690647780172 249 | 000248 000258 0.9999959799391143 -0.0014398646125784547 -0.0024456152282742536 -15.941494817374279 0.0014421566718919715 0.9999985628162048 0.0009359514143509537 -0.08968636600732201 0.0024442647593622678 -0.0009394763780814577 0.9999965599558146 -0.2154882397564159 250 | 000249 000259 0.9999958147343879 -0.002386236495985034 -0.0016797331706614274 -15.949606617703568 0.002387000069239941 0.9999969696650607 0.00045335314760228704 -0.08794749167539874 0.001678645997300778 -0.0004573596819279253 0.9999984600975799 -0.1999638074303297 251 | 000250 000260 0.9999939702574299 -0.002691031102160852 -0.0022366748746873457 -15.956060631665638 0.002691423597895779 0.9999963580828569 0.00017258066406631602 -0.08887946640117558 0.0022362001334964177 -0.0001785991857244798 0.9999973978294876 -0.20190315666928674 252 | 000251 000261 0.9999906171242123 
-0.002863760465730006 -0.003267197285053189 -15.971979090127231 0.0028591465809197024 0.9999949370965594 -0.0014158451224655253 -0.09286760994065688 0.00327123481601107 0.0014064908886965276 0.9999936649829498 -0.19883198108644154 253 | 000252 000262 0.9999895096034818 -0.00404391185297393 -0.0021264263830529976 -15.979055405227754 0.004034230538125605 0.9999814803383056 -0.004537419694539399 -0.0857348935719969 0.0021447373420156934 0.004528793999434162 0.9999875348325848 -0.19982721337380407 254 | 000253 000263 0.9999957294253649 -0.0026917483547144747 -0.001209015139400425 -16.01717285159381 0.0026826625618099655 0.9999687188195372 -0.007454641321173876 -0.11184854852495495 0.0012290417215475953 0.007451368344126972 0.9999714536509559 -0.20452170253293875 255 | 000254 000264 0.9999918971675387 -0.0036007729149309347 -0.0017583941012394029 -16.030320241161554 0.003585854004478236 0.999958171284751 -0.00841585969072729 -0.10966616701503991 0.0017886235598441296 0.008409497223895569 0.9999631175591707 -0.19433207016647025 256 | 000255 000265 0.9999910771949949 -0.0037912360867006474 -0.0018845389321767321 -16.044262118911345 0.003775720245291576 0.9999593683112059 -0.008169451841678362 -0.11094048712319844 0.0019154343184055974 0.00816226226209064 0.9999648741650244 -0.19292320684479064 257 | 000256 000266 0.9999924563363122 -0.003661284509382501 -0.0011999321054142872 -16.067690867992354 0.003652231548779296 0.999965427997154 -0.007462046689594311 -0.10740181353822277 0.0012272118088907616 0.0074576131248131165 0.9999713507987815 -0.19285350203018237 258 | 000257 000267 0.9999921564813156 -0.0038082273218536635 -0.001115825408097553 -16.0728243990791 0.003799907606837912 0.9999657193203473 -0.007366138042698305 -0.1032733179550921 0.0011438392044857539 0.007361841781080087 0.9999722438904376 -0.21423249212610398 259 | 000258 000268 0.9999929331215306 -0.003315301036016643 -0.001710937200377196 -16.102665660295663 0.0033021457603580997 0.9999653744655873 -0.007635294335232489 -0.11147906362526994 0.001736191922053086 0.007629594127982305 0.9999693285968188 -0.22531569566886545 260 | 000259 000269 0.9999920862447309 -0.003403224084651256 -0.002063921225561469 -16.12026326872933 0.0033843609241979433 0.999953111359748 -0.0090753487370228 -0.11546776991304003 0.0020947105924663777 0.009068293055442218 0.9999567779394734 -0.2143960554926289 261 | 000260 000270 0.9999896683812757 -0.003698236546394512 -0.0026587967454232064 -16.127709418209687 0.0036748066990855252 0.9999548018736522 -0.00876369104342395 -0.10732206114598136 0.0026910872243075382 0.008753831578442556 0.9999580723450077 -0.21584747992378317 262 | -------------------------------------------------------------------------------- /data/nuscenes_data.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | import torchvision 4 | from torch.utils.data import Dataset 5 | 6 | import os 7 | import glob 8 | import numpy as np 9 | import MinkowskiEngine as ME 10 | 11 | from models.utils import generate_rand_rotm, generate_rand_trans, apply_transform 12 | 13 | def read_nuscenes_bin_voxel(filename, npoints=None, voxel_size=None): 14 | ''' 15 | Input: 16 | filename 17 | npoints: int/None 18 | voxel_size: int/None 19 | ''' 20 | scan = np.fromfile(filename, dtype=np.float32, count=-1).reshape([-1,5]) 21 | scan = scan[:,:3] 22 | 23 | if voxel_size is not None: 24 | _, sel = ME.utils.sparse_quantize(scan / voxel_size, return_index=True) 25 | scan = scan[sel] 26 | if npoints 
is None: 27 | return scan.astype('float32') 28 | 29 | N = scan.shape[0] 30 | if N >= npoints: 31 | sample_idx = np.random.choice(N, npoints, replace=False) 32 | else: 33 | sample_idx = np.concatenate((np.arange(N), np.random.choice(N, npoints-N, replace=True)), axis=-1) 34 | 35 | scan = scan[sample_idx, :].astype('float32') 36 | return scan 37 | 38 | class NuscenesDataset(Dataset): 39 | def __init__(self, root, seqs, npoints, voxel_size, data_list, augment=0.0): 40 | super(NuscenesDataset, self).__init__() 41 | 42 | self.root = root 43 | self.seqs = seqs 44 | self.npoints = npoints 45 | self.voxel_size = voxel_size 46 | self.data_list = data_list 47 | self.augment = augment 48 | self.dataset = self.make_dataset() 49 | 50 | def make_dataset(self): 51 | last_row = np.zeros((1,4), dtype=np.float32) 52 | last_row[:,3] = 1.0 53 | dataset = [] 54 | 55 | for seq in self.seqs: 56 | if seq == 'test': 57 | data_root = os.path.join(self.root, 'v1.0-test') 58 | else: 59 | data_root = os.path.join(self.root, 'v1.0-trainval') 60 | fn_pair_poses = os.path.join(self.data_list, seq + '.txt') 61 | with open(fn_pair_poses, 'r') as f: 62 | lines = f.readlines() 63 | for line in lines: 64 | data_dict = {} 65 | line = line.strip(' \n').split(' ') 66 | src_fn = os.path.join(data_root, line[0]) 67 | dst_fn = os.path.join(data_root, line[1]) 68 | values = [] 69 | for i in range(2, len(line)): 70 | values.append(float(line[i])) 71 | values = np.array(values).astype(np.float32) 72 | rela_pose = values.reshape(3,4) 73 | rela_pose = np.concatenate([rela_pose, last_row], axis = 0) 74 | data_dict['points1'] = src_fn 75 | data_dict['points2'] = dst_fn 76 | data_dict['Tr'] = rela_pose 77 | dataset.append(data_dict) 78 | 79 | return dataset 80 | 81 | def __getitem__(self, index): 82 | data_dict = self.dataset[index] 83 | src_points = read_nuscenes_bin_voxel(data_dict['points1'], self.npoints, self.voxel_size) 84 | dst_points = read_nuscenes_bin_voxel(data_dict['points2'], self.npoints, self.voxel_size) 85 | Tr = data_dict['Tr'] 86 | 87 | if np.random.rand() < self.augment: 88 | aug_T = np.zeros((4,4), dtype=np.float32) 89 | aug_T[3,3] = 1.0 90 | rand_rotm = generate_rand_rotm(1.0, 1.0, 45.0) 91 | aug_T[:3,:3] = rand_rotm 92 | src_points = apply_transform(src_points, aug_T) 93 | Tr = Tr.dot(np.linalg.inv(aug_T)) 94 | 95 | src_points = torch.from_numpy(src_points) 96 | dst_points = torch.from_numpy(dst_points) 97 | Tr = torch.from_numpy(Tr) 98 | R = Tr[:3,:3] 99 | t = Tr[:3,3] 100 | return src_points, dst_points, R, t 101 | 102 | def __len__(self): 103 | return len(self.dataset) -------------------------------------------------------------------------------- /models/PointUtils/points_utils.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from torch.autograd import Variable 3 | from torch.autograd import Function 4 | from torch.autograd.function import once_differentiable 5 | import torch.nn as nn 6 | 7 | from typing import Union 8 | 9 | import point_utils_cuda 10 | 11 | class FurthestPointSampling(Function): 12 | @staticmethod 13 | def forward(ctx, xyz: torch.Tensor, npoint: int) -> torch.Tensor: 14 | ''' 15 | ctx: 16 | xyz: [B,N,3] 17 | npoint: int 18 | ''' 19 | assert xyz.is_contiguous() 20 | 21 | B, N, _ = xyz.size() 22 | output = torch.cuda.IntTensor(B, npoint) 23 | temp = torch.cuda.FloatTensor(B, N).fill_(1e10) 24 | 25 | point_utils_cuda.furthest_point_sampling_wrapper(B, N, npoint, xyz, temp, output) 26 | return output 27 | 28 | @staticmethod 29 | def 
backward(ctx, grad_out=None): 30 | return None, None 31 | 32 | furthest_point_sample = FurthestPointSampling.apply 33 | 34 | class WeightedFurthestPointSampling(Function): 35 | @staticmethod 36 | def forward(ctx, xyz: torch.Tensor, weights: torch.Tensor, npoint: int) -> torch.Tensor: 37 | ''' 38 | ctx: 39 | xyz: [B,N,3] 40 | weights: [B,N] 41 | npoint: int 42 | ''' 43 | assert xyz.is_contiguous() 44 | assert weights.is_contiguous() 45 | B, N, _ = xyz.size() 46 | output = torch.cuda.IntTensor(B, npoint) 47 | temp = torch.cuda.FloatTensor(B, N).fill_(1e10) 48 | 49 | point_utils_cuda.weighted_furthest_point_sampling_wrapper(B, N, npoint, xyz, weights, temp, output) 50 | return output 51 | 52 | @staticmethod 53 | def backward(ctx, grad_out=None): 54 | return None, None, None 55 | 56 | weighted_furthest_point_sample = WeightedFurthestPointSampling.apply 57 | 58 | class GatherOperation(Function): 59 | @staticmethod 60 | def forward(ctx, features: torch.Tensor, idx: torch.Tensor) -> torch.Tensor: 61 | ''' 62 | ctx 63 | features: [B,C,N] 64 | idx: [B,npoint] 65 | ''' 66 | assert features.is_contiguous() 67 | assert idx.is_contiguous() 68 | 69 | B, npoint = idx.size() 70 | _, C, N = features.size() 71 | output = torch.cuda.FloatTensor(B, C, npoint) 72 | 73 | point_utils_cuda.gather_points_wrapper(B, C, N, npoint, features, idx, output) 74 | 75 | ctx.for_backwards = (idx, C, N) 76 | return output 77 | 78 | @staticmethod 79 | def backward(ctx, grad_out): 80 | idx, C, N = ctx.for_backwards 81 | B, npoint = idx.size() 82 | grad_features = Variable(torch.cuda.FloatTensor(B,C,N).zero_()) 83 | grad_out_data = grad_out.data.contiguous() 84 | point_utils_cuda.gather_points_grad_wrapper(B, C, N, npoint, grad_out_data, idx, grad_features.data) 85 | return grad_features, None 86 | 87 | gather_operation = GatherOperation.apply 88 | -------------------------------------------------------------------------------- /models/PointUtils/setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup 2 | from torch.utils.cpp_extension import BuildExtension, CUDAExtension 3 | 4 | setup( 5 | name='point_utils', 6 | ext_modules=[ 7 | CUDAExtension('point_utils_cuda', [ 8 | 'src/point_utils_api.cpp', 9 | 10 | 'src/furthest_point_sampling.cpp', 11 | 'src/furthest_point_sampling_gpu.cu', 12 | ], 13 | extra_compile_args={ 14 | 'cxx':['-g'], 15 | 'nvcc': ['-O2'] 16 | }) 17 | ], 18 | cmdclass={'build_ext':BuildExtension} 19 | ) -------------------------------------------------------------------------------- /models/PointUtils/src/cuda_utils.h: -------------------------------------------------------------------------------- 1 | #ifndef _CUDA_UTILS_H 2 | #define _CUDA_UTILS_H 3 | 4 | #include <cmath> 5 | #include <algorithm> 6 | 7 | #define TOTAL_THREADS 1024 8 | #define THREADS_PER_BLOCK 256 9 | #define DIVUP(m,n) ((m) / (n)+((m) % (n) > 0)) 10 | 11 | #define CHECK_CUDA(x) TORCH_CHECK(x.is_cuda(), #x " must be a CUDA tensor.") 12 | #define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous.") 13 | #define CHECK_CONTIGUOUS_CUDA(x) \ 14 | CHECK_CUDA(x); \ 15 | CHECK_CONTIGUOUS(x) 16 | 17 | /*** 18 | * Calculate a proper thread count: 19 | * if work_size < TOTAL_THREADS, use the largest power of 2 <= work_size, 20 | * else use TOTAL_THREADS 21 | ***/ 22 | inline int opt_n_threads(int work_size) { 23 | // log2(work_size) 24 | const int pow_2 = std::log(static_cast<double>(work_size)) / std::log(2.0); 25 | // 1 * 2^(pow_2) 26 | return std::max(std::min(1 << pow_2, TOTAL_THREADS), 1); 27 | } 28 | 29 | #endif
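A minimal usage sketch of the Python wrappers defined in `points_utils.py` above (an illustration, not a file from the repo; it assumes `point_utils` has been built via `python setup.py install` and that a CUDA device is available). This mirrors the pattern `KeypointDetector` in `models/layers.py` uses for its FPS branch:
```
import torch

from models.PointUtils.points_utils import furthest_point_sample, gather_operation

# Two random clouds, [B,N,3]; the CUDA wrappers expect contiguous float32 GPU tensors.
xyz = torch.rand(2, 8192, 3).cuda()

# Indices of 1024 furthest-point samples per cloud -> int32 tensor of shape [B,1024].
fps_idx = furthest_point_sample(xyz, 1024)

# gather_operation works on channel-first features [B,C,N]; here we gather the coordinates.
sampled = gather_operation(xyz.permute(0, 2, 1).contiguous(), fps_idx)  # [B,3,1024]
sampled_xyz = sampled.permute(0, 2, 1).contiguous()                     # [B,1024,3]
```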
-------------------------------------------------------------------------------- /models/PointUtils/src/furthest_point_sampling.cpp: -------------------------------------------------------------------------------- 1 | #include <torch/serialize/tensor.h> 2 | #include <ATen/cuda/CUDAContext.h> 3 | #include <vector> 4 | // #include <THC/THC.h> 5 | 6 | #include "furthest_point_sampling_gpu.h" 7 | 8 | // extern THCState *state; 9 | 10 | int gather_points_wrapper_fast(int b, int c, int n, int npoints, 11 | at::Tensor points_tensor, at::Tensor idx_tensor, at::Tensor out_tensor) { 12 | const float *points = points_tensor.data<float>(); 13 | const int *idx = idx_tensor.data<int>(); 14 | float *out = out_tensor.data<float>(); 15 | 16 | cudaStream_t stream = at::cuda::getCurrentCUDAStream(); 17 | gather_points_kernel_launcher_fast(b, c, n, npoints, points, idx, out, stream); 18 | return 1; 19 | } 20 | 21 | int gather_points_grad_wrapper_fast(int b, int c, int n, int npoints, 22 | at::Tensor grad_out_tensor, at::Tensor idx_tensor, at::Tensor grad_points_tensor) { 23 | 24 | const float *grad_out = grad_out_tensor.data<float>(); 25 | const int *idx = idx_tensor.data<int>(); 26 | float *grad_points = grad_points_tensor.data<float>(); 27 | 28 | cudaStream_t stream = at::cuda::getCurrentCUDAStream(); 29 | gather_points_grad_kernel_launcher_fast(b, c, n, npoints, grad_out, idx, grad_points, stream); 30 | return 1; 31 | } 32 | 33 | int furthest_point_sampling_wrapper(int b, int n, int m, 34 | at::Tensor points_tensor, at::Tensor temp_tensor, at::Tensor idx_tensor) { 35 | 36 | const float *points = points_tensor.data<float>(); 37 | float *temp = temp_tensor.data<float>(); 38 | int *idx = idx_tensor.data<int>(); 39 | 40 | cudaStream_t stream = at::cuda::getCurrentCUDAStream(); 41 | furthest_point_sampling_kernel_launcher(b, n, m, points, temp, idx, stream); 42 | return 1; 43 | } 44 | 45 | int weighted_furthest_point_sampling_wrapper(int b, int n, int m, 46 | at::Tensor points_tensor, at::Tensor weights_tensor, at::Tensor temp_tensor, at::Tensor idx_tensor) { 47 | 48 | const float *points = points_tensor.data<float>(); 49 | const float *weights = weights_tensor.data<float>(); 50 | float *temp = temp_tensor.data<float>(); 51 | int *idx = idx_tensor.data<int>(); 52 | 53 | cudaStream_t stream = at::cuda::getCurrentCUDAStream(); 54 | weighted_furthest_point_sampling_kernel_launcher(b, n, m, points, weights, temp, idx, stream); 55 | return 1; 56 | } -------------------------------------------------------------------------------- /models/PointUtils/src/furthest_point_sampling_gpu.cu: -------------------------------------------------------------------------------- 1 | #include <stdio.h> 2 | #include <stdlib.h> 3 | 4 | #include "cuda_utils.h" 5 | #include "furthest_point_sampling_gpu.h" 6 | 7 | __global__ void gather_points_kernel_fast(int b, int c, int n, int m, 8 | const float *__restrict__ points, const int *__restrict__ idx, float *__restrict__ out) { 9 | // points: [B,C,N] 10 | // idx: [B,M] 11 | 12 | int bs_idx = blockIdx.z; 13 | int c_idx = blockIdx.y; 14 | int pt_idx = blockIdx.x * blockDim.x + threadIdx.x; 15 | if (bs_idx >= b || c_idx >= c || pt_idx >= m) return; 16 | // Offset the pointers to the current batch/channel/point 17 | out += bs_idx * c * m + c_idx * m + pt_idx; // curr batch + channel + point 18 | idx += bs_idx * m + pt_idx; // curr batch + point 19 | points += bs_idx * c * n + c_idx * n; // curr batch + channel 20 | out[0] = points[idx[0]]; // copy the gathered value for this output position
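// In PyTorch terms, this kernel computes out[b,c,j] = points[b, c, idx[b,j]]:
// each thread handles one (batch, channel, point) output element and reads
// the indexed column from the same batch/channel row of `points`.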
21 | } 22 | 23 | void gather_points_kernel_launcher_fast(int b, int c, int n, int npoints, 24 | const float *points, const int *idx, float *out, cudaStream_t stream) { 25 | // points: [B,C,N] 26 | // idx: [B,npoints] 27 | cudaError_t err; 28 | // dim3 specifies the grid/block dimensions 29 | dim3 blocks(DIVUP(npoints, THREADS_PER_BLOCK), c, b); // DIVUP: ceil(npoints / THREADS_PER_BLOCK) 30 | dim3 threads(THREADS_PER_BLOCK); // unspecified dimensions default to 1 31 | 32 | gather_points_kernel_fast<<<blocks, threads, 0, stream>>>(b, c, n, npoints, points, idx, out); 33 | 34 | err = cudaGetLastError(); 35 | if (cudaSuccess != err) { 36 | fprintf(stderr, "CUDA kernel failed: %s\n", cudaGetErrorString(err)); 37 | exit(-1); 38 | } 39 | } 40 | 41 | __global__ void gather_points_grad_kernel_fast(int b, int c, int n, int m, 42 | const float *__restrict__ grad_out, const int *__restrict__ idx, float *__restrict__ grad_points) { 43 | // grad_out: [B,C,M] 44 | // idx: [B,M] 45 | int bs_idx = blockIdx.z; 46 | int c_idx = blockIdx.y; 47 | int pt_idx = blockIdx.x * blockDim.x + threadIdx.x; 48 | if (bs_idx >= b || c_idx >= c || pt_idx >= m) return; 49 | 50 | grad_out += bs_idx * c * m + c_idx * m + pt_idx; 51 | idx += bs_idx * m + pt_idx; 52 | grad_points += bs_idx * c * n + c_idx * n; 53 | 54 | atomicAdd(grad_points + idx[0], grad_out[0]); // accumulate the output grad into the indexed source point 55 | } 56 | 57 | void gather_points_grad_kernel_launcher_fast(int b, int c, int n, int npoints, 58 | const float *grad_out, const int *idx, float *grad_points, cudaStream_t stream) { 59 | // grad_out: [B,C,npoints] 60 | // idx: [B,npoints] 61 | 62 | cudaError_t err; 63 | dim3 blocks(DIVUP(npoints, THREADS_PER_BLOCK), c, b); 64 | dim3 threads(THREADS_PER_BLOCK); 65 | 66 | gather_points_grad_kernel_fast<<<blocks, threads, 0, stream>>>(b, c, n, npoints, grad_out, idx, grad_points); 67 | 68 | err = cudaGetLastError(); 69 | if (cudaSuccess != err) { 70 | fprintf(stderr, "CUDA kernel failed: %s\n", cudaGetErrorString(err)); 71 | exit(-1); 72 | } 73 | } 74 | 75 | __device__ void __update(float *__restrict__ dists, int *__restrict__ dists_i, int idx1, int idx2) { 76 | const float v1 = dists[idx1], v2 = dists[idx2]; 77 | const int i1 = dists_i[idx1], i2 = dists_i[idx2]; 78 | dists[idx1] = max(v1, v2); 79 | dists_i[idx1] = v2 > v1 ?
i2 : i1; 80 | } 81 | 82 | // Each kernel instance runs on a single thread; the launcher below configures the grid and launches it 83 | // Grid size and block size are all defined in the launcher 84 | template <unsigned int block_size> 85 | __global__ void furthest_point_sampling_kernel(int b, int n, int m, 86 | const float *__restrict__ dataset, float *__restrict__ temp, int *__restrict__ idxs) { 87 | // dataset: [B,N,3] 88 | // temp: [B,N] 89 | // idxs: [B,M] 90 | // All of the above live in global memory 91 | 92 | if (m <= 0) return; 93 | // assign shared memory 94 | __shared__ float dists[block_size]; 95 | __shared__ int dists_i[block_size]; 96 | 97 | int batch_index = blockIdx.x; 98 | // Point to the current batch (one block per batch item) 99 | dataset += batch_index * n * 3; 100 | temp += batch_index * n; 101 | idxs += batch_index * m; 102 | 103 | // threadIdx of current thread 104 | int tid = threadIdx.x; 105 | const int stride = block_size; // number of threads in one block 106 | 107 | int old = 0; 108 | if (threadIdx.x == 0) 109 | idxs[0] = old; // Initialize the first sampled index 110 | 111 | __syncthreads(); 112 | // loop to select the remaining m-1 sampled points 113 | for (int j = 1; j < m; j++) { 114 | // printf("curr index: %d\n", j); 115 | int besti = 0; 116 | float best = -1; 117 | // Coordinate of the last sampled point 118 | float x1 = dataset[old * 3 + 0]; 119 | float y1 = dataset[old * 3 + 1]; 120 | float z1 = dataset[old * 3 + 2]; 121 | // Strided loop: the threads of the block compute the distances in parallel 122 | for (int k = tid; k < n; k += stride) { 123 | // squared distance to the last sampled point 124 | float x2, y2, z2; 125 | x2 = dataset[k * 3 + 0]; 126 | y2 = dataset[k * 3 + 1]; 127 | z2 = dataset[k * 3 + 2]; 128 | 129 | float d = (x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1) + (z2 - z1) * (z2 - z1); 130 | float d2 = min(d, temp[k]); 131 | temp[k] = d2; // update the running min-distance to the sampled set 132 | besti = d2 > best ? k : besti; // If d2 > best, besti = k (idx) 133 | best = d2 > best ? d2 : best; // If d2 > best, best = d2 (distance) 134 | } 135 | // dists[tid] stores the largest distance over this thread's strided subset 136 | dists[tid] = best; 137 | dists_i[tid] = besti; 138 | __syncthreads(); // wait until all threads have computed their distances 139 | // reduce over the block to find the index of the largest distance
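// The unrolled chain below is a standard shared-memory tree reduction:
// at each step the lower half of the active threads merges dists[tid + s]
// into dists[tid] via __update (keeping the larger distance and its index),
// halving s from block_size/2 down to 1; afterwards dists_i[0] holds the
// index of the farthest remaining point for this batch item.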
140 | if (block_size >= 1024) { 141 | if (tid < 512) { 142 | __update(dists, dists_i, tid, tid + 512); 143 | } 144 | __syncthreads(); 145 | } 146 | if (block_size >= 512) { 147 | if (tid < 256) { 148 | __update(dists, dists_i, tid, tid + 256); 149 | } 150 | __syncthreads(); 151 | } 152 | if (block_size >= 256) { 153 | if (tid < 128) { 154 | __update(dists, dists_i, tid, tid + 128); 155 | } 156 | __syncthreads(); 157 | } 158 | if (block_size >= 128) { 159 | if (tid < 64) { 160 | __update(dists, dists_i, tid, tid + 64); 161 | } 162 | __syncthreads(); 163 | } 164 | if (block_size >= 64) { 165 | if (tid < 32) { 166 | __update(dists, dists_i, tid, tid + 32); 167 | } 168 | __syncthreads(); 169 | } 170 | if (block_size >= 32) { 171 | if (tid < 16) { 172 | __update(dists, dists_i, tid, tid + 16); 173 | } 174 | __syncthreads(); 175 | } 176 | if (block_size >= 16) { 177 | if (tid < 8) { 178 | __update(dists, dists_i, tid, tid + 8); 179 | } 180 | __syncthreads(); 181 | } 182 | if (block_size >= 8) { 183 | if (tid < 4) { 184 | __update(dists, dists_i, tid, tid + 4); 185 | } 186 | __syncthreads(); 187 | } 188 | if (block_size >= 4) { 189 | if (tid < 2) { 190 | __update(dists, dists_i, tid, tid + 2); 191 | } 192 | __syncthreads(); 193 | } 194 | if (block_size >= 2) { 195 | if (tid < 1) { 196 | __update(dists, dists_i, tid, tid + 1); 197 | } 198 | __syncthreads(); 199 | } 200 | 201 | // All threads read the new farthest point (old) from shared memory. 202 | old = dists_i[0]; // update last point index 203 | if (tid == 0) 204 | idxs[j] = old; 205 | } 206 | } 207 | 208 | void furthest_point_sampling_kernel_launcher(int b, int n, int m, 209 | const float *dataset, float *temp, int *idxs, cudaStream_t stream) { 210 | // dataset: [B,N,3] 211 | // temp: [B,N] 212 | 213 | cudaError_t err; 214 | unsigned int n_threads = opt_n_threads(n); // compute a proper thread number 215 | 216 | switch (n_threads) { 217 | // Kernel launch syntax: Func<<<Dg, Db, Ns, s>>>(args) 218 | // Dg: grid size (how many blocks in the grid; here one block per batch item) 219 | // Db: block size (how many threads in the block) 220 | // Ns: memory for shared values, default 0 221 | // s: stream 222 | case 1024: 223 | furthest_point_sampling_kernel<1024><<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs); break; 224 | case 512: 225 | furthest_point_sampling_kernel<512><<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs); break; 226 | case 256: 227 | furthest_point_sampling_kernel<256><<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs); break; 228 | case 128: 229 | furthest_point_sampling_kernel<128><<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs); break; 230 | case 64: 231 | furthest_point_sampling_kernel<64><<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs); break; 232 | case 32: 233 | furthest_point_sampling_kernel<32><<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs); break; 234 | case 16: 235 | furthest_point_sampling_kernel<16><<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs); break; 236 | case 8: 237 | furthest_point_sampling_kernel<8><<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs); break; 238 | case 4: 239 | furthest_point_sampling_kernel<4><<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs); break; 240 | case 2: 241 | furthest_point_sampling_kernel<2><<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs); break; 242 | case 1: 243 | furthest_point_sampling_kernel<1><<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs); break; 244 | default: 245 | furthest_point_sampling_kernel<512><<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs); 246 | } 247 | err = cudaGetLastError(); 248 | if (cudaSuccess != err) { 249 | fprintf(stderr, "CUDA kernel failed: %s\n", cudaGetErrorString(err)); 250 | exit(-1); 251 | } 252 | } 253 | 254 | template <unsigned int block_size> 255 | __global__ void weighted_furthest_point_sampling_kernel(int b, int n,
int m, 256 | const float *__restrict__ dataset, const float *__restrict__ weights, float *__restrict__ temp, int *__restrict__ idxs) { 257 | // dataset: [B,N,3] 258 | // weights: [B,N] 259 | // temp: [B,N] 260 | 261 | if (m <= 0) return; 262 | 263 | __shared__ float dists[block_size]; 264 | __shared__ int dists_i[block_size]; 265 | 266 | int batch_index = blockIdx.x; 267 | dataset += batch_index * n * 3; 268 | weights += batch_index * n; 269 | temp += batch_index * n; 270 | idxs += batch_index * m; 271 | 272 | int tid = threadIdx.x; 273 | const int stride = block_size; 274 | 275 | int old = 0; 276 | if (threadIdx.x == 0) 277 | idxs[0] = old; 278 | 279 | __syncthreads(); 280 | 281 | for (int j = 1; j < m; j++) { 282 | 283 | int besti = 0; 284 | float best = -1; 285 | 286 | float x1 = dataset[old * 3 + 0]; 287 | float y1 = dataset[old * 3 + 1]; 288 | float z1 = dataset[old * 3 + 2]; 289 | 290 | float w1 = weights[old]; 291 | 292 | for (int k = tid; k < n; k += stride) { 293 | float x2, y2, z2, w2; 294 | x2 = dataset[k * 3 + 0]; 295 | y2 = dataset[k * 3 + 1]; 296 | z2 = dataset[k * 3 + 2]; 297 | w2 = weights[k]; 298 | 299 | float d = w2 * ((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1) + (z2 - z1) * (z2 - z1)); 300 | float d2 = min(d, temp[k]); 301 | temp[k] = d2; 302 | besti = d2 > best ? k : besti; 303 | best = d2 > best ? d2 : best; 304 | } 305 | dists[tid] = best; 306 | dists_i[tid] = besti; 307 | __syncthreads(); 308 | 309 | if (block_size >= 1024) { 310 | if (tid < 512) { 311 | __update(dists, dists_i, tid, tid + 512); 312 | } 313 | __syncthreads(); 314 | } 315 | if (block_size >= 512) { 316 | if (tid < 256) { 317 | __update(dists, dists_i, tid, tid + 256); 318 | } 319 | __syncthreads(); 320 | } 321 | if (block_size >= 256) { 322 | if (tid < 128) { 323 | __update(dists, dists_i, tid, tid + 128); 324 | } 325 | __syncthreads(); 326 | } 327 | if (block_size >= 128) { 328 | if (tid < 64) { 329 | __update(dists, dists_i, tid, tid + 64); 330 | } 331 | __syncthreads(); 332 | } 333 | if (block_size >= 64) { 334 | if (tid < 32) { 335 | __update(dists, dists_i, tid, tid + 32); 336 | } 337 | __syncthreads(); 338 | } 339 | if (block_size >= 32) { 340 | if (tid < 16) { 341 | __update(dists, dists_i, tid, tid + 16); 342 | } 343 | __syncthreads(); 344 | } 345 | if (block_size >= 16) { 346 | if (tid < 8) { 347 | __update(dists, dists_i, tid, tid + 8); 348 | } 349 | __syncthreads(); 350 | } 351 | if (block_size >= 8) { 352 | if (tid < 4) { 353 | __update(dists, dists_i, tid, tid + 4); 354 | } 355 | __syncthreads(); 356 | } 357 | if (block_size >= 4) { 358 | if (tid < 2) { 359 | __update(dists, dists_i, tid, tid + 2); 360 | } 361 | __syncthreads(); 362 | } 363 | if (block_size >= 2) { 364 | if (tid < 1) { 365 | __update(dists, dists_i, tid, tid + 1); 366 | } 367 | __syncthreads(); 368 | } 369 | 370 | // All threads update a single new point (old). 
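// Note the difference from the unweighted kernel: the candidate distance d
// above was scaled by w2 = weights[k], so points with larger weights appear
// "farther" and are preferentially kept by the reduction (weight-biased FPS).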
371 | old = dists_i[0]; // update last point index 372 | if (tid == 0) 373 | idxs[j] = old; 374 | } 375 | } 376 | 377 | void weighted_furthest_point_sampling_kernel_launcher(int b, int n, int m, 378 | const float *dataset, const float *weights, float *temp, int *idxs, cudaStream_t stream) { 379 | 380 | cudaError_t err; 381 | unsigned int n_threads = opt_n_threads(n); // compute a proper thread number 382 | 383 | switch (n_threads) { 384 | // Kernel launch syntax: Func<<<Dg, Db, Ns, s>>>(args) 385 | // Dg: grid size (how many blocks in the grid; here one block per batch item) 386 | // Db: block size (how many threads in the block) 387 | // Ns: memory for shared values, default 0 388 | // s: stream 389 | case 1024: 390 | weighted_furthest_point_sampling_kernel<1024><<<b, n_threads, 0, stream>>>(b, n, m, dataset, weights, temp, idxs); break; 391 | case 512: 392 | weighted_furthest_point_sampling_kernel<512><<<b, n_threads, 0, stream>>>(b, n, m, dataset, weights, temp, idxs); break; 393 | case 256: 394 | weighted_furthest_point_sampling_kernel<256><<<b, n_threads, 0, stream>>>(b, n, m, dataset, weights, temp, idxs); break; 395 | case 128: 396 | weighted_furthest_point_sampling_kernel<128><<<b, n_threads, 0, stream>>>(b, n, m, dataset, weights, temp, idxs); break; 397 | case 64: 398 | weighted_furthest_point_sampling_kernel<64><<<b, n_threads, 0, stream>>>(b, n, m, dataset, weights, temp, idxs); break; 399 | case 32: 400 | weighted_furthest_point_sampling_kernel<32><<<b, n_threads, 0, stream>>>(b, n, m, dataset, weights, temp, idxs); break; 401 | case 16: 402 | weighted_furthest_point_sampling_kernel<16><<<b, n_threads, 0, stream>>>(b, n, m, dataset, weights, temp, idxs); break; 403 | case 8: 404 | weighted_furthest_point_sampling_kernel<8><<<b, n_threads, 0, stream>>>(b, n, m, dataset, weights, temp, idxs); break; 405 | case 4: 406 | weighted_furthest_point_sampling_kernel<4><<<b, n_threads, 0, stream>>>(b, n, m, dataset, weights, temp, idxs); break; 407 | case 2: 408 | weighted_furthest_point_sampling_kernel<2><<<b, n_threads, 0, stream>>>(b, n, m, dataset, weights, temp, idxs); break; 409 | case 1: 410 | weighted_furthest_point_sampling_kernel<1><<<b, n_threads, 0, stream>>>(b, n, m, dataset, weights, temp, idxs); break; 411 | default: 412 | weighted_furthest_point_sampling_kernel<512><<<b, n_threads, 0, stream>>>(b, n, m, dataset, weights, temp, idxs); 413 | } 414 | err = cudaGetLastError(); 415 | if (cudaSuccess != err) { 416 | fprintf(stderr, "CUDA kernel failed: %s\n", cudaGetErrorString(err)); 417 | exit(-1); 418 | } 419 | } -------------------------------------------------------------------------------- /models/PointUtils/src/furthest_point_sampling_gpu.h: -------------------------------------------------------------------------------- 1 | #ifndef _FURTHEST_POINT_SAMPLING_H 2 | #define _FURTHEST_POINT_SAMPLING_H 3 | 4 | #include <torch/serialize/tensor.h> 5 | #include <ATen/cuda/CUDAContext.h> 6 | #include <vector> 7 | 8 | int gather_points_wrapper_fast(int b, int c, int n, int npoints, 9 | at::Tensor points_tensor, at::Tensor idx_tensor, at::Tensor out_tensor); 10 | 11 | void gather_points_kernel_launcher_fast(int b, int c, int n, int npoints, 12 | const float *points, const int *idx, float *out, cudaStream_t stream); 13 | 14 | int gather_points_grad_wrapper_fast(int b, int c, int n, int npoints, 15 | at::Tensor grad_out_tensor, at::Tensor idx_tensor, at::Tensor grad_points_tensor); 16 | 17 | void gather_points_grad_kernel_launcher_fast(int b, int c, int n, int npoints, 18 | const float *grad_out, const int *idx, float *grad_points, cudaStream_t stream); 19 | 20 | int furthest_point_sampling_wrapper(int b, int n, int m, 21 | at::Tensor points_tensor, at::Tensor temp_tensor, at::Tensor idx_tensor); 22 | 23 | int weighted_furthest_point_sampling_wrapper(int b, int n, int m, 24 | at::Tensor points_tensor, at::Tensor weights_tensor, at::Tensor temp_tensor, at::Tensor idx_tensor); 25 | 26 | void
furthest_point_sampling_kernel_launcher(int b, int n, int m, 27 | const float *dataset, float *temp, int *idxs, cudaStream_t stream); 28 | 29 | void weighted_furthest_point_sampling_kernel_launcher(int b, int n, int m, 30 | const float *dataset, const float *weights, float *temp, int *idxs, cudaStream_t stream); 31 | 32 | #endif -------------------------------------------------------------------------------- /models/PointUtils/src/point_utils_api.cpp: -------------------------------------------------------------------------------- 1 | #include <torch/serialize/tensor.h> 2 | #include <torch/extension.h> 3 | 4 | #include "furthest_point_sampling_gpu.h" 5 | 6 | PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { 7 | 8 | m.def("gather_points_wrapper", &gather_points_wrapper_fast, "gather_points_wrapper_fast"); 9 | m.def("gather_points_grad_wrapper", &gather_points_grad_wrapper_fast, "gather_points_grad_wrapper_fast"); 10 | 11 | m.def("furthest_point_sampling_wrapper", &furthest_point_sampling_wrapper, "furthest_point_sampling_wrapper"); 12 | m.def("weighted_furthest_point_sampling_wrapper", &weighted_furthest_point_sampling_wrapper, "weighted_furthest_point_sampling_wrapper"); 13 | } -------------------------------------------------------------------------------- /models/layers.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import numpy as np 3 | import torch.nn as nn 4 | import torch.nn.functional as F 5 | 6 | from .utils import furthest_point_sample, weighted_furthest_point_sample, gather_operation 7 | from pytorch3d.ops import knn_points, knn_gather 8 | 9 | def knn_group(xyz1, xyz2, features2, k): 10 | ''' 11 | Input: 12 | xyz1: query points, [B,M,3] 13 | xyz2: database points, [B,N,3] 14 | features2: [B,C,N] 15 | k: int 16 | Output: 17 | grouped_features: [B,4+C,M,k] 18 | knn_xyz: [B,M,k,3] 19 | ''' 20 | _, knn_idx, knn_xyz = knn_points(xyz1, xyz2, K=k, return_nn=True) 21 | rela_xyz = knn_xyz - xyz1.unsqueeze(2).repeat(1,1,k,1) # [B,M,k,3] 22 | rela_dist = torch.norm(rela_xyz, dim=-1, keepdim=True) # [B,M,k,1] 23 | grouped_features = torch.cat([rela_xyz,rela_dist], dim=-1) 24 | if features2 is not None: 25 | knn_features = knn_gather(features2.permute(0,2,1).contiguous(), knn_idx) 26 | grouped_features = torch.cat([rela_xyz,rela_dist,knn_features],dim=-1) # [B,M,k,4+C] 27 | return grouped_features.permute(0,3,1,2).contiguous(), knn_xyz 28 | 29 | def calc_cosine_similarity(desc1, desc2): 30 | ''' 31 | Input: 32 | desc1: [B,N,*,C] 33 | desc2: [B,N,*,C] 34 | Ret: 35 | similarity: [B,N,*] 36 | ''' 37 | inner_product = torch.sum(torch.mul(desc1, desc2), dim=-1, keepdim=False) 38 | norm_1 = torch.norm(desc1, dim=-1, keepdim=False) 39 | norm_2 = torch.norm(desc2, dim=-1, keepdim=False) 40 | similarity = inner_product/(torch.mul(norm_1, norm_2)+1e-6) 41 | return similarity 42 | 43 | class KeypointDetector(nn.Module): 44 | ''' 45 | Params: 46 | nsample: number of sampled points 47 | k: k nearest neighbors 48 | in_channels: input channel number 49 | out_channels: list of output channel numbers 50 | fps: use furthest point sampling 51 | Input: 52 | xyz: [B,N,3] 53 | features: [B,N,C_in] 54 | weights: None / [B,N] 55 | Output: 56 | keypoints: [B,M,3] 57 | sigmas: [B,M] (per-keypoint uncertainty) 58 | attentive_feature: [B,C_o,M] 59 | grouped_features: [B,C_in+4,M,k] 60 | attentive_feature_map: [B,C_o,M,k] 61 | ''' 62 | def __init__(self, nsample, k, in_channels, out_channels, fps=True): 63 | super(KeypointDetector, self).__init__() 64 | 65 | self.nsample = nsample 66 | self.k = k 67 | self.fps = fps 68 | 69 | layers = [] 70 |
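# The loop below builds a shared MLP over the grouped neighbor features:
# each Conv2d with kernel_size=1 acts pointwise on the [B,C,M,k] tensor,
# so the same weights are applied at every one of the M*k positions.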
out_channels = [in_channels+4, *out_channels] 71 | for i in range(1, len(out_channels)): 72 | layers += [nn.Conv2d(out_channels[i-1], out_channels[i], kernel_size=1, bias=False), 73 | nn.BatchNorm2d(out_channels[i]), 74 | nn.ReLU()] 75 | self.convs = nn.Sequential(*layers) 76 | self.C_o1 = out_channels[-1] 77 | 78 | self.mlp1 = nn.Sequential(nn.Conv1d(self.C_o1, self.C_o1, kernel_size=1), 79 | nn.BatchNorm1d(self.C_o1), 80 | nn.ReLU()) 81 | self.mlp2 = nn.Sequential(nn.Conv1d(self.C_o1, self.C_o1, kernel_size=1), 82 | nn.BatchNorm1d(self.C_o1), 83 | nn.ReLU()) 84 | self.mlp3 = nn.Sequential(nn.Conv1d(self.C_o1, 1, kernel_size=1)) 85 | 86 | self.softplus = nn.Softplus() 87 | 88 | def forward(self, xyz, features, weights=None): 89 | # Use FPS or random sampling 90 | if self.fps: 91 | # Use FPS or WFPS 92 | if weights is None: 93 | fps_idx = furthest_point_sample(xyz, self.nsample) 94 | sampled_xyz = gather_operation(xyz.permute(0,2,1).contiguous(), fps_idx).permute(0,2,1).contiguous() 95 | else: 96 | fps_idx = weighted_furthest_point_sample(xyz, weights, self.nsample) 97 | sampled_xyz = gather_operation(xyz.permute(0,2,1).contiguous(), fps_idx).permute(0,2,1).contiguous() 98 | else: 99 | N = xyz.shape[1] 100 | rand_idx = torch.randperm(N)[:self.nsample] 101 | sampled_xyz = xyz[:,rand_idx,:] 102 | 103 | grouped_features, knn_xyz = knn_group(sampled_xyz, xyz, features, self.k) # [B,4+C1,M,k] [B,M,k,3] 104 | embedding = self.convs(grouped_features) 105 | x1 = torch.max(embedding, dim=1, keepdim=False)[0] # [B,M,k] 106 | attentive_weights = F.softmax(x1, dim=-1) # [B,M,k] 107 | 108 | weights_xyz = attentive_weights.unsqueeze(-1).repeat(1,1,1,3) 109 | keypoints = torch.sum(torch.mul(weights_xyz, knn_xyz),dim=2,keepdim=False) # [B,M,3] 110 | 111 | weights_feature = attentive_weights.unsqueeze(1).repeat(1,self.C_o1,1,1) 112 | attentive_feature_map = torch.mul(embedding, weights_feature) # [B,C2,M,k] 113 | attentive_feature = torch.sum(attentive_feature_map, dim=-1, keepdim=False) 114 | 115 | sigmas = self.mlp3(self.mlp2(self.mlp1(attentive_feature))) 116 | sigmas = self.softplus(sigmas) + 0.001 117 | sigmas = sigmas.squeeze(1) 118 | 119 | return keypoints, sigmas, attentive_feature, grouped_features, attentive_feature_map 120 | 121 | class DescExtractor(nn.Module): 122 | ''' 123 | Params: 124 | in_channels: input channel number 125 | out_channels: output channel number 126 | C_detector: channel number of keypoint detector (attentive feature map) 127 | desc_dim: dimension of descriptor 128 | Input: 129 | grouped_features: [B,C_in+4,M,k] 130 | attentive_feature_map: [B,C_detector,M,k] 131 | Output: 132 | desc: [B,desc_dim,M] 133 | ''' 134 | def __init__(self, in_channels, out_channels, C_detector, desc_dim): 135 | super(DescExtractor, self).__init__() 136 | 137 | layers = [] 138 | out_channels = [in_channels+4, *out_channels] 139 | for i in range(1, len(out_channels)): 140 | layers += [nn.Conv2d(out_channels[i-1], out_channels[i], kernel_size=1, bias=False), 141 | nn.BatchNorm2d(out_channels[i]), 142 | nn.ReLU()] 143 | self.convs = nn.Sequential(*layers) 144 | 145 | self.C_o1 = out_channels[-1] 146 | 147 | self.mlp1 = nn.Sequential(nn.Conv2d(2*self.C_o1+C_detector, out_channels[-2], kernel_size=1, bias=False), 148 | nn.BatchNorm2d(out_channels[-2]), 149 | nn.ReLU()) 150 | self.mlp2 = nn.Sequential(nn.Conv2d(out_channels[-2], desc_dim, kernel_size=1, bias=False), 151 | nn.BatchNorm2d(desc_dim), 152 | nn.ReLU()) 153 | 154 | def forward(self, grouped_features, attentive_feature_map): 155 | x1 = 
self.convs(grouped_features) 156 | x2 = torch.max(x1, dim=3, keepdim=True)[0] 157 | k = x1.shape[-1] 158 | x2 = x2.repeat(1,1,1,k) 159 | x2 = torch.cat((x2, x1),dim=1) # [B,2*C_o1,N,k] 160 | x2 = torch.cat((x2, attentive_feature_map), dim=1) 161 | x2 = self.mlp2(self.mlp1(x2)) 162 | desc = torch.max(x2, dim=3, keepdim=False)[0] 163 | return desc 164 | 165 | class CoarseReg(nn.Module): 166 | ''' 167 | Params: 168 | k: number of candidate keypoints 169 | in_channels: input channel number 170 | use_sim: use original similarity features 171 | use_neighbor: use neighbor aware similarity features 172 | Input: 173 | src_xyz: [B,N,3] 174 | src_desc: [B,C,N] 175 | dst_xyz: [B,N,3] 176 | dst_desc: [B,C,N] 177 | src_weights: [B,N] 178 | dst_weights: [B,N] 179 | Output: 180 | corres_xyz: [B,N,3] 181 | weights: [B,N] 182 | ''' 183 | def __init__(self, k, in_channels, use_sim=True, use_neighbor=True): 184 | super(CoarseReg, self).__init__() 185 | 186 | self.k = k 187 | 188 | self.use_sim = use_sim 189 | self.use_neighbor = use_neighbor 190 | 191 | if self.use_sim and self.use_neighbor: 192 | out_channels = [in_channels*2+16, in_channels*2, in_channels*2, in_channels*2] 193 | elif self.use_sim: 194 | out_channels = [in_channels*2+14, in_channels*2, in_channels*2, in_channels*2] 195 | elif self.use_neighbor: 196 | out_channels = [in_channels*2+14, in_channels*2, in_channels*2, in_channels*2] 197 | else: 198 | out_channels = [in_channels*2+12, in_channels*2, in_channels*2, in_channels*2] 199 | 200 | layers = [] 201 | 202 | for i in range(1, len(out_channels)): 203 | layers += [nn.Conv2d(out_channels[i-1], out_channels[i], kernel_size=1, bias=False), 204 | nn.BatchNorm2d(out_channels[i]), 205 | nn.ReLU()] 206 | self.convs_1 = nn.Sequential(*layers) 207 | 208 | out_channels_nbr = [in_channels+4, in_channels, in_channels, in_channels] 209 | self_layers = [] 210 | for i in range(1, len(out_channels_nbr)): 211 | self_layers += [nn.Conv2d(out_channels_nbr[i-1], out_channels_nbr[i], kernel_size=1, bias=False), 212 | nn.BatchNorm2d(out_channels_nbr[i]), 213 | nn.ReLU()] 214 | self.convs_2 = nn.Sequential(*self_layers) 215 | 216 | self.mlp1 = nn.Sequential(nn.Conv1d(in_channels*2, in_channels*2, kernel_size=1), 217 | nn.BatchNorm1d(in_channels*2), 218 | nn.ReLU()) 219 | self.mlp2 = nn.Sequential(nn.Conv1d(in_channels*2, in_channels*2, kernel_size=1), 220 | nn.BatchNorm1d(in_channels*2), 221 | nn.ReLU()) 222 | self.mlp3 = nn.Sequential(nn.Conv1d(in_channels*2, 1, kernel_size=1)) 223 | 224 | def forward(self, src_xyz, src_desc, dst_xyz, dst_desc, src_weights, dst_weights): 225 | src_desc = src_desc.permute(0,2,1).contiguous() 226 | dst_desc = dst_desc.permute(0,2,1).contiguous() 227 | _, src_knn_idx, src_knn_desc = knn_points(src_desc, dst_desc, K=self.k, return_nn=True) 228 | src_knn_xyz = knn_gather(dst_xyz, src_knn_idx) # [B,N,k,3] 229 | src_xyz_expand = src_xyz.unsqueeze(2).repeat(1,1,self.k,1) 230 | src_desc_expand = src_desc.unsqueeze(2).repeat(1,1,self.k,1) # [B,N,k,C] 231 | src_rela_xyz = src_knn_xyz - src_xyz_expand # [B,N,k,3] 232 | src_rela_dist = torch.norm(src_rela_xyz, dim=-1, keepdim=True) # [B,N,k,1] 233 | src_weights_expand = src_weights.unsqueeze(-1).unsqueeze(-1).repeat(1,1,self.k,1) # [B,N,k,1] 234 | src_knn_weights = knn_gather(dst_weights.unsqueeze(-1), src_knn_idx) # [B,N,k,1] 235 | 236 | if self.use_sim: 237 | # construct original similarity features 238 | dst_desc_expand_N = dst_desc.unsqueeze(2).repeat(1,1,src_xyz.shape[1],1) # [B,N2,N1,C] 239 | src_desc_expand_N = 
src_desc.unsqueeze(1).repeat(1,dst_xyz.shape[1],1,1) # [B,N2,N1,C] 240 | 241 | dst_src_cos = calc_cosine_similarity(dst_desc_expand_N, src_desc_expand_N) # [B,N2,N1] 242 | dst_src_cos_max = torch.max(dst_src_cos, dim=2, keepdim=True)[0] # [B,N2,1] 243 | dst_src_cos_norm = dst_src_cos/(dst_src_cos_max+1e-6) # [B,N2,N1] 244 | 245 | src_dst_cos = dst_src_cos.permute(0,2,1) # [B,N1,N2] 246 | src_dst_cos_max = torch.max(src_dst_cos, dim=2, keepdim=True)[0] # [B,N1,1] 247 | src_dst_cos_norm = src_dst_cos/(src_dst_cos_max+1e-6) # [B,N1,N2] 248 | 249 | dst_src_cos_knn = knn_gather(dst_src_cos_norm, src_knn_idx) # [B,N1,k,N1] 250 | dst_src_cos = torch.zeros(src_knn_xyz.shape[0], src_knn_xyz.shape[1], \ 251 | src_knn_xyz.shape[2]).cuda() # [B,N1,k] 252 | for i in range(src_xyz.shape[1]): 253 | dst_src_cos[:,i,:] = dst_src_cos_knn[:,i,:,i] # keep only the entries belonging to source point i (diagonal gather) 254 | 255 | src_dst_cos_knn = knn_gather(src_dst_cos_norm.permute(0,2,1), src_knn_idx) 256 | src_dst_cos = torch.zeros(src_knn_xyz.shape[0], src_knn_xyz.shape[1], \ 257 | src_knn_xyz.shape[2]).cuda() # [B,N1,k] 258 | for i in range(src_xyz.shape[1]): 259 | src_dst_cos[:,i,:] = src_dst_cos_knn[:,i,:,i] 260 | 261 | if self.use_neighbor: 262 | _, src_nbr_knn_idx, src_nbr_knn_xyz = knn_points(src_xyz, src_xyz, K=self.k, return_nn=True) 263 | src_nbr_knn_feats = knn_gather(src_desc, src_nbr_knn_idx) # [B,N,k,C] 264 | src_nbr_knn_rela_xyz = src_nbr_knn_xyz - src_xyz_expand # [B,N,k,3] 265 | src_nbr_knn_rela_dist = torch.norm(src_nbr_knn_rela_xyz, dim=-1, keepdim=True) # [B,N,k,1] 266 | src_nbr_feats = torch.cat([src_nbr_knn_feats, src_nbr_knn_rela_xyz, src_nbr_knn_rela_dist], dim=-1) 267 | 268 | _, dst_nbr_knn_idx, dst_nbr_knn_xyz = knn_points(dst_xyz, dst_xyz, K=self.k, return_nn=True) 269 | dst_nbr_knn_feats = knn_gather(dst_desc, dst_nbr_knn_idx) # [B,N,k,C] 270 | dst_xyz_expand = dst_xyz.unsqueeze(2).repeat(1,1,self.k,1) 271 | dst_nbr_knn_rela_xyz = dst_nbr_knn_xyz - dst_xyz_expand # [B,N,k,3] 272 | dst_nbr_knn_rela_dist = torch.norm(dst_nbr_knn_rela_xyz, dim=-1, keepdim=True) # [B,N,k,1] 273 | dst_nbr_feats = torch.cat([dst_nbr_knn_feats, dst_nbr_knn_rela_xyz, dst_nbr_knn_rela_dist], dim=-1) 274 | 275 | src_nbr_weights = self.convs_2(src_nbr_feats.permute(0,3,1,2).contiguous()) 276 | src_nbr_weights = torch.max(src_nbr_weights, dim=1, keepdim=False)[0] 277 | src_nbr_weights = F.softmax(src_nbr_weights, dim=-1) 278 | src_nbr_desc = torch.sum(torch.mul(src_nbr_knn_feats, src_nbr_weights.unsqueeze(-1)),dim=2, keepdim=False) 279 | 280 | dst_nbr_weights = self.convs_2(dst_nbr_feats.permute(0,3,1,2).contiguous()) 281 | dst_nbr_weights = torch.max(dst_nbr_weights, dim=1, keepdim=False)[0] 282 | dst_nbr_weights = F.softmax(dst_nbr_weights, dim=-1) 283 | dst_nbr_desc = torch.sum(torch.mul(dst_nbr_knn_feats, dst_nbr_weights.unsqueeze(-1)),dim=2, keepdim=False) 284 | 285 | dst_nbr_desc_expand_N = dst_nbr_desc.unsqueeze(2).repeat(1,1,src_xyz.shape[1],1) # [B,N2,N1,C] 286 | src_nbr_desc_expand_N = src_nbr_desc.unsqueeze(1).repeat(1,dst_xyz.shape[1],1,1) # [B,N2,N1,C] 287 | 288 | dst_src_nbr_cos = calc_cosine_similarity(dst_nbr_desc_expand_N, src_nbr_desc_expand_N) # [B,N2,N1] 289 | dst_src_nbr_cos_max = torch.max(dst_src_nbr_cos, dim=2, keepdim=True)[0] # [B,N2,1] 290 | dst_src_nbr_cos_norm = dst_src_nbr_cos/(dst_src_nbr_cos_max+1e-6) # [B,N2,N1] 291 | 292 | src_dst_nbr_cos = dst_src_nbr_cos.permute(0,2,1) # [B,N1,N2] 293 | src_dst_nbr_cos_max = torch.max(src_dst_nbr_cos, dim=2, keepdim=True)[0] # [B,N1,1] 294 | src_dst_nbr_cos_norm = src_dst_nbr_cos/(src_dst_nbr_cos_max+1e-6) # 
[B,N1,N2] 295 | 296 | dst_src_nbr_cos_knn = knn_gather(dst_src_nbr_cos_norm, src_knn_idx) # [B,N1,k,N1] 297 | dst_src_nbr_cos = torch.zeros(src_knn_xyz.shape[0], src_knn_xyz.shape[1], \ 298 | src_knn_xyz.shape[2]).to(src_knn_xyz.device) # [B,N1,k] 299 | for i in range(src_xyz.shape[1]): 300 | dst_src_nbr_cos[:,i,:] = dst_src_nbr_cos_knn[:,i,:,i] 301 | 302 | src_dst_nbr_cos_knn = knn_gather(src_dst_nbr_cos_norm.permute(0,2,1), src_knn_idx) 303 | src_dst_nbr_cos = torch.zeros(src_knn_xyz.shape[0], src_knn_xyz.shape[1], \ 304 | src_knn_xyz.shape[2]).to(src_knn_xyz.device) # [B,N1,k] 305 | for i in range(src_xyz.shape[1]): 306 | src_dst_nbr_cos[:,i,:] = src_dst_nbr_cos_knn[:,i,:,i] 307 | 308 | geom_feats = torch.cat([src_rela_xyz, src_rela_dist, src_xyz_expand, src_knn_xyz],dim=-1) 309 | desc_feats = torch.cat([src_desc_expand, src_knn_desc, src_weights_expand, src_knn_weights],dim=-1) 310 | if self.use_sim and self.use_neighbor: 311 | similarity_feats = torch.cat([src_dst_cos.unsqueeze(-1), dst_src_cos.unsqueeze(-1), \ 312 | src_dst_nbr_cos.unsqueeze(-1), dst_src_nbr_cos.unsqueeze(-1)], dim=-1) 313 | elif self.use_sim: 314 | similarity_feats = torch.cat([src_dst_cos.unsqueeze(-1), dst_src_cos.unsqueeze(-1)],dim=-1) 315 | elif self.use_neighbor: 316 | similarity_feats = torch.cat([src_dst_nbr_cos.unsqueeze(-1), dst_src_nbr_cos.unsqueeze(-1)], dim=-1) 317 | else: 318 | similarity_feats = None 319 | 320 | if self.use_sim or self.use_neighbor: 321 | feats = torch.cat([geom_feats, desc_feats, similarity_feats],dim=-1) 322 | else: 323 | feats = torch.cat([geom_feats, desc_feats],dim=-1) 324 | 325 | feats = self.convs_1(feats.permute(0,3,1,2)) 326 | attentive_weights = torch.max(feats, dim=1)[0] 327 | attentive_weights = F.softmax(attentive_weights, dim=-1) # [B,N,k] 328 | corres_xyz = torch.sum(torch.mul(attentive_weights.unsqueeze(-1), src_knn_xyz), dim=2, keepdim=False) # [B,N,3] 329 | attentive_feats = torch.sum(torch.mul(attentive_weights.unsqueeze(1), feats), dim=-1, keepdim=False) # [B,C,N] 330 | weights = self.mlp3(self.mlp2(self.mlp1(attentive_feats))) # [B,1,N] 331 | weights = torch.sigmoid(weights.squeeze(1)) 332 | 333 | return corres_xyz, weights 334 | 335 | class FineReg(nn.Module): 336 | ''' 337 | Params: 338 | k: number of candidate keypoints 339 | in_channels: input channel number 340 | Input: 341 | src_xyz: [B,N,3] 342 | src_desc: [B,C,N] 343 | dst_xyz: [B,N,3] 344 | dst_desc: [B,C,N] 345 | src_weights: [B,N] 346 | dst_weights: [B,N] 347 | Output: 348 | corres_xyz: [B,N,3] 349 | weights: [B,N] 350 | ''' 351 | def __init__(self, k, in_channels): 352 | super(FineReg, self).__init__() 353 | self.k = k 354 | out_channels = [in_channels*2+12, in_channels*2, in_channels*2, in_channels*2] 355 | layers = [] 356 | for i in range(1, len(out_channels)): 357 | layers += [nn.Conv2d(out_channels[i-1], out_channels[i], kernel_size=1, bias=False), 358 | nn.BatchNorm2d(out_channels[i]), 359 | nn.ReLU()] 360 | self.convs_1 = nn.Sequential(*layers) 361 | 362 | self.mlp1 = nn.Sequential(nn.Conv1d(in_channels*2, in_channels*2, kernel_size=1), 363 | nn.BatchNorm1d(in_channels*2), 364 | nn.ReLU()) 365 | self.mlp2 = nn.Sequential(nn.Conv1d(in_channels*2, in_channels*2, kernel_size=1), 366 | nn.BatchNorm1d(in_channels*2), 367 | nn.ReLU()) 368 | self.mlp3 = nn.Sequential(nn.Conv1d(in_channels*2, 1, kernel_size=1)) 369 | 370 | def forward(self, src_xyz, src_feat, dst_xyz, dst_feat, src_weights, dst_weights): 371 | _, src_knn_idx, src_knn_xyz = knn_points(src_xyz, dst_xyz, K=self.k, return_nn=True) 372 | 
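# FineReg applies the same attention-over-k-candidates recipe as CoarseReg, but
# without the similarity features: the coarse stage has already roughly aligned
# the two clouds, so geometry, descriptors and confidence weights alone suffice
# to regress the refined soft correspondences.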
src_feat = src_feat.permute(0,2,1).contiguous() 373 | dst_feat = dst_feat.permute(0,2,1).contiguous() 374 | src_knn_feat = knn_gather(dst_feat, src_knn_idx) # [B,N,k,C] 375 | src_xyz_expand = src_xyz.unsqueeze(2).repeat(1,1,self.k,1) 376 | src_feat_expand = src_feat.unsqueeze(2).repeat(1,1,self.k,1) 377 | src_rela_xyz = src_knn_xyz - src_xyz_expand # [B,N,k,3] 378 | src_rela_dist = torch.norm(src_rela_xyz, dim=-1, keepdim=True) # [B,N,k,1] 379 | src_weights_expand = src_weights.unsqueeze(-1).unsqueeze(-1).repeat(1,1,self.k,1) # [B,N,k,1] 380 | src_knn_weights = knn_gather(dst_weights.unsqueeze(-1), src_knn_idx) # [B,N,k,1] 381 | feats = torch.cat([src_rela_xyz, src_rela_dist, src_xyz_expand, src_knn_xyz, \ 382 | src_feat_expand, src_knn_feat, src_weights_expand, src_knn_weights], dim=-1) 383 | feats = self.convs_1(feats.permute(0,3,1,2).contiguous()) # [B,C,N,k] 384 | attentive_weights = torch.max(feats, dim=1)[0] 385 | attentive_weights = F.softmax(attentive_weights, dim=-1) # [B,N,k] 386 | corres_xyz = torch.sum(torch.mul(attentive_weights.unsqueeze(-1), src_knn_xyz), dim=2, keepdim=False) # [B,N,3] 387 | attentive_feats = torch.sum(torch.mul(attentive_weights.unsqueeze(1), feats), dim=-1, keepdim=False) # [B,C,N] 388 | weights = self.mlp3(self.mlp2(self.mlp1(attentive_feats))) # [B,1,N] 389 | weights = torch.sigmoid(weights.squeeze(1)) 390 | 391 | return corres_xyz, weights 392 | 393 | class WeightedSVDHead(nn.Module): 394 | ''' 395 | Input: 396 | src: [B,N,3] 397 | src_corres: [B,N,3] 398 | weights: [B,N] 399 | Output: 400 | r: [B,3,3] 401 | t: [B,3] 402 | ''' 403 | def __init__(self): 404 | super(WeightedSVDHead, self).__init__() 405 | 406 | def forward(self, src, src_corres, weights): 407 | eps = 1e-4 408 | sum_weights = torch.sum(weights,dim=1,keepdim=True) + eps 409 | weights = weights/sum_weights 410 | weights = weights.unsqueeze(2) 411 | 412 | src_mean = torch.matmul(weights.transpose(1,2),src)/(torch.sum(weights,dim=1).unsqueeze(1)+eps) 413 | src_corres_mean = torch.matmul(weights.transpose(1,2),src_corres)/(torch.sum(weights,dim=1).unsqueeze(1)+eps) 414 | 415 | src_centered = src - src_mean # [B,N,3] 416 | src_corres_centered = src_corres - src_corres_mean # [B,N,3] 417 | 418 | weight_matrix = torch.diag_embed(weights.squeeze(2)) 419 | 420 | cov_mat = torch.matmul(src_centered.transpose(1,2),torch.matmul(weight_matrix,src_corres_centered)) 421 | 422 | try: 423 | u, s, v = torch.svd(cov_mat) 424 | except Exception: # SVD can fail to converge on degenerate inputs; fall back to the identity transform 425 | r = torch.eye(3).cuda() 426 | r = r.repeat(src_mean.shape[0],1,1) 427 | t = torch.zeros((src_mean.shape[0],3,1)).cuda() 428 | t = t.view(t.shape[0], 3) 429 | 430 | return r, t 431 | 432 | tm_determinant = torch.det(torch.matmul(v.transpose(1,2), u.transpose(1,2))) # det(V U^T) = -1 indicates a reflection 433 | 434 | determinant_matrix = torch.diag_embed(torch.cat((torch.ones((tm_determinant.shape[0], 2)).cuda(),tm_determinant.unsqueeze(1)), 1)) 435 | 436 | r = torch.matmul(v, torch.matmul(determinant_matrix, u.transpose(1,2))) # flip the last singular direction when needed so r is a proper rotation 437 | 438 | t = src_corres_mean.transpose(1,2) - torch.matmul(r, src_mean.transpose(1,2)) 439 | t = t.view(t.shape[0], 3) 440 | 441 | return r, t -------------------------------------------------------------------------------- /models/losses.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import numpy as np 3 | import torch.nn.functional as F 4 | 5 | from pytorch3d.loss import chamfer_distance 6 | from pytorch3d.ops import knn_points, knn_gather 7 | 8 | def prob_chamfer_loss(keypoints1, keypoints2, sigma1, sigma2, 
gt_R, gt_t): 9 | """ 10 | Calculate probabilistic chamfer distance between keypoints1 and keypoints2 11 | Input: 12 | keypoints1: [B,M,3] 13 | keypoints2: [B,M,3] 14 | sigma1: [B,M] 15 | sigma2: [B,M] 16 | gt_R: [B,3,3] 17 | gt_t: [B,3] 18 | """ 19 | keypoints1 = keypoints1.permute(0,2,1).contiguous() 20 | keypoints1 = torch.matmul(gt_R, keypoints1) + gt_t.unsqueeze(2) 21 | keypoints2 = keypoints2.permute(0,2,1).contiguous() 22 | B, M = keypoints1.size()[0], keypoints1.size()[2] 23 | N = keypoints2.size()[2] 24 | 25 | keypoints1_expanded = keypoints1.unsqueeze(3).expand(B,3,M,N) 26 | keypoints2_expanded = keypoints2.unsqueeze(2).expand(B,3,M,N) 27 | 28 | # diff: [B, M, M] 29 | diff = torch.norm(keypoints1_expanded-keypoints2_expanded, dim=1, keepdim=False) 30 | 31 | if sigma1 is None or sigma2 is None: 32 | min_dist_forward, _ = torch.min(diff, dim=2, keepdim=False) 33 | forward_loss = min_dist_forward.mean() 34 | 35 | min_dist_backward, _ = torch.min(diff, dim=1, keepdim=False) 36 | backward_loss = min_dist_backward.mean() 37 | 38 | loss = forward_loss + backward_loss 39 | 40 | else: 41 | min_dist_forward, min_dist_forward_I = torch.min(diff, dim=2, keepdim=False) 42 | selected_sigma_2 = torch.gather(sigma2, dim=1, index=min_dist_forward_I) 43 | sigma_forward = (sigma1 + selected_sigma_2)/2 44 | forward_loss = (torch.log(sigma_forward)+min_dist_forward/sigma_forward).mean() 45 | 46 | min_dist_backward, min_dist_backward_I = torch.min(diff, dim=1, keepdim=False) 47 | selected_sigma_1 = torch.gather(sigma1, dim=1, index=min_dist_backward_I) 48 | sigma_backward = (sigma2 + selected_sigma_1)/2 49 | backward_loss = (torch.log(sigma_backward)+min_dist_backward/sigma_backward).mean() 50 | 51 | loss = forward_loss + backward_loss 52 | return loss 53 | 54 | def matching_loss(src_kp, src_sigma, src_desc, dst_kp, dst_sigma, dst_desc, gt_R, gt_t, temp=0.1, sigma_max=3.0): 55 | src_kp = src_kp.permute(0,2,1).contiguous() 56 | src_kp = torch.matmul(gt_R, src_kp) + gt_t.unsqueeze(2) 57 | dst_kp = dst_kp.permute(0,2,1).contiguous() 58 | 59 | src_desc = src_desc.unsqueeze(3) # [B,C,M,1] 60 | dst_desc = dst_desc.unsqueeze(2) # [B,C,1,M] 61 | 62 | desc_dists = torch.norm((src_desc - dst_desc), dim=1) # [B,M,M] 63 | desc_dists_inv = 1.0/(desc_dists + 1e-3) 64 | desc_dists_inv = desc_dists_inv/temp 65 | 66 | score_src = F.softmax(desc_dists_inv, dim=2) 67 | score_dst = F.softmax(desc_dists_inv, dim=1).permute(0,2,1) 68 | 69 | src_kp = src_kp.permute(0,2,1) 70 | dst_kp = dst_kp.permute(0,2,1) 71 | 72 | src_kp_corres = torch.matmul(score_src, dst_kp) 73 | dst_kp_corres = torch.matmul(score_dst, src_kp) 74 | 75 | diff_forward = torch.norm((src_kp - src_kp_corres), dim=-1) 76 | diff_backward = torch.norm((dst_kp - dst_kp_corres), dim=-1) 77 | 78 | src_weights = torch.clamp(sigma_max - src_sigma, min=0.01) 79 | src_weights_mean = torch.mean(src_weights, dim=1, keepdim=True) 80 | src_weights = (src_weights/src_weights_mean).detach() 81 | 82 | dst_weights = torch.clamp(sigma_max - dst_sigma, min=0.01) 83 | dst_weights_mean = torch.mean(dst_weights, dim=1, keepdim=True) 84 | dst_weights = (dst_weights/dst_weights_mean).detach() 85 | 86 | loss_forward = (src_weights * diff_forward).mean() 87 | loss_backward = (dst_weights * diff_backward).mean() 88 | 89 | loss = loss_forward + loss_backward 90 | 91 | return loss 92 | 93 | def transformation_loss(pred_R, pred_t, gt_R, gt_t, alpha=1.0): 94 | ''' 95 | Input: 96 | pred_R: [B,3,3] 97 | pred_t: [B,3] 98 | gt_R: [B,3,3] 99 | gt_t: [B,3] 100 | alpha: weight 101 | ''' 102 | 
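# Rotation error is the Frobenius norm of (R_pred^T R_gt - I), which vanishes
# iff R_pred equals R_gt; translation error is a plain L2 distance. The total
# loss is alpha * mean_B ||R_pred^T R_gt - I||_F + mean_B ||t_pred - t_gt||_2.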
Identity = [] 103 | for i in range(pred_R.shape[0]): 104 | Identity.append(torch.eye(3,3).cuda()) 105 | Identity = torch.stack(Identity, dim=0) 106 | resi_R = torch.norm((torch.matmul(pred_R.transpose(2,1).contiguous(),gt_R) - Identity), dim=(1,2), keepdim=False) 107 | resi_t = torch.norm((pred_t - gt_t), dim=1, keepdim=False) 108 | loss_R = torch.mean(resi_R) 109 | loss_t = torch.mean(resi_t) 110 | loss = alpha * loss_R + loss_t 111 | 112 | return loss, loss_R, loss_t -------------------------------------------------------------------------------- /models/models.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.functional as F 4 | 5 | from .layers import KeypointDetector, DescExtractor, CoarseReg, FineReg, WeightedSVDHead 6 | 7 | class HierFeatureExtraction(nn.Module): 8 | def __init__(self, args): 9 | super(HierFeatureExtraction, self).__init__() 10 | 11 | self.use_fps = args.use_fps 12 | self.use_weights = args.use_weights 13 | 14 | self.detector_1 = KeypointDetector(nsample=1024, k=64, in_channels=0, out_channels=[32,32,64], fps=self.use_fps) 15 | self.detector_2 = KeypointDetector(nsample=512, k=32, in_channels=64, out_channels=[64,64,128], fps=self.use_fps) 16 | self.detector_3 = KeypointDetector(nsample=256, k=16, in_channels=128, out_channels=[128,128,256], fps=self.use_fps) 17 | 18 | if args.freeze_detector: 19 | for p in self.parameters(): 20 | p.requires_grad = False # only the detectors are registered at this point, so this freezes the detector weights alone 21 | 22 | self.desc_extractor_1 = DescExtractor(in_channels=0, out_channels=[32,32,64], C_detector=64, desc_dim=64) 23 | self.desc_extractor_2 = DescExtractor(in_channels=64, out_channels=[64,64,128], C_detector=128, desc_dim=128) 24 | self.desc_extractor_3 = DescExtractor(in_channels=128, out_channels=[128,128,256], C_detector=256, desc_dim=256) 25 | 26 | def forward(self, points): 27 | xyz_1, sigmas_1, attentive_feature_1, grouped_features_1, attentive_feature_map_1 = self.detector_1(points, None) 28 | desc_1 = self.desc_extractor_1(grouped_features_1, attentive_feature_map_1) 29 | if self.use_weights: 30 | weights_1 = 1.0/(sigmas_1 + 1e-5) # low-uncertainty (small sigma) keypoints get larger WFPS sampling weights 31 | weights_1_mean = torch.mean(weights_1, dim=1, keepdim=True) 32 | weights_1 = weights_1 / weights_1_mean 33 | xyz_2, sigmas_2, attentive_feature_2, grouped_features_2, attentive_feature_map_2 = self.detector_2(xyz_1, attentive_feature_1, weights_1) 34 | desc_2 = self.desc_extractor_2(grouped_features_2, attentive_feature_map_2) 35 | 36 | weights_2 = 1.0/(sigmas_2 + 1e-5) 37 | weights_2_mean = torch.mean(weights_2, dim=1, keepdim=True) 38 | weights_2 = weights_2 / weights_2_mean 39 | xyz_3, sigmas_3, attentive_feature_3, grouped_features_3, attentive_feature_map_3 = self.detector_3(xyz_2, attentive_feature_2, weights_2) 40 | desc_3 = self.desc_extractor_3(grouped_features_3, attentive_feature_map_3) 41 | else: 42 | xyz_2, sigmas_2, attentive_feature_2, grouped_features_2, attentive_feature_map_2 = self.detector_2(xyz_1, attentive_feature_1) 43 | desc_2 = self.desc_extractor_2(grouped_features_2, attentive_feature_map_2) 44 | xyz_3, sigmas_3, attentive_feature_3, grouped_features_3, attentive_feature_map_3 = self.detector_3(xyz_2, attentive_feature_2) 45 | desc_3 = self.desc_extractor_3(grouped_features_3, attentive_feature_map_3) 46 | 47 | ret_dict = {} 48 | ret_dict['xyz_1'] = xyz_1 49 | ret_dict['xyz_2'] = xyz_2 50 | ret_dict['xyz_3'] = xyz_3 51 | ret_dict['sigmas_1'] = sigmas_1 52 | ret_dict['sigmas_2'] = sigmas_2 53 | ret_dict['sigmas_3'] = sigmas_3 54 | ret_dict['desc_1'] = 
desc_1 55 | ret_dict['desc_2'] = desc_2 56 | ret_dict['desc_3'] = desc_3 57 | 58 | return ret_dict 59 | 60 | class HRegNet(nn.Module): 61 | 62 | def __init__(self, args): 63 | super(HRegNet, self).__init__() 64 | self.feature_extraction = HierFeatureExtraction(args) 65 | 66 | # Freeze pretrained features when train 67 | if args.freeze_feats: 68 | for p in self.parameters(): 69 | p.requires_grad = False 70 | 71 | self.coarse_corres = CoarseReg(k=8, in_channels=256, use_sim=True, use_neighbor=True) 72 | self.fine_corres_2 = FineReg(k=8, in_channels=128) 73 | self.fine_corres_1 = FineReg(k=8, in_channels=64) 74 | 75 | self.svd_head = WeightedSVDHead() 76 | 77 | def forward(self, src_points, dst_points): 78 | # Feature extraction 79 | src_feats = self.feature_extraction(src_points) 80 | dst_feats = self.feature_extraction(dst_points) 81 | 82 | # Coarse registration 83 | src_xyz_corres_3, src_dst_weights_3 = self.coarse_corres(src_feats['xyz_3'], src_feats['desc_3'], dst_feats['xyz_3'], \ 84 | dst_feats['desc_3'], src_feats['sigmas_3'], dst_feats['sigmas_3']) 85 | 86 | R3, t3 = self.svd_head(src_feats['xyz_3'], src_xyz_corres_3, src_dst_weights_3) 87 | 88 | # Fine registration: Layer 2 89 | src_xyz_2_trans = torch.matmul(R3, src_feats['xyz_2'].permute(0,2,1).contiguous()) + t3.unsqueeze(2) 90 | src_xyz_2_trans = src_xyz_2_trans.permute(0,2,1).contiguous() 91 | src_xyz_corres_2, src_dst_weights_2 = self.fine_corres_2(src_xyz_2_trans, src_feats['desc_2'], dst_feats['xyz_2'], \ 92 | dst_feats['desc_2'], src_feats['sigmas_2'], dst_feats['sigmas_2']) 93 | R2_, t2_ = self.svd_head(src_xyz_2_trans, src_xyz_corres_2, src_dst_weights_2) 94 | T3 = torch.zeros(R3.shape[0],4,4).cuda() 95 | T3[:,:3,:3] = R3 96 | T3[:,:3,3] = t3 97 | T3[:,3,3] = 1.0 98 | T2_ = torch.zeros(R2_.shape[0],4,4).cuda() 99 | T2_[:,:3,:3] = R2_ 100 | T2_[:,:3,3] = t2_ 101 | T2_[:,3,3] = 1.0 102 | T2 = torch.matmul(T2_, T3) 103 | R2 = T2[:,:3,:3] 104 | t2 = T2[:,:3,3] 105 | 106 | # Fine registration: Layer 1 107 | src_xyz_1_trans = torch.matmul(R2, src_feats['xyz_1'].permute(0,2,1).contiguous()) + t2.unsqueeze(2) 108 | src_xyz_1_trans = src_xyz_1_trans.permute(0,2,1).contiguous() 109 | src_xyz_corres_1, src_dst_weights_1 = self.fine_corres_1(src_xyz_1_trans, src_feats['desc_1'], dst_feats['xyz_1'], \ 110 | dst_feats['desc_1'], src_feats['sigmas_1'], dst_feats['sigmas_1']) 111 | R1_, t1_ = self.svd_head(src_xyz_1_trans, src_xyz_corres_1, src_dst_weights_1) 112 | T1_ = torch.zeros(R1_.shape[0],4,4).cuda() 113 | T1_[:,:3,:3] = R1_ 114 | T1_[:,:3,3] = t1_ 115 | T1_[:,3,3] = 1.0 116 | 117 | T1 = torch.matmul(T1_, T2) 118 | R1 = T1[:,:3,:3] 119 | t1 = T1[:,:3,3] 120 | 121 | corres_dict = {} 122 | corres_dict['src_xyz_corres_3'] = src_xyz_corres_3 123 | corres_dict['src_xyz_corres_2'] = src_xyz_corres_2 124 | corres_dict['src_xyz_corres_1'] = src_xyz_corres_1 125 | corres_dict['src_dst_weights_3'] = src_dst_weights_3 126 | corres_dict['src_dst_weights_2'] = src_dst_weights_2 127 | corres_dict['src_dst_weights_1'] = src_dst_weights_1 128 | 129 | ret_dict = {} 130 | ret_dict['rotation'] = [R3, R2, R1] 131 | ret_dict['translation'] = [t3, t2, t1] 132 | ret_dict['src_feats'] = src_feats 133 | ret_dict['dst_feats'] = dst_feats 134 | 135 | return ret_dict 136 | 137 | if __name__ == '__main__': 138 | import argparse 139 | 140 | def parse_args(): 141 | parser = argparse.ArgumentParser('HRegNet') 142 | 143 | parser.add_argument('--npoints', type=int, default=16384, help='number of input points') 144 | parser.add_argument('--freeze_detector', 
action='store_true') 145 | parser.add_argument('--use_fps', action='store_false') 146 | parser.add_argument('--freeze_feats', action='store_true') 147 | parser.add_argument('--use_weights', action='store_true') 148 | 149 | return parser.parse_args() 150 | 151 | args = parse_args() 152 | args.use_fps = True 153 | args.use_weights = True 154 | model = HRegNet(args).cuda() 155 | xyz1 = torch.rand(2,16384,3).cuda() 156 | xyz2 = torch.rand(2,16384,3).cuda() 157 | ret_dict = model(xyz1, xyz2) 158 | print(ret_dict['rotation'][-1].shape) 159 | print(ret_dict['translation'][-1].shape) -------------------------------------------------------------------------------- /models/utils.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | from torch.autograd import Variable 4 | from torch.autograd import Function 5 | import torch.nn.functional as F 6 | 7 | import point_utils_cuda 8 | from pytorch3d.loss import chamfer_distance 9 | from pytorch3d.ops import knn_points, knn_gather 10 | from scipy.spatial.transform import Rotation 11 | import random 12 | 13 | class FurthestPointSampling(Function): 14 | @staticmethod 15 | def forward(ctx, xyz: torch.Tensor, npoint: int) -> torch.Tensor: 16 | ''' 17 | ctx: 18 | xyz: [B,N,3] 19 | npoint: int 20 | ''' 21 | assert xyz.is_contiguous() 22 | 23 | B, N, _ = xyz.size() 24 | output = torch.cuda.IntTensor(B, npoint) 25 | temp = torch.cuda.FloatTensor(B, N).fill_(1e10) 26 | 27 | point_utils_cuda.furthest_point_sampling_wrapper(B, N, npoint, xyz, temp, output) 28 | return output 29 | 30 | @staticmethod 31 | def backward(ctx, a=None): 32 | return None, None 33 | 34 | furthest_point_sample = FurthestPointSampling.apply 35 | 36 | class WeightedFurthestPointSampling(Function): 37 | @staticmethod 38 | def forward(ctx, xyz: torch.Tensor, weights: torch.Tensor, npoint: int) -> torch.Tensor: 39 | ''' 40 | ctx: 41 | xyz: [B,N,3] 42 | weights: [B,N] 43 | npoint: int 44 | ''' 45 | assert xyz.is_contiguous() 46 | assert weights.is_contiguous() 47 | B, N, _ = xyz.size() 48 | output = torch.cuda.IntTensor(B, npoint) 49 | temp = torch.cuda.FloatTensor(B, N).fill_(1e10) 50 | 51 | point_utils_cuda.weighted_furthest_point_sampling_wrapper(B, N, npoint, xyz, weights, temp, output) 52 | return output 53 | 54 | @staticmethod 55 | def backward(ctx, a=None): 56 | return None, None 57 | 58 | weighted_furthest_point_sample = WeightedFurthestPointSampling.apply 59 | 60 | class GatherOperation(Function): 61 | @staticmethod 62 | def forward(ctx, features: torch.Tensor, idx: torch.Tensor) -> torch.Tensor: 63 | ''' 64 | ctx 65 | features: [B,C,N] 66 | idx: [B,npoint] 67 | ''' 68 | assert features.is_contiguous() 69 | assert idx.is_contiguous() 70 | 71 | B, npoint = idx.size() 72 | _, C, N = features.size() 73 | output = torch.cuda.FloatTensor(B, C, npoint) 74 | 75 | point_utils_cuda.gather_points_wrapper(B, C, N, npoint, features, idx, output) 76 | 77 | ctx.for_backwards = (idx, C, N) 78 | return output 79 | 80 | @staticmethod 81 | def backward(ctx, grad_out): 82 | idx, C, N = ctx.for_backwards 83 | B, npoint = idx.size() 84 | grad_features = Variable(torch.cuda.FloatTensor(B,C,N).zero_()) 85 | grad_out_data = grad_out.data.contiguous() 86 | point_utils_cuda.gather_points_grad_wrapper(B, C, N, npoint, grad_out_data, idx, grad_features.data) 87 | return grad_features, None 88 | 89 | gather_operation = GatherOperation.apply 90 | 91 | def generate_rand_rotm(x_lim=5.0, y_lim=5.0, z_lim=180.0): 92 | ''' 93 | Input: 94 | x_lim 95 
| y_lim 96 | z_lim 97 | return: 98 | rotm: [3,3] 99 | ''' 100 | rand_z = np.random.uniform(low=-z_lim, high=z_lim) 101 | rand_y = np.random.uniform(low=-y_lim, high=y_lim) 102 | rand_x = np.random.uniform(low=-x_lim, high=x_lim) 103 | 104 | rand_eul = np.array([rand_z, rand_y, rand_x]) 105 | r = Rotation.from_euler('zyx', rand_eul, degrees=True) 106 | rotm = r.as_matrix() 107 | return rotm 108 | 109 | def generate_rand_trans(x_lim=10.0, y_lim=1.0, z_lim=0.1): 110 | ''' 111 | Input: 112 | x_lim 113 | y_lim 114 | z_lim 115 | return: 116 | trans [3] 117 | ''' 118 | rand_x = np.random.uniform(low=-x_lim, high=x_lim) 119 | rand_y = np.random.uniform(low=-y_lim, high=y_lim) 120 | rand_z = np.random.uniform(low=-z_lim, high=z_lim) 121 | 122 | rand_trans = np.array([rand_x, rand_y, rand_z]) 123 | 124 | return rand_trans 125 | 126 | def apply_transform(pts, trans): 127 | R = trans[:3, :3] 128 | T = trans[:3, 3] 129 | pts = pts @ R.T + T 130 | return pts 131 | 132 | def calc_error_np(pred_R, pred_t, gt_R, gt_t): 133 | tmp = (np.trace(pred_R.transpose().dot(gt_R))-1)/2 134 | tmp = np.clip(tmp, -1.0, 1.0) 135 | L_rot = np.arccos(tmp) 136 | L_rot = 180 * L_rot / np.pi 137 | L_trans = np.linalg.norm(pred_t - gt_t) 138 | return L_rot, L_trans 139 | 140 | def set_seed(seed): 141 | ''' 142 | Set random seed for torch, numpy and python 143 | ''' 144 | random.seed(seed) 145 | np.random.seed(seed) 146 | torch.manual_seed(seed) 147 | if torch.cuda.is_available(): 148 | torch.cuda.manual_seed(seed) 149 | torch.cuda.manual_seed_all(seed) 150 | 151 | torch.backends.cudnn.benchmark=False 152 | torch.backends.cudnn.deterministic=True -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | torch # tested on 1.7.0/1.7.1 2 | numpy 3 | pytorch3d # tested on 0.3.0 4 | MinkowskiEngine # tested on 0.5.0 5 | scipy 6 | tqdm 7 | wandb -------------------------------------------------------------------------------- /scripts/test_kitti.sh: -------------------------------------------------------------------------------- 1 | python test.py --seed 1 --gpu GPU --npoints 16384 --dataset kitti --voxel_size 0.3 \ 2 | --use_fps --use_weights --data_list ./data/kitti_list --root DATA_ROOT \ 3 | --pretrain_weights ./ckpt/pretrained/kitti.pth --save_dir SAVE_DIR -------------------------------------------------------------------------------- /scripts/test_nusc.sh: -------------------------------------------------------------------------------- 1 | python test.py --seed 1 --gpu GPU --npoints 8192 --dataset nuscenes --voxel_size 0.3 \ 2 | --use_fps --use_weights --data_list ./data/nuscenes_list --root DATA_ROOT \ 3 | --pretrain_weights ./ckpt/pretrained/nuscenes.pth --save_dir SAVE_DIR -------------------------------------------------------------------------------- /scripts/train_kitti_desc.sh: -------------------------------------------------------------------------------- 1 | python train_feats.py --batch_size 8 --epochs 100 --lr 0.001 --seed 1 --gpu GPU \ 2 | --npoints 16384 --dataset kitti --voxel_size 0.3 --ckpt_dir CKPT_DIR \ 3 | --use_fps --use_weights --data_list ./data/kitti_list --runname RUNNAME --augment 0.5 \ 4 | --root DATA_ROOT --wandb_dir WANDB_DIR --train_desc --freeze_detector \ 5 | --pretrain_detector PRETRAIN_DETECTOR --use_wandb -------------------------------------------------------------------------------- /scripts/train_kitti_det.sh: 
-------------------------------------------------------------------------------- 1 | python train_feats.py --batch_size 8 --epochs 100 --lr 0.001 --seed 1 --gpu GPU \ 2 | --npoints 16384 --dataset kitti --voxel_size 0.3 --ckpt_dir CKPT_DIR \ 3 | --use_fps --use_weights --data_list ./data/kitti_list --runname RUNNAME --augment 0.5 \ 4 | --root DATA_ROOT --wandb_dir WANDB_DIR --use_wandb -------------------------------------------------------------------------------- /scripts/train_kitti_reg.sh: -------------------------------------------------------------------------------- 1 | python train_reg.py --batch_size 16 --epochs 100 --lr 0.001 --seed 1 --gpu GPU \ 2 | --npoints 16384 --dataset kitti --voxel_size 0.3 --ckpt_dir CKPT_DIR \ 3 | --use_fps --use_weights --alpha 1.8 \ 4 | --data_list ./data/kitti_list --runname RUNNAME --augment 0.0 \ 5 | --root DATA_ROOT --wandb_dir WANDB_DIR --freeze_detector \ 6 | --pretrain_feats PRETRAIN_FEATS --use_wandb -------------------------------------------------------------------------------- /scripts/train_nusc_desc.sh: -------------------------------------------------------------------------------- 1 | python train_feats.py --batch_size 8 --epochs 100 --lr 0.001 --seed 1 --gpu GPU \ 2 | --npoints 8192 --dataset nuscenes --voxel_size 0.3 --ckpt_dir CKPT_DIR \ 3 | --use_fps --use_weights --data_list ./data/nuscenes_list --runname RUNNAME --augment 0.5 \ 4 | --root DATA_ROOT --wandb_dir WANDB_DIR --train_desc --freeze_detector \ 5 | --pretrain_detector PRETRAIN_DETECTOR --use_wandb -------------------------------------------------------------------------------- /scripts/train_nusc_det.sh: -------------------------------------------------------------------------------- 1 | python train_feats.py --batch_size 8 --epochs 100 --lr 0.001 --seed 1 --gpu GPU \ 2 | --npoints 8192 --dataset nuscenes --voxel_size 0.3 --ckpt_dir CKPT_DIR \ 3 | --use_fps --use_weights --data_list ./data/nuscenes_list --runname RUNNAME --augment 0.5 \ 4 | --root DATA_ROOT --wandb_dir WANDB_DIR --use_wandb -------------------------------------------------------------------------------- /scripts/train_nusc_reg.sh: -------------------------------------------------------------------------------- 1 | python train_reg.py --batch_size 16 --epochs 100 --lr 0.001 --seed 1 --gpu GPU \ 2 | --npoints 8192 --dataset nuscenes --voxel_size 0.3 --ckpt_dir CKPT_DIR \ 3 | --use_fps --use_weights --alpha 2.0 \ 4 | --data_list ./data/nuscenes_list --runname RUNNAME --augment 0.0 \ 5 | --root DATA_ROOT --wandb_dir WANDB_DIR --freeze_detector \ 6 | --pretrain_feats PRETRAIN_FEATS --use_wandb -------------------------------------------------------------------------------- /test.py: -------------------------------------------------------------------------------- 1 | import os 2 | import numpy as np 3 | import torch 4 | 5 | from data.kitti_data import KittiDataset 6 | from data.nuscenes_data import NuscenesDataset 7 | 8 | from models.models import HRegNet 9 | from models.utils import calc_error_np, set_seed 10 | 11 | import argparse 12 | import datetime 13 | 14 | def parse_args(): 15 | parser = argparse.ArgumentParser(description='HRegNet') 16 | 17 | parser.add_argument('--seed', type=int, default=1) 18 | parser.add_argument('--gpu', type=str, default='1') 19 | parser.add_argument('--root', type=str, default='') 20 | parser.add_argument('--npoints', type=int, default=16384) 21 | parser.add_argument('--dataset', type=str, default='kitti') 22 | parser.add_argument('--use_fps', action='store_true') 23 | 
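# --use_fps and --use_weights must match the configuration the checkpoint was
# trained with; the provided test scripts enable both.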
parser.add_argument('--data_list', type=str, default='') 24 | parser.add_argument('--use_weights', action='store_true') 25 | parser.add_argument('--pretrain_weights', type=str, default='') 26 | parser.add_argument('--voxel_size', type=float, default=0.3) 27 | parser.add_argument('--save_dir',type=str, default='') 28 | parser.add_argument('--augment', type=float, default=0.0) 29 | parser.add_argument('--freeze_detector', action='store_true') 30 | parser.add_argument('--freeze_feats', action='store_true') 31 | 32 | return parser.parse_args() 33 | 34 | def test(args): 35 | if args.dataset == 'kitti': 36 | test_seqs = ['08','09','10'] 37 | test_dataset = KittiDataset(args.root, test_seqs, args.npoints, args.voxel_size, args.data_list, args.augment) 38 | elif args.dataset == 'nuscenes': 39 | test_seqs = ['test'] 40 | test_dataset = NuscenesDataset(args.root, test_seqs, args.npoints, args.voxel_size, args.data_list, args.augment) 41 | else: 42 | raise NotImplementedError('Not implemented') 43 | 44 | net = HRegNet(args).cuda() 45 | net.load_state_dict(torch.load(args.pretrain_weights)) 46 | 47 | net.eval() 48 | 49 | trans_error_list = [] 50 | rot_error_list = [] 51 | pred_T_list = [] 52 | delta_t_list = [] 53 | trans_thresh = 2.0 54 | rot_thresh = 5.0 55 | success_idx = [] 56 | 57 | with torch.no_grad(): 58 | for idx in range(len(test_dataset)): 59 | start_t = datetime.datetime.now() 60 | src_points, dst_points, gt_R, gt_t = test_dataset[idx] 61 | src_points = src_points.unsqueeze(0).cuda() 62 | dst_points = dst_points.unsqueeze(0).cuda() 63 | gt_R = gt_R.numpy() 64 | gt_t = gt_t.numpy() 65 | ret_dict = net(src_points, dst_points) 66 | end_t = datetime.datetime.now() 67 | pred_R = ret_dict['rotation'][-1] 68 | pred_t = ret_dict['translation'][-1] 69 | pred_R = pred_R.squeeze().cpu().numpy() 70 | pred_t = pred_t.squeeze().cpu().numpy() 71 | rot_error, trans_error = calc_error_np(pred_R, pred_t, gt_R, gt_t) 72 | pred_T = np.zeros((3,4)) 73 | gt_T = np.zeros((3,4)) 74 | pred_T[:3,:3] = pred_R 75 | pred_T[:3,3] = pred_t 76 | gt_T[:3,:3] = gt_R 77 | gt_T[:3,3] = gt_t 78 | pred_T = pred_T.flatten() 79 | gt_T = gt_T.flatten() 80 | pred_T_list.append(pred_T) 81 | print('{:d}: trans: {:.4f} rot: {:.4f}'.format(idx, trans_error, rot_error)) 82 | trans_error_list.append(trans_error) 83 | rot_error_list.append(rot_error) 84 | 85 | if trans_error < trans_thresh and rot_error < rot_thresh: 86 | success_idx.append(idx) 87 | 88 | delta_t = (end_t - start_t).total_seconds() * 1e6 # total elapsed microseconds (.microseconds alone would drop whole seconds) 89 | delta_t_list.append(delta_t) 90 | 91 | success_rate = len(success_idx)/len(test_dataset) 92 | trans_error_array = np.array(trans_error_list) 93 | rot_error_array = np.array(rot_error_list) 94 | trans_mean = np.mean(trans_error_array[success_idx]) 95 | trans_std = np.std(trans_error_array[success_idx]) 96 | rot_mean = np.mean(rot_error_array[success_idx]) 97 | rot_std = np.std(rot_error_array[success_idx]) 98 | delta_t_array = np.array(delta_t_list) 99 | delta_t_mean = np.mean(delta_t_array) 100 | 101 | print('Translation mean: {:.4f}'.format(trans_mean)) 102 | print('Translation std: {:.4f}'.format(trans_std)) 103 | print('Rotation mean: {:.4f}'.format(rot_mean)) 104 | print('Rotation std: {:.4f}'.format(rot_std)) 105 | print('Runtime: {:.4f}'.format(delta_t_mean)) 106 | print('Success rate: {:.4f}'.format(success_rate)) 107 | 108 | if not os.path.exists(args.save_dir): 109 | os.makedirs(args.save_dir) 110 | pred_T_array = np.array(pred_T_list) 111 | np.savetxt(os.path.join(args.save_dir, args.dataset+'_pred.txt'), pred_T_array) 112 | 
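# Each row of DATASET_pred.txt is a flattened 3x4 [R|t] matrix; the two files
# below store the per-pair translation error (m) and rotation error (deg).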
np.savetxt(os.path.join(args.save_dir, args.dataset+'_trans_error.txt'), trans_error_list) 113 | np.savetxt(os.path.join(args.save_dir, args.dataset+'_rot_error.txt'), rot_error_list) 114 | 115 | f_summary = open(os.path.join(args.save_dir, args.dataset+'_summary.txt'), 'w') 116 | f_summary.write('Dataset: '+args.dataset+'\n') 117 | f_summary.write('Translation threshold: {:.2f}\n'.format(trans_thresh)) 118 | f_summary.write('Rotation threshold: {:.2f}\n'.format(rot_thresh)) 119 | f_summary.write('Translation mean: {:.4f}\n'.format(trans_mean)) 120 | f_summary.write('Translation std: {:.4f}\n'.format(trans_std)) 121 | f_summary.write('Rotation mean: {:.4f}\n'.format(rot_mean)) 122 | f_summary.write('Rotation std: {:.4f}\n'.format(rot_std)) 123 | f_summary.write('Runtime: {:.4f}\n'.format(delta_t_mean)) 124 | f_summary.write('Success rate: {:.4f}\n'.format(success_rate)) 125 | f_summary.close() 126 | 127 | print('Saved results to ' + args.save_dir) 128 | 129 | if __name__ == '__main__': 130 | args = parse_args() 131 | 132 | os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu 133 | set_seed(args.seed) 134 | 135 | test(args) -------------------------------------------------------------------------------- /train_feats.py: -------------------------------------------------------------------------------- 1 | import os 2 | import torch 3 | from torch.utils.data import DataLoader 4 | import torch.optim as optim 5 | from torch.optim.lr_scheduler import StepLR 6 | 7 | from data.kitti_data import KittiDataset 8 | from data.nuscenes_data import NuscenesDataset 9 | 10 | from models.models import HierFeatureExtraction 11 | from models.utils import set_seed 12 | from models.losses import matching_loss, prob_chamfer_loss 13 | 14 | from tqdm import tqdm 15 | import argparse 16 | import wandb 17 | 18 | def parse_args(): 19 | parser = argparse.ArgumentParser(description='HRegNet') 20 | parser.add_argument('--batch_size', type=int, default=2) 21 | parser.add_argument('--epochs', type=int, default=100) 22 | parser.add_argument('--lr', type=float, default=0.001) 23 | parser.add_argument('--seed', type=int, default=1) 24 | parser.add_argument('--gpu', type=str, default='1') 25 | parser.add_argument('--root', type=str, default='') 26 | parser.add_argument('--npoints', type=int, default=16384) 27 | parser.add_argument('--temp', type=float, default=0.1) 28 | parser.add_argument('--runname', type=str, default='') 29 | parser.add_argument('--wandb_dir', type=str, default='') 30 | parser.add_argument('--sigma_max', type=float, default=3.0) 31 | parser.add_argument('--dataset', type=str, default='kitti') 32 | parser.add_argument('--data_list', type=str, default='') 33 | parser.add_argument('--voxel_size', type=float, default=0.3) 34 | parser.add_argument('--augment', type=float, default=0.0) 35 | parser.add_argument('--ckpt_dir', type=str, default='') 36 | parser.add_argument('--pretrain_detector', type=str, default='') 37 | parser.add_argument('--train_desc', action='store_true') 38 | parser.add_argument('--freeze_detector', action='store_true') 39 | parser.add_argument('--use_fps', action='store_true') 40 | parser.add_argument('--use_weights', action='store_true') 41 | parser.add_argument('--use_wandb', action='store_true') 42 | 43 | return parser.parse_args() 44 | 45 | def calc_losses(ret_dict_src, ret_dict_dst, gt_R, gt_t, args): 46 | l_chamfer_1 = prob_chamfer_loss(ret_dict_src['xyz_1'], ret_dict_dst['xyz_1'], \ 47 | ret_dict_src['sigmas_1'], ret_dict_dst['sigmas_1'], gt_R, gt_t) 48 | l_chamfer_2 = 
prob_chamfer_loss(ret_dict_src['xyz_2'], ret_dict_dst['xyz_2'], \ 49 | ret_dict_src['sigmas_2'], ret_dict_dst['sigmas_2'], gt_R, gt_t) 50 | l_chamfer_3 = prob_chamfer_loss(ret_dict_src['xyz_3'], ret_dict_dst['xyz_3'], \ 51 | ret_dict_src['sigmas_3'], ret_dict_dst['sigmas_3'], gt_R, gt_t) 52 | l_chamfer = l_chamfer_1 + l_chamfer_2 + l_chamfer_3 53 | 54 | if not args.train_desc: 55 | return l_chamfer 56 | else: 57 | l_matching_1 = matching_loss(ret_dict_src['xyz_1'], ret_dict_src['sigmas_1'], ret_dict_src['desc_1'], \ 58 | ret_dict_dst['xyz_1'], ret_dict_dst['sigmas_1'], ret_dict_dst['desc_1'], gt_R, gt_t, args.temp, args.sigma_max) 59 | l_matching_2 = matching_loss(ret_dict_src['xyz_2'], ret_dict_src['sigmas_2'], ret_dict_src['desc_2'], \ 60 | ret_dict_dst['xyz_2'], ret_dict_dst['sigmas_2'], ret_dict_dst['desc_2'], gt_R, gt_t, args.temp, args.sigma_max) 61 | l_matching_3 = matching_loss(ret_dict_src['xyz_3'], ret_dict_src['sigmas_3'], ret_dict_src['desc_3'], \ 62 | ret_dict_dst['xyz_3'], ret_dict_dst['sigmas_3'], ret_dict_dst['desc_3'], gt_R, gt_t, args.temp, args.sigma_max) 63 | l_matching = l_matching_1 + l_matching_2 + l_matching_3 64 | return l_chamfer, l_matching 65 | 66 | def val_feats(args, net): 67 | if args.dataset == 'kitti': 68 | val_seqs = ['06','07'] 69 | val_dataset = KittiDataset(args.root, val_seqs, args.npoints, args.voxel_size, args.data_list, 0.0) 70 | elif args.dataset == 'nuscenes': 71 | val_seqs = ['val'] 72 | val_dataset = NuscenesDataset(args.root, val_seqs, args.npoints, args.voxel_size, args.data_list, 0.0) 73 | else: 74 | raise NotImplementedError('Not implemented') 75 | 76 | val_loader = DataLoader(val_dataset, 77 | batch_size=args.batch_size, 78 | num_workers=4, 79 | shuffle=True, 80 | pin_memory=True, 81 | drop_last=True) 82 | 83 | net.eval() 84 | total_l_chamfer = 0 85 | total_l_matching = 0 86 | count = 0 87 | pbar = tqdm(enumerate(val_loader)) 88 | 89 | with torch.no_grad(): 90 | for i, data in pbar: 91 | src_points, dst_points, gt_R, gt_t = data 92 | src_points = src_points.cuda() 93 | dst_points = dst_points.cuda() 94 | gt_R = gt_R.cuda() 95 | gt_t = gt_t.cuda() 96 | 97 | ret_dict_src = net(src_points) 98 | ret_dict_dst = net(dst_points) 99 | 100 | if args.train_desc: 101 | l_chamfer, l_matching = calc_losses(ret_dict_src, ret_dict_dst, gt_R, gt_t, args) 102 | total_l_chamfer += l_chamfer.item() 103 | total_l_matching += l_matching.item() 104 | else: 105 | l_chamfer = calc_losses(ret_dict_src, ret_dict_dst, gt_R, gt_t, args) 106 | total_l_chamfer += l_chamfer.item() 107 | count += 1 108 | 109 | if args.train_desc: 110 | total_l_chamfer = total_l_chamfer/count 111 | total_l_matching = total_l_matching/count 112 | return total_l_chamfer, total_l_matching 113 | else: 114 | total_l_chamfer = total_l_chamfer/count 115 | return total_l_chamfer 116 | 117 | def train_feats(args): 118 | if args.dataset == 'kitti': 119 | train_seqs = ['00','01','02','03','04','05'] 120 | train_dataset = KittiDataset(args.root, train_seqs, args.npoints, args.voxel_size, args.data_list, args.augment) 121 | elif args.dataset == 'nuscenes': 122 | train_seqs = ['train'] 123 | train_dataset = NuscenesDataset(args.root, train_seqs, args.npoints, args.voxel_size, args.data_list, args.augment) 124 | else: 125 | raise NotImplementedError('Not implemented') 126 | 127 | train_loader = DataLoader(train_dataset, 128 | batch_size=args.batch_size, 129 | num_workers=4, 130 | shuffle=True, 131 | pin_memory=True, 132 | drop_last=True) 133 | 134 | net = HierFeatureExtraction(args) 135 | if args.train_desc: 136 | 
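# Stage-2 (descriptor) training starts from a pretrained detector checkpoint;
# combined with --freeze_detector, the detector stays fixed and only the
# DescExtractor branches receive gradients.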
net.load_state_dict(torch.load(args.pretrain_detector)) 137 | if args.use_wandb: 138 | wandb.watch(net) 139 | 140 | net.cuda() 141 | optimizer = optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=args.lr) 142 | scheduler = StepLR(optimizer, step_size=10, gamma=0.5) 143 | 144 | best_train_loss = float('inf') 145 | best_val_loss = float('inf') 146 | best_train_epoch = 0 147 | best_val_epoch = 0 148 | 149 | for epoch in tqdm(range(args.epochs)): 150 | 151 | net.train() 152 | count = 0 153 | total_loss = 0 154 | total_l_chamfer = 0 155 | total_l_matching = 0 156 | 157 | pbar = tqdm(enumerate(train_loader)) 158 | 159 | for i, data in pbar: 160 | src_points, dst_points, gt_R, gt_t = data 161 | src_points = src_points.cuda() 162 | dst_points = dst_points.cuda() 163 | gt_R = gt_R.cuda() 164 | gt_t = gt_t.cuda() 165 | 166 | optimizer.zero_grad() 167 | ret_dict_src = net(src_points) 168 | ret_dict_dst = net(dst_points) 169 | if args.train_desc: 170 | l_chamfer, l_matching = calc_losses(ret_dict_src, ret_dict_dst, gt_R, gt_t, args) 171 | loss = l_chamfer + l_matching 172 | else: 173 | l_chamfer = calc_losses(ret_dict_src, ret_dict_dst, gt_R, gt_t, args) 174 | loss = l_chamfer 175 | loss.backward() 176 | optimizer.step() 177 | 178 | count += 1 179 | total_loss += loss.item() 180 | total_l_chamfer += l_chamfer.item() 181 | if args.train_desc: 182 | total_l_matching += l_matching.item() 183 | 184 | if i % 10 == 0: 185 | pbar.set_description('Train Epoch:{}[{}/{}({:.0f}%)]\tLoss: {:.6f}'.format( 186 | epoch+1, i, len(train_loader), 100. * i/len(train_loader), loss.item() 187 | )) 188 | 189 | total_loss /= count 190 | total_l_chamfer /= count 191 | if args.train_desc: 192 | total_l_matching /= count 193 | 194 | if args.train_desc: 195 | val_chamfer, val_matching = val_feats(args, net) 196 | total_val_loss = val_chamfer + val_matching 197 | else: 198 | val_chamfer = val_feats(args, net) 199 | total_val_loss = val_chamfer 200 | 201 | if args.use_wandb: 202 | if args.train_desc: 203 | wandb.log({"train_chamfer":total_l_chamfer, 204 | "train_matching":total_l_matching, 205 | "val_chamfer":val_chamfer, 206 | "val_matching":val_matching}) 207 | else: 208 | wandb.log({"train_chamfer":total_l_chamfer, 209 | "val_chamfer":val_chamfer}) 210 | 211 | print('\n Epoch {} finished. 
Training loss: {:.4f} Validation loss: {:.4f}'.\ 212 | format(epoch+1, total_loss, total_val_loss)) 213 | 214 | ckpt_dir = os.path.join(args.ckpt_dir, args.dataset + '_ckpt_'+args.runname) 215 | if not os.path.exists(ckpt_dir): 216 | os.makedirs(ckpt_dir) 217 | 218 | if total_loss < best_train_loss: 219 | torch.save(net.state_dict(), os.path.join(ckpt_dir, 'best_train.pth')) 220 | best_train_loss = total_loss 221 | best_train_epoch = epoch + 1 222 | 223 | if total_val_loss < best_val_loss: 224 | torch.save(net.state_dict(), os.path.join(ckpt_dir, 'best_val.pth')) 225 | best_val_loss = total_val_loss 226 | best_val_epoch = epoch + 1 227 | 228 | print('Best train epoch: {} Best train loss: {:.4f} Best val epoch: {} Best val loss: {:.4f}'.format( 229 | best_train_epoch, best_train_loss, best_val_epoch, best_val_loss 230 | )) 231 | 232 | scheduler.step() 233 | 234 | if __name__ == '__main__': 235 | args = parse_args() 236 | 237 | os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu 238 | set_seed(args.seed) 239 | 240 | if args.use_wandb: 241 | import wandb 242 | wandb.init(config=args, project='PointReg', name=args.dataset+'_'+args.runname, dir=args.wandb_dir) 243 | train_feats(args) -------------------------------------------------------------------------------- /train_reg.py: -------------------------------------------------------------------------------- 1 | import os 2 | import torch 3 | from torch.utils.data import DataLoader 4 | import torch.optim as optim 5 | from torch.optim.lr_scheduler import StepLR 6 | 7 | from data.kitti_data import KittiDataset 8 | from data.nuscenes_data import NuscenesDataset 9 | 10 | from models.models import HRegNet 11 | from models.losses import transformation_loss 12 | from models.utils import set_seed 13 | 14 | from tqdm import tqdm 15 | import argparse 16 | import wandb 17 | 18 | def parse_args(): 19 | parser = argparse.ArgumentParser(description='HRegNet') 20 | 21 | parser.add_argument('--batch_size', type=int, default=2) 22 | parser.add_argument('--epochs', type=int, default=100) 23 | parser.add_argument('--lr', type=float, default=0.001) 24 | parser.add_argument('--seed', type=int, default=1) 25 | parser.add_argument('--gpu', type=str, default='1') 26 | parser.add_argument('--root', type=str, default='') 27 | parser.add_argument('--npoints', type=int, default=16384) 28 | parser.add_argument('--voxel_size', type=float, default=0.3) 29 | parser.add_argument('--use_wandb', action='store_true') 30 | parser.add_argument('--runname', type=str, default='') 31 | parser.add_argument('--dataset', type=str, default='kitti') 32 | parser.add_argument('--augment', type=float, default=0.0) 33 | parser.add_argument('--ckpt_dir', type=str, default='') 34 | parser.add_argument('--wandb_dir', type=str, default='') 35 | parser.add_argument('--freeze_detector', action='store_true') 36 | parser.add_argument('--freeze_feats', action='store_true') 37 | parser.add_argument('--use_fps', action='store_true') 38 | parser.add_argument('--data_list', type=str, default='') 39 | parser.add_argument('--use_weights', action='store_true') 40 | parser.add_argument('--pretrain_feats', type=str, default=None) 41 | parser.add_argument('--alpha', type=float, default=1.0) 42 | 43 | return parser.parse_args() 44 | 45 | def val_reg(args, net): 46 | if args.dataset == 'kitti': 47 | val_seqs = ['06','07'] 48 | val_dataset = KittiDataset(args.root, val_seqs, args.npoints, args.voxel_size, args.data_list, 0.0) 49 | elif args.dataset == 'nuscenes': 50 | val_seqs = ['val'] 51 | val_dataset = 
NuscenesDataset(args.root, val_seqs, args.npoints, args.voxel_size, args.data_list, 0.0) 52 | else: 53 | raise NotImplementedError('Not implemented') 54 | 55 | val_loader = DataLoader(val_dataset, 56 | batch_size=args.batch_size, 57 | num_workers=4, 58 | shuffle=False, 59 | pin_memory=True, 60 | drop_last=True) 61 | 62 | net.eval() 63 | total_loss = 0 64 | total_R_loss = 0 65 | total_t_loss = 0 66 | count = 0 67 | pbar = tqdm(enumerate(val_loader)) 68 | with torch.no_grad(): 69 | for i, data in pbar: 70 | src_points, dst_points, gt_R, gt_t = data 71 | src_points = src_points.cuda() 72 | dst_points = dst_points.cuda() 73 | gt_R = gt_R.cuda() 74 | gt_t = gt_t.cuda() 75 | 76 | ret_dict = net(src_points, dst_points) 77 | l_trans, l_R, l_t = transformation_loss(ret_dict['rotation'][-1], ret_dict['translation'][-1], gt_R, gt_t, args.alpha) 78 | total_loss += l_trans.item() 79 | total_R_loss += l_R.item() 80 | total_t_loss += l_t.item() 81 | count += 1 82 | 83 | total_loss = total_loss/count 84 | total_R_loss = total_R_loss/count 85 | total_t_loss = total_t_loss/count 86 | 87 | return total_loss, total_R_loss, total_t_loss 88 | 89 | def test_reg(args, net): 90 | if args.dataset == 'kitti': 91 | test_seqs = ['08','09','10'] 92 | test_dataset = KittiDataset(args.root, test_seqs, args.npoints, args.voxel_size, args.data_list, 0.0) 93 | elif args.dataset == 'nuscenes': 94 | test_seqs = ['test'] 95 | test_dataset = NuscenesDataset(args.root, test_seqs, args.npoints, args.voxel_size, args.data_list, 0.0) 96 | else: 97 | raise NotImplementedError('Not implemented') 98 | 99 | test_loader = DataLoader(test_dataset, 100 | batch_size=args.batch_size, 101 | num_workers=4, 102 | shuffle=False, 103 | pin_memory=True, 104 | drop_last=True) 105 | 106 | net.eval() 107 | total_loss = 0 108 | total_R_loss = 0 109 | total_t_loss = 0 110 | count = 0 111 | pbar = tqdm(enumerate(test_loader)) 112 | with torch.no_grad(): 113 | for i, data in pbar: 114 | src_points, dst_points, gt_R, gt_t = data 115 | src_points = src_points.cuda() 116 | dst_points = dst_points.cuda() 117 | gt_R = gt_R.cuda() 118 | gt_t = gt_t.cuda() 119 | 120 | ret_dict = net(src_points, dst_points) 121 | l_trans, l_R, l_t = transformation_loss(ret_dict['rotation'][-1], ret_dict['translation'][-1], gt_R, gt_t, args.alpha) 122 | total_loss += l_trans.item() 123 | total_R_loss += l_R.item() 124 | total_t_loss += l_t.item() 125 | count += 1 126 | 127 | total_loss = total_loss/count 128 | total_R_loss = total_R_loss/count 129 | total_t_loss = total_t_loss/count 130 | 131 | return total_loss, total_R_loss, total_t_loss 132 | 133 | def train_reg(args): 134 | 135 | if args.dataset == 'kitti': 136 | train_seqs = ['00','01','02','03','04','05'] 137 | train_dataset = KittiDataset(args.root, train_seqs, args.npoints, args.voxel_size, args.data_list, args.augment) 138 | elif args.dataset == 'nuscenes': 139 | train_seqs = ['train'] 140 | train_dataset = NuscenesDataset(args.root, train_seqs, args.npoints, args.voxel_size, args.data_list, args.augment) 141 | else: 142 | raise NotImplementedError('Not implemented') 143 | 144 | train_loader = DataLoader(train_dataset, 145 | batch_size=args.batch_size, 146 | num_workers=4, 147 | shuffle=True, 148 | pin_memory=True, 149 | drop_last=True) 150 | 151 | net = HRegNet(args) 152 | net.feature_extraction.load_state_dict(torch.load(args.pretrain_feats)) 153 | 154 | if args.use_wandb: 155 | wandb.watch(net) 156 | 157 | net.cuda() 158 | 159 | optimizer = optim.Adam(net.parameters(), lr=args.lr) 160 | scheduler = StepLR(optimizer, step_size=10, gamma=0.5) 161 | 162 | best_train_loss = 
float('inf') 163 | best_val_loss = float('inf') 164 | best_train_epoch = best_val_epoch = 0 165 | for epoch in tqdm(range(args.epochs)): 166 | net.train() 167 | count = 0 168 | total_loss = 0 169 | pbar = tqdm(enumerate(train_loader)) 170 | 171 | for i, data in pbar: 172 | src_points, dst_points, gt_R, gt_t = data 173 | src_points = src_points.cuda() 174 | dst_points = dst_points.cuda() 175 | gt_R = gt_R.cuda() 176 | gt_t = gt_t.cuda() 177 | 178 | optimizer.zero_grad() 179 | ret_dict = net(src_points, dst_points) 180 | 181 | l_trans = 0.0 182 | l_R = 0.0 183 | l_t = 0.0 184 | 185 | for idx in range(3): 186 | l_trans_, l_R_, l_t_ = transformation_loss(ret_dict['rotation'][idx], ret_dict['translation'][idx], gt_R, gt_t, args.alpha) 187 | l_trans += l_trans_ 188 | l_R += l_R_ 189 | l_t += l_t_ 190 | 191 | l_trans = l_trans / 3.0 192 | loss = l_trans 193 | loss.backward() 194 | optimizer.step() 195 | 196 | count += 1 197 | total_loss += loss.item() 198 | 199 | if i % 10 == 0: 200 | pbar.set_description('Train Epoch:{}[{}/{}({:.0f}%)]\tLoss: {:.6f}'.format( 201 | epoch+1, i, len(train_loader), 100. * i/len(train_loader), loss.item() 202 | )) 203 | 204 | total_loss /= count 205 | total_val_loss, total_val_R, total_val_t = val_reg(args, net) 206 | total_test_loss, total_test_R, total_test_t = test_reg(args, net) 207 | if args.use_wandb: 208 | wandb.log({"train loss":total_loss, 209 | "val loss": total_val_loss, \ 210 | "val R": total_val_R, \ 211 | "val t":total_val_t, \ 212 | "test loss":total_test_loss,\ 213 | "test R":total_test_R,\ 214 | "test t":total_test_t}) 215 | 216 | print('\n Epoch {} finished. Training loss: {:.4f} Validation loss: {:.4f}'.\ 217 | format(epoch+1, total_loss, total_val_loss)) 218 | 219 | ckpt_dir = os.path.join(args.ckpt_dir, args.dataset + '_ckpt_'+args.runname) 220 | if not os.path.exists(ckpt_dir): 221 | os.makedirs(ckpt_dir) 222 | 223 | if total_loss < best_train_loss: 224 | torch.save(net.state_dict(), os.path.join(ckpt_dir, 'best_train.pth')) 225 | best_train_loss = total_loss 226 | best_train_epoch = epoch + 1 227 | 228 | if total_val_loss < best_val_loss: 229 | torch.save(net.state_dict(), os.path.join(ckpt_dir, 'best_val.pth')) 230 | best_val_loss = total_val_loss 231 | best_val_epoch = epoch + 1 232 | 233 | print('Best train epoch: {} Best train loss: {:.4f} Best val epoch: {} Best val loss: {:.4f}'.format( 234 | best_train_epoch, best_train_loss, best_val_epoch, best_val_loss 235 | )) 236 | 237 | scheduler.step() 238 | 239 | if __name__ == '__main__': 240 | args = parse_args() 241 | 242 | os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu 243 | set_seed(args.seed) 244 | 245 | if args.use_wandb: 246 | wandb.init(config=args, project='HRegNet', name=args.dataset+'_'+args.runname, dir=args.wandb_dir) 247 | train_reg(args) --------------------------------------------------------------------------------
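For quick reference, the sketch below shows how the pretrained model can be loaded and applied to a pair of point clouds. It is illustrative only (not a file in the repository), and the random tensors merely stand in for preprocessed LiDAR frames:

```python
import torch
from argparse import Namespace

from models.models import HRegNet

# HRegNet only reads these four attributes from args (see models/models.py)
args = Namespace(use_fps=True, use_weights=True,
                 freeze_detector=False, freeze_feats=False)

net = HRegNet(args).cuda()
net.load_state_dict(torch.load('./ckpt/pretrained/kitti.pth'))
net.eval()

src = torch.rand(1, 16384, 3).cuda()  # [B,N,3] source cloud
dst = torch.rand(1, 16384, 3).cuda()  # [B,N,3] target cloud

with torch.no_grad():
    ret_dict = net(src, dst)

R = ret_dict['rotation'][-1]     # [1,3,3] finest-level rotation estimate
t = ret_dict['translation'][-1]  # [1,3]   finest-level translation estimate
```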