├── .github
│   └── workflows
│       └── cmake-single-platform.yml
├── README.md
├── README
│   ├── 10.1177_02783649241303525-fig15.jpg
│   ├── PALoc.svg+xml
│   ├── image (4).png
│   ├── image (5).png
│   ├── image (6).png
│   ├── image-1706213398646-3.png
│   ├── image-20230101200651976.png
│   ├── image-20230106135937336.png
│   ├── image-20230106140017020.png
│   ├── image-20240730152951528.png
│   ├── image-20241127080805642.png
│   ├── image-20241127083256739.png
│   ├── image-20241127083549685.png
│   ├── image-20241127083603307.png
│   ├── image-20241127083707943.png
│   ├── image-20241127083751932.png
│   ├── image-20241127083801691.png
│   ├── image-20241127083813797.png
│   ├── image-20241127083906299.png
│   ├── image-20241127083957970.png
│   ├── image-20241127084143144.png
│   ├── image-20241127084154114.png
│   ├── image-20241127084220301.png
│   ├── image-20241127084229287.png
│   ├── image-20241127091209844.png
│   ├── image-20241127091224755.png
│   ├── image-20241129091541655.png
│   ├── image-20241129091604653.png
│   ├── image-20241129091746334.png
│   ├── image-20241129091823199.png
│   ├── image-20241129091845196.png
│   ├── image-20241129091927786.png
│   ├── image-20241129092017261.png
│   ├── image-20241129092052634.png
│   ├── image-20241129123446075.png
│   ├── image-20250212202446474.png
│   ├── image-20250212202920950.png
│   ├── image-20250212202933255.png
│   ├── image-20250212203009074.png
│   ├── image-20250212203025149.png
│   ├── image-20250214100110872.png
│   ├── image-20250304121753995.png
│   ├── image-20250322192302315.png
│   ├── image-20250322192323830.png
│   ├── image-20250322192349614.png
│   ├── image-20250508072506829.png
│   ├── image-20250508072544312.png
│   ├── image-20250508072654363.png
│   ├── image-20250508072706474.png
│   ├── image-20250508072719975.png
│   ├── image-20250513202215918.png
│   ├── image-20250513202632779.png
│   ├── image-20250513202654237.png
│   ├── image-20250513203004722.png
│   ├── image-20250513203017745.png
│   ├── image-20250513204209752.png
│   ├── image-20250513205043100.png
│   ├── image-20250515144617477.png
│   ├── image-20250515144842908.png
│   ├── image-20250515144853720.png
│   ├── image-20250515144915631.png
│   ├── image-20250515144943191.png
│   ├── image-20250515145431522.png
│   ├── image-20250517201011720.png
│   └── image-20250517201051378.png
└── map_eval
    ├── .idea
    │   ├── .gitignore
    │   ├── easycode.ignore
    │   ├── libraries
    │   │   ├── ROS.xml
    │   │   └── workspace.xml
    │   ├── map_eval.iml
    │   ├── misc.xml
    │   ├── modules.xml
    │   ├── ros.xml
    │   └── vcs.xml
    ├── CMakeLists.txt
    ├── config
    │   ├── config.yaml
    │   └── template.yaml
    ├── include
    │   └── tic_toc.h
    ├── scripts
    │   ├── error-visualization.py
    │   ├── voxel_errors.txt
    │   └── voxel_wasserstein_cdf.txt
    └── src
        ├── map_eval.cpp
        ├── map_eval.h
        ├── map_eval_main.cpp
        ├── voxel_calculator.cpp
        └── voxel_calculator.hpp
/.github/workflows/cmake-single-platform.yml:
--------------------------------------------------------------------------------
1 | name: CMake CI (Ubuntu 20.04)
2 | 
3 | on:
4 |   push:
5 |     branches: [ "main" ]
6 |   pull_request:
7 |     branches: [ "main" ]
8 | 
9 | env:
10 |   BUILD_TYPE: Release
11 | 
12 | jobs:
13 |   build:
14 |     runs-on: ubuntu-20.04  # explicitly pin Ubuntu 20.04
15 | 
16 |     steps:
17 |     - uses: actions/checkout@v4
18 | 
19 |     - name: Install dependencies
20 |       run: |
21 |         # Add the official Open3D PPA
22 |         sudo apt-get install -y software-properties-common
23 |         sudo add-apt-repository ppa:open3d/ppa -y
24 |         sudo apt-get update
25 | 
26 |         # Install the pinned Open3D version (0.17+)
27 |         sudo apt-get install -y \
28 |           libopen3d-dev=0.17.0+* \
29 |           cmake \
30 |           g++ \
31 |           libeigen3-dev \
32 |           libpcl-dev \
33 |           libyaml-cpp-dev
34 | 
35 |     - name: Configure CMake
36 |       run: cmake -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}}
37 | 
38 |     - name: Build
39 |       run: cmake --build ${{github.workspace}}/build --config ${{env.BUILD_TYPE}} -j 2
40 | 
41 |     # Optional test step
42 |     - name: Test
43 |       working-directory: ${{github.workspace}}/build
44 |       run: ctest -C ${{env.BUILD_TYPE}}
45 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # MapEval: Towards Unified, Robust and Efficient SLAM Map Evaluation Framework
2 | 
3 | 
4 | 
5 | [**Xiangcheng Hu**](https://github.com/JokerJohn)<sup>1</sup> · [**Jin Wu**](https://zarathustr.github.io/)<sup>1</sup> · [**Mingkai Jia**](https://github.com/MKJia)<sup>1</sup> · [**Hongyu Yan**](https://scholar.google.com/citations?user=TeKnXhkAAAAJ&hl=zh-CN)<sup>1</sup> · [**Yi Jiang**](https://yijiang1992.github.io/)<sup>2</sup> · [**Binqian Jiang**](https://github.com/lewisjiang/)<sup>1</sup>
6 | 
7 | [**Wei Zhang**](https://ece.hkust.edu.hk/eeweiz)<sup>1</sup> · [**Wei He**](https://sites.google.com/view/drweihecv/home/)<sup>3</sup> · [**Ping Tan**](https://facultyprofiles.hkust.edu.hk/profiles.php?profile=ping-tan-pingtan#publications)<sup>1*†</sup>
8 | 
9 | **<sup>1</sup>HKUST  <sup>2</sup>CityU  <sup>3</sup>USTB**
10 | 
11 | †Project lead  *Corresponding Author
12 | 
13 | 
14 | [GitHub Stars](https://github.com/JokerJohn/Cloud_Map_Evaluation/stargazers) · [Issues](https://github.com/JokerJohn/Cloud_Map_Evaluation/issues) · [MIT License](https://opensource.org/licenses/MIT)
15 | 
16 | 
17 | 
18 | 
19 | 
20 | 
21 | 
22 | MapEval is a comprehensive framework for evaluating point cloud maps in SLAM systems, addressing two fundamentally distinct aspects of map quality assessment:
23 | 1. **Global Geometric Accuracy**: Measures the absolute geometric fidelity of the reconstructed map compared to ground truth. This aspect is crucial as SLAM systems often accumulate drift over long trajectories, leading to global deformation.
24 | 2. **Local Structural Consistency**: Evaluates the preservation of local geometric features and structural relationships, which is essential for tasks like obstacle avoidance and local planning, even when global accuracy may be compromised.
25 |
26 | These complementary aspects require different evaluation approaches, as global drift may exist despite excellent local reconstruction, or conversely, good global alignment might mask local inconsistencies. Our framework provides a unified solution through both traditional metrics and novel evaluation methods based on optimal transport theory.
27 |
28 | ## News
29 |
30 | - **2025/05/05**: Added new test data and removed the simulation code.
31 | - **2025/03/05**: [Formally published](https://ieeexplore.ieee.org/document/10910156)!
32 | - **2025/02/25**: Accepted!
33 | - **2025/02/12**: Code released!
34 | - **2025/02/05**: Resubmitted.
35 | - **2024/12/19**: Submitted to **IEEE RAL**!
36 |
37 | ## Key Features
38 |
39 | **Traditional Metrics Implementation**:
40 |
41 | - **Accuracy** (AC): Point-level geometric error assessment
42 | - **Completeness** (COM): Map coverage evaluation
43 | - **Chamfer Distance** (CD): Bidirectional point cloud difference (see the formula sketch below)
44 | - **Mean Map Entropy** (MME): Information-theoretic local consistency metric
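
As a concrete reference for the bidirectional term above, a hedged sketch of how the Chamfer Distance between an estimated map $P$ and a ground-truth map $Q$ is commonly written (the exact normalization and any distance truncation follow the implementation in `map_eval.cpp`):

$$
\mathrm{CD}(P,Q)=\frac{1}{|P|}\sum_{p\in P}\min_{q\in Q}\lVert p-q\rVert
\;+\;\frac{1}{|Q|}\sum_{q\in Q}\min_{p\in P}\lVert q-p\rVert
$$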
45 |
46 | **Novel Proposed Metrics**:
47 |
48 | - **Average Wasserstein Distance** (AWD): Robust global geometric accuracy assessment (see the formula sketch below)
49 | - **Spatial Consistency Score** (SCS): Enhanced local consistency evaluation
50 |
51 | |  |
52 | | ------------------------------------------------------------ |
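
For AWD, a hedged sketch of the quantity being averaged: assuming the usual Gaussian-per-voxel summary (each voxel of the estimated and ground-truth maps represented by a mean $\mu$ and covariance $\Sigma$, with the voxel size set by `vmd_voxel_size` in the config), the 2-Wasserstein distance between two Gaussians has the closed form

$$
W_2^2\bigl(\mathcal{N}(\mu_1,\Sigma_1),\mathcal{N}(\mu_2,\Sigma_2)\bigr)
=\lVert\mu_1-\mu_2\rVert_2^2
+\operatorname{Tr}\!\Bigl(\Sigma_1+\Sigma_2-2\bigl(\Sigma_2^{1/2}\Sigma_1\Sigma_2^{1/2}\bigr)^{1/2}\Bigr)
$$

and AWD averages this over the compared voxels; the exact voxel matching and weighting follow the paper and `voxel_calculator.cpp`.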
53 |
54 | ## Results
55 |
56 | ### Simulated experiments
57 |
58 | | Noise Sensitivity | Outlier Robustness |
59 | | ------------------------------------------------------------ | ------------------------------------------------------------ |
60 | |  |  |
61 |
62 | 
63 |
64 | ### Real-world experiments
65 |
66 | | Map Evaluation via Localization Accuracy | Map Evaluation in Diverse Environments |
67 | | ------------------------------------------------------------ | ------------------------------------------------------------ |
68 | |  |  |
69 |
70 | |  |
71 | | ------------------------------------------------------------ |
72 |
73 | ## Efficiency and Parameter Analysis
74 |
75 | |  |  |
76 | | ------------------------------------------------------------ | ------------------------------------------------------------ |
77 |
78 | ## Datasets
79 |
80 | | [MS-dataset](https://github.com/JokerJohn/MS-Dataset) | [FusionPortable (FP) and FusionPortableV2 dataset](https://fusionportable.github.io/dataset/fusionportable_v2/) | [Newer College (NC)](https://ori-drs.github.io/newer-college-dataset/) | [ GEODE dataset (GE)](https://github.com/PengYu-Team/GEODE_dataset) |
81 | | ----------------------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
82 |
83 | |  |
84 | | ------------------------------------------------------------ |
85 |
86 |
87 |
88 | ## Quick Start
89 |
90 | ### Dependencies
91 |
92 | - *[Open3D (>= 0.11)](https://github.com/isl-org/Open3D)*
93 | - Eigen3
94 | - yaml-cpp
95 | - Ubuntu 20.04
96 |
97 | ### Test Data (password: 1)
98 |
99 | | Sequence | Preview | Test PCD | GT PCD |
100 | | -------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
101 | | MCR_slow |  | [map.pcd](https://hkustconnect-my.sharepoint.com/:u:/g/personal/xhubd_connect_ust_hk/ES9eSANEr-9NvkFqMzMFsecBo5r3hBpBnj0c6BMPgsfXnQ?e=aijdPf) | [map_gt.pcd](https://hkustconnect-my.sharepoint.com/:u:/g/personal/xhubd_connect_ust_hk/ESfn5EEsiPlCiJcydVc_HqgBDGqy65MHoyu63XE-iKbFBQ?e=dTDon4) |
102 | | PK01 |  | [map.pcd](https://hkustconnect-my.sharepoint.com/:u:/g/personal/xhubd_connect_ust_hk/ERPFVJN6CtBKtHlPWyni-jIB0dgLzgF1FGxPTatKoCp02Q?e=TEgfBp) | [gt.pcd](https://hkustconnect-my.sharepoint.com/:u:/g/personal/xhubd_connect_ust_hk/EeztnFHwKJlCoW-fmKljaMMBSvNvT5BkTXxoA1iXqeUS5A?e=37evMi) |
103 |
104 | ### Usage
105 |
106 | 1. Install Open3D from source (a higher version of CMake may be needed), or use the apt-based alternative shown after the code block below.
107 |
108 | ```bash
109 | git clone https://github.com/isl-org/Open3D.git
110 | cd Open3D && mkdir build && cd build
111 | cmake ..
112 | make install
113 | ```
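
Alternatively, on Ubuntu 20.04 you can install a prebuilt Open3D from the official PPA, which is what the CI workflow in [.github/workflows/cmake-single-platform.yml](.github/workflows/cmake-single-platform.yml) does (the exact package versions depend on what the PPA currently provides; the CI pins `libopen3d-dev=0.17.0+*`):

```bash
# Add the official Open3D PPA and install Open3D plus the other build dependencies
sudo apt-get install -y software-properties-common
sudo add-apt-repository ppa:open3d/ppa -y
sudo apt-get update
sudo apt-get install -y libopen3d-dev cmake g++ libeigen3-dev libpcl-dev libyaml-cpp-dev
```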
114 |
115 | 2. Read the instructions for the parameters in [config.yaml](map_eval/config/config.yaml) and set them for your data; see also the note on `initial_matrix` after the snippet below.
116 |
117 | ```yaml
118 | # accuracy_level, vector5d, we mainly use the result of the first element
119 | # if the inlier ratio is very small, try larger values, e.g. [0.5, 0.3, 0.2, 0.1, 0.05] for outdoors
120 | accuracy_level: [0.2, 0.1, 0.08, 0.05, 0.01]
121 |
122 | # initial_matrix, vector16d, the initial matrix of the registration
123 | # make sure the format is correct, or you will get an error like: YAML::BadSubscript, what(): operator[] call on a scalar
124 | initial_matrix:
125 | - [1.0, 0.0, 0.0, 0.0]
126 | - [0.0, 1.0, 0.0, 0.0]
127 | - [0.0, 0.0, 1.0, 0.0]
128 | - [0.0, 0.0, 0.0, 1.0]
129 |
130 | # vmd voxel size, outdoor: 2.0-4.0; indoor: 2.0-3.0
131 | vmd_voxel_size: 3.0
132 | ```
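
As a reading aid (not a new option), a hedged note on `initial_matrix`: it is the 4x4 homogeneous transform, rotation $R$ and translation $t$, that roughly aligns the estimated map with the GT map before registration, i.e. the transform you obtain from CloudCompare in the Issues section below; the identity shown above assumes the two maps are already roughly aligned.

$$
T=\begin{bmatrix} R & t \\ \mathbf{0}^{\top} & 1 \end{bmatrix},
\qquad p_{\mathrm{gt}}\approx R\,p_{\mathrm{est}}+t
$$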
133 |
134 |
135 | 3. Compile map_eval:
136 |
137 | ```bash
138 | git clone https://github.com/JokerJohn/Cloud_Map_Evaluation.git
139 | cd Cloud_Map_Evaluation/map_eval && mkdir build && cd build
140 | cmake ..
141 | make
142 | ```
143 |
144 | 4. Run the evaluation to get the final results:
145 |
146 | ```bash
147 | ./map_eval
148 | ```
149 |
150 | Given a point cloud map generated by a pose-SLAM system and a ground-truth point cloud map, we then calculate the related metrics.
151 |
152 | 
153 |
154 | ### Visualization
155 |
156 | In this process we also get a rendered raw distance-error map (10 cm) and an inlier distance-error map (2 cm); the color scale R->G->B represents distance errors from 0 to 10 cm.
157 |
158 |
159 |
160 | If we do not have a **GT map**, we can only evaluate the **Mean Map Entropy (MME)**; smaller means better consistency. Just set `evaluate_mme: false` in **[config.yaml](map_eval/config/config.yaml)**.
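
For reference, a hedged sketch of how MME is commonly computed (the neighborhood radius and exact constants follow the implementation in `map_eval.cpp`): each map point $p_i$ is assigned the differential entropy of a Gaussian fitted to its local neighborhood, and MME is the mean over all $N$ points,

$$
h(p_i)=\frac{1}{2}\ln\bigl|2\pi e\,\Sigma_i\bigr|,
\qquad
\mathrm{MME}=\frac{1}{N}\sum_{i=1}^{N}h(p_i),
$$

where $\Sigma_i$ is the sample covariance of the points within a local radius around $p_i$; lower values indicate a crisper, more self-consistent map.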
161 |
162 |
163 |
164 | We can also get a simple mesh reconstructed from the point cloud map.
165 |
166 | 
167 |
168 | 5. Check the result files.
169 |
170 | 
171 |
172 | 6. If you want to visualize the voxel errors, use [error-visualization.py](map_eval/scripts/error-visualization.py):
173 |
174 | ```bash
175 | pip install numpy matplotlib scipy
176 |
177 | python3 error-visualization.py
178 | ```
179 |
180 | |  |  |
181 | | ------------------------------------------------------------ | ------------------------------------------------------------ |
182 | |  |  |
183 |
184 | ## Issues
185 |
186 | ### How do you get your initial pose?
187 |
188 | We can use [CloudCompare](https://github.com/CloudCompare/CloudCompare) to align the LIO map to the GT map:
189 |
190 | - Roughly translate and rotate the LIO point cloud map to the GT map.
191 |
192 | - Manually register the moved LIO map (aligned) to the GT map (reference) and read the transform `T2` from the terminal output; the terminal output transform `T` is then used as the initial pose matrix in config.yaml.
193 |
194 | |  |  |
195 | | ------------------------------------------------------------ | ------------------------------------------------------------ |
196 |
197 | ### What's the difference between raw rendered map and inlier rendered map?
198 |
199 | The primary function of the **raw rendered map** (left) is to color-code the error of every point in the map estimated by the algorithm. Each point in the estimated map that has no corresponding point in the **ground truth (GT) map** is assigned the maximum error (**20 cm**) by default and rendered red. The **inlier rendered map** (right), on the other hand, excludes the non-overlapping regions of the point cloud and colors only the error of the inlier points after point cloud matching, so it contains only a subset of the points from the original estimated map. (reminded by [John-Henawy](https://github.com/John-Henawy) in [issue 5](https://github.com/JokerJohn/Cloud_Map_Evaluation/issues/5))
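
To make the inlier notion concrete, a hedged sketch of the accuracy-style error behind both renderings, with $\tau$ the threshold taken from `accuracy_level` (exact details follow `map_eval.cpp`): every estimated point gets its nearest-neighbor distance to the GT map, the raw rendering colors all of these (saturating points without a valid correspondence at the maximum error), and the inlier rendering keeps only the points below the threshold.

$$
d(p)=\min_{q\in Q}\lVert p-q\rVert,
\qquad
\mathcal{I}=\{\,p\in P : d(p)\le\tau\,\},
\qquad
\mathrm{AC}=\sqrt{\frac{1}{|\mathcal{I}|}\sum_{p\in\mathcal{I}}d(p)^2}
$$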
200 |
201 |
202 |
203 | ### **Applicable Scenarios:**
204 |
205 | 1. **With a ground truth map:** All metrics are applicable.
206 |
207 | 2. **Without a ground truth map** (reminded by [@Silentbarber](https://github.com/Silentbarber) and [ZOUYIyi](https://github.com/ZOUYIyi) in [issue 4](https://github.com/JokerJohn/Cloud_Map_Evaluation/issues/4) and [issue 7](https://github.com/JokerJohn/Cloud_Map_Evaluation/issues/7)):
208 |
209 | - Only **MME** can be used for evaluation. It is crucial to remember that the maps being evaluated must be on the same scale.
210 |
211 | > For example, **you cannot compare a LIO map with a LIO SLAM map** that has performed loop closure optimization. This is because loop closure adjusts the local point cloud structure, leading to inaccurate MME evaluation. You can compare the MME of different LIO maps.
212 |
213 | ## Publications
214 |
215 | We recommend citing [our paper](https://arxiv.org/abs/2411.17928) if you find this library useful:
216 |
217 | ```latex
218 | @article{hu2024mapeval,
219 |   title={MapEval: Towards Unified, Robust and Efficient SLAM Map Evaluation Framework},
220 |   author={Xiangcheng Hu and Jin Wu and Mingkai Jia and Hongyu Yan and Yi Jiang and Binqian Jiang and Wei Zhang and Wei He and Ping Tan},
221 |   journal={IEEE Robotics and Automation Letters},
222 |   year={2025},
223 |   volume={10},
224 |   number={5},
225 |   pages={4228-4235},
226 |   doi={10.1109/LRA.2025.3548441}
227 | }
228 | @article{wei2024fpv2,
229 | title={Fusionportablev2: A unified multi-sensor dataset for generalized slam across diverse platforms and scalable environments},
230 | author={Wei, Hexiang and Jiao, Jianhao and Hu, Xiangcheng and Yu, Jingwen and Xie, Xupeng and Wu, Jin and Zhu, Yilong and Liu, Yuxuan and Wang, Lujia and Liu, Ming},
231 | journal={The International Journal of Robotics Research},
232 | pages={02783649241303525},
233 | year={2024},
234 | publisher={SAGE Publications Sage UK: London, England}
235 | }
236 | ```
237 |
238 | ## Related Package
239 |
240 | The following works use MapEval for map evaluation.
241 |
242 | | Work | Tasks | Date | Metrics | Demo |
243 | | ------------------------------------------------------------ | ---------------------------------------- | ---------- | --------- | ------------------------------------------------------------ |
244 | | [**LEMON-Mapping**](https://arxiv.org/abs/2505.10018) | Multi-Session Point Cloud mapping | Arxiv'2025 | MME |  |
245 | | **[CompSLAM](https://arxiv.org/abs/2505.06483)** | Multi-Modal Localization