├── .asset
│   ├── day0.jpg
│   ├── day1.jpg
│   ├── example_duster_neg_pair.png
│   ├── radar-webpage.png
│   └── replicate.md
├── .gitignore
├── .gitmodules
├── LICENSE
├── README.md
├── config
│   ├── day.yaml
│   ├── day_debug.yaml
│   ├── night.yaml
│   ├── nordland.yaml
│   ├── season.yaml
│   ├── uacampus.yaml
│   └── weather.yaml
├── dataloaders
│   ├── ImagePairDataset.py
│   └── __init__.py
├── dataset
│   └── gt
│       ├── day.txt
│       ├── night.txt
│       ├── nordland.txt
│       ├── season.txt
│       ├── uacampus.txt
│       └── weather.txt
├── main.py
├── requirements.txt
└── scripts
    └── evaluation.sh
/.asset/day0.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jarvisyjw/GV-Bench/b78355f066f1ec511261110d253077e7415fc12e/.asset/day0.jpg
--------------------------------------------------------------------------------
/.asset/day1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jarvisyjw/GV-Bench/b78355f066f1ec511261110d253077e7415fc12e/.asset/day1.jpg
--------------------------------------------------------------------------------
/.asset/example_duster_neg_pair.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jarvisyjw/GV-Bench/b78355f066f1ec511261110d253077e7415fc12e/.asset/example_duster_neg_pair.png
--------------------------------------------------------------------------------
/.asset/radar-webpage.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jarvisyjw/GV-Bench/b78355f066f1ec511261110d253077e7415fc12e/.asset/radar-webpage.png
--------------------------------------------------------------------------------
/.asset/replicate.md:
--------------------------------------------------------------------------------
1 | # Replicate Results in Paper
2 |
3 | - **Note**
4 | We use fundamental matrix estimation for outlier rejection and a default image resolution of 1024×640, which differs from image-matching-models' processing; if you replicate the experiments with image-matching-models, you may therefore get different results. To foster further research, we have deprecated the previous image matchers derived from hloc. A minimal sketch of the verification step follows.
5 |
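A minimal sketch of that verification step (illustrative only, not the exact benchmark code: `mkpts0`/`mkpts1` stand for the matched keypoint arrays produced by any matcher, and the thresholds mirror the RANSAC settings in `main.py`):

```python
import cv2
import numpy as np

def count_inliers(mkpts0: np.ndarray, mkpts1: np.ndarray) -> int:
    """RANSAC inlier count of a fundamental-matrix fit between matched keypoints."""
    if len(mkpts0) < 8:  # the 8-point algorithm needs at least 8 correspondences
        return 0
    # Reprojection threshold 3 px and confidence 0.95, as in main.py's ransac_kwargs.
    _F, mask = cv2.findFundamentalMat(mkpts0, mkpts1, cv2.FM_RANSAC,
                                      ransacReprojThreshold=3.0, confidence=0.95)
    return int(mask.sum()) if mask is not None else 0
```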
6 | - ### Replicate from the log
7 | We provide the [output results](https://hkustconnect-my.sharepoint.com/:f:/g/personal/jyubt_connect_ust_hk/EkflAPp79spCviRK5EkSGVABrGncg-TfNV5I3ThXxzopLg?e=tu91Xn) in the format shown below; you can use these results directly. A minimal loading snippet follows the format listing.
8 |
9 | - `$seq_$feature_$match.log`
10 | - `$seq_$feature_$match.npy` # with the following format
11 |
12 | ```python
13 | np.save(str(export_dir), {
14 | 'prob': num_matches_norm,
15 | 'qImages': qImages,
16 | 'rImages': rImages,
17 | 'gt': labels,
18 | 'inliers': inliers_list,
19 | 'all_matches': pointMaps,
20 | 'precision': precision,
21 | 'recall': recall,
22 | 'TH': TH,
23 | 'average_precision': average_precision,
24 | 'Max Recall': r_recall
25 | })
26 | ```
27 |
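A minimal sketch for loading one of these result files (the file name is hypothetical; since `np.save` stored a Python dict, `allow_pickle=True` and `.item()` are needed on load):

```python
import numpy as np

# Load a result file; .item() unwraps the 0-d object array into the saved dict.
results = np.load('day_superpoint_superglue.npy', allow_pickle=True).item()
print(results['average_precision'], results['Max Recall'])
print(len(results['qImages']), 'pairs evaluated')
```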
28 | - ### Replicate from scratch
29 | To obtain standard feature detection and matching results, we propose using [hloc](https://github.com/cvg/Hierarchical-Localization).
30 |
31 | - Download the dataset sequences from [google drive](https://drive.google.com/file/d/1145hQb812E0HaPGekdpD04bEbjuej4Lx/view?usp=drive_link) and put them under the `dataset/` folder.
32 |
33 | - Extract and match features using hloc.
34 | - Extract features: SIFT, SuperPoint, and DISK
35 |
36 | ```bash
37 | python third_party/Hierarchical-Localization/gvbench_utils.py config/${seq}.yaml --extraction
38 | ```
39 | - Match features: SIFT-NN, SIFT-LightGlue (Not yet implemented), SuperPoint-NN, DISK-NN, SuperPoint-SuperGlue, SuperPoint-LightGlue, DISK-LightGlue, LoFTR
40 | ```bash
41 | # all methods except LoFTR
42 | python third_party/Hierarchical-Localization/gvbench_utils.py config/${seq}.yaml --matching
43 |
44 | # LoFTR differs from the methods above, so use:
45 | python third_party/Hierarchical-Localization/gvbench_utils.py config/${seq}.yaml --matching_loftr
46 | ```
47 | - Image pairs files
48 | - We provide the ground-truth pairs files for matching under the `dataset/gt` folder (see the format example below).
49 | - Make sure to use the forked hloc (`https://github.com/jarvisyjw/Hierarchical-Localization.git`, branch `gvbench`) for feature extraction and matching.
50 | - Example `${seq}.yaml` (here, `night.yaml`):
51 |
52 | ```yaml
53 | # night.yaml
54 | extraction:
55 | image_path: dataset/images
56 | features:
57 | output_path: dataset/features
58 | matching:
59 | pairs: dataset/gt/night.txt
60 | feature:
61 | feature_path: dataset/features
62 | output_path: dataset/matches
63 | matcher:
64 | matching_loftr:
65 | pairs: dataset/gt/night.txt
66 | image_path: dataset/images
67 | matches: dataset/matches
68 | features: dataset/features
69 | ```
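For reference, each line of a pairs (GT) file under `dataset/gt` holds a query image path, a reference image path, and a binary ground-truth label, separated by spaces (this is the format parsed by `dataloaders/ImagePairDataset.py`; the file names below are made up):

```text
day0/000000.jpg day1/000000.jpg 1
day0/000001.jpg day1/000417.jpg 0
```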
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
2 | .vscode
3 | dataset/
--------------------------------------------------------------------------------
/.gitmodules:
--------------------------------------------------------------------------------
1 | [submodule "third_party/image-matching-models"]
2 | path = third_party/image-matching-models
3 | url = https://github.com/jarvisyjw/image-matching-models.git
4 | branch = gvbench
5 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 Jingwen(Jarvis) YU
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # GV-Bench
2 |
3 | > GV-Bench: Benchmarking Local Feature Matching for Geometric Verification of Long-term Loop Closure Detection
4 | > [Jingwen Yu](https://jingwenyust.github.io/), [Hanjing Ye](https://medlartea.github.io/), [Jianhao Jiao](https://gogojjh.github.io/), [Ping Tan](https://facultyprofiles.hkust.edu.hk/profiles.php?profile=ping-tan-pingtan), [Hong Zhang](https://faculty.sustech.edu.cn/?tagid=zhangh33&iscss=1&snapid=1&orderby=date&go=2&lang=en)
5 | > 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
6 | > [arXiv](https://arxiv.org/abs/2407.11736), [IEEEXplore](https://ieeexplore.ieee.org/abstract/document/10801481), [Project Page](https://jarvisyjw.github.io/GV-Bench/), [Blog Post (in Chinese)](https://mp.weixin.qq.com/s/edUw7vLep0zmve0Uj3IzkQ), [Video (Bilibili)](https://www.bilibili.com/video/BV1WD23YhEZw/?share_source=copy_web&vd_source=4db6a86d3347fa85196b3e77a6092d1a)
7 | >
8 |
9 |
10 |
11 |
12 |
13 | ## Abstract
14 | In short, GV-Bench provides a benchmark for evaluating different local feature matching methods on geometric verification (GV), which is crucial for vision-based localization and mapping systems (e.g., Visual SLAM, Structure-from-Motion, Visual Localization).
15 |
16 | ## News
17 | - :star: Added support for [image-matching-models](https://github.com/alexstoken/image-matching-models); thanks to the authors for their excellent work! :smile:
18 | - :tada: [Chinese intro](https://mp.weixin.qq.com/s/edUw7vLep0zmve0Uj3IzkQ) on our WeChat official account.
19 | - :star: The paper is released on [arXiv](https://arxiv.org/abs/2407.11736).
20 | - :tada: The paper is accepted by IROS 2024!
21 | - :rocket: Releasing the visualization of [image matching](https://drive.google.com/file/d/1145hQb812E0HaPGekdpD04bEbjuej4Lx/view?usp=drive_link) results.
22 | - :rocket: Releasing the benchmark!
23 |
24 | ## Installation
25 |
26 |
27 | ```bash
28 | # clone the repo
29 | git clone --recursive https://github.com/jarvisyjw/GV-Bench.git && cd GV-Bench
30 | conda create -n gvbench python=3.11 && conda activate gvbench
31 | pip install -r requirements.txt
32 | cd third_party/image-matching-models
33 | git submodule init
34 | git submodule update
35 | ```
36 |
37 | ## Evaluation
38 | ### Data
39 | 1. Get the GV-Bench sequences from [here](https://hkustconnect-my.sharepoint.com/:f:/g/personal/jyubt_connect_ust_hk/EkflAPp79spCviRK5EkSGVABrGncg-TfNV5I3ThXxzopLg?e=DdwCAL).
40 | 2. Unzip and organize the dataset folder as follows:
41 |
42 | ```bash
43 | |-- gt
44 | | |-- day.txt
45 | | |-- night.txt
46 | | |-- nordland.txt
47 | | |-- season.txt
48 | | |-- uacampus.txt
49 | | |-- weather.txt
50 | |-- images
51 | | |-- day0
52 | | |-- day1
53 | | |-- night0
54 | | |...
55 | ```
56 | ### Benchmark
57 | We now support using [image-matching-models](https://github.com/alexstoken/image-matching-models) directly,
58 | which enables many more state-of-the-art matching methods for geometric verification (GV).
59 | Example usage:
60 |
61 | ```bash
62 | # Show all supported image matching models
63 | python main.py --support_model
64 | # Run
65 | python main.py config/day.yaml
66 | ```
67 |
68 | In the configs, please specify the data directory (`image_dir`), the sequence info (`pairs_info`, shipped with the repo under `dataset/gt`), the image size (`image_width`, `image_height`), and the `matcher` list you want to use; a sketch follows.
69 | The evaluation supports running multiple matchers in a single run. However, our sequences provide a rather large number of pairs, so the evaluation may take some time.
70 |
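A minimal config sketch mirroring `config/day.yaml` (`exp_log` names the text file to which `main.py` appends each matcher's results):

```yaml
# gvbench sequence
data:
  name: day
  image_dir: dataset/images
  pairs_info: dataset/gt/day.txt
  image_width: 512
  image_height: 288

exp_log: dataset/day.txt

# matchers you want to test
matcher: [master, superpoint-lg, sift-nn, loftr]
```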
71 | If you want to replicate the paper's IROS 2024 results, please refer to [this guide](./.asset/replicate.md).
72 |
73 | ### Using Customized Image Matcher
74 | - We recommend contributing your image matcher to [image-matching-models](https://github.com/alexstoken/image-matching-models), or building it to follow their interface.
75 | - Example usage of the bench loader and evaluator:
76 | ```python
77 | # bench sequence
78 | gvbench_seq = ImagePairDataset(config.data, transform=None) # loads and resizes images
79 | labels = gvbench_seq.label # load labels
80 | MODEL = YOUR_MODEL(max_num_keypoints=2048) # your image matching model
81 | # If your method is two-stage image matching,
82 | # i.e. Step 1: keypoint extraction
83 | #      Step 2: keypoint matching,
84 | # we recommend setting max_num_keypoints to 2048
85 | # to be consistent with the image-matching-models default setup.
86 | scores = []
87 | for data in gvbench_seq:
88 |     img0, img1 = data['img0'], data['img1'] # already loaded and resized by the dataset
89 |     inliers = MODEL(img0, img1) # inlier count serves as the matching score
90 |     scores.append(inliers)
91 | # normalize inlier counts to [0, 1]
92 | scores = np.array(scores)
93 | scores_norm = (scores - np.min(scores)) / (np.max(scores) - np.min(scores))
94 | mAP, MaxR = eval(scores_norm, labels)
95 | ```
96 |
97 | ## Experiments
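The tables below report mAP (mean average precision) and the maximum recall at 100% precision (`Max Recall@1.0`), as computed by `main.py`.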
98 | ```text
99 | Seq: Day
100 | +---------------------------------------------------------+
101 | | GV-Bench:day |
102 | +---------------+--------------------+--------------------+
103 | | Matcher | mAP | Max Recall@1.0 |
104 | +---------------+--------------------+--------------------+
105 | | master | 0.9975692177820614 | 0.5236404833836859 |
106 | | superpoint-lg | 0.9937596599094175 | 0.4628776435045317 |
107 | | disk-lg | 0.9971895284342447 | 0.5448640483383685 |
108 | | roma | 0.9961146810617404 | 0.389916918429003 |
109 | | sift-nn | 0.9742650091865308 | 0.3534365558912387 |
110 | | sift-lg | 0.9934969495417851 | 0.4178247734138973 |
111 | | duster | 0.9913920054359513 | 0.5118202416918429 |
112 | | xfeat | 0.9896198466632193 | 0.4217522658610272 |
113 | | loftr | 0.9933317967081768 | 0.41774924471299096|
114 | +---------------+--------------------+--------------------+
115 |
116 | Seq: Night
117 | +----------------------------------------------------------+
118 | | GV-Bench:night |
119 | +---------------+--------------------+---------------------+
120 | | Matcher | mAP | Max Recall@1.0 |
121 | +---------------+--------------------+---------------------+
122 | | master | 0.9868066941492244 | 0.504996776273372 |
123 | | superpoint-lg | 0.9767695728913396 | 0.384107027724049 |
124 | | disk-lg | 0.96613211965037 | 0.42109929078014185 |
125 | | roma | 0.9878749584388696 | 0.11210509348807221 |
126 | | sift-nn | 0.6135980930535652 | 0.04223081882656351 |
127 | | sift-lg | 0.9343053648298021 | 0.45970341715022567 |
128 | | duster | 0.7884898363259103 | 0.24186009026434557 |
129 | | xfeat | 0.9142595500480328 | 0.2730496453900709 |
130 | | loftr | 0.9849044142298791 | 0.3534816247582205 |
131 | +---------------+--------------------+---------------------+
132 |
133 | Seq: Season
134 | +-----------------------------------------------------------+
135 | | GV-Bench:season |
136 | +---------------+--------------------+----------------------+
137 | | Matcher | mAP | Max Recall@1.0 |
138 | +---------------+--------------------+----------------------+
139 | | master | 0.9988580197550959 | 0.6730315734311542 |
140 | | superpoint-lg | 0.998920429558958 | 0.7452181317961483 |
141 | | disk-lg | 0.999232632789443 | 0.7445958338792087 |
142 | | roma | 0.9980641689106753 | 0.09049521813179615 |
143 | | sift-nn | 0.9791038717843851 | 0.24914843442945106 |
144 | | sift-lg | 0.9985371274637678 | 0.5848617843573956 |
145 | | duster | 0.9626673775950897 | 0.030558102973928993 |
146 | | xfeat | 0.998104560026835 | 0.7191143718066291 |
147 | | loftr | 0.9989917476116729 | 0.7945106773221539 |
148 | +---------------+--------------------+----------------------+
149 |
150 | Seq: Weather
151 | +----------------------------------------------------------+
152 | | GV-Bench:weather |
153 | +---------------+--------------------+---------------------+
154 | | Matcher | mAP | Max Recall@1.0 |
155 | +---------------+--------------------+---------------------+
156 | | master | 0.9988330039057637 | 0.4650407500459587 |
157 | | superpoint-lg | 0.9984747637720905 | 0.566486917090508 |
158 | | disk-lg | 0.9984560239543505 | 0.5313131932103683 |
159 | | roma | 0.9973693543564228 | 0.11639806360683866 |
160 | | sift-nn | 0.9962779437782742 | 0.416355168821619 |
161 | | sift-lg | 0.9983667779883508 | 0.5204363012439488 |
162 | | duster | 0.9984571842125228 | 0.23999632330412402 |
163 | | xfeat | 0.9973036794158792 | 0.38568539738954594 |
164 | +---------------+--------------------+---------------------+
165 |
166 | Seq: UAcampus
167 | +-----------------------------------------------------------+
168 | | GV-Bench:uacampus |
169 | +---------------+--------------------+----------------------+
170 | | Matcher | mAP | Max Recall@1.0 |
171 | +---------------+--------------------+----------------------+
172 | | master | 0.8295730669487469 | 0.12686567164179105 |
173 | | superpoint-lg | 0.8518262736710289 | 0.26119402985074625 |
174 | | disk-lg | 0.7876584656153724 | 0.06716417910447761 |
175 | | roma | 0.7335515971553305 | 0.06716417910447761 |
176 | | sift-nn | 0.5002679213167198 | 0.007462686567164179 |
177 | | sift-lg | 0.814838399559011 | 0.22885572139303484 |
178 | | duster | 0.5915022125735299 | 0.022388059701492536 |
179 | | xfeat | 0.8190512921138375 | 0.19402985074626866 |
180 | | loftr | 0.7951256302225166 | 0.2263681592039801 |
181 | +---------------+--------------------+----------------------+
182 |
183 | Seq: Nordland
184 | +-------------------------------------------------------------+
185 | | GV-Bench:nordland |
186 | +---------------+---------------------+-----------------------+
187 | | Matcher | mAP | Max Recall@1.0 |
188 | +---------------+---------------------+-----------------------+
189 | | master | 0.8351055461236119 | 0.1266891891891892 |
190 | | superpoint-lg | 0.7352313891322164 | 0.02027027027027027 |
191 | | disk-lg | 0.7109283125369613 | 0.11148648648648649 |
192 | | roma | 0.77038309764354 | 0.010135135135135136 |
193 | | sift-nn | 0.21872420539646553 | 0.0016891891891891893 |
194 | | sift-lg | 0.7351407725535801 | 0.06418918918918919 |
195 | | duster | 0.15624893409492682 | 0.012387387387387387 |
196 | | xfeat | 0.7170417005176157 | 0.14864864864864866 |
197 | | loftr | 0.7842976503985124 | 0.14695945945945946 |
198 | +---------------+---------------------+-----------------------+
199 | ```
200 |
201 | ## Acknowledgement
202 | - We thank [hloc](https://github.com/cvg/Hierarchical-Localization) and [image-matching-models](https://github.com/alexstoken/image-matching-models) for their amazing work.
203 | - Contact: `jingwen.yu@connect.ust.hk`
204 |
205 | ## Citation
206 | ```
207 | @inproceedings{yu2024gv,
208 | title={GV-Bench: Benchmarking Local Feature Matching for Geometric Verification of Long-term Loop Closure Detection},
209 | author={Yu, Jingwen and Ye, Hanjing and Jiao, Jianhao and Tan, Ping and Zhang, Hong},
210 | booktitle={2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
211 | pages={7922--7928},
212 | year={2024},
213 | organization={IEEE}
214 | }
215 | ```
--------------------------------------------------------------------------------
/config/day.yaml:
--------------------------------------------------------------------------------
1 | # gvbench sequence
2 | data:
3 | name: day
4 | image_dir: dataset/images
5 | pairs_info: dataset/gt/day.txt
6 | image_width: 512
7 | image_height: 288
8 |
9 | exp_log: dataset/day.txt # required by main.py
10 |
11 | # matchers you want to test
12 | matcher: [master, superpoint-lg, disk-lg, roma, sift-nn, sift-lg, duster, xfeat, loftr]
13 |
--------------------------------------------------------------------------------
/config/day_debug.yaml:
--------------------------------------------------------------------------------
1 | # gvbench sequence
2 | data:
3 |   name: day_debug
4 |   image_dir: dataset/images
5 |   pairs_info: dataset/gt/day_debug.txt
6 |   image_width: 512
7 |   image_height: 288
8 |
9 | exp_log: dataset/day_debug.txt # required by main.py
10 |
11 | # matchers you want to test
12 | matcher: [master, superpoint-lg, disk-lg, roma, sift-nn, sift-lg, duster, xfeat]
--------------------------------------------------------------------------------
/config/night.yaml:
--------------------------------------------------------------------------------
1 | # gvbench sequence
2 | data:
3 | name: night
4 | image_dir: dataset/images
5 | pairs_info: dataset/gt/night.txt
6 | image_width: 512
7 | image_height: 288
8 |
9 | exp_log: dataset/night.txt # required by main.py
10 |
11 | # matchers you want to test
12 | matcher: [master, superpoint-lg, disk-lg, roma, sift-nn, sift-lg, duster, xfeat, loftr]
13 |
--------------------------------------------------------------------------------
/config/nordland.yaml:
--------------------------------------------------------------------------------
1 | # gvbench sequence
2 | data:
3 | name: nordland
4 | image_dir: dataset/images
5 | pairs_info: dataset/gt/nordland.txt
6 | image_width: 512
7 | image_height: 288
8 |
9 | exp_log: dataset/nordland.txt
10 |
11 | # matchers you want to test
12 | matcher: [master, superpoint-lg, disk-lg, roma, sift-nn, sift-lg, duster, xfeat, loftr]
13 |
--------------------------------------------------------------------------------
/config/season.yaml:
--------------------------------------------------------------------------------
1 | # gvbench sequence
2 | data:
3 | name: season
4 | image_dir: dataset/images
5 | pairs_info: dataset/gt/season.txt
6 | image_width: 512
7 | image_height: 288
8 |
9 | exp_log: dataset/season.txt
10 |
11 | # matchers you want to test
12 | matcher: [master, superpoint-lg, disk-lg, roma, sift-nn, sift-lg, duster, xfeat, loftr]
--------------------------------------------------------------------------------
/config/uacampus.yaml:
--------------------------------------------------------------------------------
1 | # gvbench sequence
2 | data:
3 | name: uacampus
4 | image_dir: dataset/images
5 | pairs_info: dataset/gt/uacampus.txt
6 | image_width: 512
7 | image_height: 288
8 | exp_log: dataset/uacampus.txt
9 | # matchers you want to test
10 | matcher: [master, superpoint-lg, disk-lg, roma, sift-nn, sift-lg, duster, xfeat, loftr]
--------------------------------------------------------------------------------
/config/weather.yaml:
--------------------------------------------------------------------------------
1 | # gvbench sequence
2 | data:
3 | name: weather
4 | image_dir: dataset/images
5 | pairs_info: dataset/gt/weather.txt
6 | image_width: 512
7 | image_height: 288
8 |
9 | exp_log: dataset/weather.txt
10 |
11 | # matchers you want to test
12 | matcher: [master, superpoint-lg, disk-lg, roma, sift-nn, sift-lg, duster, xfeat, loftr]
--------------------------------------------------------------------------------
/dataloaders/ImagePairDataset.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torch.utils.data import Dataset
3 | from PIL import Image
4 | import os
5 | import sys
6 | from pathlib import Path
7 | from typing import Tuple
8 | sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'third_party', 'image-matching-models'))
9 | from matching.im_models.base_matcher import BaseMatcher
10 |
11 | class ImagePairDataset(Dataset):
12 | def __init__(self, cfg, transform=None, save_memory=True):
13 |         """
14 |         Args:
15 |             cfg: Config namespace with image_dir, pairs_info, image_width, and image_height.
16 |             transform (callable, optional): Optional transform applied to a sample (not implemented).
17 |             save_memory (bool): If True, do not cache loaded image pairs in memory.
18 |         """
19 | self.root_dir = cfg.image_dir
20 | self.transform = transform
21 | self.image_pairs = []
22 | self.label = []
23 | self.save_memory = save_memory
24 | self.cached = {}
25 | self.image_size = (cfg.image_height, cfg.image_width)
26 |
27 | # Read the file and extract image paths
28 | with open(cfg.pairs_info, 'r') as file:
29 | lines = file.readlines()
30 | for line in lines:
31 | image1_path, image2_path, gt = line.strip().split()
32 | self.image_pairs.append((image1_path, image2_path))
33 | self.label.append(int(gt))
34 |
35 | def __len__(self):
36 | return len(self.image_pairs)
37 |
38 | # wrapper of image-matching-models BaseMatcher's load image
39 | def load_image(self, path: str | Path, resize: int | Tuple = None, rot_angle: float = 0) -> torch.Tensor:
40 | return BaseMatcher.load_image(path, resize, rot_angle)
41 |
42 | def __getitem__(self, idx):
43 |
44 | if idx in self.cached:
45 | return self.cached[idx]
46 | else:
47 | img0_path, img1_path = self.image_pairs[idx]
48 | label = self.label[idx]
49 |
50 | img0_path = os.path.join(self.root_dir, img0_path)
51 | img1_path = os.path.join(self.root_dir, img1_path)
52 |
53 | if self.transform:
54 | raise NotImplementedError("Transform not implemented")
55 |
56 | data = {
57 | 'img0': self.load_image(img0_path,resize=self.image_size),
58 | 'img1': self.load_image(img1_path,resize=self.image_size),
59 | 'label': label
60 | }
61 |
62 | if not self.save_memory:
63 | self.cached[idx] = data
64 |
65 | return data
--------------------------------------------------------------------------------
/dataloaders/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jarvisyjw/GV-Bench/b78355f066f1ec511261110d253077e7415fc12e/dataloaders/__init__.py
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | from dataloaders.ImagePairDataset import ImagePairDataset
2 | import sys
3 | import argparse
4 | import yaml
5 | from sklearn.metrics import average_precision_score, precision_recall_curve
6 | from prettytable import PrettyTable
7 | import numpy as np
8 | from typing import Tuple
9 |
10 | ### import image-matching-models
11 | sys.path.append('third_party/image-matching-models')
12 | import warnings
13 | warnings.filterwarnings("ignore")
14 | from matching import get_matcher, available_models
15 | from matching.im_models.base_matcher import BaseMatcher
16 | from matching.viz import *
17 | from pathlib import Path
18 | import torch
19 | from torch.utils.data import DataLoader
20 | from tqdm import tqdm
21 |
22 | def parser():
23 | parser = argparse.ArgumentParser()
24 | parser.add_argument('config', type=str, nargs='?', help='Path to the config file')
25 | parser.add_argument('--support_model', action='store_true', help="Show all image-matching models")
26 | args = parser.parse_args()
27 |
28 | def dict2namespace(config):
29 | namespace = argparse.Namespace()
30 | for key, value in config.items():
31 | if isinstance(value, dict):
32 | new_value = dict2namespace(value)
33 | else:
34 | new_value = value
35 | setattr(namespace, key, new_value)
36 | return namespace
37 |
38 | # Check for config file
39 | if args.config is None:
40 | if args.support_model:
41 | print(f"Available models: {available_models}")
42 | sys.exit(0)
43 | else:
44 | raise ValueError('Please provide a config file')
45 |
46 | # Load the config file
47 | try:
48 | with open(args.config, 'r') as f:
49 | config = yaml.safe_load(f)
50 | except FileNotFoundError:
51 | raise FileNotFoundError(f"Config file '{args.config}' not found.")
52 | except yaml.YAMLError as e:
53 | raise ValueError(f"Error parsing YAML file: {e}")
54 |
55 | config = dict2namespace(config)
56 |
57 | return config
58 |
59 | # wrapper of image-matching-models BaseMatcher's load image
60 | def load_image(path: str | Path, resize: int | Tuple = None, rot_angle: float = 0) -> torch.Tensor:
61 | return BaseMatcher.load_image(path, resize, rot_angle)
62 |
63 | def match(matcher, loader, image_size=512):
64 | '''
65 | Args:
66 | matcher: image-matching-models matcher
67 | loader: dataloader
68 |         image_size: unused here; images are already resized by the dataset
69 |
70 | Return:
71 | scores: np.array
72 | '''
73 | scores = []
74 | for idx, data in tqdm(enumerate(loader), total=len(loader)):
75 | img0, img1 = data['img0'], data['img1']
76 | img0 = img0.squeeze(0)
77 | img1 = img1.squeeze(0)
78 | result = matcher(img0, img1)
79 | num_inliers, H, mkpts0, mkpts1 = result['num_inliers'], result['H'], result['inlier_kpts0'], result['inlier_kpts1']
80 | scores.append(num_inliers)
81 | # normalize
82 | scores = np.array(scores)
83 | scores_norm = (scores - np.min(scores)) / (np.max(scores)- np.min(scores))
84 | return scores_norm
85 |
86 | # max recall @ 100% precision
87 | def max_recall(precision: np.ndarray, recall: np.ndarray):
88 | idx = np.where(precision == 1.0)
89 | max_recall = np.max(recall[idx])
90 | return max_recall
91 |
92 | def eval(scores, labels):
93 | '''
94 | Args:
95 |         scores: np.array of normalized matching scores
96 |         labels: np.array of ground-truth binary labels
97 |
98 |     Return:
99 |         average_precision: float, mean average precision (mAP)
100 |         recall_max: float, max recall @ 100% precision
101 |
102 |     '''
105 | # mAP
106 | average_precision = average_precision_score(labels, scores)
107 | precision, recall, TH = precision_recall_curve(labels, scores)
108 | # max recall @ 100% precision
109 | recall_max = max_recall(precision, recall)
110 | return average_precision, recall_max
111 |
112 | def main(config):
113 | # ransac params, keep it consistent for fairness
114 | ransac_kwargs = {'ransac_reproj_thresh': 3,
115 | 'ransac_conf':0.95,
116 | 'ransac_iters':2000} # optional ransac params
117 | # bench sequence
118 | gvbench_seq = ImagePairDataset(config.data, transform=None) # load images
119 | # current imm models only support batch size 1
120 | gvbench_loader = DataLoader(gvbench_seq, batch_size=1, shuffle=False, num_workers=10, pin_memory=True, prefetch_factor=10) # create dataloader
121 | labels = gvbench_seq.label # load labels
122 | # create result table
123 | table = PrettyTable()
124 | table.title = f"GV-Bench:{config.data.name}"
125 | table.field_names = ["Matcher", "mAP", "Max Recall@1.0"]
126 |
127 | # Check if the file exists and write headers only once
128 | exp_log = config.exp_log
129 | try:
130 | with open(exp_log, "x") as file: # "x" mode creates the file; raises an error if it exists
131 | headers = "| " + " | ".join(table.field_names) + " |" # Format the headers
132 | file.write(headers + "\n") # Write headers
133 | file.write("-" * len(headers) + "\n") # Optional: Add a separator
134 | except FileExistsError:
135 | pass # File already exists, so we skip writing headers
136 |
137 | # matching loop
138 | for matcher in config.matcher:
139 | assert matcher in available_models, f"Invalid model name. Choose from {available_models}"
140 | print(f"Running {matcher}...")
141 | # load matcher
142 | if torch.cuda.is_available():
143 | model = get_matcher(matcher, device='cuda', ransac_kwargs=ransac_kwargs)
144 | else:
145 | raise ValueError('No GPU available')
146 | # compute scores
147 | scores = match(model, gvbench_loader, image_size=(config.data.image_height, config.data.image_width))
148 | mAP, MaxR = eval(scores, labels)
149 |
150 | # write to log
151 | table.add_row([matcher, mAP, MaxR])
152 | # Append the new row to the file
153 | with open(exp_log, "a") as file: # Open in append mode
154 | row = table._rows[-1] # Get the last row added
155 | formatted_row = "| " + " | ".join(map(str, row)) + " |" # Format the row
156 | file.write(formatted_row + "\n") # Write the formatted row
157 |
158 | # print result
159 | print(table)
160 |
161 | if __name__ == "__main__":
162 | # parser
163 | cfg = parser()
164 | main(cfg)
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | # usage: pip install -r requirements.txt
2 | torch
3 | torchvision
4 | opencv-python
5 | matplotlib
6 | kornia
7 | einops
8 | transforms3d
9 | h5py
10 | kornia_moons
11 | yacs
12 | gdown>=5.1.0
13 | tables
14 | imageio
15 | vispy
16 | pyglet==1.5.28
17 | tensorboard
18 | scipy
19 | trimesh
20 | e2cnn
21 | scikit-learn
22 | scikit-image
23 | tqdm
24 | py3_wget
25 | roma
26 | loguru
27 | timm
28 | omegaconf
29 | prettytable
30 |
--------------------------------------------------------------------------------
/scripts/evaluation.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
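# Example: bash scripts/evaluation.sh 0 config/day.yaml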
3 | # Check if the correct number of arguments is provided
4 | if [ "$#" -ne 2 ]; then
5 |     echo "Usage: $0 <cuda_visible_device> <config_path>"
6 | exit 1
7 | fi
8 |
9 | # Assign input arguments to variables
10 | cuda_visible_device=$1
11 | config_path=$2
12 |
13 | # Export the CUDA_VISIBLE_DEVICES environment variable
14 | export CUDA_VISIBLE_DEVICES=$cuda_visible_device
15 |
16 | # Run the Python script with the provided config path
17 | python main.py "$config_path"
18 |
--------------------------------------------------------------------------------