├── images
│   └── MSP-MVS-pipeline.png
└── README.md

/images/MSP-MVS-pipeline.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ZhenlongYuan/MSP-MVS/HEAD/images/MSP-MVS-pipeline.png
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# MSP-MVS

Zhenlong Yuan, Cong Liu, Fei Shen, Zhaoxin Li, Jinguo Luo, Tianlu Mao and Zhaoqi Wang, [**MSP-MVS: Multi-Granularity Segmentation Prior Guided Multi-View Stereo**](https://arxiv.org/pdf/2407.19323), AAAI 2025.

![](images/MSP-MVS-pipeline.png)

## About

MSP-MVS combines the multi-granularity segmentation model **Semantic-SAM** with a multi-view stereo (**MVS**) algorithm to address the patch deformation instability of PatchMatch-based MVS.

Our paper was accepted by **AAAI 2025**!

If you find this project useful for your research, please cite:

```
@inproceedings{yuan2025msp,
  title={MSP-MVS: Multi-granularity segmentation prior guided multi-view stereo},
  author={Yuan, Zhenlong and Liu, Cong and Shen, Fei and Li, Zhaoxin and Luo, Jinguo and Mao, Tianlu and Wang, Zhaoqi},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={9},
  pages={9753--9762},
  year={2025}
}
```

## Dependencies

The code has been tested on Ubuntu 18.04 with an Nvidia Titan RTX; you can modify CMakeLists.txt to compile on Windows.
* [CUDA](https://developer.nvidia.cn/zh-cn/cuda-toolkit) >= 10.2
* [OpenCV](https://opencv.org/) >= 3.3.0
* [Boost](https://www.boost.org/) >= 1.62.0
* [CMake](https://cmake.org/) >= 2.8

**Besides, make sure that your [GPU Compute Capability](https://en.wikipedia.org/wiki/CUDA) matches the setting in CMakeLists.txt!** Otherwise you won't get any depth results. For example, according to the [GPU Compute Capability](https://en.wikipedia.org/wiki/CUDA) list, the RTX 3080's compute capability is 8.6, so you should set the CUDA compilation parameter to `arch=compute_86,code=sm_86` or add `-gencode arch=compute_86,code=sm_86`.

## Usage

### Compile

```bash
mkdir build && cd build
cmake ..
make
```

#### ETH3D Dataset

You may download the [train](https://www.eth3d.net/data/multi_view_training_dslr_undistorted.7z) and [test](https://www.eth3d.net/data/multi_view_test_dslr_undistorted.7z) datasets from ETH3D and use the script [*colmap2mvsnet.py*](./colmap2mvsnet.py) to convert the dataset format (you may refer to [MVSNet](https://github.com/YoYo000/MVSNet#file-formats)). You can use the `--scale_factor` option of the script to generate any resolution you need:

```bash
# <dense_folder> and <save_folder> are placeholders for your input and output paths
python colmap2mvsnet.py --dense_folder <dense_folder> --save_folder <save_folder> --scale_factor 2  # half resolution
```

#### Tanks & Temples Dataset

We use the version provided by MVSNet. The dataset can be downloaded from [here](https://drive.google.com/file/d/1YArOJaX9WVLJh4757uE8AEREYkgszrCo/view), and its format is exactly what we need.

#### Other Datasets

You may explore other datasets, such as DTU and BlendedMVS, yourself. **But remember to modify the [ReadCamera](https://github.com/whoiszzj/APD-MVS/blob/d9f9731235f4db05712024213e32346b6a01f5d6/APD.cpp#L84) function when you test on DTU!**
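Whichever dataset you use, each scene folder passed to the program should follow the MVSNet-style layout that the conversion script produces. A rough sketch (the `office` scene name and file names are illustrative; see the MVSNet file-format link above for the exact conventions):

```
office/
├── images/    # undistorted input images: 00000000.jpg, 00000001.jpg, ...
├── cams/      # one camera file per image: 00000000_cam.txt, ... (extrinsics, intrinsics, depth range)
└── pair.txt   # neighbor-view selection list for each reference image
```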
### Run

After you have prepared the dataset, you can run the test for, e.g., ETH3D/office with the following command (replace `<dataset_dir>` with the path to the folder containing the converted scene):

```bash
./APD <dataset_dir>/office
```

The result will be saved in the folder office/APD, and the point cloud is saved as "APD.ply".

It is very easy to use, and you can modify our code as you need.

## Acknowledgements

This code largely benefits from the following repositories: [APD-MVS](https://github.com/whoiszzj/APD-MVS), [Semantic-SAM](https://github.com/UX-Decoder/Semantic-SAM) and [ACMMP](https://github.com/GhiXu/ACMMP.git). Thanks to their authors for open-sourcing their excellent works.
--------------------------------------------------------------------------------