# SurfaceNet+
- An End-to-end 3D Neural Network for Very Sparse MVS.
  * TPAMI 2020 [early access link](https://ieeexplore.ieee.org/document/9099504).
  * or the [preprint version](https://www.researchgate.net/publication/341647549_SurfaceNet_An_End-to-end_3D_Neural_Network_for_Very_Sparse_Multi-view_Stereopsis/figures) on ResearchGate.
- **Key contributions**
  1. Proposed a Sparse-MVS benchmark (under construction).
     * Comprehensive evaluation on the datasets: [DTU](http://roboimagedata.compute.dtu.dk/?page_id=36), [Tanks and Temples](https://www.tanksandtemples.org/), etc.
  2. Proposed a **trainable occlusion-aware** view selection scheme for volumetric MVS methods, e.g., [SurfaceNet](https://github.com/mjiUST/SurfaceNet) [5].
  3. Analysed the advantages of volumetric methods, e.g., [SurfaceNet](https://github.com/mjiUST/SurfaceNet) [5] and SurfaceNet+, on the **Sparse-MVS problem** over depth-fusion methods, e.g., [Gipuma](https://github.com/kysucix/gipuma) [6], [R-MVSNet](https://github.com/YoYo000/MVSNet) [7], [Point-MVSNet](https://github.com/callmeray/PointMVSNet) [8], and [COLMAP](https://github.com/colmap/colmap) [9].

# [Sparse-MVS Benchmark](http://sparse-mvs.com)


## (1) [Sparse-MVS of the DTU dataset](http://sparse-mvs.com/leaderboard.html)

**Fig.1**: Illustration of a very sparse MVS setting using only $1/7$ of the camera views, i.e., $\{v_i\}_{i=1,8,15,22,...}$, to recover the model 23 in the DTU dataset [10]. Compared with the state-of-the-art methods, the proposed SurfaceNet+ provides a much more complete reconstruction, especially around the border regions captured by very sparse views.


**Fig.2**: Comparison with the existing methods on the DTU dataset [10] under different sparse sampling strategies. When Sparsity = 3 and Batchsize = 2, the chosen camera indices are 1,2 / 4,5 / 7,8 / 10,11 / .... SurfaceNet+ consistently outperforms the state-of-the-art methods at all settings, especially in the very sparse scenarios.
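The batch-sampling rule above can be sketched in a few lines of Python. This is a minimal illustration of the index pattern described in the caption; `sampled_views` is a hypothetical helper, not part of the released code:

```python
def sampled_views(num_views, sparsity, batchsize):
    """Select 1-based camera indices: take `batchsize` consecutive views,
    then jump ahead so that consecutive batch starts are `sparsity` apart."""
    chosen = []
    start = 1
    while start + batchsize - 1 <= num_views:
        chosen.extend(range(start, start + batchsize))
        start += sparsity
    return chosen

# Sparsity = 3, Batchsize = 2 over 11 views -> 1,2 / 4,5 / 7,8 / 10,11
print(sampled_views(11, 3, 2))
```

With `sparsity=7, batchsize=1`, the same rule yields the $1/7$ sampling $\{1, 8, 15, 22, ...\}$ used in Fig.1.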

## (2) [Sparse-MVS of the T&T dataset](http://sparse-mvs.com/leaderboard.html)

**Fig.3**: Results on a tank model from the Tanks and Temples 'intermediate' set [23], compared with R-MVSNet [7] and COLMAP [9], demonstrating the high-recall predictions of SurfaceNet+ in the sparse-MVS setting.

# Citing

If you find SurfaceNet+, the Sparse-MVS benchmark, or [SurfaceNet](https://github.com/mjiUST/SurfaceNet) useful in your research, please consider citing:

    @article{ji2020surfacenet_plus,
      title={SurfaceNet+: An End-to-end 3D Neural Network for Very Sparse Multi-view Stereopsis},
      author={Ji, Mengqi and Zhang, Jinzhi and Dai, Qionghai and Fang, Lu},
      journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
      year={2020},
      publisher={IEEE}
    }

    @inproceedings{ji2017surfacenet,
      title={SurfaceNet: An End-To-End 3D Neural Network for Multiview Stereopsis},
      author={Ji, Mengqi and Gall, Juergen and Zheng, Haitian and Liu, Yebin and Fang, Lu},
      booktitle={Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
      pages={2307--2315},
      year={2017}
    }