# Multi-label Stereo Matching for Transparent Scene Depth Estimation

![intro](./img/intro.png "intro")

## News

- **2025.5.21**: The code will be uploaded in a few weeks.

## Installation

Our code is based on CUDA 11.7 and PyTorch 2.0.1. We recommend using Anaconda to create a new environment (note that PyTorch 2.0.1 requires Python 3.8 or newer):

```bash
conda create -n trans python=3.8
conda activate trans
```

Then, install the dependencies:

```bash
pip install -r requirements.txt
```

## Dataset

We use the [TartanAir](https://github.com/castacks/tartanair_tools), [SceneFlow](https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html), and [raw KITTI](https://github.com/youmi-zym/TemporalStereo?tab=readme-ov-file#kitti-20122015) datasets in our experiments.
Please download the datasets and organize them as follows:

```
├── datasets
    ├── sceneflow
        ├── driving
        │   ├── disparity
        │   ├── frames_cleanpass
        │   └── frames_finalpass
        ├── flying3d
        │   ├── disparity
        │   ├── frames_cleanpass
        │   └── frames_finalpass
        └── monkaa
            ├── disparity
            ├── frames_cleanpass
            └── frames_finalpass
    ├── TranScene
        ├── left
        │   ├── disparity
        │   ├── disparity_without_trans
        │   └── img
        └── right
```

## Checkpoints

To Do.

## Evaluation

Before evaluation, please download the checkpoints and put them in the `./checkpoints` directory.

You can evaluate the pre-trained models on TranScene by running the following script:

```bash
bash trans_evaluate.sh
```

## Training

For the SceneFlow dataset, you can train the model by running the following script:

```bash
bash sceneflow_train.sh
```

For the TranScene dataset, you can train the model by running the following script:

```bash
bash trans_train.sh
```

## Acknowledgement

Our code is based on [RAFT-Stereo](https://github.com/princeton-vl/RAFT-Stereo). We thank the authors for their great work.
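The dataset layout shown in the Dataset section can be sanity-checked before training with a short script. This is only a sketch, not part of the released code: the `datasets` root location and the helper name `check_layout` are assumptions, so adjust the paths to wherever you actually placed the data.

```python
import os

# Directory paths taken from the layout in the Dataset section of this README.
# The "datasets" root is assumed to sit under the current working directory.
EXPECTED = [
    "datasets/sceneflow/driving/disparity",
    "datasets/sceneflow/driving/frames_cleanpass",
    "datasets/sceneflow/driving/frames_finalpass",
    "datasets/sceneflow/flying3d/disparity",
    "datasets/sceneflow/flying3d/frames_cleanpass",
    "datasets/sceneflow/flying3d/frames_finalpass",
    "datasets/sceneflow/monkaa/disparity",
    "datasets/sceneflow/monkaa/frames_cleanpass",
    "datasets/sceneflow/monkaa/frames_finalpass",
    "datasets/TranScene/left/disparity",
    "datasets/TranScene/left/disparity_without_trans",
    "datasets/TranScene/left/img",
    "datasets/TranScene/right",
]

def check_layout(root="."):
    """Return the expected dataset directories that are missing under root."""
    return [p for p in EXPECTED if not os.path.isdir(os.path.join(root, p))]

if __name__ == "__main__":
    missing = check_layout()
    if missing:
        print("Missing directories:")
        for p in missing:
            print("  " + p)
    else:
        print("Dataset layout looks complete.")
```

Running it from the directory that contains `datasets` prints any subdirectory that is absent, which catches typos (e.g. a misspelled `disparity` folder) before a training script fails partway through.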