├── Curation_Pipeline.md
├── README.md
├── WebVi3D
│   ├── examples
│   │   ├── 000b0a3dd5d6827172e323ab4fc9232db10bec74.mp4
│   │   ├── 0b7b58a8bf4200d835dc459bbe1af25bf3f917cc_3924_4085.mp4
│   │   ├── 1af3278b744dfc772f43bb550d8dbada93fae2c6.mp4
│   │   ├── 1b2590e9aeecd93c5926eea76c4a8dbddcebe980.mp4
│   │   ├── 1b37bac6aae4142fdda0146ce5e1ead300ec1830.mp4
│   │   ├── 1b8090b14fa7b90807c261c12e3078379f5419cb.mp4
│   │   ├── 2b686e7afc43f5fc571d97601ae11c80b01d2228.mp4
│   │   ├── 36d4e80bdfe3f59f5281a3126956291879dcb436.mp4
│   │   ├── 4a08ffee34fcd0ac524ef723acead61262b65d32.mp4
│   │   ├── 94f90e9aad920e534127d112fa24f9a8ab43bd83.mp4
│   │   ├── a539c32a2ce9cf79e893a1d30fb1572721b81b5d.mp4
│   │   └── a9cf7f4fda030ae0c13da01d08162412389ce84e.mp4
│   ├── requirements.txt
│   ├── s1_downsample.py
│   ├── s2_generate_flow_earlycheck.py
│   ├── s3_generate_mask.py
│   ├── s4_filter.py
│   ├── utils
│   │   ├── RAFT
│   │   │   ├── __init__.py
│   │   │   ├── corr.py
│   │   │   ├── datasets.py
│   │   │   ├── demo.py
│   │   │   ├── extractor.py
│   │   │   ├── raft.py
│   │   │   ├── update.py
│   │   │   └── utils
│   │   │       ├── __init__.py
│   │   │       ├── augmentor.py
│   │   │       ├── flow_viz.py
│   │   │       ├── frame_utils.py
│   │   │       └── utils.py
│   │   └── flow_utils.py
│   └── weights
│       ├── raft-small.pth
│       └── raft-things.pth
├── assets
│   ├── logo.jpg
│   ├── method.jpg
│   └── teaser.jpg
├── inference.py
├── mv_diffusion.py
├── mv_diffusion_SR.py
├── mv_unet.py
├── pipeline_mvd_warp_mix_classifier.py
├── pipeline_mvd_warp_mix_classifier_SR.py
├── requirements.txt
├── single_infer.sh
└── sparse_infer.sh

/Curation_Pipeline.md:
--------------------------------------------------------------------------------

# Curation Pipeline

Our WebVi3D curation pipeline consists of four steps:

- Temporal-Spatial Downsampling
- Semantic-Based Dynamic Recognition
- Dynamic Filtering
- Tracking-Based Small Viewpoint Filtering

## Running

We provide several unfiltered video samples and demo code for the data curation pipeline, which you can apply to your own video dataset.
Note that you can adapt the code for parallel processing to speed up curation on larger-scale datasets, and you can adjust the parameters of each step to suit your own application.

First, install the extra dependencies:

```sh
cd ./WebVi3D
pip install -r requirements.txt
```

### Downsampling and Dynamic Recognition

You can run the pipeline directly on the provided video samples:

```sh
python s1_downsample.py --input_dir examples --output_dir examples_vid/step1+2_outputs --txt examples_txt/all_video_list.txt
```

To process your own video dataset, run:

```sh
python s1_downsample.py --input_dir
```
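Since the provided scripts process videos one at a time, the parallel processing mentioned above can be sketched as follows. This is a minimal illustration only, not the repository's code: `process_video` is a hypothetical stand-in for the per-video work that `s1_downsample.py` performs.

```python
from multiprocessing import Pool

def process_video(path: str) -> str:
    # Hypothetical placeholder for the real per-video work
    # (downsampling, flow estimation, masking, filtering).
    return f"done: {path}"

if __name__ == "__main__":
    # In practice, read these paths from your video list file.
    video_list = ["examples/a.mp4", "examples/b.mp4", "examples/c.mp4"]
    # Fan the per-video work out across 4 worker processes.
    with Pool(processes=4) as pool:
        results = pool.map(process_video, video_list)
    print(results)
```

Sharding the video list file itself and launching one script instance per shard works equally well if you prefer not to modify the scripts.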