<div align="center">

  <img src="assets/logo.png" alt="ReCamDriving Logo">

  <h1>ReCamDriving: LiDAR-Free Camera-Controlled Novel Trajectory Video Generation</h1>
  Yaokun Li<sup>1</sup>,
  Shuaixian Wang<sup>1,3</sup>,
  Mantang Guo<sup>2</sup>,
  Jiehui Huang<sup>4</sup>,
  Taojun Ding<sup>2</sup>
  <br>
  Mu Hu<sup>4</sup>,
  Kaixuan Wang<sup>2</sup>,
  Shaojie Shen<sup>4</sup>,
  Guang Tan<sup>1</sup>

  <sup>1</sup> Sun Yat-sen University&nbsp;&nbsp;&nbsp;&nbsp;
  <sup>2</sup> ZYT&nbsp;&nbsp;&nbsp;&nbsp;
  <sup>3</sup> Shenzhen Polytechnic University
  <br>
  <sup>4</sup> The Hong Kong University of Science and Technology

  <a href="https://arxiv.org/abs/2512.03621"><img src="https://img.shields.io/badge/arXiv-2512.03621-b31b1b.svg" alt="arXiv"></a>

</div>

## 📷 Abstract

We propose **ReCamDriving**, a purely vision-based, camera-controlled novel-trajectory video generation framework. While repair-based methods fail to restore complex artifacts and LiDAR-based approaches rely on sparse and incomplete cues, ReCamDriving leverages dense and scene-complete **3DGS renderings** for explicit geometric guidance, achieving precise camera-controllable generation. To mitigate overfitting to restoration behaviors when conditioned on 3DGS renderings, ReCamDriving adopts a **two-stage training paradigm**: the first stage uses camera poses for coarse control, while the second stage incorporates 3DGS renderings for fine-grained viewpoint and geometric guidance. Furthermore, we present a **3DGS-based cross-trajectory data curation strategy** to eliminate the train–test gap in camera transformation patterns, enabling scalable multi-trajectory supervision from monocular videos. Based on this strategy, we construct the **ParaDrive** dataset, containing over 110K parallel-trajectory video pairs. Extensive experiments demonstrate that ReCamDriving achieves state-of-the-art camera controllability and structural consistency.
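The official implementation is not yet released (see the TODO list below), so the following is only a minimal sketch of the two-stage paradigm described above. Every name in it is an assumption made for illustration: the model interface, the batch fields (`source_video`, `novel_poses`, `gs_renderings`, `novel_video`), and the simple reconstruction loss are hypothetical placeholders, not the actual ReCamDriving API.

```python
# Hypothetical sketch of the coarse-to-fine two-stage training schedule.
# The real ReCamDriving code is unreleased; every identifier below is a
# placeholder chosen for illustration.
import torch.nn.functional as F

def train_stage(model, loader, optimizer, use_gs_renderings, num_steps):
    """Stage 1 (use_gs_renderings=False): coarse control from camera poses.
    Stage 2 (use_gs_renderings=True): additionally condition on 3DGS
    renderings of the novel trajectory for fine-grained geometric guidance."""
    model.train()
    for step, batch in enumerate(loader):
        if step >= num_steps:
            break
        cond = {"camera_poses": batch["novel_poses"]}  # used in both stages
        if use_gs_renderings:
            # Dense, scene-complete cue: the scene's 3DGS reconstruction
            # rendered along the target (novel) trajectory.
            cond["gs_renderings"] = batch["gs_renderings"]
        pred = model(batch["source_video"], **cond)
        loss = F.mse_loss(pred, batch["novel_video"])  # placeholder objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Coarse-to-fine schedule: camera poses only, then poses + 3DGS renderings.
# train_stage(model, paradrive_loader, opt, use_gs_renderings=False, num_steps=...)
# train_stage(model, paradrive_loader, opt, use_gs_renderings=True,  num_steps=...)
```

Per the abstract, running the pose-only stage first is what keeps the model from overfitting to restoration behaviors once the 3DGS renderings are introduced as conditions.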

<div align="center">
  <img src="assets/teaser.png" alt="ReCamDriving Teaser Image: Comparison of novel-trajectory generation methods">
</div>

Comparison of novel-trajectory generation. Repair-based methods (e.g., Difix3D+) suffer from severe artifacts under novel viewpoints, while LiDAR-based camera-controlled methods (e.g., StreetCrafter) show geometric inconsistencies in occluded or distant regions due to incomplete cues. In contrast, ReCamDriving employs a coarse-to-fine two-stage training strategy that leverages dense scene-structure information from novel-trajectory 3DGS renderings for precise camera control and structurally consistent generation.

## 🌐 ParaDrive Dataset

Based on our data curation strategy, we constructed the **ParaDrive** dataset, which contains **over 110K parallel-trajectory video pairs**, enabling scalable multi-trajectory supervision.

## ✅ TODO List

We are finalizing the release of the code and data and aim to complete it as soon as possible. Please stay tuned!

- [x] Paper released on arXiv.
- [ ] Release training and inference code.
- [ ] Release model weights.
- [ ] Release ParaDrive dataset.

## 🔗 Citation

If you find our work helpful, please consider citing:

```bibtex
@misc{li2025recamdrivinglidarfreecameracontrollednovel,
  title={ReCamDriving: LiDAR-Free Camera-Controlled Novel Trajectory Video Generation},
  author={Yaokun Li and Shuaixian Wang and Mantang Guo and Jiehui Huang and Taojun Ding and Mu Hu and Kaixuan Wang and Shaojie Shen and Guang Tan},
  year={2025},
  eprint={2512.03621},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.03621},
}
```