
World-Consistent Video-to-Video Synthesis

Arun Mallya*
Ting-Chun Wang*
Karan Sapra
Ming-Yu Liu
NVIDIA
Published at the European Conference on Computer Vision, 2020
Paper (arXiv) | Paper (embedded videos) | Code (GitHub)


We present a GAN-based approach to generate 2D world renderings that are consistent over time and viewpoints, which was not possible with prior approaches. Our method colors the 3D point cloud of the world as the camera moves through it, coloring new regions in a manner consistent with the already-colored world. It learns to render images from the 2D projections of the point cloud onto the camera in a semantically consistent manner, while robustly handling incorrect and incomplete point clouds. Our proposed approach further narrows the gap between classical graphics rendering and neural rendering.
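To make the pipeline above concrete, here is a minimal NumPy sketch of the classical-graphics half of the method: projecting an already-colored 3D point cloud into a new camera view to produce a sparse guidance image. The function name, its arguments, and the sort-based z-buffer are illustrative assumptions, not the authors' implementation; the paper's neural renderer then consumes such projections and must fill holes and correct errors, which is the "incorrect and incomplete point clouds" part of the abstract.

```python
import numpy as np

def project_point_cloud(points, colors, K, R, t, height, width):
    """Project a colored 3D point cloud into a camera view (pinhole model).

    points : (N, 3) world-space XYZ coordinates
    colors : (N, 3) RGB colors in [0, 1] (the already-colored world regions)
    K      : (3, 3) camera intrinsics
    R, t   : world-to-camera rotation (3, 3) and translation (3,)
    Returns an (H, W, 3) guidance image: projected colors where points are
    visible, zeros where the point cloud gives no information.
    """
    # Transform points to camera coordinates; keep only points in front.
    cam = points @ R.T + t
    in_front = cam[:, 2] > 1e-6
    cam, cols = cam[in_front], colors[in_front]

    # Perspective projection to pixel coordinates.
    uvw = cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z, cols = u[valid], v[valid], cam[valid, 2], cols[valid]

    # Z-buffer by sorting far-to-near: later (nearer) writes overwrite
    # earlier (farther) ones, so each pixel keeps its closest point.
    order = np.argsort(-z)
    image = np.zeros((height, width, 3), dtype=np.float32)
    image[v[order], u[order]] = cols[order]
    return image
```

Because the projected image is sparse and may contain reprojection errors, it serves only as guidance: the generator is trained to produce a dense, photorealistic frame that agrees with these projected colors wherever they exist, which is what enforces consistency across time and viewpoints.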

Colorization of the world's 3D point cloud
Simultaneously rendered 2D output
