├── .gitignore
├── LICENSE
├── README.md
├── assets
│   └── videos.gif
├── configs
│   ├── config_pnp.yaml
│   └── config_sdedit.yaml
├── data
│   ├── wolf.mp4
│   └── woman-running.mp4
├── preprocess.py
├── requirements.txt
├── run_tokenflow_pnp.py
├── run_tokenflow_sdedit.py
├── tokenflow_utils.py
└── util.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
.DS_Store

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2023 Michal Geyer, Omer Bar-Tal, Shai Bagon, Tali Dekel

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# TokenFlow: Consistent Diffusion Features for Consistent Video Editing (ICLR 2024)
## [Project Page](https://diffusion-tokenflow.github.io)

[![arXiv](https://img.shields.io/badge/arXiv-TokenFlow-b31b1b.svg)](https://arxiv.org/abs/2307.10373) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/weizmannscience/tokenflow)
![Pytorch](https://img.shields.io/badge/PyTorch->=1.10.0-Red?logo=pytorch)

https://github.com/omerbt/TokenFlow/assets/52277000/93dccd63-7e9a-4540-a941-31962361b0bb

**TokenFlow** is a framework that enables consistent video editing using a pre-trained text-to-image diffusion model, without any further training or finetuning.

>The generative AI revolution has recently been expanded to videos. Nevertheless, current state-of-the-art video models are still lagging behind image models in terms of visual quality and user control over the generated content. In this work, we present a framework that harnesses the power of a text-to-image diffusion model for the task of text-driven video editing. Specifically, given a source video and a target text-prompt, our method generates a high-quality video that adheres to the target text, while preserving the spatial layout and dynamics of the input video. Our method is based on our key observation that consistency in the edited video can be obtained by enforcing consistency in the diffusion feature space. We achieve this by explicitly propagating diffusion features based on inter-frame correspondences, readily available in the model. Thus, our framework does not require any training or fine-tuning, and can work in conjunction with any off-the-shelf text-to-image editing method. We demonstrate state-of-the-art editing results on a variety of real-world videos.
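The key mechanism described above, enforcing consistency by propagating diffusion features along inter-frame correspondences, can be illustrated with a short sketch. The snippet below is **not** the repository's implementation (see `tokenflow_utils.py` for that); it is a minimal illustration in which edited keyframe tokens are propagated to every frame via cosine-similarity nearest neighbors, and the function name and tensor layout are assumptions made for the example:

```python
import torch
import torch.nn.functional as F

def propagate_keyframe_tokens(key_feats: torch.Tensor,
                              frame_feats: torch.Tensor) -> torch.Tensor:
    """Replace each frame's tokens with their nearest edited-keyframe tokens.

    key_feats:   (k, c, h, w) diffusion features of the edited keyframes
    frame_feats: (n, c, h, w) diffusion features of the source frames
    returns:     (n, c, h, w) propagated features
    """
    k, c, h, w = key_feats.shape
    key_tokens = key_feats.permute(0, 2, 3, 1).reshape(-1, c)   # (k*h*w, c)
    key_norm = F.normalize(key_tokens, dim=-1)

    out = []
    for feats in frame_feats:                                   # (c, h, w)
        tokens = feats.permute(1, 2, 0).reshape(-1, c)          # (h*w, c)
        sim = F.normalize(tokens, dim=-1) @ key_norm.T          # cosine similarity
        nn_idx = sim.argmax(dim=-1)                             # correspondence per token
        out.append(key_tokens[nn_idx].reshape(h, w, c).permute(2, 0, 1))
    return torch.stack(out)
```

In the paper, the correspondences are extracted from the source video's own features and then used to combine tokens from edited keyframes; the single nearest-neighbor lookup above is a deliberate simplification of that idea.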
For more, see the [project webpage](https://diffusion-tokenflow.github.io).

## Sample results

![Sample results](assets/videos.gif)

## Environment
```
conda create -n tokenflow python=3.9
conda activate tokenflow
pip install -r requirements.txt
```
## Preprocess

Preprocess your video by running the following command:
```
python preprocess.py --data_path <path to your video> \
                     --inversion_prompt <'' or a string describing the video content>
```
Additional arguments:
```
--save_dir <save directory>
--H