### Overview

### Visual
Generated audio-video examples on the Landscape dataset:

https://user-images.githubusercontent.com/105475691/207589456-52914a01-1175-4f77-b8f5-112d97013f7c.mp4

Generated audio-video examples on the AIST++ dataset:

https://user-images.githubusercontent.com/105475691/207589611-fe300424-e5e6-4379-a917-d9a07e9dd8fb.mp4

Generated audio-video examples on AudioSet:

https://user-images.githubusercontent.com/105475691/207589639-0a371435-f207-4ff4-a78e-3e9c0868d523.mp4

## Requirements and dependencies
* Python 3.8 (we recommend [Anaconda](https://www.anaconda.com/))
* PyTorch >= 1.11.0
```
# clone the repository
git clone https://github.com/researchmm/MM-Diffusion.git
cd MM-Diffusion

# create and activate the conda environment
conda create -n mmdiffusion python=3.8
conda activate mmdiffusion

# install PyTorch with CUDA 11.6 and the remaining dependencies
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch-nightly -c nvidia
conda install mpi4py
pip install -r requirement.txt
```
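
After installation, a quick sanity check (a minimal sketch; this is plain PyTorch, not a script from this repo) confirms that PyTorch and CUDA are visible:
```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```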
## Models
Pre-trained models can be downloaded from [Google Drive](https://drive.google.com/drive/folders/1Mno4A3BUXELAdX4m650CJ1VfuMVlkz5p?usp=share_link) or [Baidu Cloud](https://pan.baidu.com/s/1vJIZCHBVlmcq9np1ytstbQ?pwd=vqon).
* *Landscape.pt*: trained on the Landscape dataset to generate audio-video pairs.
* *Landscape_SR.pt*: trained on the Landscape dataset to upsample frames from resolution 64x64 to 256x256.
* *AIST++.pt*: trained on the AIST++ dataset to generate audio-video pairs.
* *AIST++_SR.pt*: trained on the AIST++ dataset to upsample frames from resolution 64x64 to 256x256.
* *guided-diffusion_64_256_upsampler.pt*: from [guided-diffusion](https://github.com/openai/guided-diffusion), used as the initialization of the image SR model.
* *i3d_pretrained_400.pt*: model for evaluating videos (FVD and KVD); manually download it to `~/.cache/mmdiffusion/` if the automatic download fails.
* *AudioCLIP-Full-Training.pt*: model for evaluating audio (FAD); manually download it to `~/.cache/mmdiffusion/` if the automatic download fails.
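
If the automatic download of the two evaluation models fails, a minimal sketch of the manual fallback (the `~/Downloads` source path is illustrative):
```
mkdir -p ~/.cache/mmdiffusion
mv ~/Downloads/i3d_pretrained_400.pt ~/.cache/mmdiffusion/
mv ~/Downloads/AudioCLIP-Full-Training.pt ~/.cache/mmdiffusion/
```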
## Datasets
1. Landscape
2. AIST++_crop

The datasets can be downloaded from [Google Drive](https://drive.google.com/drive/folders/14A1zaQI5EfShlv3QirgCGeNFzZBzQ3lq?usp=sharing) or [Baidu Cloud](https://pan.baidu.com/s/1CRUSpUzdATIN7Jt8aNDaUw?pwd=fec8).
We use only the training set, for both training and evaluation.

You can also run our scripts on your own dataset by providing the path to a directory containing your videos; the scripts will pick up every video under that path, regardless of how the files are organized. See the sketch below.
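
For example, a layout like the following (directory and file names are purely illustrative) would work:
```
my_dataset/
├── beach/clip_0001.mp4
├── beach/clip_0002.mp4
└── forest/clip_0042.mp4
```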

## Test

1. Download the pre-trained checkpoints.
2. Download the datasets: Landscape or AIST++_crop.
3. Modify the relevant paths, then run the generation script to produce audio-video pairs (a sketch of typical path variables follows the command).
```
bash ssh_scripts/multimodal_sample_sr.sh
```
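
A hypothetical excerpt of the variables you might edit in `ssh_scripts/multimodal_sample_sr.sh`; the names and paths here are illustrative assumptions, not the script's actual contents:
```
# illustrative only: point the script at your checkpoints and output directory
MODEL_PATH=./models/Landscape.pt         # base audio-video model
SR_MODEL_PATH=./models/Landscape_SR.pt   # 64x64 -> 256x256 upsampler
OUTPUT_DIR=./outputs/landscape
```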
4. Modify `REF_DIR`, `SAMPLE_DIR`, and `OUTPUT_DIR`, then run the evaluation script (a sketch of the three variables follows the command).
```
bash ssh_scripts/multimodal_eval.sh
```
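
The three variables from step 4 might be set like this (the values are illustrative assumptions):
```
# illustrative only
REF_DIR=./data/landscape/train       # ground-truth reference videos
SAMPLE_DIR=./outputs/landscape       # generated audio-video samples
OUTPUT_DIR=./outputs/landscape_eval  # where evaluation results are written
```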

## Train

1. Download and prepare the training datasets: Landscape or AIST++_crop (see [Datasets](#datasets) above).
2. Run the training scripts:
```
# Train the base model
bash ssh_scripts/multimodal_train.sh

# Train the upsampler from 64x64 -> 256x256; first extract the videos into
# frames for SR training (see the sketch after this block)
bash ssh_scripts/image_sr_train.sh
```
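
A minimal sketch of the frame-extraction step using `ffmpeg`; the paths and naming scheme are illustrative assumptions, not the repo's actual preprocessing:
```
# illustrative only: dump each training video to numbered PNG frames
for v in ./data/landscape/train/*.mp4; do
  name=$(basename "$v" .mp4)
  mkdir -p "./data/landscape_frames/$name"
  ffmpeg -i "$v" "./data/landscape_frames/$name/%05d.png"
done
```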

## Conditional Generation
```
# zero-shot conditional generation: audio-to-video
bash ssh_scripts/audio2video_sample_sr.sh

# zero-shot conditional generation: video-to-audio
bash ssh_scripts/video2audio_sample.sh
```
## Related projects
We also sincerely recommend these excellent related works. :sparkles:
* [Diffusion Models Beat GANs on Image Synthesis](https://github.com/openai/guided-diffusion)
* [AudioCLIP: Extending CLIP to Image, Text and Audio](https://github.com/AndreyGuzhov/AudioCLIP)
* [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://github.com/LuChengTHU/dpm-solver)

## Citation
If you find our work useful for your research, please consider citing our paper. :blush:
```
@inproceedings{ruan2022mmdiffusion,
  author    = {Ruan, Ludan and Ma, Yiyang and Yang, Huan and He, Huiguo and Liu, Bei and Fu, Jianlong and Yuan, Nicholas Jing and Jin, Qin and Guo, Baining},
  title     = {MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation},
  year      = {2023},
  booktitle = {CVPR},
}
```

## Contact
If you run into any problems, please describe them in an issue or contact:
* Ludan Ruan: