├── .gitignore
├── README.md
├── _config.yml
├── cezanne_watermelon_and_pomegranates.jpg
├── dataset
│   ├── digital_affinebreaks.txt
│   ├── digital_truestarts.txt
│   └── digital_vid_caches_minimal.zip
├── index.md
├── make_timelapse.py
├── src
│   ├── TimelapseFramesPredictor.py
│   ├── TimelapseSequence.py
│   ├── Ubuntu-M.ttf
│   ├── __init__.py
│   ├── config
│   │   ├── __init__.py
│   │   ├── digital_configs.py
│   │   └── watercolors_configs.py
│   ├── dataset
│   │   ├── __init__.py
│   │   ├── dataset_utils.py
│   │   ├── datasets.py
│   │   ├── frame_filter_utils.py
│   │   ├── preprocessors.py
│   │   └── scripts
│   │       └── preprocess_digital_painting_vids.ipynb
│   ├── experiment_base.py
│   ├── experiment_engine.py
│   ├── main.py
│   ├── metrics.py
│   ├── networks
│   │   ├── WGAN.py
│   │   ├── __init__.py
│   │   ├── cvae_class.py
│   │   ├── network_modules.py
│   │   └── network_wrappers.py
│   ├── sequence_extractor.py
│   └── utils
│       ├── __init__.py
│       ├── network_utils.py
│       ├── utils.py
│       └── vis_utils.py
└── trained_models
    ├── ours_digital_and_watercolor_epoch400.h5
    ├── ours_digital_epoch300.h5
    └── ours_watercolor_epoch300.h5

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

*.pyc

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
## timecraft
A learning-based method for synthesizing time lapse videos of paintings. This work was presented at CVPR 2020. This repository contains the authors' implementation, as discussed in the paper.

If you use our code, please cite:

**Painting Many Pasts: Synthesizing Time Lapse Videos of Paintings**
[Amy Zhao](https://people.csail.mit.edu/xamyzhao), [Guha Balakrishnan](https://people.csail.mit.edu/balakg/), [Kathleen M. Lewis](https://katiemlewis.github.io/), [Fredo Durand](https://people.csail.mit.edu/fredo), [John Guttag](https://people.csail.mit.edu/guttag), [Adrian V. Dalca](https://adalca.mit.edu)
CVPR 2020.
[eprint arXiv:2001.01026](https://arxiv.org/abs/2001.01026)

# Getting started
## Prerequisites
To run this code, you will need:
* Python 3.6+ (Python 2.7 may work but has not been tested)
* CUDA 10.0+
* TensorFlow 1.13.1 and Keras 2.2.4
* 1-2 GPUs, each with 12 GB of memory

## Creating time lapse samples
Use the provided script to run our trained model and synthesize a time lapse for a given input image.
```
python make_timelapse.py cezanne_watermelon_and_pomegranates.jpg
```
If you get any interesting or fun results, please let me know on [twitter](https://twitter.com/AmyZhaoMIT)!

## Downloading the dataset
We have organized the dataset into [pickle](https://docs.python.org/3/library/pickle.html) files.
Each file contains information from a single time lapse video, and has the keys:
* `vid_name`: A shortened name in the rough format \
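Once downloaded, each per-video pickle file can be read with Python's standard library. A minimal sketch: the file path and the record contents below are illustrative, and only the `vid_name` key is documented above.

```python
import pickle

def load_video_record(path):
    """Load a single per-video record from a dataset pickle file."""
    with open(path, 'rb') as f:
        return pickle.load(f)

# Illustrative round-trip: write a record with the documented key, then read it back.
record = {'vid_name': 'artist_title'}
with open('example_vid.pkl', 'wb') as f:
    pickle.dump(record, f)

loaded = load_video_record('example_vid.pkl')
print(loaded['vid_name'])  # artist_title
```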