├── .gitignore
├── README.md
├── White_box_Cartoonization.ipynb
├── assets
│   ├── delhi.jpg
│   ├── demo.jpg
│   ├── fedal.jpg
│   ├── flow.jpg
│   ├── food.jpg
│   ├── gates.jpg
│   ├── jobs.jpg
│   ├── merkel.jpg
│   ├── messi-ronaldo.jpg
│   ├── newyorkts.jpg
│   ├── pratap.jpg
│   ├── subway.jpg
│   ├── trump-clinton.jpg
│   └── wave.svg
├── index.html
├── models
│   └── CartoonGAN
│       ├── saved_model
│       │   ├── saved_model.pb
│       │   └── variables
│       │       ├── variables.data-00000-of-00001
│       │       └── variables.index
│       ├── web-float16
│       │   ├── group1-shard1of1.bin
│       │   └── model.json
│       ├── web-uint8
│       │   ├── group1-shard1of1.bin
│       │   └── model.json
│       └── web
│           ├── group1-shard1of2.bin
│           ├── group1-shard2of2.bin
│           └── model.json
├── script.js
└── style.css

/.gitignore:
--------------------------------------------------------------------------------
todo.md
*.zip
*.mp4
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Cartoonizer with TensorFlow.js

[Try the application][liveapp]

We used the Generative Adversarial Network (GAN) model proposed in
[Learning to Cartoonize Using White-box Cartoon Representations][cvpr2020] (CVPR 2020) by Xinrui Wang and Jinze Yu.

Our idea was to test whether it is reasonably possible to perform model inference in
browser clients with CPUs only, without sending any of the user's data (images) to servers.

**[App preview][liveapp]**: Upload an image or try the examples

[][liveapp]

Here's the application flow and architecture:

<p align="center">
  <img src="assets/flow.jpg" alt="Application flow and architecture">
</p>
All of your data stays within your browser.
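
The cartoonization itself is ordinary TensorFlow.js running in the page. The real implementation lives in `script.js` (not reproduced here); the sketch below only illustrates the basic flow, assuming the converted graph model is served from `models/CartoonGAN/web/model.json` and that the page has an image element and a canvas with the hypothetical ids `source-image` and `result-canvas`.

```js
// Minimal sketch of in-browser inference; see script.js for the real logic.
async function cartoonize() {
  // Fetch the converted graph model (path matches models/CartoonGAN/web/).
  const model = await tf.loadGraphModel('models/CartoonGAN/web/model.json');

  const output = tf.tidy(() => {
    // Read the uploaded image and map pixel values to [-1, 1]
    // (a common input range for GAN generators; verify against script.js).
    const img = tf.browser.fromPixels(document.getElementById('source-image'))
      .toFloat()
      .div(127.5)
      .sub(1)
      .expandDims(0);                  // add a batch dimension

    // Run the generator and map the result back to [0, 1] for display.
    return model.execute(img).squeeze().add(1).div(2).clipByValue(0, 1);
  });

  // Draw the cartoonized image onto a canvas; nothing leaves the browser.
  await tf.browser.toPixels(output, document.getElementById('result-canvas'));
  output.dispose();
}
```

Because the model is fetched once over HTTP and everything after that runs on the client, the uploaded image never has to be sent to a server.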

--------------------------------------------------------------------------------
/index.html:
--------------------------------------------------------------------------------
Upload an image or try the examples below

We used the Generative Adversarial Network (GAN) model proposed in
Learning to Cartoonize Using White-box Cartoon Representations (CVPR 2020) by
Xinrui Wang and Jinze Yu.
Our idea was to test whether it is reasonably possible to perform model inference in
browser clients with CPUs only, without sending any of the user's data (images) to servers.

Here's the application flow and architecture:

TensorFlow SavedModels are converted to TensorFlow.js models.
Images are padded and scaled to 256px before they are fed to the model.
This rescaling speeds up processing, but it can also reduce output quality.
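
For illustration, here is a rough sketch of that padding-and-resizing step, not the app's actual code (which lives in `script.js`); in particular, the centered letterbox padding and the use of `tf.image.resizeBilinear` are assumptions.

```js
// Sketch of the "pad and scale to 256px" preprocessing described above.
function padAndResize(imageElement, targetSize = 256) {
  return tf.tidy(() => {
    const img = tf.browser.fromPixels(imageElement).toFloat();
    const [h, w] = img.shape;          // shape is [height, width, 3]

    // Letterbox: pad the shorter side with zeros so the image becomes square.
    const size = Math.max(h, w);
    const padTop = Math.floor((size - h) / 2);
    const padLeft = Math.floor((size - w) / 2);
    const padded = img.pad([
      [padTop, size - h - padTop],
      [padLeft, size - w - padLeft],
      [0, 0],                          // leave the RGB channels untouched
    ]);

    // Downscale to the model's expected input resolution.
    return tf.image.resizeBilinear(padded, [targetSize, targetSize]);
  });
}
```

Working at 256×256 keeps the convolution cost low enough for CPU backends, which is exactly the speed-versus-quality trade-off described above.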

The model footprint is ~1.5MB. In browsers without GPU acceleration, these models can still cartoonize an image,
but processing takes anywhere between 5 and 10 seconds.
This is much slower than TFLite model performance on mobile devices.
However, web browsers have the advantage that users don't need to install anything or transmit their data outside of their systems.
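
The `models/CartoonGAN` folder above ships three converted variants: `web` (full precision), `web-float16`, and `web-uint8`; the quantized variants shrink the download at some cost in output quality. A minimal, illustrative way to pick one at load time (the paths mirror the folder layout; the selection logic itself is hypothetical, not taken from `script.js`):

```js
// Illustrative variant selection; paths match the models/CartoonGAN folders.
const MODEL_URLS = {
  full:    'models/CartoonGAN/web/model.json',         // unquantized weights
  float16: 'models/CartoonGAN/web-float16/model.json', // roughly half the weight size
  uint8:   'models/CartoonGAN/web-uint8/model.json',   // smallest download
};

async function loadCartoonizer(variant = 'uint8') {
  return tf.loadGraphModel(MODEL_URLS[variant]);
}
```

Quantization happens at conversion time, so the browser code only has to point at a different `model.json`.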
--------------------------------------------------------------------------------