├── LICENSE
├── README.assets
│   └── badge.svg
└── README.md

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2021 Deng-Ping Fan

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.assets/badge.svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# FaceSketch-Awesome-List [![Awesome](README.assets/badge.svg)](https://awesome.re)

### News

- [2021.12] **Deep Facial Synthesis: A New Challenge** is coming!
[[paper](https://arxiv.org/abs/2112.15439)] [[code](https://github.com/DengPingFan/FSGAN)] [[dataset](https://github.com/DengPingFan/FS2K)]



## Categories:

| Related Work |
| :----------------------------------------------------------- |
| [**Handcraft Feature based Facial Sketch Synthesis**](#Handcraft-Feature-based-Facial-Sketch-Synthesis) |
| [**General Neural Style Transfer**](#General-Neural-Style-Transfer) |
| [**Deep Image-to-Image Translation**](#Deep-Image-to-Image-Translation) |
| [**Deep Image-to-Sketch Synthesis**](#Deep-Image-to-Sketch-Synthesis) |

# Towards Image-to-Sketch Synthesis

## Handcraft Feature based Facial Sketch Synthesis

### 2018

- **[RS]** Random Sampling for Fast Face Sketch Synthesis (**PR**) [[paper](https://arxiv.org/pdf/1701.01911.pdf)]

### 2017

- **[AR]** Adaptive representation-based face sketch-photo synthesis (**NC**) [[paper](https://www.sciencedirect.com/science/article/abs/pii/S0925231217310032)]

- **[DSM]** Free-Hand Sketch Synthesis with Deformable Stroke Models (**IJCV**) [[paper](https://link.springer.com/content/pdf/10.1007%2Fs11263-016-0963-9.pdf)]

### 2016

- **[MR]** Multiple Representations-Based Face Sketch–Photo Synthesis (**TNNLS**) [[paper](https://ieeexplore.ieee.org/document/7244234)]

### 2015

- **[SPP]** Superpixel-Based Face Sketch–Photo Synthesis (**TCSVT**) [[paper](https://ieeexplore.ieee.org/document/7335623)]

- **[RobustStyle]** Robust Face Sketch Style Synthesis (**TIP**) [[paper](https://ieeexplore.ieee.org/document/7331298)]

### 2014

- **[REB]** Real-Time Exemplar-Based Face Sketch Synthesis (**ECCV**) [[paper](https://link.springer.com/content/pdf/10.1007/978-3-319-10599-4_51.pdf)]

### 2013

- **[SAPS]** Style and abstraction in portrait sketching (**TOG**)
[[paper](https://dl.acm.org/doi/pdf/10.1145/2461912.2461964)]
- **[FESM]** Learnable Stroke Models for Example-based Portrait Painting (**BMVC**) [[paper](http://www.bmva.org/bmvc/2013/Papers/paper0036/paper0036.pdf)]
- **[Transductive]** Transductive Face Sketch-Photo Synthesis (**TNNLS**) [[paper](https://ieeexplore.ieee.org/document/6515363)]
- **[CDFSL]** Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition (**ICCV**) [[paper](https://openaccess.thecvf.com/content_iccv_2013/papers/Huang_Coupled_Dictionary_and_2013_ICCV_paper.pdf)]

### 2012

- **[SCDL]** Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis (**CVPR**) [[paper](https://ieeexplore.ieee.org/document/6247930)]
- **[MWF]** Markov Weight Fields for face sketch synthesis (**CVPR**) [[paper](https://ieeexplore.ieee.org/document/6247788)]

- **[SR]** Face Sketch–Photo Synthesis and Retrieval Using Sparse Representation (**TCSVT**) [[paper](https://ieeexplore.ieee.org/document/6196209)]

### 2011

- **[LRM]** Local Regression Model for Automatic Face Sketch Generation (**ICIG**) [[paper](https://ieeexplore.ieee.org/document/6005598)]

- **[MOR]** Face Sketch Synthesis via Multivariate Output Regression (**HCII**) [[paper](https://link.springer.com/content/pdf/10.1007/978-3-642-21602-2_60.pdf)]
- **[MDSR]** Face Sketch-Photo Synthesis under Multi-dictionary Sparse Representation Framework (**ICIG**) [[paper](https://ieeexplore.ieee.org/document/6005537)]
- **[SVR]** Face sketch-photo synthesis based on support vector regression (**ICIP**) [[paper](https://ieeexplore.ieee.org/document/6115625)]

### 2010

- **[LPR]** Lighting and Pose Robust Face Sketch Synthesis (**ECCV**) [[paper](https://link.springer.com/content/pdf/10.1007/978-3-642-15567-3_31.pdf)]

### 2009

- **[MRF]** Face Photo-Sketch Synthesis and Recognition (**TPAMI**) [[paper](https://ieeexplore.ieee.org/document/4624272)]

### 2008

- **[E-HMM]** Face Sketch Synthesis Algorithm Based on E-HMM and Selective Ensemble (**TCSVT**) [[paper](https://ieeexplore.ieee.org/document/4453838)]
- **[HCM]** A Hierarchical Compositional Model for Face Representation and Sketching (**TPAMI**) [[paper](https://ieeexplore.ieee.org/document/4468712)]

### 2005

- **[Nonlinear]** A Nonlinear Approach for Face Sketch Synthesis and Recognition (**CVPR**) [[paper](http://mmlab.ie.cuhk.edu.hk/archive/2005/CVPR_face_sketch_05.pdf)]

### 2001

- **[EFSGNS]** Example-based facial sketch generation with non-parametric sampling (**ICCV**) [[paper](http://www.stat.ucla.edu/~sczhu/papers/Conf_2001/TBD2001_face_sketch.pdf)]



## General Neural Style Transfer

### 2021

- **[RST]** Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes (**CVPR**) [[paper](https://arxiv.org/pdf/2103.17185.pdf)] [[code](https://github.com/CompVis/brushstroke-parameterized-style-transfer)]

- **[LPN]** Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer (**CVPR**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Lin_Drafting_and_Revision_Laplacian_Pyramid_Network_for_Fast_High-Quality_Artistic_CVPR_2021_paper.pdf)] [[code](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/lap_style.md)]

- **[pSp]** Encoding in Style: A StyleGAN Encoder for Image-to-Image Translation (**CVPR**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Richardson_Encoding_in_Style_A_StyleGAN_Encoder_for_Image-to-Image_Translation_CVPR_2021_paper.pdf)] [[code](https://github.com/eladrich/pixel2style2pixel)]

### 2020

- **[DIN]** Dynamic Instance Normalization for Arbitrary Style Transfer (**AAAI**)
[[paper](https://arxiv.org/pdf/1911.06953.pdf)]

### 2019

- **[LinearTransfer]** Learning Linear Transformations for Fast Image and Video Style Transfer (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Learning_Linear_Transformations_for_Fast_Image_and_Video_Style_Transfer_CVPR_2019_paper.pdf)] [[code](https://github.com/sunshineatnoon/LinearStyleTransfer)]

- **[SANet]** Arbitrary Style Transfer With Style-Attentional Networks (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Park_Arbitrary_Style_Transfer_With_Style-Attentional_Networks_CVPR_2019_paper.pdf)] [[demo](https://dypark86.github.io/SANET/)]

- **[Image2StyleGAN]** Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space? (**ICCV**) [[paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Abdal_Image2StyleGAN_How_to_Embed_Images_Into_the_StyleGAN_Latent_Space_ICCV_2019_paper.pdf)] [[code](https://github.com/zaidbhat1234/Image2StyleGAN)]

### 2018

- **[DFR]** Arbitrary Style Transfer With Deep Feature Reshuffle (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Gu_Arbitrary_Style_Transfer_CVPR_2018_paper.pdf)] [[code](https://github.com/msracver/Style-Feature-Reshuffle)]

- **[CartoonGAN]** CartoonGAN: Generative Adversarial Networks for Photo Cartoonization (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_CartoonGAN_Generative_Adversarial_CVPR_2018_paper.pdf)] [[code](https://github.com/znxlwm/pytorch-CartoonGAN)]

- **[MNetwork]** Neural Style Transfer via Meta Networks (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_Neural_Style_Transfer_CVPR_2018_paper.pdf)] [[code](https://github.com/shenfalong/styletransfer)]

- **[Avatar-net]** Avatar-Net: Multi-Scale Zero-Shot Style Transfer by Feature Decoration (**CVPR**)
[[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Sheng_Avatar-Net_Multi-Scale_Zero-Shot_CVPR_2018_paper.pdf)] [[code](https://github.com/LucasSheng/avatar-net)]

- **[CFITT]** A Common Framework for Interactive Texture Transfer (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Men_A_Common_Framework_CVPR_2018_paper.pdf)] [[code](https://github.com/menyifang/CFITT)]

- **[SSC]** Separating Style and Content for Generalized Style Transfer (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Separating_Style_and_CVPR_2018_paper.pdf)] [[code](https://github.com/zhyxun/Separating-Style-and-Content-for-Generalized-Style-Transfer)]

- **[SNST]** Stereoscopic Neural Style Transfer (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Stereoscopic_Neural_Style_CVPR_2018_paper.pdf)]

- **[CL]** The Contextual Loss for Image Transformation with Non-Aligned Data (**ECCV**) [[paper](https://openaccess.thecvf.com/content_ECCV_2018/papers/Roey_Mechrez_The_Contextual_Loss_ECCV_2018_paper.pdf)] [[code](https://github.com/roimehrez/contextualLoss)]

- **[SACL]** A Style-aware Content Loss for Real-time HD Style Transfer (**ECCV**) [[paper](https://openaccess.thecvf.com/content_ECCV_2018/papers/Artsiom_Sanakoyeu_A_Style-aware_Content_ECCV_2018_paper.pdf)]

- **[ARF]** Stroke Controllable Fast Style Transfer with Adaptive Receptive Fields (**ECCV**) [[paper](https://openaccess.thecvf.com/content_ECCV_2018/papers/Yongcheng_Jing_Stroke_Controllable_Fast_ECCV_2018_paper.pdf)] [[code](https://github.com/LouieYang/stroke-controllable-fast-style-transfer)]



### 2017

- **[ILC]** Incorporating Long-range Consistency in CNN-based Texture Generation (**ICLR**) [[paper](https://arxiv.org/pdf/1606.01286.pdf)] [[code](https://github.com/guillaumebrg/texture_generation)]

- **[CIN]** A Learned
Representation for Artistic Style (**ICLR**) [[paper](https://arxiv.org/pdf/1610.07629.pdf)] [[code](https://github.com/magenta/magenta/tree/main/magenta/models/image_stylization)]

- **[CPF]** Controlling Perceptual Factors in Neural Style Transfer (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Gatys_Controlling_Perceptual_Factors_CVPR_2017_paper.pdf)] [[code](https://github.com/leongatys/NeuralImageSynthesis)]

- **[DPST]** Deep Photo Style Transfer (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Luan_Deep_Photo_Style_CVPR_2017_paper.pdf)] [[code](https://github.com/luanfujun/deep-photo-styletransfer)]

- **[FFN]** Diversified Texture Synthesis With Feed-Forward Networks (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Li_Diversified_Texture_Synthesis_CVPR_2017_paper.pdf)] [[code](https://github.com/Yijunmaverick/MultiTextureSynthesis)]

- **[StyleBank]** StyleBank: An Explicit Representation for Neural Image Style Transfer (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Chen_StyleBank_An_Explicit_CVPR_2017_paper.pdf)]

- **[ITN]** Improved Texture Networks: Maximizing Quality and Diversity in Feed-Forward Stylization and Texture Synthesis (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Ulyanov_Improved_Texture_Networks_CVPR_2017_paper.pdf)] [[code](https://github.com/DmitryUlyanov/texture_nets)]

- **[HDCNN]** Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Wang_Multimodal_Transfer_A_CVPR_2017_paper.pdf)] [[code](https://github.com/fullfanta/multimodal_transfer)]

- **[AdaIN]** Arbitrary Style Transfer in Real-Time With Adaptive Instance Normalization (**ICCV**)
[[paper](https://openaccess.thecvf.com/content_ICCV_2017/papers/Huang_Arbitrary_Style_Transfer_ICCV_2017_paper.pdf)] [[code](https://github.com/xunhuang1995/AdaIN-style)]

- **[CI]** Characterizing and Improving Stability in Neural Style Transfer (**ICCV**) [[paper](https://openaccess.thecvf.com/content_ICCV_2017/papers/Gupta_Characterizing_and_Improving_ICCV_2017_paper.pdf)] [[code](https://github.com/jcjohnson/fast-neural-style)]

- **[DNLRF]** Decoder Network Over Lightweight Reconstructed Feature for Fast Semantic Style Transfer (**ICCV**) [[paper](https://openaccess.thecvf.com/content_ICCV_2017/papers/Lu_Decoder_Network_Over_ICCV_2017_paper.pdf)]

- **[WCT]** Universal Style Transfer via Feature Transforms (**NeurIPS**) [[paper](https://arxiv.org/pdf/1705.08086.pdf)] [[code](https://github.com/Yijunmaverick/UniversalStyleTransfer)]

- **[VAT-DIA]** Visual Attribute Transfer through Deep Image Analogy (**TOG**) [[paper](https://arxiv.org/pdf/1705.01088.pdf)] [[code](https://github.com/msracver/Deep-Image-Analogy)]

### 2016

- **[NST]** Image Style Transfer Using Convolutional Neural Networks (**CVPR**) [[paper](XX)] [[code](https://github.com/kaishengtai/neuralart)]

- **[CNNMRF]** Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2016/papers/Li_Combining_Markov_Random_CVPR_2016_paper.pdf)] [[code](https://github.com/chuanli11/CNNMRF)]

- **[FNS]** Perceptual Losses for Real-Time Style Transfer and Super-Resolution (**ECCV**) [[paper](https://arxiv.org/pdf/1603.08155.pdf)] [[code](https://github.com/jcjohnson/fast-neural-style)]

- **[MGANs]** Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks (**ECCV**) [[paper](https://arxiv.org/pdf/1604.04382.pdf)] [[code](https://github.com/chuanli11/MGANs)]

- **[TextureNet]**
Texture Networks: Feed-forward Synthesis of Textures and Stylized Images (**ICML**) [[paper](http://proceedings.mlr.press/v48/ulyanov16.pdf)] [[code](https://github.com/DmitryUlyanov/texture_nets)]

- **[EDSC]** Painting style transfer for head portraits using convolutional neural networks (**TOG**) [[paper](https://dl.acm.org/doi/pdf/10.1145/2897824.2925968)]

- **[FPST]** Fast Patch-based Style Transfer of Arbitrary Style (**NeurIPSW**) [[paper](https://arxiv.org/pdf/1612.04337.pdf)] [[code](https://github.com/rtqichen/style-swap)]

### 2015

- **[NST]** A Neural Algorithm of Artistic Style (**arXiv**) [[paper](https://arxiv.org/pdf/1508.06576.pdf)]

## Deep Image-to-Image Translation

### 2022

- **[SofGAN]** SofGAN: A Portrait Image Generator with Dynamic Styling (**TOG**) [[paper](https://dl.acm.org/doi/abs/10.1145/3470848)] [[code](https://apchenstu.github.io/sofgan)]

### 2021

- **[LLS]** Conditional Generative Modeling via Learning the Latent Space (**ICLR**) [[paper](https://arxiv.org/pdf/2010.03132.pdf)] [[code](https://github.com/samgregoost/cGML)]
- **[GH-feat]** Generative Hierarchical Features from Synthesizing Images (**CVPR**) [[paper](https://arxiv.org/pdf/2007.10379.pdf)] [[code](https://github.com/genforce/ghfeat)]
- **[DivCo]** DivCo: Diverse Conditional Image Synthesis via Contrastive Generative Adversarial Network (**CVPR**) [[paper](https://arxiv.org/pdf/2103.07893.pdf)] [[code](https://github.com/ruiliu-ai/DivCo)]
- **[CoCosNet v2]** CoCosNet v2: Full-Resolution Correspondence Learning for Image Translation (**CVPR**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhou_CoCosNet_v2_Full-Resolution_Correspondence_Learning_for_Image_Translation_CVPR_2021_paper.pdf)]
[[code](https://github.com/microsoft/CoCosNet-v2)]
- **[SDEdit]** SDEdit: Image Synthesis and Editing with Stochastic Differential Equations (**arXiv**) [[paper](https://arxiv.org/pdf/2108.01073.pdf)] [[code](https://github.com/ermongroup/SDEdit)]

### 2020

- **[UGATIT]** U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation (**ICLR**) [[paper](https://arxiv.org/pdf/1907.10830.pdf)] [[code](https://github.com/znxlwm/UGATIT-pytorch)]

- **[CrossNet]** CrossNet: Latent Cross-Consistency for Unpaired Image Translation (**WACV**) [[paper](https://openaccess.thecvf.com/content_WACV_2020/papers/Sendik_CrossNet_Latent_Cross-Consistency_for_Unpaired_Image_Translation_WACV_2020_paper.pdf)]

- **[StarGAN v2]** StarGAN v2: Diverse Image Synthesis for Multiple Domains (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Choi_StarGAN_v2_Diverse_Image_Synthesis_for_Multiple_Domains_CVPR_2020_paper.pdf)] [[code](https://github.com/clovaai/stargan-v2)]

- **[HiDT]** High-Resolution Daytime Translation Without Domain Labels (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Anokhin_High-Resolution_Daytime_Translation_Without_Domain_Labels_CVPR_2020_paper.pdf)] [[code](https://github.com/saic-mdal/HiDT)]

- **[NICE-GAN]** Reusing Discriminators for Encoding: Towards Unsupervised Image-to-Image Translation (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Chen_Reusing_Discriminators_for_Encoding_Towards_Unsupervised_Image-to-Image_Translation_CVPR_2020_paper.pdf)] [[code](https://github.com/alpc91/NICE-GAN-pytorch)]

- **[SEAN]** SEAN: Image Synthesis With Semantic Region-Adaptive Normalization (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhu_SEAN_Image_Synthesis_With_Semantic_Region-Adaptive_Normalization_CVPR_2020_paper.pdf)]
[[code](https://github.com/ZPdesu/SEAN)]

- **[CoCosNet]** Cross-Domain Correspondence Learning for Exemplar-Based Image Translation (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_Cross-Domain_Correspondence_Learning_for_Exemplar-Based_Image_Translation_CVPR_2020_paper.pdf)] [[code](https://github.com/microsoft/CoCosNet)]

- **[TSIT]** TSIT: A Simple and Versatile Framework for Image-to-Image Translation (**ECCV**) [[paper](https://arxiv.org/pdf/2007.12072.pdf)] [[code](https://github.com/EndlessSora/TSIT)]

- **[DSMAP]** Domain-Specific Mappings for Generative Adversarial Style Transfer (**ECCV**) [[paper](https://arxiv.org/pdf/2008.02198.pdf)] [[code](https://github.com/acht7111020/DSMAP)]

- **[ACL-GAN]** Unpaired Image-to-Image Translation using Adversarial Consistency Loss (**ECCV**) [[paper](https://arxiv.org/pdf/2003.04858.pdf)] [[code](https://github.com/hyperplane-lab/ACL-GAN)]

- **[DRIT++]** DRIT++: Diverse Image-to-Image Translation via Disentangled Representations (**IJCV**) [[paper](https://link.springer.com/content/pdf/10.1007/s11263-019-01284-z.pdf)] [[code](https://github.com/HsinYingLee/DRIT)]

### 2019

- **[EGSC-IT]** Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency (**ICLR**) [[paper](https://arxiv.org/pdf/1805.11145.pdf)] [[code](https://github.com/charliememory/EGSC-IT)]

- **[HarmonicGAN]** Harmonic Unpaired Image-to-image Translation (**ICLR**) [[paper](https://arxiv.org/pdf/1902.09727.pdf)]

- **[GDWCT]** Image-To-Image Translation via Group-Wise Deep Whitening-And-Coloring Transformation (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Cho_Image-To-Image_Translation_via_Group-Wise_Deep_Whitening-And-Coloring_Transformation_CVPR_2019_paper.pdf)] [[code](https://github.com/WonwoongCho/GDWCT)]

- **[SPADE]** Semantic Image Synthesis with
Spatially-Adaptive Normalization (**CVPR**) [[paper](https://arxiv.org/pdf/1903.07291.pdf)] [[code](https://github.com/NVlabs/SPADE)]

- **[TransGaGa]** TransGaGa: Geometry-Aware Unsupervised Image-To-Image Translation (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Wu_TransGaGa_Geometry-Aware_Unsupervised_Image-To-Image_Translation_CVPR_2019_paper.pdf)]

- **[MS-GAN]** Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Mao_Mode_Seeking_Generative_Adversarial_Networks_for_Diverse_Image_Synthesis_CVPR_2019_paper.pdf)] [[code](https://github.com/HelenMao/MSGAN)]

- **[Selection-GAN]** Multi-Channel Attention Selection GAN With Cascaded Semantic Guidance for Cross-View Image Translation (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Tang_Multi-Channel_Attention_Selection_GAN_With_Cascaded_Semantic_Guidance_for_Cross-View_CVPR_2019_paper.pdf)] [[code](https://github.com/Ha0Tang/SelectionGAN)]

- **[AttentionGAN]** Attention-Guided Generative Adversarial Networks for Unsupervised Image-to-Image Translation (**IJCNN**) [[paper](https://ieeexplore.ieee.org/document/8851881)] [[code](https://github.com/Ha0Tang/AttentionGAN)]

- **[FUNIT]** Few-Shot Unsupervised Image-to-Image Translation (**ICCV**) [[paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Liu_Few-Shot_Unsupervised_Image-to-Image_Translation_ICCV_2019_paper.pdf)] [[code](https://github.com/NVlabs/FUNIT)]

### 2018

- **[Pix2pixHD]** High-Resolution Image Synthesis and Semantic Manipulation With Conditional GANs (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_High-Resolution_Image_Synthesis_CVPR_2018_paper.pdf)] [[code](https://github.com/NVIDIA/pix2pixHD)]

- **[DA-GAN]** DA-GAN: Instance-Level Image Translation by Deep Attention
Generative Adversarial Networks (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Ma_DA-GAN_Instance-Level_Image_CVPR_2018_paper.pdf)]

- **[StarGAN]** StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Choi_StarGAN_Unified_Generative_CVPR_2018_paper.pdf)] [[code](https://github.com/yunjey/StarGAN)]

- **[ModularGAN]** Modular Generative Adversarial Networks (**ECCV**) [[paper](https://openaccess.thecvf.com/content_ECCV_2018/papers/Bo_Zhao_Modular_Generative_Adversarial_ECCV_2018_paper.pdf)]

- **[GANimation]** Anatomically Coherent Facial Expression Synthesis (**ECCV**) [[paper](https://openaccess.thecvf.com/content_ECCV_2018/papers/Albert_Pumarola_Anatomically_Coherent_Facial_ECCV_2018_paper.pdf)] [[code](https://github.com/albertpumarola/GANimation)]

- **[SCANs]** Unsupervised Image-to-Image Translation with Stacked Cycle-Consistent Adversarial Networks (**ECCV**) [[paper](https://openaccess.thecvf.com/content_ECCV_2018/papers/Minjun_Li_Unsupervised_Image-to-Image_Translation_ECCV_2018_paper.pdf)]

- **[MUNIT]** Multimodal Unsupervised Image-to-image Translation (**ECCV**) [[paper](https://openaccess.thecvf.com/content_ECCV_2018/papers/Xun_Huang_Multimodal_Unsupervised_Image-to-image_ECCV_2018_paper.pdf)] [[code](https://github.com/NVlabs/MUNIT)]

- **[ELEGANT]** ELEGANT: Exchanging Latent Encodings with GAN for Transferring Multiple Face Attributes (**ECCV**) [[paper](https://openaccess.thecvf.com/content_ECCV_2018/papers/Taihong_Xiao_ELEGANT_Exchanging_Latent_ECCV_2018_paper.pdf)] [[code](https://github.com/Prinsphield/ELEGANT)]

### 2017

- **[Pix2pix]** Image-To-Image Translation With Conditional Adversarial Networks (**CVPR**)
[[paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.pdf)] [[code](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)]

- **[SisGAN]** Semantic Image Synthesis via Adversarial Learning (**ICCV**) [[paper](https://openaccess.thecvf.com/content_ICCV_2017/papers/Dong_Semantic_Image_Synthesis_ICCV_2017_paper.pdf)] [[code](https://github.com/woozzu/dong_iccv_2017)]

- **[CycleGAN]** Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks (**ICCV**) [[paper](https://openaccess.thecvf.com/content_ICCV_2017/papers/Zhu_Unpaired_Image-To-Image_Translation_ICCV_2017_paper.pdf)] [[code](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)]

- **[DualGAN]** DualGAN: Unsupervised Dual Learning for Image-To-Image Translation (**ICCV**) [[paper](https://openaccess.thecvf.com/content_ICCV_2017/papers/Yi_DualGAN_Unsupervised_Dual_ICCV_2017_paper.pdf)] [[code](https://github.com/togheppi/DualGAN)]

- **[DiscoGAN]** Learning to Discover Cross-Domain Relations with Generative Adversarial Networks (**ICML**) [[paper](http://proceedings.mlr.press/v70/kim17a/kim17a.pdf)] [[code](https://github.com/SKTBrain/DiscoGAN)]

- **[DTN]** Unsupervised Cross-domain Image Generation (**ICLR**) [[paper](https://arxiv.org/pdf/1611.02200.pdf)]

- **[BicycleGAN]** Toward Multimodal Image-to-Image Translation (**NeurIPS**) [[paper](https://papers.nips.cc/paper/2017/file/819f46e52c25763a55cc642422644317-Paper.pdf)] [[code](https://github.com/junyanz/BicycleGAN)]

- **[UNIT]** Unsupervised Image-to-Image Translation Networks (**NeurIPS**) [[paper](https://papers.nips.cc/paper/2017/file/dc6a6489640ca02b0d42dabeb8e46bb7-Paper.pdf)] [[code](https://github.com/mingyuliutw/UNIT)]

- **[DistanceGAN]** One-Sided Unsupervised Domain Mapping (**NeurIPS**)
[[paper](https://papers.nips.cc/paper/2017/file/59b90e1005a220e2ebc542eb9d950b1e-Paper.pdf)] [[code](https://github.com/sagiebenaim/DistanceGAN)]

- **[TriangleGAN]** Triangle Generative Adversarial Networks (**NeurIPS**) [[paper](https://papers.nips.cc/paper/2017/file/bbeb0c1b1fd44e392c7ce2fdbd137e87-Paper.pdf)] [[code](https://github.com/LiqunChen0606/Triangle-GAN)]



## Deep Image-to-Sketch Synthesis

### 2021

- **[MSG-SARL]** Multi-Scale Gradients Self-Attention Residual Learning for Face Photo-Sketch Transformation (**TIFS**) [[paper](https://ieeexplore.ieee.org/document/9225019)]
- **[GAN Sketching]** Sketch Your Own GAN (**ICCV**) [[paper](https://arxiv.org/pdf/2108.02774.pdf)]
- **[DoodleFormer]** DoodleFormer: Creative Sketch Drawing with Transformers (**arXiv**) [[paper](https://arxiv.org/pdf/2112.03258.pdf)]

### 2020

- **[APDrawing++]** Line Drawings for Face Portraits from Photos using Global and Local Structure based GANs (**TPAMI**) [[paper](https://ieeexplore.ieee.org/document/9069416)] [[code](https://github.com/yiranran/APDrawingGAN2)]

- **[UPDG]** Unpaired Portrait Drawing Generation via Asymmetric Cycle Mapping (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yi_Unpaired_Portrait_Drawing_Generation_via_Asymmetric_Cycle_Mapping_CVPR_2020_paper.pdf)] [[code](https://github.com/yiranran/Unpaired-Portrait-Drawing)]

- **[WCR-GAN]** Learning to Cartoonize Using White-Box Cartoon Representations (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_Learning_to_Cartoonize_Using_White-Box_Cartoon_Representations_CVPR_2020_paper.pdf)]

- **[EdgeGAN]** SketchyCOCO: Image Generation From Freehand Scene Sketches (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Gao_SketchyCOCO_Image_Generation_From_Freehand_Scene_Sketches_CVPR_2020_paper.pdf)]
[[code](https://github.com/sysu-imsl/EdgeGAN)]

- **[DeepPS]** Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches (**ECCV**) [[paper](https://arxiv.org/pdf/2001.02890.pdf)] [[code](https://github.com/VITA-Group/DeepPS)]

- **[DeepFaceDrawing]** DeepFaceDrawing: Deep Generation of Face Images from Sketches (**TOG**) [[paper](https://dl.acm.org/doi/abs/10.1145/3386569.3392386)] [[code](https://github.com/franknb/Drawing-to-Face)]

- **[CA-GAN]** Toward Realistic Face Photo-Sketch Synthesis via Composition-Aided GANs (**ITC**) [[paper](https://ieeexplore.ieee.org/document/9025751)] [[code](https://github.com/fei-hdu/ca-gan)]

- **[IDA-CycleGAN]** Identity-aware CycleGAN for face photo-sketch synthesis and recognition (**PR**) [[paper](https://www.sciencedirect.com/science/article/abs/pii/S0031320320300558)]

- **[IPAM-GAN]** An Identity-Preserved Model for Face Sketch-Photo Synthesis (**SPL**) [[paper](https://ieeexplore.ieee.org/document/9126135)]

- **[MvDT]** Universal Face Photo-Sketch Style Transfer via Multiview Domain Translation (**TIP**) [[paper](https://ieeexplore.ieee.org/document/9171460)] [[code](https://github.com/clpeng/UniversalFPSS)]



### 2019

- **[PI-REC]** PI-REC: Progressive Image Reconstruction Network With Edge and Color Domain (**arXiv**) [[paper](https://arxiv.org/pdf/1903.10146.pdf)] [[code](https://github.com/youyuge34/PI-REC)]

- **[DLLRR]** Deep Latent Low-Rank Representation for Face Sketch Synthesis (**TNNLS**) [[paper](https://ieeexplore.ieee.org/document/8621606)]

- **[Col-cGAN]** A Deep Collaborative Framework for Face Photo–Sketch Synthesis (**TNNLS**) [[paper](https://ieeexplore.ieee.org/document/8621611)]

- **[CFSS]** Cascaded Face Sketch Synthesis Under Various Illuminations (**TIP**) [[paper](https://ieeexplore.ieee.org/document/8848856)]

- **[KT]**
Face Photo-Sketch Synthesis via Knowledge Transfer (**IJCAI**) [[paper](https://www.ijcai.org/Proceedings/2019/0147.pdf)]

- **[Im2Pencil]** Im2Pencil: Controllable Pencil Illustration from Photographs (**CVPR**) [[paper](https://arxiv.org/pdf/1903.08682.pdf)] [[code](https://github.com/Yijunmaverick/Im2Pencil)]

- **[ISF]** Interactive Sketch & Fill: Multiclass Sketch-to-Image Translation (**ICCV**) [[paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Ghosh_Interactive_Sketch__Fill_Multiclass_Sketch-to-Image_Translation_ICCV_2019_paper.pdf)]

- **[APDrawing]** APDrawingGAN: Generating Artistic Portrait Drawings From Face Photos With Hierarchical GANs (**CVPR**) [[paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Yi_APDrawingGAN_Generating_Artistic_Portrait_Drawings_From_Face_Photos_With_Hierarchical_CVPR_2019_paper.pdf)] [[code](https://github.com/yiranran/APDrawingGAN)]

### 2018

- **[FSSC2F]** Face Sketch Synthesis from Coarse to Fine (**AAAI**) [[paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewFile/16088/16357)]

- **[TextureGAN]** TextureGAN: Controlling Deep Image Synthesis with Texture Patches (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Xian_TextureGAN_Controlling_Deep_CVPR_2018_paper.pdf)] [[code](https://github.com/janesjanes/Pytorch-TextureGAN)]

- **[SCC-GAN]** Learning to Sketch with Shortcut Cycle Consistency (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Song_Learning_to_Sketch_CVPR_2018_paper.pdf)]

- **[ContextualGAN]** Image Generation from Sketch Constraint Using Contextual GAN (**ECCV**) [[paper](https://arxiv.org/pdf/1711.08972.pdf)] [[code](https://github.com/elliottwu/sText2Image)]

- **[pGAN]** Robust Face Sketch Synthesis via Generative Adversarial Fusion of Priors and Parametric Sigmoid (**IJCAI**)
[[paper](https://www.ijcai.org/Proceedings/2018/0162.pdf)] [[code](https://github.com/hujiecpp/pGAN)]

- **[MRNF]** Markov Random Neural Fields for Face Sketch Synthesis (**IJCAI**) [[paper](https://www.ijcai.org/Proceedings/2018/0159.pdf)]

- **[PS2-MAN]** High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks (**FG**) [[paper](https://ieeexplore.ieee.org/document/8373815)] [[code](https://github.com/lidan1/PhotoSketchMAN)]

- **[DualT]** Dual-Transfer Face Sketch–Photo Synthesis (**TIP**) [[paper](https://ieeexplore.ieee.org/document/8463611)]

- **[MDAL]** Face Sketch Synthesis by Multidomain Adversarial Learning (**TNNLS**) [[paper](https://ieeexplore.ieee.org/document/8478205)] [[code](https://github.com/hujiecpp/MDAL)]

- **[FAG-GAN]** Facial Attributes Guided Deep Sketch-to-Photo Synthesis (**WACVW**) [[paper](https://ieeexplore.ieee.org/document/8347106)]

- **[Geo-GAN]** Unsupervised Facial Geometry Learning for Sketch to Photo Synthesis (**BIOSIG**) [[paper](https://ieeexplore.ieee.org/document/8552937)]

### 2017

- **[DGFL]** Deep Graphical Feature Learning for Face Sketch Synthesis (**IJCAI**) [[paper](https://www.ijcai.org/proceedings/2017/0500.pdf)]
- **[Scribbler]** Scribbler: Controlling Deep Image Synthesis with Sketch and Color (**CVPR**) [[paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Sangkloy_Scribbler_Controlling_Deep_CVPR_2017_paper.pdf)] [[project](http://scribbler.eye.gatech.edu)]

### 2015

- **[FCRL]** End-to-End Photo-Sketch Generation via Fully Convolutional Representation Learning (**ICMR**) [[paper](https://dl.acm.org/doi/pdf/10.1145/2671188.2749321)]

--------------------------------------------------------------------------------