├── .gitignore
├── .vscode
│   └── settings.json
├── LICENSE
└── README.md

/.gitignore:
--------------------------------------------------------------------------------
1 | .vscode/
2 | 
--------------------------------------------------------------------------------

/.vscode/settings.json:
--------------------------------------------------------------------------------
1 | {
2 |   "cSpell.words": [
3 |     "CVPR",
4 |     "Denoising",
5 |     "DIFFUSEMIX",
6 |     "ICCV",
7 |     "ICML",
8 |     "Localizable",
9 |     "Neur",
10 |     "WACV"
11 |   ]
12 | }
--------------------------------------------------------------------------------

/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 | 
3 | Copyright (c) 2018 Huan Wang
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
1 | # A-Survey-of-Synthetic-Images-Methods-with-Diffusion
2 | 
3 | ## Papers
4 | 
5 | ### Survey
6 | 
7 | - 2023-IEEE-[Diffusion Models in Vision: A Survey](https://ieeexplore.ieee.org/abstract/document/10081412)
8 | 
9 | - 2021-IEEE-[Deep Learning for Image Super-Resolution: A Survey](https://ieeexplore.ieee.org/abstract/document/9044873)
10 | 
11 | ### Base Model
12 | 
13 | - 2022-CVPR-[High-Resolution Image Synthesis with Latent Diffusion Models](https://openaccess.thecvf.com/content/CVPR2022/papers/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.pdf)[[Code](https://github.com/CompVis/latent-diffusion)]
14 | 
15 | ### Mathematical Formula
16 | 
17 | - 2023-NeurIPS-[Understanding Diffusion Objectives as the ELBO with Simple Data Augmentation](https://proceedings.neurips.cc/paper_files/paper/2023/file/ce79fbf9baef726645bc2337abb0ade2-Paper-Conference.pdf)
18 | 
19 | - 2023-NeurIPS-[Toward Understanding Generative Data Augmentation](https://proceedings.neurips.cc/paper_files/paper/2023/file/a94a8800a4b0af45600bab91164849df-Paper-Conference.pdf)[[Code](https://github.com/ML-GSAI/Understanding-GDA)]
20 | 
21 | ### Model For High Resolution
22 | 
23 | - 2023-ICML-[simple diffusion: End-to-end diffusion for high resolution images](https://proceedings.mlr.press/v202/hoogeboom23a/hoogeboom23a.pdf)
24 | 
25 | ### Personalization
26 | 
27 | - 2023-CVPR-[DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation](https://dreambooth.github.io/) [[Slides](https://cvpr2023.thecvf.com/media/cvpr-2023/Slides/23180.pdf)]
28 | 
29 | - 2023-NeurIPS-[Visual Instruction Inversion: Image Editing via Visual Prompting](https://thaoshibe.github.io/visii/)
30 | ### Traditional Augmentation Methods
31 | 
32 | - 2022-CVPR-[TeachAugment: Data Augmentation Optimization Using Teacher Knowledge](https://openaccess.thecvf.com/content/CVPR2022/papers/Suzuki_TeachAugment_Data_Augmentation_Optimization_Using_Teacher_Knowledge_CVPR_2022_paper.pdf)[[Code](https://github.com/DensoITLab/TeachAugment)]
33 | 
34 | - 2021-CVPR-[SuperMix: Supervising the Mixing Data Augmentation](https://openaccess.thecvf.com/content/CVPR2021/papers/Dabouei_SuperMix_Supervising_the_Mixing_Data_Augmentation_CVPR_2021_paper.pdf)[[Code](https://github.com/alldbi/SuperMix)]
35 | 
36 | - 2019-ICCV-[CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features](https://openaccess.thecvf.com/content_ICCV_2019/papers/Yun_CutMix_Regularization_Strategy_to_Train_Strong_Classifiers_With_Localizable_Features_ICCV_2019_paper.pdf)[[Code](https://github.com/clovaai/CutMix-PyTorch)]
37 | 
38 | ### Mixing General Enhancement Methods
39 | 
40 | - 2024-CVPR-[DIFFUSEMIX: Label-Preserving Data Augmentation with Diffusion Models](https://openaccess.thecvf.com/content/CVPR2024/papers/Islam_DiffuseMix_Label-Preserving_Data_Augmentation_with_Diffusion_Models_CVPR_2024_paper.pdf)[[Code](https://github.com/khawar-islam/diffuseMix)]
41 | 
42 | - 2023-NeurIPS-[DP-Mix: Mixup-based Data Augmentation for Differentially Private Learning](https://proceedings.neurips.cc/paper_files/paper/2023/file/28484cee66f27fa070796b631cc5242d-Paper-Conference.pdf)[[Code](https://github.com/wenxuan-Bao/DP-Mix)]
43 | 
44 | ### Controllable Augmentation
45 | 
46 | - 2024-WACV-[Data Augmentation for Object Detection via Controllable Diffusion Models](https://openaccess.thecvf.com/content/WACV2024/papers/Fang_Data_Augmentation_for_Object_Detection_via_Controllable_Diffusion_Models_WACV_2024_paper.pdf)[[Code](https://github.com/FANGAreNotGnu/ControlAug)]
47 | 
48 | - 2023-CVPRW-[Semantic Data Augmentation with Generative Models](https://openaccess.thecvf.com/content/CVPR2023W/GCV/papers/Shivashankar_Semantic_Data_Augmentation_With_Generative_Models_CVPRW_2023_paper.pdf)
49 | 
50 | - 2024-WACV-[Controllable Image Synthesis of Industrial Data using Stable Diffusion](https://openaccess.thecvf.com/content/WACV2024/papers/Valvano_Controllable_Image_Synthesis_of_Industrial_Data_Using_Stable_Diffusion_WACV_2024_paper.pdf)
51 | 
52 | - 2023-NeurIPS-[Diffusion Self-Guidance for Controllable Image Generation](https://proceedings.neurips.cc/paper_files/paper/2023/file/3469b211b829b39d2b0cfd3b880a869c-Paper-Conference.pdf)[[Code](https://github.com/Sainzerjj/Free-Guidance-Diffusion)]
53 | 
54 | - 2023-ICCV-[SVDiff: Compact Parameter Space for Diffusion Fine-Tuning](https://openaccess.thecvf.com/content/ICCV2023/papers/Han_SVDiff_Compact_Parameter_Space_for_Diffusion_Fine-Tuning_ICCV_2023_paper.pdf)
55 | 
56 | - 2024-WACV-[Training-Free Layout Control with Cross-Attention Guidance](https://openaccess.thecvf.com/content/WACV2024/papers/Chen_Training-Free_Layout_Control_With_Cross-Attention_Guidance_WACV_2024_paper.pdf)
57 | 
58 | ### Prompts
59 | 
60 | - 2023-ICCV-[Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning](https://openaccess.thecvf.com/content/ICCV2023/papers/Feng_Diverse_Data_Augmentation_with_Diffusions_for_Effective_Test-time_Prompt_Tuning_ICCV_2023_paper.pdf)[[Code](https://github.com/chunmeifeng/DiffTPT)]
61 | 
62 | - 2023-NeurIPS-[Diversify Your Vision Datasets with Automatic Diffusion-Based Augmentation](https://proceedings.neurips.cc/paper_files/paper/2023/file/f99f7b22ad47fa6ce151730cf8d17911-Paper-Conference.pdf)[[Code](https://github.com/lisadunlap/ALIA)]
63 | 
64 | - 2024-AAAI-[Semantic-Guided Generative Image Augmentation Method with Diffusion Models for Image Classification](https://ojs.aaai.org/index.php/AAAI/article/view/28084)
65 | 
66 | - 2023-ICCVW-[Diffusion Based Augmentation for Captioning and Retrieval in Cultural Heritage](https://openaccess.thecvf.com/content/ICCV2023W/e-Heritage/papers/Cioni_Diffusion_Based_Augmentation_for_Captioning_and_Retrieval_in_Cultural_Heritage_ICCVW_2023_paper.pdf)
67 | 
68 | ### Multi-Modal
69 | 
70 | - 2024-KDD-[SimDiff: Simple Denoising Probabilistic Latent Diffusion Model for Data Augmentation on Multi-modal Knowledge Graph](https://dl.acm.org/doi/pdf/10.1145/3637528.3671769)[[Code](https://github.com/ranlislz/SimDiff)]
71 | 
72 | ### Deblurring
73 | 
74 | - 2024-CVPR-[ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation](https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_ID-Blau_Image_Deblurring_by_Implicit_Diffusion-based_reBLurring_AUgmentation_CVPR_2024_paper.pdf)
75 | 
76 | ### Dataset Augmentation
77 | 
78 | - 2023-NeurIPS-[Dataset Diffusion: Diffusion-based Synthetic Dataset Generation for Pixel-Level Semantic Segmentation](https://proceedings.neurips.cc/paper_files/paper/2023/file/f2957e48240c1d90e62b303574871b47-Paper-Conference.pdf)[[Code](https://github.com/VinAIResearch/Dataset-Diffusion)]
79 | 
80 | - 2024-CVPR-[Domain Gap Embeddings for Generative Dataset Augmentation](https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Domain_Gap_Embeddings_for_Generative_Dataset_Augmentation_CVPR_2024_paper.pdf)[[Code](https://github.com/humansensinglab/DoGE)]
81 | 
82 | - 2023-NeurIPS-[Expanding Small-Scale Datasets with Guided Imagination](https://proceedings.neurips.cc/paper_files/paper/2023/file/f188a55392d3a7509b0b27f8d24364bb-Paper-Conference.pdf)[[Code](https://github.com/Vanint/DatasetExpansion.git)]
83 | 
84 | ### Face Generation
85 | 
86 | - 2023-CVPR-[DCFace: Synthetic Face Generation with Dual Condition Diffusion Model](https://openaccess.thecvf.com/content/CVPR2023/papers/Kim_DCFace_Synthetic_Face_Generation_With_Dual_Condition_Diffusion_Model_CVPR_2023_paper.pdf)[[Code](https://github.com/mk-minchul/dcface)]
87 | 
88 | ### Fine-tuning
89 | 
90 | - 2024-CVPR-[Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model](https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Enhance_Image_Classification_via_Inter-Class_Image_Mixup_with_Diffusion_Model_CVPR_2024_paper.pdf)[[Code](https://github.com/Zhicaiwww/Diff-Mix)]
91 | 
92 | ### Adversarial Guidance
93 | 
94 | - 2023-ICCV-[AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models](https://openaccess.thecvf.com/content/ICCV2023/papers/Chen_AdvDiffuser_Natural_Adversarial_Example_Synthesis_with_Diffusion_Models_ICCV_2023_paper.pdf)
95 | 
96 | ### Inversion
97 | 
98 | - 2020-CVPR-[Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yin_Dreaming_to_Distill_Data-Free_Knowledge_Transfer_via_DeepInversion_CVPR_2020_paper.pdf)[[Code](https://github.com/NVlabs/DeepInversion)]
99 | 
100 | - 2023-ICMLW-[Training on Thin Air: Improve Image Classification with Generated Data](https://dmlr.ai/assets/accepted-papers/9/CameraReady/Diffusion_Inversion_DMLR_ICML2023_compressed.pdf)[[Code](https://github.com/yongchaoz/diffusion_inversion)]
101 | 
102 | - 2023-CVPR-[Inversion-based Style Transfer with Diffusion Models](https://openaccess.thecvf.com/content/CVPR2023/papers/Zhang_Inversion-Based_Style_Transfer_With_Diffusion_Models_CVPR_2023_paper.pdf)[[Code](https://github.com/zyxElsa/InST)]
103 | 
104 | - 2024-AAAI-[Compositional Inversion for Stable Diffusion Models](https://ojs.aaai.org/index.php/AAAI/article/view/28565)[[Code](https://github.com/zhangxulu1996/Compositional-Inversion)]
105 | 
106 | - 2023-ICCVW-[Controllable Inversion of Black-Box Face Recognition Models via Diffusion](https://openaccess.thecvf.com/content/ICCV2023W/AMFG/papers/Kansy_Controllable_Inversion_of_Black-Box_Face_Recognition_Models_via_Diffusion_ICCVW_2023_paper.pdf)
107 | 
108 | - 2023-ICLR-[An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion](https://openreview.net/pdf?id=NAQvF08TcyG)[[Code](https://github.com/rinongal/textual_inversion)]
109 | 
110 | ### Image Variation
111 | 
112 | - 2023-NeurIPS-[Real-World Image Variation by Aligning Diffusion Inversion Chain](https://proceedings.neurips.cc/paper_files/paper/2023/file/61960fdfda4d4e95fa1c1f6e64bfe8bc-Paper-Conference.pdf)[[Code](https://github.com/dvlab-research/RIVAL)]
113 | 
114 | ### Image Editing
115 | 
116 | - 2023-ICCV-[Effective Real Image Editing with Accelerated Iterative Diffusion Inversion](https://openaccess.thecvf.com/content/ICCV2023/papers/Pan_Effective_Real_Image_Editing_with_Accelerated_Iterative_Diffusion_Inversion_ICCV_2023_paper.pdf)
117 | 
118 | - 2024-CVPR-[Inversion-Free Image Editing with Language-Guided Diffusion Models](https://openaccess.thecvf.com/content/CVPR2024/papers/Xu_Inversion-Free_Image_Editing_with_Language-Guided_Diffusion_Models_CVPR_2024_paper.pdf)[[Code](https://github.com/sled-group/InfEdit)]
119 | 
120 | - 2023-ICCV-[Prompt Tuning Inversion for Text-Driven Image Editing Using Diffusion Models](https://openaccess.thecvf.com/content/ICCV2023/papers/Dong_Prompt_Tuning_Inversion_for_Text-driven_Image_Editing_Using_Diffusion_Models_ICCV_2023_paper.pdf)
121 | 
122 | - 2022-NeurIPS-[One Model to Edit Them All: Free-Form Text-Driven Image Manipulation with Semantic Modulations](https://proceedings.neurips.cc/paper_files/paper/2022/file/a0a53fefef4c2ad72d5ab79703ba70cb-Paper-Conference.pdf)[[Code](https://github.com/kristen-rang/FFCLIP)]
123 | 
124 | - 2023-CVPR-[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://openaccess.thecvf.com/content/CVPR2023/papers/Brooks_InstructPix2Pix_Learning_To_Follow_Image_Editing_Instructions_CVPR_2023_paper.pdf)[[Code](https://github.com/timothybrooks/instruct-pix2pix)]
125 | 
126 | ### Super Resolution
127 | 
128 | - 2024-CVPR-[Beyond Image Super-Resolution for Image Recognition with Task-Driven Perceptual Loss](https://openaccess.thecvf.com/content/CVPR2024/papers/Kim_Beyond_Image_Super-Resolution_for_Image_Recognition_with_Task-Driven_Perceptual_Loss_CVPR_2024_paper.pdf)[[Code](https://github.com/JaehaKim97/SR4IR)]
129 | 
130 | - 2024-CVPR-[Low-Res Leads the Way: Improving Generalization for Super-Resolution by Self-Supervised Learning](https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Low-Res_Leads_the_Way_Improving_Generalization_for_Super-Resolution_by_Self-Supervised_CVPR_2024_paper.pdf)
131 | 
132 | - 2023-NeurIPS-[ResShift: Efficient Diffusion Model for Image Super-resolution by Residual Shifting](https://proceedings.neurips.cc/paper_files/paper/2023/file/2ac2eac5098dba08208807b65c5851cc-Paper-Conference.pdf)[[Code](https://github.com/zsyOAOA/ResShift)]
133 | 
--------------------------------------------------------------------------------