├── README.md
└── assets
    ├── approaches
    │   ├── Adaptation-Aware.svg
    │   ├── Contrastive Learning.svg
    │   ├── Dynamic Network Architecture.svg
    │   ├── Ensemble Large Pretrained Vision Models.svg
    │   ├── Feature Enhancement.svg
    │   ├── Feature-level Augmentation.svg
    │   ├── Fusion.svg
    │   ├── Image-level Augmentation.svg
    │   ├── Knowledge Distillation.svg
    │   ├── Latent Space.svg
    │   ├── Masking.svg
    │   ├── Modulation.svg
    │   ├── Natural-Language Guided.svg
    │   ├── Non-Progressive Training.svg
    │   ├── Optimization.svg
    │   ├── Other Multi-Task Objectives.svg
    │   ├── Progressive Training.svg
    │   ├── Prompt Tuning.svg
    │   ├── Prototype Learning.svg
    │   ├── Regularizer-based Fine-Tuning.svg
    │   ├── Regularizer.svg
    │   ├── Transformation-Driven Design.svg
    │   └── Transformation.svg
    ├── model
    │   ├── DM.svg
    │   ├── GAN.svg
    │   ├── VAE.svg
    │   └── VQ-VAE.svg
    └── task
        ├── IGM.svg
        ├── SGM.svg
        ├── cGM-1.svg
        ├── cGM-2.svg
        ├── cGM-3.svg
        ├── uGM-1.svg
        ├── uGM-2.svg
        └── uGM-3.svg

/README.md:
--------------------------------------------------------------------------------

[![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)
[![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://GitHub.com/Naereen/StrapDown.js/graphs/commit-activity)
[![PR's Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat)](http://makeapullrequest.com)

# A Survey on Generative Modeling with Limited Data, Few Shots, and Zero Shot
### [Project Page](https://gmdc-survey.github.io) | [Paper](https://arxiv.org/abs/2307.14397) | [Bibtex](#bibtex)
[Milad Abdollahzadeh](https://miladabd.github.io/), [Touba Malekzadeh](https://scholar.google.com/citations?user=DgnZKiQAAAAJ&hl=en)\*, [Christopher T. H. Teo](https://scholar.google.com/citations?user=JhyGETcAAAAJ&hl=en)\*, [Keshigeyan Chandrasegaran](https://keshik6.github.io/)\*, [Guimeng Liu](https://scholar.google.com/citations?user=wJskd84AAAAJ&hl=en&oi=sra), [Ngai-Man Cheung](https://sites.google.com/site/mancheung0407/)
(* Equal contribution, Corresponding author)

This repo contains the list of papers with public code implementations for Generative Modeling under Data Constraint (GM-DC).
For each work, we identify the generative task(s) addressed, the approach taken, and the type of generative model used.

First, we define the generative tasks and the approaches, and then provide our comprehensive list of GM-DC works with the required details for each work.

## :star: Overview
> In machine learning, generative modeling aims to learn to generate new data statistically similar to the training data distribution. In this paper, we survey learning generative models under limited data, few shots, and zero shot, referred to as Generative Modeling under Data Constraint (GM-DC). This is an important topic when data acquisition is challenging, e.g., healthcare applications. We discuss background and challenges, and propose two taxonomies: one on GM-DC tasks and another on GM-DC approaches. Importantly, we study interactions between different GM-DC tasks and approaches. Furthermore, we highlight research gaps, research trends, and potential avenues for future exploration.

## :earth_asia: News
* **Oct 28, 2024:** The slides for our ICIP tutorial on *"Generative Modeling for Limited Data, Few Shots and Zero Shot"* can be found [here](https://drive.google.com/file/d/1L4k2VTywZDnIhl51Or4I-kOcMt26CSP4/view?usp=sharing).
* **July 28, 2023:** First release (113 works included)!

## Generative Tasks Definition
We define 8 different generative tasks under data constraints based on a rigorous review of the literature. The description of these tasks can be found in the following table:

| Task | Description & Example | Illustration |
|:---------|:---------------------:|:------------:|
| **uGM-1** | **Description:** Given $K$ samples from a domain $\mathcal{D}$, learn to generate diverse and high-quality samples from $\mathcal{D}$ <br> **Example:** [ADA](https://arxiv.org/abs/2006.06676) learns a StyleGAN2 using 1k images from AFHQ-Dog | ![uGM1](https://github.com/sutd-visual-computing-group/awesome-generative-modeling-under-data-constraints/assets/29326313/14495003-55e9-4643-a798-5cfd7e43a5b7) |
| **uGM-2** | **Description:** Given a pre-trained generator on a source domain $\mathcal{D}_s$ and $K$ samples from a target domain $\mathcal{D}_t$, learn to generate diverse and high-quality samples from $\mathcal{D}_t$ <br> **Example:** [CDC](https://arxiv.org/abs/2104.06820) adapts a pre-trained GAN on FFHQ (Human Faces) to Sketches using 10 samples | ![uGM2](https://github.com/sutd-visual-computing-group/awesome-generative-modeling-under-data-constraints/assets/29326313/fa0a4549-0dee-4125-a643-11c14e161405) |
| **uGM-3** | **Description:** Given a pre-trained generator on a source domain $\mathcal{D}_s$ and a text prompt describing a target domain $\mathcal{D}_t$, learn to generate diverse and high-quality samples from $\mathcal{D}_t$ <br> **Example:** [StyleGAN-NADA](https://arxiv.org/abs/2108.00946) adapts a pre-trained GAN on FFHQ to the painting domain using `Fernando Botero Painting` as input | ![uGM3](https://github.com/sutd-visual-computing-group/awesome-generative-modeling-under-data-constraints/assets/29326313/05dc48fa-d794-4448-9ccf-70d961a2214d) |
| **cGM-1** | **Description:** Given $K$ samples with class labels from a domain $\mathcal{D}$, learn to generate diverse and high-quality samples conditioned on the class labels from $\mathcal{D}$ <br> **Example:** [CbC](https://arxiv.org/abs/2201.06578) trains a conditional generator on 20 classes of ImageNet Carnivores using 100 images per class | ![cGM1](https://github.com/sutd-visual-computing-group/awesome-generative-modeling-under-data-constraints/assets/29326313/b74f20df-9499-434a-afa1-76a0ca519c2f) |
| **cGM-2** | **Description:** Given a pre-trained generator on the seen classes $C_{seen}$ of a domain $\mathcal{D}$ and $K$ samples with class labels from unseen classes $C_{unseen}$ of $\mathcal{D}$, learn to generate diverse and high-quality samples conditioned on the class labels for $C_{unseen}$ from $\mathcal{D}$ <br> **Example:** [LoFGAN](https://ieeexplore.ieee.org/document/9710556) learns from 85 classes of Flowers to generate images for an unseen class with only 3 samples | ![cGM2](https://github.com/sutd-visual-computing-group/awesome-generative-modeling-under-data-constraints/assets/29326313/a2d4d83f-6de6-4b68-a929-58a97009caf6) |
| **cGM-3** | **Description:** Given a pre-trained generator on a source domain $\mathcal{D}_s$ and $K$ samples with class labels from a target domain $\mathcal{D}_t$, learn to generate diverse and high-quality samples conditioned on the class labels from $\mathcal{D}_t$ <br> **Example:** [VPT](https://arxiv.org/abs/2210.00990) adapts a pre-trained conditional generator on ImageNet to Places365 with 500 images per class | ![cGM3](https://github.com/sutd-visual-computing-group/awesome-generative-modeling-under-data-constraints/assets/29326313/d0a58267-f782-46d7-aa6a-d53d520c9ece) |
| **IGM** | **Description:** Given $K$ samples (usually $K=1$) and assuming a rich internal distribution for patches within these samples, learn to generate diverse and high-quality samples with the same internal patch distribution <br> **Example:** [SinDDM](https://arxiv.org/abs/2211.16582) trains a generator using a single image of Marina Bay Sands and generates variants of it | ![IGM](https://github.com/sutd-visual-computing-group/awesome-generative-modeling-under-data-constraints/assets/29326313/7d0ee556-cb72-4766-9dc1-363369719eef) |
| **SGM** | **Description:** Given a pre-trained generator, $K$ samples of a particular subject, and a text prompt, learn to generate diverse and high-quality samples containing the same subject <br> **Example:** [DreamBooth](https://arxiv.org/abs/2208.12242) trains a generator using 4 images of a particular backpack and adapts it with a text prompt to place the subject in the `grand canyon` | ![SGM](https://github.com/sutd-visual-computing-group/awesome-generative-modeling-under-data-constraints/assets/29326313/d56a9fd3-45c3-4c36-aede-212cc32cc95f) |

Please refer to our survey for a more detailed discussion of these generative tasks, including the attributes of each task and the data limitation range addressed for each task.
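To make the task definitions concrete, below is a minimal, self-contained sketch of the uGM-2 setting: a generator pre-trained on a source domain is adapted to a target domain from only $K$ samples. The tiny MLP networks, the random stand-in data, the loop length, and the frozen-discriminator heuristic (in the spirit of *Freeze the Discriminator*, listed under Transfer Learning below) are illustrative assumptions, not the method of any particular paper.

```python
import torch
import torch.nn as nn

Z_DIM, X_DIM, K = 16, 64, 10   # latent size, data size, number of target samples

# Stand-ins for a generator/discriminator pre-trained on a large source domain.
G = nn.Sequential(nn.Linear(Z_DIM, 128), nn.ReLU(), nn.Linear(128, X_DIM))
D = nn.Sequential(nn.Linear(X_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

# FreezeD-style heuristic: freeze the lower discriminator layers and adapt only
# its head, which helps against overfitting to the K target samples.
for p in D[0].parameters():
    p.requires_grad = False

target_x = torch.randn(K, X_DIM)   # placeholder for the K real target samples
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam([p for p in D.parameters() if p.requires_grad], lr=1e-4)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(K, 1), torch.zeros(K, 1)

for step in range(500):
    # Discriminator step: real target samples vs. current fakes.
    fake = G(torch.randn(K, Z_DIM)).detach()
    loss_d = bce(D(target_x), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the (partially frozen) discriminator.
    loss_g = bce(D(G(torch.randn(K, Z_DIM))), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

The same skeleton, with the adaptation recipe swapped out (regularizers, modulation, prompt tuning, and so on), underlies most of the transfer-based tasks surveyed below.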

## Transfer Learning (50 works)

- **Transferring GANs: generating images from limited data** <br>
ECCV 2018
[Paper] [Official Code]
[](#0) [](#0) [](#0) [](#0) 73 | - **Image Generation from Small Datasets via Batch Statistics Adaptation**
ICCV 2019
[Paper] [Official Code]
[](#2) [](#2) [](#2) 74 | - **Freeze the Discriminator: a Simple Baseline for Fine-tuning GANs**
CVPR 2020-W
[Paper] [Official Code]
[](#3) [](#3) [](#3) 75 | - **On Leveraging Pretrained GANs for Generation with Limited Data**
ICML 2020
[Paper] [Official Code]
[](#8) [](#8) [](#8) 76 | - **Few-Shot Image Generation with Elastic Weight Consolidation**
NeurIPS 2020
[Paper]
[](#9) [](#9) [](#9) 77 | - **GAN Memory with No Forgetting**
NeurIPS 2020
[Paper] [Official Code]
[](#10) [](#10) [](#10) 78 | - **Few-Shot Adaptation of Generative Adversarial Networks**
arXiv 2020
[Paper] [Official Code]
[](#31) [](#31) [](#31) 79 | - **Effective Knowledge Transfer from GANs to Target domains with Few Images**
CVPR 2021
[Paper] [Official Code]
[](#4) [](#4) [](#4) [](#4) 80 | - **Few-Shot Image Generation via Cross-domain Correspondence**
CVPR 2021
[Paper] [Official Code]
[](#12) [](#12) [](#12) 81 | - **Efficient Conditional GAN Transfer with Knowledge Propagation across Classes** <br>
CVPR 2021
[Paper] [Official Code]
[](#13) [](#13) [](#13) [](#13) 82 | - **CAM-GAN: Continual Adaptation Modules for Generative Adversarial Networks**
NeurIPS 2021
[Paper] [Official Code]
[](#11) [](#11) [](#11) 83 | - **Contrastive Learning for Cross-domain Correspondence in Few-shot Image Generation**
NeurIPS 2021-W
[Paper]
[](#41) [](#41) [](#41) 84 | - **Instance-Conditioned GAN**
NeurIPS 2021
[Paper] [Official Code]
[](#48) [](#48) [](#48) 85 | - **Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains**
arXiv 2021
[Paper] [Official Code]
[](#6) [](#6) [](#6) [](#6) 86 | - **One-Shot Generative Domain Adaptation**
arXiv 2021
[Paper] [Official Code]
[](#30) [](#30) [](#30) 87 | - **When, Why, and Which Pre-trained GANs are useful?**
ICLR 2022
[Paper] [Official Code]
[](#15) [](#15) [](#15) 88 | - **Domain Gap Control for Single Shot Domain Adaptation for Generative Adversarial Networks**
ICLR 2022
[Paper] [Official Code]
[](#23) [](#23) [](#23) 89 | - **A Closer Look at Few-Shot Image Generation**
CVPR 2022
[Paper]
[](#16) [](#16) [](#16) 90 | - **Few shot generative model adaption via relaxed spatial structural alignment**
CVPR 2022
[Paper] [Official Code]
[](#17) [](#17) [](#17) 91 | - **One Shot Face Stylization**
ECCV 2022
[Paper] [Official Code]
[](#32) [](#32) [](#32) 92 | - **Few-shot Image Generation via Adaptation-Aware Kernel Modulation**
NeurIPS 2022
[Paper] [Official Code]
[](#18) [](#18) [](#18) [](#18) 93 | - **Universal Domain Adaptation for Generative Adversarial Networks**
NeurIPS 2022
[Paper] [Official Code]
[](#24) [](#24) [](#24) [](#24) [](#24) 94 | - **Generalized One-shot Domain Adaptation of Generative Adversarial Networks**
NeurIPS 2022
[Paper] [Official Code]
[](#28) [](#28) [](#28) 95 | - **Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks**
NeurIPS 2022
[Paper] [Official Code]
[](#58) [](#58) [](#58) 96 | - **CLIP-Guided Domain Adaptation of Image Generators**
ACM-TOG 2022
[Paper] [Official Code]
[](#22) [](#22) [](#22) 97 | - **Dynamic Few-shot Adaptation of GANs to Multiple Domains**
SIGGRAPH-Asia 2022
[Paper] [Official Code]
[](#29) [](#29) [](#29) 98 | - **Exploiting Knowledge Distillation for Few-Shot Image Generation**
arXiv 2022
[Paper]
[](#35) [](#35) [](#35) 99 | - **Few-shot Artistic Portraits Generation with Contrastive Transfer Learning**
arXiv 2022
[Paper]
[](#36) [](#36) [](#36) 100 | - **Dynamic Weighted Semantic Correspondence for Few-Shot Image Generative Adaptation**
ACM-MM 2022
[Paper]
[](#50) [](#50) [](#50) 101 | - **Fair Generative Models via Transfer Learning**
AAAI 2023
[Paper] [Official Code]
[](#20) [](#20) [](#20) 102 | - **Progressive Few-Shot Adaptation of Generative Model with Align-Free Spatial Correlation**
AAAI 2023
[Paper] [Official Code]
[](#54) [](#54) [](#54) 103 | - **Few-shot Cross-domain Image Generation via Inference-time Latent-code Learning**
ICLR 2023
[Paper] [Official Code]
[](#37) [](#37) [](#37) 104 | - **Exploring Incompatible Knowledge Transfer in Few-shot Image Generation**
CVPR 2023
[Paper] [Official Code]
[](#21) [](#21) [](#21) 105 | - **Zero-shot Generative Model Adaptation via Image-specific Prompt Learning**
CVPR 2023
[Paper] [Official Code]
[](#38) [](#38) [](#38) 106 | - **Visual Prompt Tuning for Generative Transfer Learning**
CVPR 2023
[Paper] [Official Code]
[](#39) [](#39) [](#39) [](#39) 107 | - **SINE: SINgle Image Editing with Text-to-Image Diffusion Models** <br>
CVPR 2023
[Paper] [Official Code]
[](#42) [](#42) [](#42) 108 | - **DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation**
CVPR 2023
[Paper]
[](#43) [](#43) [](#43) 109 | - **Multi-Concept Customization of Text-to-Image Diffusion**
CVPR 2023
[Paper] [Official Code]
[](#44) [](#44) [](#44) 110 | - **Plug-and-Play Sample-Efficient Fine-Tuning of Text-to-Image Diffusion Models to Learn Any Unseen Style**
CVPR 2023
[Paper]
[](#56) [](#56) [](#56) 111 | - **Target-Aware Generative Augmentations for Single-Shot Adaptation**
ICML 2023
[Paper] [Official Code]
[](#51) [](#51) [](#51) 112 | - **MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation** <br>
ICML 2023
[Paper] [Official Code]
[](#52) [](#52) [](#52) 113 | - **Data-Dependent Domain Transfer GANs for Image Generation with Limited Data**
ACM-TMCCA 2023 <br>
[Paper]
[](#33) [](#33) [](#33) 114 | - **One-Shot Adaptation of GAN in Just One CLIP**
TPAMI 2023
[Paper] [Official Code]
[](#34) [](#34) [](#34) 115 | - **Few-shot Image Generation via Masked Discrimination**
arXiv 2023
[Paper]
[](#45) [](#45) [](#45) 116 | - **Few-shot Image Generation via Latent Space Relocation**
arXiv 2023
[Paper]
[](#46) [](#46) [](#46) 117 | - **Faster Few-Shot Face Image Generation with Features of Specific Group Using Pivotal Tuning Inversion and PCA**
ICAIIC 2023
[Paper]
[](#47) [](#47) [](#47) 118 | - **Few-shot Image Generation with Diffusion Models**
arXiv 2023
[Paper]
[](#49) [](#49) [](#49) 119 | - **Rethinking cross-domain semantic relation for few-shot image generation**
Applied Intelligence 2023 <br>
[Paper] [Official Code]
[](#53) [](#53) [](#53) 120 | - **An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion**
arXiv 2023
[Paper] [Official Code]
[](#55) [](#55) [](#55) 121 | - **BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing**
arXiv 2023
[Paper] [Official Code]
[](#57) [](#57) [](#57)
## Data Augmentation (12 works)

- **Consistency Regularization for Generative Adversarial Networks** <br>
ICLR 2020 <br>
[Paper] [Official Code]
[](#0) [](#0) [](#0) 134 | - **Training generative adversarial networks with limited data**
NeurIPS 2020
[Paper] [Official Code]
[](#2) [](#2) [](#2) [](#2) 135 | - **Differentiable Augmentation for Data-efficient GAN Training**
NeurIPS 2020
[Paper] [Official Code]
[](#4) [](#4) [](#4) [](#4) 136 | - **Image Augmentations for GAN Training**
arXiv 2020
[Paper]
[](#7) [](#7) [](#7) 137 | - **Improved Consistency Regularization for GANs**
AAAI 2021
[Paper]
[](#1) [](#1) [](#1) 138 | - **DeceiveD: Adaptive pseudo augmentation for gan training with limited data**
NeurIPS 2021
[Paper] [Official Code]
[](#8) [](#8) [](#8) 139 | - **Data-efficient gan training beyond (just) augmentations: A lottery ticket perspective**
NeurIPS 2021
[Paper] [Official Code]
[](#9) [](#9) [](#9) 140 | - **Self-Supervised GANs with Label Augmentation**
NeurIPS 2021
[Paper] [Official Code]
[](#10) [](#10) [](#10) 141 | - **On Data Augmentation for GAN Training**
TIP 2021
[Paper] [Official Code]
[](#6) [](#6) [](#6) 142 | - **Adaptive Feature Interpolation for Low-Shot Image Generation**
ECCV 2022
[Paper] [Official Code]
[](#13) [](#13) [](#13) 143 | - **Training GANs with Diffusion**
ICLR 2023
[Paper] [Official Code]
[](#12) [](#12) [](#12) 144 | - **Faster and More Data-Efficient Training of Diffusion Models**
arXiv 2023
[Paper]
[](#11) [](#11) [](#11)
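A unifying idea behind several of the augmentation works above (e.g., DiffAugment and ADA) is to apply the same random, *differentiable* transform to both real and generated images before they reach the discriminator, so the discriminator never sees un-augmented data and gradients still flow back to the generator. Below is a minimal sketch of that idea; the toy two-op policy and the loss lines in the comments are illustrative assumptions, not any specific paper's augmentation pipeline.

```python
import torch

def diff_augment(x: torch.Tensor) -> torch.Tensor:
    """Toy differentiable augmentation for (N, C, H, W) images in [-1, 1]."""
    n = x.size(0)
    # Random brightness shift (differentiable in x).
    x = x + (torch.rand(n, 1, 1, 1, device=x.device) - 0.5) * 0.4
    # Random horizontal flip, chosen independently per sample.
    flip = torch.rand(n, device=x.device) < 0.5
    return torch.where(flip.view(-1, 1, 1, 1), x.flip(-1), x)

# Inside a GAN training step, augment BOTH real and fake batches:
#   loss_d = bce(D(diff_augment(real)), 1) + bce(D(diff_augment(G(z).detach())), 0)
#   loss_g = bce(D(diff_augment(G(z))), 1)   # gradients pass through the augmentation
```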
## Network Architectures (11 works)

- **Towards faster and stabilized gan training for high-fidelity few-shot image synthesis** <br>
ICLR 2021
[Paper] [Official Code]
[](#1) [](#1) [](#1) 156 | - **Data-efficient gan training beyond (just) augmentations: A lottery ticket perspective**
NeurIPS 2021
[Paper] [Official Code]
[](#0) [](#0) [](#0) 157 | - **Projected GANs Converge Faster**
NeurIPS 2021
[Paper] [Official Code]
[](#2) [](#2) [](#2) 158 | - **Prototype Memory and Attention Mechanisms for Few Shot Image Generation**
ICLR 2022
[Paper] [Official Code]
[](#3) [](#3) [](#3) 159 | - **Collapse by conditioning: Training class-conditional GANs with limited data**
ICLR 2022
[Paper] [Official Code]
[](#4) [](#4) [](#4) 160 | - **Ensembling Off-the-shelf Models for GAN Training**
CVPR 2022
[Paper] [Official Code]
[](#5) [](#5) [](#5) 161 | - **Hierarchical Context Aggregation for Few-Shot Generation**
ICML 2022
[Paper] [Official Code]
[](#9) [](#9) [](#9) 162 | - **Improving GANs with A Dynamic Discriminator**
NeurIPS 2022
[Paper] [Official Code]
[](#6) [](#6) [](#6) 163 | - **Data-Efficient GANs Training via Architectural Reconfiguration**
CVPR 2023
[Paper] [Official Code]
[](#8) [](#8) [](#8) 164 | - **Introducing editable and representative attributes for few-shot image generation**
Engineering Applications of AI 2023
[Paper] [Official Code]
[](#7) [](#7) [](#7) 165 | - **Toward a better image synthesis GAN framework for high-fidelity few-shot datasets via NAS and contrastive learning**
Elsevier KBS 2023
[Paper] [Official Code]
[](#10) [](#10) [](#10)
## Multi-Task Objectives (25 works)

- **Image Augmentations for GAN Training** <br>
arXiv 2020
[Paper]
[](#5) [](#5) [](#5) 177 | - **Regularizing generative adversarial networks under limited data**
CVPR 2021
[Paper] [Official Code]
[](#9) [](#9) [](#9) 178 | - **Contrastive Learning for Cross-domain Correspondence in Few-shot Image Generation**
NeurIPS 2021-W
[Paper]
[](#3) [](#3) [](#3) 179 | - **Data-Efficient Instance Generation from Instance Discrimination**
NeurIPS 2021
[Paper] [Official Code]
[](#11) [](#11) [](#11) 180 | - **Diffusion-Decoding Models for Few-Shot Conditional Generation**
NeurIPS 2021
[Paper] [Official Code]
[](#20) [](#20) [](#20) 181 | - **Generative Co-training for Generative Adversarial Networks with Limited Data**
AAAI 2022
[Paper] [Official Code]
[](#13) [](#13) [](#13) 182 | - **Prototype Memory and Attention Mechanisms for Few Shot Image Generation**
ICLR 2022
[Paper] [Official Code]
[](#8) [](#8) [](#8) 183 | - **A Closer Look at Few-Shot Image Generation**
CVPR 2022
[Paper]
[](#0) [](#0) [](#0) 184 | - **Few-shot Image Generation with Mixup-based Distance Learning**
ECCV 2022
[Paper] [Official Code]
[](#15) [](#15) [](#15) 185 | - **Exploring Contrastive Learning for Solving Latent Discontinuity in Data-Efficient GANs**
ECCV 2022
[Paper] [Official Code]
[](#16) [](#16) [](#16) 186 | - **Any-resolution Training for High-resolution Image Synthesis**
ECCV 2022
[Paper] [Official Code]
[](#19) [](#19) [](#19) 187 | - **Discriminator gradIent Gap Regularization for GAN Training with Limited Data**
NeurIPS 2022
[Paper] [Official Code]
[](#10) [](#10) [](#10) 188 | - **Masked Generative Adversarial Networks are Data-Efficient Generation Learners**
NeurIPS 2022
[Paper]
[](#14) [](#14) [](#14) 189 | - **Exploiting Knowledge Distillation for Few-Shot Image Generation**
arXiv 2022
[Paper]
[](#1) [](#1) [](#1) 190 | - **Few-shot Artistic Portraits Generation with Contrastive Transfer Learning**
arXiv 2022
[Paper]
[](#2) [](#2) [](#2) 191 | - **Few-Shot Diffusion Models**
arXiv 2022
[Paper] [Official Code]
[](#21) [](#21) [](#21) 192 | - **Few-shot image generation based on contrastive meta-learning generative adversarial network**
Visual Computer 2022
[Paper]
[](#24) [](#24) [](#24) 193 | - **Training GANs with Diffusion**
ICLR 2023
[Paper] [Official Code]
[](#7) [](#7) [](#7) 194 | - **Data Limited Image Generation via Knowledge Distillation**
CVPR 2023
[Paper]
[](#17) [](#17) [](#17) 195 | - **Adaptive IMLE for Few-shot Pretraining-free Generative Modelling**
ICML 2023
[Paper] [Official Code]
[](#23) [](#23) [](#23) 196 | - **Few-shot Image Generation via Masked Discrimination**
arXiv 2023
[Paper]
[](#4) [](#4) [](#4) 197 | - **Faster and More Data-Efficient Training of Diffusion Models**
arXiv 2023
[Paper]
[](#6) [](#6) [](#6) 198 | - **Towards high diversity and fidelity image synthesis under limited data**
Information Sciences 2023
[Paper] [Official Code]
[](#12) [](#12) [](#12) 199 | - **Regularizing Label-Augmented Generative Adversarial Networks Under Limited Data**
IEEE Access 2023
[Paper]
[](#18) [](#18) [](#18) 200 | - **Dynamically Masked Discriminator for Generative Adversarial Networks**
arXiv 2023
[Paper]
[](#22) [](#22) [](#22)
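A recurring pattern in the multi-task works above is to attach an auxiliary objective to the discriminator so that it extracts a richer training signal from scarce data. As a hedged illustration (the rotation-prediction task, the network sizes, and the 0.5 weight below are assumptions in the spirit of self-supervised GAN training, not a specific paper's recipe), a discriminator can be given a second head that predicts which of four rotations was applied to its input:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxDiscriminator(nn.Module):
    """Discriminator with an adversarial head and a rotation-prediction head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
        self.adv_head = nn.Linear(256, 1)   # real-vs-fake logit
        self.rot_head = nn.Linear(256, 4)   # which of 4 rotations (0/90/180/270)

    def forward(self, x):
        h = self.features(x)
        return self.adv_head(h), self.rot_head(h)

def rotate_batch(x):
    """Rotate each image by a random multiple of 90 degrees; return the labels."""
    k = torch.randint(0, 4, (x.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2)) for img, r in zip(x, k)])
    return rotated, k

D = AuxDiscriminator()
real = torch.randn(8, 3, 32, 32)            # placeholder batch of real images
rot_x, rot_y = rotate_batch(real)
adv_logit, rot_logit = D(rot_x)

# Multi-task discriminator loss: adversarial term + weighted auxiliary term.
loss_d = F.binary_cross_entropy_with_logits(adv_logit, torch.ones(8, 1)) \
         + 0.5 * F.cross_entropy(rot_logit, rot_y)
```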
## Exploiting Frequency Components (4 works)

- **Generative Co-training for Generative Adversarial Networks with Limited Data** <br>
AAAI 2022
[Paper] [Official Code]
[](#3) [](#3) 211 | - **Frequency-Aware GAN for High-Fidelity Few-Shot Image Generation**
ECCV 2022
[Paper] [Official Code]
[](#1) [](#1) 212 | - **Improving GANs with A Dynamic Discriminator**
NeurIPS 2022
[Paper] [Official Code]
[](#0) [](#0) 213 | - **Exploiting Frequency Components for Training GANs under Limited Data**
NeurIPS 2022
[Paper] [Official Code]
[](#2) [](#2)
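The shared observation in these works is that, under limited data, GANs fit the low-frequency content of images more readily than the high-frequency details, so the frequency spectrum is brought into the training signal explicitly. A minimal sketch of one way to do that follows; the log-magnitude spectrum and the L1 spectrum-matching term are illustrative assumptions, not a specific paper's loss.

```python
import torch
import torch.nn.functional as F

def log_magnitude_spectrum(x: torch.Tensor) -> torch.Tensor:
    """Log-magnitude of the 2D FFT of (N, C, H, W) images."""
    return torch.log1p(torch.fft.fft2(x, norm="ortho").abs())

real = torch.randn(8, 3, 32, 32)   # placeholder real batch
fake = torch.randn(8, 3, 32, 32)   # placeholder generated batch

# Auxiliary frequency term: match the average spectra of real and fake
# batches, added to the usual adversarial loss with a small weight.
freq_loss = F.l1_loss(log_magnitude_spectrum(fake).mean(0),
                      log_magnitude_spectrum(real).mean(0))
```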
## Meta-learning (17 works)

- **Data Augmentation Generative Adversarial Networks** <br>
arXiv 2017
[Paper] [Official Code]
[](#2) [](#2) [](#2) 224 | - **Few-shot Generative Modelling with Generative Matching Networks**
AISTATS 2018
[Paper]
[](#3) [](#3) [](#3) 225 | - **Few-shot Image Generation with Reptile**
arXiv 2019
[Paper] [Official Code]
[](#4) [](#4) [](#4) 226 | - **A domain adaptive few shot generation framework**
arXiv 2020
[Paper]
[](#5) [](#5) [](#5) 227 | - **Matching-based Few-shot Image Generation**
ICME 2020
[Paper] [Official Code]
[](#6) [](#6) [](#6) 228 | - **Fusing-and-Filling GAN for Few-shot Image Generation**
ACM-MM 2020
[Paper] [Official Code]
[](#7) [](#7) [](#7) 229 | - **Fusing Local Representations for Few-shot Image Generation**
ICCV 2021
[Paper] [Official Code]
[](#8) [](#8) [](#8) 230 | - **Fast Adaptive Meta-Learning for Few-Shot Image Generation**
TMM 2021
[Paper] [Official Code]
[](#10) [](#10) [](#10) 231 | - **Frequency-Aware GAN for High-Fidelity Few-Shot Image Generation**
ECCV 2022
[Paper] [Official Code]
[](#0) [](#0) [](#0) 232 | - **Towards Diverse Few-shot Image Generation with Sample-Specific Delta**
ECCV 2022
[Paper] [Official Code]
[](#9) [](#9) [](#9) 233 | - **Few-shot image generation based on contrastive meta-learning generative adversarial network**
Visual Computer 2022
[Paper]
[](#1) [](#1) [](#1) 234 | - **Few-shot Image Generation Using Discrete Content Representation**
ACM MM 2022
[Paper]
[](#12) [](#12) [](#12) 235 | - **The Euclidean Space is Evil: Hyperbolic Attribute Editing for Few-shot Image Generation**
arXiv 2022
[Paper]
[](#16) [](#16) [](#16) 236 | - **Where is My Spot? Few-shot Image Generation via Latent Subspace Optimization**
CVPR 2023
[Paper] [Official Code]
[](#13) [](#13) [](#13) 237 | - **Attribute Group Editing for Reliable Few-shot Image Generation**
CVPR 2023
[Paper] [Official Code]
[](#14) [](#14) [](#14) 238 | - **Adaptive multi-scale modulation generative adversarial network for few-shot image generation**
Applied Intelligence 2023
[Paper]
[](#11) [](#11) [](#11) 239 | - **Stable Attribute Group Editing for Reliable Few-shot Image Generation**
arXiv 2023
[Paper] [Official Code]
[](#15) [](#15) [](#15)
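Among the meta-learning works above, the fusion-based family (e.g., Fusing-and-Filling GAN and LoFGAN) shares a simple mechanism: encode the $K$ support images of an unseen class, mix their features with random interpolation coefficients, and decode the mixture into a new sample of that class. A minimal sketch of this forward pass follows; the tiny encoder/decoder and the global-feature mixing are illustrative assumptions, and the listed papers fuse local features in more sophisticated ways.

```python
import torch
import torch.nn as nn

K, X_DIM, H_DIM = 3, 64, 32            # support-set size, data size, feature size

encoder = nn.Sequential(nn.Linear(X_DIM, H_DIM), nn.ReLU())
decoder = nn.Linear(H_DIM, X_DIM)

support = torch.randn(K, X_DIM)        # K samples of one unseen class (placeholder)

# Random convex combination of the K support features -> one fused feature.
w = torch.rand(K)
w = w / w.sum()
fused = (w.unsqueeze(1) * encoder(support)).sum(dim=0)

new_sample = decoder(fused)            # a new sample for the unseen class
# Training on episodes of seen classes (with adversarial + reconstruction
# losses) teaches the fusion to produce class-consistent, diverse outputs.
```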
## Modeling Internal Patch Distribution (8 works)

- **Learning a Generative Model from a Single Natural Image** <br>
ICCV 2019
[Paper] [Official Code]
[](#0) [](#0) [](#0) 253 | - **Learning to generate samples from single images and videos**
CVPR 2021-W
[Paper] [Official Code]
[](#1) [](#1) [](#1) 254 | - **Improved techniques for training single image gans**
WACV 2021
[Paper] [Official Code]
[](#2) [](#2) [](#2) 255 | - **Learning a Diffusion Model from a Single Natural Image**
arXiv 2022
[Paper] [Official Code]
[](#4) [](#4) [](#4) 256 | - **Learning and Blending the Internal Distributions of Single Images by Spatial Image-Identity Conditioning**
arXiv 2022
[Paper]
[](#5) [](#5) [](#5) 257 | - **Training Diffusion Models on a Single Image or Video**
ICML 2023
[Paper] [Official Code]
[](#6) [](#6) [](#6) 258 | - **A Single Image Denoising Diffusion Model**
ICML 2023
[Paper] [Official Code]
[](#7) [](#7) [](#7) 259 | - **Diverse Attribute Transfer for Few-Shot Image Synthesis**
VISIGRAPP 2023
[Paper] [Official Code]
[](#3) [](#3) [](#3)
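What unites this family (SinGAN and its successors) is learning the internal patch statistics of a single image, usually across a multi-scale pyramid so that both global layout and fine texture are captured. Below is a minimal sketch of the data side only: building the pyramid and sampling "real" patches for a patch-level discriminator. The scale factor, patch size, and counts are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def image_pyramid(img, num_scales=4, factor=0.75):
    """Repeatedly downscale a (1, C, H, W) image into a coarse-to-fine pyramid."""
    scales = [img]
    for _ in range(num_scales - 1):
        h, w = scales[-1].shape[-2:]
        scales.append(F.interpolate(scales[-1],
                                    size=(max(8, int(h * factor)), max(8, int(w * factor))),
                                    mode="bilinear", align_corners=False))
    return scales[::-1]   # coarsest first

def random_patches(img, patch=8, n=16):
    """Sample n random patch crops from a (1, C, H, W) image."""
    _, _, h, w = img.shape
    ys = torch.randint(0, h - patch + 1, (n,))
    xs = torch.randint(0, w - patch + 1, (n,))
    return torch.stack([img[0, :, y:y + patch, x:x + patch] for y, x in zip(ys, xs)])

single_image = torch.rand(1, 3, 64, 64)   # the single training image (placeholder)
patches = random_patches(image_pyramid(single_image)[-1])
# A patch-level GAN (or patch diffusion model) per scale then learns this
# internal distribution and can resample it into diverse variants.
```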
## Citation

If you find this repo useful, please cite our paper:
<pre>
@article{abdollahzadeh2023survey,
      title={A Survey on Generative Modeling with Limited Data, Few Shots, and Zero Shot}, 
      author={Milad Abdollahzadeh and Touba Malekzadeh and Christopher T. H. Teo and Keshigeyan Chandrasegaran and Guimeng Liu and Ngai-Man Cheung},
      year={2023},
      eprint={2307.14397},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
</pre>
(The remaining files under /assets/ are small SVG badge images, one per approach, model type, and task; their text labels match the file names listed in the tree above.)