├── README.md
├── XL_converge.png
├── generation
│   ├── DiT
│   │   ├── LICENSE.txt
│   │   ├── diffusion
│   │   │   ├── __init__.py
│   │   │   ├── diffusion_utils.py
│   │   │   ├── gaussian_diffusion.py
│   │   │   ├── respace.py
│   │   │   └── timestep_sampler.py
│   │   ├── distributed.py
│   │   ├── download.py
│   │   ├── environment.yml
│   │   ├── eval_dit.sh
│   │   ├── guided_diffusion
│   │   │   ├── LICENSE
│   │   │   ├── README.md
│   │   │   ├── __init__.py
│   │   │   ├── datasets
│   │   │   │   ├── README.md
│   │   │   │   └── lsun_bedroom.py
│   │   │   ├── evaluations
│   │   │   │   ├── README.md
│   │   │   │   ├── evaluator.py
│   │   │   │   └── requirements.txt
│   │   │   ├── guided_diffusion
│   │   │   │   ├── __init__.py
│   │   │   │   ├── dist_util.py
│   │   │   │   ├── fp16_util.py
│   │   │   │   ├── gaussian_diffusion.py
│   │   │   │   ├── image_datasets.py
│   │   │   │   ├── logger.py
│   │   │   │   ├── losses.py
│   │   │   │   ├── nn.py
│   │   │   │   ├── resample.py
│   │   │   │   ├── respace.py
│   │   │   │   ├── script_util.py
│   │   │   │   ├── train_util.py
│   │   │   │   └── unet.py
│   │   │   ├── model-card.md
│   │   │   ├── scripts
│   │   │   │   ├── classifier_sample.py
│   │   │   │   ├── classifier_train.py
│   │   │   │   ├── image_nll.py
│   │   │   │   ├── image_sample.py
│   │   │   │   ├── image_train.py
│   │   │   │   ├── super_res_sample.py
│   │   │   │   └── super_res_train.py
│   │   │   └── setup.py
│   │   ├── models.py
│   │   ├── sample.py
│   │   ├── sample_ddp.py
│   │   ├── sample_dit.sh
│   │   ├── train.py
│   │   └── train_dit.sh
│   ├── GENERATION.md
│   └── SiT
│       ├── .gitignore
│       ├── LICENSE.txt
│       ├── download.py
│       ├── environment.yml
│       ├── models.py
│       ├── sample.py
│       ├── sample_ddp.py
│       ├── sample_sit.sh
│       ├── train.py
│       ├── train_sit.sh
│       ├── train_utils.py
│       ├── transport
│       │   ├── __init__.py
│       │   ├── integrators.py
│       │   ├── path.py
│       │   ├── transport.py
│       │   └── utils.py
│       ├── visuals
│       │   ├── visual.png
│       │   └── visual_2.png
│       └── wandb_utils.py
├── method.png
└── pretrain
    ├── PRETRAIN.md
    ├── engine_finetune.py
    ├── engine_pretrain.py
    ├── main_finetune.py
    ├── main_generate.py
    ├── main_linprobe.py
    ├── main_pretrain.py
    ├── models_dit.py
    ├── models_mae.py
    ├── models_vit.py
    ├── requirements.txt
    ├── util
    │   ├── crop.py
    │   ├── datasets.py
    │   ├── lars.py
    │   ├── loader.py
    │   ├── lr_decay.py
    │   ├── lr_sched.py
    │   ├── misc.py
    │   └── pos_embed.py
    └── vae.py
/README.md:
--------------------------------------------------------------------------------
1 |
2 | # USP: Unified Self-Supervised Pretraining for Image Generation and Understanding
3 |
4 |
5 | [arXiv:2503.06132](https://arxiv.org/abs/2503.06132)
6 |
7 | This is the official implementation of USP.
8 |
9 | 
10 |
11 | Diffusion models converge much faster with nothing more than weight initialization from our pretraining.
12 | 
13 |
14 |
15 | If you find USP useful in your research or applications, please consider giving us a star ⭐ and citing it with the following BibTeX:
16 | ```
17 | @misc{chu2025uspunifiedselfsupervisedpretraining,
18 | title={USP: Unified Self-Supervised Pretraining for Image Generation and Understanding},
19 | author={Xiangxiang Chu and Renda Li and Yong Wang},
20 | year={2025},
21 | eprint={2503.06132},
22 | archivePrefix={arXiv},
23 | primaryClass={cs.CV},
24 | url={https://arxiv.org/abs/2503.06132},
25 | }
26 |
27 | ```
28 | ### Catalog
29 | - [x] [4.21] Upload image-generation finetuning weights
30 | - [x] Pre-training code
31 | - [x] ImageNet SFT and linear-probe finetuning code
32 |
33 | ## Finetuning Weights
34 | The image-generation finetuning weights are available on [Hugging Face](https://huggingface.co/GD-ML/USP-Image_Generation/tree/main).
35 |
36 | All weights were pretrained for 1600 epochs and then finetuned for 400K steps.
37 |
38 | Using the above weights and following the inference and evaluation procedures outlined in [GENERATION.md](./generation/GENERATION.md), we obtained the following evaluation results:
39 |
40 | | Model Name | Pretrain | Finetuning | FID | IS | sFID |
41 | |------------|----------------|----------------|--------|-------|--------|
42 | | DiT_B-2 | 1600 epochs | 400K steps | 27.22 | 50.47 | 7.60 |
43 | | DiT_L-2 | 1600 epochs | 400K steps | 15.05 | 80.11 | 6.41 |
44 | | DiT_XL-2 | 1600 epochs | 400K steps | 9.64 | 112.93 | 6.30 |
45 | | SiT_B-2 | 1600 epochs | 400K steps | 22.10 | 61.59 | 5.88 |
46 | | SiT_XL-2 | 1600 epochs | 400K steps | 7.35 | 128.50 | 5.00 |
47 |
48 | ## Introduction
49 | Recent studies have highlighted the interplay between diffusion models and representation learning. Intermediate representations from diffusion models can be leveraged for downstream visual tasks, while self-supervised vision models can enhance the convergence and generation quality of diffusion models. However, transferring pretrained weights from vision models to diffusion models is challenging due to input mismatches and the use of latent spaces. To address these challenges, we propose Unified Self-supervised Pretraining (USP), a framework that initializes diffusion models via masked latent modeling in a Variational Autoencoder (VAE) latent space. USP achieves comparable performance in understanding tasks while significantly improving the convergence speed and generation quality of diffusion models.
50 |
51 | [//]: # (## Updates)
52 |
53 | [//]: # ()
54 | [//]: # (Our code is released.)
55 | ## Pretraining
56 | Please refer to [PRETRAIN.md](./pretrain/PRETRAIN.md)
57 | ## Downstream Task
58 | ### Generation
59 | Please refer to [GENERATION.md](./generation/GENERATION.md)
60 |
61 | [//]: # (### Image Generation Under the DiT Framework)
62 |
63 | [//]: # (### Image Generation Under the SiT Framework)
64 |
65 | [//]: # (### Image Understanding)
66 |
67 | ## Acknowledgement
68 |
69 | Our code is based on [MAE](https://github.com/facebookresearch/mae), [DiT](https://github.com/facebookresearch/DiT), [SiT](https://github.com/willisma/SiT), and [VisionLLaMA](https://github.com/Meituan-AutoML/VisionLLaMA). Thanks for their great work.
70 |
71 |
72 |
--------------------------------------------------------------------------------
/XL_converge.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AMAP-ML/USP/6026805bf62f724b6966b45efaff4893e71ce3b2/XL_converge.png
--------------------------------------------------------------------------------
/generation/DiT/diffusion/__init__.py:
--------------------------------------------------------------------------------
1 | # Modified from OpenAI's diffusion repos
2 | # GLIDE: https://github.com/openai/glide-text2im/blob/main/glide_text2im/gaussian_diffusion.py
3 | # ADM: https://github.com/openai/guided-diffusion/blob/main/guided_diffusion
4 | # IDDPM: https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
5 |
6 | from . import gaussian_diffusion as gd
7 | from .respace import SpacedDiffusion, space_timesteps
8 |
9 |
10 | def create_diffusion(
11 | timestep_respacing,
12 | noise_schedule="linear",
13 | use_kl=False,
14 | sigma_small=False,
15 | predict_xstart=False,
16 | learn_sigma=True,
17 | rescale_learned_sigmas=False,
18 | diffusion_steps=1000
19 | ):
20 | betas = gd.get_named_beta_schedule(noise_schedule, diffusion_steps)
21 | if use_kl:
22 | loss_type = gd.LossType.RESCALED_KL
23 | elif rescale_learned_sigmas:
24 | loss_type = gd.LossType.RESCALED_MSE
25 | else:
26 | loss_type = gd.LossType.MSE
27 | if timestep_respacing is None or timestep_respacing == "":
28 | timestep_respacing = [diffusion_steps]
29 | return SpacedDiffusion(
30 | use_timesteps=space_timesteps(diffusion_steps, timestep_respacing),
31 | betas=betas,
32 | model_mean_type=(
33 | gd.ModelMeanType.EPSILON if not predict_xstart else gd.ModelMeanType.START_X
34 | ),
35 | model_var_type=(
36 | (
37 | gd.ModelVarType.FIXED_LARGE
38 | if not sigma_small
39 | else gd.ModelVarType.FIXED_SMALL
40 | )
41 | if not learn_sigma
42 | else gd.ModelVarType.LEARNED_RANGE
43 | ),
44 | loss_type=loss_type
45 | # rescale_timesteps=rescale_timesteps,
46 | )
47 |
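48 |
49 | # Illustrative usage (an editorial addition, not part of the original DiT file):
50 | # respacing "250" compresses the 1000 training steps into 250 sampling steps
51 | # while keeping DiT's defaults (learned sigma, linear beta schedule):
52 | #
53 | #     diffusion = create_diffusion(timestep_respacing="250")
54 | #     assert diffusion.num_timesteps == 250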
--------------------------------------------------------------------------------
/generation/DiT/diffusion/diffusion_utils.py:
--------------------------------------------------------------------------------
1 | # Modified from OpenAI's diffusion repos
2 | # GLIDE: https://github.com/openai/glide-text2im/blob/main/glide_text2im/gaussian_diffusion.py
3 | # ADM: https://github.com/openai/guided-diffusion/blob/main/guided_diffusion
4 | # IDDPM: https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
5 |
6 | import torch as th
7 | import numpy as np
8 |
9 |
10 | def normal_kl(mean1, logvar1, mean2, logvar2):
11 | """
12 | Compute the KL divergence between two gaussians.
13 | Shapes are automatically broadcasted, so batches can be compared to
14 | scalars, among other use cases.
15 | """
16 | tensor = None
17 | for obj in (mean1, logvar1, mean2, logvar2):
18 | if isinstance(obj, th.Tensor):
19 | tensor = obj
20 | break
21 | assert tensor is not None, "at least one argument must be a Tensor"
22 |
23 | # Force variances to be Tensors. Broadcasting helps convert scalars to
24 | # Tensors, but it does not work for th.exp().
25 | logvar1, logvar2 = [
26 | x if isinstance(x, th.Tensor) else th.tensor(x).to(tensor)
27 | for x in (logvar1, logvar2)
28 | ]
29 |
30 | return 0.5 * (
31 | -1.0
32 | + logvar2
33 | - logvar1
34 | + th.exp(logvar1 - logvar2)
35 | + ((mean1 - mean2) ** 2) * th.exp(-logvar2)
36 | )
37 |
38 |
39 | def approx_standard_normal_cdf(x):
40 | """
41 | A fast approximation of the cumulative distribution function of the
42 | standard normal.
43 | """
44 | return 0.5 * (1.0 + th.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * th.pow(x, 3))))
45 |
46 |
47 | def continuous_gaussian_log_likelihood(x, *, means, log_scales):
48 | """
49 | Compute the log-likelihood of a continuous Gaussian distribution.
50 | :param x: the targets
51 | :param means: the Gaussian mean Tensor.
52 | :param log_scales: the Gaussian log stddev Tensor.
53 | :return: a tensor like x of log probabilities (in nats).
54 | """
55 | centered_x = x - means
56 | inv_stdv = th.exp(-log_scales)
57 | normalized_x = centered_x * inv_stdv
58 | log_probs = th.distributions.Normal(th.zeros_like(x), th.ones_like(x)).log_prob(normalized_x)
59 | return log_probs
60 |
61 |
62 | def discretized_gaussian_log_likelihood(x, *, means, log_scales):
63 | """
64 | Compute the log-likelihood of a Gaussian distribution discretizing to a
65 | given image.
66 | :param x: the target images. It is assumed that these were uint8 values,
67 | rescaled to the range [-1, 1].
68 | :param means: the Gaussian mean Tensor.
69 | :param log_scales: the Gaussian log stddev Tensor.
70 | :return: a tensor like x of log probabilities (in nats).
71 | """
72 | assert x.shape == means.shape == log_scales.shape
73 | centered_x = x - means
74 | inv_stdv = th.exp(-log_scales)
75 | plus_in = inv_stdv * (centered_x + 1.0 / 255.0)
76 | cdf_plus = approx_standard_normal_cdf(plus_in)
77 | min_in = inv_stdv * (centered_x - 1.0 / 255.0)
78 | cdf_min = approx_standard_normal_cdf(min_in)
79 | log_cdf_plus = th.log(cdf_plus.clamp(min=1e-12))
80 | log_one_minus_cdf_min = th.log((1.0 - cdf_min).clamp(min=1e-12))
81 | cdf_delta = cdf_plus - cdf_min
82 | log_probs = th.where(
83 | x < -0.999,
84 | log_cdf_plus,
85 | th.where(x > 0.999, log_one_minus_cdf_min, th.log(cdf_delta.clamp(min=1e-12))),
86 | )
87 | assert log_probs.shape == x.shape
88 | return log_probs
89 |
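90 |
91 | if __name__ == "__main__":
92 |     # Sanity check (an editorial addition, not from the original file): the KL
93 |     # divergence between two identical Gaussians is exactly zero.
94 |     kl = normal_kl(th.zeros(4), th.zeros(4), 0.0, 0.0)
95 |     assert th.allclose(kl, th.zeros(4))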
--------------------------------------------------------------------------------
/generation/DiT/diffusion/respace.py:
--------------------------------------------------------------------------------
1 | # Modified from OpenAI's diffusion repos
2 | # GLIDE: https://github.com/openai/glide-text2im/blob/main/glide_text2im/gaussian_diffusion.py
3 | # ADM: https://github.com/openai/guided-diffusion/blob/main/guided_diffusion
4 | # IDDPM: https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
5 |
6 | import numpy as np
7 | import torch as th
8 |
9 | from .gaussian_diffusion import GaussianDiffusion
10 |
11 |
12 | def space_timesteps(num_timesteps, section_counts):
13 | """
14 | Create a list of timesteps to use from an original diffusion process,
15 | given the number of timesteps we want to take from equally-sized portions
16 | of the original process.
17 | For example, if there's 300 timesteps and the section counts are [10,15,20]
18 | then the first 100 timesteps are strided to be 10 timesteps, the second 100
19 | are strided to be 15 timesteps, and the final 100 are strided to be 20.
20 | If the stride is a string starting with "ddim", then the fixed striding
21 | from the DDIM paper is used, and only one section is allowed.
22 | :param num_timesteps: the number of diffusion steps in the original
23 | process to divide up.
24 | :param section_counts: either a list of numbers, or a string containing
25 | comma-separated numbers, indicating the step count
26 | per section. As a special case, use "ddimN" where N
27 | is a number of steps to use the striding from the
28 | DDIM paper.
29 | :return: a set of diffusion steps from the original process to use.
30 | """
31 | if isinstance(section_counts, str):
32 | if section_counts.startswith("ddim"):
33 | desired_count = int(section_counts[len("ddim") :])
34 | for i in range(1, num_timesteps):
35 | if len(range(0, num_timesteps, i)) == desired_count:
36 | return set(range(0, num_timesteps, i))
37 | raise ValueError(
38 |                 f"cannot create exactly {desired_count} steps with an integer stride"
39 | )
40 | section_counts = [int(x) for x in section_counts.split(",")]
41 | size_per = num_timesteps // len(section_counts)
42 | extra = num_timesteps % len(section_counts)
43 | start_idx = 0
44 | all_steps = []
45 | for i, section_count in enumerate(section_counts):
46 | size = size_per + (1 if i < extra else 0)
47 | if size < section_count:
48 | raise ValueError(
49 | f"cannot divide section of {size} steps into {section_count}"
50 | )
51 | if section_count <= 1:
52 | frac_stride = 1
53 | else:
54 | frac_stride = (size - 1) / (section_count - 1)
55 | cur_idx = 0.0
56 | taken_steps = []
57 | for _ in range(section_count):
58 | taken_steps.append(start_idx + round(cur_idx))
59 | cur_idx += frac_stride
60 | all_steps += taken_steps
61 | start_idx += size
62 | return set(all_steps)
63 |
64 |
65 | class SpacedDiffusion(GaussianDiffusion):
66 | """
67 | A diffusion process which can skip steps in a base diffusion process.
68 | :param use_timesteps: a collection (sequence or set) of timesteps from the
69 | original diffusion process to retain.
70 | :param kwargs: the kwargs to create the base diffusion process.
71 | """
72 |
73 | def __init__(self, use_timesteps, **kwargs):
74 | self.use_timesteps = set(use_timesteps)
75 | self.timestep_map = []
76 | self.original_num_steps = len(kwargs["betas"])
77 |
78 | base_diffusion = GaussianDiffusion(**kwargs) # pylint: disable=missing-kwoa
79 | last_alpha_cumprod = 1.0
80 | new_betas = []
81 | for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod):
82 | if i in self.use_timesteps:
83 | new_betas.append(1 - alpha_cumprod / last_alpha_cumprod)
84 | last_alpha_cumprod = alpha_cumprod
85 | self.timestep_map.append(i)
86 | kwargs["betas"] = np.array(new_betas)
87 | super().__init__(**kwargs)
88 |
89 | def p_mean_variance(
90 | self, model, *args, **kwargs
91 | ): # pylint: disable=signature-differs
92 | return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
93 |
94 | def training_losses(
95 | self, model, *args, **kwargs
96 | ): # pylint: disable=signature-differs
97 | return super().training_losses(self._wrap_model(model), *args, **kwargs)
98 |
99 | def condition_mean(self, cond_fn, *args, **kwargs):
100 | return super().condition_mean(self._wrap_model(cond_fn), *args, **kwargs)
101 |
102 | def condition_score(self, cond_fn, *args, **kwargs):
103 | return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs)
104 |
105 | def _wrap_model(self, model):
106 | if isinstance(model, _WrappedModel):
107 | return model
108 | return _WrappedModel(
109 | model, self.timestep_map, self.original_num_steps
110 | )
111 |
112 | def _scale_timesteps(self, t):
113 | # Scaling is done by the wrapped model.
114 | return t
115 |
116 |
117 | class _WrappedModel:
118 | def __init__(self, model, timestep_map, original_num_steps):
119 | self.model = model
120 | self.timestep_map = timestep_map
121 | # self.rescale_timesteps = rescale_timesteps
122 | self.original_num_steps = original_num_steps
123 |
124 | def __call__(self, x, ts, **kwargs):
125 | map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype)
126 | new_ts = map_tensor[ts]
127 | # if self.rescale_timesteps:
128 | # new_ts = new_ts.float() * (1000.0 / self.original_num_steps)
129 | return self.model(x, new_ts, **kwargs)
130 |
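131 |
132 | # Editorial example (not in the original file): "ddim25" selects 25 evenly
133 | # strided steps out of 1000, i.e. {0, 40, 80, ..., 960}, while a plain "250"
134 | # keeps a single section of 250 evenly spread steps:
135 | #
136 | #     assert space_timesteps(1000, "ddim25") == set(range(0, 1000, 40))
137 | #     assert len(space_timesteps(1000, "250")) == 250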
--------------------------------------------------------------------------------
/generation/DiT/diffusion/timestep_sampler.py:
--------------------------------------------------------------------------------
1 | # Modified from OpenAI's diffusion repos
2 | # GLIDE: https://github.com/openai/glide-text2im/blob/main/glide_text2im/gaussian_diffusion.py
3 | # ADM: https://github.com/openai/guided-diffusion/blob/main/guided_diffusion
4 | # IDDPM: https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
5 |
6 | from abc import ABC, abstractmethod
7 |
8 | import numpy as np
9 | import torch as th
10 | import torch.distributed as dist
11 |
12 |
13 | def create_named_schedule_sampler(name, diffusion):
14 | """
15 | Create a ScheduleSampler from a library of pre-defined samplers.
16 | :param name: the name of the sampler.
17 | :param diffusion: the diffusion object to sample for.
18 | """
19 | if name == "uniform":
20 | return UniformSampler(diffusion)
21 | elif name == "loss-second-moment":
22 | return LossSecondMomentResampler(diffusion)
23 | else:
24 | raise NotImplementedError(f"unknown schedule sampler: {name}")
25 |
26 |
27 | class ScheduleSampler(ABC):
28 | """
29 | A distribution over timesteps in the diffusion process, intended to reduce
30 | variance of the objective.
31 | By default, samplers perform unbiased importance sampling, in which the
32 | objective's mean is unchanged.
33 | However, subclasses may override sample() to change how the resampled
34 | terms are reweighted, allowing for actual changes in the objective.
35 | """
36 |
37 | @abstractmethod
38 | def weights(self):
39 | """
40 | Get a numpy array of weights, one per diffusion step.
41 | The weights needn't be normalized, but must be positive.
42 | """
43 |
44 | def sample(self, batch_size, device):
45 | """
46 | Importance-sample timesteps for a batch.
47 | :param batch_size: the number of timesteps.
48 | :param device: the torch device to save to.
49 | :return: a tuple (timesteps, weights):
50 | - timesteps: a tensor of timestep indices.
51 | - weights: a tensor of weights to scale the resulting losses.
52 | """
53 | w = self.weights()
54 | p = w / np.sum(w)
55 | indices_np = np.random.choice(len(p), size=(batch_size,), p=p)
56 | indices = th.from_numpy(indices_np).long().to(device)
57 | weights_np = 1 / (len(p) * p[indices_np])
58 | weights = th.from_numpy(weights_np).float().to(device)
59 | return indices, weights
60 |
61 |
62 | class UniformSampler(ScheduleSampler):
63 | def __init__(self, diffusion):
64 | self.diffusion = diffusion
65 | self._weights = np.ones([diffusion.num_timesteps])
66 |
67 | def weights(self):
68 | return self._weights
69 |
70 |
71 | class LossAwareSampler(ScheduleSampler):
72 | def update_with_local_losses(self, local_ts, local_losses):
73 | """
74 | Update the reweighting using losses from a model.
75 | Call this method from each rank with a batch of timesteps and the
76 | corresponding losses for each of those timesteps.
77 | This method will perform synchronization to make sure all of the ranks
78 | maintain the exact same reweighting.
79 | :param local_ts: an integer Tensor of timesteps.
80 | :param local_losses: a 1D Tensor of losses.
81 | """
82 | batch_sizes = [
83 | th.tensor([0], dtype=th.int32, device=local_ts.device)
84 | for _ in range(dist.get_world_size())
85 | ]
86 | dist.all_gather(
87 | batch_sizes,
88 | th.tensor([len(local_ts)], dtype=th.int32, device=local_ts.device),
89 | )
90 |
91 | # Pad all_gather batches to be the maximum batch size.
92 | batch_sizes = [x.item() for x in batch_sizes]
93 | max_bs = max(batch_sizes)
94 |
95 |         timestep_batches = [th.zeros(max_bs).to(local_ts) for _ in batch_sizes]
96 |         loss_batches = [th.zeros(max_bs).to(local_losses) for _ in batch_sizes]
97 | dist.all_gather(timestep_batches, local_ts)
98 | dist.all_gather(loss_batches, local_losses)
99 | timesteps = [
100 | x.item() for y, bs in zip(timestep_batches, batch_sizes) for x in y[:bs]
101 | ]
102 | losses = [x.item() for y, bs in zip(loss_batches, batch_sizes) for x in y[:bs]]
103 | self.update_with_all_losses(timesteps, losses)
104 |
105 | @abstractmethod
106 | def update_with_all_losses(self, ts, losses):
107 | """
108 | Update the reweighting using losses from a model.
109 | Sub-classes should override this method to update the reweighting
110 | using losses from the model.
111 | This method directly updates the reweighting without synchronizing
112 | between workers. It is called by update_with_local_losses from all
113 | ranks with identical arguments. Thus, it should have deterministic
114 | behavior to maintain state across workers.
115 | :param ts: a list of int timesteps.
116 | :param losses: a list of float losses, one per timestep.
117 | """
118 |
119 |
120 | class LossSecondMomentResampler(LossAwareSampler):
121 | def __init__(self, diffusion, history_per_term=10, uniform_prob=0.001):
122 | self.diffusion = diffusion
123 | self.history_per_term = history_per_term
124 | self.uniform_prob = uniform_prob
125 | self._loss_history = np.zeros(
126 | [diffusion.num_timesteps, history_per_term], dtype=np.float64
127 | )
128 |         self._loss_counts = np.zeros([diffusion.num_timesteps], dtype=np.int64)
129 |
130 | def weights(self):
131 | if not self._warmed_up():
132 | return np.ones([self.diffusion.num_timesteps], dtype=np.float64)
133 | weights = np.sqrt(np.mean(self._loss_history ** 2, axis=-1))
134 | weights /= np.sum(weights)
135 | weights *= 1 - self.uniform_prob
136 | weights += self.uniform_prob / len(weights)
137 | return weights
138 |
139 | def update_with_all_losses(self, ts, losses):
140 | for t, loss in zip(ts, losses):
141 | if self._loss_counts[t] == self.history_per_term:
142 | # Shift out the oldest loss term.
143 | self._loss_history[t, :-1] = self._loss_history[t, 1:]
144 | self._loss_history[t, -1] = loss
145 | else:
146 | self._loss_history[t, self._loss_counts[t]] = loss
147 | self._loss_counts[t] += 1
148 |
149 | def _warmed_up(self):
150 | return (self._loss_counts == self.history_per_term).all()
151 |
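152 |
153 | if __name__ == "__main__":
154 |     # Editorial example (not in the original file): a UniformSampler draws every
155 |     # timestep with equal probability and returns unit loss weights, so the
156 |     # training objective is left unchanged.
157 |     class _DummyDiffusion:
158 |         num_timesteps = 1000
159 |
160 |     sampler = create_named_schedule_sampler("uniform", _DummyDiffusion())
161 |     ts, weights = sampler.sample(batch_size=8, device="cpu")
162 |     assert ts.shape == (8,) and th.allclose(weights, th.ones(8))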
--------------------------------------------------------------------------------
/generation/DiT/distributed.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | import torch
4 | import torch.distributed as dist
5 | from datetime import timedelta
6 |
7 | try:
8 | import horovod.torch as hvd
9 | except ImportError:
10 | hvd = None
11 |
12 |
13 | def is_global_master(args):
14 | return args.rank == 0
15 |
16 |
17 | def is_local_master(args):
18 | return args.local_rank == 0
19 |
20 |
21 | def is_master(args, local=False):
22 | return is_local_master(args) if local else is_global_master(args)
23 |
24 |
25 | def is_using_horovod():
26 | # NOTE w/ horovod run, OMPI vars should be set, but w/ SLURM PMI vars will be set
27 | # Differentiating between horovod and DDP use via SLURM may not be possible, so horovod arg still required...
28 | ompi_vars = ["OMPI_COMM_WORLD_RANK", "OMPI_COMM_WORLD_SIZE"]
29 | pmi_vars = ["PMI_RANK", "PMI_SIZE"]
30 | if all([var in os.environ for var in ompi_vars]) or all([var in os.environ for var in pmi_vars]):
31 | return True
32 | else:
33 | return False
34 |
35 |
36 | def is_using_distributed():
37 | if 'WORLD_SIZE' in os.environ:
38 | return int(os.environ['WORLD_SIZE']) > 1
39 | if 'SLURM_NTASKS' in os.environ:
40 | return int(os.environ['SLURM_NTASKS']) > 1
41 | return False
42 |
43 |
44 | def world_info_from_env():
45 | local_rank = 0
46 | for v in ('LOCAL_RANK', 'MPI_LOCALRANKID', 'SLURM_LOCALID', 'OMPI_COMM_WORLD_LOCAL_RANK'):
47 | if v in os.environ:
48 | local_rank = int(os.environ[v])
49 | break
50 | global_rank = 0
51 | for v in ('RANK', 'PMI_RANK', 'SLURM_PROCID', 'OMPI_COMM_WORLD_RANK'):
52 | if v in os.environ:
53 | global_rank = int(os.environ[v])
54 | break
55 | world_size = 1
56 | for v in ('WORLD_SIZE', 'PMI_SIZE', 'SLURM_NTASKS', 'OMPI_COMM_WORLD_SIZE'):
57 | if v in os.environ:
58 | world_size = int(os.environ[v])
59 | break
60 |
61 | return local_rank, global_rank, world_size
62 |
63 |
64 | def init_distributed_device(args):
65 | # Distributed training = training on more than one GPU.
66 | # Works in both single and multi-node scenarios.
67 | args.distributed = False
68 | args.world_size = 1
69 | args.rank = 0 # global rank
70 | args.local_rank = 0
71 | if args.horovod:
72 | assert hvd is not None, "Horovod is not installed"
73 | hvd.init()
74 | args.local_rank = int(hvd.local_rank())
75 | args.rank = hvd.rank()
76 | args.world_size = hvd.size()
77 | args.distributed = True
78 | os.environ['LOCAL_RANK'] = str(args.local_rank)
79 | os.environ['RANK'] = str(args.rank)
80 | os.environ['WORLD_SIZE'] = str(args.world_size)
81 | elif is_using_distributed():
82 | if 'SLURM_PROCID' in os.environ:
83 | # DDP via SLURM
84 | args.local_rank, args.rank, args.world_size = world_info_from_env()
85 | # SLURM var -> torch.distributed vars in case needed
86 | os.environ['LOCAL_RANK'] = str(args.local_rank)
87 | os.environ['RANK'] = str(args.rank)
88 | os.environ['WORLD_SIZE'] = str(args.world_size)
89 | torch.distributed.init_process_group(
90 | backend=args.dist_backend,
91 | init_method=args.dist_url,
92 | world_size=args.world_size,
93 | timeout=timedelta(seconds=1800),
94 | rank=args.rank,
95 | )
96 | else:
97 | # DDP via torchrun, torch.distributed.launch
98 | args.local_rank, _, _ = world_info_from_env()
99 | torch.distributed.init_process_group(
100 | backend=args.dist_backend,
101 | init_method=args.dist_url,
102 | timeout=timedelta(seconds=1800)
103 | )
104 | args.world_size = torch.distributed.get_world_size()
105 | args.rank = torch.distributed.get_rank()
106 | args.distributed = True
107 |
108 | if torch.cuda.is_available():
109 | if args.distributed and not args.no_set_device_rank:
110 | device = 'cuda:%d' % args.local_rank
111 | else:
112 | device = 'cuda:0'
113 | torch.cuda.set_device(device)
114 | else:
115 | device = 'cpu'
116 | args.device = device
117 | device = torch.device(device)
118 | return device
119 |
120 |
121 | def broadcast_object(args, obj, src=0):
122 | # broadcast a pickle-able python object from rank-0 to all ranks
123 | if args.horovod:
124 | return hvd.broadcast_object(obj, root_rank=src)
125 | else:
126 | if args.rank == src:
127 | objects = [obj]
128 | else:
129 | objects = [None]
130 | dist.broadcast_object_list(objects, src=src)
131 | return objects[0]
132 |
133 |
134 | def all_gather_object(args, obj, dst=0):
135 | # gather a pickle-able python object across all ranks
136 | if args.horovod:
137 | return hvd.allgather_object(obj)
138 | else:
139 | objects = [None for _ in range(args.world_size)]
140 | dist.all_gather_object(objects, obj)
141 | return objects
142 |
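143 |
144 | if __name__ == "__main__":
145 |     # Editorial example (not in the original file): the attributes below are the
146 |     # argparse options init_distributed_device() reads; under torchrun the
147 |     # rank/world-size fields are filled in from the environment.
148 |     from types import SimpleNamespace
149 |
150 |     args = SimpleNamespace(horovod=False, dist_backend="nccl",
151 |                            dist_url="env://", no_set_device_rank=False)
152 |     device = init_distributed_device(args)
153 |     print(f"rank {args.rank}/{args.world_size} on {device}")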
--------------------------------------------------------------------------------
/generation/DiT/download.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) Meta Platforms, Inc. and affiliates.
2 | # All rights reserved.
3 |
4 | # This source code is licensed under the license found in the
5 | # LICENSE file in the root directory of this source tree.
6 |
7 | """
8 | Functions for downloading pre-trained DiT models
9 | """
10 | from torchvision.datasets.utils import download_url
11 | import torch
12 | import os
13 |
14 |
15 | pretrained_models = {'DiT-XL-2-512x512.pt', 'DiT-XL-2-256x256.pt'}
16 |
17 |
18 | def find_model(model_name):
19 | """
20 | Finds a pre-trained DiT model, downloading it if necessary. Alternatively, loads a model from a local path.
21 | """
22 | if model_name in pretrained_models: # Find/download our pre-trained DiT checkpoints
23 | return download_model(model_name)
24 | else: # Load a custom DiT checkpoint:
25 | assert os.path.isfile(model_name), f'Could not find DiT checkpoint at {model_name}'
26 | checkpoint = torch.load(model_name, map_location=lambda storage, loc: storage)
27 | if "ema" in checkpoint: # supports checkpoints from train.py
28 | checkpoint = checkpoint["ema"]
29 | return checkpoint
30 |
31 |
32 | def download_model(model_name):
33 | """
34 | Downloads a pre-trained DiT model from the web.
35 | """
36 | assert model_name in pretrained_models
37 | local_path = f'pretrained_models/{model_name}'
38 | if not os.path.isfile(local_path):
39 | os.makedirs('pretrained_models', exist_ok=True)
40 | web_path = f'https://dl.fbaipublicfiles.com/DiT/models/{model_name}'
41 | download_url(web_path, 'pretrained_models')
42 | model = torch.load(local_path, map_location=lambda storage, loc: storage)
43 | return model
44 |
45 |
46 | if __name__ == "__main__":
47 | # Download all DiT checkpoints
48 | for model in pretrained_models:
49 | download_model(model)
50 | print('Done.')
51 |
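52 |
53 | # Editorial note (not part of the original file): find_model() also accepts a
54 | # local path, so a finetuned checkpoint can be loaded the same way, e.g.
55 | # (illustrative path):
56 | #
57 | #     state_dict = find_model("checkpoints/dit_xl_2_usp_400k.pt")
58 | #     model.load_state_dict(state_dict)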
--------------------------------------------------------------------------------
/generation/DiT/environment.yml:
--------------------------------------------------------------------------------
1 | name: DiT
2 | channels:
3 | - pytorch
4 | - nvidia
5 | dependencies:
6 | - python >= 3.8
7 | - pytorch >= 1.13
8 | - torchvision
9 | - pytorch-cuda=11.7
10 | - pip:
11 | - timm
12 | - diffusers
13 | - accelerate
14 |
--------------------------------------------------------------------------------
/generation/DiT/eval_dit.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | python guided_diffusion/evaluations/evaluator.py \
3 | PATH/TO/YOUR/VIRTUAL_imagenet256_labeled.npz \
4 | PATH/TO/YOUR/SAMPLES_NPZ_FILE
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2021 OpenAI
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/README.md:
--------------------------------------------------------------------------------
1 | # guided-diffusion
2 |
3 | This is the codebase for [Diffusion Models Beat GANs on Image Synthesis](http://arxiv.org/abs/2105.05233).
4 |
5 | This repository is based on [openai/improved-diffusion](https://github.com/openai/improved-diffusion), with modifications for classifier conditioning and architecture improvements.
6 |
7 | # Download pre-trained models
8 |
9 | We have released checkpoints for the main models in the paper. Before using these models, please review the corresponding [model card](model-card.md) to understand the intended use and limitations of these models.
10 |
11 | Here are the download links for each model checkpoint:
12 |
13 | * 64x64 classifier: [64x64_classifier.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/64x64_classifier.pt)
14 | * 64x64 diffusion: [64x64_diffusion.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/64x64_diffusion.pt)
15 | * 128x128 classifier: [128x128_classifier.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/128x128_classifier.pt)
16 | * 128x128 diffusion: [128x128_diffusion.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/128x128_diffusion.pt)
17 | * 256x256 classifier: [256x256_classifier.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_classifier.pt)
18 | * 256x256 diffusion: [256x256_diffusion.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion.pt)
19 | * 256x256 diffusion (not class conditional): [256x256_diffusion_uncond.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt)
20 | * 512x512 classifier: [512x512_classifier.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/512x512_classifier.pt)
21 | * 512x512 diffusion: [512x512_diffusion.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/512x512_diffusion.pt)
22 | * 64x64 -> 256x256 upsampler: [64_256_upsampler.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/64_256_upsampler.pt)
23 | * 128x128 -> 512x512 upsampler: [128_512_upsampler.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/128_512_upsampler.pt)
24 | * LSUN bedroom: [lsun_bedroom.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/lsun_bedroom.pt)
25 | * LSUN cat: [lsun_cat.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/lsun_cat.pt)
26 | * LSUN horse: [lsun_horse.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/lsun_horse.pt)
27 | * LSUN horse (no dropout): [lsun_horse_nodropout.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/lsun_horse_nodropout.pt)
28 |
29 | # Sampling from pre-trained models
30 |
31 | To sample from these models, you can use the `classifier_sample.py`, `image_sample.py`, and `super_res_sample.py` scripts.
32 | Here, we provide flags for sampling from all of these models.
33 | We assume that you have downloaded the relevant model checkpoints into a folder called `models/`.
34 |
35 | For these examples, we will generate 100 samples with batch size 4. Feel free to change these values.
36 |
37 | ```
38 | SAMPLE_FLAGS="--batch_size 4 --num_samples 100 --timestep_respacing 250"
39 | ```
40 |
41 | ## Classifier guidance
42 |
43 | Note for these sampling runs that you can set `--classifier_scale 0` to sample from the base diffusion model.
44 | You may also use the `image_sample.py` script instead of `classifier_sample.py` in that case.
45 |
46 | * 64x64 model:
47 |
48 | ```
49 | MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --dropout 0.1 --image_size 64 --learn_sigma True --noise_schedule cosine --num_channels 192 --num_head_channels 64 --num_res_blocks 3 --resblock_updown True --use_new_attention_order True --use_fp16 True --use_scale_shift_norm True"
50 | python classifier_sample.py $MODEL_FLAGS --classifier_scale 1.0 --classifier_path models/64x64_classifier.pt --classifier_depth 4 --model_path models/64x64_diffusion.pt $SAMPLE_FLAGS
51 | ```
52 |
53 | * 128x128 model:
54 |
55 | ```
56 | MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --image_size 128 --learn_sigma True --noise_schedule linear --num_channels 256 --num_heads 4 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
57 | python classifier_sample.py $MODEL_FLAGS --classifier_scale 0.5 --classifier_path models/128x128_classifier.pt --model_path models/128x128_diffusion.pt $SAMPLE_FLAGS
58 | ```
59 |
60 | * 256x256 model:
61 |
62 | ```
63 | MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
64 | python classifier_sample.py $MODEL_FLAGS --classifier_scale 1.0 --classifier_path models/256x256_classifier.pt --model_path models/256x256_diffusion.pt $SAMPLE_FLAGS
65 | ```
66 |
67 | * 256x256 model (unconditional):
68 |
69 | ```
70 | MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
71 | python classifier_sample.py $MODEL_FLAGS --classifier_scale 10.0 --classifier_path models/256x256_classifier.pt --model_path models/256x256_diffusion_uncond.pt $SAMPLE_FLAGS
72 | ```
73 |
74 | * 512x512 model:
75 |
76 | ```
77 | MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --image_size 512 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 False --use_scale_shift_norm True"
78 | python classifier_sample.py $MODEL_FLAGS --classifier_scale 4.0 --classifier_path models/512x512_classifier.pt --model_path models/512x512_diffusion.pt $SAMPLE_FLAGS
79 | ```
80 |
81 | ## Upsampling
82 |
83 | For these runs, we assume you have some base samples in a file `64_samples.npz` or `128_samples.npz` for the two respective models.
84 |
85 | * 64 -> 256:
86 |
87 | ```
88 | MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --large_size 256 --small_size 64 --learn_sigma True --noise_schedule linear --num_channels 192 --num_heads 4 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
89 | python super_res_sample.py $MODEL_FLAGS --model_path models/64_256_upsampler.pt --base_samples 64_samples.npz $SAMPLE_FLAGS
90 | ```
91 |
92 | * 128 -> 512:
93 |
94 | ```
95 | MODEL_FLAGS="--attention_resolutions 32,16 --class_cond True --diffusion_steps 1000 --large_size 512 --small_size 128 --learn_sigma True --noise_schedule linear --num_channels 192 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
96 | python super_res_sample.py $MODEL_FLAGS --model_path models/128_512_upsampler.pt $SAMPLE_FLAGS --base_samples 128_samples.npz
97 | ```
98 |
99 | ## LSUN models
100 |
101 | These models are class-unconditional and correspond to a single LSUN class. Here, we show how to sample from `lsun_bedroom.pt`, but the other two LSUN checkpoints should work as well:
102 |
103 | ```
104 | MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --dropout 0.1 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
105 | python image_sample.py $MODEL_FLAGS --model_path models/lsun_bedroom.pt $SAMPLE_FLAGS
106 | ```
107 |
108 | You can sample from `lsun_horse_nodropout.pt` by changing the dropout flag:
109 |
110 | ```
111 | MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --dropout 0.0 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
112 | python image_sample.py $MODEL_FLAGS --model_path models/lsun_horse_nodropout.pt $SAMPLE_FLAGS
113 | ```
114 |
115 | Note that for these models, the best samples result from using 1000 timesteps:
116 |
117 | ```
118 | SAMPLE_FLAGS="--batch_size 4 --num_samples 100 --timestep_respacing 1000"
119 | ```
120 |
121 | # Results
122 |
123 | This table summarizes our ImageNet results for pure guided diffusion models:
124 |
125 | | Dataset | FID | Precision | Recall |
126 | |------------------|------|-----------|--------|
127 | | ImageNet 64x64 | 2.07 | 0.74 | 0.63 |
128 | | ImageNet 128x128 | 2.97 | 0.78 | 0.59 |
129 | | ImageNet 256x256 | 4.59 | 0.82 | 0.52 |
130 | | ImageNet 512x512 | 7.72 | 0.87 | 0.42 |
131 |
132 | This table shows the best results for high resolutions when using upsampling and guidance together:
133 |
134 | | Dataset | FID | Precision | Recall |
135 | |------------------|------|-----------|--------|
136 | | ImageNet 256x256 | 3.94 | 0.83 | 0.53 |
137 | | ImageNet 512x512 | 3.85 | 0.84 | 0.53 |
138 |
139 | Finally, here are the unguided results on individual LSUN classes:
140 |
141 | | Dataset | FID | Precision | Recall |
142 | |--------------|------|-----------|--------|
143 | | LSUN Bedroom | 1.90 | 0.66 | 0.51 |
144 | | LSUN Cat | 5.57 | 0.63 | 0.52 |
145 | | LSUN Horse | 2.57 | 0.71 | 0.55 |
146 |
147 | # Training models
148 |
149 | Training diffusion models is described in the [parent repository](https://github.com/openai/improved-diffusion). Training a classifier is similar. We assume you have put training hyperparameters into a `TRAIN_FLAGS` variable, and classifier hyperparameters into a `CLASSIFIER_FLAGS` variable. Then you can run:
150 |
151 | ```
152 | mpiexec -n N python scripts/classifier_train.py --data_dir path/to/imagenet $TRAIN_FLAGS $CLASSIFIER_FLAGS
153 | ```
154 |
155 | Make sure to divide the batch size in `TRAIN_FLAGS` by the number of MPI processes you are using.
156 |
157 | Here are flags for training the 128x128 classifier. You can modify these for training classifiers at other resolutions:
158 |
159 | ```sh
160 | TRAIN_FLAGS="--iterations 300000 --anneal_lr True --batch_size 256 --lr 3e-4 --save_interval 10000 --weight_decay 0.05"
161 | CLASSIFIER_FLAGS="--image_size 128 --classifier_attention_resolutions 32,16,8 --classifier_depth 2 --classifier_width 128 --classifier_pool attention --classifier_resblock_updown True --classifier_use_scale_shift_norm True"
162 | ```
163 |
164 | For sampling from a 128x128 classifier-guided model, 25 step DDIM:
165 |
166 | ```sh
167 | MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --image_size 128 --learn_sigma True --num_channels 256 --num_heads 4 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
168 | CLASSIFIER_FLAGS="--image_size 128 --classifier_attention_resolutions 32,16,8 --classifier_depth 2 --classifier_width 128 --classifier_pool attention --classifier_resblock_updown True --classifier_use_scale_shift_norm True --classifier_scale 1.0 --classifier_use_fp16 True"
169 | SAMPLE_FLAGS="--batch_size 4 --num_samples 50000 --timestep_respacing ddim25 --use_ddim True"
170 | mpiexec -n N python scripts/classifier_sample.py \
171 | --model_path /path/to/model.pt \
172 | --classifier_path path/to/classifier.pt \
173 | $MODEL_FLAGS $CLASSIFIER_FLAGS $SAMPLE_FLAGS
174 | ```
175 |
176 | To sample for 250 timesteps without DDIM, replace `--timestep_respacing ddim25` with `--timestep_respacing 250`, and replace `--use_ddim True` with `--use_ddim False`.
177 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AMAP-ML/USP/6026805bf62f724b6966b45efaff4893e71ce3b2/generation/DiT/guided_diffusion/__init__.py
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/datasets/README.md:
--------------------------------------------------------------------------------
1 | # Downloading datasets
2 |
3 | This directory includes instructions and scripts for downloading ImageNet and LSUN bedrooms for use in this codebase.
4 |
5 | ## Class-conditional ImageNet
6 |
7 | For our class-conditional models, we use the official ILSVRC2012 dataset with manual center cropping and downsampling. To obtain this dataset, navigate to [this page on image-net.org](http://www.image-net.org/challenges/LSVRC/2012/downloads) and sign in (or create an account if you do not already have one). Then click on the link reading "Training images (Task 1 & 2)". This is a 138GB tar file containing 1000 sub-tar files, one per class.
8 |
9 | Once the file is downloaded, extract it and look inside. You should see 1000 `.tar` files. You need to extract each of these, which may be impractical to do by hand on your operating system. To automate the process on a Unix-based system, you can `cd` into the directory and run this short shell script:
10 |
11 | ```
12 | for file in *.tar; do tar xf "$file"; rm "$file"; done
13 | ```
14 |
15 | This will extract and remove each tar file in turn.
16 |
17 | Once all of the images have been extracted, the resulting directory should be usable as a data directory (the `--data_dir` argument for the training script). The filenames should all start with a WNID (class ID) followed by an underscore, like `n01440764_2708.JPEG`. Conveniently (but not by accident), this is how the automated data loader expects to discover class labels.
18 |
19 | ## LSUN bedroom
20 |
21 | To download and pre-process LSUN bedroom, clone [fyu/lsun](https://github.com/fyu/lsun) on GitHub and run their download script `python3 download.py bedroom`. The result will be an "lmdb" database named like `bedroom_train_lmdb`. You can pass this to our [lsun_bedroom.py](lsun_bedroom.py) script like so:
22 |
23 | ```
24 | python lsun_bedroom.py bedroom_train_lmdb lsun_train_output_dir
25 | ```
26 |
27 | This creates a directory called `lsun_train_output_dir`. This directory can be passed to the training scripts via the `--data_dir` argument.
28 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/datasets/lsun_bedroom.py:
--------------------------------------------------------------------------------
1 | """
2 | Convert an LSUN lmdb database into a directory of images.
3 | """
4 |
5 | import argparse
6 | import io
7 | import os
8 |
9 | from PIL import Image
10 | import lmdb
11 | import numpy as np
12 |
13 |
14 | def read_images(lmdb_path, image_size):
15 | env = lmdb.open(lmdb_path, map_size=1099511627776, max_readers=100, readonly=True)
16 | with env.begin(write=False) as transaction:
17 | cursor = transaction.cursor()
18 | for _, webp_data in cursor:
19 | img = Image.open(io.BytesIO(webp_data))
20 | width, height = img.size
21 | scale = image_size / min(width, height)
22 | img = img.resize(
23 | (int(round(scale * width)), int(round(scale * height))),
24 | resample=Image.BOX,
25 | )
26 | arr = np.array(img)
27 | h, w, _ = arr.shape
28 | h_off = (h - image_size) // 2
29 | w_off = (w - image_size) // 2
30 | arr = arr[h_off : h_off + image_size, w_off : w_off + image_size]
31 | yield arr
32 |
33 |
34 | def dump_images(out_dir, images, prefix):
35 | if not os.path.exists(out_dir):
36 | os.mkdir(out_dir)
37 | for i, img in enumerate(images):
38 | Image.fromarray(img).save(os.path.join(out_dir, f"{prefix}_{i:07d}.png"))
39 |
40 |
41 | def main():
42 | parser = argparse.ArgumentParser()
43 | parser.add_argument("--image-size", help="new image size", type=int, default=256)
44 | parser.add_argument("--prefix", help="class name", type=str, default="bedroom")
45 | parser.add_argument("lmdb_path", help="path to an LSUN lmdb database")
46 | parser.add_argument("out_dir", help="path to output directory")
47 | args = parser.parse_args()
48 |
49 | images = read_images(args.lmdb_path, args.image_size)
50 | dump_images(args.out_dir, images, args.prefix)
51 |
52 |
53 | if __name__ == "__main__":
54 | main()
55 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/evaluations/README.md:
--------------------------------------------------------------------------------
1 | # Evaluations
2 |
3 | To compare different generative models, we use FID, sFID, Precision, Recall, and Inception Score. These metrics can all be calculated using batches of samples, which we store in `.npz` (numpy) files.
4 |
5 | # Download batches
6 |
7 | We provide pre-computed sample batches for the reference datasets, our diffusion models, and several baselines we compare against. These are all stored in `.npz` format.
8 |
9 | Reference dataset batches contain pre-computed statistics over the whole dataset, as well as 10,000 images for computing Precision and Recall. All other batches contain 50,000 images which can be used to compute statistics and Precision/Recall.
10 |
11 | Here are links to download all of the sample and reference batches:
12 |
13 | * LSUN
14 | * LSUN bedroom: [reference batch](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/lsun/bedroom/VIRTUAL_lsun_bedroom256.npz)
15 | * [ADM (dropout)](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/lsun/bedroom/admnet_dropout_lsun_bedroom.npz)
16 | * [DDPM](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/lsun/bedroom/ddpm_lsun_bedroom.npz)
17 | * [IDDPM](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/lsun/bedroom/iddpm_lsun_bedroom.npz)
18 | * [StyleGAN](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/lsun/bedroom/stylegan_lsun_bedroom.npz)
19 | * LSUN cat: [reference batch](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/lsun/cat/VIRTUAL_lsun_cat256.npz)
20 | * [ADM (dropout)](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/lsun/cat/admnet_dropout_lsun_cat.npz)
21 | * [StyleGAN2](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/lsun/cat/stylegan2_lsun_cat.npz)
22 | * LSUN horse: [reference batch](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/lsun/horse/VIRTUAL_lsun_horse256.npz)
23 | * [ADM (dropout)](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/lsun/horse/admnet_dropout_lsun_horse.npz)
24 | * [ADM](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/lsun/horse/admnet_lsun_horse.npz)
25 |
26 | * ImageNet
27 | * ImageNet 64x64: [reference batch](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/64/VIRTUAL_imagenet64_labeled.npz)
28 | * [ADM](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/64/admnet_imagenet64.npz)
29 | * [IDDPM](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/64/iddpm_imagenet64.npz)
30 | * [BigGAN](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/64/biggan_deep_imagenet64.npz)
31 | * ImageNet 128x128: [reference batch](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/128/VIRTUAL_imagenet128_labeled.npz)
32 | * [ADM](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/128/admnet_imagenet128.npz)
33 | * [ADM-G](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/128/admnet_guided_imagenet128.npz)
34 | * [ADM-G, 25 steps](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/128/admnet_guided_25step_imagenet128.npz)
35 | * [BigGAN-deep (trunc=1.0)](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/128/biggan_deep_trunc1_imagenet128.npz)
36 | * ImageNet 256x256: [reference batch](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/256/VIRTUAL_imagenet256_labeled.npz)
37 | * [ADM](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/256/admnet_imagenet256.npz)
38 | * [ADM-G](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/256/admnet_guided_imagenet256.npz)
39 | * [ADM-G, 25 step](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/256/admnet_guided_25step_imagenet256.npz)
40 | * [ADM-G + ADM-U](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/256/admnet_guided_upsampled_imagenet256.npz)
41 | * [ADM-U](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/256/admnet_upsampled_imagenet256.npz)
42 | * [BigGAN-deep (trunc=1.0)](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/256/biggan_deep_trunc1_imagenet256.npz)
43 | * ImageNet 512x512: [reference batch](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/512/VIRTUAL_imagenet512.npz)
44 | * [ADM](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/512/admnet_imagenet512.npz)
45 | * [ADM-G](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/512/admnet_guided_imagenet512.npz)
46 | * [ADM-G, 25 step](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/512/admnet_guided_25step_imagenet512.npz)
47 | * [ADM-G + ADM-U](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/512/admnet_guided_upsampled_imagenet512.npz)
48 | * [ADM-U](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/512/admnet_upsampled_imagenet512.npz)
49 | * [BigGAN-deep (trunc=1.0)](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/512/biggan_deep_trunc1_imagenet512.npz)
50 |
51 | # Run evaluations
52 |
53 | First, generate or download a batch of samples and download the corresponding reference batch for the given dataset. For this example, we'll use ImageNet 256x256, so the reference batch is `VIRTUAL_imagenet256_labeled.npz` and we can use the sample batch `admnet_guided_upsampled_imagenet256.npz`.
54 |
55 | Next, run the `evaluator.py` script. The requirements of this script can be found in [requirements.txt](requirements.txt). Pass two arguments to the script: the reference batch and the sample batch. The script will download the InceptionV3 model used for evaluations into the current working directory (if it is not already present). This file is roughly 100MB.
56 |
57 | The output of the script will look something like this, where the first `...` is a bunch of verbose TensorFlow logging:
58 |
59 | ```
60 | $ python evaluator.py VIRTUAL_imagenet256_labeled.npz admnet_guided_upsampled_imagenet256.npz
61 | ...
62 | computing reference batch activations...
63 | computing/reading reference batch statistics...
64 | computing sample batch activations...
65 | computing/reading sample batch statistics...
66 | Computing evaluations...
67 | Inception Score: 215.8370361328125
68 | FID: 3.9425574129223264
69 | sFID: 6.140433703346162
70 | Precision: 0.8265
71 | Recall: 0.5309
72 | ```
73 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/evaluations/requirements.txt:
--------------------------------------------------------------------------------
1 | tensorflow
2 | scipy
3 | requests
4 | tqdm
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/guided_diffusion/__init__.py:
--------------------------------------------------------------------------------
1 | """
2 | Codebase for "Improved Denoising Diffusion Probabilistic Models".
3 | """
4 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/guided_diffusion/dist_util.py:
--------------------------------------------------------------------------------
1 | """
2 | Helpers for distributed training.
3 | """
4 |
5 | import io
6 | import os
7 | import socket
8 |
9 | import blobfile as bf
10 | from mpi4py import MPI
11 | import torch as th
12 | import torch.distributed as dist
13 |
14 | # Change this to reflect your cluster layout.
15 | # The GPU for a given rank is (rank % GPUS_PER_NODE).
16 | GPUS_PER_NODE = 8
17 |
18 | SETUP_RETRY_COUNT = 3
19 |
20 |
21 | def setup_dist():
22 | """
23 | Setup a distributed process group.
24 | """
25 | if dist.is_initialized():
26 | return
27 | os.environ["CUDA_VISIBLE_DEVICES"] = f"{MPI.COMM_WORLD.Get_rank() % GPUS_PER_NODE}"
28 |
29 | comm = MPI.COMM_WORLD
30 | backend = "gloo" if not th.cuda.is_available() else "nccl"
31 |
32 | if backend == "gloo":
33 | hostname = "localhost"
34 | else:
35 | hostname = socket.gethostbyname(socket.getfqdn())
36 | os.environ["MASTER_ADDR"] = comm.bcast(hostname, root=0)
37 | os.environ["RANK"] = str(comm.rank)
38 | os.environ["WORLD_SIZE"] = str(comm.size)
39 |
40 | port = comm.bcast(_find_free_port(), root=0)
41 | os.environ["MASTER_PORT"] = str(port)
42 | dist.init_process_group(backend=backend, init_method="env://")
43 |
44 |
45 | def dev():
46 | """
47 | Get the device to use for torch.distributed.
48 | """
49 | if th.cuda.is_available():
50 |         return th.device("cuda")
51 | return th.device("cpu")
52 |
53 |
54 | def load_state_dict(path, **kwargs):
55 | """
56 | Load a PyTorch file without redundant fetches across MPI ranks.
57 | """
58 | chunk_size = 2 ** 30 # MPI has a relatively small size limit
59 | if MPI.COMM_WORLD.Get_rank() == 0:
60 | with bf.BlobFile(path, "rb") as f:
61 | data = f.read()
62 | num_chunks = len(data) // chunk_size
63 | if len(data) % chunk_size:
64 | num_chunks += 1
65 | MPI.COMM_WORLD.bcast(num_chunks)
66 | for i in range(0, len(data), chunk_size):
67 | MPI.COMM_WORLD.bcast(data[i : i + chunk_size])
68 | else:
69 | num_chunks = MPI.COMM_WORLD.bcast(None)
70 | data = bytes()
71 | for _ in range(num_chunks):
72 | data += MPI.COMM_WORLD.bcast(None)
73 |
74 | return th.load(io.BytesIO(data), **kwargs)
75 |
76 |
77 | def sync_params(params):
78 | """
79 | Synchronize a sequence of Tensors across ranks from rank 0.
80 | """
81 | for p in params:
82 | with th.no_grad():
83 | dist.broadcast(p, 0)
84 |
85 |
86 | def _find_free_port():
87 |     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
88 |     try:
89 |         s.bind(("", 0))  # port 0: let the OS pick a free ephemeral port
90 |         s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
91 |         return s.getsockname()[1]
92 |     finally:
93 |         s.close()
94 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/guided_diffusion/fp16_util.py:
--------------------------------------------------------------------------------
1 | """
2 | Helpers to train with 16-bit precision.
3 | """
4 |
5 | import numpy as np
6 | import torch as th
7 | import torch.nn as nn
8 | from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors
9 |
10 | from . import logger
11 |
12 | INITIAL_LOG_LOSS_SCALE = 20.0
13 |
14 |
15 | def convert_module_to_f16(l):
16 | """
17 | Convert primitive modules to float16.
18 | """
19 | if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):
20 | l.weight.data = l.weight.data.half()
21 | if l.bias is not None:
22 | l.bias.data = l.bias.data.half()
23 |
24 |
25 | def convert_module_to_f32(l):
26 | """
27 | Convert primitive modules to float32, undoing convert_module_to_f16().
28 | """
29 | if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):
30 | l.weight.data = l.weight.data.float()
31 | if l.bias is not None:
32 | l.bias.data = l.bias.data.float()
33 |
34 |
35 | def make_master_params(param_groups_and_shapes):
36 | """
37 | Copy model parameters into a (differently-shaped) list of full-precision
38 | parameters.
39 | """
40 | master_params = []
41 | for param_group, shape in param_groups_and_shapes:
42 | master_param = nn.Parameter(
43 | _flatten_dense_tensors(
44 | [param.detach().float() for (_, param) in param_group]
45 | ).view(shape)
46 | )
47 | master_param.requires_grad = True
48 | master_params.append(master_param)
49 | return master_params
50 |
51 |
52 | def model_grads_to_master_grads(param_groups_and_shapes, master_params):
53 | """
54 | Copy the gradients from the model parameters into the master parameters
55 | from make_master_params().
56 | """
57 | for master_param, (param_group, shape) in zip(
58 | master_params, param_groups_and_shapes
59 | ):
60 | master_param.grad = _flatten_dense_tensors(
61 | [param_grad_or_zeros(param) for (_, param) in param_group]
62 | ).view(shape)
63 |
64 |
65 | def master_params_to_model_params(param_groups_and_shapes, master_params):
66 | """
67 | Copy the master parameter data back into the model parameters.
68 | """
69 | # Without copying to a list, if a generator is passed, this will
70 | # silently not copy any parameters.
71 | for master_param, (param_group, _) in zip(master_params, param_groups_and_shapes):
72 | for (_, param), unflat_master_param in zip(
73 | param_group, unflatten_master_params(param_group, master_param.view(-1))
74 | ):
75 | param.detach().copy_(unflat_master_param)
76 |
77 |
78 | def unflatten_master_params(param_group, master_param):
79 | return _unflatten_dense_tensors(master_param, [param for (_, param) in param_group])
80 |
81 |
82 | def get_param_groups_and_shapes(named_model_params):
83 | named_model_params = list(named_model_params)
84 | scalar_vector_named_params = (
85 | [(n, p) for (n, p) in named_model_params if p.ndim <= 1],
86 | (-1),
87 | )
88 | matrix_named_params = (
89 | [(n, p) for (n, p) in named_model_params if p.ndim > 1],
90 | (1, -1),
91 | )
92 | return [scalar_vector_named_params, matrix_named_params]
93 |
94 |
95 | def master_params_to_state_dict(
96 | model, param_groups_and_shapes, master_params, use_fp16
97 | ):
98 | if use_fp16:
99 | state_dict = model.state_dict()
100 | for master_param, (param_group, _) in zip(
101 | master_params, param_groups_and_shapes
102 | ):
103 | for (name, _), unflat_master_param in zip(
104 | param_group, unflatten_master_params(param_group, master_param.view(-1))
105 | ):
106 | assert name in state_dict
107 | state_dict[name] = unflat_master_param
108 | else:
109 | state_dict = model.state_dict()
110 | for i, (name, _value) in enumerate(model.named_parameters()):
111 | assert name in state_dict
112 | state_dict[name] = master_params[i]
113 | return state_dict
114 |
115 |
116 | def state_dict_to_master_params(model, state_dict, use_fp16):
117 | if use_fp16:
118 | named_model_params = [
119 | (name, state_dict[name]) for name, _ in model.named_parameters()
120 | ]
121 | param_groups_and_shapes = get_param_groups_and_shapes(named_model_params)
122 | master_params = make_master_params(param_groups_and_shapes)
123 | else:
124 | master_params = [state_dict[name] for name, _ in model.named_parameters()]
125 | return master_params
126 |
127 |
128 | def zero_master_grads(master_params):
129 | for param in master_params:
130 | param.grad = None
131 |
132 |
133 | def zero_grad(model_params):
134 | for param in model_params:
135 | # Taken from https://pytorch.org/docs/stable/_modules/torch/optim/optimizer.html#Optimizer.add_param_group
136 | if param.grad is not None:
137 | param.grad.detach_()
138 | param.grad.zero_()
139 |
140 |
141 | def param_grad_or_zeros(param):
142 | if param.grad is not None:
143 | return param.grad.data.detach()
144 | else:
145 | return th.zeros_like(param)
146 |
147 |
148 | class MixedPrecisionTrainer:
149 | def __init__(
150 | self,
151 | *,
152 | model,
153 | use_fp16=False,
154 | fp16_scale_growth=1e-3,
155 | initial_lg_loss_scale=INITIAL_LOG_LOSS_SCALE,
156 | ):
157 | self.model = model
158 | self.use_fp16 = use_fp16
159 | self.fp16_scale_growth = fp16_scale_growth
160 |
161 | self.model_params = list(self.model.parameters())
162 | self.master_params = self.model_params
163 | self.param_groups_and_shapes = None
164 | self.lg_loss_scale = initial_lg_loss_scale
165 |
166 | if self.use_fp16:
167 | self.param_groups_and_shapes = get_param_groups_and_shapes(
168 | self.model.named_parameters()
169 | )
170 | self.master_params = make_master_params(self.param_groups_and_shapes)
171 | self.model.convert_to_fp16()
172 |
173 | def zero_grad(self):
174 | zero_grad(self.model_params)
175 |
176 | def backward(self, loss: th.Tensor):
177 | if self.use_fp16:
178 | loss_scale = 2 ** self.lg_loss_scale
179 | (loss * loss_scale).backward()
180 | else:
181 | loss.backward()
182 |
183 | def optimize(self, opt: th.optim.Optimizer):
184 | if self.use_fp16:
185 | return self._optimize_fp16(opt)
186 | else:
187 | return self._optimize_normal(opt)
188 |
189 | def _optimize_fp16(self, opt: th.optim.Optimizer):
190 | logger.logkv_mean("lg_loss_scale", self.lg_loss_scale)
191 | model_grads_to_master_grads(self.param_groups_and_shapes, self.master_params)
192 | grad_norm, param_norm = self._compute_norms(grad_scale=2 ** self.lg_loss_scale)
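        # Dynamic loss scaling: if the scaled gradients overflowed, skip this
        # step and shrink the scale; otherwise take the step and let the scale
        # grow slowly via fp16_scale_growth.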
193 | if check_overflow(grad_norm):
194 | self.lg_loss_scale -= 1
195 | logger.log(f"Found NaN, decreased lg_loss_scale to {self.lg_loss_scale}")
196 | zero_master_grads(self.master_params)
197 | return False
198 |
199 | logger.logkv_mean("grad_norm", grad_norm)
200 | logger.logkv_mean("param_norm", param_norm)
201 |
202 | for p in self.master_params:
203 | p.grad.mul_(1.0 / (2 ** self.lg_loss_scale))
204 | opt.step()
205 | zero_master_grads(self.master_params)
206 | master_params_to_model_params(self.param_groups_and_shapes, self.master_params)
207 | self.lg_loss_scale += self.fp16_scale_growth
208 | return True
209 |
210 | def _optimize_normal(self, opt: th.optim.Optimizer):
211 | grad_norm, param_norm = self._compute_norms()
212 | logger.logkv_mean("grad_norm", grad_norm)
213 | logger.logkv_mean("param_norm", param_norm)
214 | opt.step()
215 | return True
216 |
217 | def _compute_norms(self, grad_scale=1.0):
218 | grad_norm = 0.0
219 | param_norm = 0.0
220 | for p in self.master_params:
221 | with th.no_grad():
222 | param_norm += th.norm(p, p=2, dtype=th.float32).item() ** 2
223 | if p.grad is not None:
224 | grad_norm += th.norm(p.grad, p=2, dtype=th.float32).item() ** 2
225 | return np.sqrt(grad_norm) / grad_scale, np.sqrt(param_norm)
226 |
227 | def master_params_to_state_dict(self, master_params):
228 | return master_params_to_state_dict(
229 | self.model, self.param_groups_and_shapes, master_params, self.use_fp16
230 | )
231 |
232 | def state_dict_to_master_params(self, state_dict):
233 | return state_dict_to_master_params(self.model, state_dict, self.use_fp16)
234 |
235 |
236 | def check_overflow(value):
237 | return (value == float("inf")) or (value == -float("inf")) or (value != value)
238 |
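# A minimal usage sketch (with a hypothetical `model` and `loss`):
#
#   trainer = MixedPrecisionTrainer(model=model, use_fp16=True)
#   opt = th.optim.AdamW(trainer.master_params, lr=1e-4)
#   trainer.zero_grad()
#   trainer.backward(loss)
#   took_step = trainer.optimize(opt)  # False if skipped due to overflow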
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/guided_diffusion/image_datasets.py:
--------------------------------------------------------------------------------
1 | import math
2 | import random
3 |
4 | from PIL import Image
5 | import blobfile as bf
6 | from mpi4py import MPI
7 | import numpy as np
8 | from torch.utils.data import DataLoader, Dataset
9 |
10 |
11 | def load_data(
12 | *,
13 | data_dir,
14 | batch_size,
15 | image_size,
16 | class_cond=False,
17 | deterministic=False,
18 | random_crop=False,
19 | random_flip=True,
20 | ):
21 | """
22 | For a dataset, create a generator over (images, kwargs) pairs.
23 |
24 |     Each batch of images is an NCHW float tensor, and the kwargs dict contains
25 |     zero or more keys, each of which maps to a batched Tensor of its own.
26 | The kwargs dict can be used for class labels, in which case the key is "y"
27 | and the values are integer tensors of class labels.
28 |
29 | :param data_dir: a dataset directory.
30 | :param batch_size: the batch size of each returned pair.
31 | :param image_size: the size to which images are resized.
32 | :param class_cond: if True, include a "y" key in returned dicts for class
33 | label. If classes are not available and this is true, an
34 | exception will be raised.
35 | :param deterministic: if True, yield results in a deterministic order.
36 | :param random_crop: if True, randomly crop the images for augmentation.
37 | :param random_flip: if True, randomly flip the images for augmentation.
38 | """
39 | if not data_dir:
40 | raise ValueError("unspecified data directory")
41 | all_files = _list_image_files_recursively(data_dir)
42 | classes = None
43 | if class_cond:
44 | # Assume classes are the first part of the filename,
45 | # before an underscore.
46 | class_names = [bf.basename(path).split("_")[0] for path in all_files]
47 | sorted_classes = {x: i for i, x in enumerate(sorted(set(class_names)))}
48 | classes = [sorted_classes[x] for x in class_names]
49 | dataset = ImageDataset(
50 | image_size,
51 | all_files,
52 | classes=classes,
53 | shard=MPI.COMM_WORLD.Get_rank(),
54 | num_shards=MPI.COMM_WORLD.Get_size(),
55 | random_crop=random_crop,
56 | random_flip=random_flip,
57 | )
58 | if deterministic:
59 | loader = DataLoader(
60 | dataset, batch_size=batch_size, shuffle=False, num_workers=1, drop_last=True
61 | )
62 | else:
63 | loader = DataLoader(
64 | dataset, batch_size=batch_size, shuffle=True, num_workers=1, drop_last=True
65 | )
66 | while True:
67 | yield from loader
68 |
69 |
70 | def _list_image_files_recursively(data_dir):
71 | results = []
72 | for entry in sorted(bf.listdir(data_dir)):
73 | full_path = bf.join(data_dir, entry)
74 | ext = entry.split(".")[-1]
75 | if "." in entry and ext.lower() in ["jpg", "jpeg", "png", "gif"]:
76 | results.append(full_path)
77 | elif bf.isdir(full_path):
78 | results.extend(_list_image_files_recursively(full_path))
79 | return results
80 |
81 |
82 | class ImageDataset(Dataset):
83 | def __init__(
84 | self,
85 | resolution,
86 | image_paths,
87 | classes=None,
88 | shard=0,
89 | num_shards=1,
90 | random_crop=False,
91 | random_flip=True,
92 | ):
93 | super().__init__()
94 | self.resolution = resolution
95 | self.local_images = image_paths[shard:][::num_shards]
96 | self.local_classes = None if classes is None else classes[shard:][::num_shards]
97 | self.random_crop = random_crop
98 | self.random_flip = random_flip
99 |
100 | def __len__(self):
101 | return len(self.local_images)
102 |
103 | def __getitem__(self, idx):
104 | path = self.local_images[idx]
105 | with bf.BlobFile(path, "rb") as f:
106 | pil_image = Image.open(f)
107 | pil_image.load()
108 | pil_image = pil_image.convert("RGB")
109 |
110 | if self.random_crop:
111 | arr = random_crop_arr(pil_image, self.resolution)
112 | else:
113 | arr = center_crop_arr(pil_image, self.resolution)
114 |
115 | if self.random_flip and random.random() < 0.5:
116 | arr = arr[:, ::-1]
117 |
118 | arr = arr.astype(np.float32) / 127.5 - 1
119 |
120 | out_dict = {}
121 | if self.local_classes is not None:
122 | out_dict["y"] = np.array(self.local_classes[idx], dtype=np.int64)
123 | return np.transpose(arr, [2, 0, 1]), out_dict
124 |
125 |
126 | def center_crop_arr(pil_image, image_size):
127 | # We are not on a new enough PIL to support the `reducing_gap`
128 | # argument, which uses BOX downsampling at powers of two first.
129 | # Thus, we do it by hand to improve downsample quality.
130 | while min(*pil_image.size) >= 2 * image_size:
131 | pil_image = pil_image.resize(
132 | tuple(x // 2 for x in pil_image.size), resample=Image.BOX
133 | )
134 |
135 | scale = image_size / min(*pil_image.size)
136 | pil_image = pil_image.resize(
137 | tuple(round(x * scale) for x in pil_image.size), resample=Image.BICUBIC
138 | )
139 |
140 | arr = np.array(pil_image)
141 | crop_y = (arr.shape[0] - image_size) // 2
142 | crop_x = (arr.shape[1] - image_size) // 2
143 | return arr[crop_y : crop_y + image_size, crop_x : crop_x + image_size]
144 |
145 |
146 | def random_crop_arr(pil_image, image_size, min_crop_frac=0.8, max_crop_frac=1.0):
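    # Randomly choose the smaller dimension of the resized image so that the
    # final image_size crop covers between min_crop_frac and max_crop_frac
    # of it.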
147 | min_smaller_dim_size = math.ceil(image_size / max_crop_frac)
148 | max_smaller_dim_size = math.ceil(image_size / min_crop_frac)
149 | smaller_dim_size = random.randrange(min_smaller_dim_size, max_smaller_dim_size + 1)
150 |
151 | # We are not on a new enough PIL to support the `reducing_gap`
152 | # argument, which uses BOX downsampling at powers of two first.
153 | # Thus, we do it by hand to improve downsample quality.
154 | while min(*pil_image.size) >= 2 * smaller_dim_size:
155 | pil_image = pil_image.resize(
156 | tuple(x // 2 for x in pil_image.size), resample=Image.BOX
157 | )
158 |
159 | scale = smaller_dim_size / min(*pil_image.size)
160 | pil_image = pil_image.resize(
161 | tuple(round(x * scale) for x in pil_image.size), resample=Image.BICUBIC
162 | )
163 |
164 | arr = np.array(pil_image)
165 | crop_y = random.randrange(arr.shape[0] - image_size + 1)
166 | crop_x = random.randrange(arr.shape[1] - image_size + 1)
167 | return arr[crop_y : crop_y + image_size, crop_x : crop_x + image_size]
168 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/guided_diffusion/losses.py:
--------------------------------------------------------------------------------
1 | """
2 | Helpers for various likelihood-based losses. These are ported from the original
3 | Ho et al. diffusion models codebase:
4 | https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/utils.py
5 | """
6 |
7 | import numpy as np
8 |
9 | import torch as th
10 |
11 |
12 | def normal_kl(mean1, logvar1, mean2, logvar2):
13 | """
14 |     Compute the KL divergence between two Gaussians.
15 |
16 | Shapes are automatically broadcasted, so batches can be compared to
17 | scalars, among other use cases.
18 | """
19 | tensor = None
20 | for obj in (mean1, logvar1, mean2, logvar2):
21 | if isinstance(obj, th.Tensor):
22 | tensor = obj
23 | break
24 | assert tensor is not None, "at least one argument must be a Tensor"
25 |
26 | # Force variances to be Tensors. Broadcasting helps convert scalars to
27 | # Tensors, but it does not work for th.exp().
28 | logvar1, logvar2 = [
29 | x if isinstance(x, th.Tensor) else th.tensor(x).to(tensor)
30 | for x in (logvar1, logvar2)
31 | ]
32 |
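    # Closed form: KL(N(m1, v1) || N(m2, v2))
    #   = 0.5 * (log(v2 / v1) + v1 / v2 + (m1 - m2) ** 2 / v2 - 1),
    # written below in terms of the log-variances.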
33 | return 0.5 * (
34 | -1.0
35 | + logvar2
36 | - logvar1
37 | + th.exp(logvar1 - logvar2)
38 | + ((mean1 - mean2) ** 2) * th.exp(-logvar2)
39 | )
40 |
41 |
42 | def approx_standard_normal_cdf(x):
43 | """
44 | A fast approximation of the cumulative distribution function of the
45 | standard normal.
46 | """
47 | return 0.5 * (1.0 + th.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * th.pow(x, 3))))
48 |
49 |
50 | def discretized_gaussian_log_likelihood(x, *, means, log_scales):
51 | """
52 | Compute the log-likelihood of a Gaussian distribution discretizing to a
53 | given image.
54 |
55 | :param x: the target images. It is assumed that this was uint8 values,
56 | rescaled to the range [-1, 1].
57 | :param means: the Gaussian mean Tensor.
58 | :param log_scales: the Gaussian log stddev Tensor.
59 | :return: a tensor like x of log probabilities (in nats).
60 | """
61 | assert x.shape == means.shape == log_scales.shape
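    # Each uint8 value maps to a bin of width 2/255 in [-1, 1], so the
    # likelihood of x is CDF(x + 1/255) - CDF(x - 1/255) under the predicted
    # Gaussian, with the lowest and highest bins extended to -inf and +inf.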
62 | centered_x = x - means
63 | inv_stdv = th.exp(-log_scales)
64 | plus_in = inv_stdv * (centered_x + 1.0 / 255.0)
65 | cdf_plus = approx_standard_normal_cdf(plus_in)
66 | min_in = inv_stdv * (centered_x - 1.0 / 255.0)
67 | cdf_min = approx_standard_normal_cdf(min_in)
68 | log_cdf_plus = th.log(cdf_plus.clamp(min=1e-12))
69 | log_one_minus_cdf_min = th.log((1.0 - cdf_min).clamp(min=1e-12))
70 | cdf_delta = cdf_plus - cdf_min
71 | log_probs = th.where(
72 | x < -0.999,
73 | log_cdf_plus,
74 | th.where(x > 0.999, log_one_minus_cdf_min, th.log(cdf_delta.clamp(min=1e-12))),
75 | )
76 | assert log_probs.shape == x.shape
77 | return log_probs
78 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/guided_diffusion/nn.py:
--------------------------------------------------------------------------------
1 | """
2 | Various utilities for neural networks.
3 | """
4 |
5 | import math
6 |
7 | import torch as th
8 | import torch.nn as nn
9 |
10 |
11 | # PyTorch 1.7 has SiLU, but we support PyTorch 1.5.
12 | class SiLU(nn.Module):
13 | def forward(self, x):
14 | return x * th.sigmoid(x)
15 |
16 |
17 | class GroupNorm32(nn.GroupNorm):
18 | def forward(self, x):
19 | return super().forward(x.float()).type(x.dtype)
20 |
21 |
22 | def conv_nd(dims, *args, **kwargs):
23 | """
24 | Create a 1D, 2D, or 3D convolution module.
25 | """
26 | if dims == 1:
27 | return nn.Conv1d(*args, **kwargs)
28 | elif dims == 2:
29 | return nn.Conv2d(*args, **kwargs)
30 | elif dims == 3:
31 | return nn.Conv3d(*args, **kwargs)
32 | raise ValueError(f"unsupported dimensions: {dims}")
33 |
34 |
35 | def linear(*args, **kwargs):
36 | """
37 | Create a linear module.
38 | """
39 | return nn.Linear(*args, **kwargs)
40 |
41 |
42 | def avg_pool_nd(dims, *args, **kwargs):
43 | """
44 | Create a 1D, 2D, or 3D average pooling module.
45 | """
46 | if dims == 1:
47 | return nn.AvgPool1d(*args, **kwargs)
48 | elif dims == 2:
49 | return nn.AvgPool2d(*args, **kwargs)
50 | elif dims == 3:
51 | return nn.AvgPool3d(*args, **kwargs)
52 | raise ValueError(f"unsupported dimensions: {dims}")
53 |
54 |
55 | def update_ema(target_params, source_params, rate=0.99):
56 | """
57 | Update target parameters to be closer to those of source parameters using
58 | an exponential moving average.
59 |
60 | :param target_params: the target parameter sequence.
61 | :param source_params: the source parameter sequence.
62 | :param rate: the EMA rate (closer to 1 means slower).
63 | """
64 | for targ, src in zip(target_params, source_params):
65 | targ.detach().mul_(rate).add_(src, alpha=1 - rate)
66 |
67 |
68 | def zero_module(module):
69 | """
70 | Zero out the parameters of a module and return it.
71 | """
72 | for p in module.parameters():
73 | p.detach().zero_()
74 | return module
75 |
76 |
77 | def scale_module(module, scale):
78 | """
79 | Scale the parameters of a module and return it.
80 | """
81 | for p in module.parameters():
82 | p.detach().mul_(scale)
83 | return module
84 |
85 |
86 | def mean_flat(tensor):
87 | """
88 | Take the mean over all non-batch dimensions.
89 | """
90 | return tensor.mean(dim=list(range(1, len(tensor.shape))))
91 |
92 |
93 | def normalization(channels):
94 | """
95 | Make a standard normalization layer.
96 |
97 | :param channels: number of input channels.
98 | :return: an nn.Module for normalization.
99 | """
100 | return GroupNorm32(32, channels)
101 |
102 |
103 | def timestep_embedding(timesteps, dim, max_period=10000):
104 | """
105 | Create sinusoidal timestep embeddings.
106 |
107 | :param timesteps: a 1-D Tensor of N indices, one per batch element.
108 | These may be fractional.
109 | :param dim: the dimension of the output.
110 | :param max_period: controls the minimum frequency of the embeddings.
111 | :return: an [N x dim] Tensor of positional embeddings.
112 | """
113 | half = dim // 2
114 | freqs = th.exp(
115 | -math.log(max_period) * th.arange(start=0, end=half, dtype=th.float32) / half
116 | ).to(device=timesteps.device)
117 | args = timesteps[:, None].float() * freqs[None]
118 | embedding = th.cat([th.cos(args), th.sin(args)], dim=-1)
119 | if dim % 2:
120 | embedding = th.cat([embedding, th.zeros_like(embedding[:, :1])], dim=-1)
121 | return embedding
122 |
123 |
124 | def checkpoint(func, inputs, params, flag):
125 | """
126 | Evaluate a function without caching intermediate activations, allowing for
127 | reduced memory at the expense of extra compute in the backward pass.
128 |
129 | :param func: the function to evaluate.
130 | :param inputs: the argument sequence to pass to `func`.
131 | :param params: a sequence of parameters `func` depends on but does not
132 | explicitly take as arguments.
133 | :param flag: if False, disable gradient checkpointing.
134 | """
135 | if flag:
136 | args = tuple(inputs) + tuple(params)
137 | return CheckpointFunction.apply(func, len(inputs), *args)
138 | else:
139 | return func(*inputs)
140 |
141 |
142 | class CheckpointFunction(th.autograd.Function):
143 | @staticmethod
144 | def forward(ctx, run_function, length, *args):
145 | ctx.run_function = run_function
146 | ctx.input_tensors = list(args[:length])
147 | ctx.input_params = list(args[length:])
148 | with th.no_grad():
149 | output_tensors = ctx.run_function(*ctx.input_tensors)
150 | return output_tensors
151 |
152 | @staticmethod
153 | def backward(ctx, *output_grads):
154 | ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
155 | with th.enable_grad():
156 | # Fixes a bug where the first op in run_function modifies the
157 | # Tensor storage in place, which is not allowed for detach()'d
158 | # Tensors.
159 | shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
160 | output_tensors = ctx.run_function(*shallow_copies)
161 | input_grads = th.autograd.grad(
162 | output_tensors,
163 | ctx.input_tensors + ctx.input_params,
164 | output_grads,
165 | allow_unused=True,
166 | )
167 | del ctx.input_tensors
168 | del ctx.input_params
169 | del output_tensors
170 | return (None, None) + input_grads
171 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/guided_diffusion/resample.py:
--------------------------------------------------------------------------------
1 | from abc import ABC, abstractmethod
2 |
3 | import numpy as np
4 | import torch as th
5 | import torch.distributed as dist
6 |
7 |
8 | def create_named_schedule_sampler(name, diffusion):
9 | """
10 | Create a ScheduleSampler from a library of pre-defined samplers.
11 |
12 | :param name: the name of the sampler.
13 | :param diffusion: the diffusion object to sample for.
14 | """
15 | if name == "uniform":
16 | return UniformSampler(diffusion)
17 | elif name == "loss-second-moment":
18 | return LossSecondMomentResampler(diffusion)
19 | else:
20 | raise NotImplementedError(f"unknown schedule sampler: {name}")
21 |
22 |
23 | class ScheduleSampler(ABC):
24 | """
25 | A distribution over timesteps in the diffusion process, intended to reduce
26 | variance of the objective.
27 |
28 | By default, samplers perform unbiased importance sampling, in which the
29 | objective's mean is unchanged.
30 | However, subclasses may override sample() to change how the resampled
31 | terms are reweighted, allowing for actual changes in the objective.
32 | """
33 |
34 | @abstractmethod
35 | def weights(self):
36 | """
37 | Get a numpy array of weights, one per diffusion step.
38 |
39 | The weights needn't be normalized, but must be positive.
40 | """
41 |
42 | def sample(self, batch_size, device):
43 | """
44 | Importance-sample timesteps for a batch.
45 |
46 | :param batch_size: the number of timesteps.
47 | :param device: the torch device to save to.
48 | :return: a tuple (timesteps, weights):
49 | - timesteps: a tensor of timestep indices.
50 | - weights: a tensor of weights to scale the resulting losses.
51 | """
52 | w = self.weights()
53 | p = w / np.sum(w)
54 | indices_np = np.random.choice(len(p), size=(batch_size,), p=p)
55 | indices = th.from_numpy(indices_np).long().to(device)
56 | weights_np = 1 / (len(p) * p[indices_np])
57 | weights = th.from_numpy(weights_np).float().to(device)
58 | return indices, weights
59 |
60 |
61 | class UniformSampler(ScheduleSampler):
62 | def __init__(self, diffusion):
63 | self.diffusion = diffusion
64 | self._weights = np.ones([diffusion.num_timesteps])
65 |
66 | def weights(self):
67 | return self._weights
68 |
69 |
70 | class LossAwareSampler(ScheduleSampler):
71 | def update_with_local_losses(self, local_ts, local_losses):
72 | """
73 | Update the reweighting using losses from a model.
74 |
75 | Call this method from each rank with a batch of timesteps and the
76 | corresponding losses for each of those timesteps.
77 | This method will perform synchronization to make sure all of the ranks
78 | maintain the exact same reweighting.
79 |
80 | :param local_ts: an integer Tensor of timesteps.
81 | :param local_losses: a 1D Tensor of losses.
82 | """
83 | batch_sizes = [
84 | th.tensor([0], dtype=th.int32, device=local_ts.device)
85 | for _ in range(dist.get_world_size())
86 | ]
87 | dist.all_gather(
88 | batch_sizes,
89 | th.tensor([len(local_ts)], dtype=th.int32, device=local_ts.device),
90 | )
91 |
92 | # Pad all_gather batches to be the maximum batch size.
93 | batch_sizes = [x.item() for x in batch_sizes]
94 | max_bs = max(batch_sizes)
95 |
96 |         timestep_batches = [th.zeros(max_bs).to(local_ts) for _ in batch_sizes]
97 |         loss_batches = [th.zeros(max_bs).to(local_losses) for _ in batch_sizes]
98 | dist.all_gather(timestep_batches, local_ts)
99 | dist.all_gather(loss_batches, local_losses)
100 | timesteps = [
101 | x.item() for y, bs in zip(timestep_batches, batch_sizes) for x in y[:bs]
102 | ]
103 | losses = [x.item() for y, bs in zip(loss_batches, batch_sizes) for x in y[:bs]]
104 | self.update_with_all_losses(timesteps, losses)
105 |
106 | @abstractmethod
107 | def update_with_all_losses(self, ts, losses):
108 | """
109 | Update the reweighting using losses from a model.
110 |
111 | Sub-classes should override this method to update the reweighting
112 | using losses from the model.
113 |
114 | This method directly updates the reweighting without synchronizing
115 | between workers. It is called by update_with_local_losses from all
116 | ranks with identical arguments. Thus, it should have deterministic
117 | behavior to maintain state across workers.
118 |
119 | :param ts: a list of int timesteps.
120 | :param losses: a list of float losses, one per timestep.
121 | """
122 |
123 |
124 | class LossSecondMomentResampler(LossAwareSampler):
125 | def __init__(self, diffusion, history_per_term=10, uniform_prob=0.001):
126 | self.diffusion = diffusion
127 | self.history_per_term = history_per_term
128 | self.uniform_prob = uniform_prob
129 | self._loss_history = np.zeros(
130 | [diffusion.num_timesteps, history_per_term], dtype=np.float64
131 | )
132 |         self._loss_counts = np.zeros([diffusion.num_timesteps], dtype=np.int64)
133 |
134 | def weights(self):
135 | if not self._warmed_up():
136 | return np.ones([self.diffusion.num_timesteps], dtype=np.float64)
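        # Sample t with probability proportional to sqrt(E[L_t^2]), mixed with
        # a small uniform component so every timestep keeps nonzero probability.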
137 | weights = np.sqrt(np.mean(self._loss_history ** 2, axis=-1))
138 | weights /= np.sum(weights)
139 | weights *= 1 - self.uniform_prob
140 | weights += self.uniform_prob / len(weights)
141 | return weights
142 |
143 | def update_with_all_losses(self, ts, losses):
144 | for t, loss in zip(ts, losses):
145 | if self._loss_counts[t] == self.history_per_term:
146 | # Shift out the oldest loss term.
147 | self._loss_history[t, :-1] = self._loss_history[t, 1:]
148 | self._loss_history[t, -1] = loss
149 | else:
150 | self._loss_history[t, self._loss_counts[t]] = loss
151 | self._loss_counts[t] += 1
152 |
153 | def _warmed_up(self):
154 | return (self._loss_counts == self.history_per_term).all()
155 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/guided_diffusion/respace.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import torch as th
3 |
4 | from .gaussian_diffusion import GaussianDiffusion
5 |
6 |
7 | def space_timesteps(num_timesteps, section_counts):
8 | """
9 | Create a list of timesteps to use from an original diffusion process,
10 | given the number of timesteps we want to take from equally-sized portions
11 | of the original process.
12 |
13 |     For example, if there are 300 timesteps and the section counts are [10,15,20]
14 | then the first 100 timesteps are strided to be 10 timesteps, the second 100
15 | are strided to be 15 timesteps, and the final 100 are strided to be 20.
16 |
17 | If the stride is a string starting with "ddim", then the fixed striding
18 | from the DDIM paper is used, and only one section is allowed.
19 |
20 | :param num_timesteps: the number of diffusion steps in the original
21 | process to divide up.
22 | :param section_counts: either a list of numbers, or a string containing
23 | comma-separated numbers, indicating the step count
24 | per section. As a special case, use "ddimN" where N
25 | is a number of steps to use the striding from the
26 | DDIM paper.
27 | :return: a set of diffusion steps from the original process to use.
28 | """
29 | if isinstance(section_counts, str):
30 | if section_counts.startswith("ddim"):
31 | desired_count = int(section_counts[len("ddim") :])
32 | for i in range(1, num_timesteps):
33 | if len(range(0, num_timesteps, i)) == desired_count:
34 | return set(range(0, num_timesteps, i))
35 |             raise ValueError(
36 |                 f"cannot create exactly {desired_count} steps with an integer stride"
37 |             )
38 | section_counts = [int(x) for x in section_counts.split(",")]
39 | size_per = num_timesteps // len(section_counts)
40 | extra = num_timesteps % len(section_counts)
41 | start_idx = 0
42 | all_steps = []
43 | for i, section_count in enumerate(section_counts):
44 | size = size_per + (1 if i < extra else 0)
45 | if size < section_count:
46 | raise ValueError(
47 | f"cannot divide section of {size} steps into {section_count}"
48 | )
49 | if section_count <= 1:
50 | frac_stride = 1
51 | else:
52 | frac_stride = (size - 1) / (section_count - 1)
53 | cur_idx = 0.0
54 | taken_steps = []
55 | for _ in range(section_count):
56 | taken_steps.append(start_idx + round(cur_idx))
57 | cur_idx += frac_stride
58 | all_steps += taken_steps
59 | start_idx += size
60 | return set(all_steps)
61 |
62 |
63 | class SpacedDiffusion(GaussianDiffusion):
64 | """
65 | A diffusion process which can skip steps in a base diffusion process.
66 |
67 | :param use_timesteps: a collection (sequence or set) of timesteps from the
68 | original diffusion process to retain.
69 | :param kwargs: the kwargs to create the base diffusion process.
70 | """
71 |
72 | def __init__(self, use_timesteps, **kwargs):
73 | self.use_timesteps = set(use_timesteps)
74 | self.timestep_map = []
75 | self.original_num_steps = len(kwargs["betas"])
76 |
77 | base_diffusion = GaussianDiffusion(**kwargs) # pylint: disable=missing-kwoa
78 | last_alpha_cumprod = 1.0
79 | new_betas = []
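        # Recompute betas over only the retained steps so that the cumulative
        # alpha products match the base process at those steps:
        # 1 - beta_new[j] = alphas_cumprod[i] / alphas_cumprod[previous kept i].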
80 | for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod):
81 | if i in self.use_timesteps:
82 | new_betas.append(1 - alpha_cumprod / last_alpha_cumprod)
83 | last_alpha_cumprod = alpha_cumprod
84 | self.timestep_map.append(i)
85 | kwargs["betas"] = np.array(new_betas)
86 | super().__init__(**kwargs)
87 |
88 | def p_mean_variance(
89 | self, model, *args, **kwargs
90 | ): # pylint: disable=signature-differs
91 | return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
92 |
93 | def training_losses(
94 | self, model, *args, **kwargs
95 | ): # pylint: disable=signature-differs
96 | return super().training_losses(self._wrap_model(model), *args, **kwargs)
97 |
98 | def condition_mean(self, cond_fn, *args, **kwargs):
99 | return super().condition_mean(self._wrap_model(cond_fn), *args, **kwargs)
100 |
101 | def condition_score(self, cond_fn, *args, **kwargs):
102 | return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs)
103 |
104 | def _wrap_model(self, model):
105 | if isinstance(model, _WrappedModel):
106 | return model
107 | return _WrappedModel(
108 | model, self.timestep_map, self.rescale_timesteps, self.original_num_steps
109 | )
110 |
111 | def _scale_timesteps(self, t):
112 | # Scaling is done by the wrapped model.
113 | return t
114 |
115 |
116 | class _WrappedModel:
117 | def __init__(self, model, timestep_map, rescale_timesteps, original_num_steps):
118 | self.model = model
119 | self.timestep_map = timestep_map
120 | self.rescale_timesteps = rescale_timesteps
121 | self.original_num_steps = original_num_steps
122 |
123 | def __call__(self, x, ts, **kwargs):
124 | map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype)
125 | new_ts = map_tensor[ts]
126 | if self.rescale_timesteps:
127 | new_ts = new_ts.float() * (1000.0 / self.original_num_steps)
128 | return self.model(x, new_ts, **kwargs)
129 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/model-card.md:
--------------------------------------------------------------------------------
1 | # Overview
2 |
3 | These are diffusion models and noised image classifiers described in the paper [Diffusion Models Beat GANs on Image Synthesis](https://arxiv.org/abs/2105.05233).
4 | Included in this release are the following models:
5 |
6 | * Noisy ImageNet classifiers at resolutions 64x64, 128x128, 256x256, 512x512
7 | * A class-unconditional ImageNet diffusion model at resolution 256x256
8 | * Class-conditional ImageNet diffusion models at 64x64, 128x128, 256x256, 512x512 resolutions
9 | * Class-conditional ImageNet upsampling diffusion models: 64x64->256x256, 128x128->512x512
10 | * Diffusion models trained on three LSUN classes at 256x256 resolution: cat, horse, bedroom
11 |
12 | # Datasets
13 |
14 | All of the models we are releasing were either trained on the [ILSVRC 2012 subset of ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) or on single classes of [LSUN](https://arxiv.org/abs/1506.03365).
15 | Here, we describe characteristics of these datasets which impact model behavior:
16 |
17 | **LSUN**: This dataset was collected in 2015 using a combination of human labeling (from Amazon Mechanical Turk) and automated data labeling.
18 | * Each of the three classes we consider contain over a million images.
19 | * The dataset creators found that the label accuracy was roughly 90% across the entire LSUN dataset when measured by trained experts.
20 | * Images are scraped from the internet, and LSUN cat images in particular often follow a “meme” format.
21 | * We found that there are occasionally humans in these photos, including faces, especially within the cat class.
22 |
23 | **ILSVRC 2012 subset of ImageNet**: This dataset was curated in 2012 and consists of roughly one million images, each belonging to one of 1000 classes.
24 | * A large portion of the classes in this dataset are animals, plants, and other naturally-occurring objects.
25 | * Many images contain humans, although usually these humans aren’t reflected by the class label (e.g. the class “Tench, tinca tinca” contains many photos of people holding fish).
26 |
27 | # Performance
28 |
29 | These models are intended to generate samples consistent with their training distributions.
30 | This has been measured in terms of FID, Precision, and Recall.
31 | These metrics all rely on the representations of a [pre-trained Inception-V3 model](https://arxiv.org/abs/1512.00567),
32 | which was trained on ImageNet, and so is likely to focus more on the ImageNet classes (such as animals) than on other visual features (such as human faces).
33 |
34 | Qualitatively, the samples produced by these models often look highly realistic, especially when a diffusion model is combined with a noisy classifier.
35 |
36 | # Intended Use
37 |
38 | These models are intended to be used for research purposes only.
39 | In particular, they can be used as a baseline for generative modeling research, or as a starting point to build off of for such research.
40 |
41 | These models are not intended to be commercially deployed.
42 | Additionally, they are not intended to be used to create propaganda or offensive imagery.
43 |
44 | Before releasing these models, we probed their ability to ease the creation of targeted imagery, since doing so could be potentially harmful.
45 | We did this either by fine-tuning our ImageNet models on a target LSUN class, or through classifier guidance with publicly available [CLIP models](https://github.com/openai/CLIP).
46 | * To probe fine-tuning capabilities, we restricted our compute budget to roughly $100 and tried both standard fine-tuning
47 | and a diffusion-specific approach where we trained a specialized classifier for the LSUN class. The resulting FIDs were significantly worse than those of publicly available GAN models, indicating that fine-tuning an ImageNet diffusion model does not significantly lower the cost of image generation.
48 | * To probe guidance with CLIP, we tried two approaches for using pre-trained CLIP models for classifier guidance. Either we fed the noised image to CLIP directly and used its gradients, or we fed the diffusion model's denoised prediction to the CLIP model and differentiated through the whole process. In both cases, we found that it was difficult to recover information from the CLIP model, indicating that these diffusion models are unlikely to make it significantly easier to extract knowledge from CLIP compared to existing GAN models.
49 |
50 | # Limitations
51 |
52 | These models sometimes produce highly unrealistic outputs, particularly when generating images containing human faces.
53 | This may stem from ImageNet's emphasis on non-human objects.
54 |
55 | While classifier guidance can improve sample quality, it reduces diversity, resulting in some modes of the data distribution being underrepresented.
56 | This can potentially amplify existing biases in the training dataset such as gender and racial biases.
57 |
58 | Because ImageNet and LSUN contain images from the internet, they include photos of real people, and the model may have memorized some of the information contained in these photos.
59 | However, these images are already publicly available, and existing generative models trained on ImageNet have not demonstrated significant leakage of this information.
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/scripts/classifier_sample.py:
--------------------------------------------------------------------------------
1 | """
2 | Like image_sample.py, but use a noisy image classifier to guide the sampling
3 | process towards more realistic images.
4 | """
5 |
6 | import argparse
7 | import os
8 |
9 | import numpy as np
10 | import torch as th
11 | import torch.distributed as dist
12 | import torch.nn.functional as F
13 |
14 | from guided_diffusion import dist_util, logger
15 | from guided_diffusion.script_util import (
16 | NUM_CLASSES,
17 | model_and_diffusion_defaults,
18 | classifier_defaults,
19 | create_model_and_diffusion,
20 | create_classifier,
21 | add_dict_to_argparser,
22 | args_to_dict,
23 | )
24 |
25 |
26 | def main():
27 | args = create_argparser().parse_args()
28 |
29 | dist_util.setup_dist()
30 | logger.configure()
31 |
32 | logger.log("creating model and diffusion...")
33 | model, diffusion = create_model_and_diffusion(
34 | **args_to_dict(args, model_and_diffusion_defaults().keys())
35 | )
36 | model.load_state_dict(
37 | dist_util.load_state_dict(args.model_path, map_location="cpu")
38 | )
39 | model.to(dist_util.dev())
40 | if args.use_fp16:
41 | model.convert_to_fp16()
42 | model.eval()
43 |
44 | logger.log("loading classifier...")
45 | classifier = create_classifier(**args_to_dict(args, classifier_defaults().keys()))
46 | classifier.load_state_dict(
47 | dist_util.load_state_dict(args.classifier_path, map_location="cpu")
48 | )
49 | classifier.to(dist_util.dev())
50 | if args.classifier_use_fp16:
51 | classifier.convert_to_fp16()
52 | classifier.eval()
53 |
54 | def cond_fn(x, t, y=None):
55 | assert y is not None
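        # Classifier guidance: return classifier_scale * grad_x log p(y | x_t);
        # the sampler uses this gradient to shift the predicted mean toward
        # samples the classifier assigns to class y.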
56 | with th.enable_grad():
57 | x_in = x.detach().requires_grad_(True)
58 | logits = classifier(x_in, t)
59 | log_probs = F.log_softmax(logits, dim=-1)
60 | selected = log_probs[range(len(logits)), y.view(-1)]
61 | return th.autograd.grad(selected.sum(), x_in)[0] * args.classifier_scale
62 |
63 | def model_fn(x, t, y=None):
64 | assert y is not None
65 | return model(x, t, y if args.class_cond else None)
66 |
67 | logger.log("sampling...")
68 | all_images = []
69 | all_labels = []
70 | while len(all_images) * args.batch_size < args.num_samples:
71 | model_kwargs = {}
72 | classes = th.randint(
73 | low=0, high=NUM_CLASSES, size=(args.batch_size,), device=dist_util.dev()
74 | )
75 | model_kwargs["y"] = classes
76 | sample_fn = (
77 | diffusion.p_sample_loop if not args.use_ddim else diffusion.ddim_sample_loop
78 | )
79 | sample = sample_fn(
80 | model_fn,
81 | (args.batch_size, 3, args.image_size, args.image_size),
82 | clip_denoised=args.clip_denoised,
83 | model_kwargs=model_kwargs,
84 | cond_fn=cond_fn,
85 | device=dist_util.dev(),
86 | )
87 | sample = ((sample + 1) * 127.5).clamp(0, 255).to(th.uint8)
88 | sample = sample.permute(0, 2, 3, 1)
89 | sample = sample.contiguous()
90 |
91 | gathered_samples = [th.zeros_like(sample) for _ in range(dist.get_world_size())]
92 | dist.all_gather(gathered_samples, sample) # gather not supported with NCCL
93 | all_images.extend([sample.cpu().numpy() for sample in gathered_samples])
94 | gathered_labels = [th.zeros_like(classes) for _ in range(dist.get_world_size())]
95 | dist.all_gather(gathered_labels, classes)
96 | all_labels.extend([labels.cpu().numpy() for labels in gathered_labels])
97 | logger.log(f"created {len(all_images) * args.batch_size} samples")
98 |
99 | arr = np.concatenate(all_images, axis=0)
100 | arr = arr[: args.num_samples]
101 | label_arr = np.concatenate(all_labels, axis=0)
102 | label_arr = label_arr[: args.num_samples]
103 | if dist.get_rank() == 0:
104 | shape_str = "x".join([str(x) for x in arr.shape])
105 | out_path = os.path.join(logger.get_dir(), f"samples_{shape_str}.npz")
106 | logger.log(f"saving to {out_path}")
107 | np.savez(out_path, arr, label_arr)
108 |
109 | dist.barrier()
110 | logger.log("sampling complete")
111 |
112 |
113 | def create_argparser():
114 | defaults = dict(
115 | clip_denoised=True,
116 | num_samples=10000,
117 | batch_size=16,
118 | use_ddim=False,
119 | model_path="",
120 | classifier_path="",
121 | classifier_scale=1.0,
122 | )
123 | defaults.update(model_and_diffusion_defaults())
124 | defaults.update(classifier_defaults())
125 | parser = argparse.ArgumentParser()
126 | add_dict_to_argparser(parser, defaults)
127 | return parser
128 |
129 |
130 | if __name__ == "__main__":
131 | main()
132 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/scripts/classifier_train.py:
--------------------------------------------------------------------------------
1 | """
2 | Train a noised image classifier on ImageNet.
3 | """
4 |
5 | import argparse
6 | import os
7 |
8 | import blobfile as bf
9 | import torch as th
10 | import torch.distributed as dist
11 | import torch.nn.functional as F
12 | from torch.nn.parallel.distributed import DistributedDataParallel as DDP
13 | from torch.optim import AdamW
14 |
15 | from guided_diffusion import dist_util, logger
16 | from guided_diffusion.fp16_util import MixedPrecisionTrainer
17 | from guided_diffusion.image_datasets import load_data
18 | from guided_diffusion.resample import create_named_schedule_sampler
19 | from guided_diffusion.script_util import (
20 | add_dict_to_argparser,
21 | args_to_dict,
22 | classifier_and_diffusion_defaults,
23 | create_classifier_and_diffusion,
24 | )
25 | from guided_diffusion.train_util import parse_resume_step_from_filename, log_loss_dict
26 |
27 |
28 | def main():
29 | args = create_argparser().parse_args()
30 |
31 | dist_util.setup_dist()
32 | logger.configure()
33 |
34 | logger.log("creating model and diffusion...")
35 | model, diffusion = create_classifier_and_diffusion(
36 | **args_to_dict(args, classifier_and_diffusion_defaults().keys())
37 | )
38 | model.to(dist_util.dev())
39 | if args.noised:
40 | schedule_sampler = create_named_schedule_sampler(
41 | args.schedule_sampler, diffusion
42 | )
43 |
44 | resume_step = 0
45 | if args.resume_checkpoint:
46 | resume_step = parse_resume_step_from_filename(args.resume_checkpoint)
47 | if dist.get_rank() == 0:
48 | logger.log(
49 | f"loading model from checkpoint: {args.resume_checkpoint}... at {resume_step} step"
50 | )
51 | model.load_state_dict(
52 | dist_util.load_state_dict(
53 | args.resume_checkpoint, map_location=dist_util.dev()
54 | )
55 | )
56 |
57 | # Needed for creating correct EMAs and fp16 parameters.
58 | dist_util.sync_params(model.parameters())
59 |
60 | mp_trainer = MixedPrecisionTrainer(
61 | model=model, use_fp16=args.classifier_use_fp16, initial_lg_loss_scale=16.0
62 | )
63 |
64 | model = DDP(
65 | model,
66 | device_ids=[dist_util.dev()],
67 | output_device=dist_util.dev(),
68 | broadcast_buffers=False,
69 | bucket_cap_mb=128,
70 | find_unused_parameters=False,
71 | )
72 |
73 | logger.log("creating data loader...")
74 | data = load_data(
75 | data_dir=args.data_dir,
76 | batch_size=args.batch_size,
77 | image_size=args.image_size,
78 | class_cond=True,
79 | random_crop=True,
80 | )
81 | if args.val_data_dir:
82 | val_data = load_data(
83 | data_dir=args.val_data_dir,
84 | batch_size=args.batch_size,
85 | image_size=args.image_size,
86 | class_cond=True,
87 | )
88 | else:
89 | val_data = None
90 |
91 | logger.log(f"creating optimizer...")
92 | opt = AdamW(mp_trainer.master_params, lr=args.lr, weight_decay=args.weight_decay)
93 | if args.resume_checkpoint:
94 | opt_checkpoint = bf.join(
95 | bf.dirname(args.resume_checkpoint), f"opt{resume_step:06}.pt"
96 | )
97 | logger.log(f"loading optimizer state from checkpoint: {opt_checkpoint}")
98 | opt.load_state_dict(
99 | dist_util.load_state_dict(opt_checkpoint, map_location=dist_util.dev())
100 | )
101 |
102 | logger.log("training classifier model...")
103 |
104 | def forward_backward_log(data_loader, prefix="train"):
105 | batch, extra = next(data_loader)
106 | labels = extra["y"].to(dist_util.dev())
107 |
108 | batch = batch.to(dist_util.dev())
109 | # Noisy images
110 | if args.noised:
111 | t, _ = schedule_sampler.sample(batch.shape[0], dist_util.dev())
112 | batch = diffusion.q_sample(batch, t)
113 | else:
114 | t = th.zeros(batch.shape[0], dtype=th.long, device=dist_util.dev())
115 |
116 | for i, (sub_batch, sub_labels, sub_t) in enumerate(
117 | split_microbatches(args.microbatch, batch, labels, t)
118 | ):
119 | logits = model(sub_batch, timesteps=sub_t)
120 | loss = F.cross_entropy(logits, sub_labels, reduction="none")
121 |
122 | losses = {}
123 | losses[f"{prefix}_loss"] = loss.detach()
124 | losses[f"{prefix}_acc@1"] = compute_top_k(
125 | logits, sub_labels, k=1, reduction="none"
126 | )
127 | losses[f"{prefix}_acc@5"] = compute_top_k(
128 | logits, sub_labels, k=5, reduction="none"
129 | )
130 | log_loss_dict(diffusion, sub_t, losses)
131 | del losses
132 | loss = loss.mean()
133 | if loss.requires_grad:
134 | if i == 0:
135 | mp_trainer.zero_grad()
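            # Weight each microbatch by its share of the full batch so the
            # accumulated gradient equals a single full-batch backward pass.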
136 | mp_trainer.backward(loss * len(sub_batch) / len(batch))
137 |
138 | for step in range(args.iterations - resume_step):
139 | logger.logkv("step", step + resume_step)
140 | logger.logkv(
141 | "samples",
142 | (step + resume_step + 1) * args.batch_size * dist.get_world_size(),
143 | )
144 | if args.anneal_lr:
145 | set_annealed_lr(opt, args.lr, (step + resume_step) / args.iterations)
146 | forward_backward_log(data)
147 | mp_trainer.optimize(opt)
148 | if val_data is not None and not step % args.eval_interval:
149 | with th.no_grad():
150 | with model.no_sync():
151 | model.eval()
152 | forward_backward_log(val_data, prefix="val")
153 | model.train()
154 | if not step % args.log_interval:
155 | logger.dumpkvs()
156 | if (
157 | step
158 | and dist.get_rank() == 0
159 | and not (step + resume_step) % args.save_interval
160 | ):
161 | logger.log("saving model...")
162 | save_model(mp_trainer, opt, step + resume_step)
163 |
164 | if dist.get_rank() == 0:
165 | logger.log("saving model...")
166 | save_model(mp_trainer, opt, step + resume_step)
167 | dist.barrier()
168 |
169 |
170 | def set_annealed_lr(opt, base_lr, frac_done):
171 | lr = base_lr * (1 - frac_done)
172 | for param_group in opt.param_groups:
173 | param_group["lr"] = lr
174 |
175 |
176 | def save_model(mp_trainer, opt, step):
177 | if dist.get_rank() == 0:
178 | th.save(
179 | mp_trainer.master_params_to_state_dict(mp_trainer.master_params),
180 | os.path.join(logger.get_dir(), f"model{step:06d}.pt"),
181 | )
182 | th.save(opt.state_dict(), os.path.join(logger.get_dir(), f"opt{step:06d}.pt"))
183 |
184 |
185 | def compute_top_k(logits, labels, k, reduction="mean"):
186 | _, top_ks = th.topk(logits, k, dim=-1)
187 | if reduction == "mean":
188 | return (top_ks == labels[:, None]).float().sum(dim=-1).mean().item()
189 | elif reduction == "none":
190 | return (top_ks == labels[:, None]).float().sum(dim=-1)
191 |
192 |
193 | def split_microbatches(microbatch, *args):
194 | bs = len(args[0])
195 | if microbatch == -1 or microbatch >= bs:
196 | yield tuple(args)
197 | else:
198 | for i in range(0, bs, microbatch):
199 | yield tuple(x[i : i + microbatch] if x is not None else None for x in args)
200 |
201 |
202 | def create_argparser():
203 | defaults = dict(
204 | data_dir="",
205 | val_data_dir="",
206 | noised=True,
207 | iterations=150000,
208 | lr=3e-4,
209 | weight_decay=0.0,
210 | anneal_lr=False,
211 | batch_size=4,
212 | microbatch=-1,
213 | schedule_sampler="uniform",
214 | resume_checkpoint="",
215 | log_interval=10,
216 | eval_interval=5,
217 | save_interval=10000,
218 | )
219 | defaults.update(classifier_and_diffusion_defaults())
220 | parser = argparse.ArgumentParser()
221 | add_dict_to_argparser(parser, defaults)
222 | return parser
223 |
224 |
225 | if __name__ == "__main__":
226 | main()
227 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/scripts/image_nll.py:
--------------------------------------------------------------------------------
1 | """
2 | Approximate the bits/dimension for an image model.
3 | """
4 |
5 | import argparse
6 | import os
7 |
8 | import numpy as np
9 | import torch.distributed as dist
10 |
11 | from guided_diffusion import dist_util, logger
12 | from guided_diffusion.image_datasets import load_data
13 | from guided_diffusion.script_util import (
14 | model_and_diffusion_defaults,
15 | create_model_and_diffusion,
16 | add_dict_to_argparser,
17 | args_to_dict,
18 | )
19 |
20 |
21 | def main():
22 | args = create_argparser().parse_args()
23 |
24 | dist_util.setup_dist()
25 | logger.configure()
26 |
27 | logger.log("creating model and diffusion...")
28 | model, diffusion = create_model_and_diffusion(
29 | **args_to_dict(args, model_and_diffusion_defaults().keys())
30 | )
31 | model.load_state_dict(
32 | dist_util.load_state_dict(args.model_path, map_location="cpu")
33 | )
34 | model.to(dist_util.dev())
35 | model.eval()
36 |
37 | logger.log("creating data loader...")
38 | data = load_data(
39 | data_dir=args.data_dir,
40 | batch_size=args.batch_size,
41 | image_size=args.image_size,
42 | class_cond=args.class_cond,
43 | deterministic=True,
44 | )
45 |
46 | logger.log("evaluating...")
47 | run_bpd_evaluation(model, diffusion, data, args.num_samples, args.clip_denoised)
48 |
49 |
50 | def run_bpd_evaluation(model, diffusion, data, num_samples, clip_denoised):
51 | all_bpd = []
52 | all_metrics = {"vb": [], "mse": [], "xstart_mse": []}
53 | num_complete = 0
54 | while num_complete < num_samples:
55 | batch, model_kwargs = next(data)
56 | batch = batch.to(dist_util.dev())
57 | model_kwargs = {k: v.to(dist_util.dev()) for k, v in model_kwargs.items()}
58 | minibatch_metrics = diffusion.calc_bpd_loop(
59 | model, batch, clip_denoised=clip_denoised, model_kwargs=model_kwargs
60 | )
61 |
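        # dist.all_reduce sums across ranks, so dividing by the world size
        # first turns the reduction into an average over all ranks.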
62 | for key, term_list in all_metrics.items():
63 | terms = minibatch_metrics[key].mean(dim=0) / dist.get_world_size()
64 | dist.all_reduce(terms)
65 | term_list.append(terms.detach().cpu().numpy())
66 |
67 | total_bpd = minibatch_metrics["total_bpd"]
68 | total_bpd = total_bpd.mean() / dist.get_world_size()
69 | dist.all_reduce(total_bpd)
70 | all_bpd.append(total_bpd.item())
71 | num_complete += dist.get_world_size() * batch.shape[0]
72 |
73 | logger.log(f"done {num_complete} samples: bpd={np.mean(all_bpd)}")
74 |
75 | if dist.get_rank() == 0:
76 | for name, terms in all_metrics.items():
77 | out_path = os.path.join(logger.get_dir(), f"{name}_terms.npz")
78 | logger.log(f"saving {name} terms to {out_path}")
79 | np.savez(out_path, np.mean(np.stack(terms), axis=0))
80 |
81 | dist.barrier()
82 | logger.log("evaluation complete")
83 |
84 |
85 | def create_argparser():
86 | defaults = dict(
87 | data_dir="", clip_denoised=True, num_samples=1000, batch_size=1, model_path=""
88 | )
89 | defaults.update(model_and_diffusion_defaults())
90 | parser = argparse.ArgumentParser()
91 | add_dict_to_argparser(parser, defaults)
92 | return parser
93 |
94 |
95 | if __name__ == "__main__":
96 | main()
97 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/scripts/image_sample.py:
--------------------------------------------------------------------------------
1 | """
2 | Generate a large batch of image samples from a model and save them as a large
3 | numpy array. This can be used to produce samples for FID evaluation.
4 | """
5 |
6 | import argparse
7 | import os
8 |
9 | import numpy as np
10 | import torch as th
11 | import torch.distributed as dist
12 |
13 | from guided_diffusion import dist_util, logger
14 | from guided_diffusion.script_util import (
15 | NUM_CLASSES,
16 | model_and_diffusion_defaults,
17 | create_model_and_diffusion,
18 | add_dict_to_argparser,
19 | args_to_dict,
20 | )
21 |
22 |
23 | def main():
24 | args = create_argparser().parse_args()
25 |
26 | dist_util.setup_dist()
27 | logger.configure()
28 |
29 | logger.log("creating model and diffusion...")
30 | model, diffusion = create_model_and_diffusion(
31 | **args_to_dict(args, model_and_diffusion_defaults().keys())
32 | )
33 | model.load_state_dict(
34 | dist_util.load_state_dict(args.model_path, map_location="cpu")
35 | )
36 | model.to(dist_util.dev())
37 | if args.use_fp16:
38 | model.convert_to_fp16()
39 | model.eval()
40 |
41 | logger.log("sampling...")
42 | all_images = []
43 | all_labels = []
44 | while len(all_images) * args.batch_size < args.num_samples:
45 | model_kwargs = {}
46 | if args.class_cond:
47 | classes = th.randint(
48 | low=0, high=NUM_CLASSES, size=(args.batch_size,), device=dist_util.dev()
49 | )
50 | model_kwargs["y"] = classes
51 | sample_fn = (
52 | diffusion.p_sample_loop if not args.use_ddim else diffusion.ddim_sample_loop
53 | )
54 | sample = sample_fn(
55 | model,
56 | (args.batch_size, 3, args.image_size, args.image_size),
57 | clip_denoised=args.clip_denoised,
58 | model_kwargs=model_kwargs,
59 | )
60 | sample = ((sample + 1) * 127.5).clamp(0, 255).to(th.uint8)
61 | sample = sample.permute(0, 2, 3, 1)
62 | sample = sample.contiguous()
63 |
64 | gathered_samples = [th.zeros_like(sample) for _ in range(dist.get_world_size())]
65 | dist.all_gather(gathered_samples, sample) # gather not supported with NCCL
66 | all_images.extend([sample.cpu().numpy() for sample in gathered_samples])
67 | if args.class_cond:
68 | gathered_labels = [
69 | th.zeros_like(classes) for _ in range(dist.get_world_size())
70 | ]
71 | dist.all_gather(gathered_labels, classes)
72 | all_labels.extend([labels.cpu().numpy() for labels in gathered_labels])
73 | logger.log(f"created {len(all_images) * args.batch_size} samples")
74 |
75 | arr = np.concatenate(all_images, axis=0)
76 | arr = arr[: args.num_samples]
77 | if args.class_cond:
78 | label_arr = np.concatenate(all_labels, axis=0)
79 | label_arr = label_arr[: args.num_samples]
80 | if dist.get_rank() == 0:
81 | shape_str = "x".join([str(x) for x in arr.shape])
82 | out_path = os.path.join(logger.get_dir(), f"samples_{shape_str}.npz")
83 | logger.log(f"saving to {out_path}")
84 | if args.class_cond:
85 | np.savez(out_path, arr, label_arr)
86 | else:
87 | np.savez(out_path, arr)
88 |
89 | dist.barrier()
90 | logger.log("sampling complete")
91 |
92 |
93 | def create_argparser():
94 | defaults = dict(
95 | clip_denoised=True,
96 | num_samples=10000,
97 | batch_size=16,
98 | use_ddim=False,
99 | model_path="",
100 | )
101 | defaults.update(model_and_diffusion_defaults())
102 | parser = argparse.ArgumentParser()
103 | add_dict_to_argparser(parser, defaults)
104 | return parser
105 |
106 |
107 | if __name__ == "__main__":
108 | main()
109 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/scripts/image_train.py:
--------------------------------------------------------------------------------
1 | """
2 | Train a diffusion model on images.
3 | """
4 |
5 | import argparse
6 |
7 | from guided_diffusion import dist_util, logger
8 | from guided_diffusion.image_datasets import load_data
9 | from guided_diffusion.resample import create_named_schedule_sampler
10 | from guided_diffusion.script_util import (
11 | model_and_diffusion_defaults,
12 | create_model_and_diffusion,
13 | args_to_dict,
14 | add_dict_to_argparser,
15 | )
16 | from guided_diffusion.train_util import TrainLoop
17 |
18 |
19 | def main():
20 | args = create_argparser().parse_args()
21 |
22 | dist_util.setup_dist()
23 | logger.configure()
24 |
25 | logger.log("creating model and diffusion...")
26 | model, diffusion = create_model_and_diffusion(
27 | **args_to_dict(args, model_and_diffusion_defaults().keys())
28 | )
29 | model.to(dist_util.dev())
30 | schedule_sampler = create_named_schedule_sampler(args.schedule_sampler, diffusion)
31 |
32 | logger.log("creating data loader...")
33 | data = load_data(
34 | data_dir=args.data_dir,
35 | batch_size=args.batch_size,
36 | image_size=args.image_size,
37 | class_cond=args.class_cond,
38 | )
39 |
40 | logger.log("training...")
41 | TrainLoop(
42 | model=model,
43 | diffusion=diffusion,
44 | data=data,
45 | batch_size=args.batch_size,
46 | microbatch=args.microbatch,
47 | lr=args.lr,
48 | ema_rate=args.ema_rate,
49 | log_interval=args.log_interval,
50 | save_interval=args.save_interval,
51 | resume_checkpoint=args.resume_checkpoint,
52 | use_fp16=args.use_fp16,
53 | fp16_scale_growth=args.fp16_scale_growth,
54 | schedule_sampler=schedule_sampler,
55 | weight_decay=args.weight_decay,
56 | lr_anneal_steps=args.lr_anneal_steps,
57 | ).run_loop()
58 |
59 |
60 | def create_argparser():
61 | defaults = dict(
62 | data_dir="",
63 | schedule_sampler="uniform",
64 | lr=1e-4,
65 | weight_decay=0.0,
66 | lr_anneal_steps=0,
67 | batch_size=1,
68 | microbatch=-1, # -1 disables microbatches
69 | ema_rate="0.9999", # comma-separated list of EMA values
70 | log_interval=10,
71 | save_interval=10000,
72 | resume_checkpoint="",
73 | use_fp16=False,
74 | fp16_scale_growth=1e-3,
75 | )
76 | defaults.update(model_and_diffusion_defaults())
77 | parser = argparse.ArgumentParser()
78 | add_dict_to_argparser(parser, defaults)
79 | return parser
80 |
81 |
82 | if __name__ == "__main__":
83 | main()
84 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/scripts/super_res_sample.py:
--------------------------------------------------------------------------------
1 | """
2 | Generate a large batch of samples from a super resolution model, given a batch
3 | of samples from a regular model from image_sample.py.
4 | """
5 |
6 | import argparse
7 | import os
8 |
9 | import blobfile as bf
10 | import numpy as np
11 | import torch as th
12 | import torch.distributed as dist
13 |
14 | from guided_diffusion import dist_util, logger
15 | from guided_diffusion.script_util import (
16 | sr_model_and_diffusion_defaults,
17 | sr_create_model_and_diffusion,
18 | args_to_dict,
19 | add_dict_to_argparser,
20 | )
21 |
22 |
23 | def main():
24 | args = create_argparser().parse_args()
25 |
26 | dist_util.setup_dist()
27 | logger.configure()
28 |
29 | logger.log("creating model...")
30 | model, diffusion = sr_create_model_and_diffusion(
31 | **args_to_dict(args, sr_model_and_diffusion_defaults().keys())
32 | )
33 | model.load_state_dict(
34 | dist_util.load_state_dict(args.model_path, map_location="cpu")
35 | )
36 | model.to(dist_util.dev())
37 | if args.use_fp16:
38 | model.convert_to_fp16()
39 | model.eval()
40 |
41 | logger.log("loading data...")
42 | data = load_data_for_worker(args.base_samples, args.batch_size, args.class_cond)
43 |
44 | logger.log("creating samples...")
45 | all_images = []
46 | while len(all_images) * args.batch_size < args.num_samples:
47 | model_kwargs = next(data)
48 | model_kwargs = {k: v.to(dist_util.dev()) for k, v in model_kwargs.items()}
49 | sample = diffusion.p_sample_loop(
50 | model,
51 | (args.batch_size, 3, args.large_size, args.large_size),
52 | clip_denoised=args.clip_denoised,
53 | model_kwargs=model_kwargs,
54 | )
55 | sample = ((sample + 1) * 127.5).clamp(0, 255).to(th.uint8)
56 | sample = sample.permute(0, 2, 3, 1)
57 | sample = sample.contiguous()
58 |
59 | all_samples = [th.zeros_like(sample) for _ in range(dist.get_world_size())]
60 | dist.all_gather(all_samples, sample) # gather not supported with NCCL
61 | for sample in all_samples:
62 | all_images.append(sample.cpu().numpy())
63 | logger.log(f"created {len(all_images) * args.batch_size} samples")
64 |
65 | arr = np.concatenate(all_images, axis=0)
66 | arr = arr[: args.num_samples]
67 | if dist.get_rank() == 0:
68 | shape_str = "x".join([str(x) for x in arr.shape])
69 | out_path = os.path.join(logger.get_dir(), f"samples_{shape_str}.npz")
70 | logger.log(f"saving to {out_path}")
71 | np.savez(out_path, arr)
72 |
73 | dist.barrier()
74 | logger.log("sampling complete")
75 |
76 |
77 | def load_data_for_worker(base_samples, batch_size, class_cond):
78 | with bf.BlobFile(base_samples, "rb") as f:
79 | obj = np.load(f)
80 | image_arr = obj["arr_0"]
81 | if class_cond:
82 | label_arr = obj["arr_1"]
83 | rank = dist.get_rank()
84 | num_ranks = dist.get_world_size()
85 | buffer = []
86 | label_buffer = []
87 | while True:
88 | for i in range(rank, len(image_arr), num_ranks):
89 | buffer.append(image_arr[i])
90 | if class_cond:
91 | label_buffer.append(label_arr[i])
92 | if len(buffer) == batch_size:
93 | batch = th.from_numpy(np.stack(buffer)).float()
94 | batch = batch / 127.5 - 1.0
95 | batch = batch.permute(0, 3, 1, 2)
96 | res = dict(low_res=batch)
97 | if class_cond:
98 | res["y"] = th.from_numpy(np.stack(label_buffer))
99 | yield res
100 | buffer, label_buffer = [], []
101 |
102 |
103 | def create_argparser():
104 | defaults = dict(
105 | clip_denoised=True,
106 | num_samples=10000,
107 | batch_size=16,
108 | use_ddim=False,
109 | base_samples="",
110 | model_path="",
111 | )
112 | defaults.update(sr_model_and_diffusion_defaults())
113 | parser = argparse.ArgumentParser()
114 | add_dict_to_argparser(parser, defaults)
115 | return parser
116 |
117 |
118 | if __name__ == "__main__":
119 | main()
120 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/scripts/super_res_train.py:
--------------------------------------------------------------------------------
1 | """
2 | Train a super-resolution model.
3 | """
4 |
5 | import argparse
6 |
7 | import torch.nn.functional as F
8 |
9 | from guided_diffusion import dist_util, logger
10 | from guided_diffusion.image_datasets import load_data
11 | from guided_diffusion.resample import create_named_schedule_sampler
12 | from guided_diffusion.script_util import (
13 | sr_model_and_diffusion_defaults,
14 | sr_create_model_and_diffusion,
15 | args_to_dict,
16 | add_dict_to_argparser,
17 | )
18 | from guided_diffusion.train_util import TrainLoop
19 |
20 |
21 | def main():
22 | args = create_argparser().parse_args()
23 |
24 | dist_util.setup_dist()
25 | logger.configure()
26 |
27 | logger.log("creating model...")
28 | model, diffusion = sr_create_model_and_diffusion(
29 | **args_to_dict(args, sr_model_and_diffusion_defaults().keys())
30 | )
31 | model.to(dist_util.dev())
32 | schedule_sampler = create_named_schedule_sampler(args.schedule_sampler, diffusion)
33 |
34 | logger.log("creating data loader...")
35 | data = load_superres_data(
36 | args.data_dir,
37 | args.batch_size,
38 | large_size=args.large_size,
39 | small_size=args.small_size,
40 | class_cond=args.class_cond,
41 | )
42 |
43 | logger.log("training...")
44 | TrainLoop(
45 | model=model,
46 | diffusion=diffusion,
47 | data=data,
48 | batch_size=args.batch_size,
49 | microbatch=args.microbatch,
50 | lr=args.lr,
51 | ema_rate=args.ema_rate,
52 | log_interval=args.log_interval,
53 | save_interval=args.save_interval,
54 | resume_checkpoint=args.resume_checkpoint,
55 | use_fp16=args.use_fp16,
56 | fp16_scale_growth=args.fp16_scale_growth,
57 | schedule_sampler=schedule_sampler,
58 | weight_decay=args.weight_decay,
59 | lr_anneal_steps=args.lr_anneal_steps,
60 | ).run_loop()
61 |
62 |
63 | def load_superres_data(data_dir, batch_size, large_size, small_size, class_cond=False):
64 | data = load_data(
65 | data_dir=data_dir,
66 | batch_size=batch_size,
67 | image_size=large_size,
68 | class_cond=class_cond,
69 | )
70 | for large_batch, model_kwargs in data:
71 | model_kwargs["low_res"] = F.interpolate(large_batch, small_size, mode="area")
72 | yield large_batch, model_kwargs
73 |
74 |
75 | def create_argparser():
76 | defaults = dict(
77 | data_dir="",
78 | schedule_sampler="uniform",
79 | lr=1e-4,
80 | weight_decay=0.0,
81 | lr_anneal_steps=0,
82 | batch_size=1,
83 | microbatch=-1,
84 | ema_rate="0.9999",
85 | log_interval=10,
86 | save_interval=10000,
87 | resume_checkpoint="",
88 | use_fp16=False,
89 | fp16_scale_growth=1e-3,
90 | )
91 | defaults.update(sr_model_and_diffusion_defaults())
92 | parser = argparse.ArgumentParser()
93 | add_dict_to_argparser(parser, defaults)
94 | return parser
95 |
96 |
97 | if __name__ == "__main__":
98 | main()
99 |
--------------------------------------------------------------------------------
/generation/DiT/guided_diffusion/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup
2 |
3 | setup(
4 | name="guided-diffusion",
5 | py_modules=["guided_diffusion"],
6 | install_requires=["blobfile>=1.0.5", "torch", "tqdm"],
7 | )
8 |
--------------------------------------------------------------------------------
/generation/DiT/sample.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) Meta Platforms, Inc. and affiliates.
2 | # All rights reserved.
3 |
4 | # This source code is licensed under the license found in the
5 | # LICENSE file in the root directory of this source tree.
6 |
7 | """
8 | Sample new images from a pre-trained DiT.
9 | """
10 | import torch
11 | torch.backends.cuda.matmul.allow_tf32 = True
12 | torch.backends.cudnn.allow_tf32 = True
13 | from torchvision.utils import save_image
14 | from diffusion import create_diffusion
15 | from diffusers.models import AutoencoderKL
16 | from download import find_model
17 | from models import DiT_models
18 | import argparse
19 |
20 |
21 | def main(args):
22 | # Setup PyTorch:
23 | torch.manual_seed(args.seed)
24 | torch.set_grad_enabled(False)
25 | device = "cuda" if torch.cuda.is_available() else "cpu"
26 |
27 | if args.ckpt is None:
28 | assert args.model == "DiT-XL/2", "Only DiT-XL/2 models are available for auto-download."
29 | assert args.image_size in [256, 512]
30 | assert args.num_classes == 1000
31 |
32 | # Load model:
33 | latent_size = args.image_size // 8
34 | model = DiT_models[args.model](
35 | input_size=latent_size,
36 | num_classes=args.num_classes
37 | ).to(device)
38 | # Auto-download a pre-trained model or load a custom DiT checkpoint from train.py:
39 | ckpt_path = args.ckpt or f"DiT-XL-2-{args.image_size}x{args.image_size}.pt"
40 | state_dict = find_model(ckpt_path)
41 | model.load_state_dict(state_dict)
42 | model.eval() # important!
43 | diffusion = create_diffusion(str(args.num_sampling_steps))
44 | vae = AutoencoderKL.from_pretrained(f"stabilityai/sd-vae-ft-{args.vae}").to(device)
45 |
46 | # Labels to condition the model with (feel free to change):
47 | class_labels = [207, 360, 387, 974, 88, 979, 417, 279]
48 |
49 | # Create sampling noise:
50 | n = len(class_labels)
51 | z = torch.randn(n, 4, latent_size, latent_size, device=device)
52 | y = torch.tensor(class_labels, device=device)
53 |
54 | # Setup classifier-free guidance:
55 | z = torch.cat([z, z], 0)
56 | y_null = torch.tensor([1000] * n, device=device)
57 | y = torch.cat([y, y_null], 0)
58 | model_kwargs = dict(y=y, cfg_scale=args.cfg_scale)
59 |
60 | # Sample images:
61 | samples = diffusion.p_sample_loop(
62 | model.forward_with_cfg, z.shape, z, clip_denoised=False, model_kwargs=model_kwargs, progress=True, device=device
63 | )
64 | samples, _ = samples.chunk(2, dim=0) # Remove null class samples
65 | samples = vae.decode(samples / 0.18215).sample
66 |
67 | # Save and display images:
68 | save_image(samples, "sample.png", nrow=4, normalize=True, value_range=(-1, 1))
69 |
70 |
71 | if __name__ == "__main__":
72 | parser = argparse.ArgumentParser()
73 | parser.add_argument("--model", type=str, choices=list(DiT_models.keys()), default="DiT-XL/2")
74 | parser.add_argument("--vae", type=str, choices=["ema", "mse"], default="mse")
75 | parser.add_argument("--image-size", type=int, choices=[256, 512], default=256)
76 | parser.add_argument("--num-classes", type=int, default=1000)
77 | parser.add_argument("--cfg-scale", type=float, default=4.0)
78 | parser.add_argument("--num-sampling-steps", type=int, default=250)
79 | parser.add_argument("--seed", type=int, default=0)
80 | parser.add_argument("--ckpt", type=str, default=None,
81 | help="Optional path to a DiT checkpoint (default: auto-download a pre-trained DiT-XL/2 model).")
82 | args = parser.parse_args()
83 | main(args)
84 |
--------------------------------------------------------------------------------
/generation/DiT/sample_ddp.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) Meta Platforms, Inc. and affiliates.
2 | # All rights reserved.
3 |
4 | # This source code is licensed under the license found in the
5 | # LICENSE file in the root directory of this source tree.
6 |
7 | """
8 | Samples a large number of images from a pre-trained DiT model using DDP.
9 | Subsequently saves a .npz file that can be used to compute FID and other
10 | evaluation metrics via the ADM repo: https://github.com/openai/guided-diffusion/tree/main/evaluations
11 |
12 | For a simple single-GPU/CPU sampling script, see sample.py.
13 | """
14 | import torch
15 | import torch.distributed as dist
16 | from models import DiT_models
17 | from download import find_model
18 | from diffusion import create_diffusion
19 | from diffusers.models import AutoencoderKL
20 | from tqdm import tqdm
21 | import os
22 | from PIL import Image
23 | import numpy as np
24 | import math
25 | import argparse
26 |
27 |
28 | def create_npz_from_sample_folder(sample_dir, num=50_000):
29 | """
30 | Builds a single .npz file from a folder of .png samples.
31 | """
32 | samples = []
33 | for i in tqdm(range(num), desc="Building .npz file from samples"):
34 | sample_pil = Image.open(f"{sample_dir}/{i:06d}.png")
35 | sample_np = np.asarray(sample_pil).astype(np.uint8)
36 | samples.append(sample_np)
37 | samples = np.stack(samples)
38 | assert samples.shape == (num, samples.shape[1], samples.shape[2], 3)
39 | npz_path = f"{sample_dir}.npz"
40 | np.savez(npz_path, arr_0=samples)
41 | print(f"Saved .npz file to {npz_path} [shape={samples.shape}].")
42 | return npz_path
43 |
44 |
45 | def main(args):
46 | """
47 | Run sampling.
48 | """
49 | torch.backends.cuda.matmul.allow_tf32 = args.tf32 # True: fast but may lead to some small numerical differences
50 | assert torch.cuda.is_available(), "Sampling with DDP requires at least one GPU. sample.py supports CPU-only usage"
51 | torch.set_grad_enabled(False)
52 |
53 | # Setup DDP:
54 | dist.init_process_group("nccl")
55 | rank = dist.get_rank()
56 | device = rank % torch.cuda.device_count()
57 | seed = args.global_seed * dist.get_world_size() + rank
58 | torch.manual_seed(seed)
59 | torch.cuda.set_device(device)
60 | print(f"Starting rank={rank}, seed={seed}, world_size={dist.get_world_size()}.")
61 |
62 | if args.ckpt is None:
63 | assert args.model == "DiT-XL/2", "Only DiT-XL/2 models are available for auto-download."
64 | assert args.image_size in [256, 512]
65 | assert args.num_classes == 1000
66 |
67 | # Load model:
68 | latent_size = args.image_size // 8
69 | model = DiT_models[args.model](
70 | input_size=latent_size,
71 | num_classes=args.num_classes
72 | ).to(device)
73 | # Auto-download a pre-trained model or load a custom DiT checkpoint from train.py:
74 | ckpt_path = args.ckpt or f"DiT-XL-2-{args.image_size}x{args.image_size}.pt"
75 | state_dict = find_model(ckpt_path)
76 | model.load_state_dict(state_dict)
77 | model.eval() # important!
78 | diffusion = create_diffusion(str(args.num_sampling_steps))
79 |     vae = AutoencoderKL.from_pretrained(f"stabilityai/sd-vae-ft-{args.vae}").to(device)
80 |     assert args.cfg_scale >= 1.0, "In almost all cases, cfg_scale should be >= 1.0"
81 | using_cfg = args.cfg_scale > 1.0
82 |
83 | # Create folder to save samples:
84 | model_string_name = args.model.replace("/", "-")
85 | ckpt_string_name = os.path.basename(args.ckpt).replace(".pt", "") if args.ckpt else "pretrained"
86 | folder_name = f"{model_string_name}-{ckpt_string_name}-size-{args.image_size}-vae-{args.vae}-" \
87 | f"cfg-{args.cfg_scale}-seed-{args.global_seed}"
88 | sample_folder_dir = f"{args.sample_dir}/{folder_name}"
89 | if rank == 0:
90 | os.makedirs(sample_folder_dir, exist_ok=True)
91 | print(f"Saving .png samples at {sample_folder_dir}")
92 | dist.barrier()
93 |
94 |     if len(os.listdir(sample_folder_dir)) == 0:  # only sample if no results are present yet
95 |
96 |
97 | # Figure out how many samples we need to generate on each GPU and how many iterations we need to run:
98 | n = args.per_proc_batch_size
99 | global_batch_size = n * dist.get_world_size()
100 | # To make things evenly-divisible, we'll sample a bit more than we need and then discard the extra samples:
101 | total_samples = int(math.ceil(args.num_fid_samples / global_batch_size) * global_batch_size)
102 | if rank == 0:
103 | print(f"Total number of images that will be sampled: {total_samples}")
104 | assert total_samples % dist.get_world_size() == 0, "total_samples must be divisible by world_size"
105 | samples_needed_this_gpu = int(total_samples // dist.get_world_size())
106 | assert samples_needed_this_gpu % n == 0, "samples_needed_this_gpu must be divisible by the per-GPU batch size"
107 | iterations = int(samples_needed_this_gpu // n)
108 | pbar = range(iterations)
109 | pbar = tqdm(pbar) if rank == 0 else pbar
110 | total = 0
111 | for _ in pbar:
112 | # Sample inputs:
113 | z = torch.randn(n, model.in_channels, latent_size, latent_size, device=device)
114 | y = torch.randint(0, args.num_classes, (n,), device=device)
115 |
116 | # Setup classifier-free guidance:
117 | if using_cfg:
118 | z = torch.cat([z, z], 0)
119 | y_null = torch.tensor([1000] * n, device=device)
120 | y = torch.cat([y, y_null], 0)
121 | model_kwargs = dict(y=y, cfg_scale=args.cfg_scale)
122 | sample_fn = model.forward_with_cfg
123 | else:
124 | model_kwargs = dict(y=y)
125 | sample_fn = model.forward
126 |
127 | # Sample images:
128 | samples = diffusion.p_sample_loop(
129 | sample_fn, z.shape, z, clip_denoised=False, model_kwargs=model_kwargs, progress=False, device=device
130 | )
131 | if using_cfg:
132 | samples, _ = samples.chunk(2, dim=0) # Remove null class samples
133 |
134 | samples = vae.decode(samples / 0.18215).sample
135 | samples = torch.clamp(127.5 * samples + 128.0, 0, 255).permute(0, 2, 3, 1).to("cpu", dtype=torch.uint8).numpy()
136 |
137 | # Save samples to disk as individual .png files
138 | for i, sample in enumerate(samples):
139 | index = i * dist.get_world_size() + rank + total
140 | Image.fromarray(sample).save(f"{sample_folder_dir}/{index:06d}.png")
141 | total += global_batch_size
142 | else:
143 |         print("Results already exist! Building .npz now.")
144 |
145 | # Make sure all processes have finished saving their samples before attempting to convert to .npz
146 | dist.barrier()
147 | if rank == 0:
148 | create_npz_from_sample_folder(sample_folder_dir, args.num_fid_samples)
149 | print("Done.")
150 | dist.barrier()
151 | dist.destroy_process_group()
152 |
153 |
154 | if __name__ == "__main__":
155 | parser = argparse.ArgumentParser()
156 | parser.add_argument("--model", type=str, choices=list(DiT_models.keys()), default="DiT-XL/2")
157 | parser.add_argument("--vae", type=str, choices=["ema", "mse"], default="ema")
158 | parser.add_argument("--sample-dir", type=str, default="samples")
159 | parser.add_argument("--per-proc-batch-size", type=int, default=128)
160 | parser.add_argument("--num-fid-samples", type=int, default=50_000)
161 | parser.add_argument("--image-size", type=int, choices=[256, 512], default=256)
162 | parser.add_argument("--num-classes", type=int, default=1000)
163 | parser.add_argument("--cfg-scale", type=float, default=1.0)
164 | parser.add_argument("--num-sampling-steps", type=int, default=250)
165 | parser.add_argument("--global-seed", type=int, default=0)
166 | parser.add_argument("--tf32", action=argparse.BooleanOptionalAction, default=True,
167 | help="By default, use TF32 matmuls. This massively accelerates sampling on Ampere GPUs.")
168 | parser.add_argument("--ckpt", type=str, default=None,
169 | help="Optional path to a DiT checkpoint (default: auto-download a pre-trained DiT-XL/2 model).")
170 | args = parser.parse_args()
171 | main(args)
172 |
--------------------------------------------------------------------------------
/generation/DiT/sample_dit.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | model="DiT-XL/2-VAE-simple"
3 | ckpt="/PATH/TO/YOUR/FINETUNE_CHECKPOINT"
4 | sample_dir="./samples"
5 | cfg="1.0"
6 |
7 | torchrun --master_addr ${MASTER_ADDR} --master-port ${MASTER_PORT} \
8 | --nnodes ${WORLD_SIZE} --node_rank ${RANK} --nproc-per-node=${GPUS} sample_ddp.py \
9 | --model ${model} \
10 | --num-fid-samples 50000 \
11 | --ckpt ${ckpt} \
12 | --sample-dir ${sample_dir} \
13 | --per-proc-batch-size 128 \
14 | --cfg-scale ${cfg}
--------------------------------------------------------------------------------
/generation/DiT/train_dit.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | model=DiT-XL/2-VAE-simple # "DiT-XL/2-VAE-simple", "DiT-L/2-VAE-simple", "DiT-B/2-VAE-simple"
3 | data_path=/PATH/TO/YOUR/imagenet-1k/train
4 | finetune=/PATH/TO/YOUR/PRETRAINED_CHECKPOINT
5 | exp_name="DiT-XL/2-VAE-simple"
6 | global_batch_size="256"
7 |
8 | torchrun --master_addr ${MASTER_ADDR} --master-port ${MASTER_PORT} \
9 | --nnodes ${WORLD_SIZE} --node_rank ${RANK} --nproc-per-node=${GPUS} train.py \
10 | --model ${model} \
11 | --data-path ${data_path} \
12 | --finetune ${finetune} \
13 | --exp-name ${exp_name} \
14 | --global-batch-size ${global_batch_size}
15 |
16 |
17 |
--------------------------------------------------------------------------------
/generation/GENERATION.md:
--------------------------------------------------------------------------------
1 | # For DiTs
2 |
3 | ```bash
4 | cd generation/DiT
5 | ```
6 |
7 | **Training**
8 | ```bash
9 | sh train_dit.sh
10 | ```
11 | **Note:**
12 | - ${model}: "DiT-XL/2-VAE-simple", "DiT-L/2-VAE-simple", "DiT-B/2-VAE-simple".
13 | - During training, sampling occurs every `${eval-every}` steps, and the results are saved as an **NPZ file** for evaluation. You can also use the **script below** to sample from any saved fine-tuned weights.
14 | - The default ${global-batch-size} is 256. A single-node launch example is sketched below.
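
As a minimal sketch (assuming a single-node job; `MASTER_ADDR`, `MASTER_PORT`, `WORLD_SIZE`, `RANK`, and `GPUS` are the launcher variables that `train_dit.sh` reads from the environment, typically provided by your job scheduler):

```bash
# Hypothetical single-node launch with 8 GPUs; adjust to your setup.
export MASTER_ADDR=127.0.0.1
export MASTER_PORT=29500
export WORLD_SIZE=1   # number of nodes (passed to --nnodes)
export RANK=0         # rank of this node (passed to --node_rank)
export GPUS=8         # GPUs per node (passed to --nproc-per-node)
sh train_dit.sh
```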
15 |
16 | **Infer**
17 | ```bash
18 | sh sample_dit.sh
19 | ```
20 | - Samples using the weights saved during the **fine-tuning process** and saves the results as an **NPZ file**, which can be used for computing evaluation metrics.
21 |
22 | **Eval**
23 | ```bash
24 | sh eval_dit.sh
25 | ```
26 | **Note:**
27 | - Following [DiT](https://github.com/facebookresearch/DiT), we use [ADM's TensorFlow evaluation suite](https://github.com/openai/guided-diffusion/tree/main/evaluations) to compute FID, Inception Score, and other metrics.
28 | - VIRTUAL_imagenet256_labeled.npz can be downloaded from [ADM's TensorFlow evaluation suite](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/imagenet/256/VIRTUAL_imagenet256_labeled.npz). A sketch of the evaluation command is shown below.
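
As a sketch of the evaluation step (assuming ADM's `evaluator.py` from the bundled `guided_diffusion/evaluations` directory with its TensorFlow requirements installed; the sample path is a placeholder for the NPZ produced by `sample_dit.sh`):

```bash
# Hypothetical paths; evaluator.py expects the reference batch first,
# then the generated sample batch.
python evaluator.py \
  VIRTUAL_imagenet256_labeled.npz \
  /PATH/TO/YOUR/SAMPLES.npz
```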
29 |
30 |
31 | # For SiTs
32 |
33 | ```bash
34 | cd generation/SiT
35 | ```
36 |
37 | **Training**
38 | ```bash
39 | sh train_sit.sh
40 | ```
41 | **Note:**
42 | - ${model}: "SiT-XL/2-VAE-simple", "SiT-B/2-VAE-simple".
43 | - During training, sampling occurs every `${eval-every}` steps, and the results are saved as an **NPZ file** for evaluation. You can also use the **script below** to sample from any saved fine-tuned weights.
44 | - The default ${global-batch-size} is 256. The single-node launch example shown for DiT applies here as well, substituting `train_sit.sh`.
45 |
46 | **Infer**
47 | ```bash
48 | sh sample_sit.sh
49 | ```
50 | - Samples using the weights saved during the **fine-tuning process** and saves the results as an **NPZ file**, which can be used for computing evaluation metrics.
51 |
52 | **Eval**
53 | - Same as the evaluation for DiT: point ADM's evaluator at the NPZ file produced by `sample_sit.sh`.
--------------------------------------------------------------------------------
/generation/SiT/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
2 | wandb
3 |
4 | .DS_store
5 | samples
6 | results
7 | pretrained_models
--------------------------------------------------------------------------------
/generation/SiT/LICENSE.txt:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) Meta Platforms, Inc. and affiliates.
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/generation/SiT/download.py:
--------------------------------------------------------------------------------
1 | # This source code is licensed under the license found in the
2 | # LICENSE file in the root directory of this source tree.
3 |
4 | """
5 | Functions for downloading pre-trained SiT models
6 | """
7 | from torchvision.datasets.utils import download_url
8 | import torch
9 | import os
10 |
11 |
12 | pretrained_models = {'SiT-XL-2-256x256.pt'}
13 |
14 |
15 | def find_model(model_name):
16 | """
17 | Finds a pre-trained SiT model, downloading it if necessary. Alternatively, loads a model from a local path.
18 | """
19 | if model_name in pretrained_models:
20 | return download_model(model_name)
21 | else:
22 | assert os.path.isfile(model_name), f'Could not find SiT checkpoint at {model_name}'
23 | checkpoint = torch.load(model_name, map_location=lambda storage, loc: storage)
24 | if "ema" in checkpoint: # supports checkpoints from train.py
25 | checkpoint = checkpoint["ema"]
26 | return checkpoint
27 |
28 |
29 | def download_model(model_name):
30 | """
31 | Downloads a pre-trained SiT model from the web.
32 | """
33 | assert model_name in pretrained_models
34 | local_path = f'pretrained_models/{model_name}'
35 | if not os.path.isfile(local_path):
36 | os.makedirs('pretrained_models', exist_ok=True)
37 |         web_path = 'https://www.dl.dropboxusercontent.com/scl/fi/as9oeomcbub47de5g4be0/SiT-XL-2-256.pt?rlkey=uxzxmpicu46coq3msb17b9ofa&dl=0'
38 | download_url(web_path, 'pretrained_models', filename=model_name)
39 | model = torch.load(local_path, map_location=lambda storage, loc: storage)
40 | return model
41 |
--------------------------------------------------------------------------------
/generation/SiT/environment.yml:
--------------------------------------------------------------------------------
1 | name: SiT
2 | channels:
3 | - pytorch
4 | - nvidia
5 | dependencies:
6 | - python >= 3.8
7 | - pytorch >= 1.13
8 | - torchvision
9 | - pytorch-cuda >=11.7
10 | - pip
11 | - pip:
12 | - timm
13 | - diffusers
14 | - accelerate
15 | - torchdiffeq
16 | - wandb
17 |
--------------------------------------------------------------------------------
/generation/SiT/sample.py:
--------------------------------------------------------------------------------
1 | # This source code is licensed under the license found in the
2 | # LICENSE file in the root directory of this source tree.
3 |
4 | """
5 | Sample new images from a pre-trained SiT.
6 | """
7 | import torch
8 | torch.backends.cuda.matmul.allow_tf32 = True
9 | torch.backends.cudnn.allow_tf32 = True
10 | from torchvision.utils import save_image
11 | from diffusers.models import AutoencoderKL
12 | from download import find_model
13 | from models import SiT_models
14 | from train_utils import parse_ode_args, parse_sde_args, parse_transport_args
15 | from transport import create_transport, Sampler
16 | import argparse
17 | import sys
18 | from time import time
19 |
20 |
21 | def main(mode, args):
22 | # Setup PyTorch:
23 | torch.manual_seed(args.seed)
24 | torch.set_grad_enabled(False)
25 | device = "cuda" if torch.cuda.is_available() else "cpu"
26 |
27 | if args.ckpt is None:
28 | assert args.model == "SiT-XL/2", "Only SiT-XL/2 models are available for auto-download."
29 | assert args.image_size in [256, 512]
30 | assert args.num_classes == 1000
31 | assert args.image_size == 256, "512x512 models are not yet available for auto-download." # remove this line when 512x512 models are available
32 | learn_sigma = args.image_size == 256
33 | else:
34 | learn_sigma = False
35 |
36 | # Load model:
37 | latent_size = args.image_size // 8
38 | model = SiT_models[args.model](
39 | input_size=latent_size,
40 | num_classes=args.num_classes,
41 | learn_sigma=learn_sigma,
42 | ).to(device)
43 | # Auto-download a pre-trained model or load a custom SiT checkpoint from train.py:
44 | ckpt_path = args.ckpt or f"SiT-XL-2-{args.image_size}x{args.image_size}.pt"
45 | state_dict = find_model(ckpt_path)
46 | model.load_state_dict(state_dict)
47 | model.eval() # important!
48 | transport = create_transport(
49 | args.path_type,
50 | args.prediction,
51 | args.loss_weight,
52 | args.train_eps,
53 | args.sample_eps
54 | )
55 | sampler = Sampler(transport)
56 | if mode == "ODE":
57 | if args.likelihood:
58 | assert args.cfg_scale == 1, "Likelihood is incompatible with guidance"
59 | sample_fn = sampler.sample_ode_likelihood(
60 | sampling_method=args.sampling_method,
61 | num_steps=args.num_sampling_steps,
62 | atol=args.atol,
63 | rtol=args.rtol,
64 | )
65 | else:
66 | sample_fn = sampler.sample_ode(
67 | sampling_method=args.sampling_method,
68 | num_steps=args.num_sampling_steps,
69 | atol=args.atol,
70 | rtol=args.rtol,
71 | reverse=args.reverse
72 | )
73 |
74 | elif mode == "SDE":
75 | sample_fn = sampler.sample_sde(
76 | sampling_method=args.sampling_method,
77 | diffusion_form=args.diffusion_form,
78 | diffusion_norm=args.diffusion_norm,
79 | last_step=args.last_step,
80 | last_step_size=args.last_step_size,
81 | num_steps=args.num_sampling_steps,
82 | )
83 |
84 |
85 | vae = AutoencoderKL.from_pretrained(f"stabilityai/sd-vae-ft-{args.vae}").to(device)
86 |
87 | # Labels to condition the model with (feel free to change):
88 | class_labels = [207, 360, 387, 974, 88, 979, 417, 279]
89 |
90 | # Create sampling noise:
91 | n = len(class_labels)
92 | z = torch.randn(n, 4, latent_size, latent_size, device=device)
93 | y = torch.tensor(class_labels, device=device)
94 |
95 | # Setup classifier-free guidance:
96 | z = torch.cat([z, z], 0)
97 | y_null = torch.tensor([1000] * n, device=device)
98 | y = torch.cat([y, y_null], 0)
99 | model_kwargs = dict(y=y, cfg_scale=args.cfg_scale)
100 |
101 | # Sample images:
102 | start_time = time()
103 | samples = sample_fn(z, model.forward_with_cfg, **model_kwargs)[-1]
104 | samples, _ = samples.chunk(2, dim=0) # Remove null class samples
105 | samples = vae.decode(samples / 0.18215).sample
106 | print(f"Sampling took {time() - start_time:.2f} seconds.")
107 |
108 | # Save and display images:
109 | save_image(samples, "sample.png", nrow=4, normalize=True, value_range=(-1, 1))
110 |
111 |
112 | if __name__ == "__main__":
113 | parser = argparse.ArgumentParser()
114 |
115 | if len(sys.argv) < 2:
116 | print("Usage: program.py [options]")
117 | sys.exit(1)
118 |
119 | mode = sys.argv[1]
120 |
121 | assert mode[:2] != "--", "Usage: program.py [options]"
122 | assert mode in ["ODE", "SDE"], "Invalid mode. Please choose 'ODE' or 'SDE'"
123 |
124 | parser.add_argument("--model", type=str, choices=list(SiT_models.keys()), default="SiT-XL/2")
125 | parser.add_argument("--vae", type=str, choices=["ema", "mse"], default="mse")
126 | parser.add_argument("--image-size", type=int, choices=[256, 512], default=256)
127 | parser.add_argument("--num-classes", type=int, default=1000)
128 | parser.add_argument("--cfg-scale", type=float, default=4.0)
129 | parser.add_argument("--num-sampling-steps", type=int, default=250)
130 | parser.add_argument("--seed", type=int, default=0)
131 | parser.add_argument("--ckpt", type=str, default=None,
132 | help="Optional path to a SiT checkpoint (default: auto-download a pre-trained SiT-XL/2 model).")
133 |
134 |
135 | parse_transport_args(parser)
136 | if mode == "ODE":
137 | parse_ode_args(parser)
138 | # Further processing for ODE
139 | elif mode == "SDE":
140 | parse_sde_args(parser)
141 | # Further processing for SDE
142 |
143 | args = parser.parse_known_args()[0]
144 | main(mode, args)
145 |
--------------------------------------------------------------------------------
/generation/SiT/sample_ddp.py:
--------------------------------------------------------------------------------
1 | # This source code is licensed under the license found in the
2 | # LICENSE file in the root directory of this source tree.
3 |
4 | """
5 | Samples a large number of images from a pre-trained SiT model using DDP.
6 | Subsequently saves a .npz file that can be used to compute FID and other
7 | evaluation metrics via the ADM repo: https://github.com/openai/guided-diffusion/tree/main/evaluations
8 |
9 | For a simple single-GPU/CPU sampling script, see sample.py.
10 | """
11 | import torch
12 | import torch.distributed as dist
13 | from models import SiT_models
14 | from download import find_model
15 | from transport import create_transport, Sampler
16 | from diffusers.models import AutoencoderKL
17 | from train_utils import parse_ode_args, parse_sde_args, parse_transport_args
18 | from tqdm import tqdm
19 | import os
20 | from PIL import Image
21 | import numpy as np
22 | import math
23 | import argparse
24 | import sys
25 | from datetime import timedelta
26 |
27 |
28 | def create_npz_from_sample_folder(sample_dir, num=50_000):
29 | """
30 | Builds a single .npz file from a folder of .png samples.
31 | """
32 | samples = []
33 | for i in tqdm(range(num), desc="Building .npz file from samples"):
34 | sample_pil = Image.open(f"{sample_dir}/{i:06d}.png")
35 | sample_np = np.asarray(sample_pil).astype(np.uint8)
36 | samples.append(sample_np)
37 | samples = np.stack(samples)
38 | assert samples.shape == (num, samples.shape[1], samples.shape[2], 3)
39 | npz_path = f"{sample_dir}.npz"
40 | np.savez(npz_path, arr_0=samples)
41 | print(f"Saved .npz file to {npz_path} [shape={samples.shape}].")
42 | return npz_path
43 |
44 |
45 | def main(mode, args):
46 | """
47 | Run sampling.
48 | """
49 | torch.backends.cuda.matmul.allow_tf32 = args.tf32 # True: fast but may lead to some small numerical differences
50 | assert torch.cuda.is_available(), "Sampling with DDP requires at least one GPU. sample.py supports CPU-only usage"
51 | torch.set_grad_enabled(False)
52 |
53 | # Setup DDP:
54 | dist.init_process_group("nccl", timeout=timedelta(hours=1))
55 | rank = dist.get_rank()
56 | device = rank % torch.cuda.device_count()
57 | seed = args.global_seed * dist.get_world_size() + rank
58 | torch.manual_seed(seed)
59 | torch.cuda.set_device(device)
60 | print(f"Starting rank={rank}, seed={seed}, world_size={dist.get_world_size()}.")
61 |
62 | if args.ckpt is None:
63 | assert args.model == "SiT-XL/2", "Only SiT-XL/2 models are available for auto-download."
64 | assert args.image_size in [256, 512]
65 | assert args.num_classes == 1000
66 | assert args.image_size == 256, "512x512 models are not yet available for auto-download." # remove this line when 512x512 models are available
67 | learn_sigma = args.image_size == 256
68 | else:
69 | learn_sigma = True
70 |
71 | # Load model:
72 | latent_size = args.image_size // 8
73 | model = SiT_models[args.model](
74 | input_size=latent_size,
75 | num_classes=args.num_classes,
76 | learn_sigma=learn_sigma,
77 | ).to(device)
78 | # Auto-download a pre-trained model or load a custom SiT checkpoint from train.py:
79 | ckpt_path = args.ckpt or f"SiT-XL-2-{args.image_size}x{args.image_size}.pt"
80 | state_dict = find_model(ckpt_path)
81 | model.load_state_dict(state_dict)
82 | model.eval() # important!
83 |
84 |
85 | transport = create_transport(
86 | args.path_type,
87 | args.prediction,
88 | args.loss_weight,
89 | args.train_eps,
90 | args.sample_eps
91 | )
92 | sampler = Sampler(transport)
93 | if mode == "ODE":
94 | if args.likelihood:
95 | assert args.cfg_scale == 1, "Likelihood is incompatible with guidance"
96 | sample_fn = sampler.sample_ode_likelihood(
97 | sampling_method=args.sampling_method,
98 | num_steps=args.num_sampling_steps,
99 | atol=args.atol,
100 | rtol=args.rtol,
101 | )
102 | else:
103 | sample_fn = sampler.sample_ode(
104 | sampling_method=args.sampling_method,
105 | num_steps=args.num_sampling_steps,
106 | atol=args.atol,
107 | rtol=args.rtol,
108 | reverse=args.reverse
109 | )
110 | elif mode == "SDE":
111 | sample_fn = sampler.sample_sde(
112 | sampling_method=args.sampling_method,
113 | diffusion_form=args.diffusion_form,
114 | diffusion_norm=args.diffusion_norm,
115 | last_step=args.last_step,
116 | last_step_size=args.last_step_size,
117 | num_steps=args.num_sampling_steps,
118 | )
119 | vae = AutoencoderKL.from_pretrained(f"stabilityai/sd-vae-ft-{args.vae}").to(device)
120 |     assert args.cfg_scale >= 1.0, "In almost all cases, cfg_scale should be >= 1.0"
121 | using_cfg = args.cfg_scale > 1.0
122 |
123 | # Create folder to save samples:
124 | model_string_name = args.model.replace("/", "-")
125 | ckpt_string_name = os.path.basename(args.ckpt).replace(".pt", "") if args.ckpt else "pretrained"
126 | if mode == "ODE":
127 | folder_name = f"{model_string_name}-{ckpt_string_name}-" \
128 | f"cfg-{args.cfg_scale}-{args.per_proc_batch_size}-"\
129 | f"{mode}-{args.num_sampling_steps}-{args.sampling_method}"
130 | elif mode == "SDE":
131 | folder_name = f"{model_string_name}-{ckpt_string_name}-" \
132 | f"cfg-{args.cfg_scale}-{args.per_proc_batch_size}-"\
133 | f"{mode}-{args.num_sampling_steps}-{args.sampling_method}-"\
134 | f"{args.diffusion_form}-{args.last_step}-{args.last_step_size}"
135 | sample_folder_dir = f"{args.sample_dir}/{folder_name}"
136 | if rank == 0:
137 | os.makedirs(sample_folder_dir, exist_ok=True)
138 | print(f"Saving .png samples at {sample_folder_dir}")
139 | dist.barrier()
140 |
141 | # Figure out how many samples we need to generate on each GPU and how many iterations we need to run:
142 | n = args.per_proc_batch_size
143 | global_batch_size = n * dist.get_world_size()
144 |     # To make things evenly-divisible, we'll sample a bit more than we need and then discard the extra samples:
145 |     total_samples = int(math.ceil(args.num_fid_samples / global_batch_size) * global_batch_size)
146 |     num_samples = len([name for name in os.listdir(sample_folder_dir) if (os.path.isfile(os.path.join(sample_folder_dir, name)) and ".png" in name)])  # .png samples already on disk from a previous run
147 | if rank == 0:
148 | print(f"Total number of images that will be sampled: {total_samples}")
149 | assert total_samples % dist.get_world_size() == 0, "total_samples must be divisible by world_size"
150 | samples_needed_this_gpu = int(total_samples // dist.get_world_size())
151 | assert samples_needed_this_gpu % n == 0, "samples_needed_this_gpu must be divisible by the per-GPU batch size"
152 | iterations = int(samples_needed_this_gpu // n)
153 |     done_iterations = int(num_samples // dist.get_world_size()) // n  # iterations already completed on this GPU
154 | pbar = range(iterations)
155 | pbar = tqdm(pbar) if rank == 0 else pbar
156 | total = 0
157 |
158 |     for _ in pbar:
159 | # Sample inputs:
160 | z = torch.randn(n, model.in_channels, latent_size, latent_size, device=device)
161 | y = torch.randint(0, args.num_classes, (n,), device=device)
162 |
163 | # Setup classifier-free guidance:
164 | if using_cfg:
165 | z = torch.cat([z, z], 0)
166 | y_null = torch.tensor([1000] * n, device=device)
167 | y = torch.cat([y, y_null], 0)
168 | model_kwargs = dict(y=y, cfg_scale=args.cfg_scale)
169 | model_fn = model.forward_with_cfg
170 | else:
171 | model_kwargs = dict(y=y)
172 | model_fn = model.forward
173 |
174 | samples = sample_fn(z, model_fn, **model_kwargs)[-1]
175 | if using_cfg:
176 | samples, _ = samples.chunk(2, dim=0) # Remove null class samples
177 |
178 | samples = vae.decode(samples / 0.18215).sample
179 | samples = torch.clamp(127.5 * samples + 128.0, 0, 255).permute(0, 2, 3, 1).to("cpu", dtype=torch.uint8).numpy()
180 |
181 | # Save samples to disk as individual .png files
182 | for i, sample in enumerate(samples):
183 | index = i * dist.get_world_size() + rank + total
184 | Image.fromarray(sample).save(f"{sample_folder_dir}/{index:06d}.png")
185 | total += global_batch_size
186 | dist.barrier()
187 |
188 | # Make sure all processes have finished saving their samples before attempting to convert to .npz
189 | dist.barrier()
190 | if rank == 0:
191 | create_npz_from_sample_folder(sample_folder_dir, args.num_fid_samples)
192 | print("Done.")
193 | dist.barrier()
194 | dist.destroy_process_group()
195 |
196 |
197 | if __name__ == "__main__":
198 |
199 | parser = argparse.ArgumentParser()
200 |
201 | if len(sys.argv) < 2:
202 | print("Usage: program.py [options]")
203 | sys.exit(1)
204 |
205 | mode = sys.argv[1]
206 |
207 | assert mode[:2] != "--", "Usage: program.py [options]"
208 | assert mode in ["ODE", "SDE"], "Invalid mode. Please choose 'ODE' or 'SDE'"
209 |
210 | parser.add_argument("--model", type=str, choices=list(SiT_models.keys()), default="SiT-XL/2")
211 | parser.add_argument("--vae", type=str, choices=["ema", "mse"], default="ema")
212 | parser.add_argument("--sample-dir", type=str, default="samples")
213 | parser.add_argument("--per-proc-batch-size", type=int, default=4)
214 | parser.add_argument("--num-fid-samples", type=int, default=50_000)
215 | parser.add_argument("--image-size", type=int, choices=[256, 512], default=256)
216 | parser.add_argument("--num-classes", type=int, default=1000)
217 | parser.add_argument("--cfg-scale", type=float, default=1.0)
218 | parser.add_argument("--num-sampling-steps", type=int, default=250)
219 | parser.add_argument("--global-seed", type=int, default=0)
220 | parser.add_argument("--tf32", action=argparse.BooleanOptionalAction, default=True,
221 | help="By default, use TF32 matmuls. This massively accelerates sampling on Ampere GPUs.")
222 | parser.add_argument("--ckpt", type=str, default=None,
223 | help="Optional path to a SiT checkpoint (default: auto-download a pre-trained SiT-XL/2 model).")
224 |
225 | parse_transport_args(parser)
226 | if mode == "ODE":
227 | parse_ode_args(parser)
228 | # Further processing for ODE
229 | elif mode == "SDE":
230 | parse_sde_args(parser)
231 | # Further processing for SDE
232 |
233 | args = parser.parse_known_args()[0]
234 | main(mode, args)
235 |
--------------------------------------------------------------------------------
/generation/SiT/sample_sit.sh:
--------------------------------------------------------------------------------
1 | mode="SDE" # "ODE" or "SDE"; sample_ddp.py asserts the uppercase form
2 | model="SiT-XL/2-VAE-simple"
3 | ckpt="/PATH/TO/YOUR/FINETUNE_CHECKPOINT"
4 | sample_dir="./samples"
5 | cfg="1.0"
6 |
7 | torchrun --master_addr ${MASTER_ADDR} --master-port ${MASTER_PORT} \
8 | --nnodes ${WORLD_SIZE} --node_rank ${RANK} --nproc-per-node=${GPUS} sample_ddp.py \
9 | ${mode} \
10 | --model ${model} \
11 | --num-fid-samples 50000 \
12 | --ckpt ${ckpt} \
13 | --sample-dir ${sample_dir} \
14 | --per-proc-batch-size 128 \
15 | --cfg-scale ${cfg}
--------------------------------------------------------------------------------
/generation/SiT/train_sit.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | model="SiT-XL/2-VAE-simple" # "SiT-XL/2-VAE-simple", "SiT-B/2-VAE-simple"
3 | finetune="/PATH/TO/YOUR/PRETRAINED_CHECKPOINT"
4 | exp_name="SiT-XL/2-VAE-simple"
5 | data_path="/PATH/TO/YOUR/imagenet-1k/train"
6 | path_type="Linear"
7 | prediction="velocity"
8 | global_batch_size="256"
9 |
10 |
11 | torchrun --master_addr ${MASTER_ADDR} --master-port ${MASTER_PORT} \
12 | --nnodes ${WORLD_SIZE} --node_rank ${RANK} --nproc-per-node=${GPUS} train.py \
13 | --model ${model} \
14 | --data-path ${data_path} \
15 | --finetune ${finetune} \
16 | --exp-name ${exp_name} \
17 | --path-type ${path_type} \
18 | --prediction ${prediction} \
19 | --global-batch-size $global_batch_size
20 |
--------------------------------------------------------------------------------
/generation/SiT/train_utils.py:
--------------------------------------------------------------------------------
1 | def none_or_str(value):
2 | if value == 'None':
3 | return None
4 | return value
5 |
6 | def parse_transport_args(parser):
7 | group = parser.add_argument_group("Transport arguments")
8 | group.add_argument("--path-type", type=str, default="Linear", choices=["Linear", "GVP", "VP"])
9 | group.add_argument("--prediction", type=str, default="velocity", choices=["velocity", "score", "noise"])
10 | group.add_argument("--loss-weight", type=none_or_str, default=None, choices=[None, "velocity", "likelihood"])
11 | group.add_argument("--sample-eps", type=float)
12 | group.add_argument("--train-eps", type=float)
13 |
14 | def parse_ode_args(parser):
15 | group = parser.add_argument_group("ODE arguments")
16 | group.add_argument("--sampling-method", type=str, default="dopri5", help="blackbox ODE solver methods; for full list check https://github.com/rtqichen/torchdiffeq")
17 | group.add_argument("--atol", type=float, default=1e-6, help="Absolute tolerance")
18 | group.add_argument("--rtol", type=float, default=1e-3, help="Relative tolerance")
19 | group.add_argument("--reverse", action="store_true")
20 | group.add_argument("--likelihood", action="store_true")
21 |
22 | def parse_sde_args(parser):
23 | group = parser.add_argument_group("SDE arguments")
24 | group.add_argument("--sampling-method", type=str, default="Euler", choices=["Euler", "Heun"])
25 | group.add_argument("--diffusion-form", type=str, default="sigma", \
26 | choices=["constant", "SBDM", "sigma", "linear", "decreasing", "increasing-decreasing"],\
27 | help="form of diffusion coefficient in the SDE")
28 | group.add_argument("--diffusion-norm", type=float, default=1.0)
29 | group.add_argument("--last-step", type=none_or_str, default="Mean", choices=[None, "Mean", "Tweedie", "Euler"],\
30 | help="form of last step taken in the SDE")
31 | group.add_argument("--last-step-size", type=float, default=0.04, \
32 | help="size of the last step taken")
--------------------------------------------------------------------------------
/generation/SiT/transport/__init__.py:
--------------------------------------------------------------------------------
1 | from .transport import Transport, ModelType, WeightType, PathType, Sampler
2 |
3 | def create_transport(
4 | path_type='Linear',
5 | prediction="velocity",
6 | loss_weight=None,
7 | train_eps=None,
8 | sample_eps=None,
9 | ):
10 | """function for creating Transport object
11 | **Note**: model prediction defaults to velocity
12 | Args:
13 |     - path_type: type of path to use; defaults to linear
14 |     - prediction: model prediction type; "velocity" (default),
15 |       "score", or "noise"
16 |     - loss_weight: optional loss weighting; None (default), "velocity"
17 |       (weight loss by velocity), or "likelihood" (weight loss by likelihood)
18 | - train_eps: small epsilon for avoiding instability during training
19 | - sample_eps: small epsilon for avoiding instability during sampling
20 | """
21 |
22 | if prediction == "noise":
23 | model_type = ModelType.NOISE
24 | elif prediction == "score":
25 | model_type = ModelType.SCORE
26 | else:
27 | model_type = ModelType.VELOCITY
28 |
29 | if loss_weight == "velocity":
30 | loss_type = WeightType.VELOCITY
31 | elif loss_weight == "likelihood":
32 | loss_type = WeightType.LIKELIHOOD
33 | else:
34 | loss_type = WeightType.NONE
35 |
36 | path_choice = {
37 | "Linear": PathType.LINEAR,
38 | "GVP": PathType.GVP,
39 | "VP": PathType.VP,
40 | }
41 |
42 | path_type = path_choice[path_type]
43 |
44 | if (path_type in [PathType.VP]):
45 | train_eps = 1e-5 if train_eps is None else train_eps
46 |         sample_eps = 1e-3 if sample_eps is None else sample_eps
47 | elif (path_type in [PathType.GVP, PathType.LINEAR] and model_type != ModelType.VELOCITY):
48 | train_eps = 1e-3 if train_eps is None else train_eps
49 |         sample_eps = 1e-3 if sample_eps is None else sample_eps
50 | else: # velocity & [GVP, LINEAR] is stable everywhere
51 | train_eps = 0
52 | sample_eps = 0
53 |
54 | # create flow state
55 | state = Transport(
56 | model_type=model_type,
57 | path_type=path_type,
58 | loss_type=loss_type,
59 | train_eps=train_eps,
60 | sample_eps=sample_eps,
61 | )
62 |
63 | return state
--------------------------------------------------------------------------------
/generation/SiT/transport/integrators.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import torch as th
3 | import torch.nn as nn
4 | from torchdiffeq import odeint
5 | from functools import partial
6 | from tqdm import tqdm
7 |
8 | class sde:
9 | """SDE solver class"""
10 | def __init__(
11 | self,
12 | drift,
13 | diffusion,
14 | *,
15 | t0,
16 | t1,
17 | num_steps,
18 | sampler_type,
19 | ):
20 | assert t0 < t1, "SDE sampler has to be in forward time"
21 |
22 | self.num_timesteps = num_steps
23 | self.t = th.linspace(t0, t1, num_steps)
24 | self.dt = self.t[1] - self.t[0]
25 | self.drift = drift
26 | self.diffusion = diffusion
27 | self.sampler_type = sampler_type
28 |
29 |     def __Euler_Maruyama_step(self, x, mean_x, t, model, **model_kwargs):
30 |         w_cur = th.randn(x.size()).to(x)
31 |         t = th.ones(x.size(0)).to(x) * t
32 |         dw = w_cur * th.sqrt(self.dt)  # Brownian increment ~ N(0, dt)
33 |         drift = self.drift(x, t, model, **model_kwargs)
34 |         diffusion = self.diffusion(x, t)
35 |         mean_x = x + drift * self.dt  # deterministic (drift-only) update
36 |         x = mean_x + th.sqrt(2 * diffusion) * dw  # add diffusion noise
37 |         return x, mean_x
38 |
39 | def __Heun_step(self, x, _, t, model, **model_kwargs):
40 | w_cur = th.randn(x.size()).to(x)
41 | dw = w_cur * th.sqrt(self.dt)
42 | t_cur = th.ones(x.size(0)).to(x) * t
43 | diffusion = self.diffusion(x, t_cur)
44 | xhat = x + th.sqrt(2 * diffusion) * dw
45 | K1 = self.drift(xhat, t_cur, model, **model_kwargs)
46 | xp = xhat + self.dt * K1
47 | K2 = self.drift(xp, t_cur + self.dt, model, **model_kwargs)
48 | return xhat + 0.5 * self.dt * (K1 + K2), xhat # at last time point we do not perform the heun step
49 |
50 | def __forward_fn(self):
51 | """TODO: generalize here by adding all private functions ending with steps to it"""
52 | sampler_dict = {
53 | "Euler": self.__Euler_Maruyama_step,
54 | "Heun": self.__Heun_step,
55 | }
56 |
57 |         try:
58 |             sampler = sampler_dict[self.sampler_type]
59 |         except KeyError:
60 |             raise NotImplementedError("Sampler type not implemented.")
61 |
62 | return sampler
63 |
64 | def sample(self, init, model, **model_kwargs):
65 | """forward loop of sde"""
66 | x = init
67 | mean_x = init
68 | samples = []
69 | sampler = self.__forward_fn()
70 | for ti in self.t[:-1]:
71 | with th.no_grad():
72 | x, mean_x = sampler(x, mean_x, ti, model, **model_kwargs)
73 | samples.append(x)
74 |
75 | return samples
76 |
77 | class ode:
78 | """ODE solver class"""
79 | def __init__(
80 | self,
81 | drift,
82 | *,
83 | t0,
84 | t1,
85 | sampler_type,
86 | num_steps,
87 | atol,
88 | rtol,
89 | ):
90 | assert t0 < t1, "ODE sampler has to be in forward time"
91 |
92 | self.drift = drift
93 | self.t = th.linspace(t0, t1, num_steps)
94 | self.atol = atol
95 | self.rtol = rtol
96 | self.sampler_type = sampler_type
97 |
98 | def sample(self, x, model, **model_kwargs):
99 |
100 | device = x[0].device if isinstance(x, tuple) else x.device
101 | def _fn(t, x):
102 | t = th.ones(x[0].size(0)).to(device) * t if isinstance(x, tuple) else th.ones(x.size(0)).to(device) * t
103 | model_output = self.drift(x, t, model, **model_kwargs)
104 | return model_output
105 |
106 | t = self.t.to(device)
107 | atol = [self.atol] * len(x) if isinstance(x, tuple) else [self.atol]
108 | rtol = [self.rtol] * len(x) if isinstance(x, tuple) else [self.rtol]
109 | samples = odeint(
110 | _fn,
111 | x,
112 | t,
113 | method=self.sampler_type,
114 | atol=atol,
115 | rtol=rtol
116 | )
117 | return samples
--------------------------------------------------------------------------------
/generation/SiT/transport/path.py:
--------------------------------------------------------------------------------
1 | import torch as th
2 | import numpy as np
3 | from functools import partial
4 |
5 | def expand_t_like_x(t, x):
6 | """Function to reshape time t to broadcastable dimension of x
7 | Args:
8 | t: [batch_dim,], time vector
9 | x: [batch_dim,...], data point
10 | """
11 | dims = [1] * (len(x.size()) - 1)
12 | t = t.view(t.size(0), *dims)
13 | return t
14 |
15 |
16 | #################### Coupling Plans ####################
17 |
18 | class ICPlan:
19 | """Linear Coupling Plan"""
20 | def __init__(self, sigma=0.0):
21 | self.sigma = sigma
22 |
23 | def compute_alpha_t(self, t):
24 | """Compute the data coefficient along the path"""
25 | return t, 1
26 |
27 | def compute_sigma_t(self, t):
28 | """Compute the noise coefficient along the path"""
29 | return 1 - t, -1
30 |
31 | def compute_d_alpha_alpha_ratio_t(self, t):
32 | """Compute the ratio between d_alpha and alpha"""
33 | return 1 / t
34 |
35 | def compute_drift(self, x, t):
36 | """We always output sde according to score parametrization; """
37 | t = expand_t_like_x(t, x)
38 | alpha_ratio = self.compute_d_alpha_alpha_ratio_t(t)
39 | sigma_t, d_sigma_t = self.compute_sigma_t(t)
40 | drift = alpha_ratio * x
41 | diffusion = alpha_ratio * (sigma_t ** 2) - sigma_t * d_sigma_t
42 |
43 | return -drift, diffusion
44 |
45 | def compute_diffusion(self, x, t, form="constant", norm=1.0):
46 | """Compute the diffusion term of the SDE
47 | Args:
48 | x: [batch_dim, ...], data point
49 | t: [batch_dim,], time vector
50 | form: str, form of the diffusion term
51 | norm: float, norm of the diffusion term
52 | """
53 | t = expand_t_like_x(t, x)
54 | choices = {
55 | "constant": norm,
56 | "SBDM": norm * self.compute_drift(x, t)[1],
57 | "sigma": norm * self.compute_sigma_t(t)[0],
58 | "linear": norm * (1 - t),
59 | "decreasing": 0.25 * (norm * th.cos(np.pi * t) + 1) ** 2,
60 | "inccreasing-decreasing": norm * th.sin(np.pi * t) ** 2,
61 | }
62 |
63 | try:
64 | diffusion = choices[form]
65 | except KeyError:
66 | raise NotImplementedError(f"Diffusion form {form} not implemented")
67 |
68 | return diffusion
69 |
70 | def get_score_from_velocity(self, velocity, x, t):
71 | """Wrapper function: transfrom velocity prediction model to score
72 | Args:
73 | velocity: [batch_dim, ...] shaped tensor; velocity model output
74 | x: [batch_dim, ...] shaped tensor; x_t data point
75 | t: [batch_dim,] time tensor
76 | """
77 | t = expand_t_like_x(t, x)
78 | alpha_t, d_alpha_t = self.compute_alpha_t(t)
79 | sigma_t, d_sigma_t = self.compute_sigma_t(t)
80 | mean = x
81 | reverse_alpha_ratio = alpha_t / d_alpha_t
82 | var = sigma_t**2 - reverse_alpha_ratio * d_sigma_t * sigma_t
83 | score = (reverse_alpha_ratio * velocity - mean) / var
84 | return score
85 |
86 | def get_noise_from_velocity(self, velocity, x, t):
87 | """Wrapper function: transfrom velocity prediction model to denoiser
88 | Args:
89 | velocity: [batch_dim, ...] shaped tensor; velocity model output
90 | x: [batch_dim, ...] shaped tensor; x_t data point
91 | t: [batch_dim,] time tensor
92 | """
93 | t = expand_t_like_x(t, x)
94 | alpha_t, d_alpha_t = self.compute_alpha_t(t)
95 | sigma_t, d_sigma_t = self.compute_sigma_t(t)
96 | mean = x
97 | reverse_alpha_ratio = alpha_t / d_alpha_t
98 | var = reverse_alpha_ratio * d_sigma_t - sigma_t
99 | noise = (reverse_alpha_ratio * velocity - mean) / var
100 | return noise
101 |
102 | def get_velocity_from_score(self, score, x, t):
103 | """Wrapper function: transfrom score prediction model to velocity
104 | Args:
105 | score: [batch_dim, ...] shaped tensor; score model output
106 | x: [batch_dim, ...] shaped tensor; x_t data point
107 | t: [batch_dim,] time tensor
108 | """
109 | t = expand_t_like_x(t, x)
110 | drift, var = self.compute_drift(x, t)
111 | velocity = var * score - drift
112 | return velocity
113 |
114 | def compute_mu_t(self, t, x0, x1):
115 | """Compute the mean of time-dependent density p_t"""
116 | t = expand_t_like_x(t, x1)
117 | alpha_t, _ = self.compute_alpha_t(t)
118 | sigma_t, _ = self.compute_sigma_t(t)
119 | return alpha_t * x1 + sigma_t * x0
120 |
121 | def compute_xt(self, t, x0, x1):
122 | """Sample xt from time-dependent density p_t; rng is required"""
123 | xt = self.compute_mu_t(t, x0, x1)
124 | return xt
125 |
126 | def compute_ut(self, t, x0, x1, xt):
127 | """Compute the vector field corresponding to p_t"""
128 | t = expand_t_like_x(t, x1)
129 | _, d_alpha_t = self.compute_alpha_t(t)
130 | _, d_sigma_t = self.compute_sigma_t(t)
131 | return d_alpha_t * x1 + d_sigma_t * x0
132 |
133 | def plan(self, t, x0, x1):
134 | xt = self.compute_xt(t, x0, x1)
135 | ut = self.compute_ut(t, x0, x1, xt)
136 | return t, xt, ut
137 |
138 |
139 | class VPCPlan(ICPlan):
140 | """class for VP path flow matching"""
141 |
142 | def __init__(self, sigma_min=0.1, sigma_max=20.0):
143 | self.sigma_min = sigma_min
144 | self.sigma_max = sigma_max
145 | self.log_mean_coeff = lambda t: -0.25 * ((1 - t) ** 2) * (self.sigma_max - self.sigma_min) - 0.5 * (1 - t) * self.sigma_min
146 | self.d_log_mean_coeff = lambda t: 0.5 * (1 - t) * (self.sigma_max - self.sigma_min) + 0.5 * self.sigma_min
147 |
148 |
149 | def compute_alpha_t(self, t):
150 | """Compute coefficient of x1"""
151 | alpha_t = self.log_mean_coeff(t)
152 | alpha_t = th.exp(alpha_t)
153 | d_alpha_t = alpha_t * self.d_log_mean_coeff(t)
154 | return alpha_t, d_alpha_t
155 |
156 | def compute_sigma_t(self, t):
157 | """Compute coefficient of x0"""
158 | p_sigma_t = 2 * self.log_mean_coeff(t)
159 | sigma_t = th.sqrt(1 - th.exp(p_sigma_t))
160 | d_sigma_t = th.exp(p_sigma_t) * (2 * self.d_log_mean_coeff(t)) / (-2 * sigma_t)
161 | return sigma_t, d_sigma_t
162 |
163 | def compute_d_alpha_alpha_ratio_t(self, t):
164 | """Special purposed function for computing numerical stabled d_alpha_t / alpha_t"""
165 | return self.d_log_mean_coeff(t)
166 |
167 | def compute_drift(self, x, t):
168 | """Compute the drift term of the SDE"""
169 | t = expand_t_like_x(t, x)
170 | beta_t = self.sigma_min + (1 - t) * (self.sigma_max - self.sigma_min)
171 | return -0.5 * beta_t * x, beta_t / 2
172 |
173 |
174 | class GVPCPlan(ICPlan):
175 | def __init__(self, sigma=0.0):
176 | super().__init__(sigma)
177 |
178 | def compute_alpha_t(self, t):
179 | """Compute coefficient of x1"""
180 | alpha_t = th.sin(t * np.pi / 2)
181 | d_alpha_t = np.pi / 2 * th.cos(t * np.pi / 2)
182 | return alpha_t, d_alpha_t
183 |
184 | def compute_sigma_t(self, t):
185 | """Compute coefficient of x0"""
186 | sigma_t = th.cos(t * np.pi / 2)
187 | d_sigma_t = -np.pi / 2 * th.sin(t * np.pi / 2)
188 | return sigma_t, d_sigma_t
189 |
190 | def compute_d_alpha_alpha_ratio_t(self, t):
191 | """Special purposed function for computing numerical stabled d_alpha_t / alpha_t"""
192 | return np.pi / (2 * th.tan(t * np.pi / 2))
--------------------------------------------------------------------------------
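To make the coupling-plan API concrete, the following is a minimal sketch of building flow-matching training targets with the linear `ICPlan`; the `transport.path` import path and the latent shape are assumptions. Since alpha_t = t and sigma_t = 1 - t for this plan, xt = t * x1 + (1 - t) * x0 and the target velocity reduces to x1 - x0. `VPCPlan` and `GVPCPlan` expose the same interface, so they can be swapped in without touching the training loop.

```python
import torch as th
from transport.path import ICPlan  # assumed import path inside generation/SiT

plan = ICPlan()

x1 = th.randn(8, 4, 32, 32)  # data endpoint (e.g. a batch of VAE latents)
x0 = th.randn_like(x1)       # noise endpoint
t = th.rand(8)               # one time per sample, uniform on [0, 1)

# plan() returns the time, the interpolant xt, and the target vector field ut.
t, xt, ut = plan.plan(t, x0, x1)

# For the linear plan the target velocity is exactly x1 - x0.
assert th.allclose(ut, x1 - x0)
```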
/generation/SiT/transport/utils.py:
--------------------------------------------------------------------------------
1 | import torch as th
2 |
3 | class EasyDict:
4 |
5 | def __init__(self, sub_dict):
6 | for k, v in sub_dict.items():
7 | setattr(self, k, v)
8 |
9 | def __getitem__(self, key):
10 | return getattr(self, key)
11 |
12 | def mean_flat(x):
13 | """
14 | Take the mean over all non-batch dimensions.
15 | """
16 | return th.mean(x, dim=list(range(1, len(x.size()))))
17 |
18 | def log_state(state):
19 | result = []
20 |
21 | sorted_state = dict(sorted(state.items()))
22 | for key, value in sorted_state.items():
23 | # Check if the value is an instance of a class
24 | if "