Diffusion Models have recently gained popularity in the field of image generation, with widely used products such as Stable Diffusion employing this approach and yielding impressive results. GANs are also recognized for their efficiency, so in what scenarios should I choose GANs over Diffusion Models, and do GANs have any advantages over Diffusion Models in image generation?

Here are a few reasons I can think of:

  • Diffusion Models take more time and larger datasets to train.
  • Training a Diffusion Model requires substantial computational resources (many GPUs) compared to GANs.
  • The codebases of some popular Diffusion Model projects are not open source.

I don’t know if these are correct. As for the mathematical aspect, I’m not an expert in that area.

  • huehue12132@alien.topB
    10 months ago

    The reasons you listed are actually not true.

    1. Diffusion models can be trained just fine on the same datasets as GANs. They also do not take longer to train, because you generally sample only one “time step” (noise level) per training step. What does take longer is inference: a GAN needs a single generator forward pass, while a diffusion model needs many denoising passes (see the sketch after this list).
    2. Diffusion models also do not inherently need more resources than GANs. It’s basically the same: GANs have a generator and a discriminator, while diffusion models often use an “encoder-decoder”-style U-Net architecture. You can train small diffusion models on MNIST, and you can train gigantic GANs (look up GigaGAN); scale is not inherent to the type of model.
    3. That is, again, not an advantage of GANs per se. Also, you will have a hard time finding anything remotely comparable to Stable Diffusion that is based on GANs (unless I missed some big release).
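
    To make point 1 concrete, here is a minimal sketch (assuming PyTorch; `Denoiser`, `Generator`, and the linear noise schedule are toy stand-ins of my own, not how Stable Diffusion or any particular GAN is actually built). It shows that a diffusion training step is a single forward pass at one sampled timestep, while sampling requires a loop over many timesteps, whereas GAN sampling is one generator call.

    ```python
    import torch
    import torch.nn as nn

    class Denoiser(nn.Module):
        """Hypothetical stand-in for a U-Net-style noise predictor."""
        def __init__(self, dim=784):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.ReLU(), nn.Linear(256, dim))

        def forward(self, x_t, t):
            # Condition on the timestep by concatenating it to the (flattened) input.
            return self.net(torch.cat([x_t, t[:, None].float()], dim=1))

    class Generator(nn.Module):
        """Hypothetical stand-in for a GAN generator."""
        def __init__(self, z_dim=64, dim=784):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, dim))

        def forward(self, z):
            return self.net(z)

    T = 1000
    alpha_bar = torch.linspace(0.999, 0.01, T)  # toy cumulative noise schedule

    def diffusion_train_step(model, x0):
        """One training step: sample ONE random timestep per example, one forward pass."""
        t = torch.randint(0, T, (x0.shape[0],))
        noise = torch.randn_like(x0)
        a = alpha_bar[t][:, None]
        x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise   # forward (noising) process
        pred = model(x_t, t)                           # single network execution
        return ((pred - noise) ** 2).mean()            # simple denoising loss

    @torch.no_grad()
    def diffusion_sample(model, n, dim=784):
        """Inference: iterate over all timesteps, so many forward passes per image."""
        x = torch.randn(n, dim)
        for t in reversed(range(T)):
            t_batch = torch.full((n,), t)
            pred_noise = model(x, t_batch)
            a = alpha_bar[t]
            x = (x - (1 - a).sqrt() * pred_noise) / a.sqrt()  # crude estimate of x0
            if t > 0:  # re-noise to the previous timestep
                x = alpha_bar[t - 1].sqrt() * x + (1 - alpha_bar[t - 1]).sqrt() * torch.randn_like(x)
        return x

    @torch.no_grad()
    def gan_sample(generator, n, z_dim=64):
        """GAN inference: a single generator forward pass."""
        return generator(torch.randn(n, z_dim))
    ```

    Note the contrast: `diffusion_train_step` costs about one network pass per batch (comparable to training a GAN generator), but `diffusion_sample` loops over T steps, while `gan_sample` is a single pass.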