Things You Need To Know To Get Started With Diffusion Models

Calibraint

October 29, 2023

Last updated: August 13, 2024


What is a Diffusion Model?

Diffusion Models are a class of generative models that can produce realistic and diverse data, such as images, text, audio, and video. They are based on the idea of transforming the data distribution into a simple noise distribution through a series of random diffusion steps. By reversing this process, we can sample new data from the noise distribution using a learned score function that guides the diffusion towards the data distribution.

(AI-generated image created using a diffusion model)

What are the advantages of Diffusion Models?

Diffusion Models have several advantages over other generative models, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). Some of these advantages are:

  • They do not suffer from mode collapse, where a model generates only a few modes of the data distribution and ignores the rest.
  • They do not require adversarial training, which can be unstable and hard to tune.
  • They can handle discrete and continuous data without any special tricks or modifications. 
  • They can generate high-resolution and high-fidelity data.

What are forward and reverse diffusion processes?

The forward and reverse diffusion processes are the core components of the Diffusion Model. They define how the data is transformed into noise and how the noise is transformed back into data.

Forward diffusion process

The forward diffusion process is a Markov chain that starts from the original data x and ends at a noise sample ε. At each step t, the data is corrupted by adding Gaussian noise to it. The noise level increases as t increases until it reaches 1 at the final step T. At this point, x_T is completely random and independent of x.

x_t = √(1 – β_t) * x_(t-1) + √β_t * η_t

where β_t is the noise level at step t, and η_t is a standard Gaussian random variable.
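
To make the update concrete, here is a minimal NumPy sketch of one forward step. The schedule values, array shape, and step count below are illustrative assumptions, not prescriptions from the model:

```python
import numpy as np

def forward_step(x_prev, beta_t, rng):
    """One forward diffusion step: corrupt x_(t-1) into x_t using
    x_t = sqrt(1 - beta_t) * x_(t-1) + sqrt(beta_t) * eta_t,
    where eta_t ~ N(0, I)."""
    eta_t = rng.standard_normal(x_prev.shape)  # standard Gaussian noise
    return np.sqrt(1.0 - beta_t) * x_prev + np.sqrt(beta_t) * eta_t

# Example: progressively corrupt a toy 32x32 "image" over T steps.
rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.05, T)  # illustrative linear schedule
x = rng.standard_normal((32, 32))   # stand-in for a data sample
for t in range(T):
    x = forward_step(x, betas[t], rng)
```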

Reverse diffusion process

The reverse diffusion process is the inverse of the forward diffusion process. It starts from a noise sample ε and ends at a data sample x. At each step t, the noise is reduced by subtracting Gaussian noise. The noise level decreases as t decreases until it reaches 0 at the initial step, at which point ε_0 is equal to x.

ε_t = √(1 – β_t) * ε_(t+1) – √β_t * η_t

where β_t is the same noise level as in the forward diffusion process, and η_t is a standard Gaussian random variable.
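
Taken literally, one reverse step can be sketched as below. Note that η_t is not actually known at sampling time, which is precisely the gap the score function in the next section fills; this is a toy sketch of the formula as written, not a working sampler:

```python
import numpy as np

def reverse_step(eps_next, beta_t, rng):
    """The reverse update as stated above:
    eps_t = sqrt(1 - beta_t) * eps_(t+1) - sqrt(beta_t) * eta_t."""
    eta_t = rng.standard_normal(eps_next.shape)  # standard Gaussian noise
    return np.sqrt(1.0 - beta_t) * eps_next - np.sqrt(beta_t) * eta_t
```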

(Illustration: the denoising process)

How to set up the forward and reverse diffusion processes?

In practice, we do not know the exact value of η_t at each step. Therefore, we need a score function s_t(x_t) that estimates the conditional distribution of x_(t-1) given x_t. The score function tells us how likely x_(t-1) is for a given x_t, and how to adjust x_t to bring it closer to x_(t-1). We can use s_t(x_t) to sample from the reverse diffusion process using Langevin dynamics:

x_(t-1) = x_t + α_t * s_t(x_t) + √(2 * α_t) * ζ

where α_t is the step size at step t, and ζ is a standard Gaussian random variable. By repeating this process from t = T to t = 0, we can generate a data sample x from a noise sample ε.
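
A minimal sketch of this update in NumPy, assuming a trained score network is available as a callable score_fn(x, t); the name and signature are placeholders:

```python
import numpy as np

def langevin_step(x_t, t, score_fn, alpha_t, rng):
    """One Langevin dynamics update:
    x_(t-1) = x_t + alpha_t * s_t(x_t) + sqrt(2 * alpha_t) * zeta,
    with zeta ~ N(0, I)."""
    zeta = rng.standard_normal(x_t.shape)  # fresh Gaussian noise
    return x_t + alpha_t * score_fn(x_t, t) + np.sqrt(2.0 * alpha_t) * zeta
```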

How to choose the noise schedule and the number of steps?

The noise schedule and the number of steps are two important hyperparameters that affect the performance of the Diffusion Model. They determine how fast and how smoothly the data is transformed into noise and vice versa.

The noise schedule is a sequence of noise levels β_t that control the amount of Gaussian noise added or subtracted at each step t. A common choice for the noise schedule is to use a geometric progression:

β_t = β * (1 – β)^(T – 1 – t)

where β is a constant between 0 and 1, and T is the total number of steps. This noise schedule ensures that the variance of x_t is constant for all t, which simplifies the score function estimation.
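
A quick sketch to materialize this schedule; β = 0.5 and T = 10 below are arbitrary example values:

```python
import numpy as np

def geometric_schedule(beta, T):
    """beta_t = beta * (1 - beta)^(T - 1 - t) for t = 0, ..., T - 1."""
    t = np.arange(T)
    return beta * (1.0 - beta) ** (T - 1 - t)

betas = geometric_schedule(beta=0.5, T=10)
# betas rises from beta * (1 - beta)^(T - 1) ≈ 0.001 at t = 0
# up to beta = 0.5 at the final step t = T - 1.
```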

The number of steps T is the length of the forward and reverse diffusion processes, and it affects the quality and diversity of the generated data. A larger T means the data is more thoroughly corrupted by noise, which makes it harder to recover but allows for more variation; a smaller T means the data is less corrupted, which makes it easier to recover but limits variation.

There is a trade-off between the noise schedule and the number of steps. A more aggressive noise schedule (larger β) requires more steps to achieve better quality, while a less aggressive noise schedule (smaller β) requires fewer steps to achieve good diversity. The optimal choice of these hyperparameters depends on the data domain, the score function architecture, and the computational budget.

Note:

β is a constant between 0 and 1 that controls the noise level in the Diffusion Model. A larger β means that more noise is added or subtracted at each step, while a smaller β means that less noise is added or subtracted at each step. A larger β makes the data more corrupted by noise, while a smaller β makes the data less corrupted by noise.

0.5 is the middle value of β, considered neither small nor large. It balances the trade-off between quality and diversity in the Diffusion Model: the noise level is 50% at the final step of the forward process and, equivalently, at the first step of the reverse process. It is a balanced choice that preserves some information while still allowing some variation in the data.

However, it may not be the optimal choice for every data domain or score function architecture. You may need to experiment with different values of β to find the best one for your task. 
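
As a rough sanity check of this intuition, the sketch below (assuming the geometric schedule from the previous section) estimates how much of the original signal survives the full forward process for different values of β; larger β leaves less of the signal intact:

```python
import numpy as np

# Under x_t = sqrt(1 - beta_t) * x_(t-1) + sqrt(beta_t) * eta_t, the
# fraction of the original signal surviving T steps is prod_t sqrt(1 - beta_t).
def signal_remaining(betas):
    return np.prod(np.sqrt(1.0 - betas))

T = 100
for beta in (0.1, 0.5, 0.9):
    betas = beta * (1.0 - beta) ** (T - 1 - np.arange(T))  # geometric schedule
    print(beta, signal_remaining(betas))
```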

How to train a Diffusion Model

A Diffusion Model is trained by fitting the score function s_t(x_t) to noisy versions of the training data, typically via denoising score matching. Once the score function is trained, we can sample from the model by following the reverse diffusion process using the score function and Langevin dynamics.

Here are the steps to do that:

  1. Start from a random noise sample ε ~ N(0, I), where I is the identity matrix.
  2. Set t = T, where T is the total number of steps in the forward and reverse diffusion processes.
  3. Compute the score function output s_t(x_t) by feeding x_t to the neural network.
  4. Update x_(t-1) using the Langevin dynamics formula:

x_(t-1) = x_t + α_t * s_t(x_t) + √(2 * α_t) * ζ

where α_t is the step size at step t, and ζ is a standard Gaussian random variable.

  5. Decrease t by 1. While t > 0, return to step 3.
  6. Return x_0 as the sampled data.
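
Putting the steps together, here is a minimal sampler sketch in NumPy. The score network score_fn and the step-size schedule alphas are assumed inputs (the article does not specify them), and the dummy score in the usage example exists only to make the sketch runnable:

```python
import numpy as np

def sample(score_fn, alphas, shape, T, rng):
    """Sample from a trained diffusion model by running the reverse
    process from t = T down to t = 1 with Langevin dynamics.

    score_fn(x, t) -- the trained score network (assumed given)
    alphas         -- step sizes; alphas[t - 1] is the step size at step t
    """
    x = rng.standard_normal(shape)                        # step 1: x_T ~ N(0, I)
    for t in range(T, 0, -1):                             # steps 2 and 5: t = T, ..., 1
        alpha_t = alphas[t - 1]
        s = score_fn(x, t)                                # step 3: score estimate
        zeta = rng.standard_normal(shape)                 # fresh Gaussian noise
        x = x + alpha_t * s + np.sqrt(2.0 * alpha_t) * zeta  # step 4: Langevin update
    return x                                              # step 6: x_0

# Usage sketch with a dummy score function (a real trained model would replace this):
rng = np.random.default_rng(0)
T = 50
alphas = np.full(T, 1e-2)        # placeholder constant step sizes
dummy_score = lambda x, t: -x    # score of a standard Gaussian, for illustration only
x0 = sample(dummy_score, alphas, (32, 32), T, rng)
```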

Final note

Diffusion Models are a promising research direction in the field of generative AI. They have shown impressive results across data domains such as images, text, audio, and video, with applications in areas such as data augmentation, super-resolution, inpainting, style transfer, and more.

However, there are still challenges and limitations to address. Researchers are actively working on solutions to overcome them and improve results, but until then: happy diffusing, readers!
