
All about Diffusion models

Learever 2023. 7. 25. 06:55

 

The score function is the gradient of the log probability density with respect to the data (https://medium.com/@aminamollaysa/score-based-generative-models-bfe88808dc81)
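In symbols (standard definition, noted here for reference rather than quoted from the link), the score of a density p(x) is

```latex
s(x) = \nabla_x \log p(x)
```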

 

A Pedagogical Introduction to Score Models - 2. Score Functions

We’re going to need some basic knowledge and terminology established first, otherwise the terminology may become overwhelming, especially for those who are not well-versed in probabilistic modelling. As such, we’re going to start with a bunch of definitions.

ericmjl.github.io

 

Probability concepts explained: Maximum likelihood estimation

Introducing the method of maximum likelihood for parameter estimation

towardsdatascience.com

 

Guidance: a cheat code for diffusion models

A quick post with some thoughts on diffusion guidance

sander.ai

To sample from a diffusion model, an input is initialised to random noise, and is then iteratively denoised by taking steps in the direction of the score function (i.e. the direction in which the log-likelihood increases fastest), with some additional noise mixed in to avoid getting stuck in modes of the distribution.
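A minimal sketch of that loop in PyTorch (my own simplification of Langevin-style sampling; `score_fn`, `n_steps`, and `step_size` are placeholder names, and a real diffusion sampler would use a noise schedule rather than a fixed step size):

```python
import torch

def langevin_sample(score_fn, shape, n_steps=1000, step_size=1e-4):
    """Iteratively refine random noise by stepping along the score."""
    x = torch.randn(shape)                       # start from pure noise
    for t in reversed(range(n_steps)):
        score = score_fn(x, t)                   # estimate of grad_x log p_t(x)
        noise = torch.randn_like(x)              # extra noise to avoid getting stuck in a mode
        x = x + step_size * score + (2 * step_size) ** 0.5 * noise
    return x
```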

 

"Temperature"?: The temperature parameter in deep learning is used to control the randomness or uncertainty of the model's predictions. It is often used in the context of generating text or other types of data. A higher temperature value will result in more diverse and creative outputs, while a lower temperature value will result in more conservative and predictable outputs. The temperature parameter is typically used in conjunction with a softmax function to adjust the probabilities of each possible output.

In other words, higher temperature means higher uncertainty. Looking at how this is used in "Guidance: a cheat code for diffusion models", gamma is described as an inverse temperature parameter. When gamma is greater than 1, the distribution becomes more sharply peaked around its modes (not because the probability at those points gets any higher, since probability <= 1; rather, the smaller probabilities shrink by more), and the values in regions of relatively low probability drop even further. In other words, uncertainty decreases. So the larger gamma gets, the lower the uncertainty, which is why gamma is an inverse temperature parameter.
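A quick numerical check of this (my own toy example, not from the linked post), using a softmax where gamma = 1 / temperature:

```python
import torch

logits = torch.tensor([2.0, 1.0, 0.5])

def softmax_with_temperature(logits, temperature):
    # Dividing the logits by the temperature is the same as raising the base
    # distribution to the power gamma = 1 / temperature and renormalising.
    return torch.softmax(logits / temperature, dim=-1)

print(softmax_with_temperature(logits, 1.0))  # baseline distribution
print(softmax_with_temperature(logits, 0.5))  # gamma = 2: sharper, lower uncertainty
print(softmax_with_temperature(logits, 2.0))  # gamma = 0.5: flatter, higher uncertainty
```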

 

 

Understanding the Variational Lower Bound

Introduction Variational Bayesian (VB) Methods are a family of techniques that are very popular in statistical Machine Learning. One powerful feature of VB methods is the inference-optimization duality [1]: we can view statistical inference problems (i.e. inferring the value of one random variable given another) as optimization problems.

xyang35.github.io

 

Evidence, KL-divergence, and ELBO

A blog series about Variational Inference. This post introduces the evidence, the ELBO, and the KL-divergence.

mpatacchiola.github.io
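For reference, the identity these posts build on (standard form, written here from memory, with q(z|x) denoting the variational posterior): the evidence log p(x) decomposes as

```latex
\log p(x)
  = \underbrace{\mathbb{E}_{q(z \mid x)}\!\left[ \log \frac{p(x, z)}{q(z \mid x)} \right]}_{\text{ELBO}}
  + \mathrm{KL}\!\left( q(z \mid x) \,\|\, p(z \mid x) \right)
```

Since the KL term is non-negative, the ELBO is a lower bound on the evidence, and maximising it with respect to q is equivalent to minimising the KL divergence to the true posterior.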

 

 

What does "variational" mean?

Does the use of "variational" always refer to optimization via variational inference? Examples: "Variational auto-encoder" "Variational Bayesian methods" "Variational renormalization group"

stats.stackexchange.com

https://mpatacchiola.github.io/blog/2021/01/25/intro-variational-inference.html

 


  • Interpreting conditional distribution

with code: https://towardsdatascience.com/variational-autoencoder-demystified-with-pytorch-implementation-3a06bee395ed

 

Variational Autoencoder Demystified With PyTorch Implementation.

This tutorial implements a variational autoencoder for non-black and white images using PyTorch.

towardsdatascience.com
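As a rough sketch of how that "conditional distribution" reading shows up in a VAE training objective (my own minimal example, assuming a Gaussian encoder and a Bernoulli decoder; `vae_loss`, `recon_logits`, `mu`, and `logvar` are placeholder names, not taken from the tutorial):

```python
import torch
import torch.nn.functional as F

def vae_loss(x, recon_logits, mu, logvar):
    # The decoder output is read as the conditional distribution p(x|z):
    # here Bernoulli logits, so the reconstruction term is its negative log-likelihood.
    recon_nll = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum")
    # Closed-form KL(q(z|x) || p(z)) for a Gaussian encoder and a standard normal prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_nll + kl  # negative ELBO, to be minimised
```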

 
