The Lindahl Letter

Why is diffusion so popular?

Transformers were the thing. They were a big thing in the machine learning field. It was glorious. People talked about them a lot and papers were published. Oh so many papers were published. Now it feels like diffusion might be the thing. You will find that the thing of the moment in the field of machine learning shifts rapidly. I was looking at a GitHub repository based on, “High-Resolution Image Synthesis with Latent Diffusion Models,” and it has over 2,000 stars and has been forked 242 times [1]. I also read a Tweet from Sebastian Raschka back on January 30, 2022 that asked the question, “Has anyone tried diffusion-based models, yet? Heard that they produce better results than GAN” [2]. That Tweet linked out to a paper on ArXiv called, “Diffusion Models Beat GANs on Image Synthesis” [3]. It was published back in May of 2021 by OpenAI researchers and has seen 4 revisions so far. The paper loaded very slowly for me, which was surprising. Rarely do I ever watch a progress bar slowly creep across the screen waiting for a file to load. It was 44 pages and 38 megabytes of data. That file should have arrived a lot faster. I took a look at another GitHub repository on guided diffusion from OpenAI that had 1,700 stars [4]. The audience for these diffusion codebases seems to be about 2,000 people, which is interesting. Popular machine learning repositories in general tend to draw roughly 10,000 people, making this a subset within that slightly larger universe of attention. 

Given that we have moved into the second paragraph, it might be a good time to talk about what exactly diffusion is in the context of machine learning. Over in the field of thermodynamics you could study gas molecules. Maybe you want to learn how those gas molecules diffuse from a high-density area to a low-density area, and you would also want to know how that process could be reversed. That is the basic theoretical part of the equation you need to absorb at the moment. Within the field of machine learning, people have been building models that gradually add noise to data, diffusing it step by step, and then learn to reverse that process and recover the data from the noise. That is basically the diffusion process in a nutshell; a minimal sketch of the forward noising half of it appears below. You can imagine that the cost to do this is computationally expensive. Let’s jump from OpenAI over to the Google AI team who wrote about, “High Fidelity Image Generation Using Diffusion Models” [5]. If you read that post from the Google AI team, then you will get to see a bunch of examples of how this works in practice. Imagine a small, lower-quality image being increased in both quality and size, and then imagine that happening again in a cascade. That repeated upscaling is what makes the models they are using seem really novel and exciting. I ended up going back to a 2015 paper, “Deep Unsupervised Learning using Nonequilibrium Thermodynamics,” to get a little bit more detail on how the noise process works to create and reverse diffusion [6]. 
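To make the forward half of that idea concrete, here is a minimal sketch in Python, assuming the standard DDPM-style formulation described in the papers cited above. The schedule values and function names are illustrative assumptions of mine, not code from any of the linked repositories, and the reverse (denoising) half would be the learned model that is trained to predict and remove the noise.

import numpy as np

def make_noise_schedule(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule; alpha_bar[t] is the cumulative signal fraction kept at step t."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    return alpha_bar

def forward_diffuse(x0, t, alpha_bar, rng):
    """Sample x_t directly from x_0: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
    return xt, noise  # a diffusion model is trained to predict this noise so it can be removed

# Usage: push a toy "image" most of the way toward pure Gaussian noise.
rng = np.random.default_rng(0)
alpha_bar = make_noise_schedule()
x0 = np.ones((8, 8))  # stand-in for a normalized image
xt, eps = forward_diffuse(x0, t=900, alpha_bar=alpha_bar, rng=rng)

Sampling from the trained model then runs this in reverse, starting from pure noise and stepping back toward a clean image, which is why sampling is slow relative to a single forward pass through a GAN generator.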

Links and thoughts:

Top 5 Tweets of the week:

Footnotes:

[1] https://github.com/CompVis/latent-diffusion 

[2] Sebastian Raschka (@rasbt) on Twitter, January 30, 2022: “Has anyone tried diffusion-based models, yet? Heard that they produce better results than GAN (e.g,. arxiv.org/abs/2105.05233 is quite convincing) and heard they are easier to train. True? People also say they are pretty slow to sample from. Anyone any experience with these?”

[3] https://arxiv.org/abs/2105.05233 

[4] https://github.com/openai/guided-diffusion

[5] https://ai.googleblog.com/2021/07/high-fidelity-image-generation-using.html

[6] https://arxiv.org/abs/1503.03585 

What’s next for The Lindahl Letter?

  • Week 80: Bayesian optimization (ML syllabus edition 1/8)

  • Week 81: Deep learning (ML syllabus edition 2/8)

  • Week 82: ML algorithms (ML syllabus edition 3/8)

  • Week 83: Neural networks (ML syllabus edition 4/8)

  • Week 84: Reinforcement learning (ML syllabus edition 5/8)

  • Week 85: Graph neural networks (ML syllabus edition 6/8)

  • Week 86: Neuroscience (ML syllabus edition 7/8)

  • Week 87: Ethics (fairness, bias, privacy) (ML syllabus edition 8/8)

I’ll try to keep the “What’s next” list forward looking with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.
