
OpenAI's diffusion models beat GANs at what they do best


OpenAI’s improvements to diffusion models allow them to beat state-of-the-art generative adversarial networks (GANs) at both conditional and unconditional image generation.
Generative adversarial networks (GANs) are a class of deep learning models that learn to produce new, realistic-looking data. Since their introduction in 2014 and subsequent refinement, they have dominated the image generation domain and laid the foundations for a new phenomenon: deepfakes. Their ability to mimic training data and produce convincing new samples has gone more or less unmatched, and they hold the state of the art (SOTA) in most image generation tasks today.

Despite these strengths, GANs are notoriously hard to train, suffering from issues such as mode collapse and unstable training procedures. Moreover, researchers have observed that GANs prioritize fidelity over capturing the full diversity of the training data’s distribution. This has prompted efforts either to improve GANs on this front or to find other architectures that handle it better.

Prafulla Dhariwal and Alex Nichol, two researchers at OpenAI, one of the leading AI research labs, took up the question and looked toward other architectures. In their latest work, “Diffusion Models Beat GANs on Image Synthesis”, published on the preprint repository arXiv this week, they show that a different class of deep learning models, called diffusion models, addresses the aforementioned shortcomings of GANs. They show that not only do diffusion models capture a greater breadth of the training data’s variance than GANs, they also beat the SOTA GANs on image generation tasks.
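To give a flavor of what a diffusion model does, the sketch below shows the standard forward (noising) process: an image is gradually corrupted with Gaussian noise over many steps, and the model is trained to reverse that corruption. This is a minimal illustration of the general technique, not code from the paper; the schedule values and names (`betas`, `alpha_bar`, `q_sample`) are our own assumptions.

```python
import numpy as np

# Hypothetical illustration of the forward (noising) process in a
# diffusion model. Over T steps, data x_0 is mixed with Gaussian
# noise; a neural network would be trained to undo this, step by step.

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative signal fraction at step t

def q_sample(x0, t, rng):
    """Sample a noised version x_t of x0 directly, in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))     # stand-in for an 8x8 "image"
x_mid = q_sample(x0, 500, rng)       # partially noised sample
x_end = q_sample(x0, T - 1, rng)     # almost pure Gaussian noise
```

By the final step, `alpha_bar` is close to zero, so the sample is essentially noise; generation then amounts to starting from noise and running the learned reverse process.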
