DCTdiff: Intriguing Properties of Image Generative Modeling in the DCT Space

Research output: Contribution to journal › Conference article › Academic › peer-review

Abstract

This paper explores image modeling from the frequency space and introduces DCTdiff, an end-to-end diffusion generative paradigm that efficiently models images in the discrete cosine transform (DCT) space. We investigate the design space of DCTdiff and reveal the key design factors. Experiments on different frameworks (UViT, DiT), generation tasks, and various diffusion samplers demonstrate that DCTdiff outperforms pixel-based diffusion models in terms of generative quality and training efficiency. Remarkably, DCTdiff scales seamlessly to 512×512 resolution without using the latent diffusion paradigm and beats latent diffusion (using SD-VAE) with only 1/4 of the training cost. Finally, we illustrate several intriguing properties of DCT image modeling. For example, we provide a theoretical proof of why ‘image diffusion can be seen as spectral autoregression’, bridging the gap between diffusion and autoregressive models. The effectiveness of DCTdiff and the introduced properties suggest a promising direction for image modeling in the frequency space. The code is available at https://github.com/forever208/DCTdiff.
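The paper's actual pipeline lives in the linked repository; purely as an illustration of what modeling "in the DCT space" refers to, the sketch below (not the authors' implementation) maps an image to block-wise DCT coefficients and back, assuming JPEG-style 8×8 blocks and an orthonormal type-II DCT. The block size and normalization are illustrative assumptions.

```python
# Minimal sketch of a block-wise 2D DCT round trip (illustrative only;
# not the DCTdiff implementation). Assumes 8x8 blocks, as in JPEG.
import numpy as np
from scipy.fft import dctn, idctn

def blockwise_dct(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Apply an orthonormal 2D type-II DCT to each non-overlapping block."""
    h, w = img.shape
    assert h % block == 0 and w % block == 0, "image must tile into blocks"
    out = np.empty_like(img, dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i+block, j:j+block] = dctn(
                img[i:i+block, j:j+block], type=2, norm="ortho")
    return out

def blockwise_idct(coeffs: np.ndarray, block: int = 8) -> np.ndarray:
    """Invert blockwise_dct; round-trips to the original image."""
    h, w = coeffs.shape
    out = np.empty_like(coeffs)
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i+block, j:j+block] = idctn(
                coeffs[i:i+block, j:j+block], type=2, norm="ortho")
    return out

# Round-trip check on a random grayscale "image": the transform is lossless.
img = np.random.rand(64, 64)
assert np.allclose(blockwise_idct(blockwise_dct(img)), img)
```

Because the orthonormal DCT is invertible, a generative model can operate entirely on the coefficient array and recover pixels exactly, with no VAE-style latent encoder required.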

Original language: English
Pages (from-to): 46498-46524
Number of pages: 27
Journal: Proceedings of Machine Learning Research
Volume: 267
Publication status: Published - Aug 2025
Event: 42nd International Conference on Machine Learning, ICML 2025 - Vancouver, Canada
Duration: 13 Jul 2025 - 19 Jul 2025

Bibliographical note

Publisher Copyright:
© 2025, ML Research Press. All rights reserved.
