Antithetic Noise in Diffusion Models

1Department of Computer Science, Rutgers University
2Flatiron Institute
3Department of EECS, University of Michigan
4Department of Statistics, Rutgers University
*Co-corresponding authors

Abstract

We initiate a systematic study of antithetic initial noise in diffusion models. Across unconditional models trained on diverse datasets, text-conditioned latent-diffusion models, and diffusion-posterior samplers, we find that pairing each initial noise with its negation consistently yields strongly negatively correlated samples. To explain this phenomenon, we combine experiments and theoretical analysis, leading to a symmetry conjecture that the learned score function is approximately affine antisymmetric (odd symmetry up to a constant shift), and provide evidence supporting it. Leveraging this negative correlation, we enable two applications:

  • Enhancing image diversity in models like Stable Diffusion without quality loss;
  • Sharpening uncertainty quantification (e.g., up to 90% narrower confidence intervals) when estimating downstream statistics (see the toy sketch below).
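
To make the variance-reduction mechanism concrete, here is a self-contained toy sketch in Python. It is not the paper's code: `sample`, `f`, `W`, and `mu` are illustrative stand-ins, and the small quadratic term in `sample` merely mimics a real sampler's deviation from exact affine antisymmetry. The arithmetic behind the gain is that a pair average with correlation rho has Var((Y1 + Y2)/2) = sigma^2 (1 + rho) / 2, so strongly negative correlation shrinks the standard error toward zero.

import torch

torch.manual_seed(0)
d = 16
W = torch.randn(d, d) / d ** 0.5   # stand-in for the sampler's Jacobian
mu = torch.randn(d)                # stand-in for the affine shift

def sample(z):
    # Nearly affine noise-to-sample map; a real diffusion sampler would run
    # a reverse-diffusion loop. The 0.05 * z**2 term models the (small)
    # departure from exact affine antisymmetry.
    return mu + W @ z + 0.05 * z ** 2

def f(x):
    # Downstream statistic of interest (here, just the coordinate mean).
    return x.mean()

n_pairs = 1000
# Baseline: 2 * n_pairs i.i.d. noise draws.
iid = torch.stack([f(sample(torch.randn(d))) for _ in range(2 * n_pairs)])
# Antithetic: n_pairs draws, each paired with its negation (same model budget).
zs = [torch.randn(d) for _ in range(n_pairs)]
anti = torch.stack([0.5 * (f(sample(z)) + f(sample(-z))) for z in zs])

# The affine part of the map cancels exactly within each pair, so the
# antithetic standard error is far smaller at the same number of model calls.
print(f"iid : mean={iid.mean().item():.4f}  se={(iid.std() / (2 * n_pairs) ** 0.5).item():.4f}")
print(f"anti: mean={anti.mean().item():.4f}  se={(anti.std() / n_pairs ** 0.5).item():.4f}")

In this toy setting the antithetic standard error is roughly an order of magnitude smaller at equal cost; a real diffusion sampler is only approximately affine antisymmetric, so the gains are large but not this clean.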

Building on these gains, we extend the two-point pairing to a randomized quasi-Monte Carlo estimator, which further improves estimation accuracy. Our framework is training-free and model-agnostic, and adds no runtime overhead.
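
As a hedged illustration of how such a randomized quasi-Monte Carlo extension might look (again, not the paper's implementation), one can replace the two-point set {z, -z} with a scrambled Sobol' point set pushed through the Gaussian inverse CDF, then average over independent scrambles to obtain an unbiased estimate with an empirical standard error. The sketch below reuses the toy `sample` and `f` from above.

import torch
from torch.quasirandom import SobolEngine

torch.manual_seed(0)
d = 16
W = torch.randn(d, d) / d ** 0.5   # same toy sampler as in the sketch above
mu = torch.randn(d)
sample = lambda z: mu + W @ z + 0.05 * z ** 2
f = lambda x: x.mean()
normal = torch.distributions.Normal(0.0, 1.0)

def rqmc_noise(n, seed):
    # n low-discrepancy points in (0,1)^d, mapped to N(0, I) noise vectors
    # via the Gaussian inverse CDF; scrambling randomizes the point set.
    u = SobolEngine(dimension=d, scramble=True, seed=seed).draw(n)
    u = u.clamp(1e-6, 1 - 1e-6)  # guard the inverse CDF against endpoints
    return normal.icdf(u)

# Each independent scramble yields one unbiased replicate; averaging R
# replicates gives a point estimate plus an empirical standard error.
R, n = 8, 64
reps = torch.stack([
    torch.stack([f(sample(z)) for z in rqmc_noise(n, seed=r)]).mean()
    for r in range(R)
])
print(f"rqmc: mean={reps.mean().item():.4f}  se={(reps.std() / R ** 0.5).item():.4f}")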

BibTeX

@article{jia2025antithetic,
  title={Antithetic Noise in Diffusion Models},
  author={Jia, Jing and Liu, Sifan and Song, Bowen and Yuan, Wei and Shen, Liyue and Wang, Guanyang},
  journal={arXiv preprint arXiv:2506.06185},
  year={2025}
}