We initiate a systematic study of antithetic initial noise in diffusion models. Across unconditional models trained on diverse datasets, text-conditioned latent-diffusion models, and diffusion-posterior samplers, we find that pairing each initial noise with its negation consistently yields strongly negatively correlated samples. To explain this phenomenon, we combine experiments and theoretical analysis, leading to a symmetry conjecture: the learned score function is approximately affine antisymmetric (odd symmetry up to a constant shift). We provide evidence supporting this conjecture. Leveraging the negative correlation, we enable two applications: (1) enhancing image diversity in models such as Stable Diffusion without sacrificing quality, and (2) sharpening uncertainty quantification, yielding substantially narrower confidence intervals when estimating downstream statistics.
Building on these gains, we extend the two-point pairing to a randomized quasi-Monte Carlo estimator, which further improves estimation accuracy. Our framework is training-free, model-agnostic, and adds no runtime overhead.
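To make the pairing concrete, here is a minimal sketch using the Hugging Face diffusers library (the model name, prompt, and latent shape are illustrative choices, not the paper's code): draw one Gaussian latent `z`, then run the sampler twice, once from `z` and once from its negation `-z`.

```python
# Minimal sketch of antithetic initial noise with diffusers (illustrative).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a corgi on a beach"

# One initial Gaussian latent z (shape for SD 1.5 at 512x512 resolution).
shape = (1, pipe.unet.config.in_channels, 64, 64)
z = torch.randn(shape, dtype=torch.float16, device="cuda")

# Run the sampler from z and from -z. Since -z has the same N(0, I) law
# as z, both outputs are valid samples, yet the paper finds the pair is
# strongly negatively correlated.
img_pos = pipe(prompt, latents=z).images[0]
img_neg = pipe(prompt, latents=-z).images[0]
```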
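The statistical payoff of negative correlation can be seen with a toy simulation (the correlation value and the statistic below are simulated stand-ins, not measurements from a diffusion model): at equal sampling cost, averaging a negatively correlated pair shrinks the estimator's variance by a factor of (1 + ρ)/2 relative to averaging two independent draws.

```python
# Toy illustration of antithetic variance reduction (NumPy, simulated data).
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, -0.8  # assumed negative correlation between paired outputs

# Simulate paired statistics (f(x+), f(x-)) with correlation rho,
# plus independent pairs as an equal-cost baseline.
cov = [[1.0, rho], [rho, 1.0]]
anti_pairs = rng.multivariate_normal([0.0, 0.0], cov, size=n)
iid_pairs = rng.standard_normal((n, 2))

# Both estimators spend two "model calls" per point.
iid_est = iid_pairs.mean(axis=1)    # variance ~ 1/2
anti_est = anti_pairs.mean(axis=1)  # variance ~ (1 + rho)/2 = 0.1

print("iid pair variance       :", iid_est.var())
print("antithetic pair variance:", anti_est.var())
```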
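For the randomized quasi-Monte Carlo extension, one standard construction (a hedged sketch, not necessarily the paper's exact method; dimension and sample count are illustrative) is to map scrambled Sobol' points through the Gaussian inverse CDF, producing low-discrepancy initial noises of which the two-point antithetic pairing is a special case.

```python
# Sketch of RQMC Gaussian initial noise via scrambled Sobol' points (SciPy).
import numpy as np
from scipy.stats import norm, qmc

d = 4 * 64 * 64                          # latent dimension (SD 1.5 at 512x512)
sampler = qmc.Sobol(d=d, scramble=True, seed=0)
u = sampler.random(8)                    # 8 scrambled points in (0, 1)^d
noises = norm.ppf(u).astype(np.float32)  # map to N(0, I) initial noises
```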
```bibtex
@article{jia2025antithetic,
  title={Antithetic Noise in Diffusion Models},
  author={Jia, Jing and Liu, Sifan and Song, Bowen and Yuan, Wei and Shen, Liyue and Wang, Guanyang},
  journal={arXiv preprint arXiv:2506.06185},
  year={2025}
}
```