🧵 FP32 vs FP16 on Nvidia Ampere

Anonymous No. 878351

https://www.nvidia.com/content/PDF/nvidia-ampere-ga-102-gpu-architecture-whitepaper-v2.1.pdf

On page 14, this document seems to say that using 16-bit floats instead of 32-bit ones no longer gives the ~2x performance increase it used to on older architectures.

Is this accurate, or is the truth more complicated than that?
Also, why did Nvidia do this? Radeon's 6000 series still gets the ~2x increase from FP16.

Does this change just not make a difference in games?
What about compute applications?
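
One way to see what a given card actually does is a plain CUDA-core throughput microbenchmark, something like the rough sketch below. Everything in it (kernel names, launch sizes, iteration counts) is made up for illustration, and it deliberately ignores tensor cores, which are a separate FP16 path.

// Rough FP32 vs FP16 (half2) FMA throughput check -- a sketch, not a rigorous benchmark.
// Kernel names, sizes, and iteration counts are arbitrary; tensor cores are not used.
#include <cstdio>
#include <cuda_runtime.h>
#include <cuda_fp16.h>

constexpr int ITERS   = 8192;
constexpr int BLOCKS  = 1024;
constexpr int THREADS = 256;
constexpr int N = BLOCKS * THREADS;

__global__ void fma_fp32(float *out) {
    float a = (float)threadIdx.x, b = 1.0001f, c = 0.5f;
    #pragma unroll 16
    for (int i = 0; i < ITERS; ++i)
        a = fmaf(a, b, c);                     // one FP32 FMA per iteration
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

__global__ void fma_fp16x2(__half2 *out) {
    __half2 a = __float2half2_rn((float)threadIdx.x);
    __half2 b = __float2half2_rn(1.0001f);
    __half2 c = __float2half2_rn(0.5f);
    #pragma unroll 16
    for (int i = 0; i < ITERS; ++i)
        a = __hfma2(a, b, c);                  // one packed FMA = two FP16 FMAs
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

int main() {
    float   *out32; cudaMalloc(&out32, N * sizeof(float));
    __half2 *out16; cudaMalloc(&out16, N * sizeof(__half2));

    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);

    // Warm up so the timed launches don't pay one-time init costs.
    fma_fp32  <<<BLOCKS, THREADS>>>(out32);
    fma_fp16x2<<<BLOCKS, THREADS>>>(out16);
    cudaDeviceSynchronize();

    cudaEventRecord(start);
    fma_fp32<<<BLOCKS, THREADS>>>(out32);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms32; cudaEventElapsedTime(&ms32, start, stop);

    cudaEventRecord(start);
    fma_fp16x2<<<BLOCKS, THREADS>>>(out16);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms16; cudaEventElapsedTime(&ms16, start, stop);

    // The half2 kernel does twice as many FMAs per iteration as the FP32 one,
    // so roughly equal times mean ~2x FP16 FLOP throughput, while
    // ms16 ~= 2*ms32 means FP16 and FP32 FMA rates are about the same.
    printf("FP32: %.3f ms  FP16x2: %.3f ms  time ratio FP32/FP16x2: %.2f\n",
           ms32, ms16, ms32 / ms16);

    cudaFree(out32); cudaFree(out16);
    return 0;
}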

Anonymous No. 878356

FUCK AMD

LONG LIVE NVIDIA CHADS!!!!