High Bandwidth Memory

High Bandwidth Memory (HBM) is a computer memory interface for 3D-stacked synchronous dynamic random-access memory (SDRAM), initially developed by Samsung, AMD and SK Hynix. It is used in conjunction with high-performance graphics accelerators, network devices, high-performance datacenter AI ASICs, FPGAs, and some supercomputers (such as the NEC SX-Aurora TSUBASA and Fujitsu A64FX),[2] and serves as on-package cache in CPUs[1] and on-package RAM in upcoming CPUs. The first HBM memory chip was produced by SK Hynix in 2013,[3] and the first devices to use HBM were the AMD Fiji GPUs in 2015.[4][5]

HBM was adopted by JEDEC as an industry standard in October 2013.[6] The second generation, HBM2, was accepted by JEDEC in January 2016.[7] JEDEC officially announced the HBM3 standard on January 27, 2022.[8]
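Each JEDEC revision raised the per-pin data rate while keeping the interface very wide, which is where the "high bandwidth" comes from. As an illustrative sketch (the 1024-bit stack interface and the headline per-pin rates of roughly 1 Gb/s for HBM, 2 Gb/s for HBM2, and 6.4 Gb/s for HBM3 are assumptions drawn from the published standards, not stated in this article):

```python
# Peak per-stack bandwidth in GB/s:
#   width (bits) * per-pin data rate (Gb/s) / 8 bits-per-byte
def peak_bandwidth_gbps(width_bits: int, pin_rate_gbps: float) -> float:
    """Return the theoretical peak bandwidth of one HBM stack in GB/s."""
    return width_bits * pin_rate_gbps / 8

# Headline per-pin rates (assumed from the JEDEC specs; later updates ran faster)
generations = {"HBM": 1.0, "HBM2": 2.0, "HBM3": 6.4}

for name, rate in generations.items():
    print(f"{name}: {peak_bandwidth_gbps(1024, rate):.0f} GB/s per stack")
```

For comparison, a contemporary GDDR5 chip with a 32-bit interface at 7 Gb/s per pin yields only 28 GB/s, which is why a single HBM stack can replace many conventional memory chips.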

  1. ^ Shilov, Anton (December 30, 2020). "Intel Confirms On-Package HBM Memory Support for Sapphire Rapids". Tom's Hardware. Retrieved January 1, 2021.
  2. ^ "High-Bandwidth DRAM", ISSCC 2014 Trends, page 118. Archived 2015-02-06 at the Wayback Machine.
  3. ^ Cite error: The named reference hynix2010s was invoked but never defined.
  4. ^ Smith, Ryan (2 July 2015). "The AMD Radeon R9 Fury X Review". Anandtech. Retrieved 1 August 2016.
  5. ^ Morgan, Timothy Prickett (March 25, 2014). "Future Nvidia 'Pascal' GPUs Pack 3D Memory, Homegrown Interconnect". EnterpriseTech. Retrieved 26 August 2014. Nvidia will be adopting the High Bandwidth Memory (HBM) variant of stacked DRAM that was developed by AMD and Hynix
  6. ^ High Bandwidth Memory (HBM) DRAM (JESD235), JEDEC, October 2013
  7. ^ "JESD235a: High Bandwidth Memory 2". 2016-01-12.
  8. ^ Cite error: The named reference HBM3 was invoked but never defined.