Back in October of last year, SK hynix announced it was developing fourth-generation high bandwidth memory DRAM dubbed HBM3, and now just seven months later it has entered the mass production phase. NVIDIA will be the first to deploy HBM3 after having completed its performance evaluation, though don't expect to see HBM3 on its next-gen GeForce RTX 40 series.
HBM never gained much traction in the consumer space because of its relatively high cost compared to GDDR memory. However, it's a different story in the data center. AI, machine learning, and intense simulations feast on memory bandwidth, making it far easier to justify the added cost.
As such, NVIDIA is bolting SK hynix's HBM3 memory to its Hopper H100 accelerators and DGX H100 systems that were formally launched a few months ago. Technically, a full-fat H100 GPU sports 96GB of HBM3, though the accessible amount is 80GB of ECC-supported HBM3 tied to a 5120-bit bus on the same package.
“We aim to become a solution provider that deeply understands and addresses our customers’ needs through continuous open collaboration,” said Kevin (Jongwon) Noh, president and chief marketing officer at SK hynix.
HBM3 is considered a fourth-generation product because it follows HBM, HBM2, and HBM2E, the latter of which was an update to the HBM2 specification with increased bandwidth and capacities. It serves up a whopping 819GB/s of memory bandwidth. That's a nearly 78 percent increase versus HBM2E. To put it into perspective, that kind of memory bandwidth is equivalent to transmitting 163 Full HD films (5GB each) in a single second.
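The percentages above check out with a little back-of-the-envelope math. A minimal sketch, assuming HBM2E's commonly cited per-stack peak of 460GB/s (a figure not stated in the article itself):

```python
# Sanity-check the bandwidth figures quoted above.
HBM3_GBPS = 819   # GB/s per stack, per SK hynix
HBM2E_GBPS = 460  # GB/s per stack (assumed HBM2E baseline)
FILM_GB = 5       # size of one Full HD film, per the article

uplift = (HBM3_GBPS / HBM2E_GBPS - 1) * 100  # percent improvement
films_per_second = HBM3_GBPS // FILM_GB      # whole films moved per second

print(f"{uplift:.0f}% faster than HBM2E")  # -> 78% faster than HBM2E
print(f"{films_per_second} films/second")  # -> 163 films/second
```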
SK hynix says it will expand its HBM3 volume in the first half of next year in accordance with NVIDIA's schedule.