Micron just dropped a bombshell in the HBM space by announcing its new 8-high 24GB HBM3 Gen2 memory.
Delivering a 50% improvement over currently available HBM3 solutions, with more than 1.2 TB/s of bandwidth and pin speeds exceeding 9.2 Gb/s, the performance gains are no joke and will be easily felt in critical AI workloads, particularly those running in dedicated data centers.
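As a rough sanity check on those headline numbers, the per-stack bandwidth follows directly from the pin speed. This is just a back-of-the-envelope sketch assuming the standard 1024-bit-wide HBM interface per stack, which Micron's figures appear to line up with:

# Back-of-the-envelope HBM bandwidth check (assumes the standard
# 1024-bit-wide interface per HBM stack; figures are approximate).
pin_speed_gbps = 9.2      # per-pin data rate in Gb/s
bus_width_bits = 1024     # I/O width per HBM stack
bandwidth_gbs = pin_speed_gbps * bus_width_bits / 8  # bits -> bytes
print(f"~{bandwidth_gbs / 1000:.2f} TB/s per stack")  # prints ~1.18 TB/s

At the quoted 9.2 Gb/s that works out to roughly 1.18 TB/s, consistent with the "1.2+ TB/s" claim once pin speeds creep past 9.2 Gb/s.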
Power efficiency and performance per watt are 2.5x better than the previous generation, raising the ceiling for breakthroughs in AI inferencing while delivering a superior Total Cost of Ownership.
All of this is made possible by Micron's industry-leading 1β (1-beta) DRAM process node, which allows 24Gb DRAM dies to be assembled into an 8-high cube within industry-standard package dimensions.
Micron is not stopping at this checkpoint though, as it will begin sampling a 12-high stack with 36GB capacity in Q1 2024.
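The capacity math is equally straightforward: with the 24Gb dies mentioned above, stack height maps directly onto total capacity. A minimal sketch, assuming capacity is simply die density times stack height:

# Stack capacity from die density and stack height
# (24Gb dies per the announcement; assumes no spare/redundant dies).
die_density_gbit = 24
for stack_height in (8, 12):
    capacity_gbyte = die_density_gbit * stack_height / 8  # Gb -> GB
    print(f"{stack_height}-high: {capacity_gbyte:.0f}GB")  # 24GB and 36GB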
Micron also expects HBM3 Gen2 memory to lead to operational savings of up to $550 million over five years.
The gang over there is also working closely with TSMC through the 3DFabric Alliance, where all parties collaborate on solutions that benefit AI and HPC design applications.