Kioxia has confirmed it is working with Nvidia on a new class of solid-state drives designed to hit 100 million random IOPS as early as 2027, according to Nikkei. Nvidia reportedly plans to connect multiple units directly to its GPUs, effectively doubling throughput to 200 million IOPS for AI systems.
Koichi Fukuda, CTO of Kioxia’s SSD division, said the company will proceed with development in line with Nvidia’s specifications, underscoring that these drives are being tailored for AI workloads. Nvidia’s interest in 100-million-IOPS SSDs has been discussed since earlier this year, while Kioxia has already revealed plans for AI SSDs capable of 10 million IOPS in 2026, based on its XL-Flash technology.
These efforts highlight the unique storage requirements of AI. Training and inference workloads depend heavily on frequent, small random reads, often in 512-byte blocks, to retrieve embeddings, model weights, and database entries. While 512B operations deliver less raw bandwidth than 4K blocks at the same IOPS, they are better suited to latency-sensitive lookups.
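To see the bandwidth trade-off concretely, here is a back-of-the-envelope calculation (a sketch, not a vendor spec) converting an IOPS figure and block size into implied throughput:

```python
# Back-of-the-envelope: bandwidth implied by an IOPS target at a given block size.
def implied_throughput_gbs(iops: float, block_bytes: int) -> float:
    """Return the data rate in GB/s that a given IOPS figure implies."""
    return iops * block_bytes / 1e9

TARGET_IOPS = 100e6  # the reported 2027 target

# 512B random reads: high operation rate, modest data rate.
print(implied_throughput_gbs(TARGET_IOPS, 512))    # 51.2 GB/s

# The same IOPS at 4K blocks would imply 8x the bandwidth.
print(implied_throughput_gbs(TARGET_IOPS, 4096))   # 409.6 GB/s
```

This is why 512B-optimized drives make sense for AI lookups: the goal is completing many small, independent reads quickly, not streaming large blocks.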
The big question is how Kioxia intends to deliver 100 million IOPS. XL-Flash, a single-level cell NAND with higher endurance and lower latency than conventional 3D NAND, is a likely foundation. Estimates suggest reaching that figure could require nearly a thousand dies, a configuration that would demand multi-controller designs or unconventional architectures to overcome scaling limits.
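The "nearly a thousand dies" estimate follows from simple division, assuming a per-die random-read rate. The per-die figure below is an illustrative assumption, not a published Kioxia specification:

```python
import math

# Hypothetical per-die small-block random-read rate for a low-latency
# SLC NAND such as XL-Flash. This number is an assumption for illustration.
ASSUMED_IOPS_PER_DIE = 100_000

TARGET_IOPS = 100_000_000  # the reported drive-level target

# Dies needed if performance scaled linearly with die count.
dies_needed = math.ceil(TARGET_IOPS / ASSUMED_IOPS_PER_DIE)
print(dies_needed)  # 1000
```

In practice, controller channel counts, queue depths, and firmware overhead prevent linear scaling, which is why the article notes that multi-controller or unconventional architectures would likely be required.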
Another possibility is high bandwidth flash (HBF), which stacks NAND dies on a logic layer using through-silicon vias (TSVs) to maximize parallelism. While not officially confirmed, HBF could provide the kind of breakthrough needed for this project.
Whether Kioxia pursues XL-Flash, HBF, or a hybrid approach, the development signals a shift in how storage and compute will interact in AI systems. By attaching ultra-fast SSDs directly to GPUs, Nvidia aims to accelerate access to the massive datasets driving modern AI models.