ASUS has unveiled its latest AI servers at Computex 2024, including the brand-new ASUS ESC AI POD with the NVIDIA® GB200 NVL72 system, alongside other NVIDIA MGX-powered systems ready to serve growing AI demand.
The highlight of the show is none other than the ESC AI POD, which houses the GB200 NVL72 system. Each GB200 Superchip pairs one Grace CPU with two Blackwell GPUs, and the rack can take up to 36 of them, forming a 72-GPU NVLink domain that can logically be treated as one giant GPU to accelerate in-house and collaborative AI models and algorithms to the fullest.
The new system also marks the transition to liquid-to-liquid cooling, which provides tighter thermal control and better overall efficiency, although liquid-to-air is still supported for flexibility.
Aside from that, the MGX architecture, designed as a “jack of all trades plus a bit of AI computing capability,” appears in the ESC NM1-E1, ESC NM2-E1, and ESR1-511N-M1, all running the GH200 Grace Hopper Superchip, currently the best setup available. The HGX servers, on the other hand, pair an Intel Xeon or AMD EPYC CPU with NVIDIA Blackwell GPUs. In short, these remain x86 servers, but now with enhanced thermal solutions and one-GPU-to-one-NIC configurations for maximal throughput, as seen in the ESC N8 and ESC N8A.
ASUS also offers comprehensive data center solutions spanning hardware, software, and support for NVIDIA BlueField-3 SuperNICs and DPUs. These solutions strengthen generative AI infrastructure and expedite AI development, backed by collaborations with independent software vendors.