ASUS has announced an upgrade to its high-density AI server, the ESC A8A-E12U, which now supports the latest AMD Instinct MI350 series GPUs. This move is aimed at bolstering performance and efficiency for enterprises, research institutions, and cloud service providers engaged in large-scale AI and high-performance computing (HPC) workloads.
The AMD Instinct MI350 series, based on the 4th Gen AMD CDNA architecture, brings 288GB of HBM3E memory per GPU and memory bandwidth of up to 8TB/s. These enhancements enable faster, more energy-efficient execution of large AI models and complex simulations. The MI350 series also introduces support for low-precision data formats such as FP4 and FP6, accelerating inference, generative AI, and machine learning workloads.
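To get a rough sense of what those low-precision formats mean in practice, the back-of-the-envelope Python sketch below compares the weight-memory footprint of a hypothetical 70-billion-parameter model across precisions. The figures are illustrative only: real deployments also need memory for activations and KV cache, and FP4/FP6 execution is exposed through the vendor's software stack rather than plain Python.

```python
# Illustrative back-of-the-envelope math: weight-memory footprint of a
# hypothetical 70B-parameter model at different precisions. Activations,
# KV cache, and framework overhead are ignored for simplicity.
BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "FP6": 0.75, "FP4": 0.5}
PARAMS = 70e9  # hypothetical 70-billion-parameter model

for fmt, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{fmt}: ~{gib:,.0f} GiB of weight memory")
```

At FP4, such a model's weights would occupy roughly a quarter of the memory needed at FP16, which is where the claimed inference speedups and capacity gains come from.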
Compatibility is a key focus, with the MI350 series offering drop-in support for existing systems built on the AMD Instinct MI300 series. This reduces the need for infrastructure overhauls, providing a cost-effective upgrade path for organizations.
ASUS’s ESC A8A-E12U, a 7U server featuring dual AMD EPYC 9005 processors, is now optimized to fully harness the MI350 GPUs. It’s designed for high-demand scenarios including training large language models (LLMs), fine-tuning generative AI models, and conducting scientific computing simulations.
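As a rough illustration of how an eight-GPU node like this is typically driven for LLM training, the sketch below shows a minimal data-parallel training skeleton. It assumes PyTorch built with ROCm support (where AMD GPUs are addressed through the "cuda" device string); train.py and its placeholder model are hypothetical stand-ins, not an ASUS- or AMD-provided example.

```python
# train.py -- minimal DistributedDataParallel skeleton for a single
# eight-GPU node (hypothetical script; assumes PyTorch built with ROCm,
# where AMD GPUs are addressed through the "cuda" device string).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")  # on ROCm this name maps to the RCCL backend
    local_rank = int(os.environ["LOCAL_RANK"])  # set by the launcher, one process per GPU
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()  # stand-in for a real LLM
    model = DDP(model, device_ids=[local_rank])

    # ... training loop would go here ...

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with torchrun --nproc_per_node=8 train.py, this spawns one process per GPU in the node, with gradients synchronized across all eight accelerators.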
The large HBM capacity per GPU lets organizations run the same workloads on fewer GPUs, shrinking the system footprint, lowering power consumption, and simplifying scaling.
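A small, purely illustrative calculation makes that concrete: the sketch below estimates the minimum GPU count needed just to hold a model's weights at two per-GPU memory capacities. The model size is hypothetical, and real deployments also budget memory for KV cache, activations, and framework overhead.

```python
import math

# Purely illustrative: minimum GPUs needed just to hold a model's weights.
# The model size is hypothetical; real deployments also reserve memory for
# KV cache, activations, and framework overhead.
MODEL_WEIGHTS_GB = 1200  # e.g. a ~600B-parameter model held in FP16

for name, capacity_gb in [("192 GB per GPU", 192), ("288 GB per GPU", 288)]:
    gpus = math.ceil(MODEL_WEIGHTS_GB / capacity_gb)
    print(f"{name}: at least {gpus} GPUs for the weights alone")
```

In this hypothetical case, moving from 192GB to 288GB per GPU cuts the minimum count from seven GPUs to five, which is the kind of consolidation the larger HBM3E capacity enables.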
Security is another area of emphasis: the MI350 GPUs include Secure Boot, DICE attestation, SR-IOV for secure virtualization, and GPU-to-GPU encryption, making the ESC A8A-E12U a viable option for regulated sectors such as government, finance, and healthcare.