Supermicro Expands AI Solutions with the Upcoming NVIDIA HGX H200 and MGX Grace Hopper Platforms Featuring HBM3e Memory
SAN JOSE, Calif., and DENVER, Nov. 13, 2023 /PRNewswire/ -- Supercomputing Conference (SC23) -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is expanding its AI reach with upcoming support for the new NVIDIA HGX H200 built with H200 Tensor Core GPUs. Supermicro's industry-leading AI platforms, including 8U and 4U Universal GPU Systems, are drop-in ready for the HGX H200 8-GPU and 4-GPU configurations, which offer nearly 2x the capacity and 1.4x higher bandwidth HBM3e memory compared to the NVIDIA H100 Tensor Core GPU. In addition, the broadest portfolio of Supermicro NVIDIA MGX™ systems supports the upcoming NVIDIA Grace Hopper Superchip with HBM3e memory. With unprecedented performance, scalability, and reliability, Supermicro's rack-scale AI solutions accelerate the performance of computationally intensive generative AI, large language model (LLM) training, and HPC applications while meeting the evolving demands of growing model sizes. Using its building-block architecture, Supermicro can quickly bring new technology to market, enabling customers to become more productive sooner.
- "Supermicro partners with NVIDIA to design the most advanced systems for AI training and HPC applications," said Charles Liang, president and CEO of Supermicro. "The NVIDIA H200 GPU with high-speed HBM3e memory will be able to handle massive amounts of data for a variety of workloads."
- Additionally, the recently launched Supermicro MGX servers with the NVIDIA GH200 Grace Hopper Superchip are engineered to incorporate the NVIDIA H200 GPU with HBM3e memory.