HGX

Vultr Announces Addition of NVIDIA GH200 Grace Hopper Superchip to Its Cloud GPU Offerings for AI Training and Inference

Retrieved on: 
Monday, November 13, 2023

Today, Vultr, the world’s largest privately-held cloud computing platform, announced the addition of the NVIDIA® GH200 Grace Hopper™ Superchip to its Cloud GPU offering to accelerate AI training and inference across Vultr’s 32 cloud data center locations.

Key Points: 
  • Today, Vultr, the world’s largest privately-held cloud computing platform, announced the addition of the NVIDIA® GH200 Grace Hopper™ Superchip to its Cloud GPU offering to accelerate AI training and inference across Vultr’s 32 cloud data center locations.
  • Following the launch of its first-of-its-kind GPU Stack and Container Registry , Vultr is providing cloud access to the NVIDIA GH200 Grace Hopper Superchip.
  • “The NVIDIA GH200 Grace Hopper Superchip delivers unrivaled performance and TCO for scaling out AI inference.”
  • The NVIDIA GH200 Grace Hopper Superchip brings the new NVIDIA NVLink®-C2C to connect NVIDIA Grace™ CPUs with NVIDIA Hopper™ GPUs , delivering 7X higher aggregate memory bandwidth to the GPU compared to today’s fastest servers with PCIe Gen 5.
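The 7X figure can be sanity-checked against published specs: NVLink-C2C provides 900 GB/s of CPU-to-GPU bandwidth, while a PCIe Gen 5 x16 link tops out at roughly 128 GB/s aggregate (about 64 GB/s in each direction). A quick back-of-the-envelope check (the bandwidth constants are NVIDIA's and PCI-SIG's published figures; the script itself is only illustrative):

```python
# Back-of-the-envelope check of the "7X higher aggregate memory bandwidth"
# claim for NVLink-C2C versus a PCIe Gen 5 x16 link.
NVLINK_C2C_GBPS = 900.0     # NVLink-C2C Grace-to-Hopper bandwidth, GB/s (published spec)
PCIE_GEN5_X16_GBPS = 128.0  # PCIe 5.0 x16 aggregate bandwidth, GB/s (~64 GB/s each way)

ratio = NVLINK_C2C_GBPS / PCIE_GEN5_X16_GBPS
print(f"NVLink-C2C vs PCIe Gen 5 x16: {ratio:.1f}x")  # ~7.0x
```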

Supermicro Expands AI Solutions with the Upcoming NVIDIA HGX H200 and MGX Grace Hopper Platforms Featuring HBM3e Memory

Retrieved on: 
Monday, November 13, 2023

SAN JOSE, Calif., and DENVER, Nov. 13, 2023 /PRNewswire/ -- Supercomputing Conference (SC23) -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is expanding its AI reach with upcoming support for the new NVIDIA HGX H200 built with H200 Tensor Core GPUs. Supermicro's industry-leading AI platforms, including 8U and 4U Universal GPU Systems, are drop-in ready for the HGX H200 in 8-GPU and 4-GPU configurations, whose HBM3e memory offers nearly 2x the capacity and 1.4x higher bandwidth compared to the NVIDIA H100 Tensor Core GPU. In addition, the broadest portfolio of Supermicro NVIDIA MGX™ systems supports the upcoming NVIDIA Grace Hopper Superchip with HBM3e memory. With unprecedented performance, scalability, and reliability, Supermicro's rack-scale AI solutions accelerate the performance of computationally intensive generative AI, large language model (LLM) training, and HPC applications while meeting the evolving demands of growing model sizes. Using its building-block architecture, Supermicro can quickly bring new technology to market, enabling customers to become more productive sooner.

Key Points: 
  • In addition, the broadest portfolio of Supermicro NVIDIA MGX™ systems supports the upcoming NVIDIA Grace Hopper Superchip with HBM3e memory.
  • "Supermicro partners with NVIDIA to design the most advanced systems for AI training and HPC applications," said Charles Liang, president and CEO of Supermicro.
  • "The NVIDIA H200 GPU with high-speed HBM3e memory will be able to handle massive amounts of data for a variety of workloads."
  • Additionally, the recently launched Supermicro MGX servers with the NVIDIA GH200 Grace Hopper Superchips are engineered to incorporate the NVIDIA H200 GPU with HBM3e memory.
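The "nearly 2x capacity and 1.4x higher bandwidth" claim lines up with the published memory specs of the two GPUs (141 GB of HBM3e at 4.8 TB/s on the H200 versus 80 GB of HBM3 at 3.35 TB/s on the SXM H100). A quick check, using those public figures:

```python
# Compare published memory specs of the NVIDIA H200 (HBM3e) and SXM H100 (HBM3).
h100 = {"capacity_gb": 80, "bandwidth_tbps": 3.35}
h200 = {"capacity_gb": 141, "bandwidth_tbps": 4.8}

capacity_ratio = h200["capacity_gb"] / h100["capacity_gb"]         # ~1.76x ("nearly 2x")
bandwidth_ratio = h200["bandwidth_tbps"] / h100["bandwidth_tbps"]  # ~1.43x ("1.4x higher")
print(f"capacity: {capacity_ratio:.2f}x, bandwidth: {bandwidth_ratio:.2f}x")
```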

2CRSi SA: 2CRSi joins the exclusive club of high-performance server manufacturers for next generation AI

Retrieved on: 
Tuesday, October 17, 2023

2CRSi joins the exclusive club of high-performance server manufacturers for next generation AI.

Key Points: 
  • 2CRSi joins the exclusive club of high-performance server manufacturers for next generation AI.
  • Highly dense AI servers, which generate substantial heat and consume significant amounts of energy, are where 2CRSi excels, thanks to its innovative designs and cooling techniques.
  • With this technically highly complex solution, 2CRSi joins the exclusive club of manufacturers capable of integrating technologies from OAM or SXM integrators such as NVIDIA.
  • Deliveries will start in November 2023, and will run until February 2024, depending on chip deliveries from NVIDIA.

NVIDIA MGX Gives System Makers Modular Architecture to Meet Diverse Accelerated Computing Needs of World’s Data Centers

Retrieved on: 
Monday, May 29, 2023

TAIPEI, Taiwan, May 28, 2023 (GLOBE NEWSWIRE) -- COMPUTEX -- To meet the diverse accelerated computing needs of the world’s data centers, NVIDIA today unveiled the NVIDIA MGX™ server specification, which provides system manufacturers with a modular reference architecture to quickly and cost-effectively build more than 100 server variations to suit a wide range of AI, high performance computing and Omniverse applications.

Key Points: 
  • TAIPEI, Taiwan, May 28, 2023 (GLOBE NEWSWIRE) -- COMPUTEX -- To meet the diverse accelerated computing needs of the world’s data centers, NVIDIA today unveiled the NVIDIA MGX™ server specification, which provides system manufacturers with a modular reference architecture to quickly and cost-effectively build more than 100 server variations to suit a wide range of AI, high performance computing and Omniverse applications.
  • “Enterprises are seeking more accelerated computing options when architecting data centers that meet their specific business and application needs,” said Kaustubh Sanghani, vice president of GPU products at NVIDIA.
  • Now, the modular design of MGX gives system manufacturers the ability to more effectively meet each customer’s unique budget, power delivery, thermal design and mechanical requirements.
  • MGX is compatible with the Open Compute Project and Electronic Industries Alliance server racks, for quick integration into enterprise and cloud data centers.
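The "more than 100 server variations" figure follows from simple combinatorics over interchangeable building blocks. The option lists below are hypothetical placeholders (the actual MGX catalog of chassis, CPUs, and GPUs differs), but they show how a handful of modular choices multiplies into a large product line:

```python
from itertools import product

# Hypothetical MGX-style building blocks; the real option lists differ,
# but the combinatorial point is the same: a few modules yield 100+ variants.
chassis = ["1U", "2U", "4U", "4U-dense"]
cpus = ["Grace", "x86-vendor-A", "x86-vendor-B"]
gpus = ["H100-PCIe", "H100-SXM", "L40", "L4", "GH200"]
cooling = ["air", "liquid"]

variants = list(product(chassis, cpus, gpus, cooling))
print(f"{len(variants)} possible server variations")  # 4 * 3 * 5 * 2 = 120
```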

Global AI Server Shipments Forecasted to Increase 40% in 2023 Amid Rising AI Demand, Says TrendForce

Retrieved on: 
Tuesday, May 30, 2023

This increase reflects mounting demand for AI servers and chips, with AI servers poised to constitute nearly 9% of total server shipments, a figure projected to rise to 15% by 2026.

Key Points: 
  • This increase reflects mounting demand for AI servers and chips, with AI servers poised to constitute nearly 9% of total server shipments, a figure projected to rise to 15% by 2026.
  • TrendForce has revised its CAGR forecast for AI server shipments between 2022 and 2026 upwards to an ambitious 22%.
  • Furthermore, AI chip shipments in 2023 are slated to increase by an impressive 46%.
  • TrendForce analysis indicates that NVIDIA’s GPUs currently dominate the AI server market, commanding an impressive 60–70% market share.
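The 22% CAGR figure compounds over the four years from 2022 to 2026, implying roughly a 2.2x increase in AI server shipments over that span. A quick check of the arithmetic (the normalized base of 1.0 is illustrative, not a TrendForce figure):

```python
# Compound TrendForce's stated 22% CAGR for AI server shipments, 2022-2026.
cagr = 0.22
years = 4  # 2022 -> 2026 is four compounding years

multiplier = (1 + cagr) ** years
print(f"AI server shipments grow ~{multiplier:.2f}x over 2022-2026")  # ~2.22x
```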

GIGABYTE’s AI Servers with Superchips Shine at COMPUTEX, Redefining a New Era of Computing

Retrieved on: 
Monday, May 29, 2023

Key Points: 
  • View the full release here: https://www.businesswire.com/news/home/20230525005657/en/
    GIGABYTE and its subsidiary, Giga Computing, are introducing unparalleled AI/HPC server lineups, leading the era of exascale supercomputing.
  • In addition, GIGABYTE is debuting AI computing servers supporting NVIDIA Grace CPU and Grace Hopper Superchips.
  • The high-density servers are accelerated with NVLink-C2C technology on the Arm Neoverse V2 platform, setting a new standard for AI/HPC computing efficiency and bandwidth.
  • GIGABYTE is exhibiting a wide range of servers and motherboards suitable for cloud computing, data storage, and edge computing, as well as servers displayed in EIA and OCP (Open Compute Project) standardized racks.

Supermicro Features Unparalleled Array of New Servers and Storage Systems at COMPUTEX 2023

Retrieved on: 
Monday, May 29, 2023

SAN JOSE, Calif. and TAIPEI, Taiwan, May 29, 2023 /PRNewswire/ -- Super Micro Computer, Inc. (Nasdaq: SMCI), a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, continues to innovate with a broad range of servers to meet IT requirements for modern workloads. Supermicro's Building Block Server® methodology enables first-to-market delivery with the latest technology from Intel, AMD, and NVIDIA. Purpose-built servers deliver exceptional performance for a wide range of AI, Cloud, and 5G workloads, from the data center to the edge.

Key Points: 
  • At the COMPUTEX 2023 event, Supermicro will be showcasing a wide range of servers and storage solutions and demonstrate the fully integrated rack with the newest liquid cooling technologies that enable unprecedented energy efficiency and fast deployment.
  • The highlights of the Supermicro lineup at COMPUTEX 2023 include the following:
    Rack Scale Liquid Cooling – Supermicro's full rack liquid cooling solution enables organizations to run the highest performing GPU servers and maintain the optimal operating conditions.
  • The X13 and H13 GPU systems are open, modular, standards-based servers that provide superior performance and serviceability with a hot-swappable, toolless design.
  • To learn more about Supermicro and talk to product experts at Computex Taipei 2023, visit

Supermicro Accelerates the Era of AI and the Metaverse with Top-of-the-Line Servers for AI Training, Deep Learning, HPC, and Generative AI, Featuring NVIDIA HGX and PCIe-Based H100 8-GPU Systems

Retrieved on: 
Tuesday, March 21, 2023

SAN JOSE, Calif., March 21, 2023 /PRNewswire/ -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI/ML, Cloud, Storage, and 5G/Edge, today announced that it has begun shipping its top-of-the-line new GPU servers featuring the latest NVIDIA HGX H100 8-GPU system. Supermicro servers incorporate the new NVIDIA L4 Tensor Core GPU in a wide range of application-optimized servers from the edge to the data center.

Key Points: 
  • Supermicro servers incorporate the new NVIDIA L4 Tensor Core GPU in a wide range of application-optimized servers from the edge to the data center.
  • "With our new NVIDIA HGX H100 Delta-Next server, customers can expect 9x performance gains compared to the previous generation for AI training applications."
  • The Supermicro X13 SuperBlade® enclosure accommodates 20 NVIDIA H100 Tensor Core PCIe GPUs or 40 NVIDIA L40 GPUs in an 8U enclosure.
  • These new systems deliver the optimized acceleration ideal for running NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform.