HGX

2CRSi SA: GTC NVIDIA 2024: A stronger relationship with Nvidia and a new sale for AI servers

Retrieved on: 
Wednesday, April 10, 2024

The show was also a commercial highlight, attended by many international decision-makers, among them 2CRSi’s customers and potential customers.

Key Points: 
  • The show was also a commercial highlight, attended by many international decision-makers, among them 2CRSi’s customers and potential customers.
  • It was only natural that 2CRSi Corp’s sales teams were able to win new orders for GODì 1.8SR-NV8 servers dedicated to Artificial Intelligence.
  • The first order will be delivered before the end of the fiscal year, which closes at the end of June 2024.
  • "I would like to thank our partner Nvidia, and especially Mr. Jensen Huang, CEO and founder, for their welcome during this new GTC Nvidia 2024."

NVIDIA Blackwell Platform Arrives to Power a New Era of Computing

Retrieved on: 
Monday, March 18, 2024

The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.

Key Points: 
  • The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.
  • The GB200 NVL72 rack-scale system combines 36 Grace Blackwell Superchips (72 Blackwell GPUs and 36 Grace CPUs) interconnected by fifth-generation NVLink.
  • The Blackwell product portfolio is supported by NVIDIA AI Enterprise, the end-to-end operating system for production-grade AI.
  • To learn more about the NVIDIA Blackwell platform, watch the GTC keynote and register to attend sessions from NVIDIA and industry leaders at GTC, which runs through March 21.
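The NVL72 topology described above can be sanity-checked with simple arithmetic. A minimal sketch: the per-superchip and per-rack counts come from the announcement, while the function and constant names are illustrative, not NVIDIA's.

```python
# Sanity-check the GB200 NVL72 topology described above.
# Per the announcement: each Grace Blackwell Superchip pairs
# 1 Grace CPU with 2 Blackwell GPUs, and the NVL72 rack holds
# 36 such superchips linked by fifth-generation NVLink.

GPUS_PER_SUPERCHIP = 2
CPUS_PER_SUPERCHIP = 1
SUPERCHIPS_PER_NVL72 = 36

def nvl72_totals(superchips: int = SUPERCHIPS_PER_NVL72) -> dict:
    """Return total GPU/CPU counts for a rack of GB200 superchips."""
    return {
        "gpus": superchips * GPUS_PER_SUPERCHIP,
        "cpus": superchips * CPUS_PER_SUPERCHIP,
    }

totals = nvl72_totals()
print(totals)  # {'gpus': 72, 'cpus': 36}
```

The 72-GPU / 36-CPU totals match the figures quoted in the key points.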

Hewlett Packard Enterprise Debuts End-to-End AI-Native Portfolio for Generative AI

Retrieved on: 
Monday, March 18, 2024

Today at NVIDIA GTC, Hewlett Packard Enterprise (NYSE: HPE) announced updates to one of the industry’s most comprehensive AI-native portfolios to advance the operationalization of generative AI (GenAI), deep learning, and machine learning (ML) applications.

Key Points: 
  • Today at NVIDIA GTC, Hewlett Packard Enterprise (NYSE: HPE) announced updates to one of the industry’s most comprehensive AI-native portfolios to advance the operationalization of generative AI (GenAI), deep learning, and machine learning (ML) applications.
  • The solution is enhanced by HPE’s machine learning platform and analytics software, NVIDIA AI Enterprise 5.0 software with new NVIDIA NIM microservice for optimized inference of generative AI models, as well as NVIDIA NeMo Retriever and other data science and AI libraries.
  • For more information or to order it today, visit HPE’s enterprise computing solution for generative AI.
  • HPE’s AI software is available on both HPE’s supercomputing and enterprise computing solutions for generative AI to provide a consistent environment for customers to manage their GenAI workloads.

Supermicro Grows AI Optimized Product Portfolio with a New Generation of Systems and Rack Architectures Featuring New NVIDIA Blackwell Architecture Solutions

Retrieved on: 
Monday, March 18, 2024

SAN JOSE, Calif., March 18, 2024 /PRNewswire/ -- NVIDIA GTC 2024 -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing new AI systems for large-scale generative AI featuring NVIDIA's next generation of data center products, including the latest NVIDIA GB200 Grace™ Blackwell Superchip and the NVIDIA B200 Tensor Core and B100 Tensor Core GPUs. Supermicro is enhancing its current NVIDIA HGX™ H100/H200 8-GPU systems to be drop-in ready for the NVIDIA HGX™ B100 8-GPU, and enhancing them to support the B200, reducing time to delivery. Additionally, Supermicro will further strengthen its broad NVIDIA MGX™ systems lineup with new offerings featuring the NVIDIA GB200, including the NVIDIA GB200 NVL72, a complete rack-level solution with 72 NVIDIA Blackwell GPUs. Supermicro is also adding new systems to its lineup, including the 4U NVIDIA HGX B200 8-GPU liquid-cooled system.

Key Points: 
  • Additionally, Supermicro will further strengthen its broad NVIDIA MGX™ systems lineup with new offerings featuring the NVIDIA GB200, including the NVIDIA GB200 NVL72, a complete rack-level solution with 72 NVIDIA Blackwell GPUs.
  • "These new products are built upon Supermicro and NVIDIA's proven HGX and MGX system architectures, optimized for the new capabilities of NVIDIA Blackwell GPUs."
  • Optimized for the NVIDIA Blackwell architecture, the NVIDIA Quantum-X800 and Spectrum-X800 will deliver the highest level of networking performance for AI infrastructure.
  • Supermicro will also showcase two rack-level solutions, including a concept rack with systems featuring the upcoming NVIDIA GB200 with 72 liquid-cooled GPUs interconnected with fifth-generation NVLink.

Vultr Expands Footprint with New NVIDIA Cloud GPU Capacity Using Clean, Renewable, Hydropower in Sabey Data Centers

Retrieved on: 
Tuesday, March 5, 2024

Vultr, the world’s largest privately-held cloud computing platform, today announced the expansion of its Seattle cloud data center region at Sabey Data Centers’ SDC Columbia location.

Key Points: 
  • Vultr, the world’s largest privately-held cloud computing platform, today announced the expansion of its Seattle cloud data center region at Sabey Data Centers’ SDC Columbia location.
  • Vultr’s expansion includes a significant new inventory of NVIDIA HGX H100 GPU clusters, available both on demand and through reserved instance contracts.
  • Sabey, one of the largest privately-owned multi-tenant data center operators in the U.S., builds and maintains energy-efficient data centers with the goal of reaching net-zero carbon emissions by 2029.
  • For more information about Vultr's cloud computing solutions and cloud data center locations, visit https://www.vultr.com/products/cloud-gpu/.

AMD Delivers Leadership Portfolio of Data Center AI Solutions with AMD Instinct MI300 Series

Retrieved on: 
Wednesday, December 6, 2023

SANTA CLARA, Calif., Dec. 06, 2023 (GLOBE NEWSWIRE) -- Today, AMD (NASDAQ: AMD) announced the availability of the AMD Instinct™ MI300X accelerators – with industry-leading memory bandwidth for generative AI and leadership performance for large language model (LLM) training and inference – as well as the AMD Instinct™ MI300A accelerated processing unit (APU) – combining the latest AMD CDNA™ 3 architecture and “Zen 4” CPUs to deliver breakthrough performance for HPC and AI workloads.

Key Points: 
  • “AMD Instinct MI300 Series accelerators are designed with our most advanced technologies, delivering leadership performance, and will be in large scale cloud and enterprise deployments,” said Victor Peng, president, AMD.
  • Oracle Cloud Infrastructure plans to add AMD Instinct MI300X-based bare metal instances to the company’s high-performance accelerated computing instances for AI.
  • Dell showcased the Dell PowerEdge XE9680 server featuring eight AMD Instinct MI300 Series accelerators and the new Dell Validated Design for Generative AI with AMD ROCm-powered AI frameworks.
  • Supermicro announced new additions to its H13 generation of accelerated servers powered by 4th Gen AMD EPYC™ CPUs and AMD Instinct MI300 Series accelerators.

NVIDIA Supercharges Hopper, the World’s Leading AI Computing Platform

Retrieved on: 
Monday, November 13, 2023

DENVER, Nov. 13, 2023 (GLOBE NEWSWIRE) -- NVIDIA today announced it has supercharged the world’s leading AI computing platform with the introduction of the NVIDIA HGX™ H200.

Key Points: 
  • DENVER, Nov. 13, 2023 (GLOBE NEWSWIRE) -- NVIDIA today announced it has supercharged the world’s leading AI computing platform with the introduction of the NVIDIA HGX™ H200.
  • Based on NVIDIA Hopper™ architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.
  • The NVIDIA H200 is the first GPU to offer HBM3e — faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads.
  • NVIDIA’s accelerated computing platform is supported by powerful software tools that enable developers and enterprises to build and accelerate production-ready applications from AI to HPC.
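For context on the HBM3e upgrade, the H200 can be compared against the prior-generation H100 SXM part. A rough sketch follows; the figures used (141 GB HBM3e at roughly 4.8 TB/s for the H200, 80 GB HBM3 at roughly 3.35 TB/s for the H100 SXM) come from NVIDIA's public spec sheets, not from the summary above, and should be treated as approximate.

```python
# Rough comparison of Hopper-generation GPU memory configurations.
# Figures are approximate, taken from NVIDIA's public spec sheets
# (an assumption; they are not stated in the announcement above).

specs = {
    "H100 SXM": {"memory_gb": 80,  "bandwidth_tbps": 3.35},
    "H200":     {"memory_gb": 141, "bandwidth_tbps": 4.8},
}

h100, h200 = specs["H100 SXM"], specs["H200"]
mem_gain = h200["memory_gb"] / h100["memory_gb"]          # capacity ratio
bw_gain = h200["bandwidth_tbps"] / h100["bandwidth_tbps"] # bandwidth ratio
print(f"memory: {mem_gain:.2f}x, bandwidth: {bw_gain:.2f}x")
# memory: 1.76x, bandwidth: 1.43x
```

Both gains come purely from the larger, faster HBM3e stacks, since the H200 shares the same Hopper compute die as the H100.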

Vultr Achieves Elite Partner Status in NVIDIA Partner Network

Retrieved on: 
Monday, November 13, 2023

Today, Vultr, the world’s largest privately-held cloud computing platform, announced that it has achieved Elite status as a part of the NVIDIA Partner Network (NPN) for cloud service providers.

Key Points: 
  • Today, Vultr, the world’s largest privately-held cloud computing platform, announced that it has achieved Elite status as a part of the NVIDIA Partner Network (NPN) for cloud service providers.
  • “Achieving Elite status in the NVIDIA Partner Network highlights our longstanding partnership with NVIDIA, as well as our commitment to giving customers global cloud access to state-of-the-art NVIDIA GPUs,” said J.J. Kardwell, CEO of Vultr’s parent company, Constant.
  • The NVIDIA GH200 Grace Hopper Superchip joins Vultr’s other NVIDIA GPU offerings, which include the HGX H100, A100 Tensor Core, L40S, A40, and A16 GPUs.
  • “The NVIDIA Partner Network aims to create a network of valued partners to help our customers find the perfect solutions to address current business needs and achieve success in today’s ever-changing market,” said Matt McGrigg, director, global business development, NVIDIA cloud partners.

Vultr Announces Addition of NVIDIA GH200 Grace Hopper Superchip to Its Cloud GPU Offerings for AI Training and Inference

Retrieved on: 
Monday, November 13, 2023

Today, Vultr, the world’s largest privately-held cloud computing platform, announced the addition of the NVIDIA® GH200 Grace Hopper™ Superchip to its Cloud GPU offering to accelerate AI training and inference across Vultr’s 32 cloud data center locations.

Key Points: 
  • Today, Vultr, the world’s largest privately-held cloud computing platform, announced the addition of the NVIDIA® GH200 Grace Hopper™ Superchip to its Cloud GPU offering to accelerate AI training and inference across Vultr’s 32 cloud data center locations.
  • Following the launch of its first-of-its-kind GPU Stack and Container Registry, Vultr is providing cloud access to the NVIDIA GH200 Grace Hopper Superchip.
  • “The NVIDIA GH200 Grace Hopper Superchip delivers unrivaled performance and TCO for scaling out AI inference.”
  • The NVIDIA GH200 Grace Hopper Superchip brings the new NVIDIA NVLink®-C2C to connect NVIDIA Grace™ CPUs with NVIDIA Hopper™ GPUs, delivering 7X higher aggregate memory bandwidth to the GPU compared to today’s fastest servers with PCIe Gen 5.
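The 7X aggregate-bandwidth claim above can be reconstructed from public figures. A minimal sketch, assuming a PCIe Gen 5 x16 link at roughly 128 GB/s aggregate (about 64 GB/s per direction, a standard PCIe figure that is not stated in the release):

```python
# Reconstruct the "7X higher aggregate memory bandwidth" comparison.
# NVLink-C2C: 900 GB/s total, per NVIDIA's GH200 materials.
# PCIe Gen 5 x16: ~128 GB/s aggregate (~64 GB/s per direction);
# this baseline is an assumption, not stated in the release above.

NVLINK_C2C_GBPS = 900
PCIE_GEN5_X16_AGG_GBPS = 128

ratio = NVLINK_C2C_GBPS / PCIE_GEN5_X16_AGG_GBPS
print(f"{ratio:.1f}x")  # 7.0x, matching the 7X the release cites
```

Under these assumptions the ratio lands almost exactly on the quoted 7X, which suggests the comparison is against a single Gen 5 x16 host link.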