HGX

AI Technology Making Businesses More Efficient and Profitable

Retrieved on: 
Tuesday, May 21, 2024

Vancouver, Kelowna and Delta, British Columbia--(Newsfile Corp. - May 21, 2024) - Investorideas.com, a global investor news source covering Artificial Intelligence (AI) and technology stocks, releases a snapshot looking at the evolution and integration of AI solutions to make businesses more efficient and profitable.

Key Points: 
  • Vancouver, Kelowna and Delta, British Columbia--(Newsfile Corp. - May 21, 2024) - Investorideas.com, a global investor news source covering Artificial Intelligence (AI) and technology stocks, releases a snapshot looking at the evolution and integration of AI solutions to make businesses more efficient and profitable.
  • The collaboration between Vertex AI Ventures and Nom Nom underscores the transformative power of AI in reshaping industries and driving efficiency.
  • As organizations increasingly turn to automation and AI technologies to eliminate human error and streamline operations, data security remains paramount.
  • If Nvidia can pull off another winning earnings report, AI stocks will most likely see another run, as they did when they followed the leader, Nvidia, in the last AI bull run.

Supermicro's Rack Scale Liquid-Cooled Solutions with the Industry's Latest Accelerators Target AI and HPC Convergence

Retrieved on: 
Monday, May 13, 2024

SAN JOSE, Calif. and HAMBURG, Germany, May 13, 2024 /PRNewswire/ -- International Supercomputing Conference (ISC) -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is addressing the most demanding requirements from customers who want to expand their AI and HPC capacities while reducing data center power requirements. Supermicro delivers complete liquid-cooled solutions, including cold plates, CDUs, CDMs, and entire cooling towers. A significant reduction in the PUE of a data center is quickly realized with data center liquid-cooled servers and infrastructure, and this can reduce overall power consumption in the data center by up to 40%.
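
The "up to 40%" figure rests on the data center's PUE (Power Usage Effectiveness), the ratio of total facility power to IT power. As a rough illustration only, here is a minimal Python sketch of that arithmetic using assumed PUE values (an air-cooled baseline of 1.6 versus 1.1 with liquid cooling); the numbers are hypothetical, not Supermicro's:

    # Illustrative PUE arithmetic; the baseline and liquid-cooled PUE values
    # below are assumptions for the example, not figures from the announcement.
    def facility_power_kw(it_load_kw: float, pue: float) -> float:
        """Total facility power = IT load x PUE."""
        return it_load_kw * pue

    it_load_kw = 1_000.0                                  # assumed IT load
    baseline = facility_power_kw(it_load_kw, pue=1.6)     # air-cooled baseline
    liquid = facility_power_kw(it_load_kw, pue=1.1)       # liquid-cooled estimate

    savings_pct = 100 * (baseline - liquid) / baseline
    print(f"Facility power: {baseline:.0f} kW -> {liquid:.0f} kW "
          f"({savings_pct:.0f}% lower)")                  # ~31% under these assumptions

Reaching the full 40% cited above would additionally require a higher starting PUE or cooling-related savings inside the servers themselves (for example, removing most server fans), so this sketch understates the best case.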

Key Points: 
  • "Supermicro continues to work with our AI and HPC customers to bring the latest technology, including total liquid cooling solutions, into their data centers," said Charles Liang, president and CEO of Supermicro.
  • "Our complete liquid cooling solutions can handle up to 100 kW per rack, which reduces the TCO in data centers and allows for denser AI and HPC computing."
  • Supermicro will also showcase and demonstrate a wide range of solutions designed specifically for HPC and AI environments at ISC 2024.
  • Supermicro's Petascale storage systems, which are critical for large-scale HPC and AI workloads, will also be displayed.

Hyve Solutions Named Design Partner for NVIDIA HGX Product Line

Retrieved on: 
Wednesday, May 1, 2024

Hyve Solutions Corporation, a wholly owned subsidiary of TD SYNNEX Corporation (NYSE: SNX) and a leading provider of hyperscale digital infrastructures, today announced it has become a design partner for the NVIDIA HGX platform.

Key Points: 
  • Hyve Solutions Corporation, a wholly owned subsidiary of TD SYNNEX Corporation (NYSE: SNX) and a leading provider of hyperscale digital infrastructures, today announced it has become a design partner for the NVIDIA HGX platform.
  • “NVIDIA’s HGX platform empowers organizations worldwide with powerful performance and scalability,” said Steve Ichinaga, President, Hyve Solutions.
  • “Designation as a design partner underscores our commitment to meet the evolving needs of our customers through the swift and efficient creation of next-generation AI solutions that drive transformation and shape the future of computing.”
    As an NVIDIA HGX design partner, Hyve offers a wide range of NVIDIA AI solutions that are optimized for NVIDIA H100, H200, and Blackwell GPUs while incorporating foundational industry technologies such as liquid cooling and DC busbar power architectures.
  • “As an NVIDIA HGX design partner, Hyve Solutions is empowered to help design and manufacturing teams across organizations overcome their most complex datacenter infrastructure challenges.”
    As a fully vertically integrated original design manufacturer with US-based SMT operations, Hyve leverages its extensive design and manufacturing expertise, experience and global footprint to deploy AI datacenter architectures quickly and efficiently.

2CRSi SA: GTC NVIDIA 2024: A stronger relationship with Nvidia and a new sale for AI servers

Retrieved on: 
Wednesday, April 10, 2024

The show was also a commercial highlight, with most international decision-makers, 2CRSi’s customers and potential customers in attendance.

Key Points: 
  • The show was also a commercial highlight, with most international decision-makers, 2CRSi’s customers and potential customers in attendance.
  • It was only natural that 2CRSi Corp’s sales teams were able to win new orders for GODì 1.8SR-NV8 servers dedicated to Artificial Intelligence.
  • The first order will be delivered before the end of the fiscal year, scheduled for the end of June 2024.
  • "I would like to thank our partner Nvidia, and especially Mr. Jensen Huang, CEO and founder, for their welcome during this new GTC Nvidia 2024."

NVIDIA Blackwell Platform Arrives to Power a New Era of Computing

Retrieved on: 
Monday, March 18, 2024

The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.
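
To make the topology concrete, here is a minimal Python sketch of the building blocks described here and in the key points below: one Grace CPU paired with two B200 GPUs per superchip over a 900 GB/s NVLink chip-to-chip link, and 36 such superchips in a GB200 NVL72 rack. The class and field names are illustrative only.

    from dataclasses import dataclass

    # Counts taken from the announcement: 2 GPUs + 1 CPU per superchip,
    # a 900 GB/s NVLink chip-to-chip link, and 36 superchips per NVL72 rack.
    @dataclass
    class GB200Superchip:
        grace_cpus: int = 1
        b200_gpus: int = 2
        nvlink_c2c_gb_per_s: int = 900

    @dataclass
    class GB200NVL72Rack:
        superchips: int = 36

        def totals(self) -> dict:
            chip = GB200Superchip()
            return {
                "gpus": self.superchips * chip.b200_gpus,   # 72 Blackwell GPUs
                "cpus": self.superchips * chip.grace_cpus,  # 36 Grace CPUs
            }

    print(GB200NVL72Rack().totals())   # {'gpus': 72, 'cpus': 36}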

Key Points: 
  • The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.
  • The GB200 NVL72 rack-scale system combines 36 Grace Blackwell Superchips, which include 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink.
  • The Blackwell product portfolio is supported by NVIDIA AI Enterprise , the end-to-end operating system for production-grade AI.
  • To learn more about the NVIDIA Blackwell platform, watch the GTC keynote and register to attend sessions from NVIDIA and industry leaders at GTC, which runs through March 21.

Hewlett Packard Enterprise Debuts End-to-End AI-Native Portfolio for Generative AI

Retrieved on: 
Monday, March 18, 2024

Today at NVIDIA GTC, Hewlett Packard Enterprise (NYSE: HPE) announced updates to one of the industry’s most comprehensive AI-native portfolios to advance the operationalization of generative AI (GenAI), deep learning, and machine learning (ML) applications.

Key Points: 
  • Today at NVIDIA GTC, Hewlett Packard Enterprise (NYSE: HPE) announced updates to one of the industry’s most comprehensive AI-native portfolios to advance the operationalization of generative AI (GenAI), deep learning, and machine learning (ML) applications.
  • The solution is enhanced by HPE’s machine learning platform and analytics software, NVIDIA AI Enterprise 5.0 software with new NVIDIA NIM microservices for optimized inference of generative AI models, as well as NVIDIA NeMo Retriever and other data science and AI libraries; a minimal inference-call sketch follows this list.
  • For more information or to order it today, visit HPE’s enterprise computing solution for generative AI.
  • HPE’s AI software is available on both HPE’s supercomputing and enterprise computing solutions for generative AI to provide a consistent environment for customers to manage their GenAI workloads.
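
To make concrete what a NIM-based inference call looks like in practice, here is a minimal Python sketch. NIM microservices expose an OpenAI-compatible HTTP API, but the host, port, and model name below are placeholders, not details from HPE's announcement.

    import requests  # assumes the `requests` package is installed

    # Placeholder endpoint and model id: a deployed NIM microservice typically
    # serves an OpenAI-compatible API, but the values below are illustrative only.
    NIM_URL = "http://localhost:8000/v1/chat/completions"

    payload = {
        "model": "meta/llama3-8b-instruct",          # hypothetical model id
        "messages": [
            {"role": "user", "content": "Summarize our Q1 sales notes."}
        ],
        "max_tokens": 256,
    }

    response = requests.post(NIM_URL, json=payload, timeout=60)
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])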

Supermicro Grows AI Optimized Product Portfolio with a New Generation of Systems and Rack Architectures Featuring New NVIDIA Blackwell Architecture Solutions

Retrieved on: 
Monday, March 18, 2024

SAN JOSE, Calif., March 18, 2024 /PRNewswire/ -- NVIDIA GTC 2024 -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing new AI systems for large-scale generative AI featuring NVIDIA's next generation of data center products, including the latest NVIDIA GB200 Grace™ Blackwell Superchip, the NVIDIA B200 Tensor Core and B100 Tensor Core GPUs. Supermicro is enhancing its current NVIDIA HGX™ H100/H200 8-GPU systems to be drop-in ready for the NVIDIA HGX™ B100 8-GPU and enhanced to support the B200, resulting in a reduced time to delivery. Additionally, Supermicro will further strengthen its broad NVIDIA MGX™ systems lineup with new offerings featuring the NVIDIA GB200, including the NVIDIA GB200 NVL72, a complete rack level solution with 72 NVIDIA Blackwell GPUs. Supermicro is also adding new systems to its lineup, including the 4U NVIDIA HGX B200 8-GPU liquid-cooled system.

Key Points: 
  • Additionally, Supermicro will further strengthen its broad NVIDIA MGX™ systems lineup with new offerings featuring the NVIDIA GB200, including the NVIDIA GB200 NVL72, a complete rack level solution with 72 NVIDIA Blackwell GPUs.
  • "These new products are built upon Supermicro and NVIDIA's proven HGX and MGX system architecture, optimizing for the new capabilities of NVIDIA Blackwell GPUs."
  • Optimized for the NVIDIA Blackwell architecture, the NVIDIA Quantum-X800, and Spectrum-X800 will deliver the highest level of networking performance for AI infrastructures.
  • Supermicro will also showcase two rack-level solutions, including a concept rack with systems featuring the upcoming NVIDIA GB200 with 72 liquid-cooled GPUs interconnected with fifth-generation NVLink.

Vultr Expands Footprint with New NVIDIA Cloud GPU Capacity Using Clean, Renewable, Hydropower in Sabey Data Centers

Retrieved on: 
Tuesday, March 5, 2024

Vultr, the world’s largest privately-held cloud computing platform, today announced the expansion of its Seattle cloud data center region at Sabey Data Centers’ SDC Columbia location.

Key Points: 
  • Vultr, the world’s largest privately-held cloud computing platform, today announced the expansion of its Seattle cloud data center region at Sabey Data Centers’ SDC Columbia location.
  • Vultr’s expansion includes a significant new inventory of NVIDIA HGX H100 GPU clusters, available both on demand and through reserved instance contracts.
  • Sabey, one of the largest privately-owned multi-tenant data center operators in the U.S., builds and maintains energy-efficient data centers with the goal of reaching net-zero carbon emissions by 2029.
  • For more information about Vultr's cloud computing solutions and cloud data center locations, visit https://www.vultr.com/products/cloud-gpu/.