Deep learning super sampling

Supermicro Drives Advanced AI Capabilities to Edge Computing Environments with New Industry-Leading System Portfolio

Retrieved on: 
Tuesday, February 20, 2024

SAN JOSE, Calif., Feb. 20, 2024 /PRNewswire/ -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Manufacturer for AI, Cloud, Storage, and 5G/Edge, is expanding its portfolio of AI solutions, allowing customers to leverage the power and capability of AI in edge locations, such as public spaces, retail stores, or industrial infrastructure. Using Supermicro application-optimized servers with NVIDIA GPUs makes it easier to fine-tune pre-trained models and to deploy AI inference solutions at the edge where the data is generated, improving response times and decision-making.

Key Points: 
  • "Supermicro has the broadest portfolio of Edge AI solutions, capable of supporting pre-trained models for our customers' edge environments," said Charles Liang, president and CEO of Supermicro.
  • "Supermicro continues to provide the industry with optimized solutions as enterprises build a competitive advantage by processing AI data at their edge locations."
  • "The new Supermicro NVIDIA-Certified Systems, powered by the NVIDIA AI platform, are built to deliver the highest-performing accelerated computing infrastructure, as well as NVIDIA AI Enterprise software to help run edge AI workloads."
  • These GPUs give the Supermicro Hyper-E sufficient computing power to process AI workloads at edge environments where data is collected, analyzed, and stored.
Lambda Raises $320M to Build a GPU Cloud for AI

Retrieved on: 
Thursday, February 15, 2024

The company will use this new equity financing to expand its AI cloud business, including Lambda’s popular on-demand and reserved cloud offerings.

Key Points: 
  • The company will use this new equity financing to expand its AI cloud business, including Lambda’s popular on-demand and reserved cloud offerings.
  • Founded in 2012, Lambda has over a decade of experience building AI infrastructure at scale and has amassed over 100,000 customer sign-ups on Lambda Cloud.
  • "Lambda is addressing a critical market need for accessible, affordable, highly-performant cloud infrastructure designed specifically for AI workloads.
  • Our ongoing partnership with Lambda expands customer choice and paves the way forward in state-of-the-art price-performance in AI."

OSS Ships New Gen 5 AI Edge Compute Accelerator

Retrieved on: 
Wednesday, February 7, 2024

ESCONDIDO, Calif., Feb. 07, 2024 (GLOBE NEWSWIRE) -- One Stop Systems, Inc. (Nasdaq: OSS), a leader in AI Transportable solutions at the edge, has begun shipping its latest Gen 5 4U Pro Accelerator System to a large composable infrastructure provider.

Key Points: 
  • ESCONDIDO, Calif., Feb. 07, 2024 (GLOBE NEWSWIRE) -- One Stop Systems, Inc. (Nasdaq: OSS), a leader in AI Transportable solutions at the edge, has begun shipping its latest Gen 5 4U Pro Accelerator System to a large composable infrastructure provider.
  • OSS expects shipments of this compute accelerator to the customer to total between $4 million and $6 million over the next three years.
  • For AI workflows at the edge, this latest Gen 5 4U Pro Accelerator delivers twice the interconnect bandwidth performance over Gen 4.
  • The accelerator also includes upgraded power and cooling to support multiple NVIDIA H100 Tensor Core GPUs with PCIe Gen 5, resulting in 4.8x the AI inference performance using FP8 precision compared to the previous generation.

Cisco and NVIDIA to Help Enterprises Quickly and Easily Deploy and Manage Secure AI Infrastructure

Retrieved on: 
Tuesday, February 6, 2024

AMSTERDAM, Feb. 6, 2024 /PRNewswire/ -- CISCO LIVE EMEA -- Cisco and NVIDIA today announced plans to deliver AI infrastructure solutions for the data center that are easy to deploy and manage, enabling the massive computing power that enterprises need to succeed in the AI era.

Key Points: 
  • Companies to offer enterprises simplified cloud-based and on-premises AI infrastructure, networking and software, including infrastructure management, secure AI infrastructure, observable end-to-end AI solutions and access to NVIDIA AI Enterprise software that supports the building and deployment of advanced AI and generative AI workloads.
  • AMSTERDAM, Feb. 6, 2024 /PRNewswire/ -- CISCO LIVE EMEA -- Cisco and NVIDIA today announced plans to deliver AI infrastructure solutions for the data center that are easy to deploy and manage, enabling the massive computing power that enterprises need to succeed in the AI era.
  • "Strengthening our great partnership with NVIDIA is going to arm enterprises with the technology and the expertise they need to build, deploy, manage, and secure AI solutions at scale."
  • Supporting Cisco Networking Cloud: Cisco simplified AI infrastructure management and operations through both on-premises and cloud-based management with Cisco Nexus Dashboard and Cisco Intersight.
DigitalOcean Announces Availability of NVIDIA H100 GPUs on Paperspace Platform, Expanding Access to AI Compute for Startups and Growing Digital Businesses

Retrieved on: 
Thursday, January 18, 2024

DigitalOcean Holdings, Inc. (NYSE: DOCN), the developer cloud optimized for startups and growing digital businesses, today announced virtualized availability of NVIDIA H100 Tensor Core GPUs on its Paperspace platform.

Key Points: 
  • DigitalOcean Holdings, Inc. (NYSE: DOCN), the developer cloud optimized for startups and growing digital businesses, today announced virtualized availability of NVIDIA H100 Tensor Core GPUs on its Paperspace platform.
  • This provides startups and growing digital businesses with state-of-the-art infrastructure crucial for developing the next generation of artificial intelligence/machine learning (AI/ML) applications.
  • The surge in interest for accelerated AI computing from businesses looking to elevate their capabilities in AI/ML has fueled the demand for NVIDIA H100 GPUs.
  • “While many vendors are optimizing their offerings to serve large enterprises, DigitalOcean is proud to offer startups and growing digital businesses reliable and flexible access to NVIDIA H100 GPUs,” said Kanishka Roychoudhury, GM of AI/ML at DigitalOcean.

Google Cloud and Hugging Face Announce Strategic Partnership to Accelerate Generative AI and ML Development

Retrieved on: 
Thursday, January 25, 2024

SUNNYVALE, Calif., Jan. 25, 2024 /PRNewswire/ -- Google Cloud and Hugging Face today announced a new strategic partnership that will allow developers to utilize Google Cloud's infrastructure for all Hugging Face services, and will enable training and serving of Hugging Face models on Google Cloud.

Key Points: 
  • Developers will be able to train, tune, and serve open models quickly and cost-effectively on Google Cloud.
  • SUNNYVALE, Calif., Jan. 25, 2024 /PRNewswire/ -- Google Cloud and Hugging Face today announced a new strategic partnership that will allow developers to utilize Google Cloud's infrastructure for all Hugging Face services, and will enable training and serving of Hugging Face models on Google Cloud.
  • The partnership advances Hugging Face's mission to democratize AI and furthers Google Cloud's support for open source AI ecosystem development.
  • With this partnership, Google Cloud becomes a strategic cloud partner for Hugging Face, and a preferred destination for Hugging Face training and inference workloads.
  • "Google Cloud and Hugging Face share a vision for making generative AI more accessible and impactful for developers," said Thomas Kurian, CEO at Google Cloud.

AI and Semiconductors Spearhead Surge in Server GPU Market with Estimated Growth to $61.7 Billion by 2028

Retrieved on: 
Wednesday, January 24, 2024

The global AI and semiconductor server GPU market accounted for $15.4 billion in 2023 and is expected to grow at a CAGR of 31.99%, reaching $61.7 billion by 2028.

Key Points: 
  • The global AI and semiconductor server GPU market accounted for $15.4 billion in 2023 and is expected to grow at a CAGR of 31.99%, reaching $61.7 billion by 2028.
  • A key element of AI and ML is the training of sophisticated neural networks, which is accelerated in large part by GPU servers.
  • The end-use application segment is part of the broader application segment of the worldwide AI and semiconductor server GPU market.
  • GPU servers can offload certain computations from conventional CPUs to GPUs, which improves overall performance and reduces energy consumption.
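The headline figures above are internally consistent; a quick sanity check of the compound-growth arithmetic (the variable names below are illustrative, not from the report):

```python
# Sanity check of the reported server GPU market figures:
# $15.4B in 2023 growing at a 31.99% CAGR over 2023 -> 2028.
base_2023 = 15.4   # market size in 2023, in $ billions
cagr = 0.3199      # reported compound annual growth rate
years = 2028 - 2023

# Compounding the 2023 base forward five years
projected_2028 = base_2023 * (1 + cagr) ** years
print(f"Projected 2028 market: ${projected_2028:.1f}B")  # ≈ $61.7B

# Inverting: the CAGR implied by growing $15.4B to $61.7B in five years
implied_cagr = (61.7 / 15.4) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.2%}")  # ≈ 31.98%
```

Both directions agree with the reported numbers to rounding precision.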

WEKA Achieves NVIDIA DGX BasePOD Certification

Retrieved on: 
Wednesday, January 10, 2024

CAMPBELL, Calif., Jan. 10, 2024 /PRNewswire/ -- WekaIO (WEKA), the data platform software provider for AI, announced today that it has received certification for an NVIDIA DGX BasePOD™ reference architecture built on NVIDIA DGX H100 systems and the WEKA Data® Platform. This rack-dense architecture delivers massive data storage throughput starting at 600GB/s and 22M IOPs in 8 rack units to optimize the DGX H100 systems.

Key Points: 
  • With NVIDIA DGX BasePOD Certification Complete, WEKA Sets Its Sights on NVIDIA DGX SuperPOD Certification
  • CAMPBELL, Calif., Jan. 10, 2024 /PRNewswire/ -- WekaIO (WEKA), the data platform software provider for AI, announced today that it has received certification for an NVIDIA DGX BasePOD™ reference architecture built on NVIDIA DGX H100 systems and the WEKA Data® Platform.
  • The WEKA Data Platform provides the critical data infrastructure foundation required to support next-generation, performance-intensive workloads like generative AI model training and inference at scale.
  • Key benefits of the new WEKA with NVIDIA DGX BasePOD reference architecture include:
    Extreme Performance for the Most Demanding AI Workloads: Delivers 10x the bandwidth and 6x more IOPs than the previous WEKA with NVIDIA DGX BasePOD configuration based on NVIDIA DGX A100 systems.
  • "With our DGX BasePOD certification completed, our DGX SuperPOD certification is now in progress," said Nilesh Patel, chief product officer at WEKA.