NVIDIA GPU

Smarter AI for All: Lenovo Unveils Hybrid AI Solutions that Deliver the Power of Tailored Generative AI to Every Enterprise and Cloud in Collaboration with NVIDIA

Retrieved on: 
Monday, March 18, 2024

Today, at NVIDIA GTC, a global AI conference, Lenovo announced new hybrid AI solutions, built in collaboration with NVIDIA, that deliver the power of tailored generative AI applications to every enterprise and cloud, bringing transformational capabilities to every industry.

Key Points: 
  • Lenovo ThinkSystem AI servers with NVIDIA B200 Tensor Core GPUs are set to power the new era of generative AI.
  • In close collaboration with NVIDIA, Lenovo will deliver GB200 rack systems that supercharge AI training, data processing, engineering design and simulation.
  • New Fast Start Generative AI services with NVIDIA - helps customers leverage powerful data insights and achieve a competitive advantage with generative AI.

OpenStack Caracal Delivers Substantial New Capabilities as OpenStack Demand Skyrockets, Driven by AI Workloads and Users Seeking VMware Alternatives

Retrieved on: 
Wednesday, April 3, 2024

AUSTIN, Texas, April 3, 2024 /PRNewswire-PRWeb/ -- The OpenStack community today released Caracal ('keh•ruh•kal), the 29th version of the world's most widely deployed open source cloud infrastructure software. OpenStack is deployed globally by organizations of all sizes and across many industries, with more than 45 million cores in production. Recently, OpenStack has seen increasing demand among users hosting demanding artificial intelligence (AI) and high-performance computing (HPC) workloads, as well as among users who want to run virtualized workloads at massive scale while avoiding the vendor lock-in of proprietary solutions.

Key Points: 
  • Currently, the big drivers of OpenStack demand are AI workloads and VMware users looking for alternative virtualization solutions, and the Caracal release includes improvements that will help in both of those areas.
  • OpenStack enables users to make great leaps in productivity through its support of AI and HPC workloads.
  • For example, Nova now supports vGPU live migrations, a big win for hardware enablement and accelerated workloads.
  • OpenStack Caracal makes several improvements in agility and performance, including the following:
    Designate now supports Catalog Zones (RFC 9432).

Luxcore's Groundbreaking AI Technology Set to Disrupt Industries Worldwide

Retrieved on: 
Thursday, March 28, 2024

ATLANTA, March 28, 2024 /PRNewswire-PRWeb/ -- Luxcore, an early-stage AI cloud computing startup, is thrilled to announce its partnerships with IBM and NVIDIA. This collaboration is set to integrate a comprehensive AI and ML stack on its decentralized and distributed cloud platform, a move that will revolutionize the way medium enterprise customers operate. By leveraging the power of artificial intelligence and machine learning, these customers can now embark on generative AI tasks, enhanced automation, informed decision-making, and the discovery of new business opportunities.

Key Points: 
  • These Partnerships Integrate Full AI and ML Stacks for Enterprise Customers, Leveraging IBM Watsonx.ai, IBM Cloud, and NVIDIA Technologies.
  • Its unique features are set to disrupt the industry and open new possibilities for its customers.

SUSE Strengthens Container Management Portfolio to Help Platform Engineering Teams Manage at Scale, Support AI/ML Workloads

Retrieved on: 
Tuesday, March 19, 2024

"At SUSE, our commercial and open source users are equally important," said Peter Smails, general manager of the SUSE Enterprise Container Management business unit.

Key Points: 
  • New capabilities in Rancher Prime 3.0 help platform engineering teams deliver self-service Platform-as-a-Service (PaaS) to their developer communities and provide enhanced support for AI workloads.
  • SUSE is also introducing Rancher Enterprise, a single package and price for the entire portfolio of Rancher Prime including multi-cluster management, OS management, VM management, persistent storage, and SUSE's certified Linux OS, SUSE Linux Enterprise Micro.
  • SUSE continues to invest in open source innovation across its entire cloud native portfolio to support its large community of users.

Dell Offers Complete NVIDIA-Powered AI Factory Solutions to Help Global Enterprises Accelerate AI Adoption

Retrieved on: 
Monday, March 18, 2024

SAN JOSE, Calif., March 18, 2024 /PRNewswire/ -- NVIDIA GTC 2024

Key Points: 
  • By expanding the Dell Generative AI Solutions portfolio, including with the new Dell AI Factory with NVIDIA, organizations can accelerate integration of their data, AI tools and on-premises infrastructure to maximize their generative AI (GenAI) investments.
  • Customers can also take advantage of enterprise-grade professional services that help organizations accelerate their strategy, data preparation, implementation and adoption of the AI Factory, advancing AI capabilities.
  • Dell Generative AI Solutions with NVIDIA – Retrieval-Augmented Generation (RAG) leverages new microservices in NVIDIA AI Enterprise to offer a pre-validated, full-stack solution to speed enterprise AI adoption with RAG.
  • Dell Generative AI Solutions with NVIDIA – Model Training will be available globally through traditional channels and Dell APEX in April 2024.
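The RAG approach mentioned above pairs a retrieval step with a generator: relevant documents are fetched and prepended to the prompt so the model answers from enterprise data. A minimal, library-free sketch of the retrieval step follows; all names and documents are illustrative, not Dell's or NVIDIA's actual implementation, and production stacks use embedding models and a vector database rather than word overlap.

```python
# Toy retrieval step of a RAG pipeline: score documents against a query
# with bag-of-words overlap, then prepend the best match to the prompt.
# Illustrative only -- real deployments use embedding models and a
# vector store, as in the NVIDIA AI Enterprise microservices.

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the query with retrieved context before generation."""
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

docs = [
    "Dell AI Factory integrates servers, storage and NVIDIA GPUs.",
    "RAG grounds model answers in retrieved enterprise documents.",
]
print(build_prompt("How does RAG ground answers?", docs))
```

The augmented prompt is then handed to the language model, which answers from the supplied context instead of relying only on its training data.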

Luminary Cloud Emerges from Stealth, Empowering R&D with Realtime Engineering

Retrieved on: 
Wednesday, March 13, 2024

SAN MATEO, Calif., March 13, 2024 /PRNewswire/ -- Luminary Cloud, a pioneer in realtime engineering, today announced its official launch out of stealth. A computer-aided engineering (CAE) SaaS platform, Luminary empowers smarter and faster design cycles, allowing engineers to develop better products in a fraction of the time. Backed by Sutter Hill Ventures, which led its $115 million funding, Luminary's customers span industries including aerospace and defense, automotive, sporting goods, industrial equipment, and more.

Key Points: 
  • A computer-aided engineering (CAE) SaaS platform, Luminary empowers smarter and faster design cycles, allowing engineers to develop better products in a fraction of the time.
  • "While software engineering has become more agile thanks to advances in cloud technologies, physical engineering hasn't kept pace, despite increasing pressure to deliver advanced products faster and more efficiently," said Jason Lango, co-founder and CEO of Luminary Cloud.
  • With Luminary, engineering teams save months in their R&D schedules, drastically reducing product testing costs and reducing risk in the process.
  • Luminary has launched their realtime engineering platform for enterprise consumption.

Mirantis OpenStack for Kubernetes Improves Support for AI and Windows Workloads

Retrieved on: 
Tuesday, March 5, 2024

Mirantis today announced the latest release of Mirantis OpenStack for Kubernetes (MOSK).

Key Points: 
  • MOSK 24.1 is built on OpenStack Antelope – the first OpenStack release designated as SLURP (Skip Level Upgrade Release Process).
  • Users of MOSK 24.1 running OpenStack Yoga will be able to upgrade directly to Antelope, skipping the intermediate OpenStack Zed.
  • Mirantis offers a TCO calculator that will provide an approximation of how much can be saved by moving to Mirantis OpenStack for Kubernetes from other infrastructure, such as VMware.
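The kind of comparison such a TCO calculator performs can be illustrated with a toy model: sum licensing and operations costs over a planning horizon for each platform and take the difference. All figures and cost categories below are invented placeholders, not Mirantis's actual methodology or pricing.

```python
# Toy total-cost-of-ownership comparison over a planning horizon.
# Figures are invented placeholders, not output of Mirantis's calculator.

def tco(license_per_core: float, cores: int, ops_per_year: float,
        years: int) -> float:
    """Licensing plus operations cost over the planning horizon."""
    return (license_per_core * cores + ops_per_year) * years

# Hypothetical incumbent vs. MOSK-style deployment, 1,000 cores, 3 years.
incumbent = tco(license_per_core=150.0, cores=1000,
                ops_per_year=200_000, years=3)
mosk_like = tco(license_per_core=90.0, cores=1000,
                ops_per_year=180_000, years=3)
savings = incumbent - mosk_like
print(f"Estimated 3-year savings: ${savings:,.0f}")
```

A real calculator would also weigh migration costs, hardware refresh cycles, and support tiers, which this sketch omits.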

Vultr Announces Addition of NVIDIA GH200 Grace Hopper Superchip to Its Cloud GPU Offerings for AI Training and Inference

Retrieved on: 
Monday, November 13, 2023

Today, Vultr, the world’s largest privately-held cloud computing platform, announced the addition of the NVIDIA® GH200 Grace Hopper™ Superchip to its Cloud GPU offering to accelerate AI training and inference across Vultr’s 32 cloud data center locations.

Key Points: 
  • Following the launch of its first-of-its-kind GPU Stack and Container Registry, Vultr is providing cloud access to the NVIDIA GH200 Grace Hopper Superchip.
  • “The NVIDIA GH200 Grace Hopper Superchip delivers unrivaled performance and TCO for scaling out AI inference.”
  • The NVIDIA GH200 Grace Hopper Superchip brings the new NVIDIA NVLink®-C2C to connect NVIDIA Grace™ CPUs with NVIDIA Hopper™ GPUs, delivering 7X higher aggregate memory bandwidth to the GPU compared to today’s fastest servers with PCIe Gen 5.
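The 7X figure in the last bullet can be sanity-checked with back-of-the-envelope numbers: NVLink-C2C's published peak of 900 GB/s against a PCIe Gen 5 x16 link at 128 GB/s bidirectional. The exact baseline NVIDIA compares against is an assumption here.

```python
# Back-of-the-envelope check of the ~7X bandwidth claim.
# NVLink-C2C peak: 900 GB/s (published figure); PCIe Gen 5 x16:
# 128 GB/s bidirectional. The comparison baseline is assumed.
nvlink_c2c_gbps = 900.0
pcie_gen5_x16_gbps = 128.0
ratio = nvlink_c2c_gbps / pcie_gen5_x16_gbps
print(f"NVLink-C2C vs PCIe Gen 5 x16: {ratio:.1f}x")  # ~7.0x
```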

Companies Gaining Competitive Advantage Through Deploying Private AI Infrastructure at Equinix

Retrieved on: 
Wednesday, December 13, 2023

REDWOOD CITY, Calif., Dec. 13, 2023 /PRNewswire/ -- After working with customers on a variety of AI deployments over the past several years, Equinix, Inc. (Nasdaq: EQIX), the world's digital infrastructure company®, is gaining traction as a preferred location for deploying private AI infrastructure. Both enterprises and service providers are finding that Platform Equinix's cloud adjacency, global reach, robust ecosystems and low-latency interconnection to the world's networks are critical components in private AI infrastructure.

Key Points: 
  • Deploying private AI infrastructure at Equinix enables digital leaders to leverage public models while ensuring that an enterprise's most valuable business data does not enter the public domain.
  • Private AI offers risk protection while delivering the benefits of trained AI models, enabling businesses to realize the full potential of AI.
  • Enterprises are directly utilizing Equinix for deploying their private AI infrastructure, and service providers are utilizing Equinix to provide private AI services for their customers.
  • "These same needs propelled the widespread adoption of private cloud years ago and are now driving demand for private AI."