AWS Announces Amazon EC2 Capacity Blocks for ML Workloads
Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), today announced the general availability of Amazon Elastic Compute Cloud (EC2) Capacity Blocks for ML, an industry-first consumption model that enables any customer to access highly sought-after GPU compute capacity to run their short-duration machine learning (ML) workloads.
- With EC2 Capacity Blocks, customers can reserve hundreds of NVIDIA GPUs colocated in Amazon EC2 UltraClusters designed for high-performance ML workloads.
- EC2 Capacity Blocks help ensure customers have reliable, predictable, and uninterrupted access to the GPU compute capacity required for their critical ML projects.
- With EC2 Capacity Blocks, customers can reserve only the amount of GPU capacity they need, for short durations, to run their ML workloads, eliminating the need to hold onto GPU capacity when it is not in use.