Groq® LPU™ Inference Engine Leads in First Independent LLM Benchmark
MOUNTAIN VIEW, Calif., Feb. 13, 2024 /PRNewswire/ -- Groq®, a generative AI solutions company, is the clear winner in the latest large language model (LLM) benchmark by ArtificialAnalysis.ai, besting eight top cloud providers in key performance indicators including Latency vs. Throughput, Throughput over Time, Total Response Time, and Throughput Variance.
- The Groq LPU™ Inference Engine performed so well with a leading open-source LLM from Meta AI, Llama 2 70b, that axes had to be extended to plot Groq on the Latency vs. Throughput chart.
- Groq participated in its first public LLM benchmark in January 2024 with competition-crushing results.
- "Inference is critical to achieving that goal because speed is what turns developers' ideas into business solutions and life-changing applications," said Jonathan Ross, CEO and founder of Groq.
- The LPU Inference Engine is available through the Groq API.
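Since the release states only that the LPU Inference Engine is reachable through the Groq API, the following is a minimal sketch of how a developer might call it. The endpoint URL and the model identifier `llama2-70b-4096` are assumptions (the Groq API follows the OpenAI-compatible chat-completions convention, but the release does not specify these details), and the request is only sent if a `GROQ_API_KEY` environment variable is set.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; not specified in the press release.
API_URL = "https://api.groq.com/openai/v1/chat/completions"


def build_request(prompt: str, model: str = "llama2-70b-4096") -> dict:
    """Assemble the JSON payload for a chat-completion request.

    The model name is an assumption based on Groq's Llama 2 70b offering.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


payload = build_request("Summarize the LPU Inference Engine in one sentence.")

# Only attempt the network call if an API key is configured.
api_key = os.environ.get("GROQ_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # Print the first completion returned by the API.
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Without a key the script simply builds the payload and exits, which makes the request shape easy to inspect before wiring in credentials.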