MLPerf Results Show Rapid AI Performance Gains
The MLPerf Training benchmark suite comprises full system tests that stress machine learning models, software, and hardware for a broad range of applications.
- The open-source and peer-reviewed benchmark suite provides a level playing field for competition that drives innovation, performance, and energy efficiency for the entire industry.
- One of the new benchmarks is a large language model (LLM) using the GPT-3 reference model, reflecting the rapid adoption of generative AI.
- “And the combined effect of software and hardware performance improvements is 1000-fold in some areas compared to our initial reference benchmark results, which shows the pace at which innovation is happening in the field.”
To view the results for MLPerf Training v3.0 and MLPerf Tiny v1.1, and to find additional information about the benchmarks, please visit: