New MLPerf Training Benchmark Results Highlight Hardware and Software Innovations in AI Systems
The MLPerf Training benchmark suite comprises full system tests that stress machine learning (ML) models, software, and hardware for a broad range of applications.
- The Training v4.0 results demonstrate broad industry participation and showcase substantial performance gains in ML systems and software.
- "We hope the addition of a GNN-based benchmark in MLPerf Training broadens the challenges offered by the suite and spurs software and hardware innovations for this critical class of workload," said Ritika Borkar, MLPerf Training working group co-chair.
- To view the full results for MLPerf Training v4.0 and find additional information about the benchmarks, please visit the Training benchmark page.