Tachyum Demonstrates Full BF16 AI Support in GCC and PyTorch
BF16, or bfloat16, is a 16-bit floating-point format obtained by truncating the IEEE 754 32-bit single-precision type (FP32): it keeps the sign bit and the full 8-bit exponent but only the top 7 mantissa bits, preserving FP32's dynamic range at reduced precision.
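The truncation can be shown in a few lines of Python. This is a generic sketch of the FP32-to-BF16 conversion (with round-to-nearest-even on the dropped bits), not Tachyum's hardware implementation:

```python
import struct


def fp32_to_bf16_bits(x: float) -> int:
    """Truncate an FP32 value to a BF16 bit pattern: keep the sign bit,
    the full 8-bit exponent, and the top 7 mantissa bits.
    Rounds to nearest even on the 16 discarded mantissa bits.
    (Does not special-case NaN/Inf; a production converter would.)"""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    rounded = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFFFFFF
    return rounded >> 16


def bf16_bits_to_fp32(b: int) -> float:
    """Widen a BF16 bit pattern back to FP32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]
```

Because the exponent field is untouched, BF16 covers the same range as FP32 (roughly 1e-38 to 3e38); only the precision drops, to about 2-3 significant decimal digits.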
- Tachyum now fully supports BF16 in GCC 13.2 (GNU Compiler Collection), in the Eigen HPC/linear-algebra library optimized for the Prodigy Universal Processor, and in the PyTorch AI framework.
- This built-in support delivers high performance for AI training and inference workloads and reduces memory utilization, since BF16 values occupy half the storage of FP32.
- The demonstrated ResNet model was quantized to the BF16 data type to take advantage of Prodigy's BF16 vector instructions, particularly in activation, loss, and reduction functions.
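In PyTorch, casting a model and its inputs to BF16 for inference is a one-line change. The small `nn.Sequential` below is a hypothetical stand-in for the demonstrated ResNet (the same cast applies to any FP32 model); this sketches the framework-level usage, not Tachyum's specific demo code:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the demonstrated ResNet; any FP32 model
# can be cast to bfloat16 the same way.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# Cast the weights to bfloat16 and switch to inference mode; with
# bf16 inputs, activations and reductions also run in bfloat16.
bf16_model = model.to(torch.bfloat16).eval()

with torch.no_grad():
    x = torch.randn(2, 8, dtype=torch.bfloat16)
    y = bf16_model(x)
```

On hardware with native BF16 vector instructions, such as Prodigy, these bf16 tensor operations map directly onto the wide vector units instead of being emulated in FP32.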