GigaIO Introduces New Scalability for AI Workloads with FabreX™ 2.2 for Dynamically Configured Rack-scale Architectures
This new release introduces an industry first in scalability over a PCIe fabric for AI workloads by enabling the creation of composable GigaPods™ and GigaClusters™ with cascaded and interlinked switches.
- Leaf and spine, dragonfly, and other scale-out topologies are fully supported.

"The GigaIO FabreX environment with Intel Optane SSDs is enabling scalable performance with significantly lower latency than other options for NVMe."
- The company's patented network technology optimizes cluster and rack system performance, and greatly reduces total cost of ownership.