Let me give you one example of what Safra is describing: we got enough Nvidia GPUs for Elon Musk's company, xAI, to bring up the first available version of their large language model, called Grok. They got that up and running. But they wanted a lot more -- a lot more GPUs than we gave them. We gave them quite a few, but they wanted more, and we're in the process of getting them more.[5]
So, on the demand side, we got that up pretty quickly, and they were able to use it, but they want dramatically more, as there's a gold rush toward building the world's greatest large language model. We are doing our best to give our customers what we can this quarter, and then dramatically increase our ability to give them more and more capacity each succeeding quarter.[5:1]
While training the extremely large Grok model, Colossus achieves unprecedented network performance. Across all three tiers of the network fabric, the system has experienced zero application latency degradation or packet loss due to flow collisions. It has maintained 95% data throughput enabled by Spectrum-X congestion control.
This level of performance cannot be achieved at scale with standard Ethernet, which creates thousands of flow collisions while delivering only 60% data throughput.
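To make the throughput gap concrete, here is a minimal sketch of what those utilization figures imply for usable bandwidth. The 95% and 60% figures come from the text above; the 400 Gb/s per-port line rate is an assumption for illustration only, not a number from the source.

```python
# Illustrative only: effective throughput at the utilization figures quoted
# above, assuming a hypothetical 400 Gb/s link (not a figure from the source).

LINK_GBPS = 400.0  # assumed per-port line rate


def effective_gbps(line_rate_gbps: float, utilization: float) -> float:
    """Usable bandwidth after congestion-induced losses and retransmits."""
    return line_rate_gbps * utilization


spectrum_x = effective_gbps(LINK_GBPS, 0.95)    # congestion-controlled fabric
standard_eth = effective_gbps(LINK_GBPS, 0.60)  # fabric with flow collisions

print(f"Spectrum-X:        {spectrum_x:.0f} Gb/s")
print(f"Standard Ethernet: {standard_eth:.0f} Gb/s")
print(f"Advantage:         {spectrum_x / standard_eth:.2f}x")  # ~1.58x
```

Whatever the actual link speed, the ratio is the same: at 95% versus 60% utilization, the congestion-controlled fabric delivers roughly 1.58 times the usable bandwidth per link, which compounds across every link in a multi-tier training fabric.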