You realize you can get point-to-point Ethernet circuits with a fixed speed/fiber path, right? I have 100G between my data centers, and QoS/buffering is something I don't need to worry about on my network. Some of these paths are dark fiber and can easily be upgraded to 400G when the time comes.
I do, of course, know that a channel's characteristics can be changed. I'd like to assume the question isn't meant to be inflammatory, but it's such a basic point that I struggle to read it any other way.
For what it's worth, at the time you could not get 100 Gbps links, but that's unrelated to the fundamental queuing problem and how it is solved in packet-switched networks: by dropping packets in best-effort systems such as IP/Ethernet, and by managing credits/counters in guaranteed-bandwidth systems such as Fibre Channel/ATM.
You will still have packet loss if you try to send 600 Gbps into a channel whose capacity is fixed at 400 Gbps. There are no infinite-capacity channels.
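To make that concrete, here's a toy sketch of my own (not any real switch's algorithm, and the buffer size is an arbitrary illustrative number): a tail-drop queue on a fixed-rate link. However deep the buffer, once it fills, everything offered above line rate is lost, at exactly the rate of the excess.

```python
def simulate(offered_gbit, link_gbit, buffer_gbit, ticks):
    """Tail-drop queue on a fixed-capacity link, one second per tick.

    Each tick, offered_gbit arrives, the link drains link_gbit, and
    anything that no longer fits in the buffer is dropped.
    """
    queued = dropped = 0.0
    drops_per_tick = []
    for _ in range(ticks):
        queued += offered_gbit             # arrivals this tick
        queued -= min(queued, link_gbit)   # link drains at line rate
        if queued > buffer_gbit:           # buffer full: tail drop
            drops_per_tick.append(queued - buffer_gbit)
            dropped += queued - buffer_gbit
            queued = buffer_gbit
        else:
            drops_per_tick.append(0.0)
    return dropped, drops_per_tick

# 600 Gbps offered into a 400 Gbps link with a 50 Gbit buffer:
# once the buffer is full, steady-state loss is the 200 Gbit/s excess.
total, per_tick = simulate(600, 400, 50, 10)
print(total, per_tick[-1])
```

The buffer only delays the inevitable: it absorbs the first tick's burst, and from then on the drop rate settles at offered minus capacity.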
Since you likely have more than 400 systems connected at 1 Gbps in each data center, that 400 Gbps link is oversubscribed, and if every node sends 1 Gbps at once you will have packet loss. The value of each packet is probably not equal, so it may make sense to do some kind of QoS for those scenarios (or not; I certainly don't have enough information to say). This is a problem you can have at any time. If you are in data center operations, you'll do capacity planning to mitigate it, but it remains a problem (eliminating it entirely means provisioning every path for worst-case load, which is wasteful and bad engineering in most circumstances).
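The arithmetic is simple enough to write down (the host count here is hypothetical, just mirroring the "more than 400 systems at 1 Gbps" point above):

```python
# Back-of-the-envelope oversubscription check; all numbers are
# illustrative assumptions, not measurements of any real network.
hosts = 500          # edge ports at 1 Gbps each (hypothetical)
host_gbps = 1
uplink_gbps = 400    # shared inter-DC link capacity

oversubscription = hosts * host_gbps / uplink_gbps  # ratio of edge to core
excess_gbps = hosts * host_gbps - uplink_gbps       # worst-case overload

print(f"{oversubscription:.2f}:1 oversubscribed; "
      f"worst case {excess_gbps} Gbps must queue or drop")
```

At 500 ports that's a modest 1.25:1 ratio, and real designs often run far higher, betting that not everyone transmits at line rate simultaneously.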
The point of the essay wasn't to describe the scale of a data center; it was to talk about packet loss in a network whose queues were designed around best-effort delivery, and what that means for users of the network.