NVIDIA announced the H100 to enable companies to slash costs for deploying AI, "delivering the same AI performance with 3.5x more energy efficiency and 3x lower total cost of ownership, while using 5x fewer server nodes over the previous generation."
What in the product itself backs up this claim?
- A finer manufacturing process shrinks the transistors on the die, reducing the energy each switching operation requires and thus the power needed to run the chip
- Innovations such as the new FP8 data format (8 bits per value) allow more calculations for the same power draw, saving both time and energy
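To give an intuition for why a smaller data format matters, here is a minimal sketch of the raw memory footprint of model weights at different precisions. The parameter count (7 billion) is a hypothetical example, not tied to any specific model; real deployments also keep activations, optimizer state, and some layers in higher precision.

```python
# Bytes needed to store one value in each floating-point format
BYTES_PER_PARAM = {"FP32": 4, "FP16": 2, "FP8": 1}

def weight_memory_gb(n_params: int, fmt: str) -> float:
    """Raw storage for n_params weights in the given format, in gigabytes."""
    return n_params * BYTES_PER_PARAM[fmt] / 1e9

n = 7_000_000_000  # hypothetical 7B-parameter model
for fmt in ("FP32", "FP16", "FP8"):
    print(f"{fmt}: {weight_memory_gb(n, fmt):.0f} GB")
```

Halving the bytes per value also halves the memory traffic per operation, which is where much of the energy saving comes from: moving data costs more energy than the arithmetic itself.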
In addition, at Scaleway we decided to host our H100 PCIe Instances in DC5, our adiabatic (evaporative-cooling) data center. With a PUE (Power Usage Effectiveness) of 1.15 (the industry average is around 1.6), DC5 uses between 30% and 50% less electricity than a conventional data center.
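A quick sketch of what those PUE figures mean in practice. PUE is total facility energy divided by IT equipment energy, so everything above 1.0 is overhead (cooling, power delivery). The 1,000 kWh IT load below is a made-up illustrative figure.

```python
def facility_overhead_kwh(it_load_kwh: float, pue: float) -> float:
    """Non-IT energy (cooling, power conversion, etc.) implied by a PUE figure."""
    return it_load_kwh * (pue - 1.0)

it_load = 1000.0  # hypothetical IT load in kWh
for pue in (1.6, 1.15):
    total = it_load * pue
    overhead = facility_overhead_kwh(it_load, pue)
    print(f"PUE {pue}: total {total:.0f} kWh, of which {overhead:.0f} kWh is overhead")
```

At a PUE of 1.15 the cooling and power-delivery overhead drops from 600 kWh to 150 kWh for the same IT load, roughly a quarter of what a PUE-1.6 facility would burn on overhead alone.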
Stay tuned for our benchmarks on the topic!