Nvidia shrinks GPUs to help squeeze AI into your data center, make its VMware friendship work
Creates two new mini models because it’s assumed you won’t build silos to host huge hot monsters
GTC Nvidia has created a pair of small data-center-friendly GPUs because it doesn’t think customers will get into AI acceleration unless they can use the servers they already operate.
The new models – the A10 and A30 – require one and two full-height, full-length PCIe slots, respectively. Both employ the same Ampere architecture Nvidia uses in its other graphics processors, but both are considerably smaller than the company's flagship GPUs. That matters in the context of the recently launched AI Enterprise bundle, which Nvidia packages exclusively for VMware's vSphere.