NVIDIA unveils A100 AI chip with 54 billion transistors
NVIDIA CEO Jensen Huang has unveiled the company’s A100 AI chip, built for supercomputing tasks.
The A100 comprises 54 billion transistors and delivers up to 5 petaflops of performance, around 20 times more than the previous-generation Volta chips. It’s built on the Ampere architecture, which could also make its way to consumer-grade GPUs in the upcoming RTX 3000 series.
The DGX A100 is NVIDIA’s third-generation DGX AI platform, built around the A100 chip. Huang said, “it essentially puts the capabilities of an entire data center into a single rack. You get all of the overhead of additional memory, CPUs, and power supplies of 56 servers collapsed into one.”
As an example, an AI training workload for data center applications handling millions of queries might require 600 CPU systems across 225 server racks, cost around USD 11 million, and draw 630 kilowatts of power. With NVIDIA’s new A100 system, the same workload can be handled by a single server rack costing around USD 1 million and drawing 28 kilowatts of power.
The A100 and DGX A100 systems are available today, after almost two months of delay due to the pandemic.