NVIDIA Hopper GPU: bonkers 140 billion transistors on next-gen AI chip
NVIDIA's next-gen Hopper GPU rumored to measure in at 140 billion transistors, surfs in on TSMC's fresh new 5nm process node.
NVIDIA's next-gen Hopper GPU is shaping up to be one of the biggest GPUs ever made, if not the biggest -- with fresh rumors teasing it rocks a Godzilla-sized 140 billion transistors, 2.6x the transistor count of the Ampere-based GA100 GPU.
The new rumors are coming from Chiphell, which states one of the flagship NVIDIA Hopper GPUs will have a huge 140 billion transistors, and be made on TSMC 5nm. We might see a monster monolithic design in the GH100 -- but also a multi-chip module (MCM) design in the GH102 -- with 140 billion transistors somewhere in the 5nm mix.
NVIDIA's new Hopper GPU architecture, whether in monolithic or MCM form, will go up against AMD's next-gen Aldebaran GPU -- a new MCM-based GPU with up to 128GB of HBM2e memory, made on TSMC 6nm -- as well as Intel's upcoming Ponte Vecchio GPU (which uses an Intel 7 base tile, TSMC N7 for Xe-Link, and TSMC N5 for the compute tile) with up to 128GB of HBM2e memory.
- Read more: NVIDIA's next-gen GH100 Hopper GPU: 1000mm2 and monster 1000W+ power
- Read more: NVIDIA 'GPU-N': GH100 GPU, up to 233GB HBM2e at 6.3TB/sec (!!!)
NVIDIA should reveal its next-gen Hopper GPU architecture at its upcoming GTC (GPU Technology Conference) in just a few weeks' time, in March 2022.