28 February 2024
Industry Insights
A Future State of GPUs: NVIDIA’s Blackwell B100 rumours

Key Highlights
- Discussed at NVIDIA's SuperComputing 2023 (SC23) special address, the Blackwell B100 GPU is estimated to offer up to four times the performance of its predecessor, the H100, promising a huge leap in AI and deep learning capabilities.
- Packed with a rumoured 178 billion transistors and cutting-edge HBM3e memory from Micron, the B100 is set to revolutionise AI tasks, from self-driving cars to personalised medicine.
- SMC's HyperCube infrastructure is already designed to meet the B100's cooling requirements on arrival, reflecting our commitment to staying ahead in AI.
The tech industry stands on the brink of a computational revolution with NVIDIA's late-2023 preview of the next-gen B100 Blackwell GPU, a chip that promises to redefine the landscape of artificial intelligence and deep learning. Touted to deliver up to four times the performance of its predecessor, the H100, the B100 Blackwell is not just an incremental upgrade; it's a monumental leap forward.
[Figure: NVIDIA data center / AI GPU roadmap. Source: NVIDIA/Guru3D.com]
NVIDIA's unveiling of the B100 comes at a time when the demand for more powerful and efficient computing resources is skyrocketing. The B100 is rumored to be equipped with a staggering 178 billion transistors, doubling down on its computational capabilities, and incorporating the latest HBM3e memory from Micron. This combination of features is expected to deliver groundbreaking performance, particularly in AI inference tasks, a critical component in everything from autonomous vehicles to personalised medicine.
SMC's infrastructure, particularly the HyperCube, is primed for the seamless integration and operation of these advanced chips, underscoring our commitment to future-proof technology. As an NVIDIA CSP partner, SMC is committed to deploying scaled access to next-gen platforms as well as providing the necessary AI support for clients looking to leverage the B100s in their AI infrastructure.
SMC's readiness to host B100 GPUs is about more than just having the right physical infrastructure. It's about understanding the needs of the future: anticipating the computational demands of tomorrow's AI and ML applications and ensuring that the cloud services provided are up to the task. With large-scale H100 GPU clusters already deployed in Singapore, and expansion underway to new AZs in India, Thailand and Europe, we are not just keeping pace with technological advancements; we're staying ahead of the curve.
As the tech community eagerly awaits more details about the B100 Blackwell's capabilities and performance benchmarks, one thing is clear: the future of GPUs and, by extension, AI and deep learning, is about to get a significant upgrade. A question looms large: is the industry truly prepared for the seismic shift these GPUs represent?
Currently, only a scant 5% of global data centres can support rack densities exceeding 50 kW, a figure that pales in comparison to the demands of cutting-edge AI servers. The B100, rumoured to be designed around a formidable 700 W+ Thermal Design Power (TDP), exemplifies the escalating power requirements of deep learning hardware.
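To see why a 50 kW rack fills up quickly, consider a back-of-envelope power budget. The figures below are illustrative assumptions, not specifications: eight GPUs per server at the rumoured 700 W TDP, plus roughly 4.4 kW of per-server overhead for CPUs, memory, networking and fans, broadly in line with what an H100-class 8-GPU system draws today.

```python
# Back-of-envelope rack power budget for B100-class servers.
# All figures below are illustrative assumptions, not vendor specs.

GPU_TDP_W = 700           # rumoured B100 TDP (assumption)
GPUS_PER_SERVER = 8       # typical 8-GPU AI server layout (assumption)
SERVER_OVERHEAD_W = 4400  # CPUs, memory, NICs, fans (rough assumption)
RACK_BUDGET_W = 50_000    # the 50 kW density threshold cited above

server_power_w = GPUS_PER_SERVER * GPU_TDP_W + SERVER_OVERHEAD_W
servers_per_rack = RACK_BUDGET_W // server_power_w

print(f"Per-server draw: {server_power_w / 1000:.1f} kW")
print(f"Servers fitting a 50 kW rack: {servers_per_rack}")
```

Under these assumptions each server draws about 10 kW, so even a rack in the top 5% of global data centres holds only around five such servers before hitting its power and cooling ceiling.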
Sources:
NVIDIA data center / AI GPU roadmap (NVIDIA/YouTube.com)