NVIDIA H200 now available

NVIDIA H200: A new era of memory capacity and energy efficiency.

Significantly reduce the time required to train large AI models.

  • 1.4x
    the memory bandwidth of the H100
  • 2x
    LLM inference performance
  • 110x
    HPC performance
  • Up to 80%
    lower cost to train

NVIDIA H200 use cases

Use cases that benefit from energy-efficient H200 infrastructure.

GENAI
Train and fine-tune expansive LLMs for creative and conversational AI.

HEALTHCARE
Accelerate genomics and molecular research for drug discovery and precision medicine.

MANUFACTURING
Optimise simulations and digital twins with faster model iterations.

More than just an incremental upgrade. Increased memory results in faster training.

Augment performance, maximise utilisation

Supercharge generative AI and high-performance computing (HPC) workloads with the world's leading AI computing platform. Run more tenants on each cluster, backed by 1.4x the memory bandwidth of the H100.
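
To see what that bandwidth figure means in practice, here is a back-of-envelope roofline sketch. The bandwidth values are assumed published specs (~4.8 TB/s for the H200 vs ~3.35 TB/s for the H100, which is where the 1.4x comes from), and the result is an upper bound, not a benchmark:

```python
# Rough roofline sketch: batch-1 LLM decode reads every weight once per
# token, so throughput is bounded by HBM bandwidth / weight footprint.
# All figures are assumed specs for illustration, not measured results.
H200_BW_GB_S = 4800  # ~4.8 TB/s HBM bandwidth (the "1.4x" vs H100)
H100_BW_GB_S = 3350  # ~3.35 TB/s

def max_decode_tokens_per_s(bw_gb_s: float, params_b: float,
                            bytes_per_param: int = 2) -> float:
    """Upper bound on batch-1 decode speed for a dense model in BF16."""
    weights_gb = params_b * bytes_per_param
    return bw_gb_s / weights_gb

for name, bw in (("H100", H100_BW_GB_S), ("H200", H200_BW_GB_S)):
    print(f"{name}: ~{max_decode_tokens_per_s(bw, 70):.0f} tokens/s "
          f"upper bound for a 70B model in BF16, batch 1")
```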

Based on NVIDIA's Hopper architecture, the H200 offers unmatched performance, throughput, and scale for every stage of your AI journey.

MEMORY
2,048 GB DDR5 RDIMMs at 4,800 MT/s
NETWORK
NVIDIA ConnectX-7
LINK
NVSwitch, 900 GB/s
GPU
8x H200 SXM, 141 GB each
CPU
Intel Xeon Platinum 8462Y+, 128 vCPUs
STORAGE
30 TB NVMe
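
A minimal sizing sketch against the node spec above. The 20% memory margin reserved for KV cache, activations, and runtime state is an assumption for illustration, not a measured figure:

```python
# Does a dense model's weights fit on one 8x H200 node? A rough check
# using the spec list above; the overhead margin is an assumption.
GPUS_PER_NODE = 8
HBM_PER_GPU_GB = 141

def fits_on_node(params_b: float, bytes_per_param: int = 2,
                 overhead: float = 0.2) -> bool:
    """True if BF16 weights fit with `overhead` reserved for KV cache,
    activations, and CUDA runtime state."""
    weights_gb = params_b * bytes_per_param
    budget_gb = GPUS_PER_NODE * HBM_PER_GPU_GB * (1 - overhead)
    return weights_gb <= budget_gb

for size_b in (70, 180, 405):
    print(f"{size_b}B in BF16 fits on one node: {fits_on_node(size_b)}")
```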

Lower energy, lower costs

The best tokens and parameters per watt. Modern LLM and GenAI models, whether text, code, or multi-modal, run as performantly as they would on competing platforms, but with the energy efficiency SMC is known for.
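
To make tokens-per-watt concrete, the arithmetic is simply throughput divided by power. Both inputs below are placeholder values for illustration, not SMC or NVIDIA measurements:

```python
# Illustrative energy-per-token arithmetic. Both inputs are assumed
# placeholder values, not measured benchmarks.
GPU_POWER_W = 700       # assumed per-GPU power draw under load, in watts
TOKENS_PER_S = 3000     # hypothetical batched decode throughput per GPU

joules_per_token = GPU_POWER_W / TOKENS_PER_S   # energy cost per token
tokens_per_joule = TOKENS_PER_S / GPU_POWER_W   # "tokens per watt-second"

print(f"~{joules_per_token * 1000:.0f} mJ per token, "
      f"~{tokens_per_joule:.1f} tokens per joule")
```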

Support that's truly 24/7. Gain access to experts who provide seamless support from onboarding to migration. Streamline deployment from day one.


Frequently asked questions

  • What is the difference between the NVIDIA H200s and H100s?

    The NVIDIA H200 features 141 GB of memory per GPU, nearly double the H100's 80 GB, and delivers up to 1.4x the memory bandwidth. These advances enable training of larger AI models, faster inference, and more efficient handling of complex datasets; the sizing sketch after this FAQ shows why the extra capacity matters.

  • How does the H200 support large-scale AI workloads?

    With its expanded memory and enhanced bandwidth, the H200 is designed for advanced workloads such as training large language models (LLMs), generative AI applications, and real-time data analytics. The increased memory eliminates bottlenecks, allowing for seamless scalability in demanding HPC environments.

  • What makes SMC's architecture unique for hosting H200 GPUs?

    SMC combines H200 GPUs with an advanced hosting environment that features immersion cooling, HPC-grade support, and NVIDIA reference designs. This ensures reduced operational costs, optimised performance, and unmatched scalability tailored to enterprise needs.

  • How does the H200 contribute to cost savings?

    SMC’s infrastructure reduces energy requirements by up to 50%. The H200 itself also uses up to 50% less energy than the H100 on key LLM workloads, and its faster time-to-results reduces overall project costs.

  • What are some use cases of the H200s?

    The H200 is ideal for industries requiring high-performance AI capabilities, such as manufacturing (simulations and digital twins), healthcare (genomics and molecular simulations), media (real-time rendering), and research (large-scale simulations).

  • How can I access H200 GPUs?

    SMC offers H200 GPUs on a pre-sale basis with tailored support for enterprises. Contact our team to learn more about deployment options and early access benefits. Coming January 2025.
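
As referenced in the first FAQ answer, here is a minimal KV-cache sizing sketch showing why 141 GB per GPU matters for long-context inference. The model shape is a hypothetical 70B-class configuration with grouped-query attention, not a specific product claim:

```python
# Hypothetical KV-cache sizing: the cache stores K and V tensors per
# layer for every token in the context, so it grows with sequence length.
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                seq_len: int, batch: int, bytes_per: int = 2) -> float:
    # 2x accounts for the separate K and V tensors
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per / 1e9

# Assumed 70B-class shape: 80 layers, 8 KV heads, head_dim 128, BF16.
cache = kv_cache_gb(80, 8, 128, seq_len=128_000, batch=8)
print(f"~{cache:.0f} GB of KV cache")  # ~336 GB, on top of ~140 GB of weights
```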