Harnessing sustainability to build a better AI cloud.

High-performance deep learning and Omniverse GPU clusters delivered with scale, speed and low cost. Enabled by genuine, breakthrough sustainability, SMC is the AI metal cloud service of ST Telemedia Global Data Centres – scaling rapidly through our data centre platform in Asia, India and Europe.

  • Perfectly Built for Generative AI & Omniverse

    Large-scale, reserved HGX & OVX clusters, built to NVIDIA’s standards for high-performance, distributed supercomputing. No-compromise HPC AI in the cloud, scaling to 2,500 GPUs per cluster, delivered on bare metal with AI development tools included when needed.

  • Delivered by AI Factories

    A100, H100, A40 & L40 clusters hosted centrally and securely from within ST Telemedia Global Data Centres’ worldwide platform, including Singapore & India. SMC sets a new global standard in accessibility, democratizing access to training and inference for AI workloads.

  • Real Sustainability. Zero Greenwashing.

    SMC’s hyper-efficient hosting infrastructure cuts energy use and CO2 by up to 50%. Our service level agreements guarantee the low CO2 output of our cloud clusters and infrastructure, with the reliability of a CSP and the performance of a supercomputer.

  • The Sustainability Discount

    True efficiency cuts costs. How much? Up to 50% below major CSPs, and as competitive as dedicated GPU clouds. Straightforward pricing with multi-year TCOs demonstrably below on-premise deployments or other CSPs.

  • Hyperscale AI clusters
    A100/ A40

    Designed to accommodate the largest and most demanding AI workloads.

    SMC is built to power AI supercomputing and HPC clusters in the cloud. Expect no-compromise bare metal infrastructure in a range of configurations, with instances matching NVIDIA HGX & OVX architecture optimized for AI & Omniverse, perfect for large language models, digital twins and transformer engines.

    SMC metal features cutting-edge technology such as A100 & H100 GPUs, BlueField-3 DPUs, all-NVMe storage, and high core count AMD EPYC and Intel Xeon processors. Dedicated high-speed GPUDirect RDMA non-blocking NDR networking guarantees the scale and node performance to run large AI training & inference workloads.

    Early access to H100 HGX clusters can be facilitated for qualified customers, starting in Q4 2023. Submit an enquiry to learn more.

    Instance specifications:

    Spec              SMC A2 – A100 SXM   SMC B2 – A40 PCIe
    vCPUs             256                 128
    RAM               2,048 GB            1,024 GB
    GPU type          A100 80GB           A40 48GB
    Bundled SSD       30,000 GB           19,000 GB
    Port speed        200G                200G
    Non-blocking      Yes                 Yes
    InfiniBand/RDMA   Yes                 No
  • The sustainability discount
    Up to 50% more cost-effective

    Globally available and cost-effective, large-scale GPU access.

    Breakthrough infrastructure efficiency keeps SMC uniformly cost-effective across all our cloud regions.

    By reinventing what it means to be sustainable, SMC’s game-changing hosting technology and vertically integrated infrastructure stack allow us to deliver the world’s most powerful AI metal at some of the lowest TCOs for computing infrastructure. Period.

    Far from attracting a premium, the sustainability of SMC’s infrastructure and our pure focus on delivering high-performance AI clusters deliver 1-, 3-, and 5-year TCOs that are substantially lower than on-premise, legacy CSPs, or even specialist GPU clouds.

    Using up to 50% less power, alongside a denser, more efficient hosted environment, affords uniquely low-cost access to powerful AI clusters.
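The multi-year TCO claim above reduces to simple arithmetic: monthly rate, fleet size, contract length, and an efficiency-driven discount. A minimal sketch of that comparison, using entirely hypothetical placeholder figures (these are not SMC or CSP prices):

```python
# Illustrative multi-year TCO comparison for a reserved GPU cluster.
# All rates below are hypothetical placeholders, not SMC or CSP pricing.

def tco(monthly_rate_per_gpu: float, gpus: int, years: int,
        efficiency_discount: float = 0.0) -> float:
    """Total cost of ownership: monthly rate x fleet size x months,
    reduced by any efficiency-driven discount."""
    months = years * 12
    return monthly_rate_per_gpu * gpus * months * (1 - efficiency_discount)

gpus = 512                 # assumed fleet size
baseline_rate = 2000.0     # hypothetical $/GPU/month at a major CSP

for years in (1, 3, 5):
    baseline = tco(baseline_rate, gpus, years)
    # "Up to 50% below major CSPs" modeled as a 0.5 discount factor.
    discounted = tco(baseline_rate, gpus, years, efficiency_discount=0.5)
    print(f"{years}-year TCO: ${baseline:,.0f} baseline vs ${discounted:,.0f} discounted")
```

The point of the sketch is only that a constant efficiency discount compounds linearly over contract length, which is why the gap widens in absolute terms at 3 and 5 years.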

  • A growing footprint
    AZs ready for expansion

    As the bare metal service of ST Telemedia Global Data Centres built for AI workloads, we’re everywhere you need us.

    With a footprint of over 50 data centres worldwide – and growing fast – SMC is ready to scale to your needs.

    SMC launched first in Singapore, India and Australia in 2023, with new AZs in the works for more locations in Asia and Europe just around the corner.

    Where access to secure, large scale and local GPU infrastructure matters to you, SMC is the solution. Enquire now to learn more and reserve capacity in upcoming regions.

  • Real sustainability, no greenwashing
    Up to 50% less CO2

    This is not just PPAs or carbon offsets. Genuine CO2 savings in your compute environment.

    A dedication to genuine sustainability is the foundation on which SMC is built. SMC reduces the total CO2 emitted by GPU clouds by up to 50%.

    How? Our delivery replaces traditional infrastructure with immersion-cooled hosting technology, leading to an ultra-low PUE and saving substantial CO2 on operational clusters.

    A sustainable future is a core objective of SMC’s mission. Powered by efficiency, SMC can run NVIDIA AI & Omniverse workloads with one of the lowest carbon impacts anywhere in the world.
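PUE (power usage effectiveness) is total facility energy divided by IT equipment energy, so the saving from a lower PUE follows directly from that ratio. A minimal sketch of the calculation, with assumed PUE values chosen only for illustration (the page does not publish SMC’s exact figures):

```python
# Energy saving from a lower PUE (power usage effectiveness).
# PUE = total facility energy / IT equipment energy, so facility
# energy for a fixed IT load is IT load x PUE. All inputs below
# are assumptions for illustration, not measured SMC figures.

def facility_energy_mwh(it_load_mw: float, pue: float, hours: float) -> float:
    """Total facility energy (MWh) for a given IT load, PUE and duration."""
    return it_load_mw * pue * hours

it_load_mw = 5.0        # assumed IT load of a GPU cluster, in MW
hours_per_year = 8760
air_cooled_pue = 1.6    # assumed typical air-cooled facility
immersion_pue = 1.1     # assumed immersion-cooled facility

air = facility_energy_mwh(it_load_mw, air_cooled_pue, hours_per_year)
imm = facility_energy_mwh(it_load_mw, immersion_pue, hours_per_year)
saving = (air - imm) / air
print(f"Annual facility energy: {air:,.0f} MWh vs {imm:,.0f} MWh "
      f"({saving:.0%} less; on the same grid, CO2 scales proportionally)")
```

Note the hedge: PUE improvements alone account for part of a "up to 50%" figure; the remainder would have to come from denser hosting and other efficiencies, as the surrounding text suggests.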

Delivered from Sustainable AI Factories

Pure, raw, NVIDIA reference architecture delivering large-scale GPU access from ST Telemedia Global Data Centres’ worldwide footprint.

SMC’s uniquely efficient hosting technology, paired with an expanding footprint across our data centres around the globe, is redefining scaled AI infrastructure to accommodate exponential growth in AI applications.

Each cloud region launches with multi-thousand GPU clusters, all networked with non-blocking high-speed fabrics. Vertically integrated, SMC benefits from partners who lead their fields in world-class sustainable infrastructure and compute deployments. A flexible, cost-effective platform, the same infrastructure that powers SMC is – for the first time – also available in bespoke configurations for large enterprise or CSP users to operate AZs of their own. Contact us to learn more.

Combining global partnerships with next-gen infrastructure – requiring no server fans and designed with custom high-efficiency PSUs – SMC’s sustainable AI factories can operate from anywhere in the world and are changing the world of AI infrastructure, one country at a time.

Up to 2,500 GPUs per cluster

Scale in region, scale in cluster, and scale with performance. SMC is designed for ambitious workloads and for getting there quickly. Typical AZs launch with clusters of up to 850 A100, A40, H100 or L40 GPUs, networked with a non-blocking, RDMA-enabled fast fabric, closely following the reference architecture for AI supercomputing.

Each cluster scales to 2,500 GPUs, with larger deployments of up to 10,000 GPUs engineered for larger users.
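For a rough sense of physical scale, the GPU counts mentioned on this page can be mapped to server-node counts. This sketch assumes the standard HGX density of eight GPUs per node – an assumption for illustration, since the page does not state SMC’s node configuration:

```python
# Rough cluster sizing: GPU count -> HGX node count, assuming the
# standard NVIDIA HGX density of 8 GPUs per node (an assumption,
# not a published SMC figure).
import math

GPUS_PER_NODE = 8

def nodes_for(gpus: int, gpus_per_node: int = GPUS_PER_NODE) -> int:
    """Number of server nodes needed to host a given GPU count."""
    return math.ceil(gpus / gpus_per_node)

for gpus in (850, 2500, 10000):
    print(f"{gpus:>6} GPUs -> {nodes_for(gpus):>5} nodes")
```

Under that assumption, a 2,500-GPU cluster is on the order of a few hundred nodes, which is why the non-blocking fabric design matters at this scale.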

Reserve access to A100 & H100 clusters now.