Resource Specification List

Last Update: 2025-04-21 16:09:02

Overview

The GPU instance specifications below are designed for high-performance computing, deep learning, and graphics-intensive workloads. For GPU model and inference model pricing, please refer to the Pricing page.


Application Scenarios

  • Large-scale machine learning training and inference
  • High-performance computing (HPC): computational finance, quantum simulation, molecular modeling
  • AI/GPU-accelerated workloads: rendering, video processing, genomics

Specification List

1. NVIDIA H100 SXM (80GB HBM3)

CPU: Intel Xeon Platinum 8480C
GPU Memory: 80GB per GPU
Storage: Standard
Location: ap-southeast-1a

| Instance Type | vCPU | Memory (GB) | GPU Count | GPU Memory (GB) |
| --- | --- | --- | --- | --- |
| h100.1.cpu.20.mem.194 | 20 | 194 | 1 | 80 |
| h100.2.cpu.40.mem.388 | 40 | 388 | 2 | 160 |
| h100.3.cpu.60.mem.582 | 60 | 582 | 3 | 240 |
| h100.4.cpu.80.mem.776 | 80 | 776 | 4 | 320 |
| h100.5.cpu.100.mem.970 | 100 | 970 | 5 | 400 |
| h100.8.cpu.160.mem.1552 | 160 | 1552 | 8 | 640 |
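
The instance type names above are self-describing, following the pattern <gpu model>.<GPU count>.cpu.<vCPU>.mem.<memory in GB>. As a minimal sketch (not part of any provider SDK), the following Python snippet parses a name from this list back into its components; the InstanceSpec class and parse_instance_type helper are hypothetical names introduced here for illustration.

```python
import re
from dataclasses import dataclass

@dataclass
class InstanceSpec:
    gpu_model: str    # e.g. "h100" or "rtx3090"
    gpu_count: int
    vcpu: int
    memory_gb: int

# Hypothetical helper: parse "<model>.<gpu_count>.cpu.<vcpu>.mem.<memory_gb>"
# as used by the instance type names in the tables on this page.
def parse_instance_type(name: str) -> InstanceSpec:
    m = re.fullmatch(
        r"(?P<model>[a-z0-9]+)\.(?P<gpus>\d+)\.cpu\.(?P<vcpu>\d+)\.mem\.(?P<mem>\d+)",
        name,
    )
    if m is None:
        raise ValueError(f"unrecognized instance type: {name}")
    return InstanceSpec(
        gpu_model=m["model"],
        gpu_count=int(m["gpus"]),
        vcpu=int(m["vcpu"]),
        memory_gb=int(m["mem"]),
    )

print(parse_instance_type("h100.8.cpu.160.mem.1552"))
# InstanceSpec(gpu_model='h100', gpu_count=8, vcpu=160, memory_gb=1552)
```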

2. NVIDIA GeForce RTX 3090 (24GB GDDR6X)

CPU: AMD EPYC 7T83
GPU Memory: 24GB per GPU
Storage: Standard
Location: ap-southeast-1a

| Instance Type | vCPU | Memory (GB) | GPU Count | GPU Memory (GB) |
| --- | --- | --- | --- | --- |
| rtx3090.1.cpu.20.mem.600 | 20 | 600 | 1 | 24 |
| rtx3090.2.cpu.40.mem.1200 | 40 | 1200 | 2 | 48 |
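
Both specification families scale vCPU, system memory, and GPU memory linearly with GPU count, so choosing a size usually reduces to matching total GPU memory. The sketch below assumes a small in-code copy of the tables above (SPECS) and a hypothetical smallest_fit helper; it illustrates one way to select the smallest listed instance type that meets a GPU memory requirement and is not part of any provider API.

```python
# Hypothetical copy of the specification tables above:
# (instance_type, vcpu, memory_gb, gpu_count, gpu_memory_gb)
SPECS = [
    ("h100.1.cpu.20.mem.194",     20,  194, 1,  80),
    ("h100.2.cpu.40.mem.388",     40,  388, 2, 160),
    ("h100.4.cpu.80.mem.776",     80,  776, 4, 320),
    ("h100.8.cpu.160.mem.1552",  160, 1552, 8, 640),
    ("rtx3090.1.cpu.20.mem.600",  20,  600, 1,  24),
    ("rtx3090.2.cpu.40.mem.1200", 40, 1200, 2,  48),
]

def smallest_fit(min_gpu_mem_gb: int, model_prefix: str = "") -> str | None:
    """Return the listed instance type with the least total GPU memory that
    still meets the requirement, optionally restricted to one GPU model."""
    candidates = [
        spec for spec in SPECS
        if spec[4] >= min_gpu_mem_gb and spec[0].startswith(model_prefix)
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda spec: spec[4])[0]

print(smallest_fit(150, "h100"))  # h100.2.cpu.40.mem.388
```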

Key Notes

  • Availability: Check with your account manager for real-time updates on instance access.
  • Storage: All GPU instances default to Standard storage.