GPU Servers
Discover a broad portfolio of cutting-edge GPU solutions engineered to accelerate your deep learning workloads in the modern data centre. Our optimised offerings deliver exceptional performance and seamless integration, helping you unlock the full potential of your data-driven innovations.
Modular Building Block Design: a Future-Proof, Open-Standards-Based Platform in 4U, 5U, or 8U for Large-Scale AI Training and HPC Applications
GPU:
NVIDIA HGX H100/A100 4-GPU/8-GPU, AMD Instinct MI300X/MI250 OAM Accelerator, Intel Data Center GPU Max Series
CPU:
Intel® Xeon® or AMD EPYC™
Memory:
Up to 32 DIMMs, 8TB
Drives:
Up to 24 Hot-swap U.2 or 2.5" NVMe/SATA drives
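A quick sanity check on the memory spec above: fully populating all 32 DIMM slots to reach the 8TB maximum implies 256GB modules. A minimal sketch of that arithmetic (the figures are taken from this spec sheet):

```python
# Per-module capacity implied by the spec: 8 TB across 32 DIMM slots.
TOTAL_CAPACITY_TB = 8
DIMM_SLOTS = 32

# Convert TB to GB (binary units) and divide across the slots.
per_dimm_gb = TOTAL_CAPACITY_TB * 1024 // DIMM_SLOTS
print(per_dimm_gb)  # 256
```

Smaller modules are of course possible at lower total capacities; only the maximum configuration pins the module size.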
Direct-to-chip liquid-cooled systems for high-density AI infrastructure at scale.
GPU:
NVIDIA HGX H100/H200/B200 4-GPU/8-GPU
CPU:
Intel® Xeon® or AMD EPYC™
Memory:
Up to 32 DIMMs, 9TB
Drives:
Up to 24 Hot-swap U.2 or 2.5" NVMe/SATA drives
Maximum Acceleration and Flexibility for AI/Deep Learning and HPC Applications
GPU:
Up to 10 NVIDIA H100 PCIe GPUs, or up to 10 double-width PCIe GPUs
CPU:
Intel® Xeon® or AMD EPYC™
Memory:
Up to 32 DIMMs, 9TB
Drives:
Up to 24 Hot-swap U.2 or 2.5" NVMe/SATA drives
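On a multi-GPU node like this, a quick way to confirm the installed accelerators is `nvidia-smi --query-gpu=name,memory.total --format=csv`. A minimal sketch that parses that CSV output into a usable list (the sample output below is hypothetical, not captured from a real system):

```python
import csv
import io

def parse_gpu_list(csv_text):
    """Parse `nvidia-smi --query-gpu=name,memory.total --format=csv`
    output into a list of (name, memory) tuples."""
    reader = csv.reader(io.StringIO(csv_text))
    rows = [[cell.strip() for cell in row] for row in reader if row]
    return [tuple(row) for row in rows[1:]]  # skip the CSV header row

# Hypothetical output from a two-GPU H100 configuration:
sample = """name, memory.total [MiB]
NVIDIA H100 80GB HBM3, 81559 MiB
NVIDIA H100 80GB HBM3, 81559 MiB"""

gpus = parse_gpu_list(sample)
print(len(gpus))   # 2
print(gpus[0][0])  # NVIDIA H100 80GB HBM3
```

In practice you would feed the function the captured output of `subprocess.run(["nvidia-smi", ...])`; parsing is kept separate here so it can be exercised without a GPU present.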
Modular Building Block Platform Supporting Today's and Future GPUs, CPUs, and DPUs
GPU:
Up to 4 NVIDIA PCIe GPUs including H100, H100 NVL, and L40S
CPU:
NVIDIA GH200 Grace Hopper™ Superchip, Grace™ CPU Superchip, or Intel® Xeon®
Memory:
Up to 960GB integrated LPDDR5X memory (Grace Hopper or Grace CPU Superchip) or 16 DIMMs, 4TB DRAM (Intel)
Drives:
Up to 8 E1.S + 4 M.2 drives
Multi-Processor System Combining CPU and GPU, Designed for the Convergence of AI and HPC
GPU:
4 AMD Instinct™ MI300A Accelerated Processing Units (APUs)
CPU:
AMD Instinct™ MI300A Accelerated Processing Unit (APU)
Memory:
Up to 512GB integrated HBM3 memory (4x 128GB)
Drives:
Up to 8 2.5" NVMe drives (optionally 24 2.5" SATA/SAS via a storage add-on card) + 2 M.2 drives
Flexible Design for AI and Graphically Intensive Workloads, Supporting Up to 10 GPUs
GPU:
NVIDIA HGX A100 8-GPU with NVLink, or up to 10 double-width PCIe GPUs
CPU:
Intel® Xeon® or AMD EPYC™
Memory:
Up to 32 DIMMs, 8TB DRAM or 12TB DRAM + PMem
Drives:
Up to 24 Hot-swap 2.5" SATA/SAS/NVMe
Dense and Resource-saving Multi-GPU Architecture for Cloud-Scale Data Center Applications
GPU:
Up to 3 double-width PCIe GPUs per node
CPU:
Intel® Xeon® or AMD EPYC™
Memory:
Up to 8 DIMMs, 2TB per node
Drives:
Up to 2 front hot-swap 2.5" U.2 per node
Flexible Solution for AI/Deep Learning Practitioners and High-end Graphics Professionals
GPU:
Up to 4 double-width PCIe GPUs
CPU:
Intel® Xeon®
Memory:
Up to 16 DIMMs, 6TB
Drives:
Up to 8 hot-swap 2.5" SATA/NVMe