Giga Computing Announces New Server Built on NVIDIA GB200 NVL4 Platform
Giga Computing announced the launch of its next-generation AI and HPC server, the GIGABYTE XN24-VC0-LA61, powered by the NVIDIA GB200 NVL4 platform. Purpose-built around the platform’s heterogeneous CPU-GPU architecture, the server showcases Giga Computing’s latest innovations in accelerated computing with liquid cooling technology.
The XN24 server has been selected for the RIKEN Center for Computational Science (R-CCS) next-generation HPC-Quantum hybrid platform. The new platform integrates GIGABYTE servers into the development of FugakuNEXT, the eventual successor to Japan’s flagship supercomputer, Fugaku. This integration will use the NVIDIA CUDA-Q platform to support the development of hybrid quantum-GPU supercomputing systems and research into advanced scientific applications that bridge quantum and traditional high-performance computing.
Beyond the XN24, Giga Computing is also demonstrating a comprehensive deployment portfolio at the event—ranging from enterprise-grade AI infrastructure to rack-scale data center solutions—underscoring its role as a pivotal hardware provider for global high-end scientific computing data centers.
Liquid-Cooled Accelerated Computing
The GIGABYTE XN24-VC0-LA61 is an NVIDIA Grace Blackwell server platform based on the NVIDIA MGX modular architecture. Featuring a 2U dual-processor design, it incorporates Direct Liquid Cooling (DLC) technology. Designed for modular scalability, the XN24 offers organizations a flexible way to deploy NVIDIA Blackwell-class computing power without immediately requiring full rack-scale infrastructure, providing a core foundation for building scalable, highly efficient AI infrastructure.
Extreme Computing Density: Powered by NVIDIA GB200 NVL4, the system integrates two Arm-based NVIDIA Grace CPUs and four NVIDIA Blackwell GPUs. Each Grace CPU is equipped with 480GB of LPDDR5X ECC memory, while each Blackwell GPU provides up to 186GB of HBM3E memory, significantly accelerating scientific simulations, Large Language Model (LLM) training, and high-throughput inference tasks.
High-Speed Network Integration: The system supports the NVIDIA Quantum-X800 InfiniBand or Spectrum-X Ethernet platform, with 800Gb/s InfiniBand or 400Gb/s Ethernet per port, and utilizes NVIDIA ConnectX-8 SuperNIC solutions to ensure low-latency, high-bandwidth communication across multi-node clusters.
Flexible Storage and Expansion: It offers up to twelve PCIe Gen5 NVMe drive bays and supports optional DPUs (data processing units), such as NVIDIA BlueField, for hardware-accelerated offloading of compute and security tasks. The system is also equipped with 80 PLUS Titanium redundant power supplies to ensure efficient and stable data center operations.
