-
QuantaGrid S74G-2U
Breakthrough accelerated performance for giant-scale AI-HPC applications
- Introducing the first-generation NVIDIA® MGX™ architecture with modular infrastructure
- Powered by the NVIDIA® Grace™ Hopper™ Superchip
- Coherent memory between CPU and GPU via the NVLink®-C2C interconnect
- Optimized for memory-intensive inference and HPC performance
- Arm SystemReady compliant
-
QuantaGrid D43N-3U
Optimized Accelerated Server
- Flexible accelerator card configuration optimized for both compute- and graphics-intensive workloads
- Up to 128 CPU cores with 8TB memory capacity to feed high-throughput accelerator cards
- Up to 2x HDR/200GbE networking for cluster computing
- Easy-maintenance design for minimal downtime
-
QuantaGrid D74A-7U
Accelerated Parallel Computing Performance for the Most Extreme AI-HPC Workloads
- Multi-GPU server for HPC / AI training (e.g., LLMs, NLP)
- Powered by 2x 4th Gen AMD EPYC™ 9004 Series Processors, with compatibility for next-gen AMD EPYC™ processors
- Built on the NVIDIA HGX architecture, with flexible support for 8x NVIDIA H100/H200 GPUs or 8x AMD MI300X GPUs
- 18x SFF all-NVMe drive bays for GPUDirect Storage and boot drive
- 10x OCP NIC 3.0 TSFF for GPUDirect RDMA
- Modularized design for easy serviceability
-
QuantaGrid D54U-3U
Endless Flexibility for Diverse Applications
- Powered by 5th/4th Gen Intel® Xeon® Scalable Processors
- Powered by NVIDIA® GPUs
- PCIe 5.0 & DDR5 platform ready
- Up to 4x DW accelerators or 8x SW accelerators
- Supports both active- and passive-cooled accelerators
- Up to 10x PCIe 5.0 NVMe drives to speed up data loading
- PCIe 5.0 400Gb networking for scale-out
- Enhanced serviceability with tool-less, hot-plug designs
-
QuantaGrid D74H-7U
Advanced Performance for the Most Extreme AI-HPC Workloads
- 2x top-bin 5th/4th Gen Intel® Xeon® Processors
- 18x SFF All-NVMe drive bays for GPUDirect storage and boot drive
- 10x OCP NIC 3.0 TSFF for GPUDirect RDMA
- 8x Hopper H100/H200 SXM5 GPU modules with HGX baseboard
-
QuantaGrid D52G-4U
An all-in-one box taking on AI and HPC
- Up to 8x NVIDIA® Tesla® V100 with NVLink™ support and up to 300GB/s GPU-to-GPU communication
- 10x dual-width 300W GPUs, or up to 16x single-width 75W GPUs
- Multiple GPU topologies to suit a variety of parallel computing workloads
- Up to 4x 100Gb/s high-bandwidth RDMA networking
- Up to 8x NVMe drives to accelerate deep learning