ASUS has announced ESC4000A-E10, a new GPU server powered by the NVIDIA A100 PCIe GPU and featuring PCIe 4.0 expansion and OCP 3.0 networking. With faster compute and GPU performance, ESC4000A-E10 is designed to accelerate and optimise data centres for high utilisation while providing a low total cost of ownership. The server continues ASUS's close partnership with NVIDIA on systems for AI, data analytics and high-performance computing (HPC) applications.
ASUS ESC4000A-E10 is a 2U GPU server powered by AMD EPYC 7002 series processors, which deliver up to 2X the performance and 4X the floating-point capability of the previous-generation EPYC 7001 series. Designed for AI, HPC and virtual desktop infrastructure (VDI) applications in data-centre or enterprise environments that require powerful CPU cores, multiple GPUs and faster transmission speeds, ESC4000A-E10 delivers GPU-optimised performance with support for up to four high-performance double-slot or eight single-slot GPUs, including the latest NVIDIA A100 PCIe GPUs built on the NVIDIA Ampere architecture, as well as Tesla T4 and Quadro GPUs. This also benefits virtualisation: GPU resources can be consolidated into a shared pool, so users can allocate them more efficiently.
ASUS ESC4000A-E10 also features up to 11 PCIe 4.0 slots for compute, graphics, storage and networking expansion. PCIe 4.0 provides transfer speeds of up to 16 GT/s per lane, double the bandwidth of PCIe 3.0, while delivering lower power consumption, better lane scalability and backwards compatibility. For networking, ESC4000A-E10 supports an OCP 3.0 network interface card with up to 200 Gigabit Ethernet to meet the demands of high-bandwidth applications. With a flexible chassis design, ESC4000A-E10 accommodates up to eight hot-swappable 3.5-inch or 2.5-inch drives, four of which can be configured as NVMe SSDs.
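As a rough sanity check on those figures: 16 GT/s per lane with PCIe's 128b/130b encoding works out to just under 2 GB/s of usable one-way bandwidth per lane. The sketch below (plain Python; the only assumption beyond the article's numbers is the standard 128b/130b encoding overhead) shows the arithmetic for the x16 links a GPU typically uses:

```python
# Per-lane raw signalling rate, in gigatransfers per second.
PCIE3_GT_S = 8.0    # PCIe 3.0
PCIE4_GT_S = 16.0   # PCIe 4.0 (double PCIe 3.0)

# PCIe 3.0 and 4.0 both use 128b/130b encoding: every 130 bits on the
# wire carry 128 bits of payload, so ~1.5% of the raw rate is overhead.
ENCODING = 128 / 130

def lane_gbytes_per_s(gt_per_s: float) -> float:
    """Usable one-way bandwidth of a single lane, in GB/s."""
    return gt_per_s * ENCODING / 8  # 8 bits per byte

def link_gbytes_per_s(gt_per_s: float, lanes: int = 16) -> float:
    """Usable one-way bandwidth of an xN link, in GB/s."""
    return lane_gbytes_per_s(gt_per_s) * lanes

print(f"PCIe 3.0 x16: {link_gbytes_per_s(PCIE3_GT_S):.1f} GB/s")  # ~15.8 GB/s
print(f"PCIe 4.0 x16: {link_gbytes_per_s(PCIE4_GT_S):.1f} GB/s")  # ~31.5 GB/s
```

Because both generations share the same encoding, the usable bandwidth doubles exactly in step with the signalling rate.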
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration and flexibility to power the world's highest-performing elastic data centres for AI, data analytics and HPC applications. As the engine of the NVIDIA data centre platform, the A100 GPU provides up to 20X higher performance than the V100 GPU, can efficiently scale up to thousands of GPUs, and can be partitioned into as many as seven isolated GPU instances with the new Multi-Instance GPU (MIG) capability to accelerate workloads of all sizes.
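For reference, MIG partitioning is driven from the host with NVIDIA's `nvidia-smi` tool. A minimal sketch of splitting an A100 into seven of the smallest (1g.5gb) instances might look like the following; the profile ID shown applies to the 40 GB A100, and the commands assume a recent driver and root privileges on a machine with the GPU installed:

```shell
# Enable MIG mode on GPU 0 (may require a GPU or system reset to take effect).
sudo nvidia-smi -i 0 -mig 1

# List the GPU-instance profiles the driver supports on this card.
sudo nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances (profile ID 19 on a 40 GB A100),
# each with its default compute instance (-C).
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# Confirm the instances were created.
sudo nvidia-smi mig -lgi
```

Each resulting instance has its own dedicated memory and compute slice, so containers or VMs pinned to different instances cannot interfere with one another.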
The NVIDIA A100 GPU features third-generation Tensor Core technology supporting a broad range of math precisions (including FP64, TF32, FP16, BF16 and INT8), providing a unified workload accelerator for data analytics, AI training, AI inference and HPC. Accelerating both scale-up and scale-out workloads on one platform enables elastic data centres that dynamically adjust to shifting application demands, simultaneously boosting throughput and driving down data-centre cost.
Combined with the NVIDIA software stack, the A100 GPU accelerates all major deep learning and data analytics frameworks and over 700 HPC applications. NVIDIA NGC, a hub for GPU-optimised software containers for AI and HPC, simplifies application deployments so researchers and developers can focus on building their solutions. ASUS will also offer A100-powered NGC-Ready systems, which are built to run deep-learning and machine-learning workloads and are tested for functionality and performance of the AI stack.
Tags: AMD, ASUS, NVIDIA