HPE Ships First NVIDIA Blackwell-Based AI Solution, GB200 NVL72, for Scalable AI Clusters

Image Courtesy: HPE

Hewlett Packard Enterprise (NYSE: HPE) has announced the shipment of its first solution based on the NVIDIA Blackwell family, the NVIDIA GB200 NVL72, a rack-scale AI system built for large-scale AI clusters and equipped with direct liquid cooling for greater efficiency and performance. The GB200 NVL72 is aimed at AI service providers and enterprises that need to scale AI model training and inference efficiently. Its shared-memory, low-latency architecture integrates NVIDIA CPUs, GPUs, networking, and software to support high-speed processing of trillion-parameter AI models.

“As AI workloads become more complex, enterprises require solutions that deliver extreme performance, scalability, and rapid deployment,” said Trish Damkroger, Senior Vice President & General Manager of HPC & AI Infrastructure Solutions at HPE. “With industry-leading liquid cooling expertise, HPE provides the lowest cost per token training and best-in-class AI cluster performance.”

The NVIDIA GB200 NVL72 system is engineered for high-performance AI computing, pairing 72 NVIDIA Blackwell GPUs with 36 NVIDIA Grace CPUs in a single rack. These components are connected through high-speed NVIDIA NVLink, enabling rapid data transfer and efficient parallel processing.

To support intensive workloads, the system is equipped with up to 13.5 TB of HBM3e memory delivering 576 TB/sec of memory bandwidth. HPE’s direct liquid cooling technology improves power efficiency, sustaining performance while reducing energy consumption. Drawing on five decades of liquid cooling expertise, HPE has built eight of the top 15 systems on the Green500 list and seven of the world’s 10 fastest supercomputers.
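To put the rack-level figures quoted above in per-device terms, the short Python sketch below derives approximate per-GPU numbers from the totals in this announcement (72 Blackwell GPUs, 36 Grace CPUs, 13.5 TB HBM3e, 576 TB/sec). It assumes the memory and bandwidth figures are aggregates across the rack; the per-GPU values are simple arithmetic, not specifications published by HPE.

```python
# Back-of-envelope check of the rack-level figures in the article.
# Rack totals come from the announcement; per-device values are derived,
# assuming capacity and bandwidth are aggregated across all 72 GPUs.

BLACKWELL_GPUS_PER_RACK = 72
GRACE_CPUS_PER_RACK = 36
HBM3E_CAPACITY_TB = 13.5      # "up to 13.5 TB of HBM3e memory"
HBM3E_BANDWIDTH_TBS = 576.0   # "576 TB/sec" bandwidth

gpus_per_cpu = BLACKWELL_GPUS_PER_RACK / GRACE_CPUS_PER_RACK
hbm_per_gpu_gb = HBM3E_CAPACITY_TB * 1000 / BLACKWELL_GPUS_PER_RACK
bw_per_gpu_tbs = HBM3E_BANDWIDTH_TBS / BLACKWELL_GPUS_PER_RACK

print(f"Blackwell GPUs per Grace CPU: {gpus_per_cpu:.0f}")     # 2
print(f"HBM3e per GPU: ~{hbm_per_gpu_gb:.1f} GB")              # ~187.5 GB
print(f"HBM3e bandwidth per GPU: ~{bw_per_gpu_tbs:.0f} TB/s")  # ~8 TB/s
```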

Bob Pette, Vice President of Enterprise Platforms at NVIDIA, highlighted the collaboration’s impact: “HPE’s first shipment of NVIDIA GB200 NVL72 will enable enterprises to efficiently build and scale AI clusters, powered by cutting-edge liquid cooling technology.”

HPE provides global support for AI infrastructure to keep systems performing reliably. Onsite engineering services help businesses improve system efficiency and maintain uptime so AI workloads run without interruption. HPE also offers performance benchmarking, enabling organizations to fine-tune AI models for improved accuracy and effectiveness.

In addition to performance optimization, HPE prioritizes sustainability by offering specialized services to monitor and reduce energy consumption. By integrating energy-efficient solutions, businesses can minimize their environmental impact while maintaining high-performance AI operations, aligning with global sustainability goals. The NVIDIA GB200 NVL72 by HPE is part of a broader portfolio of AI, high-performance computing, and supercomputing solutions, designed to accelerate GenAI, scientific research, and compute-intensive workloads.