HPE Introduces Supercomputing Solution for Accelerated AI Model Training


Hewlett Packard Enterprise (HPE) has unveiled a supercomputing solution purpose-built for generative AI, aimed at large enterprises, research institutions, and government organizations. Designed to support AI model training on private data sets, the solution integrates a suite of software tools for developing and tuning AI models and applications.

The system, built on HPE Cray supercomputing technology, harnesses NVIDIA Grace Hopper GH200 Superchips to deliver the scale and performance required for demanding AI workloads. It includes HPE’s Machine Learning Development Environment, which demonstrated its capabilities by fine-tuning the 70-billion-parameter Llama 2 model in less than 3 minutes.
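For readers curious what fine-tuning a Llama 2 class model looks like in practice, the sketch below uses the open-source Hugging Face Transformers and PEFT libraries with low-rank adapters. It is purely illustrative and is not HPE’s pipeline or the Machine Learning Development Environment’s API; the model name, dataset, and hyperparameters are placeholders, and fine-tuning the full 70-billion-parameter model in minutes requires multi-node hardware of the kind described above.

```python
# Illustrative sketch only: parameter-efficient fine-tuning of a Llama 2 style
# model with Hugging Face Transformers + PEFT. Dataset name, model name, and
# hyperparameters are placeholders, not HPE's configuration.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-70b-hf"   # placeholder; gated model, requires access
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto")

# Attach low-rank adapters so only a small fraction of weights are trained.
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Tokenize a small instruction-tuning dataset (placeholder dataset name).
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["output"], truncation=True, max_length=512)

tokens = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-ft",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1,
                           bf16=True,
                           logging_steps=10),
    train_dataset=tokens,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In a supercomputing setting, the same kind of job would be distributed across many GPU nodes and managed by an experiment platform such as HPE’s Machine Learning Development Environment, which is what makes minute-scale fine-tuning of a 70-billion-parameter model feasible.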

Delivered in collaboration with NVIDIA, the solution is intended to give customers the performance needed to pursue breakthroughs in generative AI. Its networking is built on the HPE Slingshot Interconnect, providing the high-speed fabric essential for distributed AI workloads.

Emphasizing energy efficiency, HPE integrates liquid-cooling capabilities in its supercomputing solution, significantly reducing energy consumption. This approach aligns with the increasing power requirements for AI workloads in data centers and HPE’s commitment to sustainable computing.

Set for general availability in December across more than 30 countries, HPE’s supercomputing solution for generative AI marks a significant stride in combining advanced computing technology with AI development goals.