OpenAI is set to leverage AWS’s latest UltraServer infrastructure, which includes hundreds of thousands of NVIDIA GPUs and millions of CPUs optimized for large-scale AI training and inference. Amazon EC2 UltraServers are designed for ultra-low-latency performance and security, supporting next-generation model training and real-time inference for services like ChatGPT. OpenAI CEO Sam Altman emphasized the necessity of reliable computing power for scaling advanced AI, while AWS CEO Andy Jassy noted that the optimized infrastructure will bolster OpenAI’s innovations and help deliver cutting-edge AI technology to organizations worldwide more quickly.
Under the partnership, OpenAI begins using AWS’s computing capacity immediately, with full deployment expected by the end of 2026 and potential further expansion beyond that. Once exclusively reliant on Microsoft Azure, OpenAI has been diversifying its infrastructure partnerships, including making its models available through services like Amazon Bedrock. OpenAI also recently entered a $300 billion agreement with Oracle, underscoring its commitment to cloud infrastructure diversification for enhanced AI capabilities.
👉 Read the original: CIO Magazine