[Global Times Comprehensive Report] "Google May Help OpenAI Reduce Its Reliance on Nvidia" – The Times of India reported on the 28th that OpenAI has for the first time adopted Google's AI chips to provide computing power for ChatGPT and its other products. Until now, OpenAI has been one of the largest purchasers of Nvidia's graphics processing units (GPUs). The report says the AI company is diversifying its chip suppliers to reduce its reliance on Nvidia.
According to a Reuters report on the 27th, this rental of Google's TPUs marks OpenAI's first substantial use of non-Nvidia chips and reflects the company's shift away from its reliance on Microsoft's data centers. Previously, OpenAI obtained Nvidia chips mainly through partnerships with Microsoft and Oracle for model training and deployment.
US tech industry outlet The Information believes the move could establish Google's TPU as a more cost-effective alternative to Nvidia's GPUs. However, sources said that although OpenAI hoped to cut inference costs by renting TPUs, Google did not offer OpenAI its most powerful version, suggesting that Google's most advanced TPUs remain reserved for internal use.
For Google, Reuters noted, the collaboration comes as the company expands the external supply of its TPUs. The chips were long used mainly in-house before being opened to outside clients, attracting tech companies including Apple and US AI startup Anthropic.
Currently, leading tech companies are strengthening their control over computing infrastructure through self-developed chips.

According to an earlier report by The New York Times, Amazon, AMD, and several startups have begun offering credible alternatives to Nvidia chips, especially for the "inference" phase of AI development. Among them, Amazon announced new computing services based on its Trainium 2 chip, which have drawn positive feedback from potential customers including Apple.
On the 12th of this month, AMD unveiled its new AI chips—the MI350 series and the MI400 series, due in 2026—and compared them with Nvidia's Blackwell series. According to Reuters, AMD CEO Lisa Su said these chips will compete with Nvidia's Blackwell lineup. AMD has been striving to capture a share of the rapidly growing AI chip market from Nvidia. "The future of AI will not be built by any single company or closed ecosystem; it will be shaped by open collaboration across the industry," Su said.
However, according to a report on the 27th by Canadian technology news site Wccftech, despite tech giants' heavy investment in in-house chips to reduce reliance on Nvidia, Nvidia's AI chips still lead the industry. Microsoft's ambition to challenge Nvidia's dominance in AI has been set back: its first independently developed AI chip, "Braga", has been delayed because its performance fell short of expectations, and preliminary performance evaluations indicate it cannot surpass Nvidia's Blackwell chip released in 2024.
