OpenAI continues to use NVIDIA’s GPUs; says it has no plans to deploy Google’s in-house AI chip

Recent reports suggested that OpenAI was exploring the use of Google’s TPUs to meet rising demand for AI computing, but it appears OpenAI currently has no such plans.
A spokesperson for OpenAI told Reuters that while the AI lab is in early testing with some of Google’s tensor processing units (TPUs), it has no plans to deploy them at scale right now. Google declined to comment on the matter.
Currently, OpenAI is actively using NVIDIA’s graphics processing units (GPUs) and AMD’s AI chips to meet its growing compute demand. NVIDIA has officially posted on the X platform that the company is proud to partner with OpenAI.
We’re proud to partner with @OpenAI and continue powering the foundation of their work.
— NVIDIA (@nvidia) July 7, 2025
OpenAI is also said to be developing its own AI chip to reduce its reliance on third-party chips. The chip’s design is reportedly set to be finalized and sent for manufacturing later this year.
OpenAI has also partnered with Google Cloud to expand its computing capacity, as Reuters exclusively reported last month. However, most of the computing power used by OpenAI is said to come from GPU servers operated by CoreWeave.
Stay tuned to The Tech Outlook for the latest tech updates.