Google unveiled its eighth-generation Tensor Processing Unit (TPU) artificial intelligence (AI) chip while reaffirming its partnership with Nvidia.
Mark Lohmeyer, Google Cloud vice president (VP) for AI and compute infrastructure, said at a Google Cloud Next press briefing at the Mandalay Bay Convention Center in Las Vegas on the 23rd, local time, that "we love Nvidia, and Nvidia loves us," adding, "many of our customers use Nvidia's graphics processing units (GPUs), and we work closely with them."
He also noted, "the core of the Google Cloud strategy is customer choice and openness, so we are working ever more closely with Nvidia," adding, "we plan to introduce Nvidia's new AI accelerator 'Vera Rubin' on Google Cloud by the end of this year."
At the same time, he noted that Thinking Machines Lab, a startup founded by former OpenAI chief technology officer (CTO) Mira Murati, uses Nvidia GPUs through Google Cloud.
Still, Google stressed the competitiveness of its in-house chips. Although dependence on Nvidia GPUs remains high in on-premise environments running on customers' internal servers, Google said it is ahead of its rivals in cloud AI silicon: it is already operating eighth-generation TPUs, while Amazon Web Services (AWS) and Microsoft (MS) Azure remain at only the first- to third-generation level with their in-house chips and are therefore lagging technologically.
Google's emphasis on cooperation with Nvidia, even as it highlights its own TPU technology, is seen as a response to continued strong demand for GPUs.
Nvidia also stressed the partnership. On the 22nd, the first day of Google Cloud Next, Nvidia said on its official blog that "Nvidia and Google Cloud have collaborated for more than a decade" and that "we have jointly developed an AI platform that spans every layer of the technology stack."