
Google Announces Its Next-Gen Cloud TPU v5p AI Accelerator Chips & AI Hypercomputer


Google has announced the company's "most powerful," scalable, and flexible AI accelerator, dubbed the Cloud TPU v5p, along with a new AI Hypercomputer model.

Google Plans on Taking the Reins of the AI Bandwagon Through Its Brand-New Cloud TPU v5p Chip & AI Hypercomputer Solutions
With the AI market progressing rapidly, companies are building their own solutions to supply the computing power that ongoing developments demand. Firms like Microsoft, with its Maia 100 AI accelerator, and Amazon, with its Trainium2, aim to outdo each other in performance-optimized hardware for AI workloads, and Google has now joined the list.
Google has unveiled several exciting elements for the AI industry, such as its new Gemini model, but our coverage will focus on the hardware side of things. The Cloud TPU v5p is Google's most capable and cost-effective TPU (Cloud Tensor Processing Unit) to date. Each TPU v5p pod consists of a whopping 8,960 chips, interconnected using Google's highest-bandwidth inter-chip interconnect at 4,800 Gbps per chip, ensuring rapid transfer speeds and optimal performance. Google doesn't look to be holding back, as the generational-leap figures show.
Compared to the TPU v4, the newly released v5p delivers twice the FLOPS (floating-point operations per second) and three times the high-bandwidth memory, which is remarkable in the domain of artificial intelligence.
Moreover, when it comes to model training, the TPU v5p shows a 2.8x generational jump in LLM training speed. Google has also left room to squeeze out more computing power, as the TPU v5p is "4X more scalable than TPU v4 in terms of total available FLOPs per pod".
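As a rough sanity check, the figures quoted above can be combined in a few lines of Python. Note the aggregate-bandwidth number is a naive per-chip × chip-count product for illustration only; it ignores the pod's actual interconnect topology and link sharing:

```python
# Back-of-the-envelope arithmetic using only the figures quoted in the article.
chips_per_pod = 8960       # TPU v5p chips in one pod
per_chip_bw_gbps = 4800    # inter-chip interconnect bandwidth per chip, in Gbps

# Naive aggregate interconnect bandwidth for one pod (illustration only;
# real pods share links across a torus topology, so this overstates throughput).
aggregate_bw_tbps = chips_per_pod * per_chip_bw_gbps / 1000
print(f"Naive aggregate pod bandwidth: {aggregate_bw_tbps:,.0f} Tbps")

# Generational multipliers quoted versus TPU v4.
flops_gain = 2.0              # 2x FLOPS per chip
hbm_gain = 3.0                # 3x high-bandwidth memory
llm_training_gain = 2.8       # 2.8x faster LLM training
pod_flops_scalability = 4.0   # 4x total available FLOPs per pod
```

Running this prints a naive aggregate of 43,008 Tbps per pod, which conveys the scale Google is claiming even if the real effective bandwidth differs.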
