
Google has released details about one of its artificial intelligence supercomputers, which it claims is faster and more efficient than competing Nvidia systems.
Despite Nvidia’s dominance of the market for AI model training and deployment, where it holds over 90% share, Google has been designing and deploying its own AI chips, called Tensor Processing Units (TPUs), since 2016.
However, the company has faced criticism for lagging behind in commercializing its AI inventions, and has been racing internally to release products to prove it has not squandered its lead, CNBC reported.
The new system, which consists of over 4,000 TPUs linked by custom components, has been operational since 2020 and was used to train Google’s PaLM model over a period of 50 days.
This development comes as the demand for power-hungry machine learning models continues to grow within the tech industry.
Written by staff