NEW YORK: Alphabet Inc's Google released new details about the supercomputers it uses to train its artificial intelligence models, saying the systems are both faster and more power-efficient than comparable systems from Nvidia Corp.

Google has designed its own custom chip called the Tensor Processing Unit (TPU). The company uses those chips for more than 90% of its work on artificial intelligence training, the process of feeding data through models to make them useful for tasks such as responding to queries with human-like text or generating images.

On Tuesday, Google published a scientific paper detailing how it has strung more than 4,000 of the chips together into a supercomputer, using its own custom-developed optical switches to help connect individual machines. — Bloomberg
