Microsoft strung together tens of thousands of chips in a pricey supercomputer for OpenAI


When OpenAI or Microsoft trains a large AI model, the work happens all at once: it is divided across all the GPUs, and at certain points the units need to talk to each other to share the work they have done. — Reuters
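
In practice, that synchronization step is typically an "all-reduce" over each worker's gradients. Below is a minimal sketch of the pattern, written against PyTorch's torch.distributed API; the function and variable names (train_step, batch and so on) are hypothetical, and this is not OpenAI's or Microsoft's actual training code, which in real systems is usually hidden behind wrappers such as DistributedDataParallel.

```python
# A minimal sketch of data-parallel training: each GPU works on its own
# shard of the batch, then all GPUs average their gradients. Assumes
# torch.distributed has already been initialized with one process per GPU.
import torch
import torch.distributed as dist

def train_step(model, batch, loss_fn, optimizer):
    # Each GPU computes gradients on its own slice of the data.
    # (batch.inputs / batch.targets are hypothetical field names.)
    loss = loss_fn(model(batch.inputs), batch.targets)
    loss.backward()

    # Synchronization point: every GPU shares the work it has done.
    # An all-reduce sums each gradient across workers; dividing by the
    # worker count turns the sum into an average.
    world_size = dist.get_world_size()
    for param in model.parameters():
        dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
        param.grad /= world_size

    optimizer.step()
    optimizer.zero_grad()
```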

When Microsoft Corp invested US$1bil (RM4.5bil) in OpenAI in 2019, it agreed to build a massive, cutting-edge supercomputer for the artificial intelligence research startup. The only problem: Microsoft didn’t have anything like what OpenAI needed and wasn’t totally sure it could build something that big in its Azure cloud service without it breaking.

OpenAI was trying to train an increasingly large set of artificial intelligence programs called models, which were ingesting greater volumes of data and learning more and more parameters, the variables the AI system has sussed out through training and retraining. That meant OpenAI needed access to powerful cloud computing services for long periods of time.
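
For a sense of what "parameters" means here, the sketch below counts the learned weights of a toy two-layer network in PyTorch. The layer sizes are invented purely for illustration; the models described in the article have orders of magnitude more such values.

```python
# A toy illustration of "parameters": the variables a model learns
# during training. Layer sizes here are made up for illustration.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 4096),  # weight matrix + bias: 1024*4096 + 4096 values
    nn.ReLU(),
    nn.Linear(4096, 1024),  # 4096*1024 + 1024 values
)

# Every entry of every weight tensor is one parameter, adjusted as the
# model trains and retrains on data.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # 8,393,728 for this toy model
```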
