
When OpenAI or Microsoft is training a large AI model, the work happens all at once: it's divided across all the GPUs, and at certain points the units need to talk to each other to share the work they've done. — Reuters
When Microsoft Corp invested US$1bil (RM4.5bil) in OpenAI in 2019, it agreed to build a massive, cutting-edge supercomputer for the artificial intelligence research startup. The only problem: Microsoft didn’t have anything like what OpenAI needed and wasn’t totally sure it could build something that big in its Azure cloud service without it breaking.
OpenAI was trying to train an increasingly large set of artificial intelligence programs called models, which were ingesting greater volumes of data and learning more and more parameters, the variables the AI system had sussed out through training and retraining. That meant OpenAI needed access to powerful cloud computing services for long periods of time.
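The photo caption above describes the core pattern here, data-parallel training: each GPU computes gradients on its own slice of the data, and the units periodically synchronize to share what they've learned. Neither company's actual training code is public, so the following is only a minimal sketch of that pattern, written with PyTorch's torch.distributed using the CPU "gloo" backend so it runs without GPUs (a real cluster would use "nccl" across GPU nodes); the worker function, port number, and tiny model are illustrative assumptions, not anything from the article.

```python
# A minimal, hypothetical sketch of data-parallel training: every worker holds
# an identical copy of the model, computes gradients on its own data shard,
# then all workers all-reduce their gradients so each copy takes the same step.
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank: int, world_size: int) -> None:
    # Rendezvous settings are assumptions for a single-machine demo.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Same seed before building the model, so every worker starts with
    # identical parameters.
    torch.manual_seed(0)
    model = torch.nn.Linear(4, 1)

    # Different seed per worker: each one sees its own shard of the data.
    torch.manual_seed(rank)
    x = torch.randn(8, 4)
    y = torch.randn(8, 1)

    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()

    # The communication step the caption describes: the units "talk to each
    # other" by summing gradients across all workers, then averaging.
    for p in model.parameters():
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
        p.grad /= world_size

    if rank == 0:
        print("synchronized gradient norm:", model.weight.grad.norm().item())
    dist.destroy_process_group()


if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)
```

At the scale the article describes, this same pattern runs across thousands of GPUs, which is why the interconnect between them matters as much as the chips themselves.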