Microsoft president says no chance of super-intelligent AI soon


LONDON (Reuters) - The president of tech giant Microsoft said there is no chance of super-intelligent artificial intelligence being created within the next 12 months, and cautioned that the technology could be decades away.

OpenAI co-founder Sam Altman was removed as CEO by the company's board of directors earlier this month, but was swiftly reinstated after a weekend of outcry from employees and shareholders.

Reuters last week exclusively reported that the ouster came shortly after researchers had contacted the board, warning of a dangerous discovery they feared could have unintended consequences.

The internal project named Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one source told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

However, Microsoft President Brad Smith, speaking to reporters in Britain on Thursday, rejected claims of a dangerous breakthrough.

"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said.

Sources told Reuters that the warning to OpenAI's board was one factor among a longer list of grievances that led to Altman's firing, as well as concerns over commercializing advances before assessing their risks.

Asked if such a discovery contributed to Altman's removal, Smith said: "I don't think that is the case at all. I think there obviously was a divergence between the board and others, but it wasn't fundamentally about a concern like that.

“What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control,” Smith added.

(Reporting by Martin Coulter; Editing by Sharon Singleton and Mark Porter)
