Microsoft president says no chance of super-intelligent AI soon


FILE PHOTO: Vice Chairman of Microsoft Brad Smith looks on during the 5th Summit of "Christchurch Call", at the Elysee Presidential Palace in Paris, France November 10, 2023. LUDOVIC MARIN/Pool via REUTERS/File Photo

LONDON (Reuters) - The president of tech giant Microsoft said there is no chance of super-intelligent artificial intelligence being created within the next 12 months, and cautioned that the technology could be decades away.

OpenAI cofounder Sam Altman earlier this month was removed as CEO by the company’s board of directors, but was swiftly reinstated after a weekend of outcry from employees and shareholders.

Reuters last week exclusively reported that the ouster came shortly after researchers had contacted the board, warning of a dangerous discovery they feared could have unintended consequences.

The internal project named Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one source told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

However, Microsoft President Brad Smith, speaking to reporters in Britain on Thursday, rejected claims of a dangerous breakthrough.

"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said.

Sources told Reuters that the warning to OpenAI's board was one factor in a longer list of grievances that led to Altman's firing, among them concerns over commercializing advances before assessing their risks.

Asked if such a discovery contributed to Altman's removal, Smith said: "I don't think that is the case at all. I think there obviously was a divergence between the board and others, but it wasn't fundamentally about a concern like that.

"What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control," Smith added.

(Reporting by Martin Coulter; Editing by Sharon Singleton and Mark Porter)
