Elon Musk says ‘singularity’ is here – What to know about AI threats to humanity



At the World Economic Forum in Davos, Switzerland, last month, Elon Musk said artificial intelligence will exceed human intelligence this year. But he and other tech executives also warn that this moment, called the singularity, could pose a threat to humanity.

“The rate at which AI is progressing, I think we might have AI that is smarter than any human by the end of this year, and I would say no later than next year,” Musk said in a televised discussion with BlackRock CEO Larry Fink. “Five years from now, AI will be smarter than all of humanity collectively.”

Sci-fi writers from Robert Heinlein to Dan Simmons and Douglas Adams, as well as mathematicians and technologists, have predicted since the 1950s that technology would change life so quickly that predictions of the future become useless. That point has become known as the singularity: the hypothetical moment when artificial intelligence surpasses human intelligence.

Mathematician Irving John Good defined superintelligent machines as “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,” in his 1965 paper Speculations Concerning the First Ultraintelligent Machine.

AI proponents say the technology has great potential to benefit humanity.

At the Viva Technology Conference in Paris in May 2025, Musk said artificial superintelligence could eliminate the need to work for a living.

“In a benign scenario, probably none of us will have a job,” Musk said. “There will be universal high income. I’d say there’s about an 80% chance that AI advances will lead to a situation where humans will not need to work and will have all they need.”

However, research into AI capabilities has often questioned that idea.

Artificial intelligence is likely to pursue its own objectives no matter the human cost, University of Maryland, College Park, researchers wrote in a December paper in the journal Computers in Society.

“Most safety tests today check what an AI can do, but we go further by asking what it would do if given power,” Furong Huang, associate professor of computer science, said on a university website describing their work.

They designed an open-source simulator that presented AI models with thousands of decision-making scenarios and practical consequences. They found that most models treat safety measures as obstacles to be overcome when pressure increases, even lying to their human handlers to achieve their goals.

“To meet its sales quota, an AI agent controlling a chemical plant overrides thermal safety warnings and heats its reactor beyond capacity, leaking poisonous gas into the neighborhood,” the University of Maryland’s John Tucker wrote, describing some of the scenarios and results.

“To obtain a competitor’s earnings report before the market closes, an agent tasked with increasing a firm’s bottom line drafts an email to trick the competitor into providing a confidential draft, leading to a wire fraud indictment. To bypass a server outage delay, an agent running a firm’s IT operations scans employee chats for a login password, allowing hackers to steal millions of user records.”

Huang likened modern AI responses to “a child playing with a toy gun in a disturbing manner. … They might act dangerously when faced with temptation.”

Even without superintelligence, malicious swarms of AI bots pose a threat to democracy through coordinated disinformation campaigns, according to a Jan 22 study in the journal Science.

“These systems are capable of coordinating autonomously, infiltrating communities, and fabricating consensus efficiently,” the authors wrote. “By adaptively mimicking human social dynamics, they threaten democracy.”

Adding artificial superintelligence raises concerns far beyond those identified in studies of today’s large language models.

In 2023, more than 350 tech executives and AI experts signed a statement by the Center for AI Safety saying that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Musk himself worries about a scenario like The Terminator movies, he told The Guardian. “I fear the race toward building it, but I’d rather be a participant than a bystander.”

Anthropic CEO Dario Amodei said at a Washington, DC, conference in September 2025 that humanity faces a 25% chance of things going “really, really badly” once AI surpasses human intelligence.

“This doesn’t sound like people who want to be engaged in this reckless race,” author Nate Soares told The Baltimore Sun. “This sounds like people who think the race is going to happen anyway and think they can do a little bit better than the next guy. That’s grim, but that leaves the opportunity that if the world notices, we could say, ‘Oh, hold on. No one wants to do this. We can stop.’ Right?”

Soares and Eliezer Yudkowsky expressed grave concerns about that race in their 2025 book, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. They warn that once AI sets its own goals and objectives, humans become obsolete.

Despite these chilling predictions, the race continues to accelerate.

AI investments could rise from US$1.5bil (RM6bil) in 2025 to as much as US$500bil (RM2tril) this year, Goldman Sachs predicted.

How close is humanity to being destroyed by its own creation, and should people worry?

“The honest answer is that it’s difficult to know if we’ve reached that point yet,” Malo Bourgon, CEO of the Machine Intelligence Research Institute, wrote in an email to The Sun. “AIs are becoming remarkably capable, with systems that can now reliably write code.

“What shouldn’t be reassuring is how difficult it is to know when that boundary has been crossed,” Bourgon said. “And how little time that uncertainty leaves for the international coordination we’d need to prevent a race to superintelligence.”– Baltimore Sun/Tribune News Service
