Some artificial intelligence questions to be answered this year



THE word of the year for 2025, according to Merriam-Webster, was “slop,” referring to the deluge of low-quality content churned out by artificial intelligence (AI).

It’s a fitting reflection of the awkward phase we’ve entered three years after ChatGPT kicked off the global AI boom.

We were promised tools to cure disease and solve climate change. What we mostly got in 2025 were fake bunnies jumping on trampolines, and an Internet that feels a little more spammy every day.

AI now fuels heated debates from the boardroom to the classroom.

Yet for all the hype, and all the money, some of the biggest questions about how this tech revolution will play out remain unanswered. Here are some I’d like clearer answers to in 2026.

What is in the training data?

Does it include thousands of copyrighted creative works? Or an outsize share of material that perpetuates English-language, Eurocentric perspectives?

The answer appears to be “yes” to all of the above. But we can’t know for sure because the companies building these systems refuse to say.

The secrecy is increasingly indefensible as AI systems creep into high-stakes environments like schools, hospitals, hiring tools and government services.

The more decision-making and agency we hand over to machines, the more urgent it becomes to understand what’s going into them.

Instead, companies have treated training data as a trade secret (or a liability, as copyright lawsuits mount). But this battle over transparency will likely come to a head in the new year.

The European Union is set to require companies to share detailed summaries of training data by mid-2027.

Other jurisdictions should follow their lead.

How will we measure AGI?

I don’t expect anyone to credibly declare that we’ve achieved artificial general intelligence in 2026.

But before we argue over whether we have, it would help to collectively agree on what it actually is.

As Google DeepMind researchers wrote in a paper last year: “If you were to ask 100 AI experts to define what they mean by ‘AGI,’ you would likely get 100 related but different definitions.”

Meanwhile, this vague concept has become the North Star for the entire global industry, used to justify hundreds of billions of dollars of investment.

The most widely used definition, from OpenAI’s charter, describes it as “highly autonomous systems that outperform humans at most economically valuable work.”

But even that’s a little bit fuzzy, as Chief Executive Officer Sam Altman admitted over the past year.

The definition is also a moving target, as automation encroaches on more of the economy.

Internally, OpenAI and Microsoft have previously put a financial target on AGI: achieving $100 billion in total profits, at least according to The Information.

But getting consumers to pay for brain-rot apps seems a far cry from a true measure of “intelligence.”

I don’t think AGI is a useful phrase. It foments hype and fear cycles rather than serious discussions about AI’s societal impact — or how to regulate it.

The industry won’t drop the term anytime soon.

But the least it could do is agree on an empirical way to measure it.

Where is the regulation?

It’s no surprise that Big Tech companies don’t want to be burdened by regulation, and that governments don’t want to do anything that risks falling behind in the geopolitical race.

But it’s going to be harder for policymakers to ignore the mounting societal concerns about AI’s impact on everything from young, developing minds to electricity bills.

Outside of Europe, few jurisdictions have made any serious attempts to address these threats.

Lawmakers would be wise to get ahead of the backlash before the harms have already scaled.

We can’t rely on the companies poised to profit from this technology to write all the rules.

What will it take to burst the bubble?

In recent months, more people inside the industry seem to have accepted that we’re in the throes of some kind of bubble.

That doesn’t mean AI won’t be transformative, but the eye-watering valuations for companies that haven’t turned a profit and the seemingly circular investments are beginning to look like red flags.

Still, the euphoria has proven remarkably resilient for three years, showing some jitters here and there but no signs of slowing down.

A fear-of-missing-out sentiment is still going strong.

Something will eventually test that.

Maybe it will be slowing revenue growth once the early adopters are saturated, or the rise of powerful, free open-source models that erode the pricing power of closed systems.

It likely won’t be a single event that derails the global hype train.

But in 2026, I expect more investors to start questioning how they can avoid being the ones still dancing when the music stops, and to conduct more clear-eyed assessments of risk and return.

Where is the money?

Companies will soon have to start proving that there is at least some sustainable path to profitability for all the money being spent on AI.

The chipmakers have already cashed in.

But for the model makers, it’s much murkier.

This will especially be an issue in China, where competition is fierce and frugal consumers have shown a reluctance to spend on software services.

But even in Silicon Valley, where the biggest players have started to post real revenue, the numbers are still dwarfed by the huge amount of money being spent on data centers and scaling.

We’ll probably see a lot more attempts at introducing new revenue streams, such as targeted advertising or TikTok clones, whether consumers want them or not.

But at some point, investors will demand more than just promises of AGI’s future potential.

Will AI take my job?

This is by far the most common question I get when I talk about AI in the real world.

The anxiety is already here.

We’ve already seen investments in AI being used as a cover for layoffs in pockets of the tech sector, and I expect we’ll see plenty more of that in other industries.

Policymakers and business leaders will increasingly have to come up with ways to manage the mass labor-market disruptions on the horizon.

If there’s a silver lining to the year of slop, it’s that there seems to be a hunger for human ideas and creativity that the machines can’t quite capture at scale yet.

I don’t expect 2026 to deliver all the answers.

But the questions we press — about power, accountability, money and meaning — will decide how we let AI reshape our world.

Here’s to staying curious, skeptical and stubbornly human in the new year.
