Don't get too high on AI


It is important for employers to really understand data management, data ownership and data handling to combat Shadow AI.

THE RUSH to adopt AI in the workplace often begins innocently enough: to speed up certain work processes, to free employees from routine tasks, and to quickly collate and clarify information.

But somewhere in between all these processes that have been handed off to AI, something else is quietly slipping through the cracks.

Data – in the case of the workplace, company data – is being exchanged, often with an entity outside the organisation, sometimes without employees or employers fully realising it.

Experts and stakeholders are now sounding the alarm, urging companies to pay closer attention to personal data protection laws and the need for a comprehensive internal policy governing AI usage in the workplace.

While Putrajaya is actively developing national-level policies on the use of such tools – including amendments to the Personal Data Protection Act (PDPA) – organisations cannot rely on government regulation alone, says Mohd Redzuan Affandi Abdul Rahim, director of SME Digitalisation at the Malaysian Digital Economy Corporation (MDEC).

“At the firm level, (there is) the importance of organisations having clarity in terms of AI policy or digital technology policy within the company.

“It goes back to things like data sharing. It goes back to things like, for example, when it comes to cybersecurity, who has access to what?

“We have the Personal Data Protection Act, which I think many employers need to really understand, especially the part on data management, data ownership and data handling,” he says.

Shadow AI

This understanding is crucial in combating one of the biggest risks facing organisations today: “shadow AI” – the unsanctioned and often untracked use of AI tools by employees outside company systems.

As AI tools become more accessible than ever, companies may not be aware of their employees’ usage of AI and what kind of company information is being shared with such tools, says Malaysian Employers Federation (MEF) council member Amiruddin Abdul Shukor.

“Because for staff members, when they are given tight deadlines, the fastest way now is to go to those AI platforms that can give them an immediate response that the bosses want to hear.

“And at the same time, your company’s information flows out into the ocean and this will be used by anyone else who are using or sharing a similar platform, because AI learns from everyone.

“Whatever you feed them, they use that,” he says.

Sharon Goh, founder of business management consultancy Syuen Zens Resources, describes the issue as “tremendously serious”.

She warns that since the launch of ChatGPT in 2022, shadow AI has been seeping into organisations “slowly and quietly”.

“You think you want to save costs, but your stinginess is the one that causes you to get into trouble.

“Why? Because you are not buying the AI tools for (your employees). It is cheap now, they can subscribe themselves without even you knowing. You also do not know what other information they’re feeding in,” she says.

The legal and financial consequences have also become more serious.

With the recent PDPA amendments increasing penalties and prison terms for data breaches, the liability ultimately rests with the employer, not the individual employee, Goh says.

“You now have to be the firewall of the company to actually safe-check everything your team is feeding into these large language models (LLMs).

“If you breach that thing, you’re going to get caught. I’m so sorry, the boss has to go to jail or pay the penalty. Not the staff,” Goh warns.

The magic number

Prof Ir Dr Chan Chee Seng from the AI Department of Universiti Malaya’s Faculty of Computer Science and Information Technology vividly demonstrated how AI models ingest and reproduce data with an interesting experiment.

In a room of about 80 people, he asked everyone to type in a prompt to their preferred AI tool: “Give me a random number between one and 10.”

“How many people got a number that is not seven?” he asked the room afterwards.

Only one person raised their hand.

He then repeated the experiment, this time changing the range to between one and 50.

“Anyone who doesn’t get the number 27, you let me know.”

Fewer than 10 people raised their hand.

“I’m a fortune-teller,” he jokes, before explaining that AI has a tendency to generate information based on the collective data it has been fed.

“Why seven? The rainbow has seven colours. Our favourite movie, James Bond, has the codename 007. For those who watch football, Cristiano Ronaldo’s jersey number is also seven. The stories that we tell our kids, Snow White and the Seven Dwarfs. And finally, those who go to the casino, how do you strike a jackpot? Seven, seven, seven,” he explains.

This simple experiment, he argues, reveals a hidden danger.

“If you do not really control how these things work, and you’re blindly using them, you can see how dangerously this whole system can be manipulated.”

The lesson from the experiment is also clear: a company’s rush towards AI efficiency without any internal guidelines could very well result in the reckless surrender of one of its most vital assets – data.

 
