KUALA LUMPUR: Social media platforms must ensure that artificial intelligence (AI)-generated content does not spread falsehoods, says Communications Minister Datuk Fahmi Fadzil.
He said this is why the government is considering a mandatory “AI-generated” label for such content under the Online Safety Act 2024, which is expected to come into force at the end of the year.
“We have to be alert. But sometimes, being alert is not enough,” he said during Minister’s Question Time in the Dewan Rakyat on Tuesday (July 29).
“Social media platforms should have a responsibility to ensure that such deepfake content isn’t disseminated.
“That is why the Malaysian Communications and Multimedia Commission (MCMC) is looking at the need for social media platforms to put an ‘AI-generated’ or ‘AI-enhanced’ label.”
Fahmi also said the Act, passed by Parliament in December, will include several new regulations from the MCMC.
Datuk Mohd Suhaimi Abdullah (PN-Langkawi) wanted to know how the government plans to address the rise in misleading content and scams brought on by generative AI.
“We even see fake videos of the prime minister, as well as other icons, selling shares on YouTube. We are concerned,” Suhaimi said.
At the same time, Fahmi said, Malaysians themselves must learn to distinguish between real and fake content on social media.
“We must have the ability to do a simple verification (of content), and we must also be careful,” he added.
The Online Safety Act aims, among other things, to address the spread of defamatory content, fraud and threats to public order.
