Fake videos prompt need for law


  • Letters
  • Wednesday, 19 Jun 2019

TECHNOLOGY has advanced so much that one can now produce or alter audio or video content to present something that never actually happened.

With deepfake technology (which combines “deep learning” with “fake”), one can, for example, superimpose someone’s face over another person’s to create a video that supports his or her own agenda. The video is then circulated online, with disastrous consequences for the victim if the purpose is vile in nature, such as the sex video currently doing the rounds on social media in Malaysia.

Deepfake is artificial intelligence (AI) at work, and there is little you can do to prevent it from happening to you, as highly-paid Hollywood actress Scarlett Johansson lamented. The subject of a fake porn video, she told the Washington Post (Dec 31, 2018): “The truth is, there is no difference between someone hacking my account or someone hacking the person standing behind me on line at the grocery store’s account. It just depends on whether or not someone has the desire to target you.

“Obviously, if a person has more resources, they may employ various forces to build a bigger wall around their digital identity. But nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired. There are basically no rules on the Internet because it is an abyss that remains virtually lawless, withstanding US policies which, again, only apply here.”

And the process is not necessarily illegal. The film industry, for example, uses it to superimpose the face of an actor onto a stunt double: you think the actor is performing the stunt, but it is not him.

The technology emerged in December 2017, when an anonymous user of Reddit (a social news website and forum) used machine learning and AI software to swap the faces of porn performers with those of famous actresses.

Deepfakes have many potentially harmful consequences, including for national security. Convincing but false videos could, for example, influence election campaigns or provoke one state into acting against another. A war could easily result.

Deepfakes could pose a global security threat if used by adversaries and strategic competitors against a country, as the United States Director of National Intelligence has testified before the Senate Select Committee on Intelligence. As such, deepfake videos could become powerful means of spreading disinformation and distrust.

Are there laws to deal with deepfake?

In common law jurisdictions like the US, the United Kingdom and, yes, Malaysia too, the victim of a deepfake video may be able to sue its creator in civil proceedings. The claimant must prove that he or she has been represented falsely and in a way that would be embarrassing or offensive to the average person.

If the deepfake is used for commercial purposes, civil action can be brought for misappropriation of a person’s likeness or breach of the right of publicity. For example, the great-grandson of Mahatma Gandhi has taken steps to prevent the misuse of his great-grandfather’s name or image in advertising, such as in the marketing of meat products, lingerie, weapons and alcohol; permission for commercial use is contingent upon payment to the Mahatma Gandhi Foundation. However, this right to protect a person’s image or likeness from unauthorised commercial exploitation is not universally recognised.

A victim may also be able to sue for the intentional infliction of emotional distress, cyberbullying or even sexual harassment.

The sex video saga here could be investigated under the Penal Code for sodomy, for allegedly distributing pornographic materials or for alleged “intentional insult with intent to provoke a breach of the peace”, or under the Communications and Multimedia Act for alleged improper use of network facilities. All of these offences are punishable with hefty jail terms and fines.

But there are serious practical obstacles to overcome. Firstly, it may not be easy to identify the creator of the video, especially for individuals pursuing civil proceedings. The creator’s location (within or outside the country) is another issue.

A victim could also ask the Internet company hosting the content to remove the deepfake, failing which the company may incur “online intermediary liability”. But this route has its own difficulties.

Could action be taken under the Personal Data Protection Act 2010? This Act is meant to be the vanguard of protection for information collected on individuals, but it only guards against the inappropriate use of personal data for commercial purposes. There is no provision that specifically addresses online privacy, nor does the Act apply to personal data processed outside Malaysia.

Secondly, the creator may claim the right to free speech or creative expression. Offences similar to that under the Communications and Multimedia Act (for transmitting offensive material online) have been held unconstitutional in countries such as India and Canada, especially where the deepfake victim is a political or public figure or the content amounts to a satirical cartoon or parody.

It may, of course, be easier to prove the offence if the deepfake is outrageously obscene (as in the present controversial video) or incites actual criminal behaviour.

All this underscores the need to regulate deepfakes. In England, advertisements featuring airbrushed images of famous actresses to promote products have been banned on the ground that they misleadingly portray the results of using those products.

GURDIAL SINGH NIJAR

Petaling Jaya


   
