Artificially intelligent: The evolving threat of deepfakes

I was interviewed by The Star newspaper regarding responsibilities surrounding the use of AI-generated content. I said:

What the future holds

From the perspective of intellectual property and information technology lawyer Foong Cheng Leong, platforms need to take more responsibility for what is posted by users, including stricter enforcement of the overt labelling of content that is AI-generated.

He says this labelling should apply whether it is done by the users uploading content to the platforms, by the platform's automated detection systems, or by the service provider of the tools generating the AI content; and whether it is applied expressly, such as through a watermark, or embedded within the file itself.

Foong adds that post filters should also be implemented to disallow works that could cause harm, in the same way that commonly available AI tools, for example, disallow the generation of pornographic material.

I also added that platforms should be held accountable when incidents or damage occur because AI videos or deepfakes were allowed on their service, just as they are traditionally liable for defamatory content, among other things.

In respect of fraud caused by AI-generated content, there is currently no known recourse against platform providers; the claim lies only against those who defrauded the victim. I take the view that victims of fraud should obtain compensation only from those who defrauded them.

If victims were allowed to claim compensation from intermediaries such as platform providers or AI generators, potential victims might become complacent and less vigilant.

Nevertheless, there should be laws making platform providers liable for certain offences, especially where the fraud has been widely circulated and reported to them, and they do nothing about it.
