As there is currently no UK law dedicated specifically to the protection of one's own image, we are likely to see victims of Deepfakes relying on defamation more frequently – but how much protection can the law of defamation actually provide for individuals who fall victim to this potentially malicious A.I.?
Section 5 of the Defamation Act 2013 is particularly relevant here, as it governs the liability of operators of websites on which Deepfake videos are posted. The section provides a defence where the operator can show that it was not the operator itself who posted the Deepfake video on the website, but a user of that site. Although this section appears to protect operators of websites hosting user-generated content rather than the victims themselves, if the original poster cannot be identified, the website operator may be found liable in their place.
However, this leaves victims of Deepfakes whose image is published online with the question of whether they can take legal action in another country. Previously, the CJEU held that action could be taken in each Member State in respect of the damage suffered within that Member State. In Bolagsupplysningen, however, this ability was significantly restricted at the European level. The effect of the ruling reaffirmed that libel actions will most often be brought in the claimant's home state, making it more difficult for victims to seek to protect their image outside their own jurisdiction.
Another difficulty that can be identified in relation to the use of a Deepfake video of one's image is the possible reliance on social media platforms as a form of protection.
In the law of defamation, any person who distributes a defamatory statement may be held liable; this includes not only the author, but also the printer, publisher, wholesaler or retailer. The difficulty arises when it comes to the claimant identifying the originator of the defamatory post, as Deepfakes are most commonly posted anonymously (usually via proxy sites that hide the user's I.P. address). Given the considerably high level of anonymity enjoyed by the posters of Deepfake videos, finding the author or publisher is likely to be a difficult obstacle to overcome. Without being able to identify the originator, a victim whose image has been used and posted online may therefore struggle to bring a claim. It is, however, important to note that in such circumstances the court may be able to grant a Norwich Pharmacal Order. This order requires third parties to disclose documents or information revealing the identity of the wrongdoer. As such, where a victim is granted such an order, they will be in a position to require a third party, such as the website operator, to disclose the wrongdoer's details.
Defamation law can therefore be seen to protect persons against statements – in this case, videos – made with a detrimental effect on an individual. Yet as it stands, defamation is able to provide protection for a victim of Deepfake A.I. only in very limited cases.
To overcome this, the UK must, sooner rather than later, consider enacting a law dedicated solely to the protection of one's image. Until then, the future for these victims will remain uncertain.