This week, on January 26, British broadcaster ITV is releasing a TV show on its ITVX streaming platform that will feature celebrities living in a community as neighbours, getting into petty disputes and creating some unlikely pop culture interactions. However, don’t be deceived! Despite the stars of the show initially appearing to be some of the biggest celebs around – for example, rapper Stormzy, footballer Harry Kane and actor Tom Holland – they are not real. In fact, the ‘celebrities’ are portrayed by impressionists using ‘deepfake’ technology – a kind of digital mask that uses AI to replace someone’s appearance with the likeness of someone else.
Produced by Tiger Aspect in collaboration with synthetic media company StudioNeural, the six-episode series ‘Deep Fake Neighbour Wars’ is the world’s first long-form narrative show that uses deepfake technology. But this isn’t the first time we’ve seen this rapidly developing tech be deployed in the entertainment and commercial world. In 2021, an AI-powered content creation platform called Deepcake worked with Russian telecom company Megafon to create an ad campaign that featured Bruce Willis – allowing the action hero to roll back the years despite his worsening health condition by using a ‘digital twin’ instead of the actual actor.
Each episode of ‘Deep Fake Neighbour Wars’ will begin with a disclaimer, and judging by a recent interview with the British newspaper The Guardian, the creators aren’t concerned about any legal repercussions of the show using celebrities’ likenesses. However, as with all pioneering ideas, the concept may raise some questions when operating in a space without much existing legal or ethical precedent – especially on a mainstream media platform like ITVX. So what are the global legal precedents surrounding this new technology? And what does it mean for its potential use in the commercial industry – à la Bruce Willis? We spoke with Ron Moscona, partner, and Ryan Meyer, Of Counsel, at international law firm Dorsey & Whitney – experts in intellectual property and technology law – to find out.
Ron is based in London and his practice focuses on his clients’ long-term commercial interests, helping them make the best of their technology, intellectual property and brands. Discussing ITV’s new deepfake show, he says that there are “clearly legal concerns” which warrant extra care from the production team: “This kind of show definitely tests the limits. It would need to make it abundantly clear that the deepfake images are not real and also that the show is not sponsored or approved by the individuals being portrayed.” He continues, “This is usually not a problem if the comedy clearly makes fun of celebrities by way of parody or pastiche. However, the deepfake technology – particularly if it is high quality – clearly increases the risk of people getting the wrong end of the stick.”
A secondary concern he highlights is the second life that this deepfake content could have on social media after the initial broadcast. Clips of the show could be shared without context to a wider audience online, making it more difficult for people to determine whether the footage is real or celebrity-approved. “It would make sense for the production to use the images in a way that minimises the risk of the images being re-used and circulated out of context,” he adds. “Like any comedy show, there are risks of complaints about bad taste, abuse of privacy, or even defamation (libel). But as long as the show makes it very clear that these are not real people or that the real people did not endorse it, and that the idea is to make fun of them, free speech principles should protect the show from liability.”
LBB recently explored the legal POV on AI-generated art with Ryan Meyer – discovering that the law can often take a significant length of time to catch up with new, developing technologies. So how up to date is the law when it comes to deepfakes? Ryan, who specialises in US intellectual property (IP) law, explains that many aspects of the technology are already covered by existing legislation – despite it still being “a relative novelty”. He says, “A person could be liable for using deepfake technology to infringe another entity’s intellectual property rights or a person’s publicity or privacy rights. And the technology can itself be protected by intellectual property rights. Using deepfakes maliciously could also constitute fraud, defamation, identity theft, and other civil and criminal violations.”
However, according to Ryan, there are only a few jurisdictions in the US that have statutes specifically relating to deepfake technology – mainly with regard to pornography and election tampering. Agreeing with his colleague across the pond, he also reiterates the danger of the second life that deepfake footage can have when circulated online – out of the control of its creator, and beyond local legal jurisdiction.
“Thanks to the internet, state and national borders are notoriously permeable to videos and other media, and something that is legal in one jurisdiction might be illegal in another,” he says. “Even if someone creates a deepfake for innocent purposes and with clear disclaimers, once it is released to the world, they can’t control where it goes, how many people see it, or how many of those people will be fooled into believing it’s real.” He continues, “Perhaps more importantly, they can’t control the harm that occurs to the real, original person as a result of the deepfake. The international reach of technology puts the creator at some degree of unknown risk, and it also creates challenges for law enforcement and victims seeking to block malicious deepfakes.”
So how can someone protect their image from deepfakes? Ron shares that, while there is no copyright for a person’s image, a celebrity can control the exploitation of their likeness through other legal concepts. “You can protect the right to commercially exploit your image (your name or visual ‘likeness’) only if you can show that your image is recognisable and has some commercial value or that you are already exploiting it,” he says. “In the UK, you can stop someone exploiting your image commercially without your permission, under ‘the law of passing off’, if you can show that you acquired ‘goodwill’ (commercial value) in your name or likeness and that their exploitation without your permission would ‘deceive’ the public to believe that you ‘authorised’ or ‘approved’ the commercial exploitation.”
Other jurisdictions take it a step further and entitle people to reap all the benefits of value created by their name or image, says Ron, but in the UK, the law still requires a person to have previously exploited their own image commercially to protect its usage or to obtain a registration for a trademark. Alternatively, he says that “privacy and data protection laws can often also be relied on to object to the unauthorised exploitation of a person’s name or photo.”
Bruce Willis is just one example of a celebrity who has ‘licensed’ their own image to another company for commercial purposes – but what does this mean exactly? Ron defines this ‘licensing’ process as an agreement that allows a company to use a celebrity’s likeness, without fear of being challenged for its use. An added bonus is the active sponsorship and media support from the celebrity, which goes beyond the passive use of an image and is often an agreed part of the deal. He says, “If the celebrity tried to distance themselves from the commercial exploitation, that could seriously undermine the commercial value of that exploitation. So the licensing arrangements and the active cooperation of the individual are usually essential for extracting value and credibility.”
While Ryan hasn’t seen the exact terms of Bruce Willis’ agreement with Deepcake, he understands that the actor granted the content platform his ‘digital twin’ rights. Based on his prior knowledge of similar deals, Ryan speculates that this agreement would address specifics of how the deepfake ‘likeness’ could be used, such as depicting Bruce with different clothes or hair, at different ages, or using his voice, catchphrases and other characteristics associated with his public image. This kind of control allows a celebrity to prevent the use of their digital twin in specific contexts – such as prohibiting pornographic use, the re-dubbing of their voice, or being depicted in scenarios or with products that conflict with their personal values.
Even with this degree of control, Ryan warns that – by definition – licensing something means you are giving some of your rights away. “That’s true, even if the license is very narrow,” he says. “Sometimes, parties to a license reasonably interpret the provisions of the license differently, which might result in the licensee exploiting the license in ways that the licensor didn’t expect or intend. This could be particularly problematic for complex works like TV commercials or movies where it might be physically impossible for a busy celebrity to review all of the content featuring their digital double.”
For big names and brands looking to work with deepfake tech in the future, he adds that they should expect precedents to start being set as the technology matures and more legal issues surrounding it enter courts around the world: “There probably will be litigation over licensing disputes, particularly while the technology and these licensing arrangements are new.” And as deepfakes in the entertainment and commercial worlds become more common, he also suggests that this new avenue for celebrity endorsement may become standardised in future entertainment contracts – including “digital twin licences for promotional or merchandising purposes” alongside the main arrangement of the deal.
Nevertheless, as troubling as it might be to have a digital double out there in the world, Ryan expects that many celebrities will soon be following in the footsteps of the Die Hard star – licensing their likenesses for ads and more. “Celebrities will see them as an opportunity to be many places at once,” he says, “doing many jobs at once that the celebrity would not otherwise be able or willing to do.”