By Maria Sassian, Triple-I consultant
Videos and voice recordings manipulated with previously unheard-of sophistication – known as "deepfakes" – have proliferated and pose a growing threat to individuals, businesses, and national security, as Triple-I warned back in 2018.
Deepfake creators use machine-learning technology to manipulate existing images or recordings to make people appear to do and say things they never did. Deepfakes have the potential to disrupt elections and threaten foreign relations. Already, a suspected deepfake may have influenced an attempted coup in Gabon and a failed effort to discredit Malaysia's economic affairs minister, according to the Brookings Institution.
Most deepfakes today are used to degrade, harass, and intimidate women. A recent study determined that up to 95 percent of the thousands of deepfakes on the internet were pornographic and up to 90 percent of those involved nonconsensual use of women’s images.
Businesses also can be harmed by deepfakes. In 2019, an executive at a U.K. energy company was tricked into transferring $243,000 to a secret account by a phone caller who sounded like his boss but is suspected to have been thieves armed with deepfake software.
“The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent,” said a spokesperson for Euler Hermes SA, the unnamed energy company’s insurer. Security firm Symantec said it is aware of several similar cases of CEO voice spoofing, which cost the victims millions of dollars.
A plausible – but still hypothetical – scenario involves manipulating video of executives to embarrass them or misrepresent market-moving news.
Insurance coverage still a question
Cyber insurance or crime insurance might provide some coverage for damage due to deepfakes, but it depends on whether and how those policies are triggered, according to Insurance Business. While cyber insurance policies might include coverage for financial loss from reputational harm due to a breach, most policies require network penetration or a cyberattack before they will pay a claim. Such a breach isn't typically present in a deepfake.
The theft of funds by using deepfakes to impersonate a company executive (as happened to the U.K. energy company) would likely be covered by a crime insurance policy.
Little legal recourse
Victims of deepfakes currently have little legal recourse. Kevin Carroll, security expert and a partner at Wiggin and Dana, a Washington, D.C., law firm, said in an email: “The key to quickly proving that an image or especially an audio or video clip is a deepfake is having access to supercomputer time. So, you could try to legally prohibit deepfakes, but it would be very hard for an ordinary private litigant (as opposed to the U.S. government) to promptly pursue a successful court action against the maker of a deepfake, unless they could afford to rent that kind of computer horsepower and obtain expert witness testimony.”
An exception might be wealthy celebrities, Carroll said, who could use existing defamation and intellectual property laws to combat, for example, deepfake pornography that uses their images commercially without their authorization.
A law banning deepfakes outright would run into First Amendment issues, Carroll said, because not all of them are created for nefarious purposes. Political parodies created by using deepfakes, for example, are First Amendment-protected speech.
It will be hard for private companies to protect themselves from the most sophisticated deepfakes, Carroll said, because “the really good ones will likely be generated by adversary state actors, who are difficult (although not impossible) to sue and recover from.”
Existing defamation and intellectual property laws are probably the best remedies, Carroll said.
Potential for insurance fraud
Insurers need to become better prepared to prevent and mitigate deepfake-enabled fraud, as the industry relies heavily on customers submitting photos and video in self-service claims. Only 39 percent of insurers said they are either taking or planning steps to mitigate the risk of deepfakes, according to a survey by Attestiv.
Business owners and risk managers are advised to read and understand their policies and meet with their insurer, agent or broker to review the terms of their coverage.