Picture a world where sophisticated machine-learning algorithms generate hyper-realistic video footage of you doing things you’ve never done and saying things you’ve never said.
If that sounds like a nightmare, I’ve got bad news for you. That world is increasingly our world. Those videos, called “deepfakes,” are already being created, often for unsavory purposes.
(A widely circulated example is a deepfake video of former president Obama delivering a speech he never gave.)
Anyone can download the software needed for DIY deepfake videos. And even though deepfake technology hasn’t yet been perfected, it’s getting better every day.
This has obvious implications for national security. Indeed, congressmen have already expressed concerns about the use of deepfakes as weapons of international intrigue.
What about insurance – could deepfakes be the next frontier of risk? It’s not hard to imagine some scenarios:
- Cyber: A deepfaked audio recording of a CFO directs a company’s billing department to route thousands of dollars to a fake bank account.
- Directors and officers: A deepfake video is created of a large corporation’s CEO reporting (fabricated) negative financial results, leading to a significant drop in the company’s stock.
- Employment-practices liability: A deepfake video portrays an employee making disparaging remarks about coworkers and engaging in harassment.
- General liability: Someone creates a deepfake video of a person slipping in a grocery store and “injuring” herself, leading to allegations of negligence.
I’m sure you could think of a hundred more like these. How will we adapt to a world where video and audio can’t be trusted to tell the truth? What if deepfakes become too sophisticated to detect – how will this impact insurance claims and fraud prevention?
If the worst comes to pass, deepfakes could soon become an insurance nightmare.
But hopefully, the best-case scenario happens instead: detection technology keeps pace with deepfake advancements – and keeps everyone honest.