Deepfake technology could cause widespread social unrest and even trigger wars

RASHMEE ROSHAN LALL August 1, 2018

Chinese Premier Li Keqiang with British Foreign Secretary Jeremy Hunt, who called his Chinese wife Japanese. Ng Han Guan / Getty

True or false? In Washington, the president of the United States and the prime minister of Italy jointly express scepticism about non-European migration to the West and decide to set up an action taskforce on the issue.

Four million Bengali-speaking Muslim residents of the northeastern Indian state of Assam are asked to prove their right to live in the country or face deportation.

In China, the visiting British foreign secretary describes his Chinese wife as “Japanese”.

And an Asian country declares it will lob nuclear missiles at its richer neighbour.

Just two of those statements are true; the first and last are not. Although Donald Trump and Italy’s Giuseppe Conte did meet at the White House on Monday and discussed the challenges posed by illegal immigration, they didn’t publicly call for a taskforce on the non-European influx. But soon enough, there could be video and audio of Mr Trump and Mr Conte saying exactly that.

It’s all because of deepfake technology, which can use artificial intelligence to create video and audio of real people doing and saying things they never did or said.

With machines using a technique called deep learning to accurately copy a person’s voice, speech patterns and facial expressions, deepfake video and audio could become powerful tools of public disinformation. They could trigger social unrest, political controversy and international tensions, and could even lead to war.

They will make it difficult to separate truth from lies, or what really happened from what merely appeared to happen. Given that the technology to distinguish genuine content from fake is at least a year away, the potential for societal dissonance is huge.

That’s what the experts gloomily predict. This month, American law professors Bobby Chesney and Danielle Citron delivered a paper at a Washington think tank, warning of the looming dangers of “robust” and “persuasive” deepfakes in the hands of unscrupulous actors.

As the technology diffuses and democratises, they said, deepfakes can create so-called information cascades: a cycle of sharing and forwarding in which content acquires unstoppable force and momentum. “One need only imagine a fake video depicting an American soldier murdering an innocent civilian in an Afghan village,” the professors said.

But deepfakes can have ramifications in private life too: in the workplace, in matters of the heart, in family relationships and in small businesses. Armed with a deepfake, rivals can deal each other knockout blows, imperilling reputation, good standing and probity.

The implications for law enforcement are dire. When Lyrebird, a Montreal-based AI start-up, revealed its voice-imitation algorithm last year, it acknowledged that “voice recordings are currently considered as strong pieces of evidence in our societies and in particular in jurisdictions of many countries”.

The implications could be profound when technology allows the construction of an alternate reality, backed up by images, video and audio that purport to be true.

It could not only change how we respond to the present and prepare for the future, but also create false memories of the past, according to the American academic Elizabeth Loftus, who did pioneering research into false-memory formation. Deepfakes, she says, could be hard for people to get out of their minds.

Until there is a technological fix, such as trusted digital content carrying foolproof internal certification, there are few real legal or regulatory remedies. Suing the creators of deepfakes and the platforms that carry them, or pursuing prosecution and penalties, cannot really be a solution, not least because creators are often anonymous or beyond jurisdiction, and content spreads faster than courts can act.

So what happens when audio and video can be manufactured at will and can no longer be trusted without question? It is probably the point at which technology forces society to hit the reset button. Face-to-face interactions, live attendance at political rallies and the evidence of one’s own eyes and ears will count; video or audio circulating on news media or social platforms will not.

That will be a blessing, not a curse.

The only way to counteract the threat of deepfakes is for the media, other organisations and private individuals to accept only the evidence of their own direct experience or of authoritative, proven sources. On one level, we will have to return to relying on a small number of trusted sources that can vouch for having seen or heard something themselves.

This recalibration of our attitude to what we see and hear will be necessary because deepfakes go far beyond the mere distortion of reality. The situation will be vastly different from the great hoaxes of history, such as the 1835 New York Sun series about the discovery of life on the moon.

Nor are deepfakes quite like the half-truths or outright falsehoods peddled by tabloids and gossip websites; they are something far more insidious.

In the age of deepfakes, all we can really do is remake our attitude to digitised portrayals of any sort. If we manage that, deepfakes will have unwittingly served an excellent purpose: restoring trust in our networked world.