
The societal implications of ‘deepfake’ video

24/05/2018

This is a rather lengthy post, but please take the time to read it.

RNZ Morning Report recently ran the segment below:

Tech agencies fear a new type of AI that can paste a person’s face onto another person’s body in a video will be used to blackmail, ridicule and even alter the course of elections. Fake videos dubbed “deepfakes” emerged a few months ago, made by geeks wanting to insert celebrities into porn. But Netsafe and InternetNZ are among those worried deepfakes will eventually be used for far more nefarious purposes.

This ties in with the concerns and issues raised in this recent article in The Atlantic.

The Atlantic piece was one Adam had been thinking about for a while, and the RNZ piece helped to crystallise his thinking.

Franklin Foer writes in The Atlantic:

In a dank corner of the internet, it is possible to find actresses from Game of Thrones or Harry Potter engaged in all manner of sex acts. Or at least to the world the carnal figures look like those actresses, and the faces in the videos are indeed their own. Everything south of the neck, however, belongs to different women. An artificial intelligence has almost seamlessly stitched the familiar visages into pornographic scenes, one face swapped for another. The genre is one of the cruelest, most invasive forms of identity theft invented in the internet era. At the core of the cruelty is the acuity of the technology: A casual observer can’t easily detect the hoax.

The opening of Foer’s article illustrates the points made in the RNZ clip at the start of this post.

This development, which has been the subject of much hand-wringing in the tech press, is the work of a programmer who goes by the nom de hack “deepfakes.” And it is merely a beta version of a much more ambitious project. One of deepfakes’s compatriots told Vice’s Motherboard site in January that he intends to democratize this work. He wants to refine the process, further automating it, which would allow anyone to transpose the disembodied head of a crush or an ex or a co-worker into an extant pornographic clip with just a few simple steps. No technical knowledge would be required. And because academic and commercial labs are developing even more-sophisticated tools for non-pornographic purposes—algorithms that map facial expressions and mimic voices with precision—the sordid fakes will soon acquire even greater verisimilitude.

The evil that could be wrought by this is all too easily imagined, especially if one takes into account the way facial recognition software is now being used; the potential for abuse by determined hackers is probably limitless.
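For the technically curious, it is worth seeing just how little machinery the face swap requires. The early hobbyist tools are widely reported to rest on a simple autoencoder trick: one shared encoder learns to compress faces of two people into the same latent space, each person gets their own decoder, and feeding person A’s encoded face through person B’s decoder produces the swap. Below is a minimal, illustrative sketch of that idea in PyTorch. It is not the actual deepfakes code; the network sizes are placeholders, and the training, face-detection and alignment steps are omitted, purely to make the structure clear.

```python
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Reconstructs a 64x64 face from the shared latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)


# One encoder is shared between both identities; each identity gets its own
# decoder. Training reconstructs person A's faces through decoder_a and
# person B's faces through decoder_b, so the encoder learns identity-neutral
# features such as pose, expression and lighting.
encoder = Encoder()
decoder_a = Decoder()
decoder_b = Decoder()


def swap_a_to_b(faces_of_a):
    """The 'swap': encode a face of person A, decode it as person B."""
    with torch.no_grad():
        return decoder_b(encoder(faces_of_a))


# Random tensors stand in for a batch of aligned face crops.
dummy_faces = torch.rand(4, 3, 64, 64)
swapped = swap_a_to_b(dummy_faces)
print(swapped.shape)  # torch.Size([4, 3, 64, 64])
```

In a real pipeline the faces are first detected and aligned frame by frame, the models are trained for hours on thousands of crops of each person, and the swapped faces are blended back into the video; none of that changes the basic point that the core trick fits on a single page of code.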

As Foer observes:

Vladimir Nabokov once wrote that reality is one of the few words that means nothing without quotation marks. He was sardonically making a basic point about relative perceptions: When you and I look at the same object, how do you really know that we see the same thing? Still, institutions (media, government, academia) have helped people coalesce around a consensus—rooted in a faith in reason and empiricism—about how to describe the world, albeit a fragile consensus that has been unraveling in recent years.

Yet, as Foer writes:

Social media have helped bring on a new era, enabling individuated encounters with the news that confirm biases and sieve out contravening facts.

Then he makes a salient point.

The current president has further hastened the arrival of a world beyond truth, providing the imprimatur of the highest office to falsehood and conspiracy.

Trump’s relentless attacks on the media and others, aided and abetted by people such as Sean Hannity and Kellyanne Conway, have enabled this, coupled with the faux outrage pursued over nonsense issues by many activists and others, especially of the so-called progressive faction.

Foer continues:

But soon this may seem an age of innocence. We’ll shortly live in a world where our eyes routinely deceive us. Put differently, we’re not so far from the collapse of reality. We cling to reality today, crave it even. We still very much live in Abraham Zapruder’s world. That is, we venerate the sort of raw footage exemplified by the 8 mm home movie of John F. Kennedy’s assassination that the Dallas clothier captured by happenstance. Unedited video has acquired an outsize authority in our culture. That’s because the public has developed a blinding, irrational cynicism toward reporting and other material that the media have handled and processed—an overreaction to a century of advertising, propaganda, and hyperbolic TV news. The essayist David Shields calls our voraciousness for the unvarnished “reality hunger.”

As Foer notes:

Scandalous behavior stirs mass outrage most reliably when it is “caught on tape.”

For example:

Donald Trump, improbably, recovered from the Access Hollywood tape, in which he bragged about sexually assaulting women, but that tape aroused the public’s passions and conscience like nothing else in the 2016 presidential race.

Then:

Video has likewise provided the proximate trigger for many other recent social conflagrations.

Foer gives a number of examples in the article, and that takes us to the nub of the problem. As he notes:

It’s natural to trust one’s own senses, to believe what one sees—a hardwired tendency that the coming age of manipulated video will exploit. Consider recent flash points in what the University of Michigan’s Aviv Ovadya calls the “infopocalypse”—and imagine just how much worse they would have been with manipulated video. Take Pizzagate, and then add concocted footage of John Podesta leering at a child, or worse. Falsehoods will suddenly acquire a whole new, explosive emotional intensity.

Problematic and hugely consequential as that is, Foer then gets to the heart of the matter:

But the problem isn’t just the proliferation of falsehoods. Fabricated videos will create new and understandable suspicions about everything we watch. Politicians and publicists will exploit those doubts. When captured in a moment of wrongdoing, a culprit will simply declare the visual evidence a malicious concoction.

Foer offers this evidence:

The president, reportedly, has already pioneered this tactic: Even though he initially conceded the authenticity of the Access Hollywood video, he now privately casts doubt on whether the voice on the tape is his own.

Soon we may not be able to believe the evidence of what we see or hear. Indeed, we may already have reached that point, given how many people continue to believe untruths even after those untruths have been repeatedly exposed. Given how often Trump’s lies, and those of others, have been debunked, one might conclude that many prefer to credit lies over reality. Trump might well say ‘Mission Accomplished’. It seems that Trump is a disciple of Joseph Goebbels and the concept of the ‘Big Lie’, as captured in this passage commonly attributed to him:

“If you tell a lie big enough and keep repeating it, people will eventually come to believe it. The lie can be maintained only for such time as the State can shield the people from the political, economic and/or military consequences of the lie. It thus becomes vitally important for the State to use all of its powers to repress dissent, for the truth is the mortal enemy of the lie, and thus by extension, the truth is the greatest enemy of the State.”

For example, as this video illustrates:-

Trump’s Fake News strategy is part of that. Indeed, his recent success in pressuring Rod Rosenstein over ‘Spygate’ suggests he has proceeded to follow through on Goebbels’ prescription:-

This goes some way to supporting Foer’s argument:-

In other words, manipulated video will ultimately destroy faith in our strongest remaining tether to the idea of common reality. As Ian Goodfellow, a scientist at Google, told MIT Technology Review, “It’s been a little bit of a fluke, historically, that we’re able to rely on videos as evidence that something really happened.”

Truth has been an historical fluke. Welcome to the modern Salem.

Then a chilling statement:

The collapse of reality isn’t an unintended consequence of artificial intelligence. It’s long been an objective—or at least a dalliance—of some of technology’s most storied architects. In many ways, Silicon Valley’s narrative begins in the early 1960s with the International Foundation for Advanced Study, not far from the legendary engineering labs clumped around Stanford. The foundation specialized in experiments with LSD. Some of the techies working in the neighborhood couldn’t resist taking a mind-bending trip themselves, undoubtedly in the name of science. These developers wanted to create machines that could transform consciousness in much the same way that drugs did. Computers would also rip a hole in reality, leading humanity away from the quotidian, gray-flannel banality of Leave It to Beaver America and toward a far groovier, more holistic state of mind. Steve Jobs described LSD as “one of the two or three most important” experiences of his life.

We are witnessing just how prescient George Orwell was. His nightmare is now our reality. Trump could well be Big Brother and Sean Hannity the Minister of Truth.

Given the recent behaviours of Facebook and others, this proposition is not without validity. The FAANGs are the enablers of this nightmare, willing or not.

But Foer then considers the implications of Virtual Reality:-

Fake-but-realistic video clips are not the end point of the flight from reality that technologists would have us take. The apotheosis of this vision is virtual reality. VR’s fundamental purpose is to create a comprehensive illusion of being in another place. With its goggles and gloves, it sets out to trick our senses and subvert our perceptions.

The mind-bending nature of VR adds even more danger:

But if the hype around VR eventually pans out, then, like the personal computer or social media, it will grow into a massive industry, intent on addicting consumers for the sake of its own profit, and possibly dominated by just one or two exceptionally powerful companies. (Facebook’s investment in VR, with its purchase of the start-up Oculus, is hardly reassuring.) The ability to manipulate consumers will grow because VR definitionally creates confusion about what is real.

This is really concerning, and Foer makes this point later on:

Researchers in Germany who have attempted to codify ethics for VR have warned that its “comprehensive character” introduces “opportunities for new and especially powerful forms of both mental and behavioral manipulation, especially when commercial, political, religious, or governmental interests are behind the creation and maintenance of the virtual worlds.”

Somehow, Adam does not see ethics being prominent in this environment. The comment below resonated:-

As the VR pioneer Jaron Lanier writes in his recently published memoir, “Never has a medium been so potent for beauty and so vulnerable to creepiness. Virtual reality will test us. It will amplify our character more than other media ever have.”

Both an opportunity and a threat. Adam welcomes change, including disruptive change, and is no Luddite, but frankly these changes worry him: they offer huge opportunities, but at the same time huge threats. As Foer writes:

Perhaps society will find ways to cope with these changes. Maybe we’ll learn the skepticism required to navigate them.

We can but hope, however:-

Thus far, however, human beings have displayed a near-infinite susceptibility to getting duped and conned—falling easily into worlds congenial to their own beliefs or self-image, regardless of how eccentric or flat-out wrong those beliefs may be.

Adam subscribes wholeheartedly to that comment. Then Foer articulates a key point:-

Governments have been slow to respond to the social challenges that new technologies create, and might rather avoid this one. The question of deciding what constitutes reality isn’t just epistemological; it is political and would involve declaring certain deeply held beliefs specious.

He suggests:-

Few individuals will have the time or perhaps the capacity to sort elaborate fabulation from truth. Our best hope may be outsourcing the problem, restoring cultural authority to trusted validators with training and knowledge: newspapers, universities.

Therein lies a key problem. Many of these validators are no longer to be trusted. Many universities are no longer bastions of free thought, but validators of intolerance and bias. Many newspapers simply regurgitate clickbait.

Foer makes, Adam considers, a major error here.

Then:-

Perhaps big technology companies will understand this crisis and assume this role, too. Since they control the most-important access points to news and information, they could most easily squash manipulated videos, for instance. But to play this role, they would have to accept certain responsibilities that they have so far largely resisted.

A nice thought, but to date their behaviour is not encouraging. For example, see this Atlantic podcast on Facebook’s inability to tell the truth.

Perhaps it would be better to place faith in the people, as in the words famously attributed to Abraham Lincoln:-

You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.

That is Adam’s sincere hope.

 
