When Orson Welles adapted The War of the Worlds for a CBS radio drama in 1938, it caused widespread panic.

There were reports of suicides, car crashes, mass hysteria and stampedes.

Police rushed to the radio station and demanded the show be pulled off air.

As one of the actors, Stefan Schnabel, recalled:

“A few policemen trickled in, then a few more. Soon, the room was full of policemen and a massive struggle was going on between the police, page boys, and CBS executives, who were trying to prevent the cops from busting in and stopping the show. It was a show to witness.”

The next day, Welles held a press conference about the broadcast, and his statements were printed in newspapers nationwide.

Orson Welles at his press conference the day after the broadcast. Source: public domain

He said he anticipated “nothing unusual” from the broadcast. And when asked if he should have “toned down” the drama, he stated: “No, you don’t play murder in soft words.”

This incident has gone down in history and been studied by academics ever since. The current consensus is that the panic was exaggerated. But the police reports and media hysteria were very real at the time.

Still, something like that could never happen today. People just aren’t that gullible or naïve any more.

Or are they?

The camera never lies, until it does

The thing is, we are just as easy to fool. We will just get fooled in different ways.

People today would never be sent into blind panic by a radio drama. You’d simply take a look at your TV or phone to see if the “drama” was real.

We’re also a lot more sceptical today. We grew up with motion pictures and impossible-to-believe special effects. If you flicked over to a T. rex rampaging through Jurassic Park, you’d know, or at least assume, it was a film and not real.

But what if you turned on the news to a video of Donald Trump beating a man to death with a golf club, caught in glorious high definition on the iPhone of a fellow golfer?

Again, you’d check your phone and laptop to see if it was real. But what if all the news sources you went to said the same thing and showed the same video?

And what if a number of other videos from horrified onlookers emerged, showing exactly the same incident from different angles?

It would be pretty hard to doubt then.

This is the reality we now live in, thanks to the release of deepfakes.

And it’s not just beatings that can be faked. It’s anything. Anyone with an average PC and access to the internet could create a realistic video of Theresa May saying we’d declared war on North Korea.

In fact, you could easily make anyone say or do basically anything. And not in a weird-looking CGI way. No. In an actual real video.

Here’s how deepfakes works, and why it is a very big deal.

The story of deepfakes

Like most new video technology – VHS, DVD, Blu-ray, high definition, virtual reality – deepfakes was pioneered by porn.

The software was created by a Reddit user called “deepfakes”. They used it to swap the face of Wonder Woman’s Gal Gadot on to the face of a porn actor. And later to swap a whole range of celebrities’ faces on to porn actors.

Face swapping is nothing new. But the way deepfakes goes about it is. They used machine learning to do all the heavy lifting. And they created a program that made it easy for anyone to do the same.

The program simply needs to be fed images or videos of the two people you want to face swap and then it learns how to do it. No green screen or motion tracking required. No expert computer knowledge. Just a load of images or videos.
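Under the hood – going by public descriptions of the original software, not its actual code – the trick is an autoencoder: one shared encoder learns a compact “face code” from both people’s photos, and each person gets their own decoder that turns that code back into their face. Here’s a minimal PyTorch sketch of the idea; the layer sizes, 64×64 face crops and training details are illustrative assumptions, not the real program:

```python
# A minimal sketch of the shared-encoder / two-decoder idea behind deepfakes.
# The architecture and hyperparameters are illustrative, not the original code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # shared latent "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()    # ONE encoder shared by both people
decoder_a = Decoder()  # decoder that reconstructs person A
decoder_b = Decoder()  # decoder that reconstructs person B
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

def train_step(faces_a, faces_b):
    """One step: each decoder learns to rebuild its own person's faces."""
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()
    return loss.item()

def swap_a_to_b(face_a):
    """The swap: encode a frame of person A, decode it as person B."""
    with torch.no_grad():
        return decoder_b(encoder(face_a))
```

The key design choice is the shared encoder: because both faces pass through the same bottleneck, the latent code ends up capturing expression and pose, while each decoder supplies the identity.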

And thanks to the internet, hundreds, if not thousands, of images of almost anyone are freely available. Just think how many photos there are of people you know on Facebook alone.

So, you download a load of images, put them into the program, leave it for a few hours to “learn” and then you have your very own fake video generator.

And the results are very good. The longer you leave it to learn, and the more high-quality images you supply, the better the result. But with just a few hours’ work, anyone can make convincing face-swap videos.
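To make that “leave it for a few hours” step concrete, here’s what it looks like against the sketch above, with random tensors standing in for real face crops:

```python
# Illustrative training loop for the sketch above, using random tensors
# in place of real face crops (a batch of 16 RGB images at 64x64).
faces_a = torch.rand(16, 3, 64, 64)
faces_b = torch.rand(16, 3, 64, 64)

for step in range(10_000):  # the longer it trains, the better the swap
    loss = train_step(faces_a, faces_b)

fake = swap_a_to_b(faces_a)  # person A's expression, person B's face
```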

This program is open source, which means anyone is free to use and improve it. Hundreds of videos of people playing with it are now springing up.

We are now in a post-deepfakes world

What does this mean for you and me?

Well, firstly, we can no longer trust videos at face value.

With CGI, even really good CGI, you can usually tell it’s CGI. Not so with deepfakes.

This is more like having a Photoshop for video. We accept that most of the magazine images we see have been “photoshopped” to some degree. We should probably start applying that same scepticism to moving pictures as well.

In the future we may see celebrities “lending their face out” to ad campaigns or films and TV. A lesser model or actor will play the part and the celeb’s face will be “deepfaked” over the top in post-production.

We’ll probably see the ability to change your face to your friend’s face or favourite celeb’s face in Snapchat or Instagram fairly soon.

Online shops will be able to show you exactly what you’d look like in certain makeup or clothes if you give them access to your Facebook photos.

The possibilities are endless.

We will also see this used in more malevolent ways. There will be videos that emerge of celebs doing or saying illegal or outrageous things.

Thanks to the nature of news coverage and social media, these fakes will spread instantly. Before they are proven, the damage will be done.

I’ve yet to see any of these fake videos make the news, but they will. It’s not a matter of if, but of when.

I wonder who will be the first large-scale victim of deepfaking (other than the actors in the now infamous porn videos).

And the thing is, this technology isn’t too well known at the moment. I know about it and you now know about it because we are into technology. But most people on the street will have no idea this kind of thing is possible.

When the first deepfakes scandal emerges, it will take a lot of people in. And I’d imagine more than a few journalists, too.

What can be done?

Well, the main reason I’m writing about this now – and not when deepfakes first hit the headlines a few months ago – is because I just heard about a possible solution.

My sister is a neuroscientist. She’s just about to complete her PhD. And aside from making me the underachiever in the family, it means she gets access to some very interesting people and technology.

Last week she told me about a talk she’d been in with Kang Lee, a professor at the University of Toronto.

He’s created an iPhone app that can analyse videos and detect your heart rate and breathing to infer your mood. The aim of the app is to be a lie detector of some sort.

And it doesn’t need to be used on live footage. In his demonstration, he used it to analyse a clip of Bill Clinton being interviewed about Monica Lewinsky.

The app works in the same way as the heart-rate monitor on a Fitbit watch. It analyses the light coming off your skin and detects the dilation or constriction of blood vessels.
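The underlying signal-processing idea is known as remote photoplethysmography (rPPG), and it’s simple enough to sketch. To be clear, this is not Lee’s app – just a minimal illustration of the general technique, assuming you already have a stack of RGB frames cropped to the face and know the frame rate:

```python
# A minimal remote-photoplethysmography (rPPG) sketch -- not Kang Lee's app.
# Assumes `frames` is a (num_frames, height, width, 3) RGB array cropped
# to the face region, and `fps` is the video frame rate.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(frames: np.ndarray, fps: float) -> float:
    # Blood absorbs green light most strongly, so average the green
    # channel over the face region in every frame to get a raw signal.
    signal = frames[:, :, :, 1].mean(axis=(1, 2))

    # Band-pass filter to the plausible pulse range: 0.7-4 Hz,
    # i.e. roughly 42-240 beats per minute.
    nyquist = fps / 2
    b, a = butter(3, [0.7 / nyquist, 4.0 / nyquist], btype="band")
    filtered = filtfilt(b, a, signal - signal.mean())

    # The dominant frequency of the filtered signal is the pulse.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1 / fps)
    peak_hz = freqs[np.argmax(spectrum)]
    return peak_hz * 60  # convert Hz to beats per minute
```

A real detector would also need robust face tracking, motion compensation and lighting correction on top of this, which is presumably where the hard research lives.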

Technology like this, I’d imagine, will have to be used in a post-deepfakes world to authenticate important videos.

It may now be easy to fake the way someone looks. But faking the constriction of their blood vessels and their breathing is still some way off yet.

Maybe that will be addressed in deepfakes v2.

Either way, the world is now a different place thanks to deepfakes.

Do you think we’re ready to live in a world of deepfakes? Let me know in the comments below.

Until next time,

Harry Hamburg
Editor, Exponential Investor
