Last fall, Tumblr users raved about Martin Scorsese’s 1973 film “Goncharov,” starring Robert De Niro and Al Pacino. They hailed the lesser-known Scorsese classic as the greatest mafia movie ever made. It was ahead of its time, never earning the acclaim it deserved.
By now, we should all know that “Goncharov” is not a real film, even though Scorsese himself got in on the joke. It’s just a Tumblr meme that spiraled into a site-wide gag, as users toiled away in shared Google Docs, working together to “yes, and” one another with such sincerity and precision that the made-up movie could seem legit.
As AI image generation gets better, the internet feels a bit like Tumblr in the moment of “Goncharov.” We are creating new collective realities, only this time, we are accidentally generating them simply by posting without much thought.
When AI image generator Midjourney opened access to the Midjourney 5 model two weeks ago, its hyper-realistic outputs quickly went viral. Some of the resulting images illustrated the dangers of these easily accessible tools, like a series of fake photos depicting Donald Trump’s arrest.
The dramatic set of images shows Trump forcibly resisting arrest, then sprinting away from a squad of police officers, among other scenes. The images, which spread like wildfire around the internet, were generated by Eliot Higgins, founder of investigative journalism outlet Bellingcat. Higgins’ initial tweets made it clear he was just playing around with AI image generators, but they quickly spread without the accompanying context.
Other viral AI images are more benign, like a widely circulated photo of Pope Francis serving a high fashion look in a stunning, white Balenciaga coat. Just one of the viral tweets about the image was viewed over 26 million times.
That wasn’t even the only Balenciaga-related AI meme to go viral in the last week. A creator using the name demonflyingfox has been using Midjourney images, Eleven Labs speech, and D-ID animation to make surreal, AI-powered parody videos. His most widely spread video is “Harry Potter by Balenciaga,” a high-fashion-style advertisement in which synthetic renditions of characters from the franchise say things like, “You are Balenciaga, Harry.”
Demonflyingfox told TechCrunch that he intentionally tries to make his videos weird enough that no one could accidentally believe that they’re real. (It doesn’t take a savvy viewer to know that Joe Rogan has never actually interviewed Jesus Christ and Dumbledore regrettably did not wear a leather jacket and sunglasses in the Harry Potter movies.) He also deliberately uses Midjourney 4 instead of the updated software so that his characters don’t look so realistic that they could come off as believable.
“What I’m doing is so obviously fake that I don’t really worry about spreading misinformation, nor is it my intention,” demonflyingfox said. “But I know of the power of the tools and that it’s very easy right now to do so.”
The more believable the premise, the more likely we are to register a synthetic image as fact. If you’re not an expert on papal attire, you could feasibly believe that the Pope has a kick-ass white puffer jacket for winter outings – this is the same Pope whose prog rock album got reviewed on Pitchfork, and who once worked as a nightclub bouncer in Argentina.
Even supermodel Chrissy Teigen fell for the joke. “I thought the pope’s puffer jacket was real and didn’t give it a second thought,” she tweeted. “No way am I surviving the future of technology.”
The Trump AI arrest images also went viral, but could be fact-checked quickly enough by anyone willing to scan their Twitter timeline for evidence of the real news breaking. That process is a little less obvious for AI-generated historical events.
Just days ago, Reddit user u/Arctic_Chilean posted a set of AI-generated photos on the Midjourney subreddit, claiming that they illustrated “The 2001 Great Cascadia 9.1 Earthquake & Tsunami,” which hit the U.S. and Canada. Within the confines of that subreddit, which is dedicated to experiments with Midjourney’s AI tools, it’s obvious that the Great Cascadia Earthquake is not a real event. But if r/Midjourney is just one of many subreddits on your feed, it’s easy to consume the images without giving them a second thought.
“Was I the only one who was like ‘How come I don’t remember this happening?’… until seeing the subreddit?” one Reddit user wrote.
The images parallel reality closely enough that they could be true. The Cascadia subduction zone is a real fault line near the Pacific Northwest and the site of a massive, devastating earthquake in 1700. People in the region worry that a disaster on that scale could occur again in our lifetime, a knowledge that imbues the AI-generated scenes with a sense of foreboding.
Sometimes, AI tools like ChatGPT and Bing AI “hallucinate” — a term used when they confidently answer questions with fake information. Awash in a sea of synthetic imagery, we might all be on the cusp of collectively hallucinating.
“I was about to trash you for posting old news in this subreddit lol,” a Redditor in r/midjourney commented. “These look so real it’s insane.”
Internet communities have a tendency to converge around a niche idea and build out in-depth lore about it, like an act of collaborative world-building (see: “Goncharov.”) Naturally, the same thing happened with these fake historical events, as Reddit users began creating their own history about how the 2001 earthquake impacted global politics.
“Despite the magnitude and devastation caused by the Great Concordia Earthquake of 2001, it seems that ten years later, few people remember the event,” wrote another user in a made-up news article, misnaming the earthquake that never happened. “This is likely due to a combination of factors, including the global war on terrorism, the Iraq War, and the Great Recession.”
The Redditor who originally posted the images theorized about how a major disaster just before 9/11 would impact the War on Terror, positing that the calamity’s economic impact might undermine support for the invasion of Iraq.
As widely available AI image generators rapidly become more sophisticated, their creations might outpace our ability to adjust to a flood of believable but completely false images. For better or worse, it’s now possible to create our own “Goncharov” in an instant, turning any nebulous fiction into something tangible and putting it online.
While many of these creations are harmless, it’s not difficult to imagine how synthetic images might manipulate public knowledge of current or historical events.
It turns out that “Balenciaga Pope” was the brainchild of a thirty-one-year-old on shrooms, who told BuzzFeed News that he hadn’t considered the consequences of AI images. “It’s definitely going to get serious if they don’t start implementing laws to regulate it,” he said.
Indeed, legislators in the UK and some U.S. states have enacted bans on nonconsensual, deepfake AI porn. But memes like the Great Cascadia Earthquake and Balenciaga Pope aren’t as intrinsically harmful and won’t be facing regulatory guardrails any time soon.
AI-generated images still aren’t perfect. Street signs look like they’re written in Simlish, and there are more three-fingered hands than you’d find in the natural world. Usually, if you look closely enough at an AI-generated image, you can find some weird aberration, a cluster of pixels that tells you that something is wrong. But it’s getting harder to distinguish what’s real or imagined — and harder to imagine what comes next.
From Balenciaga Pope to the Great Cascadia Earthquake, everything is ‘Goncharov’ by Amanda Silberling originally published on TechCrunch