
| Alias | Tag | Date | DocType | Hierarchy | TimeStamp | Link | location | CollapseMetaTable |
| ----- | --- | ---- | ------- | --------- | --------- | ---- | -------- | ----------------- |
|       | 🎭 🇺🇸 🎤 🤖 🚫 | 2024-01-28 | WebClipping |  | 2024-01-28 | https://www.platformer.news/taylor-swift-deepfake-nudes-x/?utm_source=substack&utm_medium=email |  | true |

Parent:: @News Read:: 2024-01-31


```button
name Save
type command
action Save current file
id Save
```
^button-TheTaylorSwiftdeepfakesareawarningNSave

The Taylor Swift deepfakes are a warning

For years, researchers predicted a huge wave of AI-powered harassment. Now it's all happening on X

Casey Newton

Jan 25, 2024 — 10 min read

Taylor Swift attends the Golden Globe Awards in Beverly Hills this month. (FilmMagic / Getty Images)

Is it too early to say that, on balance, generative artificial intelligence has been bad for the internet?

One, its rise has led to a flood of AI-generated spam that researchers say now outperforms human-written stories in Google search results. The resulting decline in advertising revenue is a key reason that the journalism industry has been devastated by layoffs over the past year.

Two, generative AI tools are responsible for a new category of electioneering and fraud. This month, synthetic voices were used to deceive voters in the New Hampshire primary and in Harlem politics. And the Financial Times reported that the technology is increasingly used in scams and bank fraud.

Three — and what I want to talk about today — is how generative AI tools are being used in harassment campaigns. 

The subject gained wide attention on Wednesday when sexually explicit, AI-generated images of Taylor Swift flooded X. And at a time when the term “going viral” is wildly overused, these truly did find a huge audience. 

Here's Jess Weatherbed at The Verge:

One of the most prominent examples on X attracted more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before the verified user who shared the images had their account suspended for violating platform policy. The post was live on the platform for around 17 hours prior to its removal.

But as users began to discuss the viral post, the images began to spread and were reposted across other accounts. Many still remain up, and a deluge of new graphic fakes have since appeared. In some regions, the term “Taylor Swift AI” became featured as a trending topic, promoting the images to wider audiences.

At its most basic level, this is a story about X, and not a particularly surprising one at that. When Elon Musk took over X, he dismantled its trust and safety teams and began enforcing its written policies — or not — depending on his whims. The resulting chaos has caused advertisers to flee and regulators to open investigations around the world. (X didn't respond to my request for comment.)

Given those circumstances, it's only natural that the platform would be flooded with graphic AI-generated images. While it is rarely discussed in polite company, X is one of the biggest porn apps in the world, thanks to its longstanding policy allowing explicit photos and videos and Apple's willingness to turn a blind eye to a company that has long flouted its rules. (X is officially rated 17+ for "Infrequent/Mild Sexual Content and Nudity," a historic understatement.)

Separating consensual, permissible adult content from AI-generated harassment requires strong policies, dedicated teams and rapid enforcement capabilities. X has none of those, and that's how you get 45 million views on a single post harassing Taylor Swift.

It would be a mistake, though, to consider Swift's harassment this week solely through the lens of X's failure. A second, necessary lens is how platforms that have rejected calls to actively moderate content have created a means for bad actors to organize, create harmful content, and distribute it at scale. In particular, researchers have now repeatedly observed a pipeline between the messaging app Telegram and X, where harmful campaigns are organized and created on the former and then distributed on the latter.

And indeed, the Telegram-to-X pipeline also brought us the Swift deepfakes, report Emanuel Maiberg and Samantha Cole at 404 Media:

Sexually explicit AI-generated images of Taylor Swift went viral on Twitter after jumping from a specific Telegram group dedicated to abusive images of women, 404 Media has found. At least one tool the group uses is a free Microsoft text-to-image AI generator. [...]

404 Media has seen the exact same images that flooded Twitter last night posted to the Telegram a day earlier. After the tweets went viral, people in the group also joked about how the attention the images were getting on Twitter could lead to the Telegram group shutting down. 

I'd say there's little chance of that, given that Telegram won't even disallow the trading of child sexual abuse material. In any case, with each passing day it becomes clear that Telegram, which has more than 700 million monthly users, deserves as much scrutiny as any other major social platform — and possibly more.

The final lens through which to consider the Swift story, and possibly the most important, has to do with the technology itself. The Telegram-to-X pipeline described above was only possible because Microsoft's free generative AI tool Designer, which is currently in beta, created the images.

And while Microsoft had blocked the relevant keywords within a few hours of the story gaining traction, it is all but inevitable that some free, open-source tool will soon generate images even more realistic than the ones that polluted X this week.
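
Keyword blocking of this kind is easy to describe and nearly as easy to evade. As a rough illustration (not Microsoft's actual system; the blocklist term and normalization rules below are assumptions for the sketch), a naive prompt filter might look like this, and the final line shows how a trivial misspelling slips past it:

```python
import re
import unicodedata

# Hypothetical blocklist; the real terms Microsoft blocked are not public.
BLOCKLIST = {"taylor swift"}

def normalize(prompt: str) -> str:
    """Lowercase, strip accents, and collapse punctuation into single spaces."""
    text = unicodedata.normalize("NFKD", prompt)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    return re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()

def is_blocked(prompt: str) -> bool:
    """Reject a prompt if any blocklisted phrase appears after normalization."""
    norm = normalize(prompt)
    return any(term in norm for term in BLOCKLIST)

print(is_blocked("Taylor Swift at the Grammys"))  # True: exact phrase is caught
print(is_blocked("T4ylor Sw1ft at the Grammys"))  # False: a trivial misspelling evades the filter
```

That brittleness is the point: a reactive blocklist patches one spelling of one name while leaving every variant, and every other victim, uncovered.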

It would be a gift if this were a story about content moderation: about platforms moving to remove harmful material, whether out of a sense of responsibility or legal obligation.

But generative AI tools are already free to anyone with a computer, and they are becoming more broadly accessible every day. The fact that we now have scaled-up social platforms that enable the spread of harmful content through a combination of policy and negligence only compounds the risk.

And we should not make the mistake of thinking that it is only celebrities like Swift who will suffer.

On 4chan, groups of trolls are watching livestreams of municipal courtrooms and then creating non-consensual nude imagery of women who take the witness stand. This month, nonconsensual nude deepfakes were spotted at the top of Google and Bing search results. Deepfake creators are taking requests on Discord and selling them through their websites. And so far, only 10 states have addressed deepfakes through legislation; there is no federal law prohibiting them. (Those last three links come from NBC's Kat Tenbarge, who has been doing essential work on this beat.)

The rise of this sort of abuse is particularly galling given that researchers have been warning about it for a long time now.

"This is 100% a thing that was “predicted” (obvious) *years* in advance," said Renee DiResta, research manager at the Stanford Internet Observatory, in a post on Threads. "The number of panels and articles where those of us who followed the development of the technology pointed out that yeah, disinformation tactics would change, but harassment and revenge porn and [non-consensual intimate imagery] were going to be the most significant form of abuse."

The past decade offers little hope that Congress will work to pass legislation on this subject in any reasonable amount of time. But they will at the very least have the chance soon to grandstand: on Wednesday, nominal X CEO Linda Yaccarino will make her first appearance before Congress as part of a hearing about child safety. (She'll be joined by the CEOs of Meta, Snap, Discord, and TikTok.)

In 2019, Congress blasted Facebook for declining to remove a video that artificially slowed down then-House Speaker Nancy Pelosi's speech, making her appear to slur her words. Five years later, the manipulated media is much more graphic — and the scale of harm already dwarfs what we saw back then. How many more warnings do lawmakers need to see before they take action?

Generative AI clearly has many positive, creative uses, and I still believe in its potential to do good. But looking back over the past year, it's clear that any benefits we have seen today have come at a high cost. And unless those in power take action, and soon, the number of victims who will pay that cost is only going to increase.


Elsewhere in fakes:


On the podcast this week: Kevin and I try to talk Andreessen Horowitz's Chris Dixon out of continuing to invest in crypto. Plus, sorting through AI's effect on the news industry, and the year's first round of HatGPT.

Apple | Spotify | Stitcher | Amazon | Google | YouTube


Whoops

On Tuesday the newsletter for paid subscribers inadvertently pasted the Governing links twice, including over where the Industry links should have gone. We updated those links on the site soon after; if you missed them and want to catch up you can find them here. Sorry about that! And thanks to all the readers who wrote in to point it out.


Governing


Industry


Those good posts

For more good posts every day, follow Casey's Instagram stories.



Talk to us

Send us tips, comments, questions, and anti-harassment AI: casey@platformer.news and zoe@platformer.news.


`$= dv.el('center', 'Source: ' + dv.current().Link + ', ' + dv.current().Date.toLocaleString("fr-FR"))`