Viewpoint

Amal Attar-Guzman: The Taylor Swift deepfake porn scandal highlights how dangerous AI can be

Solving the deepfake porn problem must be a priority for policymakers when considering AI
Taylor Swift performs as part of the "Eras Tour" at the Tokyo Dome, Wednesday, Feb. 7, 2024, in Tokyo. Toru Hanai/AP Photo.

AI is both a gift and a curse. On one hand, it has opened up a new world of possibilities and helped make everyday life easier, faster, and more efficient. On the other hand, it has pried open Pandora’s box, letting loose a variety of negative, complex social outcomes that are quickly becoming difficult to contain or mitigate.

Early in March 2023, I wrote about some of the under-discussed issues women have been facing as a result of these new and emerging technologies. One issue that stood out to me was the rise of non-consensual deepfake pornography. As of December 2023, the number of deepfakes had risen exponentially, with researchers predicting that there will be more than 5.2 million deepfakes in 2024.

So when it was time to make The Hub’s 2024 predictions, I decided to raise the alarm once again, predicting that deepfakes and AI technology would soon move to the forefront of Canadian public policy. Between the technology’s exponential rise threatening democracy and its institutions and the recent case of deepfake child pornography depicting teenage girls in Winnipeg, the evidence now seems clear. Not only is this fast becoming a major technology story; it is also becoming a gender-based issue and a societal one at large.

What I did not expect was for my prediction to come true so soon, just a few weeks into 2024. Nor did I expect to see non-consensual deepfake pornography of Taylor Swift on my X feed. 

Now, I am not much of a Swiftie. But not only did I feel disgusted seeing those images on my social media account, I also felt a deep-seated rage watching people immediately target and harass her. This visceral anger only got worse when I saw the alleged Canadian creator of these images start to gloat and revel in his newfound online fame. What was even more heartbreaking was that, as the backdrop to all this, Taylor Swift was dealing with a recent stalking case.

Thankfully, fellow Swifties quickly rallied to push X to take down the images, and the company complied. Both X and Meta have put policies in place to deal with these deepfake cases, and while they are not perfect, at least something now exists to deal with the issue head-on.

Seeing this happen to Taylor Swift really puts things into perspective. If this A-list, worldwide celebrity and billionaire, who has won multiple Grammys and other accolades and sold billions of dollars’ worth of concert tickets, can be a target of non-consensual deepfake pornography, what about the rest of us women?

Sadly, this is fast becoming a reality. Between 2022 and 2023, North America saw a 1,740 percent surge in deepfakes, one of the highest rates reported anywhere in the world. Women are disproportionately affected: according to a 2023 study by Amsterdam-based company Sensity, 96 percent of deepfakes were non-consensual pornography depicting women.

This is fast becoming a fear for women from all backgrounds. Deepfakes are used to silence and shame women in the public eye and even in the private sphere. If women say what they think, stand up for themselves, or go against the societal grain, they can be punished, one way or another, by anyone, known to them or not, without remorse or recourse. No one is safe from this new form of sexual harassment, defamation, and public humiliation.

While many on my X feed shared these same concerns, one perspective caught my attention. Jesse Brown, journalist and publisher of Canadaland, posted on X that while the situation was gross, policymakers need to be careful about legislating on these issues. As he put it: “[While Canada] need[s] laws against deepfake porn that tells convincing lies,” Swift’s case amounted to “horny fanfic” that “should not be a crime.” 

Brown’s comments attracted a negative reaction, including from me. I decided to listen to the Canadaland episode to see if there was a perspective I might be missing. After listening to the conversation, a couple of thoughts came to mind. 

First, with all due respect, I disagree with the notion that what happened to Taylor Swift was just “horny fanfic.” There is a huge difference between creating fanfiction about fictional characters and celebrities that is meant to celebrate them and creating degrading deepfake pornographic images.

Intent is key. Generating deepfake porn of Taylor Swift wasn’t done to celebrate or even idolize her. It was done to humiliate and degrade her, and to encourage people to harass her. If the same thing happened to politicians, journalists, or ordinary citizens, it would still be wrong, and some form of regulation and legal recourse would be needed.

I also disagree with the notion that because the deepfake pornographic images looked fake and unrealistic, and were therefore not a “convincing lie,” they are less of a problem than realistic-looking deepfakes that might actually convince people they are real. Here’s the thing: even if the person depicted is not completely recognizable, the harm is still there. Deepfake porn is the modern-day equivalent of a pornographic drawing of a girl on the boys’ bathroom wall. It is a form of sexual harassment even if it “looks fake.”

However, Brown did raise one worthwhile point. When it comes to regulating or even criminalizing deepfakes, we must be cognizant of the effects on freedom of expression. What about cases of political satire, or even political messaging or advertising? How should we think about those?

In such circumstances, a complete ban or extreme restriction on deepfakes would be an overstep. Already, we’ve seen some backlash to Meta’s policy on deepfakes, with its oversight board acknowledging that in some cases where media is manipulated for humour, parody, or satire, it “should be protected.” As Brown rightly asked, “If you cannot draw rude pictures of the elite, then are you truly free?”

This screenshot, made on Monday, Jan. 29, 2024, shows a Taylor Swift search error on social media platform X. X blocked some searches for Swift as pornographic deepfake images of the singer circulated online. (AP Photo)

In terms of Canadian policy and law, if future legislation does not strike the right chord, it could invite a constitutional challenge. AI experts, policymakers, and legal scholars will need to ask themselves this question: how can public policy regulate deepfakes in a way that, if brought before a court, could survive a Charter challenge under the “reasonable limits” clause?

Perhaps the answer lies within current law on sexual harassment, defamation, and copyright. Deepfakes and other AI-manipulated content that spread misinformation and disinformation with criminal or defamatory intent will need to be captured, while political commentary, advertising, and satire are explicitly shielded from restriction. In the latter cases, perhaps required disclaimers would be enough to allow such content to circulate.

These are not easy solutions. Which responsibilities and obligations should fall to social media platforms and which to government needs to be explored and debated. And, no doubt about it, mistakes will be made.

But regardless, it is clear that there needs to be explicit public policy and legal recourse to deal with the issue head-on: an approach in which freedom of expression is preserved but, just as importantly, people are prioritized and protected.
