Donald Trump is a notorious media bully. He uses lawsuits, executive power, and political pressure to punish critics and bend institutions to his will. Disney, Meta, and Paramount have all paid out multi-million-dollar settlements over content disputes. CBS News leaders resigned. Colbert’s show was canceled. The AP was barred from the White House. Even Rupert Murdoch is now being sued over unflattering coverage. Trump targets law firms, universities, and online services like TikTok the moment they stop serving his interests.
Despite this well-established pattern of silencing dissent, lawmakers handed him the Take It Down Act: a sweeping censorship weapon he has openly vowed to wield against his critics.
Recently, South Park fired back in its first new episode in years with a bold, refreshing, and unapologetically crude parody of the Christian “He Gets Us” campaign—featuring a deepfaked, fully nude Donald Trump wandering the desert as a solemn narrator asks, “When things heat up, who will deliver us from temptation?” The “public service announcement” ends with a glowing political endorsement from Trump’s wide-eyed, “teeny tiny penis.”
It’s brilliant satire that cuts right to the heart of American political delusion. It’s also potentially criminal under the law Trump championed. Welcome to the new reality: mocking the President with AI could now land you in prison.
The Take It Down Act criminalizes the non-consensual publication of “intimate visual depictions,” including depictions generated using AI. Intimate visual depictions include images showing uncovered genitals. To qualify, the depiction must appear, in the eyes of a reasonable person, indistinguishable from a real image. The identifiable individual must not have provided consent, nor voluntarily exposed the depicted material in prior public or commercial settings. The publisher of the material must have either intended to cause, or actually caused, harm to the depicted individual. The law leaves some room to assess whether the depiction is a matter of public concern, but there are no express carveouts for lawful speech such as commentary, satire, or parody. Violations of this provision can incur both financial penalties and jail time.
The broadcast and streaming versions of the South Park PSA are likely out of scope. Take It Down applies only to publishing through the use of an interactive computer service (as defined under Section 230). However, South Park also uploaded the PSA to YouTube and the site HeTrumpedUs.com, which could be a problem. The depiction includes a full, graphic display of Trump’s “teeny tiny” member. And it’s probably safe to assume that neither Trump, nor anyone on his behalf, consented to it.
From the live broadcast, it might be unclear at first whether the depiction of Donald Trump was real or AI-generated. On the one hand, it’s absolutely a line South Park would cross, and their fans know that. On the other hand, we might wonder whether Trey Parker and Matt Stone, the creators of South Park, would so willingly adopt AI given the controversy surrounding its use in the entertainment industry (though we know now that they are quite enthusiastic about it). Upon first impression, it’s possible they merely spliced a real video of Trump walking. South Park does this all the time—taking real images of public figures and effectively pasting them onto cartoon bodies. The way Trump swings his arms, and his gait, seemed typical of the President. It’s only when he starts stripping off his clothes that the use of AI becomes apparent. Even then, there are legitimate videos and images of Trump in his most natural form circulating around the web. We’ll spare you that evidence, though this might also leave open the question as to whether the PSA depicts any materials that Trump himself has voluntarily exposed in a public or commercial setting.
The point is that one could plausibly argue that, in the eyes of a reasonable person, the depiction of Trump in the PSA is indistinguishable from reality. Sure, the South Park of it all might tip off viewers that the content is likely fake. South Park is notorious for precisely this type of raunchy, over-the-top political satire. But outside that context, it depends. For instance, if you search for nude images of Trump (which we don’t recommend at all), you will find out-of-context screenshots of nude Trump from the PSA. Plus, the creators recruited “the best deepfake artists in the world” for this project. Does that matter in terms of making the content indistinguishable? It’s one of many open questions for Trump-friendly prosecutors: in the eyes of a reasonable person, is the depiction indistinguishable? Maybe.
This also leaves open whether the online services hosting and spreading the video and screenshots of AI nude Trump could be on the hook. The Take It Down Act imposes civil penalties on online services that fail to remove intimate deepfake content upon request. The White House could send takedown requests to social media companies that currently make the content available. This could potentially erase the content from existence, especially if the episode is ever banned from streaming services. As fans might recall, television and streaming companies banned South Park episodes 200 and 201 for merely depicting the Prophet Muhammad.
Ultimately, whether the PSA violates the law will come down to whether it’s a matter of public interest. Most criticism of public figures, especially elected officials, is a matter of public concern. The mere fact that the White House weighed in on the episode suggests its importance. The case is especially strong when you consider the underlying messages Parker and Stone are trying to convey about the Trump Administration: a cutting commentary on how the MAGA movement holds Trump out as their god-king in hopes he one day leads them to eternal salvation (i.e., a promised land devoid of minorities and woke-ness), illustrating the evaporating line between church and state. More obviously, it’s a riff on The Emperor’s New Clothes—the tale of a vain ruler duped into believing he’s draped in invisible finery while parading around naked. The fable endures as a parable of mass delusion, where truth is swallowed for fear of offending power. And that’s precisely the dynamic at play today, as media empires continue to buckle under Trump’s relentless bullying, pretending not to see what’s right in front of them. In that context, the public undeniably has a compelling interest in knowing that the President is lying to them.
It’s especially significant that South Park was the one to take this shot. The show has long been known for skewering both the left and the right, cultivating an audience that prides itself on rejecting political correctness and ideological rigidity. That ethos even inspired the term “South Park Republican”—a loosely defined label for those who mock partisanship from the sidelines. The show’s core demographic—predominantly men aged 18 to 49—overlaps meaningfully with the audiences of figures like Joe Rogan and, to a lesser extent, Andrew Tate. So, unlike overtly partisan media, South Park holds a rare cultural position: it can speak directly to groups adjacent to the MAGA movement without preaching, pandering, or being immediately dismissed. That gives its political commentary a unique kind of weight, with real potential to move the needle in shaping public opinion and, by extension, the direction of the country’s leadership.
While the broader message is undeniably important, some might ask whether the commentary on the size of Trump’s penis is really a matter of public interest. Could the creators have made their point without the deepfaked, talking genitalia? From a First Amendment perspective, it shouldn’t matter. The depiction—however crude—is unlikely to fall into any of the narrow exceptions to protected speech, such as obscenity. And under the logic of the Take It Down Act, Trump’s endowment might well qualify as a matter of public concern. After all, he made it one. During the 2016 campaign, he famously implied that his penis was larger than Marco Rubio’s, citing their respective hand sizes as evidence. Once a candidate brings his genitals into the public discourse, this kind of satire seems obviously fair game.
Realistically, Parker and Stone will be fine if the DOJ comes knocking. They’ve got the weight of mainstream media credibility, an army of Paramount lawyers, and—at least for now—that pesky First Amendment the Trump administration hasn’t quite managed to extinguish. But their case also serves as a useful illustration of what happens when AI regulations, especially those targeting deepfakes, are crafted without any real regard for the lawful, valuable, and politically vital speech that will inevitably get caught in the dragnet.
This is especially troubling given the increasingly precarious status of First Amendment protections for AI-generated content. Recall that in the NetChoice cases, Justice Barrett floated the idea that certain uses of AI in publishing might fall outside the scope of the First Amendment. Not long after, a federal judge concluded that outputs from Character AI don’t qualify as protected speech. Legal scholars are arguing much the same.
It may seem absurd to suggest that South Park’s latest episode—a brazen, satirical, political public service announcement—might not count as protected expression. But under the emerging logic of AI speech exceptionalism, that outcome is far from unthinkable.
Which is dangerous. AI-generated speech is increasingly dismissed as unworthy of constitutional protection. As a result, laws like the Take It Down Act are sailing through Congress with little regard for the types of lawful, socially valuable, and politically consequential expression they risk sweeping away. As AI becomes ever more entangled in creative production, and the imaginary line between human and machine expression continues to blur, this blind spot becomes a powerful tool for censorship. If policymakers can’t ban the message, they may decide to ban the method—the use of AI—instead.
Hence, South Park also offers a timely reminder that deepfakes aren’t inherently exploitative. They can be powerful tools for criticism, commentary, and satire, particularly when aimed at public figures. That nuance is often lost in deepfake proposals. The No Fakes Act, for example, gestures toward protecting parody, commentary, and satire, but explicitly withdraws that protection if the content is sexual in nature. It is also notably silent about cases where that content targets public figures. The carveout, then, would do nothing to shield South Park.
Plus, sexual satire has long been a potent vehicle for confronting power. Consider Borat—one of the most talked-about films of the early 2000s. Its infamous nude wrestling scene was grotesque, jarring, and undeniably effective. It sparked debate, shattered taboos, and forced audiences to examine their own cultural assumptions. The provocation was the point.
South Park belongs to that same lineage. Its creators have made a career of using shock to expose hypocrisy. They understand that good satire isn’t supposed to comfort but to unsettle, provoke, and push people to reflect. Our elected officials may not always appreciate that. But that’s why we have the First Amendment.
Perhaps most troubling is the emergence of a two-tiered system for political satire. South Park and Paramount can afford to take this risk (and a big one at that). But what about an anonymous Redditor using AI? Can the average person realistically challenge the king—especially when jail time is on the table?
If everyday creators are too afraid to speak, and the few with power keep backing down—Paramount included—then who’s left to confront authority? Who will be left to say the unsayable?
The Trump Administration, and those who follow, will always pose the gravest threat to speech and democracy. South Park dared to say it out loud. But in doing so, they revealed something deeper: that the fight over AI-generated content isn’t just about technology. It’s about power. It’s about who gets to speak, and who gets silenced.
AI is the next great battlefront for free expression. Like the early Internet, it is messy, disruptive, and often uncomfortable. But that’s exactly why it matters. And that’s exactly why it must be protected. Because if we allow fear, moral panic, or political convenience to strip AI-generated speech of First Amendment protection, then we’ve handed censors the easiest tool they’ve ever had.
And when that happens, it won’t just be the machines that go quiet. It’ll be us.
Jess Miers is an Assistant Professor of Law at the University of Akron School of Law. Kerry Smith is a rising second-year law student at the University of Akron School of Law.