The Road to Enforcement Chaos: The Hidden Dangers of the TAKE IT DOWN Act
Why the well-intentioned deepfake "revenge porn" law risks muzzling online speech.
In his March address to a joint session of Congress, President Donald Trump praised a bill that would go after the use of AI deepfakes to create “revenge porn,” adding, “I’m going to use that bill for myself too, if you don’t mind, because nobody gets treated worse than I do online, nobody.”
Few would deny that the creation of nonconsensual intimate images and videos is one of the most worrying developments in our AI era. But when a president whose administration has repeatedly challenged and pushed the boundaries of the First Amendment makes such proclamations, even in jest, we should carefully consider the levers of power we are granting to the government to police such content.
On April 28, Congress overwhelmingly passed the bipartisan Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act. The legislation, now awaiting President Trump’s signature, criminalizes the nonconsensual distribution of intimate images (NDII), whether real or digitally altered, and requires covered platforms to operate a notice and takedown mechanism.
At its core, the TAKE IT DOWN Act is a well-intentioned effort to address nonconsensual distribution of intimate images. But beneath its protective promise lie serious threats to free expression and even the rights of the very individuals it aims to protect. The Act’s vague definitions, sweeping authority, and lack of meaningful safeguards open the door to unconstitutional censorship, while also falling short of combating the real harms of NDII.
A System Without Guardrails
The bill’s notice and removal mechanism requires companies to remove any content described as an “intimate visual depiction” within 48 hours of receiving a takedown request from either the person depicted or their authorized representative who claims it was published without consent. While the legislation’s criminal provisions carve out exceptions for consensually shared explicit images and for content used in medical, legal, or educational contexts, the notice and removal provisions do not.
This absence of nuance, coupled with the 48-hour constraint, encourages platforms to rely on automated filtering systems already prone to over-removal. The Act also does not require people to file requests under penalty of perjury, and it does not punish false claims. This means people can report content simply because it doesn’t align with their personal beliefs, even if it is not private or sexual at all. As a result, platforms are likely to take down speech even if it is consensual and constitutionally protected.
The Digital Millennium Copyright Act (DMCA) has a somewhat similar notice and takedown system, but unlike the TAKE IT DOWN Act, it requires complainants to attest that they are authorized to file, provides a counter-notice process, and imposes liability on those who make knowing material misrepresentations. Even with those safeguards, the DMCA system has reportedly still been used to silence lawful criticism and expression. The system under this new law will undoubtedly overwhelm tech companies, threatening the ability of smaller platforms to operate and hindering justice for NDII victims.
Equally troubling, the law also bars claims against platforms that act “in good faith” on takedown requests, regardless of whether the removed content is lawful. The Cyber Civil Rights Initiative explains how this removes any avenue of redress for individuals harmed by the removal of protected speech while eliminating any meaningful incentive for platforms to safeguard lawful expression when responding to takedown demands.
A New Government Tool
The TAKE IT DOWN Act vests significant enforcement authority in the Federal Trade Commission (FTC), empowering the agency to treat noncompliance with the takedown provisions as an “unfair or deceptive act or practice” under Section 5 of the FTC Act. This grants the FTC broad discretion to investigate, penalize, and potentially litigate against online platforms, even in cases where takedown requests are legally or factually dubious. That is particularly concerning under an administration that has repeatedly weaponized legislation to target media outlets on ideological grounds.
Overbreadth and Ambiguity
The Act’s overbreadth exacerbates these concerns. While its criminal provisions adopt a relatively narrow and constitutionally sound definition of NDII, the takedown provision applies to a far broader category: any “intimate visual depiction” alleged to be nonconsensual by an “identifiable individual.” The law offers no clear standard for evaluating such claims, no requirement that platforms assess their validity, and no meaningful distinction between authentic images and those that are fabricated or manipulated.
Congressional efforts to protect victims of nonconsensual intimate imagery are incredibly important and laudable. But good intentions do not always yield good policy or constitutional law. Protecting victims and protecting free speech are not mutually exclusive, and we must be more vigilant than ever to safeguard both.
Ashkhen Kazaryan is a Senior Legal Fellow at The Future of Free Speech, where she leads initiatives to protect free expression and shape policies that uphold the First Amendment in the digital age.
Ashley Haek is a communications coordinator at The Future of Free Speech.