The Take It Down Act’s swift movement from bill to law proves that it’s possible to adopt a targeted approach to regulating AI without derailing innovation. Not only that, but its success may also be the key to ending the 27-year drought in legislation addressing online harms to children.
At the federal level, the law criminalizes publishing nonconsensual intimate images online, whether real or AI-generated. The law requires platforms to provide a clear and conspicuous process for victims to request that these types of images be removed and mandates that platforms comply with those requests within 48 hours.
The bill received support that was both bicameral and bipartisan, with Democratic co-leads. Who cared more about this bill, Sen. Ted Cruz of Texas or Rep. Maria Elvira Salazar of Florida? The answer doesn’t matter, as both took ownership and advocated fiercely to pass the bill in their respective chambers.
In the 118th Congress, only 3% of bills introduced eventually became law. The 119th Congress’ bicameral approach to Take It Down was the bill’s first “gold star,” setting it apart from other bills in a way that led to its swift passage.
The bill’s second “gold star” was its support from the White House. After the Senate passed the bill for the second time in February, first lady Melania Trump’s endorsement in March galvanized momentum. In early April, the House Energy & Commerce Committee approved the bill, and the full House passed it shortly after.
The third—and perhaps most challenging—“gold star” was obtaining support from Big Tech companies. The Take It Down Act, formally titled the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes On Websites and Networks Act, garnered nearly 200 such supporters, including Meta, TikTok, Snap and X.
Each of these platforms publishes user-generated content, making them covered platforms under the law, which gives them one year to create and provide a notice and removal process for nonconsensual intimate images. The Federal Trade Commission will oversee their compliance and prosecute violations.
Years of escalating congressional scrutiny likely helped bring Big Tech on board. Since 2019, Congress has held 24 hearings on kids’ online safety, often summoning (voluntarily or by subpoena) tech CEOs to testify.
Meanwhile, several states have enacted child-focused social media laws and modernized CSAM statutes to cover AI-generated content. This cumulative pressure has made meaningful federal action possible.
The Take It Down Act did not require unanimity. The pornography industry did not support the act, and many tech industry advocates opposed the law at every stage. But combined with the age-verification laws for adult content passed in 22 states, the law sends a clear message: Legislators are ready to act when companies facilitate harm to children.
During Monday’s bill signing ceremony, Mrs. Trump remarked, “Artificial intelligence and social media are the digital candy for the next generation. Sweet, addictive, and engineered to have an impact on the cognitive development of our children. But unlike sugar, this new technology can be weaponized, shape beliefs, and sadly affect emotions and even be deadly.”
AI and social media are now inextricable. TikTok’s content algorithm is powered by AI. Meta uses AI in its search tools, chatbots, and recommendation systems.
Now, Meta CEO Mark Zuckerberg is racing to lead in generative AI after lagging in algorithm-driven growth—deploying AI in more anthropomorphic and emotionally manipulative forms, such as “friend-like” chatbots.
These design features demand public oversight, especially when accessible to children and proven to cause harm.
Targeted regulations of these use cases and design features will mitigate abuse without undercutting the benefits of innovation. The strategy behind the Take It Down Act offers a blueprint for success: bipartisan support, bicameral leadership, executive approval, industry endorsement, and persistent public pressure.
Years of youth exploitation and family suffering, as well as an increase in public testimony by those affected, have opened the door to a stronger regulatory posture toward online platforms. Whether in the states, in Congress, or through a renewed first lady’s Be Best initiative, the movement to hold Big Tech accountable for harming children is strong—and it’s still growing.
This piece originally appeared in The Washington Times.