The U.S. Senate on Thursday unanimously approved the Disrupt Explicit Forged Images and Non-Consensual Edits Act, or DEFIANCE Act, a bipartisan effort to strengthen legal protections for individuals targeted by nonconsensual deepfake imagery. The bill now proceeds to the House of Representatives for consideration.
The measure would allow individuals depicted in sexually explicit deepfake images or videos — digital fabrications created without their consent — to pursue civil damages of at least $150,000 per violation against persons responsible for creating or sharing such content.
Legislators who supported the bill said existing legal frameworks are insufficient to address the growing prevalence of deepfake technology and the unique harms it causes. They noted the legislation builds on earlier federal and state laws aimed at curbing nonconsensual intimate imagery, but expands the scope and clarity of remedies available under federal civil law.
Deepfake content — synthetic media produced using artificial intelligence and machine learning — has surged in recent years, raising concerns in Washington about privacy, harassment, fraud, and national security. Lawmakers from both parties have pushed a series of proposals in recent sessions to update laws governing digital impersonation and nonconsensual imagery.
Earlier legislative efforts focused on criminal penalties for creating or distributing explicit deepfakes of public officials or election candidates, or unauthorized alterations of videos used in a political context. Other bills aimed to enhance law enforcement's ability to investigate and prosecute deepfake-related fraud and identity theft.
The DEFIANCE Act differs from those proposals by creating a federal civil right of action, enabling private individuals — not just government prosecutors — to seek monetary damages in federal court. The bill would supplement state laws that vary widely in enforcement and penalties related to deepfake and revenge-porn imagery.
Supporters have argued that civil remedies are crucial because many victims face ongoing reputational harm and emotional distress long after illicit content is published. Civil suits, proponents say, can provide both compensation and deterrence.
If the House approves the DEFIANCE Act and the president signs it into law, the new provisions would expand legal avenues for victims of nonconsensual deepfakes and related digital forgeries. Advocates for stronger protections have said the approach could serve as a model for future legislation addressing other forms of digitally manipulated content.
The bill drew no opposition in the Senate, reflecting bipartisan agreement on the need to update legal tools in the face of rapid advances in artificial intelligence and digital media technologies.
The bill’s proponents say it represents a significant step in the federal government’s response to technology that can create convincing but fraudulent depictions of real people, often used to harass, humiliate or exploit victims.
Meanwhile, social media influencer and entrepreneur Paris Hilton joined Democratic Rep. Alexandria Ocasio-Cortez in announcing a new collaborative effort this week aimed at combating the creation and distribution of AI-generated sexually explicit imagery without consent.
The initiative, unveiled Thursday, seeks to raise awareness of the growing prevalence of artificial intelligence tools that can produce realistic deepfake pornography using the likenesses of real individuals. The effort calls for legislative and technological solutions to protect potential victims and hold creators and distributors accountable.
Hilton, who has previously spoken publicly about being targeted by nonconsensual explicit content earlier in her career, said that the proliferation of AI tools “makes it easier than ever” for deceptive imagery to spread and cause harm. She urged lawmakers and technology companies to act urgently to establish safeguards.
Ocasio-Cortez, a member of the House Committee on Energy and Commerce, emphasized the need for stronger legal frameworks that can address the unique challenges posed by AI-generated content. She noted that traditional privacy and harassment statutes may not fully encompass the nuances of AI-enabled manipulation.
The pair’s announcement follows increased attention from lawmakers and advocacy groups concerned about how advances in artificial intelligence intersect with issues of consent, privacy and online safety. AI researchers and civil liberties organizations have also called for clearer standards and potential regulatory measures to limit the misuse of image synthesis technologies.