Jacobi Journal of Insurance Investigation


March 8, 2025 | JacobiJournal.com — A Minnesota deepfake pornography bill aims to combat AI-generated explicit content by cracking down on companies that provide “nudification” technology. The legislation, introduced by Democratic Sen. Erin Maye Quade, would impose civil penalties of up to $500,000 on websites and apps that allow users in Minnesota to generate explicit fake images.

The deepfake pornography bill is seen as a direct response to growing public concern over AI-driven exploitation. Advocates argue that current laws do not adequately address the harm caused when nonconsensual explicit images are created, even if they are not widely distributed. By targeting platforms that enable the generation of these images, Minnesota lawmakers hope to close a dangerous legal gap, protect victims from reputational damage, and set a precedent for other states considering similar legislation.

Why Advocates Say the Law Is Needed

Supporters argue the law is necessary because AI deepfake technology has evolved rapidly, making it easy to create realistic, nonconsensual explicit content. Molly Kelley, a Minnesota woman, testified that someone she knew used AI to generate fake nude images of her and at least 80 other women, all of whom had ties to the offender.

“It’s not just about the distribution of these images—it’s the fact that they exist at all,” Maye Quade emphasized.

Privacy experts add that victims of AI-generated abuse often face long-lasting harm, including reputational damage, emotional trauma, and difficulty removing images once they spread online. According to the Cyber Civil Rights Initiative, more than 90% of individuals impacted by nonconsensual explicit imagery report significant mental health effects, underscoring why advocates argue swift legislative action is critical.

State and Federal Efforts to Regulate AI Deepfakes

Minnesota’s proposal aligns with growing nationwide efforts to regulate AI-generated sexual content. The U.S. Senate recently passed the TAKE IT DOWN Act, which would require social media platforms to remove nonconsensual explicit images, including AI-generated ones, within 48 hours of a victim’s request. Meanwhile, states including Kansas, Florida, and New York have introduced legislation to criminalize AI-generated child exploitation material.

Legal analysts note that this patchwork of state and federal measures reflects both the urgency of the issue and the complexity of regulating rapidly evolving technology. The National Conference of State Legislatures (NCSL) reports that more than a dozen states considered or passed laws addressing deepfakes in 2024 alone, ranging from criminal penalties for nonconsensual sexual imagery to rules against election interference. This trend suggests lawmakers across the country are racing to establish guardrails before the technology becomes even harder to control.

Legal and Constitutional Challenges

Despite the bill’s intentions, AI law experts warn it may face constitutional challenges under the First Amendment. Riana Pfefferkorn of Stanford University notes that restricting the creation of content, rather than its distribution, could also conflict with Section 230, the federal law that shields online platforms from liability for user-generated content.

However, Maye Quade insists her bill is legally sound. “This technology is inherently harmful,” she said. “Tech companies cannot keep unleashing it without consequences.”

Learn more about the policy and legal challenges of AI technology in society by visiting the Brookings Institution’s deepfake research.


FAQs: Minnesota Deepfake Pornography Bill

What does the Minnesota deepfake pornography bill propose?

The bill would impose civil penalties of up to $500,000 on websites and apps that allow users in Minnesota to generate nonconsensual explicit AI images.

Why was the deepfake pornography bill introduced?

It was introduced to address the rise of nonconsensual AI-generated explicit content that harms victims, many of whom are targeted by acquaintances.

How does the Minnesota deepfake pornography bill compare to federal efforts?

It aligns with federal action requiring platforms to remove nonconsensual AI-generated explicit content within 48 hours, but goes further: rather than regulating distribution after the fact, the Minnesota bill penalizes the tools that generate such images in the first place.

Could the deepfake pornography bill face legal challenges?

Yes. Experts warn it may face First Amendment challenges, though supporters argue the bill is legally sound and urgently needed.


