Federal prosecutors in Wisconsin are appealing a district court ruling which held that artificial intelligence (AI)-generated child pornography was protected under the First Amendment of the U.S. Constitution.
The ruling was made in the case of Steven Anderegg, 42, who according to a U.S. Department of Justice (DOJ) press release allegedly “used a text-to-image generative artificial intelligence (GenAI) model called Stable Diffusion to create thousands of realistic images of prepubescent minors.”

At the time of Anderegg’s indictment, principal deputy assistant attorney general Nicole M. Argentieri said his arrest on charges related to AI child sexual image crimes was meant to make it clear that U.S. law prohibits the practice.
“Today’s announcement sends a clear message: using AI to produce sexually explicit depictions of children is illegal, and the Justice Department will not hesitate to hold accountable those who possess, produce, or distribute AI-generated child sexual abuse material,” Argentieri said.
Federal prosecutors argue the 2003 Protect Act bans the production, distribution, receipt or possession with intent to distribute of “a visual depiction of, including a drawing, cartoon, sculpture, or painting, that depicts a minor engaging in sexually explicit conduct.”
The law notes that a “visual depiction” includes any “computer generated image or picture, whether made or produced by electronic, mechanical, or other means.”
Should the lower court’s ruling stand on appeal, however, it would preclude future prosecutions for the possession of AI-generated child pornography.

***
South Carolina lawmakers are working to pass statutes which would make AI-generated child pornography, as well as morphed images of real underage individuals, illegal under state law.
S.C. attorney general Alan Wilson has championed these statutory changes over the course of multiple legislative sessions.
“These bills are a crucial step forward in ensuring South Carolina remains at the forefront of protecting our children,” said Wilson in a recent press statement. “As technology evolves, so do the dangers our kids face. We must act swiftly to close legal loopholes and give law enforcement the tools needed to combat these horrific crimes.”
“This is a situation where our laws have not kept up with technology,” Wilson added. “Someone can use artificial intelligence to create child sexual abuse material of a child that doesn’t really exist or take innocent photos of a real child from social media and use AI to generate explicit content.”
A S.C. Senate bill (S. 28) which seeks to criminalize “obscene visual representations of child sexual abuse” mirrors the language of the 2003 Protect Act. The bill has already cleared the Senate and has now been referred to the House judiciary committee.
FITSNews spoke this week with the bill’s author, Senate minority leader Brad Hutto.

“We are all thankful we live in the country that gives us Constitutional rights and protects our freedoms, but at the same time, we don’t want bad people to prey on young people and other vulnerable people,” Hutto said. “If we can do anything to protect them, we’ll do it, and if the language that we have in S. 28 doesn’t quite meet the standard of what the court has suggested, we’ll have to adjust it.”
“Of course,” Hutto added, “that’s a district court ruling and will be subject to appellate review.”
House and Senate lawmakers have also tried to address “morphed images” legislatively. Senate bill S. 29 – which has also cleared the Senate and been referred to the House judiciary committee – would criminalize those images, in which sexually explicit content is generated by editing an identifiable image of a real person.
“Right now these images could legally be produced with AI,” representative Brandon Guffey told us.
According to law enforcement experts, who are seeing an ever-increasing amount of AI-generated child pornography, this problem is only going to get worse in the years ahead.
“This is the worst, in terms of image quality, that AI technology will ever be,” a 2023 Internet Watch Foundation report concludes, noting that “Generative AI only surfaced in the public consciousness in the past year; a consideration of what it will look like in another year – or, indeed, five years – should give pause.”
“At some point on this timeline, realistic full-motion video content will become commonplace,” the report continued. “The first examples of short AI child sexual abuse material (CSAM) videos have already been seen – these are only going to get more realistic and more widespread.”
***
ABOUT THE AUTHOR …
Dylan Nolan is the director of special projects at FITSNews. He graduated from the Darla Moore School of Business in 2021 with an accounting degree. Got a tip or story idea for Dylan? Email him here. You can also engage him socially @DNolan2000.
***
WANNA SOUND OFF?
Got something you’d like to say in response to one of our articles? Or an issue you’d like to address proactively? We have an open microphone policy! Submit your letter to the editor (or guest column) via email HERE. Got a tip for a story? CLICK HERE. Got a technical question or a glitch to report? CLICK HERE.