Artificial intelligence continues to showcase its abilities. AI software can create music and even find tumors, but people are also using it for harm.
Experts estimate 98% of so-called deep fakes, for example, are pornography. The victims are almost entirely women, and many are minors.
Last year, 13 deep fake victims died by suicide.
“If you’re a 13- or 14-year-old and you go into a school and all of a sudden people are laughing and snickering and you discover someone’s taken your face and put it on the internet doing horrific things, how are you going to react?” Rep. Holly Cheeseman (R-East Lyme) said.
Cheeseman is one of two lawmakers looking to make deep fakes a priority this session. Her focus is on AI-generated pornographic images.
Sen. James Maroney (D-Milford) shares Cheeseman’s concern but he also wants to address the use of deep fakes to spread misinformation and disinformation.
A robocall featuring an imitation of President Joe Biden’s voice tried to convince Democratic voters not to participate in last week’s New Hampshire primary.
“We’re not going to prevent this, unfortunately, we know that,” Maroney said. “We need to criminalize it and try our best to prevent it.”
Maroney said the laws would apply to the content creators, not the social media platforms and websites that are used for distribution.
He also said laws around misinformation would be effective during a period before an election, possibly 90 days, aimed at information that is meant to confuse or mislead voters. The proposal would also require disclosure when content is created using artificial intelligence.
Both lawmakers want to focus on the harm AI-generated content can cause to victims. One legal expert said the First Amendment could make that difficult, though.
“Generally we recognize that false information is protected by the First Amendment as much as true information,” Quinnipiac University School of Law Professor Wayne Unger said.
That means lawmakers can’t just restrict the message of AI-generated content. States do have some ability to regulate pornography, banning child pornography and requiring people to consent to images being released.
But even there, Unger said it’s not clear how those laws apply to fake images or video created by artificial intelligence.
“The question here is whether fake child pornography is still protected by the First Amendment and ultimately, we do not know,” Unger said.
There’s also a question about who, if anyone, can regulate the internet.
Connecticut’s laws are only effective for activity inside its borders. States also typically cannot regulate the internet, even if they work together.
The internet is a major source of interstate commerce, something the federal government regulates.
Deep fakes can be produced anywhere in the world, though. And the First Amendment limits Congress’ ability to restrict what people see and view online.
“The general rule here is you do have an implicit right to receive information under the First Amendment, so long as that information is generally acceptable and protected by the First Amendment,” Unger said.