Two lawmakers have proposed a bipartisan bill, titled "The AI Fraud Deterrence Act," that aims to address the growing misuse of artificial intelligence in scams, particularly deepfakes targeting federal officials.
The legislation would modernize existing U.S. fraud laws to account for the role artificial intelligence has played in recent high-profile incidents in which AI-generated audio or video tricked officials or the public.
The bill, proposed by Representatives Ted Lieu, D-Calif., and Neal Dunn, R-Fla., seeks to expand penalties for AI-assisted scams and to criminalize the use of AI to impersonate federal officials.
“As AI technology advances at a rapid pace, our laws must keep up,” Dunn said in a statement announcing the bill. “The AI Fraud Deterrence Act strengthens penalties for crimes related to fraud committed with the help of AI. I am proud to co-lead this legislation to protect the identities of the public and prevent misuse of this innovative technology.”
Lieu agreed, telling NBC News last week that the majority of Americans want "sensible guardrails on AI," as they don't think a "complete Wild West is helpful."
Under the proposed law, the maximum fine for fraud would double from $1 million to $2 million in cases where AI is deliberately used to facilitate the crime. The bill would also add AI-mediated deception to the definitions of both mail fraud and wire fraud, making it possible to charge individuals who use AI to commit either offense.
In either case, convicted offenders could face million-dollar fines and up to 20 years in prison for mail fraud or 30 years for wire fraud.
The draft also targets the impersonation of federal officials with AI deepfakes, citing attempts earlier this year to mimic White House Chief of Staff Susie Wiles and Secretary of State Marco Rubio.
Fraud is as old as humanity, but experts say that AI has sharply improved the quality of fraudulent content.
In December, the FBI warned that “generative AI reduces the time and effort criminals must expend to deceive their targets,” adding that AI “can correct for human errors that might otherwise serve as warning signs for fraud.”
Maura R. Grossman, a research professor of computer science at the University of Waterloo in Ontario and a lawyer, also believes AI enables a new era of deception: “AI presents a scale, a scope, and a speed for fraud that is very, very different from frauds in the past.”
The fact that AI facilitates crime is not the only problem, however. Observers worry that existing institutions, including the courts, are struggling to keep up with the rapid pace of development in the AI sector.
“AI years are dog years,” Hany Farid, professor of computer science at the University of California, Berkeley, and co-founder of GetReal Security, a leading digital-media authentication company, said of the speed of AI progress.
In the past, it was not too difficult to tell AI-generated content apart from the real thing, especially images. Today, however, generative models have become so advanced that even experienced users struggle to determine whether a piece of media is real or synthetic.
The FBI’s warning in December advised individuals to keep an eye out for discrepancies in images and videos to identify AI-generated media: “Look for subtle imperfections in images and videos, such as distorted hands or feet.”
Farid, however, believes that advice is outdated and even harmful. "The multiple hands trick, that's not true anymore," he said. "You can't look for hands or feet. None of that stuff works."
Lieu and Dunn's bill also emphasizes the labeling of AI-generated content, and it acknowledges that there is a time and place for such media. Content clearly labeled as satire, for instance, is exempt from punishment, because the tag makes clear it is not authentic.