A University of San Diego professor is looking into how artificial intelligence could be a serious threat to the legal system. She says it could be used to make fake evidence or tamper with real evidence.
“Everybody’s talking about AI,” Dr. Orly Lobel, a University of San Diego law professor, said. “Everybody is thinking and experiencing the changes that we’re undergoing with advancements in AI.”
Criminal defense attorney Peter Blair and law researchers like Lobel say the possibility that AI deepfakes have already entered the courtroom is not just plausible, but likely.
“That's a little more scary, I think,” Blair said.
Deepfakes are images, sounds and videos that seem real but aren’t; a computer made them. In the courtroom, AI could be used to tamper with real evidence or to submit fabricated evidence. Take a suspect on trial, for example.
“The AI learns exactly their face, the facial movement, and then generates a video with a timestamp where they were at a different place than what they’re said to have been doing in this criminal proceeding. Like, ‘They didn’t engage in this current theft because they were at this bar at the exact same time,’” Lobel said. “The technology itself does not have bounds.”
Blair said AI reaching this level is intimidating but doesn’t remove the obligation to be ethical.
“Obviously, you have a duty to your client to present the best case possible, but your duty to your client does not trump your duty to be fully open, truthful and candid with the court,” Blair said.
AI in the court system isn’t all fraudulent. It can just as easily be an asset, used to transcribe or translate proceedings or to ease the documentation burden on clerks and judges.
“It's a little bit of a cat and mouse game always,” Lobel said.
She said AI isn’t going anywhere, so we may as well embrace the good while we work to manage the bad.
In his 2023 executive order on AI, President Joe Biden called for AI applications to embed an invisible watermark in their results that a computer can later recognize as AI-generated.