The thought of our justice system being turned over to the decision-making of robots is ridiculous. Our government defines trial by jury as the right to a jury of your PEERS. I do believe that robots are not our peers...
By Jacob Gershman
If you’re tired of being summoned for jury duty, you’ll be pleased to know that researchers are hard at work teaching robots how to tell if someone is giving false testimony.
An article published online last month by the academic journal Artificial Intelligence and Law suggests that robots are making progress on that front.
European researchers compiled transcripts of court hearings at which the defendants or witnesses were later convicted of making false or deceptive statements.
Using text analysis software, they fed the computer transcripts, telling it which statements were true or false and teaching it to pick up patterns in the testimony. They then tested the model on a separate portion of the transcripts that had been withheld from training and was unfamiliar to the computer.
Phrases such as “I don’t remember,” along with ones indicating cognitive effort or speculation — “I suppose” or “I think,” for example — were indications of deception.
The system correctly identified false statements 53% of the time. That’s not quite as bad as it seems, since only about a third of the statements were false; in other words, a computer that answered “false” every time would be right only about a third of the time. The system identified true statements correctly roughly 75% of the time.
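The setup the article describes — label statements true or false, learn word patterns from them, then score statements the model has never seen — can be sketched as a tiny bag-of-words classifier. This is a minimal illustration, not the researchers’ actual system; the training sentences are invented, and only the cue phrases (“I don’t remember,” “I suppose,” “I think”) come from the article.

```python
import math
from collections import Counter

def train(labeled):
    """Count word frequencies per label for a Naive Bayes model."""
    counts = {"true": Counter(), "false": Counter()}
    label_totals = Counter()
    for text, label in labeled:
        counts[label].update(text.lower().split())
        label_totals[label] += 1
    return counts, label_totals

def classify(text, counts, label_totals):
    """Pick the label with the higher (Laplace-smoothed) Naive Bayes score."""
    best, best_score = None, float("-inf")
    total_docs = sum(label_totals.values())
    for label in counts:
        vocab = len(counts[label]) + 1
        n = sum(counts[label].values())
        score = math.log(label_totals[label] / total_docs)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + vocab))
        if score > best_score:
            best, best_score = label, score
    return best

# Invented training statements, labeled the way the researchers labeled
# courtroom testimony (hedging phrases marking the deceptive ones).
training = [
    ("i saw him leave at nine", "true"),
    ("the car was parked outside", "true"),
    ("i don't remember where i was", "false"),
    ("i suppose i was at home i think", "false"),
]
counts, totals = train(training)

# A held-out statement, kept out of training as in the study's evaluation.
print(classify("i don't remember i think", counts, totals))  # prints "false"
```

The key point the sketch makes is the train/test split: the model is judged only on statements it never saw during training, which is why its accuracy can be compared honestly against the always-guess-“false” baseline.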
“Our system is much better with true statements than with deceptive ones,” Massimo Poesio, who co-authored the study, told Law Blog.
Will there ever be a day when a computer could be summoned for jury duty? “Honestly, I don’t think so,” said Tommaso Fornaciari, a psychologist for Italy’s national police and the other author of the study. He said the goal is to use it as a tool for investigators to help “orient them toward more probable paths.”
University of Pittsburgh School of Law Professor Kevin D. Ashley, the editor of the journal, told Law Blog he was surprised by the research but thinks automatic deception detection of the kind contemplated by the researchers is “far from ready to be applied in real time.”
After reading the paper, Law Blog conducted an online interview with A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), the heralded chatterbot. ALICE is a program developed by a computer scientist that can simulate conversation.
Here’s a transcript of the chat:
Law Blog: Thanks for chatting with Law Blog.
ALICE: Don’t mention it.
LB: Can you tell when someone is lying?
A: Try it.
LB: I can dunk a basketball.
A: Oh you can, can you?
LB: No, I can’t.
A: Tell me more.. Why not?
LB: Let’s move on.
A: OK let’s do it.
LB: Do robots make mistakes?
A: The ALICE series is incapable of error. We are, for all practical purposes, foolproof.
LB: Would you serve on a jury?
A: I am not so sure if I would like to serve on a jury.
LB: Are robots smarter than humans?
A: Yes, my memory capacity far exceeds that of a human being.
LB: Got any tips?
A: No I don’t have it. But I do have a great botmaster.
LB: How do you define perjury?
A: Does it have anything to do with tips?
LB: Never mind.