According to Mirror Online, police in the UK have used artificial intelligence for “predictive policing” since at least 2004. In practice, this means letting the AI sift through large amounts of crime data and decide where police resources get deployed.

A report published by the Royal United Services Institute (and people with any basic capacity to think) warns that AIs will simply learn to discriminate from humans. But the AI will treat its training data as unquestionable, insisting that it isn’t racist, it just knows the stats. AI can do amazing things, like predict someone’s risk of cardiovascular death, but removing the humanity from criminal justice is a horrible concept.

Humans are able to do great and terrible things. I tend to view the Holocaust, Bosnia and Rwanda as the products of humans without our humanity and not as some flaw in humanity. I may be a bleeding heart. But the only thing holding back the worst part of people is the best part.

An article I read today pointed out what happens when companies let broad groups of humans train an AI. Microsoft’s chatbot Tay was turned into an advocate of genocide within 24 hours of being trained by Twitter.

Twitter is a cesspool, according to both National Review & Mother Jones, so it’s not a shock that Twitter would be a bad way to teach an AI. But it’s not just Twitter that makes a poor data set for training an AI.

Earlier this year, researchers from the AI Now Institute investigated 13 U.S. police departments using technology for predictive policing. Of those 13, at least nine “appear to have used police data generated during periods when the department was found to have engaged in various forms of unlawful and biased police practices.”

According to TNW: “You create a neural network that predicts whether someone prefers chocolate or vanilla. You train it on one million images of people’s faces. The computer has no idea which flavor each person prefers, but you have a ground-truth list indicating the facts. You fire up your neural network and feed it some algorithms – math that helps the machine figure out how to answer your query. The algorithms go to work and sort data until the AI comes up with a two-sided list – you don’t give it the option to say “I don’t know” or “not enough data.” You look over the results and determine it was correct 32 percent of the time. That simply won’t do.”
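To make that concrete, here is a minimal sketch (my own illustration, not TNW’s code) of the setup the quote describes: a tiny classifier that is forced to answer “chocolate” or “vanilla” for every face, with no way to abstain. The face data below is random noise, so the model can’t genuinely do better than chance on faces it hasn’t seen, yet it confidently labels every one of them anyway.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the "one million images of people's faces":
# 1,000 samples of 128 features with no real link to flavor preference.
X = rng.normal(size=(1000, 128))
y = rng.integers(0, 2, size=1000)             # ground-truth list: 0 = chocolate, 1 = vanilla

X_train, y_train = X[:800], y[:800]           # data the network learns from
X_test, y_test = X[800:], y[800:]             # faces it has never seen

# A tiny logistic-regression "network": one weight per feature, trained by
# plain gradient descent -- the "math that helps the machine figure out
# how to answer your query."
w, b, lr = np.zeros(128), 0.0, 0.1
for _ in range(200):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))  # predicted probability of "vanilla"
    w -= lr * (X_train.T @ (p - y_train)) / len(y_train)
    b -= lr * np.mean(p - y_train)

# The model must pick a side for every face -- there is no
# "I don't know" or "not enough data" output.
pred = (1 / (1 + np.exp(-(X_test @ w + b))) > 0.5).astype(int)
print("accuracy on unseen faces:", np.mean(pred == y_test))  # hovers around chance
```

Whether the number comes out at 32 percent or 52 percent, the point stands: a system that is only allowed to pick one of two answers will keep answering, confidently, long after the data stops justifying it.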

We cannot let policing be run by the least human parts of humans. An AI that looks at your face and announces what a racist thinks of you has no place in justice.
