Trust me, I’m an A.I.

An old friend and I had an interesting conversation a while ago. The discussion ranged widely, but a few topics stuck with me.

We started talking about the surveillance state, and how feeding everything into any system in an attempt to get a sharper picture of reality just means more noise, more risk and less gain. He was of a different opinion, wanting more surveillance in order to save lives. His view was that while increased collection and surveillance would lead to more data to sift through, we could always throw more people at it, or a powerful AI once we’re at that point.

Both of these are problematic for the same reason: there will always be people involved. Or maybe I should say there must always be people involved. Automating judge, jury and executioner would be a disastrous idea that few would be okay with.

Besides, our Western justice system is built on the premise that it is better to let ten guilty people go free than to convict one innocent one. Binary justice would be a bad idea, which would become apparent quickly once the mother brain started killing people over parking tickets.

If you collect everything on everyone in a database and give humans access to search it, at least a few of them will exploit that opportunity to look up friends, ex-partners, politicians and so on, and at least one of those will be willing to sell that information. I don’t think I need to say more than Ashley Madison here, considering the extortion and suicides that followed in the wake of the leak. People can be assholes, and as long as that stands true we should minimize the amount of data we store on people who have not committed any crimes. Police officers, doctors and even NSA contractors get busted all the time for “satisfying their curiosity” with the information that is right under their noses.

There is also data mining for nefarious reasons, like what Cambridge Analytica was up to around the 2016 US election and the Brexit referendum. By building a profile from your information, they could figure out what to say to you to change your mind. As for putting an AI in charge of sifting through the data, it will likely do nothing but add a ton of false positives for people to trust blindly. Until that AI becomes smarter than a human (which is a problem in and of itself), it will not be able to turn the data into anything reliable.
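The false-positive problem has a well-known arithmetic behind it, the base-rate fallacy: when the thing you are hunting for is rare, even a very accurate classifier flags mostly innocent people. A minimal sketch, using hypothetical numbers (one real threat per 100,000 people, a classifier that is 99% accurate both ways) purely for illustration:

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Probability that a flagged person is actually a true positive,
    via Bayes' rule: P(threat | flagged)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: 1 in 100,000 is a real threat, AI is 99% accurate.
ppv = positive_predictive_value(prevalence=1e-5,
                                sensitivity=0.99,
                                false_positive_rate=0.01)
print(f"Chance a flagged person is a real threat: {ppv:.4%}")
```

With these assumed numbers, fewer than one in a thousand flagged people is a genuine hit; the other 99.9% are the false positives people would be trusting blindly.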

Add to that the quality of the data: anything less than 100% could be a disaster. Google thinks I visited Mt. Rushmore a few months ago because a picture of the presidents was uploaded to my cloud. It wasn’t even an original; the presidents were all covering their faces with their hands. Google’s AI didn’t see the difference. And who could forget when the same AI tagged photos of Black people as “gorillas”?

It is an awful lot of trust to put in a computer system, a government, and ultimately a person. You lock your door not because you expect a burglar to come by, but because you want to be safe if one does. In the same way, it makes sense to acknowledge that the bell curve has two tails. As a result, we should avoid collecting information that can be abused unless the net value of doing so is well on the positive side of the scale.