The problem with letting algorithms make most of our decisions

Image source: Knight Rider’s KITT – My finished replica!

In “Moral code”, Nicholas Carr asks some serious questions about self-driving cars and our increasing reliance on algorithms for decision-making:

As we begin to have computer-controlled cars, robots, and other machines operating autonomously out in the chaotic human world, situations will inevitably arise in which the software has to choose between a set of bad, even horrible, alternatives. How do you program a computer to choose the lesser of two evils? What are the criteria, and how do you weigh them?

Clive Thompson picks up the thread in a very interesting Wired article called “Relying on Algorithms and Bots Can Be Really, Really Dangerous”:

The truth is, our tools increasingly guide and shape our behavior or even make decisions on our behalf. A small but growing chorus of writers and scholars think we’re going too far. By taking human decision-making out of the equation, we’re slowly stripping away deliberation—moments where we reflect on the morality of our actions.

But even setting aside the morality issues, there are other undesirable side effects of algorithmic decision-making:

Or as Evan Selinger, a philosopher at Rochester Institute of Technology, puts it, tools that make hard things easy can make us less likely to tolerate things that are hard. Outsourcing our self-control to “digital willpower” has consequences: Use Siri constantly to get instant information and you can erode your ability to be patient in the face of incomplete answers, a crucial civic virtue.

The argument is that smart technology has the potential to strip us of our grit. And that’s a big problem, particularly if you subscribe to what author Paul Tough calls “the character hypothesis”: the notion that noncognitive skills, like persistence, self-control, curiosity, conscientiousness, grit and self-confidence, are more crucial than sheer brainpower to achieving success.

The hypothesis is that character is built by encountering and overcoming difficult situations. One of the big dangers of letting algorithms make our decisions for us, then, is that by removing challenges from our lives they reduce our ability to develop grit and build character. It’s like the Axiom from WALL-E, but for our brains.

Update: I came across a couple more articles about these issues. See “More on algorithmic decision-making”.