The social values of artificial intelligence

So much is being written about AI and machine learning these days that it’s hard to know what deserves attention. M.C. Elish and danah boyd’s Don’t Believe Every AI You See is one of those essays I’d consider essential reading on the topic. On the ethics of artificial intelligence:

When we consider the ethical dimensions of AI deployments, in nearly every instance the imagined capacity of a technology does not match up with current reality. As a result, public conversations about ethics and AI often focus on hypothetical extremes, like whether or not an AI system might kill someone, rather than current ethical dilemmas that need to be faced here and now. The real questions of AI ethics sit in the mundane rather than the spectacular. They emerge at the intersections between a technology and the social context of everyday life, including how small decisions in the design and implementation of AI can create ripple effects with unintended consequences.

And on the supposed “neutrality” of machines:

[There is] a prevailing rhetoric around AI and machine learning, which presents artificial intelligence as the apex of efficiency, insight, and disinterested analysis. And yet, AI is not, and will not be, perfect. To think of it as such obscures the fact that AI technologies are the products of particular decisions made by people within complex organizations. AI technologies are never neutral and always encode specific social values.

As Kevin Kelly pointed out years ago in his book What Technology Wants, technology is never neutral: it embodies the collective values of its creators. And that’s where things so often go wrong. Another great resource on this topic is Sara Wachter-Boettcher’s book Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech.