Kashmir Hill asks an interesting question: Who do we blame when a robot threatens to kill people?
Last week, police showed up at the home of Amsterdam Web developer Jeffry van der Goot because a Twitter account under van der Goot’s control had tweeted, according to the Guardian, “I seriously want to kill people.” But the menacing tweet wasn’t written by van der Goot; it was written by a robot.
She goes on:
Bots will be bots. They won’t know if they’re doing something wrong unless we program them to realize it, and it’s impossible to program them to recognize all possible wrong and illegal behavior. So we’ve got challenges ahead. In the short term, [Clément Hertling, a Paris-based university student who wrote the software that powered the bot] suggested Twitter — and any other platforms bots might live on — could solve the offensive speech problem by allowing bots to self-identify in an obvious way as bots. “That would allow people (law enforcement included) to ignore what they say when it becomes problematic.”
The issue only gets scarier as the question extends to self-driving cars placed in morally ambiguous situations.