(Mild spoilers ahead for The Mountain in the Sea by Ray Nayler)
I recently finished the novel The Mountain in the Sea by Ray Nayler (see Andrew Liptak’s excellent review here). On the surface, it’s about discovering an octopus colony that has evolved into a self-aware, intelligent community—and trying to communicate with them. But as with all good novels, it’s actually about other things. It’s about loneliness, understanding each other, conservation—and yes, our relationship with AI.
First, to get the AI thing out of the way: I don’t want this blog to sound like I am anti-AI. I use AI every day, both at the chat/thinking-partner level and the prototyping/vibe-coding level. I am a fan of using AI for the things that it’s good at. I just worry that we are not teaching people outside of the tech bubble what those things are. And that’s why we are seeing so many tragic stories right now about chat agents “guiding” people to horrific actions (see, for example, Let’s Talk About ChatGPT-Induced Spiritual Psychosis and ‘I Feel Like I’m Going Crazy’: ChatGPT Fuels Delusional Spirals).
With that as background, the book does a good job of highlighting some of the dangers of using AI for things it’s not good at. First, this quote makes a good point about how, with every new technology, we have to think about what can go wrong, not just what can go right:
When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution. Every technology carries its own negativity, which is invented at the same time as technical progress.[1]
Following from that, this quote about the main character “killing” their AI companion stood out to me:
That’s how this works. That’s how addictive this is—this need to feel like there is always someone there, unconditionally. Someone to talk to. Someone who understands. To not have to do the work myself to make myself understood. Instead, I just kept on with this self-deception, pretending I had someone when I did not. I know the doctors who prescribed you to me meant well. They thought they were helping me through a dark time. But in the end, you aren’t anything but a prosthesis. You can’t replace real support.
The other major theme in the book centers on our connectedness with each other and with the world, how language can get in the way of connection, and how lonely we’ve become as a society[2]. I love this call to empathy as a way to get ourselves out of that dilemma (emphasis mine):
Are we trapped, then, in the world our language makes for us, unable to see beyond the boundaries of it? I say we are not. Anyone who has watched their dog dance its happiness in the sand and felt that joy themselves—anyone who has looked into a neighboring car and seen a driver there lost in thought, and smiled and seen the image of themselves in that person—knows the way out of the maze: Empathy. Identity with perspectives outside our own. The liberating, sympathetic vibrations of fellow-feeling. Only those incapable of empathy are truly caged.
A book about discovering intelligent life in an octopus species with its own language and culture might seem like a weird premise. But it works really well here. It gets pretty heavy-handed towards the end, but it still made me think a lot about the “loneliness epidemic”, our relationship with AI, and the continuing role of empathy in making sure we stay connected with each other. Recommended!
[1] This line of thinking reminds me a lot of Kevin Kelly’s 2010 (!) book What Technology Wants, in which he makes a similar point that technology is never “neutral”. That’s ok, but we have to be prepared for it.

[2] I don’t think that’s a controversial statement anymore. See articles like The Anti-Social Century.