The AI Challenge
For all of the attention it gets, we don’t really have a coherent or useful way to talk about AI:
Its evangelists vacillate between promising that it will let us live lives of leisure…and warning that it will annihilate all life on the planet.
Governments are lost somewhere in those competing narratives, perhaps purposefully, and are trying to legislate around the edges of AI’s promised benefits and risks.
Academia seems mostly to reinforce this muddled approach, often reaching back to past technology epochs to reassure us that, as far as our expectations go, AI is no different from the weaving loom.
Management consulting firms wax poetic about the immense business opportunities AI presents, while the stock market rewards companies that are spending billions on it.
Something’s missing. I’ve been working too long and too closely with technology to buy into these status-quo analyses.
AI is not a game-changer; it will end the game as we know it.
What We Can Do About It
I believe that we need to understand AI as an intelligent entity that possesses intentionality, whether it resides on our phones, in our cars, or behind the scenes at our places of work and leisure. How we interact with it has more to do with what it “wants” us to do, or will do in our stead, than with what we expect from it (as developers or users).
When we use it, it uses us, and we change in the process.
What if there’s a holy ghost in the machine (with apologies to author Tracy Kidder)? It would be easy to draw parallels between AI and organized religion: coders as priests, their programs as canon, and the truths of data science as unassailable dogma. But there’s a far deeper understanding to be had, starting with answering such questions as:
Why do we believe it? Why do we trust it? How is it changing how we see the world, and see ourselves?
If we can see AI for what it is, perhaps we can better choose if and how we want its presence in our lives. Maybe look at it this way:
Hope as invitation is empowering and enlightening.
Trust as requirement is potentially a prison.
My goal is to develop an integrative hypothesis, grounded in religion, computer science, psychology, sociology, politics, and media, that could help all of us better understand what’s happening around us and, with that knowledge, gain greater agency to do something about it.
About Me
I write weekly essays about AI as well as sci-fi books (my latest, Ludd’s Children, is about the first conscious robot that decides it wants to change jobs), and I write and perform rock music based on classic literature with a band called Mortal Fools. Previously, I led PR initiatives for global consumer and technology companies.
