A central tenet of AI development is that human beings, and all of the messiness of our consciousness, are nothing more than complicated machines.
This belief lets the boffins working on AGI think they can build machines that aren’t just brilliant mimics but de facto entities, aware just like people.
It also provides cover for treating people like machines: pushing our buttons and pulling our levers to make us jump, purchase, and obey.
My mind floods with quotes and clichés. “The prediction…contributed to its own accomplishment” (Edward Gibbon). “If one has a hammer one tends to look for nails” (Silvan Tomkins). “What you look for, you’ll find” (Tony Robbins).
The funny thing is that nobody can explain human consciousness, or at least there’s no scientific agreement about what it is, where it resides, or what it does.
Aristotle and Plato couldn’t agree on whether the mind and brain were one and the same, and Descartes offered a compromise by locating the ethereal mind in the corporeal pineal gland. Concepts of individuality and a soul depended on the premise that consciousness was something more than the sum of its physical parts, even if that meant it couldn’t be adequately explained.
This all changed when neuroscientists got involved in the debate.
Ever-improving techniques allowed scientists to untangle masses of neurons to reveal how information was captured and then distributed, while watching and measuring what brains did when they did it. Attributes that had once seemed core to humanness, like emotions and the mechanisms behind choice, looked less mystical when cameras and sensors revealed them.
They looked more mechanical.
Psychology followed a similar path, moving from the study of the mind as an impenetrable black box to treating the physical operation of the brain. Scientists concluded that what people thought was, in large part, a physiological phenomenon, not something purely psychological.
Earlier this week, a study of over 600,000 mouse brain cells revealed incredibly detailed insights into how mice perceive and then interact with their world. But the study concluded:
“It is currently not possible to say whether the observed brain activity directly causes a decision to be made or is only associated with the process.”
So, did the mouse decide to go for the cheese, or was its apparent intentionality the result of its digestive system telling it what to do?
The question doesn’t matter to the development of AI, which, with its basis in data science, can accomplish amazing things without ever settling whether there’s a ghost in the machine.
The prompts that make us tick when it comes to buying stuff or getting angry about politics can be mapped from past behavior. Chatbots can construct sentences that appear human because the ways we construct sentences are so predictable.
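That predictability is easy to demonstrate with even the crudest statistical model. The sketch below is a toy bigram predictor, nothing like the transformer models behind modern chatbots, and every name and corpus line in it is illustrative; it simply shows that counting which word tends to follow which is enough to start guessing how a sentence continues.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A tiny made-up corpus for illustration only.
corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
print(predict_next(model, "sat"))  # "on" is the only follower of "sat"
```

Scaled up by many orders of magnitude, and with far richer context than a single preceding word, this counting game is the family of techniques that lets a chatbot construct sentences that appear human, without any claim about awareness.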
But the assumption that getting better and better at it will somehow, one day, yield a machine that is as self-aware as a human is, well, nothing more than an assumption. There’s no scientific proof that it’s possible, let alone likely or desirable.
What we get in the meantime is a world in which we’re treated more and more like machines.
AI systems will get better at assessing our needs and interests, our desires and fears, and their functions will become more embedded in how we make our choices. Who needs to recognize consciousness if triggers, apparent or otherwise unknown, can be pulled to move us in this or that direction?
There’s no esoteric quality of “humanness” to respect if we’re just machines making decisions based on prompts and predictions.
It won’t matter if AIs ever find consciousness. Their development is dependent on us losing sight of ours.
What if we could create machines that, in a matter of days, could all be instructed, globally, to stop going outside, stop being with others... whether for fun or to celebrate their last breaths... stop serving others' essential needs in exchange for a living wage, stop worshiping in groups, stop educating their children... and keep it up for a few years? What if? I'm just sayin'. Not that it could ever happen.