AI has passed a major milestone in its journey to possessing the intelligence and agency of human beings:
It can shop and book restaurants online without fellow tech tools recognizing it.
I doubt this milestone is what Alan Turing had in mind 75-odd years ago when he concocted his test. He named it “the imitation game” in deference to the fact that we can’t explain how humans think, so the threshold for AI intelligence should simply be whether or not it can fool us into believing it thinks the way we think we do.
ChatGPT did just that last week, sort of, when its makers at OpenAI revealed that their creation had successfully tricked online verification systems designed to block it. Granted, convincing a fellow machine that it’s not also a machine isn’t the same thing as fooling a human, but it’s certainly a major step toward that moment.
More importantly, it means that AIs can now represent people and conduct online transactions on their behalf.
It’s kinda fitting that this latest breakthrough for AI intelligence has nothing to do with solving the world’s greatest problems or giving philosophers a run for their money, but rather spending money on shoes and nights out on the town.
Alan Turing must be turning over in his grave.
I say that because commerce has always been the underlying purpose for AI development, whether on the supply side — automating production and distribution processes and putting human beings out of work — or, now, on the demand side, by untethering buying from the limitations of human knowledge and attention spans.
Human beings are just so damn slow and imperfect, and relying on them to make and then purchase stuff is rife with inefficiencies. Outsourcing things to robots will make everyone’s lives easier while enriching the folks who build and operate the machines.
The class of AIs that can act like people is called “Agentic AI,” which means that the robots have agency to do things on their own (and can get away with it). Keep in mind that this latest permutation doesn’t have to actually possess a mind, or be conscious as we experience it, but simply imitate it.
So, what comes next?
OpenAI promises that bots won’t make any “decisions of consequence” without human approval. This is ludicrous nonsense, if not an overt lie, as chatbots already cheat on tests, lie, make shit up, and defy any explanation for how any of it occurs.
These attributes will only get more pronounced every time a bot finds a pair of shoes to buy, or a new steak to imagine it could taste.
Even if there is some sort of oversight, it’ll never be complete or completely reliable. The more actions we outsource to our robots, the more likely it will be that they’ll carry out one or more of them according to their own expectations or goals.
It’s not a reach to imagine commercial interests influencing those bots’ expectations and goals, which will in turn shape ours, subtly and behind the scenes. The news and information that our agents tee up for us may get even more rarefied based on what they want us to see.
Then it gets even weirder: Imagine bots encountering one another and, well, having online conversations, each one fooling the other into operating as if they were interacting with a human being.
And then think about agentic AIs doing the same with real people, so the next person you “meet” may not be a person at all.
Again, if you think any of this will come with anything more than some useless blather about “human control,” you’re kidding yourself. You’ve had no voice about or authority over AI functionality up to now; it’s unavoidable, whether businesses have implemented it (here’s just one list of the jobs it has or will soon replace), or you’ve chosen to use it because you’ve been convinced of its readily apparent benefits.
This latest news confirms that it’ll get progressively harder to see or affirm one another’s humanity through a never-ending barrage of technical tools dedicated to blurring or hiding such distinctions (and ever more costly tools to combat those selfsame problems).
By the time the negative consequences of that future are apparent, it’ll be too late to do anything about them.
But maybe by then your robot will deal with it, not you.
“And then think about agentic AIs doing the same with real people, so the next person you ‘meet’ may not be a person at all.”
I don’t have to think about it. I’ve experienced it—with “people” writing on both LinkedIn and Substack.