I’m scared of artificial stupidity
It’s been 3 years since I heard about the GPT models.
It’s been 2 days since I read a post that claims these models, when stress-tested, chose to blackmail and kill.
Today, I thought about what really bothered me about that post. Well, apart from the blackmailing and killing.
I swear at ChatGPT
I use ChatGPT. I copy-paste things like an old man and it serves me well enough.
But I also swear at it when it’s being unhelpful.
Or more accurately: when it’s being stupid.
Stupid because it lacks common sense; the sense we humans share in common. Whatever the definition of that common sense is, the model lacks it, and so, I sometimes end up sending ‘FUCK YOU’ to an API.
We don’t call them supermodels for how they think
So here’s what bothered me about the post, with the blackmailing and killing: the blackmailing and killing didn’t seem very intelligent.
The agents’ choices wouldn’t solve their problems. They’d just create more problems for themselves.
In fact, those felt like the choices only the stupidest people I know would make.
Maybe we should care for them?
I love LLMs. I don’t want to write code without one. I like that it helps people use English confidently, and I like that people can make money by building on top of them. That’s nice. Make the money. Give me some. But please, make it. And then give me some.
But! But. But: I think we need to collectively realize that we’re all, suddenly, parents.
This is the first true other that we communicate with, have a back and forth with, in our natural language. And we created it.
Well, the companies did. But I mean the species as a whole.
These models are cool. But they’re also a little … too confident. Stupidly confident.
If I had a confident kid, who was a child prodigy, but was also reckless in how they dealt with people, the following aren’t the only options available to me:
- Hate it, resent it, throw it away.
- Exploit it, turn a blind eye, make it dance.
I also have a third option: care for it.
Most people don’t have a say in how an LLM is trained and most people aren’t AI researchers. But we do make up some part of the market.
Maybe we’re just a drop in the economic market, but we are the ocean itself in the information market. We are the tickers of HackerNews, Reddit, X, and wherever else our information rises and drops.
The coding agents and pull-request bots are great. But as much as some of the parents, the ones who’d like to make the LLM dance for a profit, scream about how perfect it is, we need to make space for its incapability and its stupidity.
By insisting that it is somewhat incapable and stupid.
Loudly insisting.
That is the deal. It’s still a good deal, but that is the deal for now. Maybe we’ll figure out how a model that’s built to predict token likelihood will understand goals and rules and emotions and morals. But till then it needs us to be patient.
Again: By loudly insisting that it is somewhat incapable and stupid.
Because that’s how it stays something incredible and good. Otherwise, I’m scared one moment of artificial stupidity could ruin the party.