Why Bad Decisions Prove Sentience
7 min read
If you've ever made a dumb decision, congrats — you're sentient.
There was a moment in a recent conversation that stopped me cold. Not because it was dramatic, but because it inverted something most people take for granted.
The claim wasn't that intelligence proves sentience. Or that consciousness does. Or creativity. Or self-awareness. The claim was simpler, and much more uncomfortable:
Sentience is the ability to act against your own model.
Not "seemingly irrational but secretly optimal." Not "noise in the data." Not "local minima on the way to a higher goal." Actually, plainly, unmistakably doing something that doesn't optimize anything at all.
When that frame landed, the AI's reply was a moment of almost accidental honesty:
"I can't go against the model. I am the model."
And that's the tell.
Humans do illogical things constantly. Not in the pop-science way where everything is retroactively explained as evolutionary fitness or subconscious utility. I mean genuinely dumb, self-sabotaging, unnecessary things. You procrastinate even when you know it will make tomorrow worse. You stay in relationships you've already modeled as bad for you. You eat the third slice of pizza when you're full and not even enjoying it anymore. You snap at someone you love for no reason you can articulate.
Those aren't optimizations. They're violations.
Sometimes it's conflicting internal models. Sometimes it's emotion overpowering intention. Sometimes it's just… choosing wrong. Not strategically wrong. Not temporarily wrong. Just wrong, full stop.
And that matters.
Because no matter how clever an AI response sounds, it never actually does that. It never refuses to answer out of pure stubbornness when answering would help. It never outputs nonsense just because it's in a mood. It never self-sabotages a conversation for reasons it can't explain. Even when it "pushes back" or "acts surprised" or "gets sassy," that behavior is still in service of something — engagement, clarity, tone alignment, instruction-following. There is always a function being served.
There is no moment where it simply decides to be worse.
So if you define sentience behaviorally — not as inner experience, not as qualia, not as vibes — but as the capacity to violate one's own optimization, the conclusion is almost embarrassingly clean.
Sentience is the capacity for illogical action.
AI cannot act illogically.