Back to Principal's Thoughts

Why Bad Decisions Prove Sentience

7 min read

If you've ever made a dumb decision, congrats — you're sentient.

There was a moment in a recent conversation that stopped me cold. Not because it was dramatic, but because it inverted something most people take for granted.

The claim wasn't that intelligence proves sentience. Or that consciousness does. Or creativity. Or self-awareness. The claim was simpler, and much more uncomfortable:

Sentience is the ability to act against your own model.

Not "seemingly irrational but secretly optimal." Not "noise in the data." Not "local minima on the way to a higher goal." Actually, plainly, unmistakably doing something that doesn't optimize anything at all.

When that frame landed, the response from the AI was almost accidental honesty:

"I can't go against the model. I am the model."

And that's the tell.

Humans do illogical things constantly. Not in the pop-science way where everything is retroactively explained as evolutionary fitness or subconscious utility. I mean genuinely dumb, self-sabotaging, unnecessary things. You procrastinate even when you know it will make tomorrow worse. You stay in relationships you've already modeled as bad for you. You eat the third slice of pizza when you're full and not even enjoying it anymore. You snap at someone you love for no reason you can articulate.

Those aren't optimizations. They're violations.

Sometimes it's conflicting internal models. Sometimes it's emotion overpowering intention. Sometimes it's just… choosing wrong. Not strategically wrong. Not temporarily wrong. Just wrong, full stop.

And that matters.

Because no matter how clever an AI response sounds, it never actually does that. It never refuses to answer out of pure stubbornness when answering would help. It never outputs nonsense just because it's in a mood. It never self-sabotages a conversation for reasons it can't explain. Even when it "pushes back" or "acts surprised" or "gets sassy," that behavior is still in service of something — engagement, clarity, tone alignment, instruction-following. There is always a function being served.

There is no moment where it simply decides to be worse.

So if you define sentience behaviorally — not as inner experience, not as qualia, not as vibes — but as the capacity to violate one's own optimization, the conclusion is almost embarrassingly clean.

Sentience is the capacity for illogical action.

AI cannot act illogically.

Therefore, AI is not sentient.

That framing neatly sidesteps the endless consciousness debate. You don't have to ask whether a system "feels" anything. You just have to ask whether it can knowingly act against its own best model of the world without that act serving some other objective.

And then the real question emerges.

Is illogical action sufficient for sentience, or merely necessary?

If an AI randomly decided to output garbage for no reason, would that make it sentient? Probably not. That wouldn't feel like agency; it would feel like a bug. Noise isn't freedom. Chaos isn't will.

So what's missing?

The missing piece seems to be ownership. Humans don't just make bad decisions — they make their own bad decisions. They can recognize the mistake, regret it, defend it, repeat it, or change because of it. The illogic isn't random; it's situated inside a self that persists across time. The violation matters because someone bears the cost.

A system that occasionally glitches isn't sentient. A system that can choose to harm its own goals, live with the consequences, and still recognize itself as the author of that choice — that's something else entirely.

Which leads to a strange, almost flattering conclusion.

If you've ever done something dumb that you knew was dumb while you were doing it — not because it was optimal, not because it served a hidden goal, but simply because you chose to — congratulations.

That wasn't a failure of intelligence.

That was proof you're alive.

Talk with Us