People do not judge a tool only by what it can do.
They judge it by whether they can return to it.
A weak tool that stays still is often easier to live with than a powerful tool that keeps moving.
This is already true of ordinary interfaces.
Move one button, rename one function, alter one path, and the user must re-enter the system.
Most users do not describe this as interface instability.
They describe it as failure.
They say: the system is bad.
AI intensifies this condition.
A new model is not only a new model.
It is also a new surface.
It changes tone.
It changes limits.
It changes speed.
It changes what works.
It changes what fails.
It changes what the user thinks the system is.
Progress therefore arrives in a broken form.
Capability rises.
Familiarity collapses.
The user must keep relearning the tool.
Before trust settles, the surface shifts.
Before habit forms, the workflow moves.
Before competence becomes quiet, the system becomes different.
This creates a hidden tax.
Re-entry cost.
Cognitive drag.
Low-grade mistrust.
The user must spend effort not only using the tool, but also finding the tool again.
Even when the tool is improving.
This is why AI can feel worse than it is.
The experience of a system is not made only of intelligence.
It is also made of continuity.
A tool may become more capable without becoming more inhabitable.
A tool that does not settle cannot easily become ordinary.
A tool that cannot become ordinary cannot easily become trusted.
Many people do not reject AI because they have measured its intelligence.
They reject it because they never get to settle inside it.
What looks like model weakness is often interface churn.
A powerful tool people cannot stabilize around will keep feeling broken.
Even when it is not.
Especially when it is not.
— Dennis Hedegreen, still checking