There is a version of the AI conversation that almost nobody has. Everyone talks about what AI can do. Fewer people talk about why most organizations still use it badly. And almost nobody talks about one of the deeper structural reasons.

The problem is not only technical. It is not only about cost, security, regulation, or unclear use cases. Those things matter. But there is another factor underneath them, and it is much less comfortable to say out loud: the people inside an organization who are best positioned to integrate AI into a workflow are often the same people whose role becomes smaller if that integration actually works.

That is not the whole story. But it is a larger part of the story than most organizations want to admit.

Incentives, Not Ignorance

This is not stupidity. It is not laziness. It is not even simple hypocrisy. It is rational self-preservation inside an incentive structure that asks people to help dissolve part of their own importance.

A middle manager whose team handles coordination, reporting, scheduling, and oversight can often see very clearly that AI can compress those tasks. That same manager also has every reason not to compress them too far. A team is budget. A budget is status. Status is survival. The better the compression works, the less organizational mass remains around the role.

So the incentive is rarely to reject AI completely. The incentive is to adopt it just enough to look current, strategic, and cooperative. Just enough to satisfy leadership, the board, or the internal transformation narrative. But not enough to remove the layer that gave the role its reason to exist.

The Gatekeeper Problem

The issue goes beyond middle management. Many senior people built their careers by learning how to navigate large systems: procurement, compliance, reporting hierarchies, budget cycles, approval chains, stakeholder management. Those things are not just neutral processes. They are also power structures. The person who knows how to move through them holds leverage. The person who helps simplify them away removes some of that leverage.

AI compresses process. It shortens the distance between intent and output. That is good for the organization. It is threatening for the person whose value was partly tied to the complexity itself. If the maze gets thinner, the map matters less.

Why Speed Feels Dangerous

This is where the conversation often gets confused. People talk as if organizations are hesitating because the tools are still imperfect. But imperfect tools can still be structurally dangerous.

A reporting workflow that used to require an analyst, a manager, a reviewer, and a slide pass can suddenly be compressed into the work of one competent operator with the right tools. That does not mean every institution should move at solo speed. Some layers exist because mistakes are expensive. Legal review exists for a reason. Compliance exists for a reason. Certain approvals exist because failure has a cost.

But many organizations no longer know the difference between necessary control and inherited thickness. They cannot easily tell which layers protect quality and which layers mainly preserve the shape of the old machine. That is where AI becomes politically dangerous inside a company, because it does not only speed up tasks. It pressures the legitimacy of the layers around the tasks.

The Electricity Problem Again

We have seen this pattern before. When electric motors arrived, factories did not immediately reorganize around them. They often bolted electric power onto layouts designed for steam. New power source. Old factory logic.

The real gain came later, when the system itself was redesigned around distributed power. But redesign threatens the structures built around the old model. AI is being used the same way now in many organizations: bolted onto the existing system to produce faster emails, faster summaries, faster slide decks, faster reports.

The harder question is still being avoided: what if the report, the meeting, and the approval chain are themselves part of the inefficiency? That question is uncomfortable for an obvious reason. The people in the room often have careers built on those structures.

What This Actually Means

I am not predicting institutional collapse. Large organizations can survive a great deal of inefficiency. They have capital, distribution, brand inertia, legal insulation, and political cover. That is not the point.

The point is that a gap is widening between what low-friction operators can do and what large organizations can produce relative to their size. That gap already existed. AI is making it wider. And many organizations will close it slowly, because real implementation does not just threaten old tasks. It threatens old justifications.

That is why so much AI adoption still looks shallow. The tool arrives. The language changes. The structure absorbs it. For a while, that is enough.

Until a low-friction operator produces more than the structure's layers can justify.

— Dennis Hedegreen, trying to see the structure