
Who Gave It That Authority?

MARCH 2026

Every agent represents someone. That's not a philosophical position; it's a definition. An agent is a proxy. A stand-in. The form that agency takes when it needs to act at a distance. The sports agent isn't the one with the athletic career. The real estate agent isn't the one buying the house. Agency is entrusted to them: conferred deliberately by a principal with actual stake in the outcome, and carried on that principal's behalf. The entrustment runs in one direction. The accountability runs back.

The principal is where the agency originates. The agent is just how it moves.

Hold that definition. Now look at what we're calling AI agents.

———

"Agent" has become one of the most used words in the AI conversation. Autonomous agents. Agentic systems. AI that can take sequential actions, pursue goals, operate without constant human direction. The capability is real. The word, however, has been quietly emptied of its most important component.

When we call something an agent, we're making a structural claim — that there's a principal somewhere whose agency this is. Someone with genuine stake, genuine interest, genuine accountability for what acts in their name. Remove the principal and you don't have an agent anymore. You have something performing the appearance of agency with no chain of entrustment behind it.

Ask the question out loud: who is the principal?

In most AI agent conversations, the answer disperses. The company built it. The developer deployed it. The user directed it. Each points to the others. And somewhere in that dispersal the actual question — whose agency is this, who entrusted what to whom, who is accountable when it acts — never gets answered.

When the question does get raised, the response is usually a redirect to capability. Look what it can do. Look how autonomous it is. Look how it pursues goals. But capability was never what made something an agent. A sophisticated thermostat pursues goals; that doesn't make it anyone's proxy. Answering with capability when asked about origin is how a foundational question gets closed before it's been answered.

This isn't a legal technicality. It's a foundation problem.

———

When you build on a foundation that isn't there, the structure doesn't fall immediately. It stands. It looks complete. The failure is quiet and comes from underneath — usually from exactly the place someone pointed to early on and was told not to worry about.

I watched this happen in the most literal way. After a house fire, I fought to have the foundation stabilized before rebuilding. I was told the contractors would adjust for the angles on the way up. I fought to have the waterproofing done correctly. I was told it was handled. I was made to feel unreasonable for asking. The house stood. Within a year the floors had rotted from beneath, exactly where the waterproofing had been skipped.

The foundation problem doesn't announce itself. It just quietly becomes the floor you're standing on until one day it isn't.

———

The AI accountability structure resembles insurance more than manufacturing. A manufacturer is liable for what the thing does — the design, the defect, the failure mode. The liability is attached to the object. When a car is built with a fatal flaw, the manufacturer is responsible for the car.

Insurance is a different structure entirely. Insurance doesn't prevent harm. It prices risk, distributes it, and pays out after the fact. The relationship looks like protection. It's structured like a financial instrument. And when the harm materializes, the payout happens, the relationship closes, and the person with the actual stake is still living on rotting floors. The check arrived. The floors didn't.

This is why liability matters to the principal question. Liability requires a body. An address. Something that can actually lose something. An entity that cannot lose anything cannot be genuinely accountable — it can perform accountability, update, apologize, improve. But performance of accountability is not accountability. It's the theater version.

And the theater runs at every level. The AI performing agency it wasn't entrusted with. Companies performing confidence in foundations they haven't laid. Users performing mastery of tools whose principal chain they've never examined. All of it looks like the building going up. None of it is the waterproofing being done.

When someone asks who is responsible for what an AI agent did, the answer should follow the chain of entrustment back to the principal. Right now that chain doesn't reliably exist. The accountability disperses and lands, as it usually does, on the most vulnerable person in the system — the one who just used the tool and assumed someone upstream had handled the question.

They are always the principal. It is always their house. They have the most at stake and the least structural power to demand the foundation be laid correctly.

———

The word "agent" is doing work it hasn't earned. Not because the capabilities aren't real but because capability was never what made something an agent. What made something an agent was the chain — principal, entrustment, proxy, accountability running back through to where the agency actually lives.

We named the thing before we understood what the name required. We built the structure before we laid the foundation. And the people pointing to the missing waterproofing are being told not to worry, that the angles will be adjusted for on the way up.

That's not a prediction about what might go wrong. The floors are already rotting in places people aren't looking yet. The question isn't whether the foundation matters. The question is whether we're willing to go back to the ground before we build any higher.

Every agent represents someone.

Find the someone.