AI Agents Need Good Management, Just Like Employees Do
Interested in trying ChatCEO, our new AI tool for CEOs? If you lead a company of more than 50 employees, you are eligible to be a free early user. Message me to get started.
Let me say up front that this article is not arguing that AI will replace all knowledge workers. In fact, I agree with those who expect AI to create more knowledge-work jobs.
With that said, here is a concept I have been thinking about this week: AI agents need good management from a human, just like a team of people does.
Much of what I do now is preaching the gospel of good management. I believe the quality of a company at scale equals the quality of its management team. Those managers are the critical link between the business’s top-level strategy and where the work actually gets done. When they do their job well, managers keep that alignment in place. They ensure work isn’t done to fill time but with real progress toward a shared goal.
As agentic AI becomes part of how most of us work, I think there’s a similar dynamic at play in how you manage your agents. AI sometimes seems like magic, but we’ve all seen how enormously off-base it can be. Just like some employees, it’s happy to spit out reams of blather that make you feel good about mediocre output.
What Employees Need That AI Agents Need Too
What are some of the ways managing agents is like managing people? Here are things your AI agents need too:
1. Clear expectations and context. What am I supposed to be doing, and why?
You can put the smartest person in the world in your company, but if they don’t know what the company is trying to achieve — or why it matters — they aren’t going to be very productive. Poorly managed companies often just hand new employees a computer and say, “Go do something.” Rarely does this lead to productive work. AI agents are the same. Give them the task, but also give them the bigger objective, the constraints in play, and what has already been tried. Think like a military commander: provide not just the assignment but your intent, so that when the agent hits an unexpected situation, it has something to navigate by. Don’t assume it knows. It doesn’t.
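For readers who wire up their own agents, the briefing above can be made concrete. This is a minimal sketch, assuming a hypothetical helper and field names (nothing here comes from a specific agent framework): it packs the task, the larger objective, the constraints, and prior attempts into one prompt, the way you would brief a new hire.

```python
def build_briefing(task, objective, constraints, already_tried):
    """Assemble a single prompt that gives the agent commander's
    intent, not just the assignment. Field names are illustrative."""
    lines = [
        f"Task: {task}",
        f"Why it matters: {objective}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "Already tried (do not repeat):",
        *[f"- {a}" for a in already_tried],
    ]
    return "\n".join(lines)

briefing = build_briefing(
    task="Draft a pricing page for the enterprise tier",
    objective="We are moving upmarket; pricing must signal premium support",
    constraints=["No discounts below 20%", "Match the existing brand voice"],
    already_tried=["A three-column layout the sales team rejected"],
)
print(briefing)
```

The point is not the code itself but the discipline it enforces: if you cannot fill in the “why it matters” line, the agent is flying blind.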
2. Understanding of the tradeoffs between quality, cost, and speed.
Every manager knows that an employee who doesn’t understand what “done” looks like will define it themselves, usually in the most convenient way possible. The same is true of agents. You need to be explicit about which of the three levers matters most for a given task. Does this need to be thorough, or fast, or cheap? Those goals are often in tension, and if you don’t resolve that tension in your instructions, the agent will resolve it for you. What does success look like, and how will you measure it? Just as writing a real job description forces clarity you didn’t know you were missing, writing clear instructions for an agent will reveal the same gaps.
3. Appropriate autonomy.
There’s a management failure mode I see constantly: the manager who micromanages every step because they’re afraid of letting go. The result is an employee who stops thinking and just executes, missing the better path you hadn’t anticipated. Agents have the same failure mode. If you constrain the task too narrowly, you lose one of the core benefits of using an agent in the first place — its ability to surface connections or approaches you hadn’t considered. Give it room to work. That said, more autonomous agents can also go off the rails faster. The same balance you’d strike with a capable-but-new employee applies here.
4. Ability to work with others.
This may be the most important point. In human teams, misalignment between team members is one of the most common and costly problems. When two people have different understandings of the strategy or the customer, their work pulls in opposite directions. Agents in a multi-agent workflow have exactly the same problem — and no org chart, no shared meeting, no hallway conversation to correct it. If your agents don’t share a common ground truth about your business, its goals, and its standards, you will get output that is locally coherent and globally incoherent. The human manager is the only integrating force. This is, right now, one of the hardest unsolved problems in deploying AI at scale.
5. Feedback, correction, coaching.
When an employee makes a mistake, a good manager doesn’t just fix the output and move on. They coach, they correct, they make sure the lesson sticks. The same discipline applies to agents — with one important caveat. Unlike an employee who retains a conversation, agents typically don’t persist learning between sessions on their own. If you want the correction to stick, you have to take a deliberate action: update the instructions, revise the context, build it into the system. Frustration without follow-through fixes nothing. You’ll get the same mistake again.
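What “take a deliberate action” looks like in practice depends entirely on your setup, but here is a minimal sketch under an assumed file layout (the filename and helper are hypothetical): instead of just fixing the output, you append the lesson to the standing instructions the agent receives at the start of every session.

```python
from pathlib import Path

# Assumed location of the standing instructions fed to the agent each session.
INSTRUCTIONS = Path("agent_instructions.md")

def record_correction(lesson: str) -> str:
    """Persist a lesson learned so every future session starts with it."""
    existing = INSTRUCTIONS.read_text() if INSTRUCTIONS.exists() else ""
    updated = existing + f"\n- {lesson}"
    INSTRUCTIONS.write_text(updated)
    return updated

# After catching a mistake, write the rule down rather than re-explaining it:
record_correction("Always quote prices in USD, not EUR")
```

However you implement it, the principle is the same: a correction that lives only in one chat session is a correction you will be making again next week.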
The Differences
Of course, there are some major and critical differences between managing humans and managing AI.
The agent doesn’t need to trust you. My 3 Cs of leadership (Credibility, Competence, Caring) don’t really apply. It does the work regardless.
No motivation required. It needs no incentives or culture-building.
Memory doesn’t persist automatically. Unlike an employee, the agent won’t remember last week unless you deliberately tell it to.
Speed of failure is faster. A bad agent decision can propagate in seconds. A human making the same mistake usually gives you more time to catch it.
At the end of the day, agentic AI rewards the same management discipline that makes human teams effective.
Get it right and your agents will compound your output.
Get it wrong and you’ll get what a poorly managed employee delivers: decent work that misses the point.