TL;DR Companies are rolling out AI by trying agents on whatever comes to mind, without the process mapping discipline that used to precede any serious automation. It’s worth asking whether that discipline is about to matter again, as agent architecture, token economics, and regulation start catching up with the experimentation.
It’s a strange question to ask in 2026, when agents are at the top of most companies’ agendas and calling yourself “AI-native” has become almost mandatory. Against that backdrop, process excellence sounds like a phrase from another era, when any system rollout was preceded by a separate phase: processes were described, aligned on, and prioritized, and only then would anyone decide what to actually automate.
And yet, the closer you look at how companies are currently deploying AI, the more you get the sense that we might be about to rediscover something we quietly skipped.
1. What AI rollouts actually look like inside companies today
A mid-size company decides it needs to “become AI-native.” Within six months it has thirty to forty active initiatives: an agent in support, a copilot in sales, a document assistant in legal, AI resume screening in HR, and a scattering of internal LLM tools that engineers built over a weekend.
Everyone is busy. Budgets are moving. Vendors are happy.
Then you start asking questions:
- Which end-to-end processes have actually changed, and by how much? A pause, then a slide listing tools, not processes.
- Which processes were deliberately left without AI, and why? Silence.
- How much does the company spend per month on tokens across all these initiatives combined? A number that quietly turns out to be two to three times what everyone expected, because nobody had aggregated it.
This isn’t a failure of AI. It’s more an open question about how we’re thinking about it.
2. The step we used to take, and quietly dropped
Not long ago, before any system rollout (ERP, CRM, BPM, workflow, even a serious RPA project), companies drew their processes first.
Not because they loved diagrams, but because you physically couldn’t automate something you hadn’t agreed on. You had to name:
- the steps;
- the owners;
- the inputs and outputs;
- the exception paths;
- what was in scope, and what stayed manual.
That drawing work was boring, slow, and politically painful. It was also the thing that forced a company to have a shared picture of how it actually works before changing anything.
With agents, we quietly skipped that step. Whether that’s a problem or just a sign of the times is worth sitting with for a moment.
3. Why this time felt different
The reason is understandable. An agent, unlike traditional automation, doesn’t require configuration against a process model. It just kind of works, out of the box, on whatever you throw at it. The barrier to “let’s try something” dropped essentially to zero.
So the natural thing happened: everyone started trying things. Pick a painful task, point an agent at it, see what happens. Multiply by every team in the company.
In the first year of a technology this powerful, this is actually a good thing. It’s how people build intuition.
The question worth asking is whether many companies are starting to live in this mode permanently, as if the discovery phase is the strategy.
4. A case for a more comprehensive approach
One argument for bringing back some of the old discipline isn’t nostalgia. It’s architecture.
If you want to design a serious agent system (not a single assistant, but something that actually participates in an end-to-end process), it helps to have a picture of that process:
- where the work starts;
- what the handoffs are;
- what information has to be present at each step;
- which decisions are reversible and which aren’t;
- where a human absolutely has to stay in the loop.
Without that picture, you often end up with what a lot of companies already have: several agents, built by different teams, living inside the same business process without knowing about each other. Each re-reading the same documents. Each making its own LLM calls. Each maintaining its own context.
It works. It’s also expensive in a way that only becomes visible when the monthly bill arrives.
5. Token cost as a quiet teacher
When you look at a real production agent pipeline and trace where the money goes, a surprising share of it is often spent on things a process map would have prevented:
- an agent re-ingesting a 40-page document that another agent has already summarized two steps upstream;
- two agents fetching the same CRM record independently because nobody drew the boundary between them;
- a long context window being reloaded from scratch every turn because the workflow wasn’t decomposed into stages with clear state handoff.
None of this tends to show up in a POC. Most of it shows up at scale.
You can solve these problems after the fact, with caching layers, shared memory, orchestration frameworks, observability. Many teams are doing exactly that right now, and it’s useful work. But it might be a more expensive way of arriving at a place you could have reached by spending a week drawing the process first.
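The shared-memory fix is simple enough to sketch. A hypothetical example, where `expensive_summarize` stands in for a real LLM call: once one agent has summarized a document, every other agent in the pipeline reads the cached summary instead of re-ingesting the full text.

```python
# Hypothetical sketch: a summary store shared across agents, so the same
# document is only ever summarized once. `expensive_summarize` stands in
# for a real LLM call; the counter shows where tokens would be spent.
llm_calls = 0

def expensive_summarize(doc_id: str, text: str) -> str:
    global llm_calls
    llm_calls += 1          # each call here would cost real tokens
    return f"summary of {doc_id} ({len(text)} chars)"

class SharedSummaries:
    """One summary per document, visible to every agent in the process."""
    def __init__(self) -> None:
        self._cache: dict[str, str] = {}

    def get(self, doc_id: str, text: str) -> str:
        if doc_id not in self._cache:                 # summarize once...
            self._cache[doc_id] = expensive_summarize(doc_id, text)
        return self._cache[doc_id]                    # ...reuse everywhere

store = SharedSummaries()
contract = "forty pages of contract text" * 100

# Two agents, two steps apart in the same process, touch the same document.
support_view = store.get("contract-123", contract)
legal_view = store.get("contract-123", contract)

print(llm_calls)  # 1 — without the shared store it would be 2
```

The point of drawing the process first is that you would see the shared document on the map and design this store in from the start, instead of discovering the duplicate calls on the invoice.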
The things a good process picture gives you (step boundaries, data contracts between steps, explicit scope decisions, clear ownership) happen to be the same things a well-designed agent architecture needs.
Probably not a coincidence. Maybe just the same problem viewed from different decades.
6. Economics and regulation as pressure points
There’s also a less technical reason to think the old discipline might become relevant again.
At some point, plausibly soon:
- the CFO will want ROI not per pilot but across the AI portfolio;
- internal audit will start asking which decisions in which processes are being influenced by an agent, and how that’s controlled;
- EU regulators are already asking.
None of these questions can be answered by a list of tools. They can only be answered by a map of processes with the AI components marked on it.
Companies that wait for these pressures may end up doing the mapping work under deadline, with consultants, at a premium. Companies that start earlier will have the same map, just cheaper and in-house. Whether that’s urgent or just worth keeping in mind probably depends on the industry.
7. What a new kind of process excellence might look like
When I ask whether a new wave of process excellence is coming, I’m not imagining a return to thick binders and heavy modeling tools.
What seems more likely is that the same underlying need (a shared, honest picture of how the company actually works, with deliberate decisions about what to automate and what not to) becomes relevant again in a different form.
It probably wouldn’t look like the old discipline on the surface. It might be:
- lighter than a full BPM program;
- more code-adjacent, versioned in a repo instead of a modeling tool;
- closer to the teams building agents than to a standalone transformation office.
But the core idea is the same: you can’t really architect what you haven’t described.
If this hypothesis holds, the companies that get ahead probably won’t do it by hiring process consultants. They’ll do it by treating the process map as part of the foundation of their agent architecture, rather than as a deliverable for a transformation program.
8. Something to try before building the next agent
The practical implication, if any of this resonates, is simple and slightly unfashionable:
Before adding the next agent, consider spending a day drawing the process it’s supposed to live in.
That day might surface answers to:
- Who does what today, in what order, with what data?
- Where is the actual pain, measured in time or money, not in vibes?
- Which steps are reversible and which aren’t?
- Which steps are already being touched by other agents in the company?
- Does the thing you’re about to build belong as a new step, a replacement for an existing step, or a shared capability serving several steps?
It sounds like a throwback. In practice, it might just be the fastest way to:
- stop overpaying for tokens,
- stop duplicating work across teams,
- and start having an agent landscape you can reason about, which is probably the one your CFO, your auditor, and eventually your regulator will want to see anyway.
9. Whatever you end up calling it
The thing that comes out of all this might not even get called process excellence. The name isn’t what matters.
What matters is that someone still has to put the picture together: which processes exist, where agents already live inside them, where they overlap, where the boundaries are, where AI clearly shouldn’t be. Without that, there’s no way to design an agent architecture, no way to calculate what any of this actually costs, no way to answer an auditor or a regulator.
The paradox is that in the past, assembling this picture was slow, expensive, and done by consultants. That’s exactly why companies stopped doing it. But now there’s a non-obvious option: have AI agents assemble it themselves.
That’s what we’re building at kawaru.ai. Agents run short interviews with teams, capture how the work actually happens, and stitch everything into an end-to-end process map in days rather than months. From there, it becomes much clearer what to automate, what to buy off the shelf, and what to build custom, and how much it will really cost.
In effect, it’s the same old discipline, redrawn for a world where the systems being orchestrated are made of language models. Except now the tool for doing the drawing is made of them too.
That’s the hypothesis, anyway. Curious whether others are seeing the same pattern, or reading it differently. And also: how is your company currently handling the “assemble the picture” part?
#AI #AIAgents #LLM #AINative #EnterpriseAI #AITransformation #ProcessMapping #ProcessExcellence #BPM #AgentArchitecture #MLOps #DigitalTransformation
Link to the original article: https://habr.com/ru/articles/1025118/