The 90-Day Operational Process Reset
- Z. Maseko
- Mar 6
Updated: Mar 15

A distribution company holds a weekly operations meeting every Monday morning. The team reviews the previous week's issues, agrees on root causes, and assigns actions. By Thursday, most of these actions have been completed. The following Monday brings a new list of problems.
The team uses dashboards that provide real-time visibility into delays, inventory gaps, and route exceptions. The information is readily available, but the ability to act on it within the same day is missing because all non-routine decisions must go through the Monday meeting.
This pattern is common enough that it has a name in operations research: the planning cycle trap. The organization has optimized for structured review rather than fast response, and the structure that makes review efficient is precisely what makes response slow. The first article in this series explained the systemic reasons for this. This article outlines a 90-day fix, focused on one flow, that doesn't wait for a full transformation program.
The Case for Targeted Operational Process Improvement
The standard response to a broken process is a program. You identify the problem, scope a project, find the budget, form a working group, and schedule a kick-off meeting. Twelve months later, if the working group survives, a new process is implemented. Sometimes, a new platform is also introduced. Either way, the broken process continues to cost money throughout the entire duration.
The 90-day reset begins with a different premise: most process failures are local, occurring within a single flow, between a small number of people, due to a specific gap in decision authority or response design. Closing this gap requires not a program but a focused intervention on the right issue, with enough structure to stay on track and enough discipline to remain scoped.
The real question is whether the monthly cost of the broken flow is worth paying while a program catches up. Research on operational program delivery consistently shows that large-scale initiatives take longer and cost more than scoped interventions. Organizations that improve fastest tend to run many small, parallel experiments rather than one large, coordinated change.
What Slow Decisions Cost Per Month
Before scoping a reset, it's worth putting a number on the problem. Most operations leaders have an intuitive sense that their processes are slow, but few have measured the cost of that slowness in terms of their P&L.
The calculation is straightforward: (hourly cost of the problem) x (average decision time in hours) x (incidents per month) gives the monthly exposure. For example, a plant experiencing five major incidents per month, with a four-hour decision latency and a downtime cost of $4,000 per hour, loses $80,000 per month, or nearly $1 million per year. This figure changes the conversation about whether a 90-day intervention is worth the required management time.
Estimate the cost for your target flow before proceeding.
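A minimal sketch of that calculation in Python, using the example figures above; swap in your own flow's numbers:

```python
# Cost of slow decisions on a single flow. The figures are the plant
# example from the text; replace them with your own.
def latency_cost(hourly_cost: float, decision_hours: float,
                 incidents_per_month: float) -> float:
    """Monthly exposure from decision latency on one flow."""
    return hourly_cost * decision_hours * incidents_per_month

monthly = latency_cost(hourly_cost=4_000,       # downtime cost per hour
                       decision_hours=4,         # signal-to-action latency
                       incidents_per_month=5)    # major incidents per month
print(f"Monthly exposure: ${monthly:,.0f}")      # $80,000
print(f"Annual exposure:  ${monthly * 12:,.0f}") # $960,000
```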
The 90-Day Operational Process Improvement Framework
The reset unfolds in four sequential phases, each with a primary output and a clear handoff to the next. The total elapsed time is 90 days. The total headcount required is one process owner and a small working group of three to five people involved in the flow.
The phases are sequential because each one sets the stage for the next. You can't design a good response playbook without first measuring decision latency, nor can you delegate authority without knowing what the decision requires. Skipping phases results in a playbook that nobody trusts and authority that nobody uses.
Phase 1: Diagnose (Days 1 to 20)
The output of Phase 1 is a latency baseline: the measured time between the first signal and the first corrective action in your target flow, across the last five incidents, with every approval and system touch documented.
Select one flow based on three criteria: high and measurable cost of delayed response, frequent decision points, and repeatable patterns that can be standardized. Examples include a production scheduling flow, a logistics exception process, or a credit approval queue: anything where the gap between knowing and acting costs money on a predictable schedule.
Walk through the last five incidents with the people involved. For each incident, note when the first signal appeared, when it was first noticed, how many people were involved before the first action, and how many systems were touched. The median time from signal to action becomes your baseline, the metric the rest of the reset aims to improve.
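As a sketch, assuming each walkthrough is captured with first-signal and first-action timestamps (the field names here are illustrative, not a prescribed schema), the baseline falls out in a few lines of Python:

```python
from datetime import datetime
from statistics import median

# One record per incident from the Phase 1 walkthrough.
incidents = [
    {"signal": datetime(2024, 3, 1, 8, 15), "action": datetime(2024, 3, 1, 14, 0),
     "people_involved": 4, "systems_touched": 3},
    {"signal": datetime(2024, 3, 4, 9, 30), "action": datetime(2024, 3, 4, 11, 45),
     "people_involved": 2, "systems_touched": 2},
    # ... remaining incidents from the walkthrough
]

# Median hours from first signal to first corrective action: the baseline.
latencies = [(i["action"] - i["signal"]).total_seconds() / 3600 for i in incidents]
print(f"Baseline latency: {median(latencies):.1f} h over {len(incidents)} incidents")
```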
Phase 2: Delegate (Days 21 to 45)
The output of Phase 2 is a decision authority map: a document explicitly stating who can act on which classes of operational event, within what limits, without escalation.
Most organizations skip this phase, and it is where process improvements stall. You can redesign the workflow and select the tooling, but if the person closest to the problem still needs permission before acting, the latency persists in the permission chain.
Begin by categorizing the incidents from Phase 1. Identify incidents that were well-understood and low-risk or had a clear, correct answer that any competent team member could have executed. These are candidates for frontline delegation. Define operating limits, such as a budget threshold, a volume ceiling, or a set of pre-approved suppliers or routes. Within these limits, the frontline team acts without escalating. Outside these limits, the escalation path is clear and fast.
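One way to make the map unambiguous is to write it down as data. A hedged sketch, assuming limits can be expressed as a budget ceiling plus a pre-approved option list; the event classes, roles, and figures are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Authority:
    role: str              # who can act without escalating
    budget_limit: float    # spend ceiling for independent action
    approved_options: set  # pre-approved suppliers, routes, etc.
    escalate_to: str       # path for anything outside the limits

AUTHORITY_MAP = {
    "carrier_no_show": Authority("shift_lead", 5_000,
                                 {"backup_carrier_a", "backup_carrier_b"},
                                 "ops_manager"),
    "stock_shortfall": Authority("planner", 10_000,
                                 {"warehouse_east"}, "supply_manager"),
}

def can_act(event: str, role: str, cost: float, option: str) -> bool:
    """True if this role may resolve the event without escalation."""
    a = AUTHORITY_MAP.get(event)
    return (a is not None and a.role == role
            and cost <= a.budget_limit and option in a.approved_options)
```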
The conversation that unlocks this phase is usually about risk, not trust. Managers often retain authority because it feels safer than delegating it. The Phase 1 baseline changes this calculation by revealing the monthly cost of the current authority structure and comparing it to the potential cost of a frontline team making a wrong call within the defined limits. This comparison is often clarifying.
Phase 3: Design (Days 46 to 70)
The output of Phase 3 is a response playbook: a set of documented, pre-approved responses paired to the alert types identified in Phase 1, incorporating the decision authority from Phase 2.
A playbook is a decision aid, not a procedure manual. It tells a frontline operator exactly what they can do right now, without having to make a judgment call on the spot. Each entry covers the trigger condition, the approved response options, the limits of independent action, and the escalation path for anything outside those limits.
Keep it concise. A playbook that takes ten minutes to read under pressure will be ignored. Aim for one page per process category, with clear trigger conditions and response options. The test is whether a competent team member, reading it for the first time at 11 pm, knows exactly what to do.
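One way to keep entries that tight is to treat each one as structured data with exactly those four fields. A hypothetical example:

```python
# One playbook entry: trigger, pre-approved responses, limits of
# independent action, and the escalation path. Content is hypothetical;
# the four-field shape is what matters.
PLAYBOOK = {
    "route_exception": {
        "trigger": "route exception open for more than 30 minutes",
        "responses": ["re-dispatch to a pre-approved backup carrier",
                      "split the load across the next two scheduled runs"],
        "limits": "re-dispatch cost under $5,000; no customer SLA breach",
        "escalate": "ops manager on call, via the shared exceptions channel",
    },
}
```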
This phase also identifies tooling gaps that would facilitate playbook execution, such as a simple alert routing rule, a pre-populated form, or a shared channel with the right people. These workflow decisions can usually be implemented in days using existing tools.
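As one sketch of such a rule, the routing can be a small lookup that matches each alert type to a channel and the role that holds authority; `notify` stands in for whatever chat or ticketing API is already in place, and the channel names and roles are hypothetical:

```python
# Alert routing: map each Phase 1 alert type to a channel and owner.
ROUTES = {
    "route_exception": {"channel": "#logistics-exceptions", "owner": "shift_lead"},
    "stock_shortfall": {"channel": "#supply-alerts", "owner": "planner"},
}

def route_alert(alert_type: str, notify) -> None:
    # Unmapped alert types fall back to the escalation channel.
    route = ROUTES.get(alert_type,
                       {"channel": "#ops-escalations", "owner": "ops_manager"})
    notify(route["channel"], f"{alert_type}: playbook owner is {route['owner']}")

route_alert("route_exception", lambda channel, msg: print(channel, msg))
```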
Phase 4: Deploy (Days 71 to 90)
The output of Phase 4 is a measured improvement in decision latency on the target flow, with the playbook and authority structure in live use.
Deploy means running the playbook on real incidents for three weeks, measuring the latency on each one, and comparing it against the Phase 1 baseline. The goal is evidence. Three weeks of data showing that frontline teams are acting faster, with fewer escalations, on the categories identified in Phase 2 validates the approach and justifies extending it to the next flow.
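The comparison itself is simple. A sketch with illustrative numbers:

```python
from statistics import median

baseline_hours = 4.0  # Phase 1 median, signal to first corrective action
live_hours = [1.5, 0.8, 2.2, 1.0, 3.1, 0.9]  # three weeks of playbook incidents

current = median(live_hours)
print(f"Median latency: {current:.1f} h "
      f"({1 - current / baseline_hours:.0%} faster than baseline)")
```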
Expect friction. The first week will reveal gaps in the playbook and edge cases the authority map didn't anticipate. That is the point: each gap fixed in Phase 4 improves the playbook, and each documented edge case informs the next iteration. The reset ends not with a perfect process, but with a measurably faster one and a team that has built the skill to keep improving it.
This phase also highlights the connection to broader operational performance. Organizations that systematically run these targeted resets across multiple flows don't need a transformation program. The transformation happens one flow at a time, at the speed of execution rather than the speed of governance.
What Comes Next
A successful reset produces one orchestrated flow. More importantly, it produces a methodology: a documented approach to diagnosing latency, designing authority structures, and building response playbooks that the organization can apply to subsequent flows.
After applying this methodology to multiple flows (Stage 3), the organization is ready for the platform investment described in Part 1 of this series. Operational intelligence systems work best when they integrate with an operating model that already defines who should respond and what they are authorized to do. The reset builds that model, and the platform scales it.
The further horizon involves automated decision making: systems that respond to signals within defined parameters without human intervention. This is addressed in the agentic operations piece published separately. The prerequisite for that level of automation is the same as for the platform: clear authority structures, well-documented decision boundaries, and a team that trusts the process enough to let it run.
The 90-day reset is the first step in that sequence. It's also the step that most organizations skip, which is precisely why so many platform investments and automation projects fail to land.