Planned vs Unplanned Downtime: A CNC Shop Guide
- Matt Ulepic

Planned vs Unplanned Downtime: How CNC Shops Should Classify Stops
Most CNC shops don’t have a downtime “data” problem—they have a downtime classification problem. The numbers look official in the ERP, the daily report has totals, and everyone has a story for why the machine wasn’t cutting. But if planned and unplanned downtime get blended (or labeled differently by different people), your utilization and availability baselines stop meaning what you think they mean.
The practical goal isn’t textbook OEE purity. It’s operational visibility: stable categories that make shift-to-shift reports comparable, make “where did capacity go?” questions answerable, and keep weekly meetings from turning into debates about definitions.
TL;DR — planned vs unplanned downtime
Use the schedule (intent) as the dividing line: was the loss of run time intentionally planned ahead of time?
A single stop can include both planned and unplanned portions; split by timestamps when an overrun occurs.
“Waiting” (material, program release, inspection) is usually unplanned disruption unless explicitly scheduled into the plan.
Don’t bury true disruptions (tool breakage, alarms, missing prerequisites) inside planned changeover/setup buckets.
Classification consistency matters more than perfect granularity; stable buckets speed decisions.
Multi-shift shops need a simple policy for thresholds and handoffs to prevent “half-labeled” stops.
If a stop forces resequencing, expediting, or overtime, treat it as an unplanned event that deserves a root cause owner.
Key takeaway — Classification is a capacity tool. When planned loss (intentional, scheduled) gets mixed with unplanned disruption (unexpected, plan-breaking), you hide where time is actually leaking—often in “waiting,” overruns, and shift handoffs. Splitting events by timestamps and applying the same rules across shifts turns downtime reports into decisions instead of arguments.
Why planned vs unplanned downtime is hard in real CNC shops
In a real shop, downtime rarely fits neatly into one box. Stops evolve. A maintenance task that was on the calendar can uncover a seized fitting or a broken connector. A setup window that “should be routine” turns into waiting on a fixture, then waiting on a program revision, then waiting on inspection. If your logging system forces one label, you end up telling a simplified story that doesn’t match machine behavior.
Another complication: different departments naturally name the same loss differently. Maintenance may call a stop “PM,” production may call it “down,” and programming may call it “prove-out delay.” None of them are trying to be misleading—but the report becomes inconsistent, and the ERP ends up reflecting opinions instead of operational facts.
Multi-shift handoffs amplify the problem. Second shift inherits a machine that’s been sitting for 30–60 minutes with no clean reason code, or a vague note like “setup.” By the time a supervisor reviews the report, the people who lived the event are gone, and the classification becomes a guess. When categories aren’t stable, weekly downtime reviews turn into debates about what counts as planned rather than a discussion of what to fix next.
The practical definition: the schedule is the dividing line
In CNC operations, the cleanest definition is not “maintenance vs production.” It’s intent: what the schedule meant to happen.
Planned downtime is an intentional, scheduled loss of run time—known in advance and done on purpose. Unplanned downtime is an unexpected loss of run time that disrupts the plan and forces you to react.
A practical rule that works across mixed fleets and multiple shifts: Was it planned before the shift started (or before the job was released)? If yes, it’s planned. If no, it’s unplanned. This also makes it easier to reconcile what the ERP “thought” would happen versus what the machine and the crew actually experienced.
One more critical point: a single event can include both. If you want reporting that drives action, you often need to split by timestamp. That’s where downtime tracking becomes more than a checkbox—because near-real-time data is only as useful as the reason codes attached to it. For broader context on capturing and using downtime data operationally, see machine downtime tracking.
What belongs in planned downtime (and what people misfile there)
Planned downtime should be the set of losses you intentionally allocate for. In CNC shops, that typically includes scheduled PM windows, planned warm-up routines (when they’re standardized and expected), scheduled meetings, and planned changeover windows—if those windows are truly on the schedule rather than wishful thinking.
Planned can also include engineering trials and program prove-out time when leadership deliberately expects it and allocates time for it. The key is that it’s visible ahead of time, so downstream promises (due dates, staffing, secondary operations) aren’t based on fantasy run time.
Where shops get into trouble is using planned buckets as a catch-all for messy reality. Waiting on material, waiting on a program release, and waiting on inspection are common misfiles. Unless you explicitly scheduled those constraints into the plan, they’re disruptions. If they get labeled “planned” or buried inside “setup,” you’ll conclude you have less capacity than you actually do—and you’ll be tempted to buy another machine before recovering the time you’re already losing.
A simple boundary rule: if it’s avoidable by executing the existing plan better, it’s not automatically “planned.” “We always wait for inspection” is a description of habit, not proof that the schedule intended it.
What belongs in unplanned downtime (and what gets minimized as “normal”)
Unplanned downtime is the set of disruptions that show up at the moment you need to run. That includes breakdowns, alarms, unexpected tool failures, crash recovery, and infrastructure issues like power, air, or coolant delivery problems. These events force decisions: resequence the schedule, expedite tooling, call maintenance, or move the job.
It also includes missing prerequisites at the moment of need: material not staged, a fixture that’s still in use, or a program that isn’t released. Those are unplanned not because the tasks are unusual, but because the plan assumed they were ready—and reality disagreed.
Shops often struggle with micro-stops: short interruptions that don’t feel “serious” individually. It’s reasonable to set a threshold policy (for example, logging only stops above a certain duration, or grouping repeated short alarms), but be careful not to erase patterns. If a stop type repeats across a shift, it can be a major source of utilization leakage even when each instance looks small.
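A threshold policy like the one above can be sketched in a few lines. This is a hypothetical helper (the function name, threshold value, and reason codes are illustrative, not from any specific monitoring product): stops at or above the logging threshold are kept as individual events, while shorter ones are rolled up per reason so a repeating two-minute alarm still shows its cumulative cost.

```python
from collections import defaultdict

def summarize_micro_stops(stops, log_threshold_min=5):
    """Apply a stop-threshold policy without erasing patterns.

    `stops` is a list of (reason, duration_minutes). Stops at or above
    the threshold are logged individually; shorter ones are rolled up
    per reason as (count, total_minutes).
    """
    logged = [s for s in stops if s[1] >= log_threshold_min]
    rollup = defaultdict(lambda: [0, 0.0])  # reason -> [count, total_min]
    for reason, minutes in stops:
        if minutes < log_threshold_min:
            rollup[reason][0] += 1
            rollup[reason][1] += minutes
    return logged, dict(rollup)

# Ten 2-minute chip-jam alarms look minor individually but total 20 minutes.
stops = [("chip_jam_alarm", 2)] * 10 + [("tool_failure", 22)]
logged, rollup = summarize_micro_stops(stops)
print(logged)   # [('tool_failure', 22)]
print(rollup)   # {'chip_jam_alarm': [10, 20.0]}
```

The point of the rollup is exactly the warning in the paragraph above: the individual events fall below the threshold, but the per-reason total surfaces the utilization leak.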
A practical rule you can enforce: if it forces resequencing, expediting, or overtime, treat it as an unplanned disruption—even if the underlying activity (tooling, inspection, programming) is “normal” work.
Why classification accuracy changes the decisions you make
When planned and unplanned downtime are mixed, the headline number (availability/utilization) becomes less actionable. You still see “lost time,” but you can’t tell whether you’re losing time because you intentionally allocated it (planned) or because the shop got surprised (unplanned). That distinction drives completely different actions: scheduling and staffing decisions versus reliability and response decisions.
Misclassification also creates false narratives. If program-release delays and inspection queues get labeled as “setup,” the report will imply production is slow at changeovers. If maintenance overruns are labeled as “PM,” the report will imply maintenance is under control—even when the disruptive portion is rising.
Mini-example #1: planned PM window with an overrun
Scenario: You schedule a 2-hour PM window from 1:00–3:00 PM. During the work, a fitting turns out to be seized and the job runs long. The machine is still down until 3:45 PM.
If you log the entire 1:00–3:45 stretch as “planned maintenance,” your report hides the disruptive part. The better classification is to split it by timestamp: 1:00–3:00 planned (scheduled PM), and 3:00–3:45 unplanned (maintenance overrun / unexpected issue). That split exposes the true unplanned maintenance signal without pretending the plan was wrong to schedule PM in the first place.
Mini-example #2: scheduled start, but prerequisites aren’t ready
Scenario: A job is scheduled to start at 6:00 AM. The machine sits idle until 7:10 because the program isn’t released, and first-article inspection is backed up once the first part is ready to check.
Calling that idle time “planned” (or lumping it into “setup”) changes accountability. “Planned” suggests the schedule intended the machine to wait. “Setup” suggests the operator was in a normal changeover. But the real operational issue is prerequisite readiness: engineering/release discipline and inspection capacity. If you classify it as unplanned “waiting on program release” and “waiting on inspection,” the report points to scheduling and workflow constraints, not operator speed.
For shops that want this to roll up cleanly into higher-level metrics without turning the conversation into OEE theory, the key is consistency. (If you want a deeper read on how downtime rolls into utilization decisions, start with machine utilization tracking software rather than trying to fix it inside spreadsheets.)
A simple classification framework your shop can enforce
You don’t need a perfect taxonomy to get value. You need a repeatable rule set that supervisors can apply the same way across people, machines, and shifts. Use three questions:
Was it scheduled? Was the downtime intentionally on the plan before the shift started (or before the job was released)?
Who owns the prerequisite? Maintenance, production, programming/engineering, material handling, or inspection?
Could it have been prevented by executing the existing plan? If yes, it’s a disruption against plan execution—not an intentional planned loss.
Next, require timestamped splits when planned work overruns into disruption. This one policy handles the “planned turns into unplanned” reality without arguments.
Keep the structure simple: two top-level buckets (planned vs unplanned) plus a small set of sub-reasons that reflect CNC reality (for example: planned PM, planned changeover, planned prove-out; unplanned breakdown/alarm, unplanned tool failure, unplanned waiting on program, unplanned waiting on material, unplanned waiting on inspection). Avoid letting the list grow into a “reason code encyclopedia.”
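The two-bucket structure with a short, closed sub-reason list can be expressed directly, which also makes the dividing-line rule enforceable: a sub-reason that *could* be planned only counts as planned if it was on the schedule before the shift started. This is an illustrative sketch under those assumptions (the code names and bucket membership mirror the examples above, not any standard taxonomy):

```python
# Hypothetical reason codes: two top-level buckets, short sub-reason lists.
PLANNED_CAPABLE = {"pm", "changeover", "prove_out", "warm_up", "meeting"}
ALWAYS_UNPLANNED = {"breakdown", "tool_failure", "waiting_program",
                    "waiting_material", "waiting_inspection"}

def classify(sub_reason, scheduled_before_shift):
    """Schedule intent decides the bucket, not the activity type."""
    if sub_reason in PLANNED_CAPABLE and scheduled_before_shift:
        return "planned"
    if sub_reason in PLANNED_CAPABLE or sub_reason in ALWAYS_UNPLANNED:
        return "unplanned"
    # Closed list: an unknown code is a governance problem, not a new bucket.
    raise ValueError(f"unknown sub-reason: {sub_reason}")

print(classify("pm", scheduled_before_shift=True))          # planned
print(classify("prove_out", scheduled_before_shift=False))  # unplanned
print(classify("waiting_program", scheduled_before_shift=False))  # unplanned
```

Raising on an unknown code is deliberate: it forces the weekly review to extend the list consciously instead of letting it drift into a reason-code encyclopedia.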
Governance matters more than tools: run one short weekly review where a supervisor and one cross-functional partner reclassify edge cases and tighten definitions across shifts. If you do use software, focus on whether it makes the classification easier at the moment of the stop, not whether it has flashy displays. (If you’re evaluating platforms, this overview of machine monitoring systems is a helpful baseline.)
Edge cases: changeovers, prove-outs, and “waiting” time
The gray areas are where classification falls apart—especially in job shops where routings change, first-article expectations vary, and the “plan” lives in people’s heads. Here are practical rules that reduce inconsistent logging.
Changeovers
Changeover is planned only if it has an intentional window. If the window is 30–60 minutes and the team executes within it, classify it as planned changeover. If changeover runs long because you’re waiting on a fixture, waiting on tools, or waiting on a program revision beyond the planned window, split it: planned changeover (the window) and unplanned waiting (the overrun cause). This prevents “setup” from becoming a hiding place for poor release discipline or kitting gaps.
Prove-out and first-article inspection
Prove-out/first-article time is planned if you deliberately allocate it (new part, new toolpath strategy, new fixture). It becomes unplanned when it’s driven by missing information, unexpected rework loops, or an inspection queue that wasn’t part of the plan. If inspection capacity is a recurring constraint, treating it as “planned” without explicitly scheduling it will keep the same fire burning every week.
Operator breaks and coverage gaps
Breaks are planned if they are policy-based and accounted for in the schedule. If machines stop because there’s a coverage gap that wasn’t planned (for example, a single operator covering multiple machines without an agreed coverage plan), treat the resulting idle as unplanned disruption. Otherwise, the report implies “we planned not to run,” when the real issue is staffing assumptions.
Tool breaks mid-cycle (don’t normalize it)
Scenario: A tool breaks mid-cycle and causes a 22-minute stop while the team replaces the tool and reruns. Even though tool changes are “normal,” the break is still unplanned downtime because it disrupts the intended run. Don’t bury it inside planned changeover/setup. Keep it visible as unplanned tool failure so you can see patterns by tool, material, program, or operator practice—and decide whether the fix belongs in tooling standards, offsets, feeds/speeds discipline, or inspection checks.
If you’re trying to tighten classification without adding clerical burden, focus on two things: (1) capturing stops while they’re fresh (near the machine, during the shift), and (2) making “split by timestamp” easy when an event crosses from planned into disruption. Some shops also benefit from automated interpretation support that helps supervisors review exceptions and recurring patterns, such as an AI Production Assistant that summarizes downtime narratives consistently.
Implementation-wise, keep the rollout practical: agree on the dividing-line definition (schedule intent), define your short list of sub-reasons, set a stop threshold policy, and run a weekly reclassification review for 4–8 weeks until edge cases stop consuming meeting time. If you’re considering software to support this, make sure the cost model is understandable and aligns with how you scale across a mixed machine fleet; you can review pricing details without getting trapped in a long evaluation cycle.
If you want to pressure-test your current definitions, bring one week of downtime events and ask one diagnostic question: “Which of these stops would we classify differently on first shift vs second shift?” If the answer is “a lot,” your categories are costing you decision speed. When you’re ready, you can schedule a demo to see a practical workflow for capturing and splitting downtime so the report reflects what actually happened—not what everyone wishes happened.
