
Production Downtime Calculation: A CNC Shop Method That Holds Up

Most “downtime %” arguments in CNC job shops aren’t really about performance—they’re about math. Specifically: two people used two different denominators, then treated the results like the same metric. If your ERP says one number, the supervisor’s notes say another, and the shop floor “feels” like a third, you don’t have a downtime problem yet—you have a definition problem.


This guide standardizes production downtime calculation so you can attribute loss by machine, by shift, and by cause, then convert minutes into capacity language (hours and parts) that supports decisions this week—not a quarterly report.


TL;DR — production downtime calculation

  • Pick the denominator first: Scheduled Time vs Planned Production Time changes the story.

  • Separate planned stops (breaks/meetings/PM) from unplanned loss (waiting, faults, no material).

  • Decide how you treat setup/changeover: downtime vs planned changeover—then report it consistently.

  • Use minutes and percent together: minutes expose capacity leakage; percent enables comparisons.

  • Don’t double-count shared causes (like material shortage) across machines—separate root cause from machine impact.

  • Micro-stops (bar feeder empty, chips, probing retries) add up; aggregate them or you’ll miss the real loss.

  • Convert downtime minutes to lost spindle hours and (when valid) lost parts to tie numbers to shipments.

Key takeaway: If production downtime isn’t anchored to a consistent time base and separated into planned vs unplanned buckets, your % will drift by shift and by machine—and your ERP will look “right” while the shop floor reality is different. The point of the calculation is capacity recovery: turning repeated idle patterns (especially small stops) into attributable minutes you can assign to a machine, a shift, and a cause.


Start with the only question that matters: “downtime against what time base?”

Before you calculate anything, lock the denominator. In CNC shops, downtime disputes usually come from mixing “we were scheduled to run” with “we were actually expected to produce.” Use three time buckets so your reporting holds up across machines and shifts:


  • Scheduled Time: the full shift window (e.g., 6:00–16:00), regardless of breaks or meetings.

  • Planned Production Time: Scheduled Time minus planned stops (breaks, meetings, planned maintenance).

  • Runtime (in-cycle): the minutes the machine is actually cutting/in-cycle (or otherwise performing the programmed cycle).

Why this matters: if one person includes breaks and a weekly PM window and another excludes them, both can be “correct” and still be talking past each other. For CNC job shops, a practical way to avoid arguments is to publish two numbers:


  • Unplanned Downtime % of Planned Production Time (clean view of losses you’re trying to eliminate).

  • Total Non-Run % of Scheduled Time (big-picture view of how much of the shift wasn’t cutting).

Also note the ERP trap: ERP timestamps often reflect paperwork flow—job started, op completed, traveler moved—not machine state. That’s why two shifts can “look similar” in the system even when Shift 2 has more first-hour warmup delays, tool crib waiting, and different supervisor availability. If you want downtime that’s attributable by machine/shift/time window, you need a definition that does not rely on after-the-fact entries.


Production downtime calculation: the core formulas (with clear inclusions/exclusions)

These formulas are spreadsheet-operational. The key is that the same rules apply across machines, shifts, and weeks.


Formula 1: Planned Production Time
Planned Production Time = Scheduled Time − Planned Stops
Planned Stops include breaks, meetings, and planned maintenance (e.g., a scheduled 30–60 minute PM block).


Formula 2: Unplanned Downtime (minutes)
Unplanned Downtime = Planned Production Time − Runtime − Planned Changeover (if separated)


Formula 3: Unplanned Downtime %
Unplanned Downtime % = Unplanned Downtime ÷ Planned Production Time


Optional parallel view: Total Non-Run %
Total Non-Run % = (Scheduled Time − Runtime) ÷ Scheduled Time
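The four formulas above can be written as a minimal spreadsheet-style sketch. This is an illustration, not a product implementation; all inputs are minutes, and the function names are ours, not from any specific tool.

```python
def planned_production_time(scheduled_min, planned_stops_min):
    """Formula 1: Scheduled Time minus planned stops (breaks, meetings, PM)."""
    return scheduled_min - planned_stops_min

def unplanned_downtime(planned_production_min, runtime_min, planned_changeover_min=0):
    """Formula 2: minutes inside the production window that were neither
    in-cycle nor separated planned changeover (Option A)."""
    return planned_production_min - runtime_min - planned_changeover_min

def unplanned_downtime_pct(unplanned_min, planned_production_min):
    """Formula 3: unplanned loss as a share of Planned Production Time."""
    return unplanned_min / planned_production_min

def total_non_run_pct(scheduled_min, runtime_min):
    """Optional parallel view: all non-cutting time over Scheduled Time."""
    return (scheduled_min - runtime_min) / scheduled_min
```

If you choose Option B (count changeover as downtime), simply leave `planned_changeover_min` at 0 and log changeover minutes under their own cause code instead.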


Setup/changeover is where high-mix shops get inconsistent. In a job shop with frequent setups, “not cutting” can be setup, first-article approval, waiting on a program edit, or waiting on material. You have two workable options:


  • Option A (separate it): treat changeover as Planned Changeover and subtract it in Formula 2. Then you report changeover minutes as its own capacity consumer.

  • Option B (count it as downtime): if you’re actively trying to compress setup, you can put it in the unplanned bucket—but you must label it clearly and keep the rule consistent across shifts.

If you’re building a broader tracking discipline, align your definitions with machine downtime tracking so the calculation method and the way events get captured don’t drift apart.


Step-by-step example: one machine, one shift (and how the denominator changes the story)

Below is a fully worked example you can copy into a spreadsheet. Assumptions (one machine, one 10-hour shift):


  • Scheduled Time: 10 hours = 600 minutes

  • Planned Stops: two breaks (2×15 = 30), meeting (30), planned PM (45) = 105 minutes

  • Runtime (in-cycle): 410 minutes

  • Planned Changeover (separated): 35 minutes


Step 1: Planned Production Time
600 − 105 = 495 minutes


Step 2: Unplanned Downtime (minutes)
495 − 410 − 35 = 50 minutes


Step 3: Unplanned Downtime % (Planned Production Time denominator)
50 ÷ 495 = 10.1% (rounded)


Now recalculate using Scheduled Time as the denominator (this is where teams argue):


Total Non-Run % (Scheduled Time denominator)
(600 − 410) ÷ 600 = 31.7% (rounded)


Same shift, same machine, two different—but both useful—numbers. The first isolates true losses inside the time you intended to produce. The second describes how much of the shift wasn’t cutting, which can highlight staffing model choices (especially nights) and how much time is consumed by planned stops and changeovers.
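The same arithmetic as a self-contained sketch you can paste into a spreadsheet formula or script (all times in minutes; variable names are illustrative):

```python
# Worked example: one machine, one 10-hour shift.
scheduled = 600       # 6:00-16:00 shift window
planned_stops = 105   # breaks (30) + meeting (30) + PM (45)
runtime = 410         # in-cycle minutes
changeover = 35       # separated planned changeover (Option A)

planned_production = scheduled - planned_stops         # 495 minutes
unplanned = planned_production - runtime - changeover  # 50 minutes

print(f"Unplanned downtime %: {unplanned / planned_production:.1%}")  # 10.1%
print(f"Total non-run %: {(scheduled - runtime) / scheduled:.1%}")    # 31.7%
```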


To support the 50 minutes of unplanned loss, keep a simple event log by cause. Example (same shift):


  • Waiting on tool crib (Shift 2 first hour): 12 minutes

  • Program edit / prove-out delay: 18 minutes

  • Chip conveyor full (unattended window): 8 minutes

  • Bar feeder empty (micro-stops aggregated): 12 minutes

That last point matters for unattended running: the machine may “stop” for short bursts during a night shift due to chips, bar feeder empty, or a door-open interruption. If you don’t aggregate those micro-stops, you’ll understate the capacity hit and miss the simplest fixes (standard checks, chip management, refill cadence).
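Aggregating micro-stops is mostly a grouping exercise. A minimal sketch, assuming a hypothetical event log of (cause, minutes) rows captured during the shift (the cause labels and values mirror the example above):

```python
from collections import defaultdict

# Hypothetical event log for one shift: (cause, minutes) per stop event.
events = [
    ("waiting on tool crib", 12),
    ("program edit / prove-out", 18),
    ("chip conveyor full", 3), ("chip conveyor full", 5),          # micro-stops
    ("bar feeder empty", 4), ("bar feeder empty", 3), ("bar feeder empty", 5),
]

# Roll short bursts up by cause so they surface as one attributable line.
minutes_by_cause = defaultdict(int)
for cause, minutes in events:
    minutes_by_cause[cause] += minutes

for cause, total in sorted(minutes_by_cause.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {total} min")
```

Ranked this way, seven short events collapse into four cause lines totaling the 50 unplanned minutes, and "bar feeder empty" stops reading as noise.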


From downtime minutes to lost capacity and lost parts (the ‘so what’ conversion)

Downtime % is only useful if it converts to capacity language. Start with spindle hours, then translate to parts when the job mix supports it.


1) Downtime minutes → lost spindle hours
Lost Spindle Hours = Downtime Minutes ÷ 60


Using the example above: 50 minutes is 0.83 hours of lost cutting opportunity for that machine that shift. Convert the same way to daily/weekly totals per machine, then roll up by cell or asset group. This is where “small daily leakage” becomes visible: 20–40 minutes of repeated idle per machine compounds quickly across 20 machines and multiple shifts, even if no single stop feels catastrophic.


2) Lost spindle hours → lost parts (when valid)
Lost Parts = Downtime Minutes ÷ Ideal Cycle Time (or observed average cycle time)


Example (hypothetical): if a repeat job family averages 5 minutes cycle time, 50 minutes of downtime equates to about 10 parts of opportunity on that machine in that window. Caveat: in a high-mix job shop with frequent setups and first-article approval, cycle-time translation can mislead because the “next part” isn’t always ready to run. In that environment, use lost scheduled hours by machine/shift and tie the recovery work to constraints (program prove-out, inspection response time, tool staging, material presentation).
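Both conversions in one place, as a hedged sketch (function names are ours; cycle time is in minutes, and the parts conversion carries the validity caveat above):

```python
def lost_spindle_hours(downtime_min):
    """Downtime minutes -> lost cutting-opportunity hours."""
    return downtime_min / 60

def lost_parts(downtime_min, cycle_time_min):
    """Downtime minutes -> parts of opportunity.
    Only meaningful when the next part was actually ready to run."""
    return downtime_min / cycle_time_min

print(round(lost_spindle_hours(50), 2))  # 0.83 hours
print(round(lost_parts(50, 5)))          # ~10 parts at a 5-minute cycle
```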


A helpful framing (without turning it into ROI hype) is “equivalent machine count”: when reclaimed hours across a week start to resemble what an additional fraction of a machine could provide, it’s a signal to eliminate hidden time loss before considering capital expenditure. This is where machine utilization tracking software often supports the conversation—because the compounding minutes are hard to see in ERP timestamps.


Avoid the 6 most common downtime calculation errors in CNC job shops

If your downtime number feels “unstable,” it usually traces back to one of these measurement traps:


  • Mixing planned and unplanned stops. If breaks and meetings are in the same bucket as faults and waiting, downtime % becomes a scheduling artifact—not a loss metric.

  • Counting “no operator assigned” without clarifying the staffing model. On nights, some machines are intentionally unattended. Treat “no operator” as its own category (or planned coverage) so Shift 3 doesn’t look “worse” simply because it’s structured differently.

  • Double-counting shared causes (material shortage). If material is late and five machines wait, you should report (a) each machine’s waiting minutes (capacity impact) and (b) one root cause event for “material shortage” (systemic cause). Don’t add machine minutes together and then also add the root-cause duration as if it’s extra time.

  • Relying on operator-entered codes after the fact. End-of-shift batching and recall bias turn a day of short stops into one vague reason. If you’re looking for this week’s action list, you need tighter attribution than memory-based entries.

  • Ignoring micro-stops. Chip conveyor full, bar feeder empty, probing retries, door opens, small alarms—these often accumulate into hours across unattended windows. Aggregate them or they disappear.

  • Averaging across machines. A plant-wide average can look “fine” while one pacer machine is where schedules slip. Calculate per machine (or per asset group), then roll up.
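The double-counting rule for shared causes is easy to get wrong in a spreadsheet, so here is the pattern as a sketch. The machine IDs and minutes are hypothetical; the point is that per-machine impact and the single root-cause event live in separate records:

```python
# Hypothetical: material arrives late mid-shift; five machines wait.
waiting_min_by_machine = {"M01": 40, "M02": 35, "M03": 40, "M04": 25, "M05": 30}

# (a) Capacity impact: attribute each machine's waiting minutes to that machine.
total_capacity_impact = sum(waiting_min_by_machine.values())  # 170 machine-minutes

# (b) Systemic cause: log ONE root-cause event for the shortage itself.
root_cause_events = [{"cause": "material shortage", "duration_min": 40}]

# The 40-minute event duration is never added on top of the 170 machine-minutes.
print(total_capacity_impact, len(root_cause_events))
```

Report (a) in the machine/shift capacity view and (b) in the weekly cause-pattern view, and the totals stay honest.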

If you’re considering more automated capture to reduce recall bias and missed micro-stops, keep the focus on measurement integrity rather than dashboards. This overview of machine monitoring systems is a useful boundary-setter for what should be captured vs what should remain planned time.


How to report downtime so it drives decisions (shift, machine, and cause views)

Once you standardize the calculation, the reporting format determines whether it turns into action or “dashboard theater.” Keep it simple and attributable.


Minimum reporting slices: by machine, by shift, by day; then top 3 causes by minutes. This is where multi-shift differences become operational, not personal. For example, if Shift 2 shows higher unplanned downtime driven by first-hour warmup routines, tool crib delays, and fewer immediate escalations, you can separate planned warmup from true loss and define a coverage path that matches reality.


Use both minutes and percent: minutes show capacity (what you could have run); percent makes it comparable across different scheduled hours and planned-stop patterns. In high-mix environments, minutes are often the cleaner leadership metric because they avoid cycle-time debates.


Build a “lost capacity ladder”: (1) top machines by lost hours, then (2) top causes inside those machines. This prevents the common trap of ranking causes plant-wide while ignoring that one pacer asset is driving missed shipments.
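The ladder is just two nested rankings. A small sketch with hypothetical weekly data (machine IDs, cause labels, and hours are invented for illustration):

```python
# Hypothetical weekly lost hours by machine, and cause breakdown per machine.
lost_hours = {"M07": 9.5, "M02": 6.0, "M11": 3.2}
causes = {
    "M07": {"prove-out": 4.0, "tool staging": 3.5, "material wait": 2.0},
    "M02": {"tool staging": 3.0, "inspection wait": 3.0},
    "M11": {"chip conveyor": 3.2},
}

# Rung 1: top machine by lost hours. Rung 2: top causes inside that machine.
top_machine = max(lost_hours, key=lost_hours.get)
ranked_causes = sorted(causes[top_machine].items(), key=lambda kv: -kv[1])

print(top_machine, ranked_causes)
```

Ranking causes only inside the pacer machine keeps a plant-wide cause ranking from burying the asset that is actually driving missed shipments.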


Cadence that supports decision speed:


  • Daily huddle (yesterday): which machines lost the most time, and why?

  • Weekly pattern view: what’s repeating (tool staging, first-piece approval delays, program edit loops, material presentation)?

  • Monthly definition check: confirm planned vs unplanned rules stayed consistent across shifts and new jobs.

This structure also handles the most common “system” scenario cleanly: a mid-shift material shortage that leaves multiple machines waiting. Your daily view shows which machines were impacted (capacity hit), while the weekly view keeps “material shortage” as a root-cause pattern to fix upstream (receiving cadence, kitting, saw schedule, vendor lead time variability), without inflating totals by double-counting.


If you have enough data volume that interpreting patterns becomes the bottleneck (especially across 20–50 machines and multiple shifts), an assistant-style layer can help translate states and reasons into a short operational narrative. That’s the intent behind an AI Production Assistant: speed up interpretation so you can act while the week is still recoverable.


Implementation-wise, keep cost discussions tied to scope: how many machines, how many shifts, and how you want to separate planned changeover from downtime. If you want a straightforward framing without hunting for numbers inside proposals, start with the pricing page to align expectations before you invest time in deeper mapping.


If you want to sanity-check your current downtime math against shop-floor reality (especially where ERP timestamps disagree with what supervisors see), the fastest next step is a diagnostic walkthrough: pick one pacer machine, define planned stops, run the formulas above, then compare by shift and by top causes. When you’re ready, schedule a demo to see what a reliable, attributable downtime feed looks like on a mixed fleet without turning this into a long IT project.
