
Analyze Costs and Downtime: Convert Minutes to Dollars (Without Guessing)


Most CNC shops don’t have a “downtime data” problem—they have a downtime-to-dollars problem. ERP and spreadsheets can tell you what was planned and what shipped, but they usually can’t defend which stoppages actually cost money, how much, and why it shows up differently on second shift. The common myth is that every idle minute equals lost revenue. In reality, some downtime breaks shipments, some creates overtime, and some is just noise unless it hits the constraint.


Below is a practical, repeatable method for owners and ops managers running 10–50 machines across multiple shifts: start with timestamped downtime events, convert minutes into lost capacity by machine and shift, then translate that capacity loss into dollars using assumptions you can explain in an ops review.


TL;DR — analyze costs and downtime

  • Don’t price every idle minute as lost revenue; separate lost shipments, lost margin, and added recovery costs.

  • Convert downtime into lost capacity per machine/shift before converting to dollars.

  • Micro-stops (10–20 minutes, repeated) become material when you aggregate weekly and tie to run rate.

  • Use constraint-based math for the bottleneck; use cost-of-recovery for non-bottlenecks.

  • Always label assumptions (run rate, margin, recovery method) and use a sensitivity range.

  • Rank downtime reasons by $ impact, not by total hours, to speed weekly decisions.

  • Cap “unknown”/“other” so bad coding doesn’t hide the real utilization leakage.


Key takeaway

Downtime becomes defensible money only when you connect shop-floor events (start/stop, reason, shift, machine) to the type of loss: shipments on the constraint, margin on critical jobs, or recovery costs like overtime and expediting. The fastest path to capacity recovery is ranking the biggest utilization leaks by $ impact—especially when ERP “should have run” doesn’t match what machines actually did by shift.


What “downtime cost” actually means in a CNC job shop


“Downtime cost” gets misused because it mixes three different money outcomes:

  • Lost revenue (missed shipments): you couldn’t ship what customers needed, so revenue moves out (or gets lost if the order is canceled or you lose repeat work).

  • Lost contribution margin: the more operationally useful view for many shops—what margin you would have earned on the parts you didn’t produce on time.

  • Added cost: overtime premium, expediting, subcontracting, extra setups, scrap/rework, and schedule instability that shows up as firefighting.


The key distinction is bottleneck vs non-bottleneck. Downtime converts cleanly into lost shipments when it hits the constraint (or a job with a hard due date and no buffer). If a machine has slack capacity or a WIP buffer, the same downtime may not reduce revenue—it may just push work later in the week, increasing queue time or forcing overtime to recover.


This is why “utilization leakage” matters: the gap between planned capacity (what ERP says should have happened) and actual productive time (what the machines actually did). To close that gap, you need minimum viable context, not perfect accounting: timestamped downtime events (start/stop), machine, duration, a usable reason, and at least one slicer like shift and/or part family.

If you’re still building that foundation, start with machine downtime tracking so the numbers you price later hold up.


Step-by-step: convert downtime minutes into lost capacity

The goal here is simple: take downtime logs and turn them into lost run minutes that can be priced consistently, by machine and shift, each week.


1) Define available minutes per machine per shift

For each machine and shift: calendar minutes minus planned breaks. Keep it practical—use what supervisors agree is “scheduled to run.” If second shift is shorter or has a different break structure, that difference must stay visible (don’t average shifts together).
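
A minimal sketch of this bookkeeping, assuming a simple in-memory schedule (the machine names, shift lengths, and break minutes below are invented placeholders, not benchmarks):

```python
# Available minutes per machine per shift = scheduled minutes - planned breaks.
# All figures here are illustrative assumptions.
SCHEDULE = {
    # (machine, shift): (scheduled_minutes, planned_break_minutes)
    ("5-axis-01", "first"):  (480, 30),
    ("5-axis-01", "second"): (420, 20),  # shorter second shift stays visible
    ("mill-07",   "first"):  (480, 30),
}

def available_minutes(machine: str, shift: str) -> int:
    scheduled, breaks = SCHEDULE[(machine, shift)]
    return scheduled - breaks

for machine, shift in SCHEDULE:
    print(machine, shift, available_minutes(machine, shift), "min available")
```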


2) Separate planned non-production from unplanned/avoidable downtime

Scheduled PM, planned calibration, or intentionally blocked time should not be mixed with unplanned stops. Otherwise, your top “downtime cost” category becomes “things we meant to do,” which slows decisions. Treat planned non-production as a separate bucket; focus the $ ranking on unplanned or avoidable events.
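
If it helps to make the split concrete, here is a sketch that assumes a flat event list and an agreed set of planned reason codes (the code names are hypothetical; map them to your own list):

```python
# Keep planned non-production out of the unplanned $ ranking.
PLANNED_CODES = {"scheduled_pm", "planned_calibration", "blocked_time"}

events = [
    {"machine": "mill-07", "reason": "scheduled_pm",    "minutes": 90},
    {"machine": "mill-07", "reason": "spindle_fault",   "minutes": 120},
    {"machine": "mill-07", "reason": "waiting_on_tool", "minutes": 75},
]

unplanned = [e for e in events if e["reason"] not in PLANNED_CODES]
planned   = [e for e in events if e["reason"] in PLANNED_CODES]

print("price this bucket:", sum(e["minutes"] for e in unplanned), "min unplanned")
print("separate bucket:  ", sum(e["minutes"] for e in planned), "min planned")
```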


3) Handle micro-stops without letting them disappear

Multi-shift shops often debate whether frequent 10–20 minute stoppages on second shift are “just reality”: tool offset checks, minor alarms, waiting on first-article approval, or “nobody to sign off.” The fix is not arguing about single events—it’s aggregation.

Use an explicit aggregation rule: keep all events (even those under 2 minutes), sum them weekly, and price the weekly total. That prevents death-by-a-thousand-cuts from getting labeled as noise.
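
One way to implement that rule, sketched with pandas and invented timestamps (assumes each event carries a start time, machine, shift, and duration):

```python
import pandas as pd

# Keep every event, however short, then sum weekly per machine/shift.
events = pd.DataFrame({
    "start":   pd.to_datetime(["2024-03-04 22:10", "2024-03-05 23:02",
                               "2024-03-06 22:40", "2024-03-07 01:15"]),
    "machine": ["mill-07"] * 4,
    "shift":   ["second"] * 4,
    "minutes": [12, 18, 9, 15],  # micro-stops stay in the data
})

weekly = (events
          .set_index("start")
          .groupby(["machine", "shift"])
          .resample("W")["minutes"]
          .sum())
print(weekly)  # the weekly total is what gets priced, not single events
```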


4) Segment before you summarize

Do not start with a plant-wide average. Break down by machine, shift, and reason code. Many shops discover the “ERP vs actual” gap is really a shift pattern issue: similar schedules on paper, very different stoppage profiles in practice. If you’re using automated capture or considering it, understanding the baseline of machine monitoring systems helps you keep event timing consistent across modern and legacy equipment.
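
A sketch of that segmentation, again with hypothetical reason codes and figures:

```python
import pandas as pd

# Minutes by machine, shift, and reason -- never start from a plant average.
log = pd.DataFrame({
    "machine": ["5-axis-01", "5-axis-01", "mill-07", "mill-07"],
    "shift":   ["first", "second", "second", "second"],
    "reason":  ["spindle_fault", "waiting_on_tool", "offset_check", "fa_approval"],
    "minutes": [120, 75, 45, 60],
})

by_segment = (log.groupby(["machine", "shift", "reason"])["minutes"]
                 .sum()
                 .sort_values(ascending=False))
print(by_segment)  # shift-level patterns survive; an average would hide them
```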


Translate lost capacity into dollars: 3 defensible approaches

Once you have lost run minutes by machine/shift/reason, you need a pricing method that matches how your shop actually feels pain. You can use one approach consistently, or use different approaches depending on whether the machine is a constraint.


Approach A (constraint-based): lost throughput × contribution margin per part

Best when you’re capacity constrained or a specific 5-axis/mill-turn is the pacer. Steps: (1) convert downtime into lost parts using run rate, then (2) multiply by contribution margin per part (or gross margin if that’s what you can defend). This directly answers, “What margin did we fail to produce on the constraint?”
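
The arithmetic, as a minimal sketch with placeholder figures (the run rate and margin below are invented; substitute your quoting/standard numbers):

```python
# Approach A: downtime hours -> lost parts -> lost contribution margin,
# carried as a best/worst range rather than a single number.
downtime_hours = 3.0
run_rate = (1.0, 1.4)         # parts/hour on the constraint (mix dependent)
margin_per_part = (180, 250)  # contribution margin per part, $

lost_parts = (downtime_hours * run_rate[0], downtime_hours * run_rate[1])
lost_margin = (lost_parts[0] * margin_per_part[0],
               lost_parts[1] * margin_per_part[1])
print(f"lost parts:  {lost_parts[0]:.1f}-{lost_parts[1]:.1f}")
print(f"lost margin: ${lost_margin[0]:,.0f}-${lost_margin[1]:,.0f}")
```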


Approach B (machine-hour value): effective $/machine-hour

If your mix is too varied to price by part quickly, estimate an effective value per machine-hour from recent shipped mix, standards, or quoting history. Decide whether you’re valuing revenue (good for shipment impact discussions) or margin (better for prioritizing fixes). Then: Downtime hours × $/machine-hour = $ impact (range). It’s not perfect, but it’s consistent and reviewable.
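
Sketched the same way (the $/machine-hour figures are placeholders; decide first whether they represent revenue or margin):

```python
# Approach B: downtime hours x effective $/machine-hour = $ impact range.
downtime_hours = 4.25
dollars_per_hour = (140, 210)  # from recent shipped mix / quoting history

impact = (downtime_hours * dollars_per_hour[0],
          downtime_hours * dollars_per_hour[1])
print(f"$ impact range: ${impact[0]:,.0f}-${impact[1]:,.0f}")
```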


Approach C (cost-of-recovery): overtime, expediting, subcontracting, instability

For non-bottlenecks, downtime often shows up as how you recovered: weekend hours, overtime premium, hot-shot freight, re-routing to another machine, or outside processing. This approach prices the added cost you incurred because the downtime happened, even if revenue still shipped.

Rule that keeps this honest: always label assumptions (run rate, margin, whether downtime truly constrained shipments, recovery method) and compute a sensitivity range (best case / worst case). Decision speed comes from consistency, not false precision.
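
One lightweight way to enforce that rule is to make the data structure itself carry the assumptions, so no $ figure travels without them (field names here are illustrative):

```python
from dataclasses import dataclass

# Every priced loss carries its method, range, and stated assumptions.
@dataclass
class PricedLoss:
    reason: str
    method: str        # "constraint", "machine-hour", or "recovery-cost"
    best_case: float   # $
    worst_case: float  # $
    assumptions: str   # run rate, margin, recovery method, etc.

loss = PricedLoss(
    reason="spindle_fault",
    method="constraint",
    best_case=1400,
    worst_case=2700,
    assumptions="1.5-2.0 parts/hr; $220-$320 margin/part; gated shipments",
)
print(f"{loss.reason}: ${loss.best_case:,.0f}-${loss.worst_case:,.0f} "
      f"({loss.method}; {loss.assumptions})")
```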


Worked example 1: bottleneck machine downtime → lost shipment margin


Scenario: a single 5-axis (or mill-turn) is the current constraint. A 2-hour unplanned stop forces weekend overtime and expediting. We’ll separate (1) direct lost throughput/margin from (2) secondary recovery costs.


Events (week)               | Total downtime (min) | Notes
Spindle/drive fault         | 120                  | Unplanned stop mid-week
Waiting on tool             | 75                   | Crib delay / wrong preset
Program prove-out / restart | 60                   | Restart and verification time


Step 1: Convert downtime minutes to lost run hours. Total unplanned/avoidable downtime minutes = 120 + 75 + 60 = 255 minutes = 4.25 hours.


Step 2: Convert lost run hours to lost parts (constraint-based). Assumption (explicit): average effective run rate on this constraint = 1.5–2.0 parts/hour (mix dependent). Lost parts range = 4.25 hr × (1.5–2.0 parts/hr) = 6.4–8.5 parts.


Step 3: Convert lost parts to lost contribution margin. Assumption (explicit): contribution margin per part on the constrained mix = $220–$320 (use your quoting/standard margin). Lost margin range = (6.4–8.5 parts) × ($220–$320/part) = $1,400–$2,700 (rounded, example only).
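
Steps 1–3 as runnable arithmetic (every input is one of the example’s stated assumptions):

```python
# Worked example 1: constraint downtime -> lost margin range.
downtime_min = 120 + 75 + 60               # 255 min of unplanned/avoidable stops
hours = downtime_min / 60                  # 4.25 hr
parts = (hours * 1.5, hours * 2.0)         # 6.4 - 8.5 lost parts
margin = (parts[0] * 220, parts[1] * 320)  # $220-$320 margin per part
print(f"{parts[0]:.1f}-{parts[1]:.1f} parts lost, "
      f"${margin[0]:,.0f}-${margin[1]:,.0f} lost margin")
# -> roughly $1,400-$2,700, matching the rounded range above
```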


Step 4: Add secondary recovery costs (separate line). If the 2-hour fault triggered a weekend recovery: assume 6–10 overtime hours across operators/programming/support at an overtime premium (the premium portion, not the whole wage). Add any expedite freight or outside processing directly tied to that late job. Keep this as a second bucket so you don’t double-count “lost margin” and “overtime cost” for the same parts.

Rank reasons by $ impact (not by hours). In this example, the 120-minute fault likely dominates $ impact because it hits the constraint and triggers recovery actions. “Waiting on tool” might be less time, but still expensive if it repeats and interrupts flow on the pacer.


This is where capacity recovery starts—before you justify more machines. To keep the focus on utilization leakage, pair this with a utilization view from machine utilization tracking software so you can see which assets are truly gating shipments.


Worked example 2: non-bottleneck downtime → added cost (overtime/queue) not lost revenue


Scenario: a 3-axis mill on second shift has frequent 10–20 minute stops—tool offset checks, minor alarms, and waiting on first-article approval. Day shift argues it’s “normal second shift noise.” But shipments still go out because there’s slack capacity and WIP buffer. Here, pricing all downtime as lost revenue would overstate the case and hurt credibility.


Step 1: Aggregate the micro-stops weekly. Example week (second shift): 18 stoppages of roughly 10–20 minutes each, totaling 240–300 minutes (4.0–5.0 hours). Individually, none looks dramatic. Weekly, it’s a real chunk of lost productive time on that shift.


Step 2: Decide whether it reduced shipments.

If the machine is not the constraint and you still shipped, treat the primary impact as recovery cost and flow disruption, not lost sales. Common signs: you “made it up” with late-week overtime, pulled an operator from another area, or created additional setups by splitting lots.


Step 3: Attribute added cost back to downtime categories. Example (hypothetical but structured): you ran a Saturday makeup shift of 6–8 hours for the cell, and the ops team agrees (based on schedule notes and what was late) that 50–70% of that makeup time was due to second-shift stoppages on this mill and its adjacent inspection approvals. Added cost to attribute = (overtime premium portion) × (attributed overtime hours). If you don’t want wages in the discussion, use an internal “overtime burden” estimate that finance already accepts.
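
The attribution math, sketched with the example’s hypothetical figures (the premium rate is invented; use whatever overtime burden finance already accepts):

```python
# Added cost = overtime premium portion x attributed overtime hours.
makeup_hours = (6.0, 8.0)    # Saturday makeup shift, hours
attribution = (0.50, 0.70)   # agreed share caused by these stoppages
premium_per_hour = 14.0      # $ premium portion only, not the whole wage

added_cost = (makeup_hours[0] * attribution[0] * premium_per_hour,
              makeup_hours[1] * attribution[1] * premium_per_hour)
print(f"attributed added cost: ${added_cost[0]:,.0f}-${added_cost[1]:,.0f}")
```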


Output: decision-ready view.

You now have two levers: (1) fix the repeatable causes (approval flow, alarm handling, offsets/tooling process), or (2) change the plan (shift support coverage, inspection availability, standard work for first-article signoff) if the stoppages are structurally tied to how second shift operates. Either way, you’re not guessing—you’re tying dollars to the specific utilization leak.

Optional but common: if downtime is being logged as “setup” when it’s really “waiting on material” (or “waiting on approval”), your $ ranking will be wrong even if total downtime hours don’t change. Better coding doesn’t magically reduce downtime—but it changes which problems rise to the top and which teams own the fix.


Make the analysis decision-speed friendly (weekly cadence, not a quarterly project)


The value isn’t the spreadsheet—it’s the cadence. A weekly loop lets you spot shift-level patterns and rank the biggest $ drivers while the context is still fresh.


Weekly review structure (30–45 minutes)

  • Top 5 downtime categories by $ impact (not hours), shop-wide (see the ranking sketch after this list).

  • Constraint vs non-constraint split: what hit the pacer machines vs what created recovery cost elsewhere.

  • Shift view: where second shift patterns differ (micro-stops, approvals, tool readiness, support coverage).

  • Two decisions: what do we fix this week, and what do we change in the plan (staffing/coverage/flow) to stop paying the same recovery cost?
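
A sketch of the ranking itself, assuming each reason code already carries a priced best/worst range for the week (all figures invented):

```python
# Rank reason codes by the midpoint of their weekly $ range; take the top 5.
weekly_losses = {
    "spindle_fault":   (1400, 2700),  # (best, worst) $, from the pricing step
    "waiting_on_tool": (600, 1100),
    "fa_approval":     (700, 900),
    "offset_check":    (250, 500),
}

ranked = sorted(weekly_losses.items(),
                key=lambda kv: -(kv[1][0] + kv[1][1]) / 2)[:5]
for reason, (lo, hi) in ranked:
    print(f"{reason:<16} ${lo:,.0f}-${hi:,.0f}")
```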


Guardrails against bad data

  • Cap “unknown/other”: if it’s more than, say, 10–15% of downtime minutes, review and correct the coding before doing the $ ranking (see the sketch after this list).

  • Reason-code hygiene: if “setup” is a catch-all for waiting/material/approval, your cost allocation will mislead actions.

  • Outlier review: isolate the few longest events and verify the reason/context with the lead before they drive weekly priorities.
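
A sketch of the first guardrail (the 15% cap is an assumption; pick a threshold your team will actually enforce):

```python
# Block the $ ranking when too many minutes are uncoded.
UNKNOWN_CAP = 0.15

minutes_by_reason = {"spindle_fault": 120, "waiting_on_tool": 75,
                     "unknown": 55, "other": 30}
total = sum(minutes_by_reason.values())
uncoded = minutes_by_reason.get("unknown", 0) + minutes_by_reason.get("other", 0)
share = uncoded / total

if share > UNKNOWN_CAP:
    print(f"Fix coding first: {share:.0%} of downtime minutes are uncoded")
else:
    print("Reason coding is clean enough to price and rank")
```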


Use trends sparingly; chase repeatable leaks

Trend lines are useful only when they change a decision. Instead of building a quarterly deck, focus on “what got worse this week” and “what keeps recurring” by shift and machine. This keeps attention on utilization leakage between plan and actual behavior.


When interpretation becomes the bottleneck (lots of events, mixed fleet, multiple shifts), an assistant that can summarize event patterns and surface likely drivers can help the review stay operational—see the AI Production Assistant for an example of how teams accelerate root-cause conversations without turning it into a dashboard project.


What decisions this should trigger

  • Maintenance priority: focus on high-$ faults on constraint machines first.

  • Tooling/process changes: reduce repeatable “waiting on tool” and offset-check patterns with standard work.

  • Approval flow fixes: first-article and inspection signoffs that stall second shift are often cheaper to fix than buying capacity.

  • Staffing/scheduling: adjust support coverage where the $ impact is concentrated, not where complaints are loudest.


If you want to operationalize this without turning it into a long IT project, the practical considerations are (1) consistent event capture across a mixed fleet, (2) reason codes that don’t collapse into “misc,” and (3) a weekly output that ranks $ impact by machine/shift/reason. For cost framing and rollout expectations (without forcing a complex platform decision), review pricing to understand what an implementation typically includes and what drives scope.


If you already have downtime timestamps (even imperfect ones), a focused walkthrough can validate your assumptions, identify whether the constraint math or recovery-cost math fits your shop, and produce a first-pass “top $ losses” ranking you can use immediately. Schedule a demo to pressure-test your downtime-to-dollars method against your actual shift patterns and pacer machines.

