Changeover Time: Track Planned vs Unplanned Setup Delays
- Matt Ulepic
- Mar 27
- 9 min read

Changeover Time: How to Measure It Without Hiding Downtime
Most CNC job shops don’t have a “changeover problem.” They have a measurement problem: changeover time is either buried inside generic downtime labels, treated as “planned so it doesn’t matter,” or split so inconsistently across shifts that it can’t be acted on.
That’s how an ERP can show a packed schedule while the floor feels starved for capacity. If you can’t separate normal setup from the avoidable delays that creep into setup, you’ll argue about whether to buy another machine instead of fixing what’s actually leaking time.
TL;DR — Changeover time
Changeover is a chain of steps, not one block—measurement has to reflect that reality.
Pick and enforce one boundary rule (for example, last good part to first good part) so shifts don’t “define” time differently.
Split reporting into Planned Changeover vs Changeover Exceptions to expose avoidable delays.
Don’t let “Idle,” “Maintenance,” or a catch-all “Setup” code absorb setup-related stalls.
Time windows alone can mislead; event-based start/stop rules prevent false setup inflation in cells.
Keep reason codes shallow (1 planned + 5–8 exception codes) so operators can code consistently.
Use the split to decide what’s recoverable before adding machines or overtime.
Key takeaway: Changeover time should not be a single “planned setup” bucket. When you separate expected setup work from setup exceptions (missing tools, program issues, QA holds, late staging), the gap between ERP assumptions and actual machine behavior becomes visible, especially by shift. That separation turns changeover from a debate into a capacity recovery tool you can manage week to week.
Why changeover time is quietly inflating your downtime numbers
In a CNC job shop, a “changeover” isn’t one action—it’s a sequence: tear-down, fixture swap, tool loading, offsets, program load, warm-up moves, probing, first-article, and the small handoffs that happen between people and shifts. Some of those steps are expected. Others are avoidable interruptions that show up as stop time but get labeled as something else.
The problem starts when changeover is treated as “planned,” so it escapes scrutiny. If everything during setup is automatically acceptable, then missing kits, tool shortages, and program questions become invisible inside a respectable-sounding category. On paper, downtime looks “normal.” In reality, utilization leakage is accumulating in the one window where machines are most likely to be stopped.
Misclassification triggers the wrong conversation. Instead of “our setup handoff is breaking down,” you get “we’re at capacity” or “we need another machine.” The decision pressure is real in 20–50 machine environments where the owner or ops manager can’t watch every pacer machine across multiple shifts. That’s why this article sits under a broader downtime visibility framework like machine downtime tracking: if setup-related time isn’t separated correctly, every downstream metric and staffing decision gets distorted.
A practical way to think about it: changeover creates both planned downtime (expected setup work) and unplanned downtime (exceptions that stop the plan). The shop needs reporting that reflects both; otherwise “setup” becomes a blanket label that hides the true drivers of stop time.
Define the boundary: what counts as changeover time (and what doesn’t)
If two operators measure changeover differently, you don’t have data—you have opinions. Start by choosing one boundary rule and enforcing it across machines and shifts. Two common options work, as long as you pick one:
Last good part to first good part: changeover begins after the last acceptable part of Job A and ends when the first acceptable part of Job B is produced.
Last cycle end to first cycle start: changeover begins when Job A stops cycling and ends when Job B starts cycling.
The first option is stricter and naturally captures first-article and prove-out effects. The second option is easier to capture automatically but can undercount quality or prove-out delays if they happen “after the cycle starts.” Either can work—what matters is consistency, because consistency is what lets you compare Shift A vs Shift B or one machine family vs another.
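If your monitoring captures timestamped machine events, the boundary rule can be computed rather than reconstructed from memory. Below is a minimal Python sketch under that assumption; the event kinds and field names are illustrative, not any particular system’s API.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MachineEvent:
    timestamp: datetime
    kind: str  # illustrative kinds: "good_part", "cycle_start", "cycle_end"
    job: str

def changeover_minutes(events: list[MachineEvent], job_a: str, job_b: str) -> float:
    """Boundary rule: last good part of Job A to first good part of Job B."""
    start = max(e.timestamp for e in events if e.kind == "good_part" and e.job == job_a)
    end = min(e.timestamp for e in events if e.kind == "good_part" and e.job == job_b)
    return (end - start).total_seconds() / 60.0
```

Swapping the two filters to the last “cycle_end” of Job A and the first “cycle_start” of Job B implements the second boundary rule with the same function shape, which is part of why it is easier to automate.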
Next, clearly separate standard setup tasks from non-setup tasks. Standard changeover tasks usually include: fixture/clamping changes, tool loading, offsets, probe calibration, program load, warm-up moves, coolant/nozzle adjustments, and basic verification steps required by your normal process.
What should not be counted as changeover: breaks, meetings, unrelated maintenance, or waiting on a forklift/material when it’s not part of your setup standard. Those are real losses, but if you lump them into setup, setup becomes a junk drawer category and you lose the ability to improve either area.
Gray areas need a rule in advance. First-article inspection, program prove-out, and in-process tool presetting can be legitimate parts of launching the next job, but they shouldn’t automatically be treated as “normal setup” if only certain parts or certain shifts experience them. The goal isn’t to debate lean terminology—it’s to classify the work the same way every time so reporting stays enforceable.
Planned vs unplanned inside the same changeover: a reporting model that works
Here’s the split that prevents changeover from becoming “planned therefore ignored”:
Planned changeover: expected setup work that matches your standard for that machine/job family.
Changeover exceptions (unplanned): delays that interrupt setup and should trigger follow-up.
Think of it as two layers in the downtime taxonomy: “Changeover (Planned)” plus “Changeover Exceptions.” This keeps normal setup visible (so scheduling can account for it) while exposing the specific reasons setup is running long.
End-to-end timeline example #1: 2nd-shift horizontal mill
Illustrative example (not a benchmark): A 2nd-shift horizontal mill finishes Job A. The operator begins the planned setup—pull fixture, load the next tombstone, call up the next program, start tool checks. Then the changeover stalls because a required toolholder isn’t in the crib location, and the operator spends time searching and calling for help. The machine sits stopped, and later the time gets recorded as generic “Idle” or even “Maintenance” because that’s the closest option on the sheet.
Under the planned vs exception model, that single changeover gets split:
Planned changeover: fixture swap, tool load/verification, offsets, warm-up moves.
Exception: Missing toolholder (tooling not staged / tool management failure).
Now the conversation changes. You don’t argue whether “setups are too long.” You can see that the planned portion is relatively stable while the leakage is coming from a specific exception that can be fixed through kitting, crib accuracy, or pre-stage checks.
End-to-end timeline example #2: Swiss lathe prove-out and first-article loop
Another illustrative example: A Swiss lathe changes from a repeating job to a lower-volume job with a recent program revision. The operator performs normal setup—guide bushing, tooling, bar feeder adjustments, offsets. Then the changeover extends because program prove-out and first-article inspection repeat: tweak a feed, re-cut, re-measure, adjust, and repeat. The ERP may treat all of this as “setup,” but operationally it’s different work with different owners.
Split it cleanly:
Planned changeover: standard Swiss setup steps you expect for that job family.
Exception: Engineering / program prove-out.
Exception: QA hold / first-article workflow delays (waiting on inspection or rework loop).
That separation enables a management decision: do you standardize the prove-out process, pre-verify code before release, or schedule prove-outs differently (for example, not burying them on 2nd shift with limited support)? Without the split, it just looks like “Swiss setups are long,” which isn’t actionable.
If you need help interpreting patterns once data is captured, an analysis layer like an AI Production Assistant can be useful—not to replace definitions, but to help ops teams summarize which exceptions dominate by machine family and shift so follow-ups happen faster.
How changeover time shows up in downtime tracking (and how it gets mislabeled)
In many shops, the downtime log doesn’t have a clean place for changeover exceptions, so they end up mislabeled. The common failure modes are predictable:
Idle/No operator: used when the machine is stopped but nobody wants to “pick a reason.”
Maintenance: used as a catch-all for anything that feels mechanical, even if the real issue is missing tooling or program questions.
Waiting on material: used broadly, even when the delay is actually staging discipline inside the changeover process.
Setup (no detail): one code that mixes planned and unplanned and guarantees that nothing changes.
This is the “planned bucket problem”: planned categories often escape scrutiny. If your only setup code is planned by definition, you’ll never see that most of the stop time inside the setup window is actually exceptions. That’s why changeover reporting needs the planned vs unplanned split, even if you keep everything else in your downtime system simple.
Shift handoffs make it worse. Changeover may start late on 1st shift and finish on 2nd; if each shift logs only what they touched, you can end up with time that’s split oddly, double-counted, or not counted at all. Consistent boundary rules reduce this, but you also need a practical capture method—ideally near-real-time—so the event isn’t reconstructed from memory at the end of the shift.
Manual entry based on end-of-shift notes creates systematic bias: fast setups get forgotten, long setups get rounded, and exceptions get “simplified” into whatever generic code is least controversial. This is one reason shops look at machine monitoring systems to capture machine states automatically, then prompt operators to classify the reason while it’s fresh.
Reason codes and time rules: make changeover measurable across 10–50 machines
To make changeover measurable across a mixed fleet, keep coding simple and rules explicit. The goal is high compliance, not a perfect taxonomy.
1) Keep the tree shallow. Use one planned code plus a small set of exception codes (5–8 max). Example structure (see the sketch after this list):
Changeover (Planned)
Changeover Exception: Missing tools/holders
Changeover Exception: Missing fixture/gage
Changeover Exception: Program issue / prove-out
Changeover Exception: QA hold / first-article workflow
Changeover Exception: Material not ready / staging late
Changeover Exception: Staffing/handoff gap
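However you capture it, the whole tree should fit in one small table. A minimal sketch of that structure (the code strings are assumptions, not a prescribed standard):

```python
from enum import Enum

class ChangeoverCode(str, Enum):
    PLANNED = "CO_PLANNED"                # expected setup per the standard
    EX_MISSING_TOOLS = "CO_EX_TOOLS"      # missing tools/holders
    EX_MISSING_FIXTURE = "CO_EX_FIXTURE"  # missing fixture/gage
    EX_PROGRAM = "CO_EX_PROGRAM"          # program issue / prove-out
    EX_QA = "CO_EX_QA"                    # QA hold / first-article workflow
    EX_MATERIAL = "CO_EX_MATERIAL"        # material not ready / staging late
    EX_STAFFING = "CO_EX_STAFFING"        # staffing/handoff gap

EXCEPTION_CODES = {c for c in ChangeoverCode if c is not ChangeoverCode.PLANNED}
```

If the list won’t fit on one screen at the machine, it’s too deep for consistent shift-to-shift coding.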
2) Define who codes what and when. If the operator is closest to the truth, have the operator code exceptions at the machine when possible. Supervisors can review later for coaching and consistency, but “fixing codes after the fact” becomes political and slow. Whatever method you choose, apply it the same way across shifts to prevent definition drift.
3) Set time rules so data doesn’t get noisy. Decide how you handle micro-stoppages and split events during setup (see the sketch after this list):
Use a minimum duration threshold (for example, “only code exceptions longer than 2–5 minutes”) so tiny interruptions don’t swamp the report.
Allow splitting: if planned setup is underway and then a missing-tool delay hits, record planned up to that point, then the exception, then return to planned.
Avoid time-window assumptions in cells; use start/stop rules tied to actual events (explained below).
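Here is a minimal sketch of the threshold and splitting rules together, assuming each changeover is captured as an ordered list of coded segments; the names are illustrative:

```python
from dataclasses import dataclass

MIN_EXCEPTION_MINUTES = 3  # pick once from the 2-5 minute range above

@dataclass
class Segment:
    code: str       # "planned" or one of the exception codes
    minutes: float

def apply_time_rules(segments: list[Segment]) -> list[Segment]:
    """Fold sub-threshold exceptions into planned time, then merge neighbors."""
    cleaned: list[Segment] = []
    for seg in segments:
        code = seg.code
        if code != "planned" and seg.minutes < MIN_EXCEPTION_MINUTES:
            code = "planned"  # too short to chase; keep the report readable
        if cleaned and cleaned[-1].code == code:
            cleaned[-1].minutes += seg.minutes
        else:
            cleaned.append(Segment(code, seg.minutes))
    return cleaned

# A split changeover: planned, a 12-minute missing-tool stall, planned again.
report = apply_time_rules(
    [Segment("planned", 20), Segment("missing_tools", 12), Segment("planned", 8)]
)  # all three segments survive; a 1-minute stall would be folded into planned
```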
4) Run a weekly audit loop focused on systems, not blame. Review the top changeover exceptions by machine family and by shift. Ask: is the problem process design (no kitting standard), support coverage (prove-outs stranded on 2nd shift), or execution variability (handoff gaps)? The same reporting discipline that improves changeover accuracy also supports cleaner outputs from machine utilization tracking software, because you’re no longer smearing setup leakage across unrelated categories.
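If the coded segments land in a spreadsheet or database export, the weekly review is a few lines of analysis. A sketch assuming a CSV with one row per segment (the file and column names are hypothetical):

```python
import pandas as pd

# Assumed columns: machine_family, shift, code, minutes
df = pd.read_csv("downtime_segments.csv")

top_exceptions = (
    df[df["code"] != "CO_PLANNED"]
    .groupby(["machine_family", "shift", "code"])["minutes"]
    .sum()
    .sort_values(ascending=False)
)
print(top_exceptions.head(10))  # the short list worth a follow-up this week
```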
What to do with the data: decisions you can make in weeks, not quarters
Once you can see planned changeover separately from exceptions, you can make faster decisions without waiting for a full “initiative.” The key is to use the split to distinguish structural time (normal setup you must schedule for) from recoverable time (exceptions you can eliminate).
Capacity math before capital spend. If your exception categories are dominating, buying another machine won’t fix late staging, missing holders, or prove-outs bottlenecked by engineering/QA. Clean reporting helps you eliminate hidden time loss before you add equipment or overtime.
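The arithmetic is deliberately simple. A back-of-envelope sketch with made-up numbers (substitute figures from your own exception report):

```python
machines = 20
changeovers_per_machine_week = 8
avg_exception_minutes = 14  # mean exception time per changeover, from the split

recoverable_hours_week = (
    machines * changeovers_per_machine_week * avg_exception_minutes / 60
)
print(f"~{recoverable_hours_week:.0f} machine-hours/week recoverable")  # ~37 here
```

If that number rivals a machine’s weekly output, the staging and tooling fixes come before the purchase order.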
Scheduling policy decisions grounded in reality. When planned setup is measured consistently, you can decide whether to batch by family or prioritize flow based on your own internal data—not assumptions. This stays within downtime/accounting scope: you’re not redesigning scheduling software, you’re ensuring scheduled setup expectations match what actually happens at the spindle.
Targeted process fixes by exception driver. Because exceptions are categorized, countermeasures get specific:
Missing tools/fixtures: kitting discipline, crib accuracy checks, pre-stage verification.
Program issues/prove-out: release gates, pre-verification steps, scheduled prove-out windows with support coverage.
QA hold/first-article workflow: explicit handoff rules, inspection queue visibility, defined “first-article ready” signals.
Material not ready: staging checklists tied to the boundary rule (what must be ready before the last good part of the prior job).
Scenario: why event-based rules matter in a multi-machine cell
In a multi-machine cell, a common trap is using a simple “changeover window” (for example, 1:00–2:00) and assuming all stop time inside that window is setup. If an operator stages material late, machines may show intermittent stop patterns: one machine cycles briefly, another pauses, then both stop while material arrives, then one restarts. A time window will over-attribute stops to setup and under-attribute the real cause (staging discipline).
Event-based start/stop rules prevent this. Tie changeover to a defined machine event (cycle end of last job, cycle start of next job, or first good part logic), then code the actual interruption inside that period as a changeover exception (material not ready). This keeps your setup reporting honest across complex cells where “setup time” is rarely a clean block.
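The rule itself is one comparison per stop. A minimal sketch, assuming you can pull each machine’s anchor events (parameter names are illustrative):

```python
from datetime import datetime

def stop_counts_as_changeover(stop_start: datetime,
                              last_cycle_end_a: datetime,
                              first_cycle_start_b: datetime) -> bool:
    """A stop belongs to changeover only if it falls between this machine's
    anchor events, not merely inside a scheduled wall-clock window."""
    return last_cycle_end_a <= stop_start < first_cycle_start_b
```

A fixed 1:00–2:00 window, by contrast, claims every stop in that hour, including the late-material stall on a machine that was still cycling.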
If you’re implementing better tracking, the practical considerations are usually less about “features” and more about friction: mixed legacy/modern machines, multi-shift adoption, and how quickly you can get consistent reason coding without a big IT project. When you evaluate rollout scope and ongoing costs, reference your internal constraints and use a simple cost frame (license + installation effort + operator time). If helpful, you can review approach-level details on pricing to align expectations—without needing exact numbers to decide whether a pilot is worth it.
A diagnostic you can run this week: pick one pacer machine on each shift and require the planned vs exception split for every changeover. If you can’t do it cleanly with your current logs, that’s the visibility gap—not operator effort—that’s slowing decisions.
If you want to see what this looks like when changeover boundaries, machine states, and exception coding are captured consistently across a mixed fleet, you can schedule a demo. The goal is straightforward: separate planned setup from avoidable setup delays so you can recover capacity before you spend capital.
