Lead Time vs Cycle Time (CNC): What Actually Drives Delays
- Matt Ulepic
- 4 hours ago
- 9 min read

Lead Time vs Cycle Time: Why CNC Due Dates Slip Even When Cycle Time Looks “Good”
A common shop-floor myth is that “if our cycle times are on target, lead times should follow.” That belief is usually reinforced by ERP timestamps that look clean on paper—while due dates keep sliding, expedite pressure grows, and the floor feels busy without finishing what was promised.
In CNC job shops, cycle time can stay stable for weeks while lead time quietly expands because the lost time isn’t in cutting—it’s in waiting, handoffs, setup readiness, inspection holds, and small interruptions that never change the “average cycle time per part.” The practical win is separating these buckets so you can recover capacity before you assume you need more machines.
TL;DR — lead time vs cycle time
Cycle time can be stable while lead time grows because waiting multiplies across operations and shifts.
Treat “touch time” (setup + run) and “wait time” (queue + holds + blocked) as separate realities.
ERP timestamps often reflect transactions, not true start/stop at the machine—especially across shifts.
Expedites usually don’t change cycle time; they change queue order, changeover frequency, and “planned vs actual start” gaps.
Look for idle-with-queue, blocked-for-inspection, and short stops as utilization leakage signals.
Use a weekly list of jobs with high lead-time inflation (lead time ÷ touch time) to focus improvements.
Fixes should map to a bucket (queue, setup readiness, inspection/handling, interruptions), not generic “process improvement.”
Key takeaway: Lead time is a system outcome; cycle time is only one component. When you separate touch time from waiting—and then tie waiting to machine states and shift-level handoffs—you can see where capacity is leaking (idle, blocked, extended setups) versus where workflow is simply slow (queue, inspection holds, dispatch delays). That visibility shortens decision loops without assuming the answer is more equipment.
Why cycle time can look great while lead time keeps slipping
Cycle time is the processing portion—what the machine does to make a part during a cycle. Lead time is the elapsed calendar time it takes a job to go from “released” (or however you define the start) to complete. If you’re only watching cycle time, you’re staring at the cutting window while most of the schedule risk sits outside it.
In job shops, the common reason “lead time grows” is waiting: jobs sitting queued between ops, waiting on inspection, waiting on a fixture or tool, waiting on a program revision, or waiting because dispatch priorities weren’t clear at shift change. None of those problems necessarily change a per-part cycle time, so they don’t show up in the metric people argue about most.
Utilization leakage also hides in short, frequent interruptions—10–30 minute gaps that get hand-waved as “normal.” They don’t always move the average cycle time, but they do move completion dates because they compound across multiple machines, multiple routings, and multiple shifts. The goal here isn’t to debate definitions; it’s to connect both metrics to decisions you can make faster with better operational visibility—run vs idle vs setup vs blocked, not “busy vs not busy.”
If you’re already exploring machine utilization tracking software, this lead-time vs cycle-time split is a practical way to decide what you need to measure first: waiting and handoffs, or true machining performance.
Lead time vs cycle time in a CNC job shop: the minimum definitions that matter
Cycle time is the time to complete one unit (or one machine cycle) at a specific operation. In practice, shops track it in different ways: some include only in-cycle cutting time; others include automatic tool changes; others blend in operator interaction. The key is consistency—cycle time should describe the processing window at the machine, not the calendar time the job spent “in the department.”
Lead time is elapsed calendar time from a defined start point to a defined finish point. For most CNC shops, release-to-complete is the better operational definition (rather than quote-to-ship), because release is where you can actually control flow. Others may use order acceptance to shipment for customer-facing quoting; just don’t mix the two in the same conversation.
A more useful split than arguing edge cases is touch time vs wait time. Touch time is the labor/machine time that advances the job (setup + run + required checks). Wait time is everything that doesn’t: queue, holds, approvals, missing tools, unclear priority, blocked downstream. When lead time is high, it’s usually because wait time dominates, even if touch time is well understood.
Two pitfalls create bad decisions fast: (1) comparing per-part cycle time to per-job lead time as if they’re equivalent, and (2) using inconsistent start/stop timestamps across shifts (first shift clocks “start” when they print the traveler; second shift clocks “start” when the first good part comes off). If you can’t trust the timestamps, you’ll “fix” the wrong problem.
The ‘time stack’ that connects them: where lead time actually comes from
To connect lead time vs cycle time in a job shop, use a simple “time stack” for each job (or for the critical operations on that job):
Lead time = queue/wait + setup/changeover + run time (cycle time × quantity) + handling/inspection + interruptions/rework
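The stack is easy to compute per job once the buckets are captured. Here is a minimal sketch in Python—all values are hypothetical, and the field names are illustrative, not from any particular system:

```python
from dataclasses import dataclass

@dataclass
class TimeStack:
    """One job's lead-time components, all in hours (illustrative values only)."""
    queue_wait: float            # queue + holds + blocked
    setup: float                 # setup/changeover
    cycle_time_per_part: float   # per-part machine cycle
    quantity: int
    handling_inspection: float   # required checks and handling
    interruptions_rework: float

    @property
    def run_time(self) -> float:
        return self.cycle_time_per_part * self.quantity

    @property
    def lead_time(self) -> float:
        return (self.queue_wait + self.setup + self.run_time
                + self.handling_inspection + self.interruptions_rework)

    @property
    def touch_time(self) -> float:
        # Time that actually advances the job: setup + run + required checks
        return self.setup + self.run_time + self.handling_inspection

    @property
    def wait_time(self) -> float:
        return self.lead_time - self.touch_time

job = TimeStack(queue_wait=9.0, setup=0.75, cycle_time_per_part=0.05,
                quantity=20, handling_inspection=0.5, interruptions_rework=0.5)
print(f"lead time:  {job.lead_time:.2f} h")   # 11.75 h
print(f"touch time: {job.touch_time:.2f} h")  # 2.25 h
print(f"inflation:  {job.lead_time / job.touch_time:.1f}x")
```

Even with made-up numbers, the shape of the result is the point: the job's lead time is several multiples of its touch time, and that ratio (the inflation factor used later in this article) is what tells you waiting dominates.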
Some buckets are closely tied to machine utilization (extended setups, idle-with-queue, blocked states, frequent short stops). Others are workflow-policy driven (batching rules, inspection batching, approvals, when jobs are released). You need both to make correct calls, but they demand different fixes.
Batching is a classic example. Larger batches can reduce the reported setup frequency and make a single machine look “efficient,” while increasing lead time by holding work in queue longer and delaying downstream operations. It can also raise expedite risk: when a hot job arrives, you interrupt a long batch, create more changeovers, and inject more waiting elsewhere.
Multi-operation routings multiply waiting. A two-op part that waits “just a few hours” between ops becomes a part that burns a full day once you factor in shift boundaries, inspection availability, and material handling timing. If you want a practical foundation for capturing run/idle/setup states that feed this time stack, see machine monitoring systems—not for dashboards, but for turning “it sat somewhere” into a specific, actionable reason.
Scenario 1: Shift handoff makes lead time explode while cycle time stays flat
Multi-shift handoffs are where lead time quietly balloons. Second shift often inherits partially complete work with ambiguous next steps: “Is inspection required before Op 20?” “Where’s the fixture?” “Did Op 10 finish all pieces or just the first article?” Cycle time per part can be perfectly fine while the job spends most of its life waiting.
Worked mini-example (simple time stack):
Op 10 run time: 45–60 minutes total (cycle time per part unchanged)
Setup at Op 10: 30–60 minutes
Queue between Op 10 and inspection: 6–10 hours (end of shift + no slot)
Inspection/handling: 20–40 minutes
Queue before Op 20: 2–6 hours (priority unclear; job sits)
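Totaling the stack above makes the imbalance concrete. A quick sketch using the midpoints of each range (assumed values, not measurements):

```python
# Midpoints of the ranges in the mini-example above, in minutes (illustrative)
stack = {
    "run (Op 10)":          52.5,
    "setup (Op 10)":        45.0,
    "queue to inspection":  8 * 60.0,   # midpoint of 6–10 hours
    "inspection/handling":  30.0,
    "queue before Op 20":   4 * 60.0,   # midpoint of 2–6 hours
}

total = sum(stack.values())
waiting = stack["queue to inspection"] + stack["queue before Op 20"]

print(f"total elapsed:  {total / 60:.1f} h")
print(f"waiting share:  {waiting / total:.0%}")
```

With these midpoints, roughly 85% of the elapsed time is queue—none of which appears in per-part cycle time at Op 10.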
Cycle time data would tell you “Op 10 is normal.” Lead time shows the reality: the job is dominated by queue and handoff ambiguity. This is where real-time shop-floor visibility changes the conversation from “the machine wasn’t running” to something precise like “idle with a queue present,” “blocked waiting on inspection,” or “waiting on program/tooling.” That precision is how you shorten decision loops across shifts.
Operational fixes should map to the bucket that’s expanding:
Standard handoff rules: what “done” means for Op 10, and what must be staged for Op 20
WIP location/status: a consistent “where is it and what’s next” signal
Inspection slotting: reserve same-shift windows for jobs that unblock downstream ops
Dispatch triggers: don’t wait for a meeting; trigger next-op work when the job hits a defined state
If you’re trying to categorize those “in-between” gaps consistently, downtime and stop reasons are part of the same measurement discipline as lead time. (Related: machine downtime tracking.)
Scenario 2: Expedites don’t change cycle time—they change everyone’s waiting time
An expedite event is the fastest way to expose the difference between lead time and cycle time. You insert a hot job mid-day and the machining cycle time for that job may be exactly what you expected. The damage shows up elsewhere: queues reshuffle, jobs pause mid-routing, and changeovers become more frequent.
Worked mini-example (what changes when a hot job is inserted):
Planned: Machine runs Job A for 3–4 hours, then a single changeover to Job B.
Reality: Hot Job X is inserted, creating two extra changeovers (A→X, X→A, then A→B later).
Micro-idle events appear: tool offsets, first-article checks, finding the correct fixture, waiting on sign-off (each 10–30 minutes).
Cycle time per part can remain identical, while lead time for Jobs A and B expands because their “waiting time stack” grows.
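The arithmetic of the inserted hot job can be sketched the same way. The changeover duration and micro-idle counts below are assumptions for illustration, not measured values:

```python
# Illustrative expedite math, all in minutes (assumed values)
changeover_min = 40       # assumed average changeover duration
micro_idle_events = 4     # offsets, first article, fixture hunt, sign-off
micro_idle_each = 20      # midpoint of the 10–30 minute range

planned_changeovers = 1   # A -> B
actual_changeovers = 3    # A -> X, X -> A, then A -> B later

extra = ((actual_changeovers - planned_changeovers) * changeover_min
         + micro_idle_events * micro_idle_each)
print(f"extra non-cutting time injected by the expedite: {extra} min")
```

Under these assumptions the expedite injects over two and a half hours of non-cutting time, and every minute of it lands in the waiting stacks of Jobs A and B while their per-part cycle times stay identical.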
This is why lead time is a system metric: it reflects dispatching and flow, not just machining performance. What helps operationally is watching signals that indicate decision lag and utilization leakage:
Growing “planned vs actual start” gaps for operations (work is released, but doesn’t truly begin)
More frequent setup states and short stops
Idle time that occurs while work is visibly waiting in queue (a dispatch/priority issue, not “no work”)
Decision levers are practical and specific: define an expedite “freeze window” (don’t reshuffle everything every hour), create a dedicated expedite lane with explicit rules, and pre-kit tools/fixtures so the expedite doesn’t create unnecessary idle time. If you’re in vendor-evaluation mode, a good test is whether the system helps you interpret these patterns quickly, not just collect them. That’s the role of an AI Production Assistant: turning noisy states and comments into a short list of “what’s causing waiting right now?” so dispatch decisions happen faster.
How to measure both metrics without lying to yourself (ERP timestamps vs shop-floor signals)
Measurement errors are why lead time vs cycle time arguments go nowhere. The fix is not a bigger KPI list—it’s explicit start/end points and a minimum set of signals you can trust across shifts and across a mixed fleet of machines.
1) Define start and end points (and write them down)
For lead time, pick one definition for internal control (commonly release-to-complete) and stick with it. For cycle time, decide whether you mean “machine cycle” or “run time per part including load/unload,” and don’t switch midstream. Also define operation start: is it “operator clocks in,” “first cycle begins,” or “first good part”?
2) Understand what ERP timestamps really represent
ERP is often transaction time: someone moved an operation, issued material, or completed a step. That’s still useful for routing accountability, but it’s lagging and can be inconsistent during busy periods or shift changes. If you use it as “truth,” you’ll miss the waiting time that drives lead time inflation.
3) Capture shop-floor reality: machine and operator signals
Machine and operator inputs fill the gap: run/idle/setup states, stop reasons, and “waiting on” categories (inspection, tooling, program, material, priority). This is how you identify utilization leakage: time when a resource could be producing but isn’t, for a specific reason. If you want a baseline for what those state models typically look like, see machine utilization tracking software as a reference point.
Minimum viable measurement (what to collect first)
You don’t need perfection to get value. A practical minimum for a 10–50 machine, multi-shift shop is: consistent routing timestamps (release, op start, op complete), machine state (run/idle/setup), and a short set of reason codes for your top downtime/stop categories. That gives you enough resolution to separate “workflow waiting” from “machine-time loss” and focus improvement where it actually returns capacity.
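That minimum can be expressed as a tiny data model. The state names and reason codes below follow the categories used in this article; the exact enumeration is a sketch you would tailor to your own top stop categories:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class MachineState(Enum):
    RUN = "run"
    IDLE = "idle"
    SETUP = "setup"

class StopReason(Enum):
    # Keep this list short: only your top downtime/stop categories
    INSPECTION = "waiting on inspection"
    TOOLING = "waiting on tooling"
    PROGRAM = "waiting on program"
    MATERIAL = "waiting on material"
    PRIORITY = "priority unclear"

@dataclass
class RoutingEvent:
    """Consistent routing timestamps: release, op start, op complete."""
    job: str
    op: str
    event: str        # "release" | "op_start" | "op_complete"
    at: datetime

def idle_with_queue(state: MachineState, queued_jobs: int) -> bool:
    """Flag the key leakage signal: machine idle while released WIP waits for it."""
    return state is MachineState.IDLE and queued_jobs > 0

print(idle_with_queue(MachineState.IDLE, 3))  # True
```

Three signals—routing timestamps, machine state, and a short reason-code list—are enough to separate workflow waiting from machine-time loss.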
If you’re wondering what implementation and ongoing cost typically depend on (number of machines, state model depth, and support approach), this is where it’s reasonable to look at pricing—not for a number, but to understand what’s included versus what becomes internal overhead.
Using lead time and cycle time together to find utilization leakage (a quick diagnostic)
Once you’re measuring consistently, use lead time and cycle time together as a diagnostic—not a scoreboard. The pattern tells you where to look first:
If lead time is up while cycle time is flat
Assume waiting is growing. Investigate queue time, dispatch rules, inspection/handling constraints, and blocked states. This is where the ERP-vs-reality gap hurts: the ERP may show “started” and “completed,” but the job may have been parked for hours between those clicks. Look for idle-with-queue conditions: machines waiting despite work being available, often driven by unclear priority or missing readiness items.
If both lead time and cycle time are up
Now you may have process instability or readiness problems: setup creep, tooling/program availability, first-article loops, or rework. You’re not just dealing with queue time; you’re consuming more touch time than planned. This is also where frequent short stops create a “death by a thousand cuts” effect: each one is small, but together they extend completion dates.
If cycle time is down but lead time is unchanged
Waiting dominates. Your machining got faster, but jobs still spend most of their life not being processed. This is where you should resist capital expenditure assumptions. Before buying capacity, confirm whether the constraint is actually flow: handoffs, WIP visibility, inspection policy, release discipline, and dispatch timing.
A simple weekly review that stays practical
Keep it enforceable for a multi-shift shop:
Top jobs by lead-time inflation factor (lead time ÷ touch time). A high factor means waiting is the real issue.
Top machines by idle-with-queue (idle while WIP exists), which indicates dispatch/readiness leakage.
Top stop reasons by frequency (short stops matter when they’re constant).
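The first item on that list is a one-liner once touch time is captured. A toy ranking with made-up hours (job names and values are hypothetical):

```python
# Rank jobs by lead-time inflation factor (lead time / touch time); hours are made up
jobs = [
    {"job": "J-101", "lead_h": 52.0, "touch_h": 6.5},
    {"job": "J-102", "lead_h": 18.0, "touch_h": 9.0},
    {"job": "J-103", "lead_h": 40.0, "touch_h": 4.0},
]
for j in jobs:
    j["inflation"] = j["lead_h"] / j["touch_h"]

worst = sorted(jobs, key=lambda j: j["inflation"], reverse=True)
for j in worst:
    print(f'{j["job"]}: {j["inflation"]:.1f}x')
```

A 10x job and a 2x job may have similar cycle times; the factor tells you which one is drowning in waiting, and that is the one worth walking the floor for first.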
Optional scenario to watch for: a machine can show high utilization while job lead times remain long because it’s running long batches that starve upstream/downstream steps—or creates blocked WIP after it runs. That’s another version of the same issue: local efficiency masking system waiting time. The fix still starts with a time stack and clear states, not a new KPI.
If you want to sanity-check your current lead time inflation and idle-with-queue patterns using your actual shift data (without turning it into a months-long IT project), the next step is to schedule a demo. Bring one problem job and one “busy but late” machine—we’ll map the time stack and show what you can verify quickly on the floor.
