Production Tracker: See Run vs Idle Time by Shift
- Matt Ulepic

Production Tracker: How CNC Shops Turn Run vs Idle Visibility into Recovered Capacity
If your ERP says the shop is loaded but the floor still feels like you’re “short on hours,” the problem usually isn’t demand—it’s unmeasured time loss inside the shift. In a 10–50 machine CNC job shop, that loss hides in setups that stretch, first-article loops that stall a cell, and short idle bursts that never make it into anyone’s notes.
A production tracker is useful when it creates a shared, time-based truth: what was scheduled, what actually ran, what sat idle, and when those patterns repeat—by machine and by shift—so you can assign ownership and remove the constraint before you buy another machine.
TL;DR — Production tracker
A production tracker is primarily a run-vs-idle time tool; output counts alone won’t show where scheduled hours disappear.
Machine-level timelines (with timestamps) expose setup creep, queue starvation, approval waits, and inspection holds.
“Busy” operators can still produce frequent idle blocks if material, tools, or programs aren’t ready-to-run.
Shift-to-shift comparisons often reveal process gaps (handoff, approvals, tribal knowledge), not “people problems.”
Read the data as signals: long idle blocks vs repeated short interruptions point to different causes and owners.
Start with a short list of idle reasons tied to decisions; reduce “unknown” through a weekly data-quality loop.
Use a 10-minute daily review to prioritize one or two constraints to remove—not to build a vanity dashboard.
Key takeaway: The operational value of a production tracker is exposing the gap between scheduled time and true cutting time by machine and shift, then tying idle patterns to specific causes (approvals, kitting, setup readiness, inspection holds). Once that gap is visible, you can recover capacity with focused process fixes—often faster and with less risk than adding equipment—because the “why” becomes discussable and assignable.
What a production tracker should make visible (run vs idle, not just output)
In a CNC job shop, “tracking production” is often treated as counting parts or logging hours at the end of the day. The limitation is that output is a lagging indicator: you can hit the part count and still lose large chunks of available machine time to waiting, setup extensions, and stop-start interruptions. A production tracker earns its keep when it makes time states visible while the shift is happening.
The core is a time-based truth: scheduled time versus actual run time versus non-cutting states. That doesn’t require a long theory lesson about utilization—it’s simply a way to see, “Was the spindle actually making chips when we thought it was?” and “When it wasn’t, what did the idle look like?”
Granularity matters. End-of-day summaries can hide the difference between one 90-minute stoppage and nine 10-minute disruptions, even though the operational fixes are completely different. A machine-level timeline with timestamps lets you connect the time loss to what was happening on the floor: a first-article call, a tool that wasn’t preset, a fixture that wasn’t staged, or a material cart that arrived late.
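To make the point concrete, here is a minimal sketch with hypothetical machine names and idle data: two machines that lose the same 90 minutes look identical in an end-of-day summary, but their event profiles call for different fixes.

```python
# Illustrative sketch with hypothetical data: end-of-day totals hide whether
# 90 idle minutes were one long stoppage or nine short disruptions.
idle_events = {
    "HMC-3": [90],        # one 90-minute stoppage: likely a discrete bottleneck
    "HMC-7": [10] * 9,    # nine 10-minute disruptions: likely readiness gaps
}

for machine, durations in idle_events.items():
    print(f"{machine}: total {sum(durations)} min, "
          f"{len(durations)} events, longest {max(durations)} min")
# Both machines lose 90 minutes, but the fixes (and owners) differ.
```

The summary column is identical; only the event count and longest-block figures tell you which machine has a bottleneck and which has a readiness problem.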
This is also where “busy” stops being a reliable signal—especially in multi-machine tending. An operator can be moving constantly (deburr, checks, paperwork, finding tools) while the machine sits idle in short bursts. A tracker makes the distinction visible without blaming anyone: it separates human motion from machine motion.
If you want deeper background on utilization framing and baselining, you can reference machine utilization tracking software—but the practical test here is simple: can the tracker show run vs idle by machine and shift in a way that drives action today?
Where capacity actually gets lost in CNC shops (common idle patterns)
The fastest way to decide whether you need a production tracker is to audit the types of “invisible” losses you already suspect. Most mid-market job shops don’t have a single big culprit; they have multiple repeatable patterns that bleed time across machines and shifts.
Micro-stops and short idles that add up
These are the 3–12 minute interruptions that rarely get written down: waiting on a tool, clearing chips, grabbing gages, looking for the right offsets, or getting an approval. Each event feels small. In aggregate, they can consume meaningful capacity because they repeat across the shift and across multiple machines.
Setup and changeover creep
High-mix work creates legitimate setup work—fixture swaps, probing, first part verification, offsets, and prove-out. The problem is when “scheduled setup” becomes a bucket for everything: tool hunting, fixture staging, program edits, in-process inspection holds, and waiting for a lead to come over. A tracker can separate planned non-cutting time from avoidable delays inside that window.
Queue starvation (the floor looks busy, machines aren’t)
A common scenario is “perceived busyness”: operators are bouncing between tasks, but machines repeatedly pause because material kits aren’t ready, tool presetting is behind, or the next job traveler packet isn’t complete. Without time-state visibility, you can mistake motion for throughput.
Downstream holds
Inspection backlog, first-article loops, deburr/secondary ops capacity, and QA sign-offs can all strand machines. The schedule may assume smooth flow, but the shop lives in exceptions. A production tracker makes it obvious when the constraint isn’t “the machine” at all—it’s the wait state around it. For a deeper dive on capturing and governing downtime reasons, see machine downtime tracking.
How to read a production tracker like an operations tool (not a report)
The trap with production data is treating it like a monthly scorecard. The operational advantage comes from reading patterns quickly and turning them into a small number of questions the team can answer today.
Timeline patterns: long blocks vs frequent interruptions
A single long idle block often points to a discrete bottleneck (waiting on inspection, missing material, program not released, maintenance response). Repeated short interruptions usually indicate readiness problems (tools not preset, gage hunting, unclear setup sheets, queue not staged). Both reduce run time, but they require different fixes and different owners.
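The distinction above can be expressed as a simple heuristic. This is a sketch, not a standard: the thresholds (45 minutes for a "long block," 12 minutes for a "micro-stop," four occurrences) are assumptions you would tune to your shop.

```python
# Heuristic sketch; thresholds are assumptions, not industry standards.
def classify_idle_pattern(durations_min, long_block=45, micro=12, micro_count=4):
    """Map an idle-duration profile to a likely cause category."""
    if any(d >= long_block for d in durations_min):
        return "discrete bottleneck: inspection wait, missing material, program not released"
    if sum(1 for d in durations_min if d <= micro) >= micro_count:
        return "readiness gap: tools not preset, gage hunting, queue not staged"
    return "mixed: review the timeline directly"

print(classify_idle_pattern([90]))              # one long block
print(classify_idle_pattern([5, 8, 11, 6, 9]))  # repeated short interruptions
```

The value of even a crude classifier is that it routes the conversation to the right owner before the daily review starts.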
Comparisons that reveal truth
Useful comparisons are rarely “plant vs plant.” In a job shop, the revealing cuts are machine-to-machine (same part family on two machines), shift-to-shift (same machine, same work, different results), and job-to-job (similar setups, different setup duration). These comparisons expose the gap between plan and actual behavior without turning into a blame exercise.
Run/idle by hour to catch handoffs
Breaking the shift into hours can surface consistent “dead zones”: the first hour after shift start, the last hour before shift change, lunch coverage, or the moment the programmer leaves. The point isn’t to police breaks—it’s to identify systematic handoffs and constraints that can be engineered out (ready-to-run packets, pre-shift validation, clear escalation rules).
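Binning idle time by hour is straightforward once you have timestamped intervals. The sketch below uses two hypothetical idle intervals and counts each idle minute against the hour it fell in, which is enough to surface a shift-start dead zone or a lunch-coverage gap.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical idle intervals (start, end) for one machine, one shift.
idle_intervals = [
    (datetime(2024, 5, 1, 6, 5), datetime(2024, 5, 1, 6, 40)),     # after shift start
    (datetime(2024, 5, 1, 11, 50), datetime(2024, 5, 1, 12, 20)),  # lunch coverage
]

idle_min_by_hour = Counter()
for start, end in idle_intervals:
    t = start
    while t < end:
        idle_min_by_hour[t.hour] += 1   # count each idle minute against its hour
        t += timedelta(minutes=1)

for hour in sorted(idle_min_by_hour):
    print(f"{hour:02d}:00  {idle_min_by_hour[hour]} idle min")
```

A consistent spike in the same hour across days is the signal: that is a handoff or coverage problem, not random variation.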
Turn a signal into a question and an owner
The most useful operational questions are plain: “What are we waiting on?” “Is this a readiness issue (tools/material/program) or a downstream hold (inspection/secondary)?” “What has to be true before this machine can run the next hour?” When the tracker helps answer those quickly, decisions speed up.
Some shops also benefit from an interpretation layer that helps translate patterns into likely causes and next actions. If you’re looking for that kind of guided analysis, see the AI Production Assistant as an example of turning raw time-states into operational prompts—without turning the exercise into generic KPI reporting.
Scenario: The ‘same machine, different shift’ problem
Scenario: A horizontal mill runs a repeating part family. First shift looks strong—steady run blocks with predictable setup time. Second shift, the supervisor hears, “We’re working on it,” but deliveries slip and the machine “just never gets going.”
What the tracker shows: Second shift has idle spikes clustered around the start of the job and after the first couple parts—short run bursts followed by long pauses. The timestamps line up with program edits, offset changes, and first-article checks. The schedule said the machine was assigned work; the timeline shows it wasn’t truly running for long stretches.
Root cause discovered: An approval bottleneck and tribal knowledge gap. Second shift can load the job, but when the first-article deviates or an offset needs confirmation, they wait for a lead, QA, or a programmer who’s no longer on site. The result is extended first-article loops and “permission to proceed” delays that don’t show up as a distinct problem in manual logs.
Action taken: Build ready-to-run packets and rules that match reality: standardized setup sheets with verified offsets, pre-shift program validation on repeat jobs, a clear escalation path for first-article/offset approvals, and an agreed boundary for what second shift can approve without waiting. None of this requires a new system; it requires clear ownership triggered by what the tracker is revealing.
Measured outcome (time recovered): Use arithmetic your team can verify. If second shift experiences three “approval waits” per night at 20–40 minutes each, that’s 60–120 minutes of lost run opportunity per night. Across 5 nights, that’s 5–10 hours per week on one machine—before you count the ripple effect on downstream operations. The win isn’t a KPI; it’s reducing the repeatable waiting pattern.
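The arithmetic above can be written down so anyone on the team can check it; the inputs are the scenario's own numbers (three waits per night, 20–40 minutes each, five nights per week).

```python
# The scenario's own numbers: three approval waits per night at 20-40 minutes each.
waits_per_night = 3
wait_minutes = (20, 40)      # low / high estimate per wait
nights_per_week = 5

low_h = waits_per_night * wait_minutes[0] * nights_per_week / 60
high_h = waits_per_night * wait_minutes[1] * nights_per_week / 60
print(f"{low_h:.0f}-{high_h:.0f} hours/week of lost run opportunity on one machine")
# → 5-10 hours/week of lost run opportunity on one machine
```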
Scenario: Setup creep and tool readiness on high-mix work
Scenario: Changeovers are scheduled at 45 minutes on a set of high-mix mills and lathes. In reality, they regularly land in the 70–90 minute range. The shop knows it’s happening, but the debate is always, “Is it the setup, or is something else getting folded into setup time?”
What the tracker shows: During the “setup window,” the machine doesn’t stay consistently non-running in one block. Instead, it toggles: short run attempts, stops, idle while someone searches for tools, then another short run, then a pause for in-process inspection. That stop-start signature points to readiness gaps more than pure setup complexity.
Root cause discovered: Tool presetting isn’t synchronized with the schedule, fixtures aren’t consistently staged, and inspection interrupts setup because there’s no predictable slot for first-piece checks. Operators look “busy” because they’re walking, asking, and waiting—while the machine sits.
Action taken: Tighten the readiness system instead of re-litigating the schedule: enforce kitting discipline, set tool cart standards, pre-stage fixtures for the next job, and create an inspection window (or clear trigger) so QA isn’t an ad-hoc interruption. If you also need better visibility into when “setup” becomes “down” and why, that connects naturally to machine monitoring systems as the broader mechanism that feeds time-state data.
Simple arithmetic to prioritize: You don’t need perfect math—just transparent math. If the tracker shows you can remove 15–20 minutes of avoidable idle inside each changeover (tool hunting + waiting + staging), and you do 6 changeovers/day across a small cell, that’s 90–120 minutes/day of additional run opportunity. Over a week, that becomes a meaningful block of time you can schedule against—without new equipment.
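As with the first scenario, the math is transparent enough to write out. The per-changeover savings and changeover count come from the text; the five-day week is an assumption.

```python
# The section's numbers: 15-20 recoverable minutes per changeover, 6 changeovers/day.
recoverable_min = (15, 20)
changeovers_per_day = 6
days_per_week = 5   # assumption: a five-day week

low = recoverable_min[0] * changeovers_per_day
high = recoverable_min[1] * changeovers_per_day
print(f"{low}-{high} min/day recovered")
print(f"{low * days_per_week / 60:.1f}-{high * days_per_week / 60:.1f} hours/week to schedule against")
```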
Choosing metrics that drive action (and avoiding vanity tracking)
If you’re problem-aware, the goal isn’t to build a dashboard; it’s to run the shop with fewer surprises. The best production tracking views are the minimum set that creates accountability and faster decisions.
Primary view: run vs idle per machine per shift (with timestamps)
This is the non-negotiable starting point. You want to see which machines are giving back time, when it happens, and whether the pattern is stable or shifting. When the tracker highlights shift-level differences, it gives you a direct place to intervene: handoffs, readiness, and approvals.
Secondary view: top idle reasons (short list) and minimizing “unknown”
Keep the reason list tight—categories that lead to an owner and an action (material not ready, tooling not ready, waiting on program, waiting on first article/inspection, maintenance response, setup in progress). “Unknown” will exist at first; the goal is to reduce it via a weekly review, not by forcing operators into long menus.
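One way to keep the list honest is to write it as an explicit reason-to-owner mapping; if a code cannot be assigned an owner, it is probably a frustration label. The owner role names below are illustrative, not prescriptive.

```python
# A deliberately short reason list; each code maps to an owner (role names are illustrative).
REASON_OWNERS = {
    "material not ready": "kitting/staging lead",
    "tooling not ready": "tool presetting",
    "waiting on program": "programming release & validation",
    "waiting on first article/inspection": "QA flow / first-article policy",
    "maintenance response": "maintenance",
    "setup in progress": "cell lead",
    "unknown": "weekly data-quality review",
}

# The test for a good code: if it tops the idle list, you know who fixes it.
for reason, owner in REASON_OWNERS.items():
    print(f"{reason:38s} -> {owner}")
```

Note that "unknown" gets an owner too: the weekly review loop described below, rather than any individual operator.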
Daily cadence: what to review in a 10-minute tier meeting
A practical cadence is: pick the top one or two machines that lost the most run time last shift, identify the dominant idle pattern, and assign a single countermeasure with an owner. This keeps the tracker anchored to capacity recovery, not policing.
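The selection step of that cadence is trivial to automate: rank machines by lost run minutes from the last shift and take the top one or two. The machine names and totals below are hypothetical.

```python
# Hypothetical last-shift totals; pick the top two loss machines for the tier meeting.
lost_run_min = {"HMC-3": 95, "VMC-1": 40, "Lathe-2": 130, "HMC-7": 88}

top_two = sorted(lost_run_min.items(), key=lambda kv: kv[1], reverse=True)[:2]
for machine, minutes in top_two:
    print(f"{machine}: {minutes} min lost -> one countermeasure, one owner")
```

Capping the list at two keeps the meeting at ten minutes and the focus on capacity recovery rather than dashboard review.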
Guardrails: avoid over-engineering at the start
Many shops drift into complexity early—too many categories, too many KPIs, and debates about definitions that delay action. Start with run vs idle, timestamps, and a limited reason list. Expand only when the additional detail consistently changes what you do on the floor.
Implementation reality: getting accurate run/idle without slowing the floor
Manual methods (paper logs, whiteboards, end-of-shift notes) can work when the owner or plant manager can see every pacer machine. Once you’re running multiple shifts and 20–50 machines, those methods break down for predictable reasons: people are busy, memory is imperfect, and the “why” gets simplified into generic buckets. Automation is the scalable evolution—not to create more reports, but to create consistent time-state capture without adding friction.
Define what “running” means and keep it consistent
Shops vary: some consider “running” as cycle start/end; others use spindle-on as a proxy. The key is consistency across machines so shift comparisons and job comparisons are credible. If the definition changes by machine, you’ll create noise that looks like performance variation.
Minimize operator burden: capture only what explains idle
The floor will reject any tracking that feels like extra paperwork. The practical approach is to automate run/idle capture and ask for human input only when it improves decision-making—typically when an idle exceeds a threshold or repeats in a pattern. The goal is not perfect categorization; it’s enough clarity to remove the constraint.
Reason codes: start short and tie each to ownership
A good test for a reason code is: if it becomes the top idle reason, do you know who should fix it? “Waiting on program” points to programming release and validation. “Material not ready” points to kitting and staging. “Inspection hold” points to QA flow and first-article policy. Avoid categories that are just frustration labels.
Data quality loop: review “unknown idle” weekly and fix capture
Treat “unknown” as a process problem, not an operator problem. Once a week, review the largest unknown blocks and decide: do we need a new category, a clearer definition, or a better readiness process so that wait state stops happening? This is how tracking stays operational instead of becoming surveillance.
Implementation also has a cost dimension, but it should be framed around eliminating hidden time loss before capital expenditure. If you’re evaluating what rollout might look like and how it scales across mixed fleets, you can review pricing with an eye toward whether the approach fits your operational constraints (multi-shift, legacy + modern machines) without adding corporate-IT overhead.
If you want to pressure-test whether a production tracker would expose your biggest leakage points (setup creep, queue starvation, approval delays, inspection holds), the fastest path is a focused walk-through of your machines, shifts, and current data habits. You can schedule a demo to review what “run vs idle truth” would look like in your shop and what decisions you’d be able to make faster with it.
