Pareto Analysis for Lost Spindle Time in CNC Shops
- Matt Ulepic

Pareto Analysis for Lost Spindle Time: Find the Biggest Leak per Shift
If your shop is behind, you can usually list a dozen “reasons” in two minutes. The problem is that most of those reasons aren’t equally expensive in spindle minutes—and arguing about the list burns more time than it saves. What you need is a way to take real lost-time events from the floor and turn them into a short, ranked set of causes you can act on within the week.
Pareto analysis for lost spindle time is that method: group lost cutting minutes by cause, sort them, and focus on the “vital few” that dominate a specific machine and shift—without averaging away the truth.
TL;DR — Pareto analysis for lost spindle time
Define “lost” as minutes the spindle could be cutting during scheduled production time but isn’t.
Separate planned stops (breaks, meetings, scheduled maintenance) before ranking causes.
Use time-stamped run/idle/down events; don’t rely on ERP labor or job timestamps for minute-level loss.
Keep reasons mutually exclusive and auditable; cap “Other” so it can’t become the biggest bar.
Build the Pareto by minutes (not by number of stops) for one machine/shift/window at a time.
Treat the top 1–2 bars as the only priorities until they move.
Verify countermeasures with the same data source, then rerun the Pareto.
Key takeaway: Pareto only works when you’re ranking comparable minutes: scheduled production time, captured as time-based run/idle/down events with clean reasons. When you separate planned stops and analyze by machine and shift (not plant averages), one category usually dominates—and that’s where you recover capacity before you consider adding equipment.
What counts as “lost spindle time” (and what doesn’t)
For Pareto to be enforceable, “lost spindle time” must mean one thing: minutes when the spindle could be cutting but isn’t, within scheduled production time. That boundary matters because the math is brutally honest—if you mix unlike time buckets, you’ll end up improving the wrong thing.
Start by separating planned from unplanned. Planned stops include breaks, shift meetings, scheduled maintenance, and known changeover windows you deliberately accepted. Unplanned (or avoidable) losses include waiting on material, program not ready, first-article approval delays, tool issues, unexpected inspection holds, or the operator being pulled away. You can still track planned time, but you don’t rank it alongside avoidable leakage.
Next, choose your scope and stick to it: one machine, one cell, one shift, or one part family. Pareto is a prioritization tool, not a universal scoreboard. If you change the scope midstream—say, mixing a high-mix lathe with a dedicated production mill—the “top cause” becomes a compromise instead of a decision.
Finally, be careful with ERP timestamps. ERP job starts, labor entries, and move tickets are usually too coarse and too delayed for minute-level loss analysis. They’re great for costing and routing discipline; they’re unreliable for identifying the 10–30 minute leaks that add up across a shift. If you want a ranked list you can trust, your source has to reflect actual machine behavior—run/idle/down with real timestamps.
The minimum data you need to run a Pareto on utilization leakage
You don’t need a perfect data lake to do this. You need two inputs that are consistent enough to stop debates: (1) a time-stamped state history to compute minutes, and (2) a reason attached to the minutes that makes sense in your shop.
First, capture an event timeline: run/idle/down (or run/stop) states with start/stop timestamps. That timeline is the “minutes engine.” It lets you calculate lost spindle time without guessing. If you’re still building that foundation, start with the practical mechanics of machine utilization tracking software so you have consistent time-based states before you try to prioritize causes.
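As a minimal sketch of that “minutes engine”: given a sorted timeline of state-change events, each interval’s duration belongs to the state that started it. The timestamps, states, and function below are illustrative assumptions, not a specific product’s schema.

```python
from datetime import datetime

# Hypothetical state-change events for one machine on one shift:
# each entry is (timestamp, state). Times are made up for illustration.
events = [
    (datetime(2024, 5, 6, 7, 0),  "run"),
    (datetime(2024, 5, 6, 8, 40), "idle"),
    (datetime(2024, 5, 6, 9, 10), "run"),
    (datetime(2024, 5, 6, 11, 0), "down"),
    (datetime(2024, 5, 6, 11, 45), "run"),
]
shift_end = datetime(2024, 5, 6, 15, 0)

def minutes_by_state(events, end):
    """Sum minutes spent in each state from a sorted event timeline."""
    totals = {}
    # Pair each event with the next one (or the window end) to get durations.
    for (t0, state), (t1, _) in zip(events, events[1:] + [(end, None)]):
        totals[state] = totals.get(state, 0) + (t1 - t0).total_seconds() / 60
    return totals

print(minutes_by_state(events, shift_end))
# run: 405 min, idle: 30 min, down: 45 min
```

The point of the sketch is that once states carry real timestamps, lost minutes fall out of simple subtraction—no one has to estimate anything.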
Second, attribute a reason to lost time using a small set of mutually exclusive categories. “Mutually exclusive” means one stop fits one bucket without interpretation. For CNC job shops, shop-relevant categories often include: setup/changeover, probing/offsets, waiting on program, program edits/revisions, first-article approval, tool issue (breakage/wear/offset), waiting on material/kitting, inspection/QA hold, chip/coolant housekeeping, and operator unavailable.
Third, define attribution rules so the same event doesn’t get coded three different ways depending on who is working. Decide:
Who enters the reason (operator, lead, supervisor) and when (at restart, at shift end, or prompted during the stop).
How to handle multi-cause events (pick the “gating” cause that prevented restart; don’t split hairs).
When a stop is planned vs unplanned (e.g., planned break is not “operator unavailable”).
Fourth, keep the data clean enough to trust. Cap “Other,” audit the top categories weekly, and fix ambiguous labels immediately. If your shop is already tracking stoppages, you can strengthen visibility by tightening how you do machine downtime tracking so the time and the reasons align.
How to do Pareto analysis for lost spindle time (step-by-step)
The goal is a ranked list of causes by minutes for a specific scope. Not a dashboard tour. Not an all-hands argument. A short list you can assign.
Step 1: Pick a decision window
Choose a time window that matches how fast you can act: one shift, one day, or one week. If you’re trying to address a second-shift issue, run the analysis on second shift only. If you’re stabilizing a constraint machine, keep the window tight enough that the results reflect current reality.
Step 2: Compute lost minutes from the time-stamped states
From your event history, total the minutes where the machine is not cutting during scheduled production time. Exclude planned stops (tracked separately). This prevents “operator break” from artificially becoming your biggest bar just because it’s consistently recorded.
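In code, this step is just a filter before a sum. The stop records below (duration, reason, planned flag) are hypothetical; the field shape is an assumption, not a required schema.

```python
# Each stop is (minutes, reason, planned?). Durations and reasons are
# made up for illustration.
stops = [
    (30, "break", True),                    # planned: tracked, not ranked
    (25, "waiting on material", False),
    (15, "tool issue", False),
    (20, "scheduled PM", True),             # planned: tracked, not ranked
    (40, "first-article approval", False),
]

# Only unplanned stops count as lost spindle time for the Pareto.
lost = sum(minutes for minutes, _, planned in stops if not planned)
print(lost)  # 80
```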
Step 3: Group by reason, sum minutes, sort descending
Create a table: reason category → total lost minutes in the window. Sort the table from largest to smallest. This is the moment the conversation changes from “we have lots of problems” to “this is the one dominating this shift.”
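A minimal version of that grouping, with hypothetical stop records:

```python
from collections import defaultdict

# (reason, minutes) for unplanned stops in the decision window — illustrative.
stops = [
    ("tool issue", 15), ("waiting on material", 25),
    ("tool issue", 35), ("first-article approval", 40),
    ("waiting on material", 10),
]

# Sum minutes per reason, then sort largest-first.
totals = defaultdict(int)
for reason, minutes in stops:
    totals[reason] += minutes
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(ranked)
# [('tool issue', 50), ('first-article approval', 40), ('waiting on material', 35)]
```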
Step 4: Add cumulative minutes and cumulative percent
Compute a running total and cumulative percent. Don’t get hung up on “80/20” as a law; in job shops you’ll often see a 70–90% capture point where a few categories account for most of the lost time. Your output is a ranked list (bars) plus a cumulative curve—simple enough to review in 10 minutes.
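The running total and cumulative percent are a few lines on top of the ranked list. The three categories and their minutes below are assumed for illustration:

```python
# Ranked (reason, minutes) pairs — hypothetical values.
ranked = [("waiting on program revisions", 420),
          ("first-article approval / QC release", 310),
          ("setup / changeover", 240)]

total = sum(m for _, m in ranked)
running = 0
rows = []
for reason, minutes in ranked:
    running += minutes
    rows.append((reason, minutes, round(100 * running / total)))

for reason, minutes, pct in rows:
    print(f"{reason}: {minutes} min, cumulative {pct}%")
# Cumulative percents: 43%, 75%, 100%
```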
Step 5: Repeat by machine and shift (avoid averages)
If first shift says “setup takes too long” while second shift reports “machine issues,” do not average them together. Run two Paretos. It’s common to find different top bars by shift—meaning different countermeasures, different ownership, and different verification. If you blend them, both sides stay “right,” and nothing changes.
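Keeping the shifts separate is one extra grouping key, not a second analysis pipeline. The machine name, shift numbers, and minutes here are invented for illustration:

```python
from collections import defaultdict

# (machine, shift, reason, minutes) — hypothetical unplanned stops.
stops = [
    ("LT-2", 1, "setup/changeover", 60),
    ("LT-2", 1, "setup/changeover", 45),
    ("LT-2", 1, "tool issue", 20),
    ("LT-2", 2, "tool issue", 50),
    ("LT-2", 2, "tool issue", 40),
    ("LT-2", 2, "setup/changeover", 15),
]

# One Pareto per (machine, shift) — never blended.
paretos = defaultdict(lambda: defaultdict(int))
for machine, shift, reason, minutes in stops:
    paretos[(machine, shift)][reason] += minutes

for key, totals in sorted(paretos.items()):
    top = max(totals.items(), key=lambda kv: kv[1])
    print(key, "top bar:", top)
# Shift 1's top bar is setup (105 min); shift 2's is tool issue (90 min).
```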
Implementation note: once you have consistent event minutes and reasons, the rest is arithmetic. The real challenge is building a reliable input stream. That’s the layer covered by machine monitoring systems—not to “add dashboards,” but to give you time-based truth you can prioritize against.
A simple 80/20 chart is the fastest way to end shop floor arguments. But to build an accurate chart, your software needs to capture the right inputs automatically. Learn exactly what goes into this process in our breakdown of machine downtime tracking and Pareto analysis data.
Worked example: one machine, one week—finding the biggest leak fast
Below is a simple example for one high-mix turning center over one week (one shift). Assume you’ve already excluded planned stops (breaks, scheduled PM) and you’re only summing lost minutes during scheduled production time.
| Reason category | Lost minutes | Cumulative % |
| --- | --- | --- |
| Waiting on program revisions | 420 | 30% |
| First-article approval / QC release | 310 | 52% |
| Setup / changeover | 240 | 69% |
| Tool issue (offset/tool life/breakage) | 190 | 83% |
| Waiting on material / kitting | 140 | 93% |
| Probing / offsets | 60 | 97% |
| Operator unavailable (unplanned) | 40 | 100% |
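As a sanity check, the cumulative-percent column can be recomputed from the lost-minute totals alone:

```python
# Lost minutes from the worked example, largest first (sums to 1400).
minutes = [420, 310, 240, 190, 140, 60, 40]

total = sum(minutes)
running = 0
cum_pct = []
for m in minutes:
    running += m
    cum_pct.append(round(100 * running / total))

print(cum_pct)  # [30, 52, 69, 83, 93, 97, 100]
```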
Interpretation: this machine’s lost time feels like random short stops, but the top two categories dominate the week: waiting on program revisions and first-article approval. Operationally, that means the leak isn’t primarily “the operator” or “the machine.” It’s the release/approval workflow: programming changes arriving late, and QC sign-off not synchronized to when the machine is ready to run.
Decision rule: attack the top 1–2 categories before touching the rest. In this example, a setup-reduction event might be valuable later, but it’s not where the next week’s capacity is hiding.
Translate the top bar into a specific process question: “For this turning center, in which step does a program revision block restart—post, prove-out, tool list, or approval—and who must clear it?” That question is narrow enough to assign and verify. If you need help interpreting stop patterns at speed (especially across multiple shifts), an assistive layer like an AI Production Assistant can help summarize recurring narratives from the same underlying time-and-reason events—without turning the exercise into a long meeting.
Common traps that make the Pareto lie (and how to prevent them)
Pareto doesn’t fail because the math is hard. It fails because the categories and boundaries allow politics and ambiguity back into the result. These are the traps that typically break credibility.
Trap 1: “Other” becomes the biggest bar
If “Other” wins, you didn’t discover a process problem—you discovered a measurement problem. Fix it by capping “Other” (for example, requiring a note after a threshold) and by converting frequent “Other” notes into a real category within a week or two.
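The cap can be enforced mechanically in whatever tooling builds the chart. The 10% threshold below is an illustrative choice, not a standard:

```python
def other_exceeds_cap(totals, cap_pct=10):
    """Return True when 'Other' exceeds cap_pct of total lost minutes."""
    total = sum(totals.values())
    other = totals.get("Other", 0)
    return other > total * cap_pct / 100

# 60 of 410 minutes (~15%) is over a 10% cap — time to mine the notes
# behind "Other" and promote a real category.
print(other_exceeds_cap({"setup": 200, "tool issue": 150, "Other": 60}))  # True
print(other_exceeds_cap({"setup": 200, "Other": 10}))                     # False
```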
Trap 2: Mixing planned and unplanned time
A common failure mode: a cell’s biggest “loss” shows up as “Operator break.” That’s not a lever; it’s a schedule decision. Separate planned stops first. In many shops, once planned time is reclassified, the true top loss shifts to something actionable like “waiting on material/kitting.” That’s a very different countermeasure (kit completeness, staging rules, material presentation), and it’s exactly why boundaries matter.
Trap 3: Counting events instead of minutes
Frequent short stops get attention, but a few long delays can consume more spindle minutes. Always rank by time. A single 90-minute wait for first-article approval is more expensive than six quick offset checks, even if the offset checks are more annoying.
Trap 4: Combining machines or shifts “for simplicity”
When second shift says “machine issues” but first shift insists “setup takes too long,” a blended Pareto often produces a diplomatic tie. Run them separately. If second shift’s top bar is tool issues and first shift’s is setup, you now have two specific owners and two different fixes instead of one endless debate.
Trap 5: Reason codes are too granular—or too vague
Over-granular codes create noise (“Tool #7 insert chip,” “Tool #7 insert break,” “Tool #7 offset tweak”), while vague codes create arguments (“program issue,” “quality issue”). The sweet spot is a small list that maps to a countermeasure type. You can add detail in notes, but your Pareto categories should point to action.
Turning the top Pareto bar into a countermeasure (without a big project)
Pareto is only valuable if the top bar becomes a small, testable change—not a multi-month initiative that never closes. Use this execution loop to move fast and keep the data credible.
1) Write a narrow problem statement in minutes
Example: “On second shift, Machine LT-2 lost the most time to ‘tool issue’ this week, concentrated in the first half of the shift.” Or for the high-mix turning center: “Lost time is dominated by program revisions and first-article approval holds.” Tie it to where it occurs (machine/shift) and what category it is (the bar you’re targeting).
2) Pick one countermeasure type
Match the fix to the bar:
Waiting on material/kitting → kitting checklist, staging rules, “kit complete” gate before release.
Waiting on program revisions → program release gate, revision cutoffs, prove-out ownership, handoff timing.
First-article approval → approval SLA, pre-scheduled QA check windows, clearer “ready for FA” signal.
Setup/changeover → setup checklist, tool staging, standardized offsets/probing routine.
This is where shift-specific Paretos prevent the wrong project. If first shift’s top loss is setup, standard work might be the lever there. If second shift’s top loss is “machine issues” that are actually tool-related, then tool staging, offset discipline, and escalation rules may be the faster win.
3) Assign ownership and a short test window
Pick an owner and define a test window like the next five shifts. Set a measurable target tied to the same category (minutes reduced in that bar) rather than a vague outcome like “run better.” This is also where you can run a mid-process diagnostic: if your “operator break” bar disappears after you separated planned stops and now “waiting on material/kitting” dominates, your owner and test should move accordingly.
4) Verify using the same data source—and watch for migration
After the test window, rerun the Pareto for the same machine/shift/time window. If the top category drops, check whether minutes moved into a neighboring bucket (for example, “waiting on program” turning into “setup” because the boundary is fuzzy). That’s not failure; it’s a signal to tighten definitions and keep improving the workflow step that truly gates restart.
5) Rerun Pareto—don’t pre-plan five initiatives
Once the top cause moves, rerun the analysis and let the next “vital few” reveal itself. This is how you recover capacity before considering capital expense: eliminate hidden time loss inside the shift first, then decide whether you still need another machine.
If you’re thinking about implementing time-based tracking to support this (especially across a mixed fleet), keep the rollout evaluation grounded: how quickly you can capture run/idle/down events, how easy reason entry is on the floor, and how much effort it takes to keep categories clean. For practical rollout and cost framing without digging into numbers here, review the implementation context on our pricing page.
If you want a quick diagnostic on your own data—e.g., “What’s the single biggest utilization leak on second shift for our constraint machines?”—we can walk through a Pareto-ready setup and what your top bars would likely look like once planned time and categories are cleaned up. Schedule a demo.









