MTConnect vs FOCAS for Monitoring Integration
- Matt Ulepic
- Feb 26
- 9 min read

MTConnect vs FOCAS for Machine Monitoring Integration: How to Choose in a Mixed CNC Shop
Most monitoring rollouts don’t stall because the dashboard is “bad.” They stall because the integration path chosen on day one can’t scale across the controller mix you actually have—or it produces machine states that look plausible but aren’t trustworthy across shifts. That’s the real decision behind “MTConnect vs FANUC FOCAS.”
If you’re running 10–50 machines across multiple shifts, you’re not buying a protocol. You’re trying to recover hidden capacity by getting consistent run/idle/down behavior, credible downtime reasons, and part-count logic you can defend when the night shift looks “worse” than day shift.
TL;DR — MTConnect vs FOCAS for monitoring integration
MTConnect favors cross-brand consistency; FOCAS favors FANUC-specific depth (options/model dependent).
“Real-time” is not the goal; consistent state definitions across machines and shifts are.
With MTConnect, agent and adapter quality is usually the swing factor.
FOCAS commonly hinges on network access, licensing/options, and control generation compatibility.
Mixed fleets often land on hybrid: standardize core KPIs, enrich FANUCs selectively.
Validate on 3–5 representative machines per control type, including both shifts, before scaling.
If ERP and manual logs disagree with machine behavior, fix the signal mapping before debating reports.
Key takeaway: The integration choice is really about closing the gap between what ERP/manual reporting says happened and what the machines actually did—consistently across brands and shifts. Pick the simplest path that gets trustworthy run/idle/down and part logic first, then deepen with richer signals where they reduce investigation time. The goal is to eliminate utilization leakage and shift-to-shift disagreements before you consider adding machines or capacity.
What you’re really choosing: data consistency vs controller depth
In shop terms, MTConnect vs FOCAS isn’t a philosophical standards debate. It’s a decision about time-to-first-data (how fast you see machine states), time-to-trust (how fast you believe them), and time-to-scale (how much per-machine custom work is required as you expand across the fleet).
This matters because “real-time” by itself doesn’t solve the operational problem. If one machine reports “idle” when the door is open, another reports “down” on feed hold, and a third reports “ready,” your utilization view becomes a debate—especially across shifts. That’s where the ERP vs actual behavior gap shows up: manual logs and ERP entries may say “running,” while the control is sitting in a non-cutting state that never gets categorized consistently.
The central tradeoff is straightforward:
FOCAS can expose deeper FANUC-specific context (depending on model/options), which can help explain “why” a machine is stopped.
MTConnect can act as a normalization layer across brands, making it easier to drive a consistent utilization and downtime model across a mixed shop—when the adapters/implementations are solid.
In practice, both paths can work, and hybrid strategies are common—especially in job shops where the equipment mix changes over time and no one wants a monitoring project that requires constant babysitting.
MTConnect for monitoring: where it fits (and where it surprises teams)
MTConnect is best understood as a standard data model and transport for manufacturing equipment observations. In a typical setup, an MTConnect agent publishes data in a standardized format. Often, an adapter sits between the machine/controller and that agent to translate whatever the control can provide into MTConnect’s vocabulary.
The strength is obvious for a 10–50 machine job shop: MTConnect is built for the idea of cross-vendor normalization. For many monitoring outcomes—run/idle/down behavior, availability, and (where supported) part-related signals—it can help you build one consistent utilization view across multiple controller brands without inventing a separate reporting model per machine.
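For a concrete sense of what the agent side looks like, here is a minimal sketch that polls an agent's /current endpoint and pulls the state-related observations most monitoring models start from. The agent address and the choice of data items are assumptions; what an agent actually exposes depends on the adapter behind it.

```python
# Minimal sketch: poll an MTConnect agent's /current endpoint and collect the
# state-related observations (Execution, Availability, ControllerMode).
# The agent URL below is an assumption; substitute your agent's host/port.
import requests
import xml.etree.ElementTree as ET

AGENT_URL = "http://192.168.1.50:5000/current"  # hypothetical agent address

def read_current_states(url: str = AGENT_URL) -> list:
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    root = ET.fromstring(resp.text)

    observations = []
    # Match on the local tag name so this works regardless of the
    # MTConnectStreams schema version / XML namespace in the response.
    for elem in root.iter():
        tag = elem.tag.split("}")[-1]
        if tag in ("Execution", "Availability", "ControllerMode"):
            observations.append({
                "item": tag,
                "value": (elem.text or "").strip(),   # e.g. ACTIVE, READY, UNAVAILABLE
                "timestamp": elem.get("timestamp"),
                "dataItemId": elem.get("dataItemId"),
            })
    return observations

if __name__ == "__main__":
    for obs in read_current_states():
        print(obs)
```

The values themselves (ACTIVE, READY, FEED_HOLD, and so on) come from the MTConnect vocabulary; how each control's behavior gets translated into those values is the adapter's job, which is exactly where the surprises below come from.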
The surprises are where teams lose time:
Adapter availability and quality varies. Some controls have mature implementations; others rely on custom adapters that can drift over time.
Same word, different semantics. Two machines can both report something that maps to “idle,” but one may be waiting on an operator while the other is paused by optional stop logic.
Part counts are rarely universal. Whether you can count cycles, parts, or bar pulls depends on the control and on how the adapter is written.
Operationally, MTConnect tends to be the faster fleet-wide path when the goal is consistent utilization and structured machine downtime tracking—as long as you treat “state mapping” as a first-class task, not an afterthought.
FOCAS for monitoring: when FANUC-specific access is the advantage
FANUC FOCAS is FANUC’s API layer for accessing certain CNC control data. For compatible controls and configurations, it can provide more direct access to signals that help explain operational behavior—especially when you need more than “running/not running.”
The advantage is depth (still controller/option dependent). Depending on the model and enabled options, shops may be able to pull things like alarms, modes, override context, or program-related details that make it easier to classify stoppages and reduce back-and-forth investigation between Ops, programming, and maintenance.
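To make the "depth" point concrete, here is a heavily hedged sketch of reading controller status over FOCAS from Python via ctypes. The function names (cnc_allclibhndl3, cnc_statinfo, cnc_freelibhndl) are standard FOCAS library calls, but the status structure layout and the meaning of its numeric fields vary by control series and options, so treat the details below as assumptions to verify against the FOCAS documentation for your controls.

```python
# Sketch only: reading FANUC status via the FOCAS library (Fwlib32.dll on
# Windows) using ctypes. Requires a control/option configuration that exposes
# FOCAS over Ethernet (port 8193 is the common default).
# The ODBST field layout below is an assumption; it differs between control
# generations, so confirm it against your FOCAS manual before trusting values.
import ctypes

fwlib = ctypes.windll.LoadLibrary("Fwlib32.dll")  # path/name is environment-specific

class ODBST(ctypes.Structure):
    _fields_ = [
        ("hdck", ctypes.c_short), ("tmmode", ctypes.c_short),
        ("aut", ctypes.c_short), ("run", ctypes.c_short),
        ("motion", ctypes.c_short), ("mstb", ctypes.c_short),
        ("emergency", ctypes.c_short), ("alarm", ctypes.c_short),
        ("edit", ctypes.c_short),
    ]

def read_status(ip: str, port: int = 8193, timeout_s: int = 10) -> dict:
    handle = ctypes.c_ushort(0)
    ret = fwlib.cnc_allclibhndl3(ip.encode(), port, timeout_s, ctypes.byref(handle))
    if ret != 0:  # 0 == EW_OK
        raise RuntimeError(f"FOCAS connect failed (return code {ret})")
    try:
        stat = ODBST()
        ret = fwlib.cnc_statinfo(handle, ctypes.byref(stat))
        if ret != 0:
            raise RuntimeError(f"cnc_statinfo failed (return code {ret})")
        # Numeric meanings (mode, run state) are series-dependent; map them
        # explicitly in your state model rather than guessing.
        return {"mode": stat.aut, "run": stat.run,
                "alarm": stat.alarm, "emergency": stat.emergency}
    finally:
        fwlib.cnc_freelibhndl(handle)
```

Even this small example shows why the next three points matter: nothing in it works until the library, the option set, and the network path to the control are all in place.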
The implementation reality is where many evaluations get “real” quickly:
Licensing/options can be gating factors. What you can access may require specific FANUC options or configurations.
Model/version compatibility matters. Older 0i/16i-era controls can be workable, but connectivity details are often less straightforward than newer installs.
Network access is not “free.” IT security rules, segmentation, and controller exposure policies can become the longest pole in the tent.
Operational implication: FOCAS can be an excellent tool for deeper diagnostics on FANUC machines, but it doesn’t solve the mixed-fleet requirement by itself. If you also have Haas, Okuma, and other controls, you still need a strategy for standardizing your core KPIs and definitions across the non-FANUC portion of the shop.
Side-by-side decision criteria for a 10–50 machine shop
Use the criteria below to make a decision you can defend operationally—not just technically. The goal is to reach trustworthy utilization and downtime classifications quickly, then expand coverage without reworking every machine.
Fleet composition
If you’re mostly FANUC, FOCAS can be a strong foundation—assuming compatibility and network access are feasible. If you’re mixed-brand, you’ll usually feel immediate pain trying to build one utilization view using a FANUC-only integration path.
Scenario: 18 machines total—10 FANUC, 5 Haas, 3 Okuma—two shifts. Ops wants one utilization view without rewriting integration per machine. In this situation, an MTConnect-centric approach often helps you normalize the basic run/idle/down model across all 18, while still leaving room to pull deeper FANUC context where it actually changes decisions.
Data requirements
Be explicit about what decisions you want the data to support:
If you’re primarily trying to expose utilization leakage (run/idle/down patterns, recurring stoppages, shift differences), prioritize consistent state mapping and downtime capture. This is where machine utilization tracking software succeeds or fails based on signal consistency, not on how many fields you collect.
If you need deeper diagnostics to reduce investigation time on specific machines, FANUC-specific signals available through FOCAS may be worth the effort—especially for alarms/modes/overrides where they clarify “why it stopped.”
Also acknowledge the manual baseline: operator logs, whiteboards, and ERP time stamps can work when a leader can “see” the pacer machines. But at 20–50 machines across shifts, manual inputs drift, categories get inconsistently applied, and the ERP ends up reflecting what was planned or what was reported—rather than what the controls actually did minute-to-minute.
IT/security constraints
In many mid-market shops, the limiting factor isn’t the protocol—it’s permission and exposure. Segmented networks, restricted ports, and policies around controller access can dictate whether FOCAS connectivity is quick or turns into a multi-week dependency chain.
Scenario: legacy 0i/16i-era FANUC controls, plus IT security restrictions. Even if FOCAS is technically possible, network access/licensing/options may not be “day one.” A practical fallback is to aim for basic states fast (to start closing the ERP vs reality gap), then deepen later once connectivity hurdles are cleared. The key is not to let “perfect integration” delay operational visibility.
Maintainability and scalability
Ask who will own this after the pilot. If the solution requires frequent per-machine tweaks, you’ll eventually stop trusting the data and the screens become wall art. MTConnect implementations can be very maintainable when they rely on stable agents/adapters, but “custom adapter per oddball machine” adds burden. FOCAS can be stable on supported FANUCs, but it centralizes risk around option dependencies and network access controls.
A useful litmus test: if you add two more machines next quarter, can you extend the state model in hours/days—or does it become a special integration project each time?
If you need broader context on what a monitoring stack typically includes (without drifting into protocol trivia), see machine monitoring systems.
Mixed-environment architectures that actually work (including hybrid)
Mixed fleets are normal in job shops. The integration architecture should reflect that reality and reduce one-off work as you scale across shifts, cells, and new equipment.
Pattern A: MTConnect-first for normalization, then enrich FANUC via FOCAS
Start by standardizing the core signals you need for utilization and downtime: run/idle/down state, cycle start/stop where available, and clearly defined part-count logic (even if that logic differs by machine family). Then, on FANUCs where deeper context will reduce “why did it stop?” time, add FOCAS-derived fields selectively (alarms/modes/overrides as feasible).
Mixed-fleet example: for the 18-machine shop (10 FANUC, 5 Haas, 3 Okuma), you can often standardize a single utilization view across all machines (normalized states), while using FANUC-only depth to break ties on ambiguous stoppages on the 10 FANUCs—without forcing the Haas/Okuma machines into a FANUC-shaped model.
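One way to keep Pattern A maintainable is to make the per-machine wiring declarative: every machine gets a core state source feeding the shared run/idle/down model, and selected FANUCs get an enrichment source. The structure below is a hypothetical sketch (machine names, addresses, and fields are placeholders), not any particular product's configuration format.

```python
# Hypothetical hybrid-fleet configuration sketch. Every machine has a core
# source for the shared state model; selected FANUCs add FOCAS-derived
# context used only for downtime classification, not for the core KPIs.
FLEET = {
    "HAAS-VF2-01":  {"core": {"type": "mtconnect", "agent": "http://10.0.10.21:5000"}},
    "OKUMA-LB3000": {"core": {"type": "mtconnect", "agent": "http://10.0.10.34:5000"}},
    "FANUC-RD-07": {
        "core":   {"type": "mtconnect", "agent": "http://10.0.10.42:5000"},
        "enrich": {"type": "focas", "ip": "10.0.10.142", "port": 8193,
                   "fields": ["alarm", "mode", "feed_override"]},
    },
}
```

Adding two more machines next quarter then means adding entries, not running a new integration project each time.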
Pattern B: FOCAS-first for FANUC-heavy shops, bridge non-FANUC into the same KPI model
If 80–90% of your machines are FANUC and you know you need richer diagnostics, you may build around FOCAS for the FANUC fleet. But you still need a normalization plan for the remaining machines (often via MTConnect or equivalent adapter approaches) so your shop-wide KPIs don’t split into “FANUC truth” and “everything else.”
Normalize the state model (this is where trust is won)
Regardless of protocol, define what your shop means by run/idle/down and map each controller’s available signals into that model. This is how you avoid shift-to-shift arguments and “the data is wrong” outcomes. It also sets you up to use the data as a capacity recovery tool: you can’t fix idle patterns you can’t classify consistently.
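As an illustration of what "define the model, then map every controller into it" looks like, here is a sketch of shop-level classification rules. The MTConnect Execution values used are standard vocabulary; the FOCAS-derived fields and their numeric meanings are assumptions carried over from the earlier sketch, and the bucketing choices (for example, whether feed hold counts as idle) are exactly the decisions your shop has to make explicitly.

```python
# Sketch of a shop-level state model: every controller's raw observations map
# into one run/idle/down vocabulary. The rules here are illustrative; the point
# is that the shop writes them down and validates them machine by machine.
from enum import Enum

class ShopState(Enum):
    RUN = "run"
    IDLE = "idle"
    DOWN = "down"

def from_mtconnect(execution: str, availability: str) -> ShopState:
    if availability == "UNAVAILABLE":
        return ShopState.DOWN
    if execution == "ACTIVE":
        return ShopState.RUN
    if execution in ("READY", "STOPPED", "INTERRUPTED", "FEED_HOLD", "OPTIONAL_STOP"):
        # Decide explicitly: does feed hold or door-open count as idle or down here?
        return ShopState.IDLE
    return ShopState.DOWN

def from_focas(status: dict) -> ShopState:
    # `status` is a dict of FOCAS-derived fields (see the earlier sketch);
    # numeric meanings vary by control series, so these checks are assumptions.
    if status.get("emergency", 0) != 0 or status.get("alarm", 0) != 0:
        return ShopState.DOWN
    if status.get("run") == 3:   # commonly "program executing" on newer controls
        return ShopState.RUN
    return ShopState.IDLE
```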
When you start layering in interpretation and workflows, tools like an AI Production Assistant can be useful for turning raw events into operationally readable explanations—provided the underlying state mapping rules are defined and validated first.
Implementation sequencing: utilization first, depth second
The fastest path to value is usually: (1) establish trustworthy utilization and downtime categorization, (2) expose leakage by shift and by pacer machine, and only then (3) add deeper diagnostic fields where they reduce manual investigation. This sequencing also helps you avoid premature capital expenditure—recover the hidden time loss before you decide you “need” another machine.
Common failure modes (and how to prevent bad data from day one)
The biggest risk in MTConnect/FOCAS decisions is not choosing the “wrong” standard—it’s deploying something that looks authoritative but encodes the wrong logic. Here are the failure modes that hit multi-shift shops hardest.
False confidence from misclassified states
A live dashboard can still be wrong. Optional stop, feed hold, door open, M00/M01 behavior, and “waiting on operator” states can get lumped into whatever bucket is easiest. That’s how you end up chasing the wrong root cause.
Scenario: night shift shows higher “idle” time. Investigation reveals the issue isn’t discipline—it’s interpretation. On some machines, feed hold and door open are treated as “idle”; on others, similar conditions are treated as “down” or “not running.” Integration choice matters because it affects what signals you can observe and how cleanly you can separate these conditions. The fix is to standardize the classification rules and validate them against operator reality on both shifts.
Inconsistent part count logic
Part counts can mean: completed cycles, good parts, bar pulls, pallet cycles, or “one program end.” If you don’t define it explicitly, you’ll get arguments between the floor, programming, and the ERP. Even with a solid protocol connection, the measurement definition must be written down per process family.
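Writing the definition down can be as simple as a per-process-family table that the floor, programming, and whoever owns the ERP all sign off on. The entries below are hypothetical; the structure matters less than the agreement.

```python
# Hypothetical per-process-family part-count definitions. What counts as
# "one part" is declared per family instead of assumed per protocol.
PART_COUNT_RULES = {
    "bar-fed lathes":     {"counted_event": "part-off / bar pull signal", "parts_per_event": 1},
    "2-up mill fixtures": {"counted_event": "program end (M30)",          "parts_per_event": 2},
    "pallet-pool HMCs":   {"counted_event": "pallet cycle complete",      "parts_per_event": 4},
}
```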
Alarm overload without operational mapping
Collecting alarms is easy; turning them into action is hard. Alarms need to be mapped to downtime categories and workflows (who responds, what counts as setup vs waiting vs maintenance). Otherwise you’ll generate noise and still rely on manual explanations at the end of the shift.
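A minimal sketch of that mapping: alarm categories (or specific codes) roll up into the downtime categories your workflow already uses, and anything unmapped lands in a bucket someone is responsible for reviewing. The keys and categories below are placeholders; how alarm types are reported depends on your integration.

```python
# Placeholder mapping from reported alarm categories to downtime categories.
# Unmapped alarms are routed to a review bucket rather than silently dropped,
# so the end-of-shift explanation does not fall back to memory.
ALARM_TO_DOWNTIME = {
    "servo":      "maintenance",
    "overtravel": "setup",
    "tool_break": "tooling",
}

def classify_alarm(alarm_category: str) -> str:
    return ALARM_TO_DOWNTIME.get(alarm_category, "needs_review")
```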
Pilot checklist (keep it enforceable)
Before you roll out to the full fleet, validate on 3–5 representative machines per controller type (including older controls). Do the check on both shifts and compare the system’s state changes to what operators and supervisors say is happening.
Pick one pacer machine, one “problem child,” and one typical machine for each control family.
Force-test ambiguous states (optional stop, feed hold, door open) and confirm how they classify (a small sketch of this check follows the list).
Document part-count definition per machine/process and confirm it matches production reporting needs.
Confirm the downtime workflow: who assigns reasons, when, and how much detail is required.
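To keep those checks enforceable after the pilot, the classification rules can be exercised directly: force each ambiguous condition at the machine, record what the integration reports, and confirm it lands in the agreed bucket. A sketch, assuming the from_mtconnect mapping from the earlier normalization example:

```python
# Sketch: turn "force-test ambiguous states" into checks you can re-run any
# time an adapter or mapping rule changes. Assumes from_mtconnect()/ShopState
# from the earlier state-model sketch; expected buckets are your shop's call.
AMBIGUOUS_CASES = [
    # (condition forced at the machine, Execution reported, Availability, expected bucket)
    ("optional stop (M01)", "OPTIONAL_STOP", "AVAILABLE",   ShopState.IDLE),
    ("feed hold mid-cycle", "FEED_HOLD",     "AVAILABLE",   ShopState.IDLE),
    ("controller offline",  "",              "UNAVAILABLE", ShopState.DOWN),
]

def run_force_tests() -> None:
    for condition, execution, availability, expected in AMBIGUOUS_CASES:
        got = from_mtconnect(execution, availability)
        verdict = "OK  " if got is expected else "FAIL"
        print(f"{verdict} {condition}: expected {expected.value}, got {got.value}")
```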
Practical recommendation: choose the simplest path to trustworthy utilization, then deepen
If you’re making this decision as an owner or ops leader, the practical recommendation is to optimize for trustworthy utilization and downtime visibility first. Once you can see consistent run/idle/down behavior across shifts—and you’re confident the definitions match how the shop operates—then add deeper signals where they reduce manual troubleshooting.
Guidance that holds up in most 10–50 machine environments:
Mixed-brand, utilization-focused: prioritize standardization and consistent state mapping (often MTConnect-centric), prove you can compare machines and shifts without arguments, then expand coverage.
FANUC-heavy with deeper diagnostic needs: use FOCAS where it reduces time spent figuring out stoppages, but don’t let FANUC depth delay shop-wide visibility.
Hybrid is often the “grown-up” answer: standardize the core KPIs across the fleet, then selectively enrich FANUC machines with deeper context.
What to document before (and during) your pilot so you don’t lose weeks later:
Controller models and generations (including legacy FANUCs), plus any known options/licensing constraints.
Network constraints: segmentation, allowed ports, who approves access, and the expected timeline.
Required signals (core states first), plus the written state mapping rules your shop will use.
Downtime categories and the operator workflow for assigning reasons (minimum viable detail).
Implementation also has cost and effort implications (hardware, connectivity work, and ongoing support). If you’re aligning stakeholders internally, it can help to review packaging expectations without forcing a pricing conversation—start with what needs to be connected, how many machines, and how fast you want to scale. For planning context, see pricing.
If you want to pressure-test which path (MTConnect, FOCAS, or hybrid) fits your controller mix and your shift-level visibility goals, the fastest next step is a short diagnostic conversation focused on models, options, and the state definitions you need to standardize. You can schedule a demo and walk through a practical pilot plan without committing to a long integration project upfront.









