How FOCAS Connects CNC Machines to Monitoring Systems
- Matt Ulepic
- 3 hours ago
- 9 min read

FANUC “connectivity” projects usually don’t fail because Ethernet is hard. They fail because teams confuse data access with operationally trustworthy monitoring data. You can have a successful FOCAS connection and still end up with dashboards that don’t match what supervisors see on the floor—especially across multiple shifts and a mixed fleet of controls.
This article stays implementation-first: where FOCAS sits in the monitoring stack, what has to be true on the control and network, how collectors typically read data, and the part that determines whether you recover capacity—how raw controller signals get mapped into run/idle/alarm without hiding time loss.
TL;DR — How FOCAS Connects CNC Machines to Machine Monitoring Systems
- FOCAS is an interface layer that lets external software read FANUC controller data; it isn’t a monitoring system by itself.
- Connectivity requires more than “we can ping it”: control series/options, network paths, and permissions determine what’s readable.
- Most monitoring uses a collector that opens sessions and polls a small set of status/alarm points on a steady cadence.
- Update rate is a tradeoff: prioritize state and alarms over high-frequency motion data for utilization visibility.
- Utilization accuracy depends on state modeling (e.g., feed hold vs cycle running) more than the physical connection.
- Mixed FANUC generations won’t expose identical data; define a minimum viable dataset and normalize consistently.
- Acceptance tests should focus on transitions, alarms, disconnect handling, and shift-level reports—not screenshots.
Key takeaway: FOCAS can prove that data is accessible from a FANUC control, but it doesn’t guarantee that “run time” means productive machining. The capacity you recover comes from interpreting controller signals into consistent run/idle/alarm states, then tying those durations to shifts so idle patterns and misclassification don’t get averaged away.
What FOCAS is in a monitoring architecture (and what it isn’t)
FOCAS (FANUC Open CNC API Specifications) is FANUC’s interface for reading CNC controller information from an external application. In practical terms, it’s the bridge that lets software outside the control ask questions like: “Is the machine in alarm?”, “Is it in AUTO?”, “Is cycle running or stopped?”, and “What overrides are active?”—subject to the control model and enabled options.
In a typical machine monitoring architecture, FOCAS sits between the control and the monitoring platform:
CNC control ↔ FOCAS API ↔ collector/agent ↔ monitoring system storage/UI
What FOCAS enables is near-real-time readout of monitoring-relevant signals: status, alarms/codes, certain modes, override values, and some program context (where permitted). If you’re still clarifying what a monitoring platform typically captures and how it’s used operationally, start with machine monitoring systems—then come back here to scope the FANUC connection correctly.
What FOCAS does not solve is the hard operational layer: defining productive time, applying shift calendars, capturing downtime reasons, or normalizing “run/idle/stop” consistently across a mixed fleet. Those decisions sit above the connection, and they’re exactly where utilization leakage appears when the mapping is vague.
Connection prerequisites: control compatibility, options, and access
Before you touch the network, validate the prerequisites that determine whether a FOCAS pilot becomes a real rollout (especially in 10–50 machine shops where “one-off” integrations don’t scale).
Typical prerequisites
Most FOCAS monitoring integrations assume:
- Ethernet availability on the CNC (physical port and network configuration).
- FOCAS-capable control series/generation (varies by control family and vintage).
- Enabled communication options/licensing where applicable (some capabilities require options to be turned on).
Access requirements (and why ping isn’t proof)
Access is usually a combination of IP addressing, port reachability, and whatever credentials/permissions model applies in your environment. In segmented OT networks, it’s common to have a machine that responds on the network but still can’t be read by the collector due to firewall rules, NAT boundaries, or blocked services.
The practical IT/OT lesson: “We can ping it” only confirms Layer 3 reachability. It doesn’t confirm the collector can establish sessions, maintain them reliably across shifts, or read the specific endpoints you need for run/idle/alarm logic.
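The ping-versus-session gap is easy to demonstrate. The sketch below is an assumption for illustration, not part of any FOCAS SDK: it checks whether a full TCP session can actually be opened on the port the collector will use. FOCAS over Ethernet commonly uses TCP 8193, but confirm the port configured on your controls.

```python
import socket

def can_open_tcp(host: str, port: int, timeout: float = 2.0) -> bool:
    """True only if a full TCP session can be established.

    A machine can answer ICMP ping (Layer 3) while a firewall, NAT
    boundary, or disabled service still blocks this specific port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage -- the IP and port are placeholders; verify the
# port against your control's network settings:
# reachable = can_open_tcp("192.168.10.51", 8193)
```

A machine that passes ping but fails this check is exactly the “responds on the network but can’t be read” case described above.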
Validation steps that prevent stalled pilots
Keep validation focused on what operations will actually use:
- Record each machine’s control model, generation, and (if available) software/version context.
- Confirm network topology: where the collector will live (server, edge PC, VM), and the path to each CNC.
- List the minimum data points required: alarms, cycle state, mode, and any part/cycle signals you plan to use.
- Test read access on 2–3 representative machines (newer and older) before you assume the rest will behave the same.
How data is extracted: polling, session handling, and data points commonly read
Most monitoring deployments use a collector (agent) that establishes a session to each FANUC control and polls a defined set of FOCAS endpoints on a cadence. The goal isn’t to harvest every possible controller variable—it’s to capture enough signal to produce trustworthy state durations and alarm visibility across many machines.
Polling cadence: speed vs load
Polling is a balancing act between responsiveness and overhead. For utilization monitoring, you typically prioritize state/alarm signals over high-frequency motion or servo data. Practical cadences are often in the “every few seconds” range, adjusted for network constraints, machine count, and how fine-grained you need state transitions to appear. The important part: pick a cadence that scales when you expand from a pilot to dozens of connections.
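As a back-of-the-envelope check before scaling, it helps to compute the steady-state request load and stagger each machine’s poll start so a fleet doesn’t fire in bursts. A minimal sketch under assumed names (nothing here is a FOCAS construct):

```python
def poll_load(machine_count: int, cadence_s: float) -> float:
    """Steady-state poll requests per second across the whole fleet."""
    return machine_count / cadence_s

def stagger_offsets(machine_count: int, cadence_s: float) -> list:
    """Offset each machine's first poll so requests spread evenly
    across the cadence window instead of firing simultaneously."""
    step = cadence_s / machine_count
    return [round(i * step, 3) for i in range(machine_count)]

# Example: 30 machines on a 5-second cadence is a sustained
# 6 requests/second that the collector and network must carry.
```

Running the numbers this way before a rollout makes the pilot-to-fleet jump a capacity decision rather than a surprise.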
Common monitoring-focused data reads
While availability varies by control/options, monitoring systems commonly read:
- Run/stop and cycle-related status (enough to infer whether the machine is cutting vs not).
- Alarm state and alarm codes (what happened and when).
- Mode (AUTO/MDI/JOG) to separate production from setup activity.
- Feed/spindle override values that can explain slow cycles or troubleshooting behavior.
- Program context such as program number/name, when the control exposes it.
Disconnect handling: don’t turn comm loss into “idle”
Collectors need explicit logic for timeouts, retries, and session resets. Operationally, the monitoring system should represent “communication loss/offline” as a distinct state—not silently fold it into stop/idle—because otherwise you’ll report artificial idle time that’s actually a network issue. This becomes more important as you expand monitoring across VLANs, Wi-Fi bridges, or long cable runs.
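One way to keep a single dropped poll from flipping a machine to OFFLINE, while still surfacing sustained comm loss as its own state, is a small per-machine failure counter. This is a sketch under assumed semantics (each poll either succeeds or fails), not a specific vendor implementation; the threshold is something to tune:

```python
from typing import Optional

class SessionHealth:
    """Track consecutive poll failures for one machine and surface
    communication loss as an explicit OFFLINE state, never as idle."""

    def __init__(self, offline_after: int = 3):
        self.offline_after = offline_after  # tolerate brief blips
        self.failures = 0

    def record(self, read_ok: bool) -> Optional[str]:
        """Return "OFFLINE" once failures cross the threshold.
        None means: classify normally from the latest good read."""
        if read_ok:
            self.failures = 0
            return None
        self.failures += 1
        if self.failures >= self.offline_after:
            return "OFFLINE"
        return None
```

The key design choice is that OFFLINE is a return value of its own, so downstream rollups can never silently fold comm loss into stop/idle.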
From raw signals to ‘run/idle/alarm’: state modeling that avoids utilization leakage
The difference between “connected machines” and “useful monitoring” is the state model. Raw CNC status does not automatically equal utilization because setup, warm-up, waiting on an operator, optional stops, and troubleshooting can share similar controller signatures unless you apply a rule set.
A simple precedence model (example)
A practical way to turn polled reads into operational states is to apply precedence rules. Here’s a text “pseudo-flow” that many shops start with and then tune:
- If offline/timeout → state = OFFLINE
- Else if alarm active → state = ALARM (capture code)
- Else if E-stop active → state = E-STOP
- Else if cycle running (cycle start / running flag) → state = RUN
- Else if feed hold / program stop condition → state = HOLD/STOP (often the most nuanced bucket)
- Else if ready, not running → state = IDLE/READY
That hierarchy is deliberately not “one size fits all.” It’s a starting point that forces clarity: if your system can’t separate RUN from HOLD reliably, you’ll misclassify time in ways that hide real capacity constraints.
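The pseudo-flow translates almost directly into code. The field names below are assumptions about what a collector might derive from polled reads, not actual FOCAS endpoint names; the point is the top-down precedence, where the first matching rule wins:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    # Illustrative flags a collector might derive from one poll cycle.
    online: bool
    alarm_active: bool
    estop_active: bool
    cycle_running: bool
    feed_hold: bool

def classify(s: Sample) -> str:
    """Apply the precedence rules top-down; first match wins."""
    if not s.online:
        return "OFFLINE"
    if s.alarm_active:
        return "ALARM"
    if s.estop_active:
        return "E-STOP"
    if s.cycle_running:
        return "RUN"
    if s.feed_hold:
        return "HOLD/STOP"
    return "IDLE/READY"
```

Because ALARM outranks RUN, a machine that alarms mid-cycle is counted as ALARM for that interval, which is usually what supervisors expect when they review the timeline.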
Mini walk-through #1: collector read → state durations
Imagine a collector polling three core items: (1) connectivity/session health, (2) alarm active + code, (3) cycle status + mode. Over a 10–30 minute window, the sampled points are converted into state-change events:
- 08:10:12 — mode=AUTO, cycle=RUNNING, alarm=OFF → RUN starts
- 08:18:40 — cycle=STOPPED, alarm=OFF, mode=AUTO → transition to HOLD/STOP
- 08:20:05 — alarm=ON (code captured) → transition to ALARM
- 08:24:30 — alarm=OFF, mode=MDI, cycle=STOPPED → transition to IDLE/READY or SETUP bucket (depending on your rules)
The monitoring system then rolls those events into duration buckets for the shift. This is where machine utilization tracking software either earns trust (states align with reality) or creates arguments (states don’t match what happened).
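Rolling an event list like the one in this walk-through into duration buckets is a straightforward fold over consecutive timestamps: each state lasts until the next event, or until the end of the reporting window. A minimal sketch (the date and window end are assumed for illustration):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def state_durations(events, window_end):
    """events: list of (timestamp, state) tuples, sorted by time.
    Each state persists until the next event (or the window end)."""
    totals = defaultdict(timedelta)
    for (t, state), (t_next, _) in zip(events, events[1:] + [(window_end, None)]):
        totals[state] += t_next - t
    return dict(totals)

ts = lambda s: datetime.fromisoformat("2024-05-06 " + s)  # illustrative date
events = [
    (ts("08:10:12"), "RUN"),
    (ts("08:18:40"), "HOLD/STOP"),
    (ts("08:20:05"), "ALARM"),
    (ts("08:24:30"), "IDLE/READY"),
]
buckets = state_durations(events, window_end=ts("08:30:00"))
# RUN 8m28s, HOLD/STOP 1m25s, ALARM 4m25s, IDLE/READY 5m30s
```

This is the computation that shift reports are built on, so getting the event timestamps right matters more than any dashboard styling downstream.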
Mini walk-through #2: the “machines ran” night shift problem
A common scenario: the night shift reports “machines ran,” but monitoring shows long idle gaps. This is often not a connectivity failure—FOCAS is connected and reading status—but a mapping issue:
- If your rules treat feed hold or optional stop (M01) as RUN, the system will overstate runtime and hide stoppages that operators considered “not really running.”
- If your rules ignore mode changes (AUTO → JOG/MDI), setup and troubleshooting can get lumped into generic idle, making the gaps look like “no one worked.”
- If door-open/setup patterns aren’t modeled (even as a proxy using mode/stop combinations), a machine can appear READY while actually blocked by in-process setup work.
The fix is not “poll faster.” The fix is to tune the precedence and interpretation so stop/hold behavior is visible—and then review those timelines with the night lead to confirm that the categories match what they mean operationally.
If you want a deeper operational read on how shops use these state buckets to isolate lost time (without turning this into a reason-coding workflow), see machine downtime tracking.
Integrating into a machine monitoring system: normalization, timestamps, and multi-machine consistency
Once FOCAS data is being read, the monitoring system has to make it comparable across machines and shifts. This is where “it works on one machine” becomes “it’s trusted across the floor.”
Normalization: IDs, clocks, and state enums
Normalization work is unglamorous but essential: consistent machine identifiers (names that match what the floor uses), time zone handling, and protection against clock drift when multiple collectors or servers are involved. Equally important is a standardized state enumeration so “RUN” means the same thing on a newer 5-axis as it does on an older 2-axis lathe.
Samples vs events: converting polls into durations
Because many integrations are polling-based, you’re collecting a stream of samples. Monitoring systems typically convert those samples into state-change events and then roll up durations (e.g., “RUN for 22 minutes, HOLD for 6 minutes, ALARM for 3 minutes”). That conversion step needs clear rules for “no data received,” “stale data,” and short blips that should (or shouldn’t) count as real state changes.
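A sketch of that conversion, with a simple debounce: a new state only becomes “real” after it persists for a minimum number of consecutive polls, so a single-poll blip doesn’t generate a state-change event. The threshold and tuple shapes are assumptions to tune, not a standard:

```python
def samples_to_events(samples, min_samples: int = 2):
    """samples: list of (timestamp, state) polls in time order.
    Emit (timestamp, state) events, timestamped at the first poll of
    each confirmed run, ignoring runs shorter than min_samples."""
    events = []
    current = None                   # last confirmed state
    pending, count, since = None, 0, None
    for t, state in samples:
        if state == current:
            pending, count = None, 0     # candidate change fizzled out
            continue
        if state != pending:
            pending, count, since = state, 1, t  # new candidate state
        else:
            count += 1
        if count >= min_samples:
            current = state              # candidate confirmed
            events.append((since, state))
            pending, count = None, 0
    return events
```

The same structure is a natural place to hang “no data received” and “stale data” rules: a missing poll can be fed in as a distinct state rather than being skipped.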
Mixed FANUC generations: define a minimum viable dataset
A common scenario: a shop adds 8 more FANUC machines to monitoring and hits inconsistent data availability. Some controls expose part count or cycle completion cleanly; others require alternate signals (or don’t provide a reliable part counter at all). If you try to force identical KPIs from non-identical inputs, you’ll either create gaps or distort results.
The practical approach is to define a minimum viable dataset that every connected machine can support (often: connectivity, alarm presence/code, mode, and cycle/run status). Then layer in “enhanced” metrics only where the control/options support them—without changing the core logic that your ops team uses to compare shifts and cells.
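In practice this amounts to validating each machine against the core signal set at onboarding and recording any extras separately, so enhanced metrics never leak into fleet-wide comparisons. A sketch (the signal names are placeholders for whatever your collector exposes, not FOCAS identifiers):

```python
# Assumed minimum viable dataset every connected machine must support.
CORE_SIGNALS = {"connectivity", "alarm_code", "mode", "cycle_status"}

def check_capabilities(available: set):
    """Return (missing core signals, enhanced extras) for one machine.
    A machine with missing core signals shouldn't share core KPIs yet."""
    missing = CORE_SIGNALS - available
    enhanced = available - CORE_SIGNALS
    return missing, enhanced
```

A newer control might report extras like a part counter; an older one missing a core signal gets flagged at onboarding instead of quietly producing distorted comparisons later.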
Shift context: make leakage visible by shift, not averaged away
Multi-shift shops often feel the ERP-versus-reality gap most sharply: the schedule says jobs were “on machines,” but actual behavior includes long idle pockets, frequent holds, or repeated restarts. Attaching states to a shift calendar (and reporting by shift) prevents those differences from disappearing into daily averages—and it makes coaching and process fixes possible without guessing.
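Attaching a state interval to shifts is just clipping it against each shift window, which also handles intervals that straddle a shift change. A minimal sketch with an assumed two-shift calendar (times and names are illustrative):

```python
from datetime import datetime

def attribute_to_shifts(start, end, shifts):
    """shifts: list of (name, shift_start, shift_end) datetimes.
    Return seconds of [start, end) falling inside each shift."""
    out = {}
    for name, s_start, s_end in shifts:
        lo, hi = max(start, s_start), min(end, s_end)
        if hi > lo:  # interval overlaps this shift window
            out[name] = (hi - lo).total_seconds()
    return out

ts = datetime.fromisoformat
shifts = [  # illustrative calendar; real ones come from your ops team
    ("night", ts("2024-05-06 22:00:00"), ts("2024-05-07 06:00:00")),
    ("day",   ts("2024-05-07 06:00:00"), ts("2024-05-07 14:00:00")),
]
# An idle interval that straddles the 06:00 shift change:
split = attribute_to_shifts(ts("2024-05-07 05:30:00"),
                            ts("2024-05-07 07:00:00"), shifts)
```

Splitting at the boundary is what keeps a 90-minute idle pocket from being credited entirely to whichever shift happened to close it out.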
Implementation reality: rollout steps, security, and acceptance testing
A FOCAS integration that survives real production has two goals: (1) stable data collection, and (2) trusted interpretation. If either is missing, you’ll end up back in manual reporting—exactly where ERP entries drift away from what machines actually did.
Rollout sequence that scales
Keep the rollout tight and representative: pilot 2–3 machines (different generations, different work content), validate the state rules with supervisors on two shifts, then expand by cell/line. This avoids the common failure mode where a pilot “works,” but the logic collapses when you add more machines, more operators, and more variability.
Security basics that don’t create friction
Treat the CNC network like production infrastructure: least-privilege access, segmented OT VLANs where appropriate, documented firewall rules, and simple change control so the monitoring connection doesn’t “mysteriously” break after network updates. This is especially relevant in mid-market shops that need solutions to work without heavy corporate IT overhead.
Acceptance tests that matter to operations
Use acceptance tests tied to how your floor runs, not generic “data is showing” checks:
- Alarm capture accuracy: alarm on/off transitions and correct code logging.
- Run/idle transitions: cycle start, feed hold, reset, and stop behaviors classified intentionally.
- Disconnect representation: offline/comm loss does not masquerade as idle.
- Shift reporting: state durations by shift align with observed production patterns and supervisor expectations.
When to escalate (and what to do instead)
If key data isn’t available due to control options or permissions, escalate early: either enable the needed options, define alternate signals, or adjust KPI expectations so you don’t build reports on assumptions. This is also where interpretation help matters—especially when you’re trying to turn raw states into consistent, shift-level visibility without spending weeks debating definitions.
For teams that want help explaining “what changed” in plain language (without burying supervisors in logs), an assistant layer can reduce the time spent translating machine behavior into action. See the AI Production Assistant for an example of how state and alarm context can be summarized for faster triage.
If you’re scoping rollout effort, licensing, or what a deployment implies operationally (without guessing at numbers), review pricing to frame implementation expectations around your machine count and shift needs.
If you’re evaluating whether your FANUC mix can produce trustworthy run/idle/alarm visibility—and you want to confirm the control prerequisites and state mapping approach before expanding—schedule a demo. The most productive demos start with 2–3 representative machines and a clear definition of what “running” needs to mean on your floor.
