
The ITOM Architecture Tax, Part 6: The Cost You Didn’t Decide to Take On

Unresolved Monitored Objects

Series Recap

This is the final part of a six-part series on how ITOM architecture decisions become licensing costs. Parts 1 through 5 covered Servers, PaaS Resources, Containers, EUC Devices, and FaaS Resources. Each one traced a path from an implementation decision – a Discovery schedule, a connector enabled, an agent deployed – to a SU count that no one intended.

Part 6 covers Unresolved Monitored Objects. UMOs are different from every resource type covered so far. There is no implementation decision to trace. No connector to review, no agent to scope, no classification policy to define. UMOs emerge from a gap – between what monitoring tools know about and what the CMDB contains – and they accrue before anyone knows to look for them. By the time they are identified, gap-period usage may already be captured in prior daily counts.

One framework note that applies across all six parts: a resource can exist in the CMDB without consuming a Subscription Unit. Three conditions must all be true before a CI counts – it is in a table mapped to a licensable category, it is in scope for an ITOM product, and any category-specific requirements are satisfied.

UMOs are the exception to this framework: a UMO is counted when ITOM Health receives an event or metric that cannot be resolved to any CI in CMDB, and is tracked in em_unique_nodes rather than in a CMDB table. UMOs require no CI, no table mapping, and no ITOM scope decision – they emerge from coverage gaps alone.

What Unresolved Monitored Objects Are

When ITOM Health or Event Management receives an event or metric it cannot correlate to an existing CI, that unmatched source is treated as an Unresolved Monitored Object – a distinct managed resource type counted at a 1:4 ratio while it remains unresolved. As long as events continue to arrive from a node without a matching CI, that node contributes to UMO usage.
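The counting mechanics above can be sketched with a small simulation. This is an illustrative model only, not ServiceNow's actual metering logic; the node names and CI list are hypothetical, and the 1:4 ratio is applied here as four unresolved monitored objects per Subscription Unit.

```python
# Illustrative model of UMO counting (not ServiceNow's actual metering logic).
# Assumption: the 1:4 ratio means four unresolved monitored objects consume one SU.

def umo_usage(event_nodes, cmdb_cis):
    """Return the unresolved nodes and the SUs they consume at 1:4."""
    unresolved = {node for node in event_nodes if node not in cmdb_cis}
    sus = len(unresolved) / 4  # 1:4 ratio: 4 UMOs per Subscription Unit
    return unresolved, sus

# Hypothetical monitoring sources and CMDB contents
events = {"web01", "web02", "db01", "cache01"}   # nodes emitting events
cmdb = {"web01", "db01"}                          # CIs present in CMDB

unresolved, sus = umo_usage(events, cmdb)
print(sorted(unresolved))  # ['cache01', 'web02']
print(sus)                 # 0.5
```

Note that usage is a function of coverage, not of event volume: one chatty node and one quiet node with no matching CI each count as a single unresolved object.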

With every other resource type in this series, SU exposure is tied to an explicit onboarding decision: which servers to discover, which connectors to enable, which agents to deploy. UMO exposure requires none of those decisions. It can accumulate simply because monitoring was enabled before CI coverage was adequate – which is the most common implementation sequence, not an exception.

The Irreversibility Risk: Improving CI coverage and correlation rules prevents additional UMO usage going forward. Do not assume remediation rewrites prior daily counts. Validate the effect in the ITOM Licensing dashboard and with your account team. The reliable control is ensuring CMDB coverage and event-to-CI match rules are in place before monitoring tools go live.

Where UMOs Come From

UMOs are not a single problem. They are four distinct problems that produce the same outcome – an unmatched event source counted at 1:4 until resolved.

Monitoring before CMDB is ready – Events begin flowing before the CIs they reference exist. The most common source. Every event from an unmatched node creates an unresolved object immediately.

Stale Discovery data – Renamed, re-IPed, or decommissioned infrastructure creates orphaned identifiers that incoming events can no longer resolve against existing CIs.

Hostname mismatches – Monitoring tools and Discovery assign different names to the same infrastructure. Persistent unresolved nodes result even when the underlying hardware is fully represented in CMDB.

Shadow IT and unmanaged devices – Devices generating events outside Discovery scope contribute silently. Often the last source identified in a UMO remediation effort because the CIs were never intended to exist.
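The hostname-mismatch source deserves a concrete illustration. The sketch below assumes the mismatch is FQDN-versus-short-name plus case differences; the hostnames are hypothetical, and real event-to-CI matching in ServiceNow uses configurable identification rules rather than this exact logic.

```python
# Sketch: why hostname mismatches create unresolved nodes, and how a simple
# normalization step avoids them. Hostnames are hypothetical; ServiceNow's
# real event-to-CI matching uses configurable rules, not this exact logic.

def normalize(hostname: str) -> str:
    """Lowercase and strip the domain suffix so FQDN and short name compare equal."""
    return hostname.lower().split(".")[0]

cmdb_names = {"web01", "db01"}                     # short names from Discovery
event_names = {"WEB01.corp.example.com", "DB01"}   # FQDN/uppercase from monitoring

# Raw comparison: every event node looks unresolved
raw_unmatched = {n for n in event_names if n not in cmdb_names}

# Normalized comparison: both nodes resolve to existing CIs
normalized_unmatched = {n for n in event_names if normalize(n) not in cmdb_names}

print(len(raw_unmatched))         # 2
print(len(normalized_unmatched))  # 0
```

The point of the sketch: the hardware can be fully represented in CMDB and still generate UMO usage, because the match is performed on identifiers, not on the underlying infrastructure.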

The monitoring and licensing teams are often looking at the same problem from different angles, and neither is aware the other has it. Treating em_unique_nodes as a monitoring hygiene metric rather than a licensing signal means cost continues to accumulate until someone connects the two – and by then the gap-period usage has already been reported.

How UMOs Are Triggered

Unlike every other resource type in this series, there is no ingestion path to configure or scope. UMOs are triggered passively – by events and metrics that arrive without a matching CI to receive them.

Ratios in this series reflect the ServiceNow ITOM Subscription Unit Overview effective February 1, 2024. Customers contracted prior to that date may be subject to different ratios. Confirm which version governs your executed agreement before using these figures for planning or renewal modeling.

Trigger: Event or metric with no matching CI
How it works: ITOM Health or Event Management receives an event or metric with no matching CI in CMDB. The unmatched node is tracked in em_unique_nodes and counted at 1:4 for as long as it remains unresolved during usage calculation periods.
Example sources: Event Management, ITOM Health, em_unique_nodes table
SU ratio: 1:4
Counts? YES – for each period the node remains unresolved. Remediation stops additional usage from accruing but may not affect prior daily counts (see Irreversibility Risk above).
Action Item
Review em_unique_nodes before and after every major monitoring integration or Discovery expansion. UMO count reduction is both a licensing exercise and a CMDB quality initiative. Every unresolved node is also a gap in event correlation fidelity – the licensing exposure and the operational gap are the same problem.
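That review can start with a single count query. The sketch below builds a count request against em_unique_nodes using the ServiceNow REST Aggregate API's sysparm_count parameter; the instance name is a placeholder, the request is constructed but not sent, and you should verify the endpoint and your authentication method against your own instance before relying on it.

```python
# Sketch: building a count query against em_unique_nodes via the ServiceNow
# Aggregate API. The instance name is a placeholder; verify the endpoint and
# your authentication method against your instance before relying on this.
from urllib.parse import urlencode

INSTANCE = "https://example.service-now.com"  # placeholder instance

def count_url(table: str, query: str = "") -> str:
    """Build an Aggregate API URL that returns a row count for `table`."""
    params = {"sysparm_count": "true"}
    if query:
        params["sysparm_query"] = query  # optional encoded query filter
    return f"{INSTANCE}/api/now/stats/{table}?{urlencode(params)}"

# Snapshot the unresolved-node count before and after a monitoring expansion
# by issuing the same request at both points and comparing the results.
print(count_url("em_unique_nodes"))
```

Capturing the count before and after each integration turns the review from a one-time audit into a regression check: any expansion that raises the count has outrun CI coverage.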

Real-World Scenario: UMO Accumulation

An organization deploys a network monitoring tool that begins routing alerts into Event Management before their CMDB population project is complete. Over 60 days the monitoring tool generates events from 1,200 unique infrastructure nodes. Only 700 of those nodes have corresponding CIs in CMDB at the time the events arrive. The remaining 500 become unresolved nodes immediately.

Approach: Monitoring deployed before CMDB is complete
Monitored nodes: 1,200
CIs in CMDB: 700 at go-live
UMOs generated: 500
UMO SUs consumed: 125 SUs at 1:4, accruing for each period those nodes remain unresolved

Approach: CMDB population completed before monitoring goes live
Monitored nodes: 1,200
CIs in CMDB: 1,200 at go-live
UMOs generated: 0
UMO SUs consumed: 0. Every incoming event resolves to a CI. No unresolved nodes. No UMO SUs.

Approach: Retroactive CMDB remediation after 60-day gap
Monitored nodes: 1,200
CIs in CMDB: 1,200 after remediation
UMOs generated: 500 historical
UMO SUs consumed: 125 UMO SUs consumed during the gap period. Future events resolve correctly. Gap-period daily counts may not be affected by remediation – validate with your account team.

Key Takeaway
All three approaches produce identical eventual CMDB coverage. The difference is 125 UMO SUs. The reliable control is ensuring CMDB coverage and event-to-CI match rules are in place before monitoring tools go live. Remediation after the fact closes the operational gap but may not eliminate SU exposure already captured in prior daily counts – validate with your account team.
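The scenario arithmetic, including the point that remediation stops accrual without rewriting history, can be worked through in a few lines. This is an illustrative model using the scenario's numbers: it assumes four unresolved nodes per SU and treats the daily count as a flat 500 unresolved nodes for the full 60-day gap.

```python
# Illustrative model of gap-period UMO exposure (numbers from the scenario above).
# Assumptions: 1:4 means 4 unresolved nodes per SU; daily counts are flat.

UNRESOLVED_NODES = 500   # monitored nodes with no matching CI during the gap
GAP_DAYS = 60            # days before CMDB remediation completes

daily_sus = UNRESOLVED_NODES / 4            # 125 SUs consumed each day of the gap
daily_counts = [daily_sus] * GAP_DAYS       # prior daily counts, already captured

# Remediation on day 60: subsequent days accrue no new UMO usage...
daily_counts.append(0)

# ...but the prior daily counts are unchanged; the gap-period figure stands.
print(max(daily_counts))   # 125.0
print(daily_counts[-1])    # 0
```

The model makes the irreversibility point mechanical: remediation changes the shape of the curve going forward, not the 60 daily counts already recorded behind it.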

Common Misconceptions

“We don’t use ITOM Health, so UMOs don’t apply to us.”

UMO exposure exists anywhere events or alerts are being routed into Event Management, regardless of whether ITOM Health is a named workload in the implementation. Organizations routing monitoring alerts from external tools into Event Management can accumulate UMO SUs if those events reference infrastructure not present as CIs in CMDB. Review em_unique_nodes even if ITOM Health is not deployed.

“Once we fix our CMDB coverage, our UMO SUs will go down.”

Fixing CI coverage prevents additional UMO usage going forward. It does not guarantee that prior daily counts are revised. Validate the effect with your account team.

“UMOs are a monitoring problem, not a licensing problem.”

In the current ITOM SU model, UMOs are a distinct licensable resource type. Every unresolved node is both a gap in event correlation fidelity and a cost that accrues until someone connects em_unique_nodes to the licensing dashboard. Treating it as purely a monitoring hygiene metric misses the licensing dimension entirely.

Full Series: Architecture Review Checklist

Across all six resource types, the pattern is the same. SU exposure is not set at renewal. It is set at implementation, by decisions made before anyone thought to ask what they would cost.

The table below consolidates the key review questions from all six parts of this series. None of this requires a licensing audit. It requires one structured look at the right data – the ITOM Licensing dashboard, CI type mappings, Discovery scope definitions, and em_unique_nodes. In most environments, the largest reduction opportunities are visible within the first few hours.

Servers / VMs
Are dev/test, decommission-pending, and non-production servers excluded from ITOM scope in the Licensing module, or do they exist in CMDB with no exclusions defined?

Were CI type mappings reviewed before the first Discovery schedule ran? Has that review been repeated as the environment has changed?

Is there a documented process for adding servers to ITOM scope that treats the scoping decision as a licensing decision?
PaaS Resources
Was a PaaS classification policy in place and validated before your cloud connectors were enabled? If not, has a retroactive review been completed?

Are there stale CIs from deprovisioned cloud resources still mapped to a licensable PaaS category?

Are cloud-side tags explicitly mapped to ITOM Licensing scope within ServiceNow, or is the assumption that tagging in the provider console controls what counts?
Containers
Are container SU estimates based on 90-day daily averages, or on point-in-time counts?

Were dev/test namespaces and CI/CD pipeline containers excluded from the Container licensable category before integration was enabled?

Is there a defined review cadence for container scope that matches the pace at which development teams provision new namespaces and workloads?
EUC Devices
Is there a documented approval process for ACC-V deployments that includes SU impact modeling?

Are all endpoint CIs arriving via SGC ingestion, or has ACC-V been deployed to devices not explicitly scoped as EUC SU consumers?

Has endpoint CI classification been validated to ensure CIs land in EUC tables rather than server-class tables?
FaaS Resources
Are FaaS CI types explicitly mapped to the FaaS licensable category, or could they be landing in PaaS at 1:3?

When did someone last review the function count against what development teams have actually deployed?

Are reconciliation and aging rules configured to retire stale FaaS CIs when functions are deleted in the cloud provider?
UMOs
When did someone last review em_unique_nodes? Was it reviewed before and after every major monitoring expansion in the last 12 months?

Are monitoring integrations sequenced behind CMDB population – or does monitoring go live first and CI coverage catch up later?

Are event-to-CI correlation rules current? Hostname mismatches and stale Discovery data produce UMOs even when the underlying infrastructure exists in CMDB.

Closing: What Good Architecture Actually Looks Like

Every part of this series covered a different resource type, but the finding is the same across all six. SU exposure is not an inevitable cost of running ITOM at scale. It is the cost of implementation decisions made without full visibility into their licensing consequences.

The organizations that manage ITOM licensing well are not the ones with the most sophisticated contracts. They are the ones where the people building the platform understand what counts, what does not, and why – before the first Discovery schedule runs.

Certain connector types may affect SU consumption in ways that fall outside the standard three-condition framework. Confirm connector-specific behavior with your ServiceNow account team before relying on scope exclusions as a licensing control.

If the checklist above produced more uncertainty than answers, that is a conversation worth having before your next renewal.

Elevsis Delgadillo
Senior Vice President, Customer Success, KeenStack
Former VP of IT at Banner Health with deep expertise in I&O, Enterprise Architecture, and Enterprise Digital Transformation.