Data sits at the centre of most discussions of industrial digital transformation. Industrial organisations grapple with a range of data challenges: in some cases, little data is accessible because it remains locked in sensors, controllers, or even on paper; in others, especially well-planned greenfield setups, data can be abundant. Yet regardless of how much data is captured, much of it is not in a form that is truly usable for downstream applications and analysis.
Applications close to the data source, such as SCADA/HMI, can usually work with this data: the gap between source and application is small, so the context is preserved. As data travels up to enterprise layers like MES/ERP, however, that context is often lost, making it difficult for applications at these higher levels to interpret the data’s meaning and limiting their ability to perform meaningful analysis. As a result, “stovepipe” architectures emerge, where each application builds its own channels to the data sources, because only that application understands the context in which the data is produced and consumed.
What is needed is a way for tags (the named data points in the software applications that house and move the data) to carry their context as they move up the stack. Further, all consuming applications should access the data from a common location. Contextualising data for broader usability is what Industrial DataOps seeks to address; centralising it is what is now popularly known as the Unified Namespace (UNS). These two topics, Industrial DataOps and the UNS, are the focus of this article.
What is Industrial DataOps?

Image 1: The image shows the placement of DataOps after data is acquired and before it is stored and/or consumed by other applications (industrial nodes), as well as the two key components of DataOps.
Industrial DataOps is the process of enhancing data quality and providing structure and context for an accurate logical representation, ensuring the data is usable by downstream applications. It is built on two foundational pillars:
- Data Quality Management: The process of verifying and rectifying OT data sourced from industrial assets and systems.
- Data Modelling: The process of creating a logical representation of assets and processes, where individual entities are structured and contextualised to reflect their relationships and attributes accurately.
This article focuses on the second pillar: Data Modeling.
Data modeling involves two core activities (see the sketch after this list):
- Structuring: Organizing entities (like sensors or machines) into hierarchies, establishing relationships and dependencies.
- Contextualizing: Enriching entities with attributes (both static, like name/type, and dynamic, like value/status), standardizing data through transformation (scaling, unit conversion, aggregation, etc.), and defining relationships (one-to-many, hierarchical, etc.).
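As a rough illustration, the minimal Python sketch below puts both activities side by side. All names here (Line-A, Press-01, TT-101) and the Celsius-to-Fahrenheit conversion are hypothetical, chosen only to show structuring (a one-to-many hierarchy) and contextualising (static and dynamic attributes plus a standardising transformation):

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class Attribute:
    name: str
    value: Any
    unit: Optional[str] = None
    static: bool = True  # static: name/type; dynamic: value/status

@dataclass
class Entity:
    name: str
    entity_type: str
    attributes: List["Attribute"] = field(default_factory=list)
    children: List["Entity"] = field(default_factory=list)  # one-to-many hierarchy

def c_to_f(celsius: float) -> float:
    """A standardising transformation: unit conversion to a common scale."""
    return celsius * 9 / 5 + 32

# Structuring: a line owns a machine, which owns a temperature sensor.
sensor = Entity("TT-101", "temperature_sensor", attributes=[
    Attribute("manufacturer", "Acme"),                                  # static context
    Attribute("temperature", c_to_f(48.2), unit="degF", static=False),  # dynamic value
])
machine = Entity("Press-01", "hydraulic_press", children=[sensor])
line = Entity("Line-A", "production_line", children=[machine])
```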

Image 2: Relationships connect entities in and outside a model, defining how they interact.
There are two main types of data models (a short example follows the list):
- Asset Models: Digital blueprints of physical items, cataloging information such as structure, behavior, and interrelations.
- Use-Case Models: Represent operational processes and workflows, encapsulating logic for tasks such as predictive maintenance, energy optimization, etc.
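To make the distinction concrete, the hypothetical snippet below layers a use-case rule on top of the kind of asset hierarchy sketched earlier. The field names and the 120 °F threshold are illustrative assumptions, not drawn from any specific product:

```python
# Use-case model sketch: a condition-monitoring rule that walks an asset
# tree and flags over-temperature equipment anywhere in the hierarchy.
def over_temperature(entity: dict, limit_f: float = 120.0) -> bool:
    if entity.get("temperature_f", float("-inf")) > limit_f:
        return True
    return any(over_temperature(c, limit_f) for c in entity.get("children", []))

# Asset-model data the use-case model consumes (names are hypothetical).
line = {"name": "Line-A", "children": [
    {"name": "Press-01", "children": [
        {"name": "TT-101", "temperature_f": 118.76}]}]}

print(over_temperature(line))  # False: 118.76 degF is within the limit
```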
Data modeling transforms raw data into valuable information, clarifying what entities represent, how they relate, and their roles in the broader system. The centralized place where the contextualized data resides is called the Unified Namespace (UNS).
What is UNS?
A Unified Namespace (UNS) is a centralized framework that organizes and manages all enterprise data in real time, enabling industrial nodes to share and access structured data from a single, standardized location. It uses event streaming and message brokers to facilitate data exchange.
As an architecture, UNS is technology-agnostic. In practice, however, most industry implementations use MQTT as the backbone. MQTT is a lightweight, publish-subscribe messaging protocol well-suited for real-time, scalable industrial data exchange. Sparkplug B builds on MQTT by standardizing payloads and topic namespaces, ensuring consistent interpretation of data from different devices and systems. OPC-UA can also serve as the technology for UNS – either as a standalone OPC-UA server or as OPC-UA over MQTT, which combines the semantic modeling strengths of OPC-UA with the efficient transport of MQTT. Both OPC-UA and MQTT/Sparkplug B can be used to achieve a UNS, but each has distinct strengths (a minimal MQTT publishing sketch follows this list):
- OPC-UA is a comprehensive information modeling standard. It allows explicit, rich modeling of nodes, relationships, attributes, and methods. OPC-UA’s NodeSet files act as blueprints for systems, devices, and processes, supporting deep semantic modeling and plug-and-play interoperability. It excels where explicit definition of hierarchies and relationships is needed.
- MQTT/Sparkplug B focuses on efficient, scalable data exchange. Sparkplug B standardizes how data is published and accessed, creating a uniform data model that acts as a single source of truth. However, Sparkplug B is less prescriptive about relationships – these are often implied via topic hierarchy, and complex modeling may require external support (relational, NoSQL, or graph-based databases) for full representation.
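As a simplified illustration of the MQTT side, the sketch below publishes one contextualised reading into an ISA-95-style topic hierarchy using the paho-mqtt library (1.x API assumed). The broker address, topic path, and JSON payload fields are invented for illustration; a Sparkplug B implementation would replace the ad-hoc JSON with the standardised Sparkplug payload and topic namespace:

```python
import json
import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2" (1.x API assumed)

client = mqtt.Client()                        # 1.x constructor; 2.x requires a CallbackAPIVersion
client.connect("broker.example.local", 1883)  # hypothetical broker address
client.loop_start()                           # background network loop for the QoS 1 handshake

# ISA-95-style hierarchy: enterprise/site/area/line/cell/metric
topic = "acme/plant1/packaging/line-a/press-01/temperature"
payload = json.dumps({"value": 118.76, "unit": "degF", "quality": "GOOD"})

# retain=True keeps the last known value on the broker, so late subscribers
# immediately receive the current state, a common UNS convention.
info = client.publish(topic, payload, qos=1, retain=True)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```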
There is discussion in the industrial DataOps vendor community that the best approach for UNS is multi-protocol support. Relying solely on MQTT topics can result in a flat context. MQTT-focused platforms like HiveMQ Pulse are moving toward data models constructed from strongly typed nodes, much like OPC-UA’s references, that can be hydrated with data from multiple sources and protocols. This allows for both hierarchical and cross-node relationships, capturing complex dependencies across assets, lines, and sites.
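A minimal sketch of what such typed nodes might look like is shown below. The node types and the `powered_by` reference name are hypothetical, not HiveMQ Pulse’s or OPC-UA’s actual API; they merely illustrate the difference between hierarchical and cross-node relationships:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TypedNode:
    node_id: str
    node_type: str  # strongly typed, in the spirit of an OPC-UA type definition
    children: List["TypedNode"] = field(default_factory=list)        # hierarchy
    references: Dict[str, "TypedNode"] = field(default_factory=dict) # cross-node links

line_a = TypedNode("site1/line-a", "ProductionLine")
press = TypedNode("site1/line-a/press-01", "HydraulicPress")
meter = TypedNode("site1/meters/em-07", "EnergyMeter")  # lives in another subtree

line_a.children.append(press)           # hierarchical: the line owns the press
press.references["powered_by"] = meter  # cross-node: a dependency spanning subtrees
```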
Implementing a UNS is not plug-and-play. It requires significant conceptualisation and ongoing maintenance. Challenges include establishing and maintaining a consistent data model, handling evolving operational requirements, and ensuring all stakeholders adhere to the agreed standards. Data model governance therefore becomes a key element of industrial DataOps. It is the process of managing and overseeing the development and deployment of data models in an organisation (or across the broader industrial community when models are handled by industry consortia/bodies). It involves setting and enforcing policies, standards, and best practices for creating, testing, validating, monitoring, and auditing data models.
Without strong governance, there is a real risk of fractured localised namespaces – different teams or sites creating their own naming conventions, leading to fragmentation and lost value. Within a single organisation, business units or sites with differing data needs must be brought under a unified governance framework. UNS as an architecture provides the structure, but it’s up to central teams, including DataOps vendors and system integrators, to provide the practical guidance and enforcement needed to make it work at scale.
Data model governance includes the following (see the sketch after this list):
- Access control: Managing who can view or modify data models and their outputs.
- Version control: Tracking changes and maintaining a history of model updates.
- Storage: Securely storing data models and related files in a central location.
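The few lines of Python below illustrate two of these mechanisms: a content-addressed version id for change tracking and a trivial role check for access control. The role table, user names, and model shape are invented for illustration and do not reflect any particular governance product:

```python
import hashlib
import json

ROLES = {"alice": "editor", "bob": "viewer"}  # hypothetical access-control table

def can_modify(user: str) -> bool:
    """Access control: only editors may change a model."""
    return ROLES.get(user) == "editor"

def model_version(model: dict) -> str:
    """Version control: a content hash that changes whenever the model does."""
    blob = json.dumps(model, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

model = {"name": "press", "attributes": ["temperature", "pressure"]}
if can_modify("alice"):
    print(model_version(model))  # store model + version in the central registry
```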
It is also important to note that, in multi-organisational or federated environments (such as smart cities or cross-company supply chains), a centralised UNS may not be viable. These scenarios demand federated data architectures that allow controlled data sharing while respecting organisational autonomy.
The value of getting DataOps right
Industrial DataOps is not a silver bullet, but it is a foundational enabler for making industrial data usable, trustworthy, and valuable. By investing in robust data modelling, adopting a UNS, and enforcing governance, organisations can avoid the pitfalls of poor-quality data, whose problems compound as it moves downstream. Ultimately, organisations that get DataOps right position themselves to unlock new efficiencies and respond with agility to changing business needs, while those that do not risk falling behind as their data becomes increasingly fragmented, inconsistent, and underutilised.
Disclaimer: This article is a guest contribution. All definitions and images are based on the IoT Analytics Industrial Connectivity Market Report 2024–2028, which the writer of this article authored. The views, opinions, and information expressed in this piece are solely those of the author. Machine Maker does not verify the authenticity of the content and bears no responsibility for its accuracy or completeness. All claims, statements, and perspectives are attributable to the author alone.