Flow transforms Industry 3.0 data projects from fragile, scattered systems into a robust, centralized infrastructure that's easy to govern and ready to scale. Don't just manage, innovate.
No Code / Low Code
Flow empowers users to manage and transform data with minimal coding, making it accessible for non-technical subject matter experts to contribute effectively and streamline workflows.
Universal Governance
Flow ensures consistent data governance across the platform, allowing for standardized control and compliance, which is crucial as operations scale. You can ensure your data is cleansed, calculated, and presented the way you want it across your entire organization.
Flexible Architecture
Built on a flexible infrastructure that includes Docker, Linux, and Windows, Flow supports both horizontal and vertical scaling and can be deployed at the edge, on-site, or in the cloud to meet diverse operational needs.
Platform Agnostic
No matter what variety of SCADA, historian, MES, ERP, or SQL sources you need to connect, Flow can unify your data. Flow also maintains compatibility with a wide range of data consumers, ensuring seamless integration and data flow regardless of cloud platform, business intelligence vendor, or database.
Template Library
Flow's Template Library enhances scalability by allowing you to templatize and manage the information models you build. Templates can be instantiated multiple times, nested within each other, and versioned to accommodate variations of the same foundational model, streamlining deployment and maintenance across different scenarios.
API Driven
With its robust API-driven approach, Flow allows for extensive customization and integration, facilitating automation and connectivity between various systems and technologies.
We start by creating a single consolidated information model to abstract and unify multiple underlying namespaces. That's a mouthful; what does it mean?
Flow works independently of your data sources, remaining platform agnostic. You can connect and unify dozens of data sources while remaining independent of their functional namespaces.
Finally, a way to unite time-series historians, SQL databases, real-time data brokers and servers, and even manual data capture. ERP, MES, historian, LIMS, CMMS, and many other operational systems can easily be connected.
The Operator, Team Leader or Manager accessing the Flow Model doesn't need to know which tag or SQL query was used to link data to Flow. In fact, they don't want to know, nor do they care! They just want information they can trust, and that requires first cleansing the data of anomalies, outliers, and bad data. Flow makes this easy by giving you a single location to create rules around how you want to cleanse and normalize data before moving it further downstream.
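To make that concrete, here is a minimal Python sketch of the kind of cleansing rule you might define; the quality flags, limits, and outlier threshold are assumptions for illustration only, and in Flow such rules are configured through its interface rather than in code.

```python
# Hypothetical cleansing rule: keep only good-quality samples within a sane
# range, then reject single-sample spikes far from the median of the window.
from statistics import median

def cleanse(samples, low=0.0, high=500.0):
    """samples is a list of (value, quality) pairs for one time window."""
    good = [v for v, quality in samples if quality == "GOOD" and low <= v <= high]
    if not good:
        return []                       # nothing trustworthy in this window
    centre = median(good)
    mad = median(abs(v - centre) for v in good) or 1e-9
    # Reject anomalies more than 3 median-absolute-deviations from the centre.
    return [v for v in good if abs(v - centre) <= 3 * mad]

print(cleanse([(120.4, "GOOD"), (119.8, "GOOD"), (180.0, "GOOD"), (121.1, "BAD")]))
# -> [120.4, 119.8]: the spike and the bad-quality sample are dropped
```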
The Flow Model is hierarchical and generic by design. We can build our model using ISA95, ISA88, PackML, custom asset, twin thing, entity meta-model, or any combination of these. (We're not sure twin thing is really a thing, but you get the idea). The Flow Model represents physical assets, performance indicators, and logical entities. You can structure this model by area, department, or both. The point is: it is flexible. And, despite its hierarchical nature, the Flow Model allows for object "linking" across the structure.
The Flow Model can be standardized across multiple sites or production facilities. The source of a KPI will differ across sites, but the name will be consistent. On Site A, the KPI represents tag "FL001-123-FQ001.PV" (see why the managers don't care!) and on Site B, the measure represents a manually input value. But both KPIs are named "Line 1 Filler Volume", and that is what everyone will know it as, everywhere they go. Flow Templates allow for this model standardization.
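A rough sketch of that idea, using illustrative names and structure rather than Flow's actual template format: one KPI definition, bound to a different source at each site.

```python
# Illustrative only: one templated KPI, two site-specific source bindings.
kpi_template = {
    "name": "Line 1 Filler Volume",
    "engineering_unit": "L",
    "aggregation": "sum",
}

site_bindings = {
    "Site A": {**kpi_template, "source": {"type": "historian_tag", "tag": "FL001-123-FQ001.PV"}},
    "Site B": {**kpi_template, "source": {"type": "manual_entry", "form": "Daily Filler Log"}},
}

# Everyone asks for the same measure by the same name, everywhere they go.
for site, kpi in site_bindings.items():
    print(site, "->", kpi["name"], "from", kpi["source"]["type"])
```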
In many ways, the Flow Model is the "uber" Unified Namespace, consolidating multiple underlying namespaces, whether they are historian, SQL, or even MQTT namespaces. Flow brings them all together into one persisted model and, as you will learn later, even makes historical data accessible, helping you build value-added analytics apps.
The Flow Model is the "uber" Unified Namespace, consolidating multiple underlying namespaces and exposing ready-to-use information.
For us, the transformation pipeline is the most exciting part. Your engineering core's unparalleled knowledge of your process and operation must be added to your data. This is where Flow really shines.
At its core, Flow enriches every piece of data with two essential contexts: time and model. As data streams into Flow—whether marking an event or contributing to a calculated metric—it is immediately contextualized by these dimensions.
Flow utilizes event-framed periods defined by specific triggers in the data stream, such as machine start and stop events. The creation of these event frames relies heavily on the engineering expertise and intimate process knowledge of your operations team (context that is completely lost when data is just streamed to the cloud). They provide crucial context, like reasons for a machine’s downtime, which adds a layer of rich, meaningful insight to the data. This expertise transforms raw data into actionable information, enabling precise monitoring and analysis of operational events.
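As a simple illustration of event framing, the sketch below derives run and stop frames from a run-status signal and attaches an operator-supplied downtime reason; the signal, timestamps, and reason are invented for the example.

```python
# Hypothetical event framing: close a frame whenever the run state changes,
# and attach a downtime reason to stopped frames (the still-open last frame
# is ignored in this sketch).
def frame_events(samples, reasons=None):
    """samples is a time-ordered list of (timestamp, running) pairs."""
    reasons = reasons or {}
    frames, start, state = [], None, None
    for ts, running in samples:
        if state is None or running != state:
            if start is not None:
                frames.append({"start": start, "end": ts, "running": state,
                               "reason": None if state else reasons.get(start)})
            start, state = ts, running
    return frames

signal = [("08:00", True), ("08:42", False), ("09:05", True)]
print(frame_events(signal, reasons={"08:42": "Label reel change"}))
```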
In addition to event-based framing, time acts as the continuous thread through all Flow systems, anchoring every piece of information. Flow further breaks down time into comprehensible slices or periods—minutes, hours, shifts, days, weeks, months, quarters, and years. These calendar-based periods are essential for making insightful comparisons, such as assessing shift performance or analyzing year-over-year process efficiencies.
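A small sketch of a calendar-period roll-up, assuming invented shift boundaries and minute-level production counts:

```python
# Hypothetical roll-up of minute-level counts into (date, shift) buckets so
# shift-over-shift comparisons become a simple lookup.
from collections import defaultdict
from datetime import datetime

SHIFTS = [("Night", 0), ("Day", 6), ("Evening", 14), ("Night", 22)]  # start hours

def shift_for(ts: datetime) -> str:
    name = SHIFTS[0][0]
    for shift, start_hour in SHIFTS:
        if ts.hour >= start_hour:
            name = shift
    return name

def roll_up(samples):
    """samples is a list of (datetime, units_produced) pairs."""
    totals = defaultdict(float)
    for ts, units in samples:
        totals[(ts.date(), shift_for(ts))] += units
    return dict(totals)

data = [(datetime(2024, 5, 1, 7, 15), 120), (datetime(2024, 5, 1, 15, 40), 95)]
print(roll_up(data))  # keys are (date, shift), e.g. (2024-05-01, 'Day'): 120.0
```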
As data streams into Flow, it is cleansed, contextualized, and transformed by a set of calculation services that includes:
User-defined functions are used to encapsulate complex algorithms and standardize and lock down calculations throughout the Flow Model.
The Flow transformation pipeline applies these contextualization and calculation processes to multiple data streams simultaneously, removing the silos between them as they blend in near real-time. The pipeline allows us to build calculated measures that take inputs from more than one data source, or to trigger event periods from one data source while drawing their context from others, whether those sources are time-series or transactional in nature. The possibilities are limitless!
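For example, a single calculated measure might blend a time-series input with a transactional one. The sketch below assumes invented figures and a simplified, user-defined-function style calculation:

```python
# Illustrative calculated measure: running minutes from a historian tag
# blended with good units from an MES production-order table.
def units_per_running_hour(running_minutes: float, good_units: int) -> float:
    """A locked-down calculation, applied identically wherever it is used."""
    if running_minutes <= 0:
        return 0.0
    return good_units / (running_minutes / 60.0)

# One event-framed period, two very different underlying sources:
historian_minutes = 42.5   # e.g. summed from a run-status tag
mes_good_units = 1280      # e.g. queried from a production-order table
print(round(units_per_running_hour(historian_minutes, mes_good_units), 1))  # 1807.1
```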
Remove data silos as streams blend in near real-time
Flow is anything but a "black box". It holds your information and keeps it open for easy access via industry-standard protocols. Flow is your bridge from OT and IoT data streams to analytics-ready information.
Flow exposes an industry-standard REST API for model discovery and information access that can be used to build third-party apps or to integrate with existing applications.
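A hypothetical consumer might look something like the sketch below; the base URL, endpoint paths, and parameters are placeholders, so consult the Flow API documentation for the real contract.

```python
# Placeholder example of model discovery plus an information read over HTTP.
import requests

BASE = "https://flow.example.com/api"          # assumed base URL
HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme

# 1. Discover the model hierarchy to find the measure of interest.
model = requests.get(f"{BASE}/model", headers=HEADERS, timeout=30).json()
print(model)

# 2. Read calculated values for that measure over a calendar period.
values = requests.get(
    f"{BASE}/measures/line-1-filler-volume/values",
    params={"period": "day", "from": "2024-05-01", "to": "2024-05-31"},
    headers=HEADERS,
    timeout=30,
).json()
print(values)
```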
Flow provides integration components to automatically publish information out to your other systems via industry standard protocols in near real-time. How about pushing maintenance information like running hours or stroke counts up to your Asset Management system? Or actual production figures up to your ERP system? What about sending information to your Machine Learning platform in the cloud? Or even just back to your SCADA for operator visibility of KPIs calculated from multiple data sources? Flow currently integrates with:
Flow Systems can publish information to other Flow Systems! Why would this be useful? Imagine a multi-site organization, possibly spanning the globe, where each site's Flow System publishes its information up to an HQ Flow System. The HQ Flow System would provide invaluable fleet-wide information for site comparisons, benchmarking, and logistics planning. How about cost or efficiency comparisons between types of equipment? The possibilities are limitless.
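As a rough illustration of the publish-out idea, here is a sketch that pushes a calculated KPI onto an MQTT broker using the paho-mqtt library; the broker address, topic naming, and payload shape are assumptions, not Flow's built-in integration configuration.

```python
# Illustrative publish of a calculated KPI to a downstream consumer over MQTT,
# e.g. back to SCADA for operator visibility or up to an HQ aggregation point.
import json
import paho.mqtt.publish as publish

payload = {
    "measure": "Line 1 Filler Volume",
    "period": "2024-05-01T06:00/2024-05-01T14:00",
    "value": 18432.0,
    "unit": "L",
}

publish.single(
    "site-a/line-1/kpi/filler-volume",   # assumed topic convention
    payload=json.dumps(payload),
    qos=1,
    hostname="broker.example.com",
)
```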
Flow is your bridge from OT and IoT data streams to analytics-ready information
Ultimately, Flow provides value in the form of decision support, insight, and action by presenting the "single source of truth" in a way that is seen and understood.
Flow reports, charts, and dashboards are easily configured and served via a web browser to operators, team leaders, and managers. Chart configuration employs built-in visualization best practices, maximizing the transfer of information to the human visual cortex:
Reports and charts enable comment entry to add human context to our information.
Sometimes it is more convenient for the information to find us rather than for us to find the information. Flow automatically compiles and distributes information and PDF exports as and when required. Distribution is secure and handled via mechanisms such as:
Maximize the transfer of information to the human visual cortex