Do You Need A
Scalable No-Code
Analytics Framework?

With Flow, you can collect, combine, and normalize data from multiple disparate OT and IT sources, create calculations and event frames, and contextualize the results using time and model information.

Think of Flow as a hub

Flow is a centralized hub that allows you to collect data from multiple disparate sources, combine it, normalize it, perform calculations on it, and then store the resultant information within time and model context. We call this the data transformation process, and Flow is the hub that allows you to manage that transformation pipeline. This Analytics Hub represents what we call the "single source of truth", the one place your users need to know about to access the information they need to make decisions in real time.

As data streams into the Flow Analytics Hub and is transformed by the pipeline, it immediately becomes available for presentation via charts and dashboards, and for publishing out to other systems that need its consolidated, calculated results.
Flow is the hub that allows you to manage your data transformation pipeline

Here's how it works

Model

We start by creating a single consolidated model to abstract and unify multiple underlying namespaces. That's a mouthful; what does it mean?

Decoupled

In most cases, we have a number of underlying data sources (e.g. Historians, SQL Databases). We access this data using tagnames or queries, but we can provide a more meaningful and standardized name for a "piece of information". Let's call this "piece of information" a Measure.

Abstracted

The Operator, Team Leader or Manager accessing the Flow Model doesn't need to know which tag or SQL query was used to create that Measure, that "piece of information" that they use to make key decisions. In fact, they don't want to know, nor do they care! They just want their information!

Templatized

The Flow Model can be standardized across multiple sites or production facilities. The source of a Measure will differ across sites, but the name will be consistent. On Site A, the measure represents tag "FL001-123-FQ001.PV" (see why the managers don't care!) and on Site B, the measure represents a manually input value. But both measures are named "Line 1 Filler Volume", and that is what everyone will know it as, everywhere they go. Flow Templates allow for this model standardization.
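
To make the idea concrete, here is a minimal sketch (the MeasureBinding class and the source descriptions are purely illustrative; actual Flow models are configured in the no-code designer, not in code) of one standardized measure name bound to different sources per site:

```python
from dataclasses import dataclass

# Illustrative only: one standardized measure name, bound to a different
# underlying source at each site. Consumers only ever see the measure name.

@dataclass
class MeasureBinding:
    site: str
    measure: str   # the standardized name everyone knows
    source: str    # where the raw data actually comes from

bindings = [
    MeasureBinding("Site A", "Line 1 Filler Volume", "historian tag FL001-123-FQ001.PV"),
    MeasureBinding("Site B", "Line 1 Filler Volume", "manual entry via a Flow Form"),
]

for b in bindings:
    print(f"{b.site}: '{b.measure}' <- {b.source}")
```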

Structured but flexible

The Flow Model is hierarchical and generic by design. We can build our model using ISA95, ISA88, PackML, custom asset, twin thing, entity meta-model, or any combination of these. (We're not sure twin thing is really a thing, but you get the idea). The Flow Model represents physical assets, performance indicators, and logical entities. You can structure this model by area, department, or both. The point is that it is flexible. And, despite its hierarchical nature, the Flow Model allows for object "linking" across the structure.
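
If it helps to picture the shape, a nested structure like the sketch below is a reasonable mental model; the site, area, line, and measure names are invented, and the real model is built visually in Flow and also supports linking objects across branches:

```python
# Illustrative only: a rough, ISA-95-flavoured hierarchy with measures
# attached at the leaf nodes. All names are invented for the example.
model = {
    "Site A": {
        "Packaging": {
            "Line 1": {"measures": ["Line 1 Filler Volume", "Line 1 Downtime"]},
            "Line 2": {"measures": ["Line 2 Filler Volume"]},
        },
        "Utilities": {
            "Boiler": {"measures": ["Steam Flow", "Fuel Consumption"]},
        },
    },
}

print(model["Site A"]["Packaging"]["Line 1"]["measures"])
```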

Unified but secure

In many ways, the Flow Model is the "uber" Unified Namespace, consolidating multiple underlying namespaces, whether they are Historian namespaces, SQL namespaces or even MQTT namespaces: Flow brings them all together into one persisted model. Together with a configurable security construct, this Unified Information Model presents the foundation for building value-added IT apps.

The Flow Model is the "uber" Unified Namespace, consolidating multiple underlying namespaces

Connect

As we build out our Flow Model, we start filling it with information. We do this automatically, using data from existing sources, or manually, through Flow Forms.

Data Sources

Flow connects to and ingests data from multiple sources, meaning we can leverage the investments you have already made:

  • Industrial Historians - Canary Historian, AVEVA PI (formerly OSIsoft PI) Historian, Ignition Historian, GE Historian, AVEVA Historian (formerly Wonderware Historian), other OPC HDA-based historians, etc.
  • IoT and Cloud Platforms - REST APIs, Metering Solutions, Weather Platforms, Power Distribution APIs, etc.
  • SQL Databases - Microsoft SQL, MySQL, Oracle, PostgreSQL, etc.
  • NoSQL Databases - InfluxDB, etc.
  • Realtime Systems - MQTT, OPC UA, Telegraf, etc.

Scalability Matters

Data contained in the connected data sources is never replicated. Rather, it is referenced when required to perform aggregations and calculations. Flow stores only the results of this retrieval process, in the context of time and model. By storing only the resultant information, Flow guarantees fast and efficient access via charts and dashboards whenever it is needed. More importantly, this efficient information storage allows Flow Systems to scale enormously, without losing the ability to drill into the underlying data source when necessary!
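
In rough terms, what gets persisted looks more like a small summary record than a copy of the raw stream. The sketch below invents a measure, a period, and a handful of samples purely to show the idea:

```python
import statistics

# Illustrative only: Flow stores the contextualized result (e.g. an hourly
# average) rather than every raw sample. The raw samples stay in the source
# historian and are only referenced again when drill-down is needed.
raw_samples = [12.1, 12.4, 11.9, 12.6, 12.3]   # pretend these came from a historian query

stored_record = {
    "measure": "Line 1 Filler Volume",
    "period": "2024-05-01 08:00 - 09:00",       # time context
    "path": "Site A / Packaging / Line 1",      # model context
    "value": round(statistics.mean(raw_samples), 2),
    "source_ref": "tag FL001-123-FQ001.PV",     # where to drill down later
}
print(stored_record)   # only this small record is persisted, not the raw stream
```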

Data Entry

There will always be data that cannot be captured automatically, whether it's data read from an instrument indicator, or external data coming from email or paper-based systems. Flow handles manually captured data elegantly through the use of Flow Forms. Flow Forms are easily configured and served via a web browser to data capturers in a familiar and intuitive spreadsheet-like interface. No more spreadsheet spaghetti! The best part is that as soon as someone captures data in a Flow Form, any calculations or transforms in the downstream pipeline that depend on that entry are automatically processed and available for additional analytics.

Leverage the investments you have already made in your data infrastructure

Transform

For us, the transformation pipeline is the most exciting part. This is where Flow really shines.

Context

Out of the box, and at its foundation, Flow enforces two critical pieces of context with which measure information is enriched, namely time and model. Every data point streaming into Flow, whether used for event framing or calculated into a measure's value, is contextualized by time and model to become part of the information that will ultimately serve our decision-making processes.

Time is the base that runs through all Flow Systems, a thread against which all information is stored. However, to present and publish this information as analytics-ready, Flow normalizes time into slices or periods:

Calendar-based periods include minutes, hours, shifts, days, weeks, months, quarters and years. These periods make it possible to draw meaningful comparisons and derive insight from your information. For example, how is the current shift running? How does our process this year compare to the same time last year? This information is at your fingertips.
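
For a feel of what this normalization means in practice, here is a minimal pandas sketch; the readings and shift boundaries are invented, and Flow performs this slicing itself without any code:

```python
import pandas as pd

# Illustrative only: normalizing a raw 15-minute stream into calendar
# periods (hours and 8-hour shifts) so like-for-like comparisons are possible.
raw = pd.Series(
    range(64),                                          # pretend flow-meter readings
    index=pd.date_range("2024-05-01 06:00", periods=64, freq="15min"),
)

hourly = raw.resample("1h").mean()                      # calendar hours
shifts = raw.resample("8h", offset="6h").mean()         # shifts starting at 06:00
print(shifts)                                           # one value per shift, ready to compare
```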

Event-framed periods are derived from triggers in the underlying data. Flow monitors for start and stop triggers to generate periods against which you can attribute additional context dynamically. For example, Flow will monitor the necessary tags, or combination of tags, to record when a machine stops and starts up again. Additional information, like the reason for the stop, will be attributed to that event period, providing invaluable insight over time as to how often, how long, and why the machine stops.
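
Conceptually, the event-framing logic looks something like the sketch below; the status samples, timestamps, and stop reason are invented, and Flow does this monitoring for you rather than requiring any code:

```python
# Illustrative only: deriving event-framed periods from start/stop triggers
# in a running-status tag, then attributing a reason to the resulting period.
samples = [
    ("08:00", 1), ("08:05", 1), ("08:10", 0),   # machine stops at 08:10
    ("08:15", 0), ("08:20", 1), ("08:25", 1),   # machine restarts at 08:20
]

events, stop_start = [], None
for ts, running in samples:
    if running == 0 and stop_start is None:
        stop_start = ts                          # stop trigger opens the frame
    elif running == 1 and stop_start is not None:
        events.append({"start": stop_start, "end": ts, "reason": "unclassified"})
        stop_start = None                        # start trigger closes the frame

events[0]["reason"] = "Filler jam"               # context attributed afterwards
print(events)   # [{'start': '08:10', 'end': '08:20', 'reason': 'Filler jam'}]
```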

Calculation Services

As data streams into Flow, it is cleaned, contextualized, and transformed by a set of calculation services that include:

  • primary aggregations and filters
  • cumulative and secondary aggregations
  • moving window calculations
  • expression based calculations
  • evaluations against limits or targets
  • secondary aggregations on event periods

User-defined functions are used to encapsulate complex algorithms and standardize and lock down calculations throughout the Flow Model.
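
As a rough illustration of the kind of work these services do (the measure names, numbers, and target below are invented, and real Flow calculations are configured in the designer rather than in code), consider chaining a primary aggregation, an expression-based calculation, and an evaluation against a target:

```python
# Illustrative only: hourly primary aggregations from two sources, combined
# by an expression-based calculation and evaluated against a target.
filler_volume_l = [510.0, 495.0, 502.0, 488.0]   # per-hour aggregations, source 1
good_bottles    = [1015, 986, 1001, 970]         # per-hour aggregations, source 2

# Expression-based calculation combining the two measures.
litres_per_bottle = [v / b for v, b in zip(filler_volume_l, good_bottles)]

# Evaluation against a limit/target.
target = 0.505
flags = ["OK" if x <= target else "OVER" for x in litres_per_bottle]
print(list(zip(litres_per_bottle, flags)))
```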

Power of Multiple

The Flow transformation pipeline applies these contextualization and calculation processes to multiple data streams simultaneously, removing the silos between them as they blend in near real-time. The pipeline allows us to build calculated measures that take inputs from more than one data source, or to trigger event periods from one data source while attributing their context from others, whether those sources are time-series or transactional in nature. The possibilities are limitless!

Remove data silos as they blend in near real-time

Visualize

Ultimately, Flow provides value in the form of decision-support, insight and action by presenting the "single source of truth" in a way that is seen and understood.

Dashboarding

Flow reports, charts and dashboards are easily configured and served via a web browser to operators, team leaders and managers. Chart configuration employs built-in visualization best practices, maximizing the transfer of information to the human visual cortex, wherever that information is consumed:

  • Big screens in production areas or hand-over rooms
  • Interactive team meetings, in-person or remote
  • Individual consumption via laptops or devices

Reports and charts enable comment entry to add human context to our information.

Messaging

Sometimes it is more convenient for the information to find us rather than for us to find the information. Flow automatically compiles and distributes information and PDF exports as and when required. Distribution is secure and handled via mechanisms such as:

  • Email
  • Slack
  • Microsoft Teams
  • Telegram
  • SMS

Maximize the transfer of information to the human visual cortex

Bridge

Flow is anything but a "black box". It contains your information and is open for you to easily access it via industry-standard protocols. Flow is your bridge from OT and IoT data streams to analytics-ready information.

API

Flow exposes an industry-standard REST API for model discovery and information access that can be used to build third-party apps or to integrate with existing applications.
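
The exact routes and authentication details live in the Flow REST API documentation; the sketch below uses placeholder endpoint paths and a hypothetical bearer token purely to show the shape of the interaction, discovering the model and then pulling contextualized information over HTTPS/JSON:

```python
import requests

# Illustrative only: the base URL, endpoint paths, parameters and auth header
# are placeholders, not Flow's documented API. Consult the Flow REST API
# reference for the real routes; the shape of the interaction is the point.
BASE = "https://flow.example.com/api"
HEADERS = {"Authorization": "Bearer <token>"}

# 1. Discover the model: which nodes and measures exist?
model = requests.get(f"{BASE}/model", headers=HEADERS, timeout=10).json()

# 2. Retrieve contextualized information for a measure over a period range.
data = requests.get(
    f"{BASE}/measures/line-1-filler-volume/data",
    params={"period": "day", "from": "2024-05-01", "to": "2024-05-07"},
    headers=HEADERS,
    timeout=10,
)
print(data.json())
```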

Publish

Flow provides integration components to automatically publish information out to your other systems via industry-standard protocols in near real-time. How about pushing maintenance information like running hours or stroke counts up to your Asset Management system? Or actual production figures up to your ERP system? What about sending information to your Machine Learning platform in the cloud? Or even just back to your SCADA for operator visibility of KPIs calculated from multiple data sources? Flow currently integrates with the systems below (a rough sketch of an MQTT publish follows the list):

  • Industrial Historians - Canary Historian
  • SQL Databases - Microsoft SQL, MySQL, Oracle, PostgreSQL, etc.
  • Realtime Systems - MQTT (including SparkplugB)
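
Here is the sketch referred to above: a minimal example of pushing a Flow-calculated KPI out over MQTT, assuming a hypothetical broker address and topic. A real Sparkplug B publish would use the Sparkplug protobuf payload; plain JSON is shown only for brevity.

```python
import json
import paho.mqtt.publish as publish

# Illustrative only: publishing a calculated KPI so a SCADA or any other
# subscriber can pick it up. Broker, topic and values are all invented.
payload = {
    "measure": "Line 1 Filler Volume",
    "period": "2024-05-01 Shift A",
    "value": 3985.0,
    "units": "L",
}

publish.single(
    topic="site-a/packaging/line-1/filler-volume",
    payload=json.dumps(payload),
    hostname="broker.example.com",   # placeholder broker address
    qos=1,
)
```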

Flow Tiering

Flow Systems can publish information to other Flow Systems! Why would this be useful? Imagine a multi-site organization, possibly spanning the globe, where each site's Flow System publishes its information up to an HQ Flow System. That HQ Flow System would provide invaluable fleet-wide information for site comparisons, benchmarking, and logistics planning. How about cost or efficiency comparisons between types of equipment? The possibilities are limitless.

Flow is your bridge from OT and IoT data streams to analytics-ready information

How does this all fit together?