Flow is your Analytics Foundation

Analytics Hub

Flow is an analytics hub that allows you to consume data from multiple sources, blend it, normalize it, perform calculations on it, and then store the results within time and model context. We call this the data transformation process, and Flow is the platform that allows you to manage that transformation pipeline.
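As a rough illustration of that transformation pipeline, the Python sketch below blends a time-series source with manually entered data, normalizes both to an hourly period, runs a calculation, and stores the result against a time period and a model path. All names, tags, and values are invented for illustration; this is not Flow's API.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical raw inputs: a time-series flow meter and a manually captured downtime log.
meter_readings = [
    (datetime(2024, 1, 1, 8, 0) + timedelta(minutes=15 * i), 42.0 + i) for i in range(8)
]
manual_downtime_minutes = {
    datetime(2024, 1, 1, 8, 0): 5,   # minutes of downtime logged against the 08:00 hour
    datetime(2024, 1, 1, 9, 0): 0,
}

def normalize_to_hour(readings):
    """Period normalization: collapse raw samples into hourly averages."""
    hours = {}
    for ts, value in readings:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        hours.setdefault(hour, []).append(value)
    return {hour: mean(values) for hour, values in hours.items()}

# Blend the two streams and calculate a downtime-adjusted throughput KPI per period,
# stored against both a time period and a (hypothetical) model path.
results = []
for hour, rate in sorted(normalize_to_hour(meter_readings).items()):
    effective_minutes = 60 - manual_downtime_minutes.get(hour, 0)
    results.append({
        "model_path": "Enterprise/Site A/Line 1/Throughput",
        "period_start": hour,
        "period_end": hour + timedelta(hours=1),
        "value": rate * effective_minutes / 60,
    })

for row in results:
    print(row)
```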

Flow is the one place, a single-source-of-truth, where data is:

  • Unified
  • Accessible
  • Transformed
  • Trustworthy

As data streams into Flow and is transformed by the pipeline, it immediately becomes available for presentation in reports and dashboards, and for publishing to other systems that rely on Flow's consolidation and calculation capability.

Build a solid analytics foundation in 5 steps:

  • Model - structured, hierarchical representation of your assets, KPIs, and more (see the sketch after this list)
  • Connect - consume data from multiple input streams (including real-time, time-series, relational, and manual entry)
  • Transform - point normalization, period normalization, cleansing, classification, contextualizing, and calculation services
  • Visualize - browser-based reporting, charting, and dashboarding for descriptive, diagnostic, and predictive analytics
  • Bridge - analytics-ready information published to or pulled by your enterprise tools (ERP, ML, BI, messaging, etc.)
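To make the Model step concrete, here is a minimal sketch of a hierarchical asset structure with KPIs attached at each level. It uses plain Python dataclasses; the class, node names, and KPI values are illustrative assumptions, not Flow's own model format.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One level of the asset hierarchy, with its own KPIs and child nodes."""
    name: str
    kpis: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def describe(self, prefix: str = "") -> None:
        """Walk the hierarchy, printing each node's full path and KPIs."""
        path = f"{prefix}/{self.name}" if prefix else self.name
        for kpi, value in self.kpis.items():
            print(f"{path}: {kpi} = {value}")
        for child in self.children:
            child.describe(path)

# Hypothetical enterprise -> site -> line structure with KPIs attached where they apply.
model = Node("Enterprise", children=[
    Node("Site A", kpis={"OEE": 0.81}, children=[
        Node("Line 1", kpis={"Throughput": 950.0, "Downtime (min)": 12.0}),
    ]),
    Node("Site B", kpis={"OEE": 0.76}),
])

model.describe()
```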

Edge to Enterprise

Scalability

When required, Flow Systems can be distributed across multiple servers, virtual or physical. Installing the Flow Bootstrap on a server allows other Flow components to be deployed to that server; in a sense, the Bootstrap is the communication bus between the components that make up a Flow System. This distributed architecture allows large Flow Systems to scale.
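Conceptually, the Bootstrap can be pictured as a registry and message bus that deployed components communicate through. The Python sketch below illustrates only that idea; the class and method names are invented and do not reflect Flow's internals.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class Bootstrap:
    """Conceptual stand-in for the communication bus between deployed Flow components."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def register(self, topic: str, handler: Callable[[dict], None]) -> None:
        """A component deployed to this server subscribes to a message topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        """Route a message to every component subscribed to the topic."""
        for handler in self._subscribers[topic]:
            handler(message)

# Two hypothetical components sharing the same bus on one server.
bus = Bootstrap()
bus.register("measurements", lambda m: print("calculation engine received:", m))
bus.register("measurements", lambda m: print("data consumer received:", m))
bus.publish("measurements", {"tag": "Line1.Throughput", "value": 950.0})
```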

Edge

Running the Flow Bootstrap on an edge device extends the Flow System beyond its network boundary. As long as an outbound connection can be made from the edge device to the Flow System, Data Sources, Data Consumers, and Message Services can be deployed as extensions to that system. For example, a Flow System running in private cloud infrastructure would use this edge architecture to access on-premise Data Sources, Data Consumers, and Message Services.
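The sketch below illustrates the outbound-only pattern: an edge agent buffers readings collected locally and pushes them to the central system over a single outbound HTTPS request, so no inbound firewall rule is needed. The endpoint URL, token, tag names, and payload shape are hypothetical assumptions, not Flow's actual interface.

```python
import json
import time
import urllib.request

# Hypothetical address and credential of the central Flow System; replace with your own.
FLOW_SYSTEM_URL = "https://flow.example.com/ingest"
API_TOKEN = "replace-me"

buffer = []

def collect_reading():
    """Stand-in for reading a local, on-premise data source (PLC, historian, etc.)."""
    return {"tag": "SiteA.Line1.Flowmeter", "value": 42.0, "ts": time.time()}

def push_buffer():
    """Push buffered readings over a single outbound HTTPS request; nothing inbound is required."""
    global buffer
    if not buffer:
        return
    request = urllib.request.Request(
        FLOW_SYSTEM_URL,
        data=json.dumps(buffer).encode(),
        headers={"Authorization": f"Bearer {API_TOKEN}", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        if response.status == 200:
            buffer = []  # clear only once the central system has acknowledged the batch

# Typical edge loop: collect locally, push in batches.
for _ in range(3):
    buffer.append(collect_reading())
push_buffer()
```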

Tiering / Rollup

Flow Systems can be tiered together to transfer information from lower-level Flow Systems to higher-level ones. A typical use case is a Flow System at each production site feeding a single HQ Flow System. Selected KPIs from Sites A, B, and C automatically propagate up to HQ as soon as they become available (typically within a few minutes), presenting HQ-level analytics in near real-time!
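A minimal sketch of the rollup idea, assuming invented site names and KPI values: each Tier 1 (site) system publishes selected KPIs, and the Tier 2 (HQ) system merges them into a single comparable view as they arrive.

```python
from datetime import date

# Hypothetical KPI batches published by each Tier 1 (site) Flow System.
site_batches = {
    "Site A": [{"kpi": "OEE", "period": date(2024, 1, 1), "value": 0.81}],
    "Site B": [{"kpi": "OEE", "period": date(2024, 1, 1), "value": 0.76}],
    "Site C": [{"kpi": "OEE", "period": date(2024, 1, 1), "value": 0.84}],
}

# The Tier 2 (HQ) system keeps a consolidated view keyed by (site, kpi, period).
hq_kpis = {}

def roll_up(site, batch):
    """Merge a site's newly published KPIs into the HQ-level view as they arrive."""
    for row in batch:
        hq_kpis[(site, row["kpi"], row["period"])] = row["value"]

for site, batch in site_batches.items():
    roll_up(site, batch)

# Cross-site comparison at HQ level.
for (site, kpi, period), value in sorted(hq_kpis.items()):
    print(f"{period} {site} {kpi}: {value:.2f}")
```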

Typical Architectures

Flow is a modular system that can be deployed in any of the architectures below, or in a combination of them.

1. Simple

  • Single on-premise Flow Server including Microsoft SQL Server
  • Connected to multiple data sources including PLCs, Historians, and SQL databases
  • Workstations running Flow Config and Visualization via browsers

2. Deployed Data Sources

  • Connected to deployed Data Sources so that collection is local to the source
  • Deployed Data Sources require the Flow Bootstrap to be installed on the respective servers

3. Distributed

  • Multiple Flow Servers to distribute computing load
  • Dedicated Microsoft SQL Server

4. Edge and Cloud

  • Flow Server deployed to cloud provider (AWS, Azure, etc.)
  • Data Sources deployed to the edge (on-premise) using a single outbound connection

5. Enterprise

  • Enterprise rollup from multiple Tier 1 (Site) Flow Systems to one or more Tier 2 (Enterprise) Flow Systems
  • Ideal for cross-site benchmarking, comparison, and planning
  • Central model template management

System requirements

For details on system sizing and supported requirements, see the following:

Ready to try Flow?