Updated: Nov 18
Let’s face facts: there are many reporting tools out there.
Most purport to transform your data into actionable information. Some masquerade as configurable tools when, in reality, they need a fair degree of customization (read: scripting/coding/IT skills).
So, where does Flow fit in? Is Flow another reporting tool? You be the judge…
What makes a reporting tool a good reporting tool?
It’s one thing to convert data into information. It’s another thing to achieve that goal while:
- Being performant,
- Contextualizing information using data from many sources, and
- Preserving the integrity of the data, with an audit trail.
Consider these one at a time.
1. Optimizing Reporting Performance
In the real world, where so many inputs vie for your attention, you need the right information to win the competition. How long you must wait to access that information matters.
Where a tool relies on accessing data from sources at runtime (that is, on the fly), reporting performance is likely to suffer, because the raw data is retrieved while the user waits for the report to render.
The problem compounds when you need to aggregate the raw data.
For example, you may need to add 24 hours' worth of data points to compute a daily number. The same issue arises when you need to evaluate KPIs based on raw data from various production units. Reporting performance is inversely proportional to the complexity of the KPI.
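To make the cost concrete, here is a minimal sketch (hypothetical data and names, not Flow's implementation) contrasting an on-the-fly daily total, which scans every raw point at render time, with a total built from subtotals that were stored as the data arrived:

```python
import math
import random

# Hypothetical raw data: one data point per second for 24 hours.
raw_points = [random.uniform(0.0, 10.0) for _ in range(24 * 60 * 60)]

# On the fly: the report must scan all 86,400 raw points at render time.
daily_total = sum(raw_points)

# Precomputed: hourly subtotals were stored as data arrived,
# so rendering the report only adds 24 numbers.
hourly_totals = [sum(raw_points[h * 3600:(h + 1) * 3600]) for h in range(24)]
daily_total_fast = sum(hourly_totals)

# Both paths yield the same daily number; only the render-time work differs.
assert math.isclose(daily_total, daily_total_fast, rel_tol=1e-9)
```

The render-time work drops from tens of thousands of additions to a couple of dozen, and the gap widens as the KPI grows more complex.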
Granted, native reporting tools (tools that the provider of the data source built) can harness proprietary data retrieval mechanisms. But when your data resides in multiple sources, some of which are from other providers, these tools are limited.
You also need to clean your data before it's used in reports. Common examples include deciding what to do with "missing" data points, such as null values stored while an instrument was offline, how to handle spurious data points caused by noise in a signal, and how to handle totalizer rollover during a reporting interval.
The above scenarios are common. If your reporting tool is correcting these at report rendering time, performance will degrade.
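As a sketch of what such cleaning involves (a hypothetical helper, not Flow's internals), consider converting raw totalizer readings into per-interval deltas while skipping nulls and correcting rollover:

```python
from typing import Optional

ROLLOVER_MAX = 100_000  # hypothetical totalizer limit: the counter wraps back to 0

def cleaned_deltas(readings: list[Optional[float]]) -> list[float]:
    """Turn raw totalizer readings into per-interval deltas,
    skipping nulls (offline instrument) and correcting rollover."""
    deltas = []
    previous = None
    for reading in readings:
        if reading is None:          # instrument offline: skip the gap
            continue
        if previous is not None:
            delta = reading - previous
            if delta < 0:            # the counter wrapped past its maximum
                delta += ROLLOVER_MAX
            deltas.append(delta)
        previous = reading
    return deltas

# A totalizer that goes offline briefly and rolls over during the interval:
readings = [99_900.0, 99_950.0, None, 50.0, 120.0]
print(cleaned_deltas(readings))  # [50.0, 100.0, 70.0]
```

Doing this once, at retrieval time, means no report ever pays the correction cost at render time.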
Flow mitigates poor performance by:
- Retrieving data from data sources as soon as the data is available, not at report runtime.
- Using proprietary data retrieval mechanisms, where available, to achieve near-native retrieval performance. Flow is "data source agnostic": it can retrieve data from multiple sources in a performant manner.
- Cleaning data at retrieval time, such as filtering spurious data points or handling totalizer rollover. Flow then stores the cleansed data.
- Performing aggregations and calculations on the data before you need the results, so the user does not wait at report runtime for complex calculations to be evaluated.
2. Using Data from Multiple Sources to Contextualize Information
It’s rare to find a facility with a single data source, or multiple data sources from the same vendor.
Best-in-breed tools focus on the genre of data/information in which those tools have expertise. This is understandable.
For example, you would not expect the provider of an industry-leading manufacturing data storage tool to also provide an industry-leading ERP tool. Possible, but not probable.
What you can, and should, expect is that your reporting tool can collect data from your various data harvesting tools.
Viewing data from various tools in their “silos” is unacceptable. You need to contextualize your information by marrying data from various tools. Without context, you cannot answer some of the fundamental questions that an operation must answer. Questions such as:
- How much did we produce per shift? How much of each SKU did we produce per shift?
- What losses did we incur last month?
- Which of the production units performed best, and which were the worst? What were the differences between these?
- What were our usages (electricity/water/steam/gas/etc.) this week? Were the usages influenced by time-of-day, SKU, or other factors?
- Why were our usages different? Our people usually know what the issues are – can we include their insights in our reporting?
- Can we visualize/report the above numbers in dollars? Can we display these dollar values in near real-time on operator displays?
As a reporting tool, Flow can answer the above questions, and more, in the affirmative. By connecting to multiple data sources, Flow can provide you with context-rich information. These sources may contain manufacturing, planning, finance, or human resources data.
The “human data source” is an important one that Flow does not ignore. Operators can capture comments against data points, and these comments are then available wherever you report on those data points. These invaluable insights help to focus your improvements.
Flow provides a layer of abstraction away from your data sources, functioning as a reporting platform. The report user need not understand, or be aware of, where the data came from. They can view this information if they desire, or focus on the information they need to make decisions, confident that the underlying data has been reliably extracted and transformed.
Flow can also make this information available to your other systems. Do you want certain KPIs displayed on your SCADA systems? Flow can do it.
3. Data Integrity and Truth
In an attempt to answer some of the challenges posed earlier, some tools sacrifice standardization and governance best practices.
The example of the ubiquitous spreadsheet tool comes to mind. With effort, one can use a spreadsheet to consolidate data from multiple sources and even include commentary. There are some caveats though:
- How do you prevent intentional or accidental changes to KPI calculations?
- Even with protection, how do you prevent the creation of different versions of a spreadsheet, each with its own set of calculations?
- How do you know who has captured the data points that make up your reportable KPI?
- What happens when you need to update or correct a data point?
Besides the above governance-related issues, consider the following standardization-related issues:
- How do you define an Information Model, so that reporting across all your sites is consistent?
- What happens when you introduce a new asset of an existing asset type to a site? Do you then need to manually update multiple calculations in your tool? Worse yet, do you need an external supplier to update your KPI calculations on your behalf?
- When your shift pattern changes, such as in a peak production period, does your KPI reporting tool understand the new shift pattern? And does it factor these changes into "rollup" measurements, such as daily and weekly numbers?
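As a rough illustration of why shift awareness matters for rollups (the shift definitions below are hypothetical, not Flow's model), consider assigning each data point to the shift that was active at its timestamp. The same data rolls up differently once the pattern changes:

```python
from collections import defaultdict
from datetime import datetime, time

# Hypothetical shift patterns: two shifts normally, three during peak production.
NORMAL_SHIFTS = [(time(6, 0), "Day"), (time(18, 0), "Night")]
PEAK_SHIFTS = [(time(6, 0), "A"), (time(14, 0), "B"), (time(22, 0), "C")]

def shift_for(ts: datetime, peak: bool) -> str:
    """Return the shift name active at a timestamp under the current pattern."""
    shifts = PEAK_SHIFTS if peak else NORMAL_SHIFTS
    current = shifts[-1][1]  # before the first boundary we are still in the last shift
    for start, name in shifts:
        if ts.time() >= start:
            current = name
    return current

def rollup_by_shift(points, peak=False):
    """Sum (timestamp, value) data points into per-shift totals."""
    totals = defaultdict(float)
    for ts, value in points:
        totals[shift_for(ts, peak)] += value
    return dict(totals)

points = [
    (datetime(2024, 1, 8, 7, 0), 10.0),
    (datetime(2024, 1, 8, 15, 0), 20.0),
    (datetime(2024, 1, 8, 23, 0), 5.0),
]
print(rollup_by_shift(points, peak=False))  # {'Day': 30.0, 'Night': 5.0}
print(rollup_by_shift(points, peak=True))   # {'A': 10.0, 'B': 20.0, 'C': 5.0}
```

A tool that hard-codes the old boundaries would keep reporting "Day" and "Night" totals through the peak period, silently misattributing production to the wrong teams.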
If your reporting tool cannot achieve the above, then you need to ask why! Flow can and does ensure data integrity.
What you see in a Flow report is a true reflection of the underlying data. Where someone has modified that data, Flow tracks this and highlights it to the report user. It also shows you what the updates to the data were and who made them. And, if user comments are available, Flow displays them as well.
In short, Flow generates a full audit trail to ensure that what you see is the single version of the truth.
Is Flow another Reporting Tool?
No! Not all reporting tools are equal.
Watch the video for a more detailed discussion of this topic.
Flow understands the complexities that arise in production environments. Flow recognizes that you likely have various data sources. You need to bring that data together, to give context to your information. And you need to view your information in a performant manner.
The questions posed in this blog are real questions that Flow was designed to answer.
Learn what Flow can do for you by viewing some of our videos/webinars.
Get in touch and we’ll answer any questions you have and/or give you a no-obligation demo of what Flow can do.